The Linear Normal Model expresses the observation vector as
$$\mathbf{Y} \;=\; \boldsymbol{\mu} + \sigma\,\mathbf{e}$$
where $\sigma \ge 0$, and $\mathbf{e} \in \mathbb{R}^n$ is normally distributed with mean zero and covariance matrix $I_n$.
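As a minimal numerical sketch of this model (the mean vector and $\sigma$ below are illustrative assumptions, not values from the text), one can simulate an observation vector directly:

```python
import numpy as np

# Sketch of the Linear Normal Model Y = mu + sigma*e.
# mu and sigma are illustrative assumptions.
rng = np.random.default_rng(0)
n = 5
mu = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # mean vector mu
sigma = 0.5                                 # common scale, sigma >= 0
e = rng.standard_normal(n)                  # e in R^n, e ~ N(0, I_n)
Y = mu + sigma * e
print(Y)
```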
The Factorial (Linear Normal) Model is therefore:
$$\mathbf{Y} \;=\; \sum_{F \in \mathcal{M}} X_F \boldsymbol{\alpha}^F \;+\; \sigma\,\mathbf{e}$$
So $Y_1, \dots, Y_n$ are mutually independent, with
$$Y_i \;=\; \sum_{F \in \mathcal{M}} \alpha^F_{F(i)} \;+\; \sigma\,e_i, \qquad e_i \sim N(0,1);$$
i.e. the variance is the same for each observation irrespective of factor. We will now look at a slightly more general model.
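The component form above can be sketched numerically for a single factor; the factor levels, effect sizes, and $\sigma$ below are illustrative assumptions:

```python
import numpy as np

# Sketch of the factorial model Y_i = alpha^F_{F(i)} + sigma*e_i
# with one factor F on three levels (layout and effects assumed).
rng = np.random.default_rng(1)
levels = np.array([0, 0, 1, 1, 2, 2])      # F(i): level of F for unit i
alpha = np.array([1.0, 2.0, 3.0])          # alpha^F: one effect per level
sigma = 0.3
X_F = np.eye(3)[levels]                    # indicator (design) matrix X_F
Y = X_F @ alpha + sigma * rng.standard_normal(len(levels))
# E[Y_i] = alpha^F_{F(i)}; every observation has the same variance sigma^2
print(np.round(Y, 2))
```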
A \emph{Variance Component Model} has the form:
$$\mathbf{Y} \;=\; \sum_{F \in \mathcal{M}} X_F \boldsymbol{\alpha}^F \;+\; \sum_{G \in \mathcal{B}} \sigma_G\, X_G \mathbf{u}^G$$
where $\mathcal{M}, \mathcal{B}$ are subsets of $\Delta$ with $\mathcal{M} \cap \mathcal{B} = \emptyset$; the $\sigma_G \ge 0$, $G \in \mathcal{B}$, are unknown constants; and the $\mathbf{u}^G \in \mathbb{R}^{|G|}$, $G \in \mathcal{B}$, are mutually independent and normally distributed with zero mean and covariance matrix $I_{|G|}$.
In components,
$$Y_i \;=\; \sum_{F \in \mathcal{M}} \alpha^F_{F(i)} \;+\; \sum_{G \in \mathcal{B}} \sigma_G\, u^G_{G(i)}.$$
So $Y_i$ is a sum of the \emph{systematic effects}, $\alpha^F_{F(i)}$, and the \emph{random effects}, $\sigma_G u^G_{G(i)}$.
The variance of $Y_i$ is $\operatorname{var} Y_i = \sum_{G \in \mathcal{B}} \sigma_G^2$. The $\sigma_G^2$ are called the \emph{variance components} of the model.
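A quick Monte Carlo check that $\operatorname{var} Y_i$ is the sum of the variance components; the two random-effects factors ("block" and "residual") and their $\sigma_G$ values are assumed for illustration:

```python
import numpy as np

# For a fixed unit i, Y_i = alpha + sigma_block*u_block + sigma_res*u_res,
# so var(Y_i) should be sigma_block^2 + sigma_res^2 (components assumed).
rng = np.random.default_rng(2)
n_rep = 200_000
sigma_block, sigma_res = 2.0, 1.0
alpha = 5.0                                 # systematic effect
u_block = rng.standard_normal(n_rep)        # independent N(0,1) draws
u_res = rng.standard_normal(n_rep)
Y_i = alpha + sigma_block * u_block + sigma_res * u_res
print(Y_i.var())  # approximately sigma_block^2 + sigma_res^2 = 5.0
```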
The model can be summarised as:
$$E\,\mathbf{Y} \;=\; \sum_{F \in \mathcal{M}} X_F \boldsymbol{\alpha}^F, \qquad \operatorname{var} \mathbf{Y} \;=\; \sum_{G \in \mathcal{B}} \sigma_G^2\, X_G X_G^{\mathsf{T}}.$$
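The covariance structure $\operatorname{var} \mathbf{Y} = \sum_G \sigma_G^2 X_G X_G^{\mathsf{T}}$ can be assembled explicitly for a small assumed layout (two blocks of two units, plus a unit-level error term; all values illustrative):

```python
import numpy as np

# Build var(Y) = sum_G sigma_G^2 * X_G X_G^T for a two-component model:
# a block factor and a unit-level (residual) factor. Layout is assumed.
blocks = np.array([0, 0, 1, 1])            # G(i) for the block factor
X_block = np.eye(2)[blocks]                # n x |G| indicator matrix
X_unit = np.eye(4)                         # unit-level factor: one level per unit
s2_block, s2_unit = 4.0, 1.0               # variance components sigma_G^2
V = s2_block * X_block @ X_block.T + s2_unit * X_unit @ X_unit.T
print(V)
```

Observations in the same block are correlated through the shared block effect (off-diagonal entries $\sigma^2_{\text{block}}$), while observations in different blocks are independent.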
The class of models described above is very broad. To make theoretical analysis possible we restrict the models which we study in the following way:
Further, we insist that no systematic-effects factor, $F \in \mathcal{M}$, is finer than (or equal to) a random-effects factor, $G \in \mathcal{B}$.
Conditions 1--3 ensure that $\mathcal{B}$ is itself an orthogonal design, and therefore it induces a decomposition of $\mathbb{R}^n$:
$$\mathbb{R}^n \;=\; \bigoplus_{G \in \mathcal{B}} U_G, \qquad \text{where } U_G = \operatorname{im} R_G \text{ and } R_G = P_G \prod_{\substack{F \in \mathcal{B} \\ F < G}} (P_G - P_F),$$
$P_G$, $P_F$ being the projection matrices of $G$ and $F$ respectively.
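For a factor given by an indicator matrix $X_G$, the projection $P_G$ can be computed as the orthogonal projection onto the column space of $X_G$, which averages each observation over its group; the group layout below is an assumption for illustration:

```python
import numpy as np

# Orthogonal projection P_G = X_G (X_G^T X_G)^{-1} X_G^T onto the column
# space of the factor's indicator matrix (toy grouping assumed).
levels = np.array([0, 0, 0, 1, 1])          # G(i): group of each unit
X_G = np.eye(2)[levels]
P_G = X_G @ np.linalg.inv(X_G.T @ X_G) @ X_G.T
# P_G replaces each observation by its group mean:
y = np.array([1.0, 2.0, 3.0, 10.0, 20.0])
print(P_G @ y)  # group means: [2, 2, 2, 15, 15]
```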
Now
$$\begin{aligned}
Q_G \;&=\; P_G \prod_{\substack{F \in \Delta \\ F < G}} (P_G - P_F) \\
&=\; P_G \prod_{\substack{F \in \mathcal{B} \\ F < G}} (P_G - P_F) \prod_{\substack{F \notin \mathcal{B} \\ F < G}} (P_G - P_F) \\
&=\; \begin{cases} R_G \displaystyle\prod_{\substack{F \notin \mathcal{B} \\ F < G}} (P_G - P_F), & G \in \mathcal{B}, \\[2ex] P_G \displaystyle\prod_{\substack{F \notin \mathcal{B} \\ F < G}} (P_G - P_F), & G \notin \mathcal{B}, \end{cases}
\end{aligned}$$
since $\{F \in \mathcal{B} : F < G\} = \emptyset$ if $G \notin \mathcal{B}$, by condition 5.
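The factorisation of $Q_G$ through $R_G$ can be checked numerically on an assumed toy design, $\Delta = \{0, \text{block}, I\}$ with $\mathcal{B} = \{\text{block}, I\}$, taking $G = I$ (the factors, ordering, and block sizes here are assumptions; the check relies on the projections of an orthogonal design commuting):

```python
import numpy as np

# Check Q_G = R_G * prod_{F not in B, F < G}(P_G - P_F) for G = I in a
# toy design: Delta = {0, block, I}, B = {block, I}, blocks of size two.
n = 4
P_0 = np.full((n, n), 1.0 / n)                  # trivial factor: grand mean
X_b = np.eye(2)[[0, 0, 1, 1]]                   # block indicator matrix
P_b = X_b @ np.linalg.inv(X_b.T @ X_b) @ X_b.T  # block-averaging projection
P_I = np.eye(n)                                 # unit-level factor
Q_I = P_I @ (P_I - P_0) @ (P_I - P_b)           # product over all F < I in Delta
R_I = P_I @ (P_I - P_b)                         # product over {F in B : F < I}
# The factors (P_I - P_0) and (P_I - P_b) commute for this orthogonal design:
print(np.allclose(Q_I, R_I @ (P_I - P_0)))      # True
```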