Factorial Linear Normal Models

Model Form

Suppose that the observations we make are modelled by the independent random variables ~Y_1, ... , ~Y_{~n}, such that, for ~{#Y} = ( ~Y_1, ... , ~Y_{~n} ),

~{#Y} _ _ ~ _ _ N(#{&mu.}, &sigma.^2I)

Let &Delta. be a design on this set of observations. Then a ~#{model} (or ~#{model form}) is a subset of the design: _ @{M} &subset. &Delta..

#{Example}
In the one-way anova case we have one factor, and our hypothesis (or model) is that #{&mu.} &in. L_F, where L_F is the subspace corresponding to the factor F, or more explicitly:

E( ~{#Y} ) _ = _ X_F #{&alpha.}

where X_F is the ~n # |F| design matrix for F, and #{&alpha.} is any |F| # 1 column vector (&alpha._{~j} being the (unknown) mean of the observations on the ~j^{th} level).
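
To make this concrete, here is a minimal numpy sketch (the factor levels, sample size and parameter values are all invented for illustration) of building the design matrix X_F for a one-way layout and forming the mean vector X_F &alpha.:

```python
import numpy as np

# Hypothetical one-way layout: factor F with 3 levels observed on 6 units.
levels = np.array([0, 0, 1, 1, 2, 2])          # level of F for each observation
n, k = len(levels), 3                          # n observations, |F| = 3 levels

# Design matrix X_F: X_F[i, j] = 1 if observation i is at level j, else 0.
X_F = np.zeros((n, k))
X_F[np.arange(n), levels] = 1

alpha = np.array([10.0, 12.0, 15.0])           # invented level means
mean = X_F @ alpha                             # E(Y) = X_F alpha
print(mean)                                    # [10. 10. 12. 12. 15. 15.]
```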


In the general case the model form @M determines the model through the formula:

E( ~{#Y} ) _ = _ sum{X_F #{&alpha.}^F,F &in. @M, _ }

where #{&alpha.}^F are the |F| # 1 column vectors of unknown parameters. Alternatively:

E( ~{#Y} ) _ _ &in. _ _ L _ = _ sum{L_F,F &in. @M, _ }

So L is the linear space corresponding to the model form.
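
To illustrate the general formula, the following sketch (the factor names, layouts and parameter values are invented) represents a model form @M as a list of factor names, each with its own design matrix, and forms the mean as the sum of the individual contributions:

```python
import numpy as np

# Two invented factors "F" and "H" observed on 4 units.
levels = {"F": np.array([0, 0, 1, 1]), "H": np.array([0, 1, 0, 1])}
# Indicator design matrix for each factor.
X = {name: np.eye(lev.max() + 1)[lev] for name, lev in levels.items()}
# Invented parameter vectors alpha^F.
alpha = {"F": np.array([2.0, 5.0]), "H": np.array([0.5, -0.5])}

M = ["F", "H"]                                  # the model form
mean = sum(X[F] @ alpha[F] for F in M)          # E(Y) = sum_{F in M} X_F alpha^F
print(mean)                                     # [2.5 1.5 5.5 4.5]
```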

Of course this is just an example of a Linear Normal Model, and we can apply that theory to estimate the mean and variance, as we do below.


#{Example}
In two-way anova (with rows (R) and columns (C)) including the cross product, the model

E( ~{#Y} ) _ = _ X_R #{&alpha.}^R + X_C #{&alpha.}^C + X_{R # C} #{&alpha.}^{R # C}

corresponds to the model form _ @M = \{ R , C , R # C \}.

If we hypothesise additivity, the model becomes:

E( ~{#Y} ) _ = _ X_R #{&alpha.}^R + X_C #{&alpha.}^C

and the model form is now _ @M_1 = \{ R , C \}.

#{Important note}: _ The #{&alpha.}^R and #{&alpha.}^C are not the same in the two models; this is just a convenient way of writing vectors of unknown means.
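
As a concrete sketch of both model forms, the following numpy code (with an invented 2 # 3 layout, one observation per cell) builds the design matrices X_R, X_C and X_{R # C} and stacks them for the full model @M and the additive model @M_1:

```python
import numpy as np

# Hypothetical 2x3 layout, one observation per cell (rows R, columns C).
rows = np.array([0, 0, 0, 1, 1, 1])             # row level of each observation
cols = np.array([0, 1, 2, 0, 1, 2])             # column level of each observation
n, r, c = len(rows), 2, 3

def indicator(levels, k):
    """n x k design matrix with a 1 in the column of each observation's level."""
    X = np.zeros((len(levels), k))
    X[np.arange(len(levels)), levels] = 1
    return X

X_R = indicator(rows, r)                        # n x |R|
X_C = indicator(cols, c)                        # n x |C|
X_RC = indicator(rows * c + cols, r * c)        # n x |R x C|, one column per cell

# Model M = {R, C, RxC}:  E(Y) = X_R a^R + X_C a^C + X_RC a^{RxC}
# Additive model M1 = {R, C}: drop the interaction matrix X_RC.
X_full = np.hstack([X_R, X_C, X_RC])
X_add  = np.hstack([X_R, X_C])
print(X_full.shape, X_add.shape)                # (6, 11) (6, 5)
```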

Contrasts

Note also that in general the representation in terms of parameters is not unique, e.g. in the additive two-way model, for any constant &gamma. we could put

&beta._{i}^R _ = _ &alpha._{i}^R + &gamma., _ ~i=1, ... , |R|; _ _ _ _ _ _ _ _ &beta._{j}^C _ = _ &alpha._{j}^C &minus. &gamma., _ ~j=1, ... , |C|

In which case _ X_R #{&beta.}^R + X_C #{&beta.}^C _ = _ X_R #{&alpha.}^R + X_C #{&alpha.}^C.

So we cannot hope to obtain estimates of the absolute values of the parameters. What we can do is estimate differences between parameters for different levels of the same factor, since &alpha._{i}^R &minus. &alpha._{h}^R _ = _ &beta._{i}^R &minus. &beta._{h}^R, _ i.e. these differences are invariant under such reparametrisations and so, in principle, estimable. These differences are called #~{contrasts}.
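
The non-uniqueness of the parametrisation and the invariance of contrasts can be checked numerically; in this sketch the layout and parameter values are invented and &gamma. = 3:

```python
import numpy as np

# Same hypothetical 2x3 additive layout as in the sketch above.
rows = np.array([0, 0, 0, 1, 1, 1])
cols = np.array([0, 1, 2, 0, 1, 2])
X_R = np.eye(2)[rows]                            # n x |R| indicator matrix
X_C = np.eye(3)[cols]                            # n x |C| indicator matrix

alpha_R = np.array([5.0, 7.0])                   # invented row parameters
alpha_C = np.array([1.0, 2.0, 4.0])              # invented column parameters
gamma = 3.0
beta_R = alpha_R + gamma                         # shift the row parameters up ...
beta_C = alpha_C - gamma                         # ... and the column parameters down

# The mean vector is unchanged, so the parametrisation is not unique:
print(np.allclose(X_R @ alpha_R + X_C @ alpha_C,
                  X_R @ beta_R  + X_C @ beta_C))            # True

# Contrasts within a factor are invariant, hence estimable:
print(alpha_R[1] - alpha_R[0], beta_R[1] - beta_R[0])       # 2.0 2.0
```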

Estimation of Variance

Consider two factors F, H &in. &Delta. with F &in. @M. If _ H &le. F _ then _ L_H &subseteq. L_F, _ so _ L_F + L_H = L_F.

Let _ @M* = \{ H &in. &Delta. | H &le. F, for some F &in. @M \}. _ @M* is called the #~{maximum model form}; it is closed downwards, i.e. _ G &le. H, H &in. @M* _ => _ G &in. @M*.


#{Example}
Consider a 3-factor model, with factors A , B , and C. _ &Delta. = \{ I , A # B # C , A # B , A # C , B # C , A , B , C , O \}. _ Let _ @M _ = _ \{ A # B , B \} , _ then _ @M* _ = _ \{ A # B , A , B , O \}.
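
A small sketch of computing @M* from @M, representing each factor by the set of treatment factors it involves (so O is the empty set) and taking G &le. H to mean set inclusion, which matches the order used in the example; the identity factor I is left out of this representation for simplicity:

```python
from itertools import combinations

def closure(M):
    """Maximum model form M*: all factors G with G <= F for some F in M."""
    Mstar = set()
    for F in M:
        # every subset of F is <= F under the set-inclusion order assumed here
        for k in range(len(F) + 1):
            Mstar.update(frozenset(s) for s in combinations(sorted(F), k))
    return Mstar

M = [frozenset("AB"), frozenset("B")]            # M = { A x B , B }
Mstar = closure(M)
print(sorted("".join(sorted(G)) or "O" for G in Mstar))   # ['A', 'AB', 'B', 'O']
```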


Now consider the linear space L, corresponding to the model @M:

L _ = _ sum{L_F,F &in. @M, _ } _ = _ sum{oplus{V_G,G &le. F, _ },F &in. @M, _ }

_ _ = _ sum{V_G,G &in. @M*, _ } _ = _ oplus{V_G,G &in. @M*, _ }

_

So

P _ = _ sum{Q_G,G &in. @M*, _ }, _ _ _ _ _ 1 &minus. P _ = _ sum{Q_G,G &nin. @M*, _ }

From Linear Normal Model theory:

est{&sigma.}^2 _ = _ fract{|| (1 &minus. P )#~y ||^2,dim L^{&perp.}} _ = _ fract{sum{|| Q_G #~y ||^2,G &nin. @M*, _ },sum{d_G,G &nin. @M*,}}

_ _ _ _ _ _ _ = _ fract{sum{SSD_G,G &nin. @M*, _ },sum{d_G,G &nin. @M*,}}

The quantity &sum._{G &nin. @M*}SSD_G is known as the #~{Residual Sum of Squares} (or sometimes the ~{Error Sum of Squares}), and &sum._{G &nin. @M*}d_G is its corresponding degrees of freedom. These will be denoted by RSS and df_R in the subsequent notes.

All the quantities necessary to calculate the estimate are available in the Table of Variances.
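
As a numerical illustration (not part of the notes' derivation), the sketch below fits the additive two-way model from the sketches above to invented data and computes RSS, its degrees of freedom and the variance estimate directly from the residuals ( 1 &minus. P )#~y:

```python
import numpy as np

# Additive 2x3 model from the earlier sketches; the data y are invented.
rows = np.array([0, 0, 0, 1, 1, 1])
cols = np.array([0, 1, 2, 0, 1, 2])
X = np.hstack([np.eye(2)[rows], np.eye(3)[cols]])    # columns span L
y = np.array([6.1, 7.2, 9.0, 8.3, 8.9, 11.2])

# Orthogonal projection onto L via a least-squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta                                    # P y
resid = y - fitted                                   # (1 - P) y

RSS = resid @ resid                                  # residual sum of squares
df_R = len(y) - np.linalg.matrix_rank(X)             # dim of the orthogonal complement of L
sigma2_hat = RSS / df_R                              # estimate of sigma^2
print(RSS, df_R, sigma2_hat)
```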

Anova Table

Above we defined the #~{Residual Sum of Squares} as &sum._{G &nin. @M*}SSD_G, with its corresponding degrees of freedom. Similarly we define the #~{Explained Sum of Squares} (or sometimes the #~{Model Sum of Squares}) as &sum._{G &in. @M*}SSD_G, with &sum._{G &in. @M*}d_G as its corresponding degrees of freedom. These will be denoted by ESS and df_E in the subsequent notes.

All these quantities, together with their totals, are usually displayed in a table (known as the ANOVA table for the model), along with the mean square (the sum of squares divided by the corresponding degrees of freedom) and the ratio of the explained mean square to the residual mean square (more about this later):

#{ANOVA table for model}

_ | Sum of Squares | degrees of freedom | Mean Square | ~r
Explained | ESS = &sum._{G &in. @M*} SSD_G | df_E = &sum._{G &in. @M*} d_G | MS_E = ESS / df_E | MS_E / MS_R
Residual | RSS = &sum._{G &nin. @M*} SSD_G | df_R = &sum._{G &nin. @M*} d_G | MS_R = RSS / df_R | _
Total | TSS = &sum._{G &in. &Delta.} SSD_G | df_T = &sum._{G &in. &Delta.} d_G | _ | _

Note that the estimate of the variance is given by the residual mean square MS_R, which is also known as the mean squared error (MSE), and the estimate of the standard deviation is just its square root (hence the name root mean squared error).
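
A sketch of assembling the ANOVA table for the same invented data; note that, following the definition above, TSS = &sum._{G &in. &Delta.} SSD_G is the uncorrected total ||#~y||^2, so ESS + RSS = TSS:

```python
import numpy as np

# Additive 2x3 model and invented data, as in the previous sketch.
rows = np.array([0, 0, 0, 1, 1, 1])
cols = np.array([0, 1, 2, 0, 1, 2])
X = np.hstack([np.eye(2)[rows], np.eye(3)[cols]])
y = np.array([6.1, 7.2, 9.0, 8.3, 8.9, 11.2])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted, resid = X @ beta, y - X @ beta

n, dimL = len(y), np.linalg.matrix_rank(X)
ESS, RSS, TSS = fitted @ fitted, resid @ resid, y @ y     # ||P y||^2, ||(1-P) y||^2, ||y||^2
df_E, df_R, df_T = dimL, n - dimL, n
MS_E, MS_R = ESS / df_E, RSS / df_R

print(f"Explained  SS={ESS:8.3f}  df={df_E}  MS={MS_E:8.3f}  r={MS_E / MS_R:6.3f}")
print(f"Residual   SS={RSS:8.3f}  df={df_R}  MS={MS_R:8.3f}")
print(f"Total      SS={TSS:8.3f}  df={df_T}")
```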

Model Reduction