Uniform Normal Distribution

Estimating the Parameters

Consider the case where we have ~n independent observations, all of which are assumed to come from the same normal distribution, i.e.

_ _ H_1 : _ _ _ #~y &tilde. N ( #{~{&mu.}}, &sigma.^2#I ) , _ _ &mu._1 = &mu._2 = ... = &mu._{~n} _ _ [ that is: ~Y_~i &tilde. N ( &mu., &sigma.^2 ) , _ all ~i. ]

So #{~{&mu.}} &in. L_1, dim L_1 = 1, and L_1 is spanned by #{~e}, where #{~e} = ( 1, ... , 1 )

The projection p_1 ( #~y ) of #~y onto L_1 leaves a residual orthogonal to L_1, and in particular to #~e:

( #~y - p_1 ( #~y ) ) &dot. #~e _ = _ #0

#~y &dot. #~e _ = _ p_1 ( #~y ) &dot. #~e

Now

#~y &dot. #~e _ = _ sum{~y_~i,1,~n} _ _ _ _ and _ _ _ _ p_1 ( #~y ) &dot. #~e _ = _ ( est{&mu.}, ... , est{&mu.} ) &dot. #~e _ = _ ~n est{&mu.}

est{&mu.} _ = _ fract{1,~n}rndb{sum{~y_~i,1,~n}} _ = _ ${~y}


p_1 ( #~y ) _ = _ ( ${~y}, ... , ${~y} )

RSS _ = _ || #~y - p_1 ( #~y ) || ^2 _ = _ sum{ ( ~y_~i - ${~y} ) ^2,1,~n}

~s_1^2 _ = _ fract{1,~n - 1} sum{ ( ~y_~i - ${~y} ) ^2,1,~n}
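As a concrete illustration of this projection view, here is a minimal NumPy sketch; the observations in `y` are made-up data for illustration only:

```python
import numpy as np

# Hypothetical data: n independent observations, assumed to
# come from one common normal distribution.
y = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.3])
n = len(y)
e = np.ones(n)                      # the vector spanning L_1

mu_hat = y.sum() / n                # estimate of the common mean, = y-bar
p1_y = mu_hat * e                   # projection of y onto L_1

# The residual is orthogonal to L_1:
assert abs((y - p1_y) @ e) < 1e-12

rss = np.sum((y - p1_y) ** 2)       # residual sum of squares
s1_sq = rss / (n - 1)               # variance estimate s_1^2
print(mu_hat, s1_sq)
```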

Note that all we've done so far is to estimate the mean and standard deviation of the distribution. We haven't tested for a uniform mean ( or, for that matter, that the variables are normally distributed in the first place and have uniform standard deviation ) , and we should in fact first test whether the model we have assumed is a reasonable one given the data. We cannot test this with the methods developed here: this is our basic model, not a hypothesis nested within a larger one, so the F-test does not apply. To test for normality we can use probit diagrams and the chi-squared goodness-of-fit test. In later examples we will introduce tests for uniform mean and variance. This is not possible in this example, as a large spread in the values of the observations could mean either that they come from distributions having different means, or that the variance is large.
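A sketch of such model checks, assuming SciPy is available; the sample here is simulated purely for illustration. `scipy.stats.probplot` produces the probit (normal Q-Q) diagram, and the binned chi-squared goodness-of-fit statistic is computed by hand:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=0.5, size=200)    # simulated sample for illustration

# Probit diagram (normal Q-Q plot): points close to a straight
# line are consistent with normality.
(osm, osr), (slope, intercept, r) = stats.probplot(y, dist="norm")

# Chi-squared goodness of fit: bin the data and compare observed
# counts with those expected under the fitted N(mu_hat, s^2).
mu_hat, s = y.mean(), y.std(ddof=1)
edges = np.quantile(y, np.linspace(0, 1, 9))    # 8 bins with roughly equal counts
observed, _ = np.histogram(y, bins=edges)
expected = len(y) * np.diff(stats.norm.cdf(edges, mu_hat, s))
chi2 = np.sum((observed - expected) ** 2 / expected)
sp = stats.chi2.sf(chi2, df=8 - 1 - 2)          # lose 1 d.f. per estimated parameter
print(chi2, sp)
```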

Once we have accepted that the model is a reasonable one, we can estimate the mean and standard deviation, and then test nested hypotheses as we shall now proceed to do.

Testing for a Zero Mean

Let's assume that we've accepted that the observations come from the same normal distribution, as described above, and now we want to test the hypothesis that the value of the mean is in fact zero.

_ _ H_2 : _ _ _ #~y &tilde. N ( #0, &sigma.^2#I ) , _ i.e. &mu._1 = &mu._2 = ... = &mu._{~n} = 0, _ i.e. #{~{&mu.}} &in. L_2 = #O, dim L_2 = 0.

p_2 ( #~y ) _ = _ #0

TSS _ = _ || #~y || ^2 _ = _ sum{~y_~i^2,1,~n}

~s_2^2 _ = _ fract{1,~n} sum{~y_~i^2,1,~n}

ESS _ = _ || p_1 ( #~y ) - p_2 ( #~y ) || ^2 _ = _ || p_1 ( #~y ) || ^2 _ = _ ~n ( ${~y} ) ^2
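Because p_1 ( #~y ) lies in L_1 while #~y - p_1 ( #~y ) is orthogonal to it, these sums of squares satisfy TSS = ESS + RSS. A quick numerical check, reusing the same made-up data as above:

```python
import numpy as np

y = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.3])   # made-up data, as before
n = len(y)

tss = np.sum(y ** 2)                 # TSS = || y ||^2
ess = n * y.mean() ** 2              # ESS = || p_1(y) ||^2 = n * (y-bar)^2
rss = np.sum((y - y.mean()) ** 2)    # RSS = || y - p_1(y) ||^2
assert np.isclose(tss, ess + rss)    # Pythagoras: the decomposition is orthogonal
```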

ANOVA Table:

|           | Sum of Squares | d.f. | Mean Square | ~r |
|-----------|----------------|------|-------------|----|
| Explained | ~n ( ${~y} ) ^2 | 1 | ~n ( ${~y} ) ^2 | fract{~n ( ${~y} ) ^2,~s_1^2} |
| Residual | sum{ ( ~y_~i - ${~y} ) ^2,1,~n} | ~n - 1 | fract{1,~n - 1} sum{ ( ~y_~i - ${~y} ) ^2,1,~n} = ~s_1^2 | |
| Total | sum{~y_~i^2,1,~n} | ~n | fract{1,~n}sum{~y_~i^2,1,~n} = ~s_2^2 | |

So in this case

~r _ = _ fract{~n ( ${~y} ) ^2,~s_1^2} &tilde. F ( 1, ~n-1 )

The significance probability of the hypothesis ( zero mean ) is therefore _ _ SP = 1 - F ( ~r ) , _ _ where F is the F ( 1, ~n-1 )-distribution function.
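A sketch of this computation, assuming SciPy and the same made-up data; `stats.f.sf` gives the upper tail 1 - F ( ~r ) directly:

```python
import numpy as np
from scipy import stats

y = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.3])   # made-up data
n = len(y)

s1_sq = np.sum((y - y.mean()) ** 2) / (n - 1)
r = n * y.mean() ** 2 / s1_sq                  # F statistic from the ANOVA table

sp = stats.f.sf(r, 1, n - 1)                   # SP = 1 - F(r) under F(1, n-1)
print(r, sp)
```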

However, an F ( 1, ~n-1 )-distributed variable is the square of a Student's t ( ~n-1 )-distributed variable. So

fract{&sqrt.~n ( ${~y} ) ,~s_1} &tilde. t ( ~n-1 ) , _ _ _ _ _ _ where ~s_1 = &sqrt.~s_1^2
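The equivalence can be checked numerically; the sketch below (same made-up data) also compares against SciPy's `ttest_1samp`, whose two-sided p-value coincides with the SP above:

```python
import numpy as np
from scipy import stats

y = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.3])
n = len(y)
s1 = y.std(ddof=1)

t = np.sqrt(n) * y.mean() / s1                 # t statistic, t(n-1) under H_2
sp_t = 2 * stats.t.sf(abs(t), n - 1)           # two-sided significance probability

# t^2 equals the F statistic, and the two-sided t SP equals the F SP
r = n * y.mean() ** 2 / (s1 ** 2)
assert np.isclose(t ** 2, r)
assert np.isclose(sp_t, stats.f.sf(r, 1, n - 1))

# SciPy's one-sample t-test agrees (hypothesized mean 0):
t_scipy, p_scipy = stats.ttest_1samp(y, popmean=0.0)
assert np.isclose(t, t_scipy) and np.isclose(sp_t, p_scipy)
```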

Testing for Given Non-Zero Mean

Let's go back to the original model, i.e. ~n observations from the same normal distribution. Suppose now that we want to test the hypothesis that the distribution has a given, non-zero mean.

_ _ H#~'_2 : _ _ _ #~y &tilde. N ( #{~{&mu.}}, &sigma.^2#I ) , _ _ &mu._1 = &mu._2 = ... = &mu._{~n} = &eta. &neq. 0.

Now put _ _ #{~Z} = #~y - #{~{&eta.}}, _ where #{~{&eta.}} = ( &eta., &eta., ... , &eta. ) .

Then under this hypothesis #{~Z} &tilde. N ( #0, &sigma.^2#I ) and this can be tested using the t-test derived in the previous section:

fract{&sqrt.~n ( ${~z} ) ,~s_1 ( ~z ) } &tilde. t ( ~n-1 ) , _ _ _ _ where ~s_1 ( ~z ) ^2 = fract{1,~n - 1} sum{ ( ~{z_i} - ${~z} ) ^2,1,~n}

but _ ~{z_i} = ~y_~i - &eta. _ and _ ${~z} = ${~y} - &eta. _ so _ ~s_1 ( ~z ) ^2 = ~s_1 ( ~y ) ^2 = ~s_1^2

so we can rewrite the above expression as:

fract{&sqrt.~n ( ${~y} &minus. &eta. ) ,~s_1} &tilde. t ( ~n-1 ) , _ _ _ _ _ _ where _ _ _ _ ~s_1^2 = fract{1,~n - 1} sum{ ( ~y_~i - ${~y} ) ^2,1,~n}
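A sketch under the same assumptions, with made-up data and an arbitrary hypothesized mean &eta. = 2.0 chosen for illustration:

```python
import numpy as np
from scipy import stats

y = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.3])   # made-up data
eta = 2.0                                      # hypothesized mean (illustrative)
n = len(y)

z = y - eta                                    # shift so H'_2 becomes a zero-mean test
s1 = y.std(ddof=1)
assert np.isclose(s1, z.std(ddof=1))           # s_1(z) = s_1(y): shifting changes nothing

t = np.sqrt(n) * (y.mean() - eta) / s1         # ~ t(n-1) under H'_2
sp = 2 * stats.t.sf(abs(t), n - 1)

# Equivalent to SciPy's one-sample t-test with popmean=eta
t2, p2 = stats.ttest_1samp(y, popmean=eta)
assert np.isclose(t, t2) and np.isclose(sp, p2)
print(t, sp)
```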