# Expectation of a Random Variable

We have defined expectation separately for discrete and continuous random variables. Once expectation has been defined, higher moments, including variance, can be defined in terms of it, uniformly for both discrete and continuous random variables.

This is the usual way forward in elementary probability theory, and in most cases gives a perfectly practical means of calculation. However, more advanced theory gives a unified definition of expectation based on Lebesgue integration. It is not the intention of this page to give a thorough presentation of the more advanced theory of probability (for this see, for example, Williams (1991) or Jacod), but to give a quick overview and to link the elementary definitions in to the unified definition of expectation.

## Simple Random Variable

A random variable ~Y: ( &Omega. , @A , P ) &rightarrow. ( &reals. , @B ) is #~{simple} if it takes only a finite number of values, i.e.

~Y _ _ _ = _ _ sum{~a_~i I_{A_~i},1,~n} , _ _ _ _ _ _ _ _ A_~i &in. @A.

Note that for any _ &omega. &in. &Omega. , _ &omega. will be in ~{one and only one} A_~i, since the A_~i form a partition of &Omega..

So, for example, if ~X is an integer-valued random variable with finite range space, then

~X( &omega.) _ _ _ = _ _ sum{~x_~i I_{A_~i}( &omega. ),1,~n} , _ _ _ _ _ _ _ where A_~i _ _ = _ _ \{ &omega. | ~X( &omega. ) = ~x_~i \}.

_ _ _ _ _ _ _ _ _ = _ _ ~x_~j , _ _ _ _ _ _ _ _ _ for the one particular ~j with &omega. &in. A_~j.

For ~Y a simple random variable, its #~{expectation} is defined as

E( ~Y ) _ _ = _ _ sum{~a_~i P( A_~i ),1,~n}

This ties in with our earlier definition of expectation for finite integer-valued random variables.

E( ~X ) _ _ = _ _ sum{~x_~i P( A_~i ),1,~n} _ _ = _ _ sum{~x_~i P( ~X = ~x_~i ),1,~n}
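As a quick numeric illustration of the definition E(~X) = sum of ~x_~i P(~X = ~x_~i), the following sketch (a hypothetical fair die, not an example from the text) computes the expectation of a simple random variable as a weighted sum of its values:

```python
import numpy as np

# Hypothetical example: a fair six-sided die as a simple random variable.
# The sample space is partitioned into events A_i = {X = x_i}, i = 1..6.
values = np.array([1, 2, 3, 4, 5, 6])   # the x_i
probs = np.full(6, 1 / 6)               # P(A_i) for each event in the partition

# E(X) = sum over i of x_i * P(A_i)
expectation = float(np.dot(values, probs))
print(expectation)   # 3.5
```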

## Positive Random Variable

Let ~Y: (&Omega., @A, P) &rightarrow. (&reals., @B) be any positive valued random variable, i.e. ~Y(&omega.) &ge. 0, _ &forall. &omega. &in. &Omega..

The ~{expectation} of ~Y is defined as

E(~Y) _ = _ sup \{ E(H)\}

the supremum being taken over all positive simple random variables H with 0 &le. H &le. ~Y.
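The supremum can be seen concretely by approximating a positive random variable from below with a refining sequence of simple random variables. The following sketch is an assumed setup, not from the text: &Omega. = [0,1] with uniform P and Y(&omega.) = &omega.&sup2., so E(Y) = 1/3; H_~n = floor(2^~nY)/2^~n is simple with 0 &le. H_~n &le. Y:

```python
import numpy as np

# Sketch (assumed setup): Omega = [0,1] with uniform P, Y(omega) = omega**2,
# so E(Y) = 1/3.  H_n = floor(2**n * Y) / 2**n is a simple random variable
# with 0 <= H_n <= Y, and E(H_n) increases towards sup{E(H)} = E(Y).
rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=1_000_000)  # Monte Carlo sample of Omega
y = omega ** 2

approximations = []
for n in (1, 2, 4, 8):
    h_n = np.floor(2**n * y) / 2**n            # simple r.v. dominated by Y
    approximations.append(h_n.mean())          # estimate of E(H_n)
print(approximations)                          # increases towards 1/3
```

Each refinement H_~n is pointwise at least the previous one, so the estimated expectations form an increasing sequence bounded by E(Y).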

## Real-valued Random Variable

Let X be a real valued random variable, define:

X^+(&omega.) _ = _ array{X(&omega.) _ _ _ _ , X(&omega.) >= 0/ 0 _ _ _ _ , X(&omega.) < 0}

and:

X^{&minus.}(&omega.) _ = _ array{0 _ _ _ _ , X(&omega.) >= 0/ &minus.X(&omega.) _ _ _ _ , X(&omega.) < 0}

Then

X _ = _ X^+ &minus. X^{&minus.}

Both X^+ and X^{&minus.} are positive real-valued random variables, and so have expectations E(X^+) and E(X^{&minus.}) respectively, which are not necessarily finite. However, if both X^+ and X^{&minus.} have finite expectations, we can define the expectation of X:

E(X) _ = _ E(X^+) &minus. E(X^{&minus.})

[This is also written _ ~{∫} X(&omega.) dP(&omega.) _ or simply _ ~{∫} X dP.]
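The decomposition X = X^+ &minus. X^{&minus.} can be checked numerically. This sketch uses a hypothetical set of observed values under the empirical (uniform) measure:

```python
import numpy as np

# Sketch (hypothetical sample values): split X into its positive and
# negative parts, X = X+ - X-, and check E(X) = E(X+) - E(X-) under the
# empirical (uniform) measure on the observed values.
x = np.array([-2.0, -0.5, 1.0, 3.0])
x_pos = np.where(x >= 0, x, 0.0)   # X+: agrees with X where X >= 0, else 0
x_neg = np.where(x < 0, -x, 0.0)   # X-: -X where X < 0, else 0

print(np.allclose(x, x_pos - x_neg))          # True: X = X+ - X-
print(x.mean(), x_pos.mean() - x_neg.mean())  # both equal 0.375
```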

## Finite Expectation

Let @L^1 be the set of random variables on (&Omega., @A, P) which have a finite expectation; then

1. @L^1 is a vector space.
2. E is a linear operator on @L^1: _ E(aX + bY) = aE(X) + bE(Y); _ X, Y &in. @L^1, a, b &in. &reals.
3. E is positive, i.e. if X &in. @L^1 and X &ge. 0, then E(X) &ge. 0.
4. more generally, if X &ge. Y then E(X) &ge. E(Y).
5. @L^1 contains all bounded random variables.
6. if X &equiv. ~a, _ then E(X) = ~a.
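Linearity and positivity can be illustrated on a finite probability space. In this sketch the measure and random variables are made-up values, not taken from the text:

```python
import numpy as np

# Sketch: properties 2 and 3 checked on a hypothetical 3-point
# probability space; p, x, y are made-up values.
p = np.array([0.2, 0.5, 0.3])    # probabilities P({omega_i})
x = np.array([1.0, -2.0, 4.0])   # X(omega_i)
y = np.array([0.0, 3.0, 1.0])    # Y(omega_i)

def E(z):
    """Expectation of a random variable z under the measure p."""
    return float(np.dot(z, p))

a, b = 2.0, -1.0
lhs = E(a * x + b * y)           # linearity (property 2)
rhs = a * E(x) + b * E(y)
print(lhs, rhs)                  # the two values agree

print(E(np.abs(x)) >= 0)         # positivity (property 3): |X| >= 0
```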