Two subspaces L_1 and L_2 of &reals.^{~n} are #~{orthogonal}, and we write _ L_1 &perp. L_2 , _ if _ &forall. ~{#v}_1 &in. L_1, ~{#v}_2 &in. L_2, _ ~{#v}_1 #. ~{#v}_2 = 0. _ (I.e. every vector in L_1 is orthogonal to every vector in L_2.) _ Note that orthogonality implies L_1 &intersect. L_2 = \{#0\}: any vector in the intersection is orthogonal to itself, hence has zero length, hence is #0.
Now consider any two subspaces of &reals.^{~n}, not necessarily orthogonal. If L_1 &intersect. L_2 != \{#0\} then we can find V_1 and V_2 , _ where _ L_1 = V_1 &oplus. (L_1 &intersect. L_2), _ L_2 = V_2 &oplus. (L_1 &intersect. L_2) , _ so that _ V_1 &perp. (L_1 &intersect. L_2) _ and _ V_2 &perp. (L_1 &intersect. L_2). _ If L_1 &intersect. L_2 = \{#0\} then put V_1 = L_1 and V_2 = L_2. _ [The symbol &oplus. denotes the direct sum of two spaces.]
L_1 and L_2 are said to be ~#{geometrically orthogonal} if _ V_1 &perp. V_2 .
Of course orthogonality implies geometric orthogonality. As an example consider three-dimensional space. A line (through the origin) and a plane (through the origin) perpendicular to it are orthogonal, their intersection being just the origin (i.e. the null vector). Two planes (through the origin) which are perpendicular to each other are not orthogonal (they share a line), but they are geometrically orthogonal.
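As a concrete check, here is a minimal NumPy sketch (illustrative only; the helper `proj` and the choice of planes are ours, not part of the text): the two coordinate planes of &reals.^3 share the x-axis, and once that common line is removed the remainders V_1 and V_2 are orthogonal.

```python
import numpy as np

def proj(basis):
    """Orthogonal projection matrix onto the column space of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

L1 = np.array([[1., 0.], [0., 1.], [0., 0.]])  # columns span the xy-plane
L2 = np.array([[1., 0.], [0., 0.], [0., 1.]])  # columns span the xz-plane
P_int = proj(np.array([[1.], [0.], [0.]]))     # projection onto x-axis = L1 ∩ L2

# V_i = the part of L_i orthogonal to the intersection: subtract from
# each basis vector its component along the x-axis.
V1 = L1 - P_int @ L1   # spans the y-axis
V2 = L2 - P_int @ L2   # spans the z-axis
print(np.allclose(V1.T @ V2, 0))   # True: V1 ⊥ V2, so L1 and L2 are geometrically orthogonal
```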
#{Lemma}: _ If ~p_1 and ~p_2 are orthogonal projections onto L_1 and L_2 respectively, then
L_1 and L_2 are geometrically orthogonal _ &iff. _ ~p_1~p_2 = ~p_2~p_1.
Proof: _ If L_1 and L_2 are geometrically orthogonal then
&reals.^{~n} _ _ _ = _ _ _ V_0 &oplus. V_1 &oplus. V_2 &oplus. V_3
where _ V_1 and V_2 are as before (i.e. _ L_1 = V_1 &oplus. ( L_1 &intersect. L_2 ) _ and _ L_2 = V_2 &oplus. ( L_1 &intersect. L_2 ) ), _ V_3 = L_1 &intersect. L_2 , _ and _ V_0 = (V_1 &oplus. V_2 &oplus. V_3)^{&perp.}. _ (This decomposition is orthogonal: V_1 &perp. V_3 and V_2 &perp. V_3 by construction, and V_1 &perp. V_2 by assumption.)
Any ~{#v} &in. &reals.^{~n} decomposes as ~{#v} = ~{#v}_0 + ~{#v}_1 + ~{#v}_2 + ~{#v}_3 _ (where ~{#v}_{~i} &in. V_{~i}), _ and since the decomposition is orthogonal, _ ~p_1(~{#v}) = ~{#v}_1 + ~{#v}_3 &in. V_1 &oplus. V_3 = L_1 , _ and similarly _ ~p_2(~{#v}) = ~{#v}_2 + ~{#v}_3.
So _ ~p_1~p_2~{#v} = ~p_1(~{#v}_2 + ~{#v}_3) = ~{#v}_3 _ and _ ~p_2~p_1~{#v} = ~p_2(~{#v}_1 + ~{#v}_3) = ~{#v}_3 , _ hence _ ~p_1~p_2 = ~p_2~p_1 _ (both being the projection onto V_3 = L_1 &intersect. L_2 ).
Conversely, if _ ~p_1~p_2 = ~p_2~p_1 _ then ~p_1~p_2~{#v} &in. L_1 and ~p_1~p_2~{#v} = ~p_2~p_1~{#v} &in. L_2, so ~p_1~p_2~{#v} &in. L_1 &intersect. L_2. _ If ~{#v}_1 &in. V_1 then _ ~p_2~{#v}_1 = ~p_2~p_1~{#v}_1 = ~p_1~p_2~{#v}_1 &in. L_1 &intersect. L_2. _ But V_1 &perp. (L_1 &intersect. L_2), so _ ~p_2~{#v}_1 #. ~p_2~{#v}_1 = ~{#v}_1 #. ~p_2~{#v}_1 = 0 _ (~p_2 being an orthogonal projection), _ i.e. ~p_2~{#v}_1 = #0 for any ~{#v}_1 &in. V_1. _ Hence V_1 &perp. L_2, in particular V_1 &perp. V_2, _ so L_1 and L_2 are geometrically orthogonal.
#{Corollary}: _ If L_1 and L_2 are geometrically orthogonal _ then _ ~p_1~p_2 is the projection onto L_1 &intersect. L_2.
(In fact if L_1 ... L_{~k} are pairwise geometrically orthogonal then ~p_1~p_2 ... ~p_{~k} is the projection onto L_1&intersect.L_2&intersect. ... &intersect.L_{~k}.)
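Both the lemma and the corollary can be verified numerically for the pair of perpendicular planes from the example above; the following NumPy sketch (ours, not from the text) checks that the two projections commute and that their product is the projection onto the intersection.

```python
import numpy as np

def proj(basis):
    """Orthogonal projection matrix onto the column space of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

P1 = proj(np.array([[1., 0.], [0., 1.], [0., 0.]]))  # onto the xy-plane
P2 = proj(np.array([[1., 0.], [0., 0.], [0., 1.]]))  # onto the xz-plane
P_int = proj(np.array([[1.], [0.], [0.]]))           # onto the x-axis = L1 ∩ L2

print(np.allclose(P1 @ P2, P2 @ P1))  # True: p1 p2 = p2 p1 (lemma)
print(np.allclose(P1 @ P2, P_int))    # True: p1 p2 projects onto L1 ∩ L2 (corollary)
```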
Two factors F and G are said to be ~#{orthogonal} if their corresponding subspaces L_F and L_G are geometrically orthogonal.
The next equivalence is immediate from the lemma:
F and G are orthogonal _ _ _ _ <=> _ _ _ _ ~p_F ~p_G _ = _ ~p_G ~p_F
Furthermore we can show that:
F and G are orthogonal _ _ _ _ <=> _ _ _ _ ~p_F ~p_G _ = _ ~p_{F&min.G}
To see this consider the corresponding matrices:
P_F P_G = P_{F&min.G} _ => _ P_F P_G = P_{F&min.G} = P_{F&min.G}^T = (P_F P_G)^T = P_G^T P_F^T = P_G P_F _ (projection matrices being symmetric) _ => _ F, G orthogonal.
Conversely, if F and G are orthogonal then ~p_F ~p_G is, by the corollary, the projection onto L_F &intersect. L_G = L_{F&min.G}, _ so _ ~p_F ~p_G = ~p_{F&min.G} .
To sum up, in terms of matrices:
F and G are orthogonal _ _ _ _ <=> _ _ _ _ P_G P_F _ = _ P_F P_G _ = _ P_{F&min.G}
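This matrix characterisation is easy to check numerically. The sketch below (Python/NumPy; `factor_projection` is an illustrative helper of ours) builds P_F from a factor's vector of level labels, using the entry formula recalled in the proof below, and verifies the summary for a balanced, fully crossed pair of factors, for which F&min.G = 0 and P_{F&min.G} has every entry equal to 1/~n.

```python
import numpy as np

def factor_projection(labels):
    """Projection matrix of a factor: (P)_{i,j} = 1/n_{F(i)} if F(i) = F(j), else 0."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return same / same.sum(axis=1, keepdims=True)

# Balanced, fully crossed design on 4 units: F ∧ G is the trivial
# one-level factor, so P_{F∧G} has every entry 1/n.
F = [0, 0, 1, 1]
G = [0, 1, 0, 1]
P_F, P_G = factor_projection(F), factor_projection(G)
P_min = np.full((4, 4), 1 / 4)

print(np.allclose(P_F @ P_G, P_G @ P_F))  # True: F and G are orthogonal
print(np.allclose(P_F @ P_G, P_min))      # True: the product equals P_{F∧G}
```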
#{Theorem}: _ F and G are orthogonal if, and only if,
&hash.(F # G)^{-1}(~{f, g}) _ # _ &hash.(F&min.G)^{-1}(~h) _ _ = _ _ ~{n_f} ~{n_g} , _ _ _ _ for all levels ~f of F and ~g of G _ for which _ &hash.(F # G)^{-1}(~{f, g}) != 0,
where ~h is the level of F&min.G for which _ F^{-1}(~f) &subset. (F&min.G)^{-1}(~h) _ and _ G^{-1}(~g) &subset. (F&min.G)^{-1}(~h), _ i.e. the level that "contains" ~f and ~g.
Putting _ ~{n_{fg}} = &hash.(F # G)^{-1}(~{f, g}) _ and _ ~{n_h} = &hash.(F&min.G)^{-1}(~h) , _ then we can write more succinctly
~{n_{fg}} ~{n_h} _ = _ ~{n_f} ~{n_g} _ <=> _ F and G are orthogonal.
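In code the criterion reads as follows (an illustrative sketch; the function name and the label vectors are our own). It takes the per-unit level labels of F, G and F&min.G and checks _ ~{n_{fg}} ~{n_h} = ~{n_f} ~{n_g} _ at every occupied cell.

```python
from collections import Counter

def is_orthogonal(F, G, FminG):
    """Check n_fg * n_h == n_f * n_g for every occurring pair of levels (f, g)."""
    n_f, n_g, n_h = Counter(F), Counter(G), Counter(FminG)
    n_fg = Counter(zip(F, G))
    return all(
        n_fg[f, g] * n_h[h] == n_f[f] * n_g[g]
        for (f, g), h in zip(zip(F, G), FminG)
    )

# Balanced crossed design: F ∧ G is trivial, with the single level 'h'.
print(is_orthogonal([0, 0, 1, 1], [0, 1, 0, 1], ['h'] * 4))  # True
```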
#{Proof}
It was shown above that _ F and G are orthogonal _ <=> _ P_F P_G _ = _ P_{F&min.G} . _ Now recall that
( P_F )_{~i, ~j} _ = _ array{ {1/~n_{F({~i})}}, _ _ ,if _ F(~i) = F(~j)// 0,,otherwise}
so
(P_F P_G)~{_{i, j}} _ = _ sum{(P_F)_{~{i,k}}(P_G)_{~{k,j}},~k = 1 ... ~n,}
where
(P_F)_{~{i,k}}(P_G)_{~{k,j}} _ = _ array{ fract{1,~n_{F({~i})}}fract{1,~n_{G({~j})}}, _ _ , if _ F(~i) = F(~k) _ and _ G(~j) = G(~k)/ _ /0,, otherwise. }
In terms of characteristic functions,
(P_F P_G)~{_{i, j}}
_ _ _ _ _ = _ sum{fract{1,~n_{F(~i)}} _ array{ &chi. ( ~k )/\{~m | F(~m) = F(~i)\}} _ _ fract{1,~n_{G(~j)}} _ array{ &chi. ( ~k )/\{~m | G(~m) = G(~j)\}} ,~k,}
( Putting _ ~f = F(~i), _ ~g = G(~j); _ &chi._A denotes the characteristic function of the set A _ [ &chi._A(~x) = 1 if ~x &in. A, 0 otherwise ]. )
_ _ _ _ _ = _ sum{fract{1,~{n_f n_g}} array{ &chi. ( ~k )/\{~m | F(~m) = ~f and G(~m) = ~g\} } ,~k,}
_ _ _ _ _ = _ fract{~{n_{fg}},~{n_f n_g}}
Now, turning to the matrix corresponding to the minimum:
( P_{F&min.G} )_{~i, ~j} _ = _ array{ {1/~n_{F&min.G({~i})}}, _ _ ,if F&min.G(~i) = F&min.G(~j)// 0,,otherwise}
Since _ F and G are orthogonal _ &iff. _ P_F P_G _ = _ P_{F&min.G} , _ comparing matrix elements gives the result: the two matrices agree precisely when _ fract{~{n_{fg}},~{n_f n_g}} _ = _ fract{1,~{n_h}} _ at every nonzero entry, i.e. when _ ~{n_{fg}} ~{n_h} = ~{n_f} ~{n_g} .
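The entry computation itself holds for any pair of factors, orthogonal or not, and can be confirmed numerically; the NumPy sketch below (with made-up label vectors) compares every entry of P_F P_G against ~{n_{fg}} / (~{n_f} ~{n_g}).

```python
import numpy as np
from collections import Counter

def factor_projection(labels):
    """Projection matrix of a factor from its vector of level labels."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return same / same.sum(axis=1, keepdims=True)

F = ['a', 'a', 'a', 'b', 'b', 'c']   # arbitrary illustrative factors
G = ['x', 'x', 'y', 'y', 'y', 'y']
n_f, n_g, n_fg = Counter(F), Counter(G), Counter(zip(F, G))

lhs = factor_projection(F) @ factor_projection(G)
rhs = [[n_fg[F[i], G[j]] / (n_f[F[i]] * n_g[G[j]])
        for j in range(len(G))] for i in range(len(F))]
print(np.allclose(lhs, rhs))  # True: (P_F P_G)_{i,j} = n_fg / (n_f n_g)
```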
Note: _ If _ F&min.G = 0 _ (the trivial factor with a single level, so that ~{n_h} = ~n) _ we have: _ F and G are orthogonal _ &iff. _ ~{n_{fg}} = (~{n_f} ~{n_g}) ./ ~n
#{Example}
Consider again the example of the group of students. One table summarizes the counts _ &hash.(F # G)^{-1}(~{f, g}) ; _ three further tables display the other quantities, listed below.
Remember that F&min.G has two levels: _ \{ (Blond, Blue) , (Blond, Green) \} _ and _ \{ (Dark, Brown) , (Red, Brown) \}.
Note that the last two tables agree at every position where the first table is nonzero, so the two factors are orthogonal.
[ Tables: _ &hash.(F&min.G)^{-1}(~h) ; _ &hash.(F # G)^{-1}(~{f, g}) # &hash.(F&min.G)^{-1}(~h) ; _ ~{n_f} ~{n_g} ]
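Since the tables themselves are not reproduced here, the following sketch re-runs the check on hypothetical counts consistent with the levels described above (two students per occurring hair/eye combination; the actual class may have been different):

```python
from collections import Counter

# Hypothetical data: two students per occurring (hair, eye) combination.
students = ([('Blond', 'Blue')] * 2 + [('Blond', 'Green')] * 2 +
            [('Dark', 'Brown')] * 2 + [('Red', 'Brown')] * 2)
# The two levels of F ∧ G described above.
level = {('Blond', 'Blue'): 1, ('Blond', 'Green'): 1,
         ('Dark', 'Brown'): 2, ('Red', 'Brown'): 2}

F = [f for f, g in students]
G = [g for f, g in students]
H = [level[s] for s in students]
n_f, n_g, n_h, n_fg = Counter(F), Counter(G), Counter(H), Counter(students)

print(all(n_fg[f, g] * n_h[level[f, g]] == n_f[f] * n_g[g]
          for f, g in n_fg))  # True: F and G are orthogonal
```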