Orthogonal Projection Onto a Subspace?

Danny89
Hey,

I have a linear algebra exam tomorrow and am finding it hard to figure out how to calculate an orthogonal projection onto a subspace.

Here is the actual question type I am stuck on:
[Attachment: q2b.jpg (the exam question)]


I have spent ages searching the depths of Google and other such places for a solution, but with no luck. I am really stuck, and it would be greatly appreciated if someone could give me a helping hand and try to explain this to me.

Thanks.
 
Notice that your u and v are already orthogonal unit vectors. So the component of w in the u direction is ##\langle w,u\rangle## and the component in the v direction is ##\langle w,v\rangle##, and the nearest point in the plane to w is ##\langle w,u\rangle u + \langle w,v\rangle v##. Subtract this from w to get the orthogonal projection.
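In symbols, the standard recipe (a sketch of the general pattern, since the actual ##u##, ##v##, ##w## are in your attachment): if ##u## and ##v## are orthonormal and ##W = \operatorname{span}\{u, v\}##, then
$$\operatorname{proj}_W(w) = \langle w,u\rangle\,u + \langle w,v\rangle\,v, \qquad w_\perp = w - \operatorname{proj}_W(w).$$
If ##u## and ##v## are orthogonal but not unit length, each term picks up a normalizing factor:
$$\operatorname{proj}_W(w) = \frac{\langle w,u\rangle}{\langle u,u\rangle}\,u + \frac{\langle w,v\rangle}{\langle v,v\rangle}\,v.$$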
 
Hey, thanks for the reply.

I'm a bit confused, though. You see, I actually managed to get the solution for this problem from my lecturer just there:

[Attachment: ans.jpg (the lecturer's worked solution)]


But he doesn't subtract anything from w, as far as I can tell, and his result for ##\langle w,u\rangle u## is
$$\frac{1}{2}\begin{bmatrix}5\\0\\-5\end{bmatrix}.$$

Should it not be
$$\frac{1}{2}\begin{bmatrix}2\\0\\-3\end{bmatrix}?$$

Do you know why this is? I have been tricking around with it and just can't seem to understand it. It would appear to be straightforward enough, but I just can't see it.
 
What he has calculated is the "shadow" of w on the subspace, which is what he apparently meant by the orthogonal projection onto the subspace. What I suggested to you was the component of w orthogonal to the subspace. Stick with his definition. His calculation is correct; just don't subtract it from w.
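The general picture, independent of the particular numbers in your sheet: for orthonormal ##u, v## spanning ##W##, any ##w## splits uniquely as
$$w = \underbrace{\langle w,u\rangle u + \langle w,v\rangle v}_{\text{in } W \text{ (the ``shadow'')}} + \underbrace{\big(w - \langle w,u\rangle u - \langle w,v\rangle v\big)}_{\text{orthogonal to } W}.$$
Your lecturer's answer is the first piece; what I described was the second.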
 
Thanks a million, a real help!
 
Danny89 said:
his result for ##\langle w,u\rangle u## is
$$\frac{1}{2}\begin{bmatrix}5\\0\\-5\end{bmatrix}.$$

Should it not be
$$\frac{1}{2}\begin{bmatrix}2\\0\\-3\end{bmatrix}?$$

Do you know why this is?
The quantity ##\langle w,u\rangle## is a plain old number, not a vector; here it works out to ##\pm 5## (the exact sign depends on the ##w## and ##u## given in the sheet). He then multiplies the vector ##\begin{bmatrix}1\\0\\-1\end{bmatrix}## by that number. To get the scalar multiple of a vector, you just multiply each component by the scalar, which is where the ##\begin{bmatrix}5\\0\\-5\end{bmatrix}## in his answer comes from.
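For instance (taking the scalar to be ##5## purely for illustration, since the exact value and sign depend on the ##w## and ##u## in your sheet):
$$5\begin{bmatrix}1\\0\\-1\end{bmatrix} = \begin{bmatrix}5\cdot 1\\ 5\cdot 0\\ 5\cdot(-1)\end{bmatrix} = \begin{bmatrix}5\\0\\-5\end{bmatrix},$$
and the ##\tfrac{1}{2}## out front would then come from normalizing ##u## (dividing by ##\langle u,u\rangle = 2## if ##u = (1,0,-1)^T##, which is again an assumption, since the exact ##u## is in the attachment).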
 