Find matrix corresponding to Linear Transformation (Orth. Projection)

In summary: Since dim(L(R^n)) = 1, the rank-nullity theorem gives dim(ker L) = n - 1, so ker L is an (n-1)-dimensional subspace of S, the orthogonal complement of span(v). Because the two spaces have the same dimension, ker L = S, and so L(u) = 0 for every u in S. Hence every w in R^n is mapped to its orthogonal projection onto span(v), and L is given by the matrix vv^T, where v is a unit vector spanning the image L(R^n). Recall that a linear operator L: R^n → R^n is called a projection if L^2 = L.
  • #1
nlykkei
A linear operator L: R^n → R^n is called a projection if L^2 = L. A projection L is an orthogonal projection if ker L is orthogonal to L(R^n).

I've shown that the only invertible projection is the identity map I_{R^n}, by using function composition on the identity L^2 = L (i.e., L(L(v)) = L(v)).

Question: Now suppose that L is an orthogonal projection whose image L(R^n) has rank 1. Show that there exists a unit vector v such that L is defined by the matrix vv^T. Conclude that L is the orthogonal projection onto S = Span(v).

I know I can write the orthogonal projection P_S: R^n → S as uu^T using an orthonormal basis {u} for S. And since v is a unit vector, {v} is an orthonormal basis for S, so P_S = vv^T. So the only thing left to show is that there exists such a vector v with the stated property?
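As a side check (not part of the original post), here is a minimal sketch of why vv^T is idempotent and sends every vector into Span(v), assuming only that v is a unit vector:

```latex
% Assume v is a unit vector: v^T v = 1.
(vv^T)^2 = v\,(v^T v)\,v^T = vv^T                              % idempotent
(vv^T)\,w = (v^T w)\,v \in \operatorname{Span}(v)              % image lies in Span(v)
v^T\bigl(w - (vv^T)w\bigr) = v^T w - (v^T w)(v^T v) = 0        % residual is orthogonal to v
```

So (vv^T)w is exactly the orthogonal projection of w onto Span(v).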

http://math.stackexchange.com/quest...r-v-such-that-l-is-defined-by-the-matrix-v-vt
 
  • #2
nlykkei said:
A linear operator L: R^n → R^n is called a projection if L^2 = L. A projection L is an orthogonal projection if ker L is orthogonal to L(R^n).

I've shown that the only invertible projection is the identity map I_{R^n}, by using function composition on the identity L^2 = L (i.e., L(L(v)) = L(v)).

Question: Now suppose that L is an orthogonal projection whose image L(R^n) has rank 1. Show that there exists a unit vector v such that L is defined by the matrix vv^T. Conclude that L is the orthogonal projection onto S = Span(v).

I know I can write the orthogonal projection P_S: R^n → S as uu^T using an orthonormal basis {u} for S. And since v is a unit vector, {v} is an orthonormal basis for S, so P_S = vv^T. So the only thing left to show is that there exists such a vector v with the stated property?

http://math.stackexchange.com/quest...r-v-such-that-l-is-defined-by-the-matrix-v-vt

If L has rank 1, then L(R^n) is one-dimensional, so it's spanned by a single vector w. Let v = w/|w|. Isn't that a candidate for the vector you want?
 
  • #3
Indeed it is. But how do I show that every vector in R^n is mapped to the same thing by L and by vv^T?

I know I can write any vector h as h = i + j, where i lies in Span(v) and j lies in the orthogonal complement of Span(v).

Then L(h) = L(i) + L(j). Now i = cv, and since v lies in the image, there is a u with L(u) = v, so v = L(u) = L(L(u)) = L(v); thus v is mapped to itself, and L(i) = cL(v) = cv = i.

Thus L(h) = i + L(j). But how do I show that L(j) = 0? Then I will have shown that every vector is mapped to its orthogonal projection onto Span(v)!

Please tell me if you know; I have given this a lot of thought.
 
  • #4
How about using the rank-nullity theorem to determine dim(ker L)?
 
  • #5
dim(ker L) = n - 1, since dim(L(R^n)) = 1. But this doesn't show that L is given by vv^T for some vector v in L(R^n)?

In the notation from above, how can I use this to conclude L(j) = 0?

Also, I know ker L is a subset of the orthogonal complement of span(v), but how do I show the reverse inclusion?
 
  • #6
I was thinking that since the orthogonal complement of span(v) is (n-1)-dimensional, and ker L is an (n-1)-dimensional subspace of that, they have to be the same.
 
  • #7
Ahh, yeah: we have that ker L is a subset of S := the orthogonal complement of span(v), and dim(ker L) = dim(S) implies ker L = S?

So u in S implies L(u) = 0, and hence for any w in R^n, L(w) is the orthogonal projection of w onto span(v)!

And the matrix of this transformation is indeed vv^T (by a well-known theorem).
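Spelled out (a short sketch, using the decomposition w = i + j with j orthogonal to v from post #3), that well-known step is just:

```latex
% w = c\,v + j with j \perp v; since v^T v = 1 and v^T j = 0, the coefficient is c = v^T w.
L(w) = L(cv) + L(j) = c\,v + 0 = (v^T w)\,v = v\,(v^T w) = (vv^T)\,w .
```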

Thanks so much !
 
  • #8
nlykkei said:
Ahh, yeah: we have that ker L is a subset of S := the orthogonal complement of span(v), and dim(ker L) = dim(S) implies ker L = S?
Right. The idea is that if there were a u in S that isn't a linear combination of the n-1 linearly independent vectors spanning ker L, then S would contain n linearly independent vectors and would be at least n-dimensional. Since we have also proved that S is (n-1)-dimensional, that's a contradiction, so every u in S is a linear combination of vectors in ker L. This implies that S is a subset of ker L.

The problem statement tells us that ker L is a subset of S, so now (assuming that we know that ker L is a subspace) we know that S=ker L.
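As a closing sanity check (not in the original thread), here is a small NumPy sketch that verifies the conclusion numerically; the dimension and the vector v are arbitrary choices:

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)

# Pick an arbitrary unit vector v and form P = v v^T.
v = rng.normal(size=n)
v /= np.linalg.norm(v)
P = np.outer(v, v)

# P is a projection (P^2 = P) and its image has rank 1.
assert np.allclose(P @ P, P)
assert np.linalg.matrix_rank(P) == 1

# The kernel is orthogonal to the image: for any w, the residual w - Pw is orthogonal to Pw.
w = rng.normal(size=n)
assert np.isclose((w - P @ w) @ (P @ w), 0.0)

# P w is the orthogonal projection of w onto span(v).
assert np.allclose(P @ w, (v @ w) * v)
```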
 

1. What is a linear transformation?

A linear transformation is a function that maps one vector space to another while preserving the vector space structure, that is, it respects vector addition and scalar multiplication. It can be represented by a matrix and is commonly used in many fields of science and engineering.
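Concretely, "preserving the structure" can be written in symbols (notation added here for illustration):

```latex
L(a\,x + b\,y) = a\,L(x) + b\,L(y)
\qquad \text{for all vectors } x, y \text{ and all scalars } a, b.
```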

2. What is orthogonality in the context of linear transformations?

Orthogonality refers to a perpendicular relationship between vectors or subspaces. In the context of this thread, a projection L is an orthogonal projection when its kernel is orthogonal to its image; such a projection sends each vector to the point of the image subspace closest to it.
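A standard example (added here for illustration): the projection of R^3 onto the xy-plane, whose kernel, the z-axis, is orthogonal to its image:

```latex
L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
\ker L = \operatorname{Span}\{e_3\} \;\perp\; L(\mathbb{R}^3) = \operatorname{Span}\{e_1, e_2\}.
```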

3. How is a matrix corresponding to a linear transformation found?

To find the matrix corresponding to a linear transformation, first choose a basis for the domain and a basis for the codomain. Then apply the transformation to each basis vector of the domain, express the result in the codomain basis, and record those coordinate vectors as the columns of a matrix. The resulting matrix is the matrix of the linear transformation with respect to the chosen bases.
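For instance, here is a small sketch (the map `transform` is a made-up example, and standard bases are assumed) that builds the matrix by applying the map to each standard basis vector and using the images as columns:

```python
import numpy as np

def transform(x):
    # Hypothetical linear map R^3 -> R^2, used only for illustration.
    return np.array([x[0] + 2 * x[1], 3 * x[2]])

# Columns of the matrix are the images of the standard basis vectors.
A = np.column_stack([transform(e) for e in np.eye(3)])

x = np.array([1.0, -1.0, 2.0])
assert np.allclose(A @ x, transform(x))  # the matrix reproduces the transformation
```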

4. What is the purpose of finding a matrix corresponding to a linear transformation?

The matrix corresponding to a linear transformation allows us to apply the transformation to any vector by a simple matrix-vector multiplication. It also lets us calculate with and manipulate the transformed vectors directly, making it a useful tool for solving a wide range of problems.

5. Can any linear transformation be represented by a matrix?

Yes, any linear transformation between finite-dimensional vector spaces can be represented by a matrix. Once bases for the domain and codomain are fixed, the representing matrix is unique. This makes it easier to work with linear transformations and perform calculations.
