Proving the Orthogonal Projection Formula for Vector Subspaces

In summary, the orthogonal projection of a vector onto a subspace spanned by orthonormal vectors can be computed by applying a projector to the vector and writing the result as a linear combination of those basis vectors, with coefficients given by inner products. The same argument carries over to general inner product notation, and (for nonzero vectors) orthogonality implies linear independence along the way.
  • #1
member 428835
Hi PF!

I've been reading and it appears that the orthogonal projection of a vector ##v## onto the subspace spanned by orthonormal vectors ##e_1,...,e_n## is given by $$\sum_j\langle e_j,v \rangle e_j$$ (the ##e_j## are orthonormal unit vectors, so ignore the usual inner product denominator for simplicity), but there is never a proof in the texts. It's always given by definition, or I see "trivial" next to the proof. Surely this is something we prove, right?
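For what it's worth, the formula does check out numerically; here's a quick sanity check in ##\mathbb{R}^3## (my own sketch in NumPy, with an arbitrary orthonormal pair and an arbitrary vector ##v##):

```python
import numpy as np

# Two orthonormal vectors spanning a plane in R^3 (arbitrary example choice)
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)

v = np.array([3.0, 2.0, -1.0])  # arbitrary vector to project

# The formula in question: sum_j <e_j, v> e_j
proj = np.dot(e1, v) * e1 + np.dot(e2, v) * e2

# The residual v - proj should be orthogonal to both spanning vectors
residual = v - proj
print(proj)                  # approximately [3.  0.5 0.5]
print(np.dot(residual, e1))  # ~0 (up to floating-point rounding)
print(np.dot(residual, e2))  # ~0
```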
 
  • #2
it really depends on what you're trying to do here, as there are lots of flavors of this. Suppose we have vectors in ##\mathbb C^m## with ##m \geq n## and use the standard inner product.

you have some vector ##\mathbf u## and you project it onto the subspace spanned by these ##n## mutually orthonormal vectors. You can do this explicitly with ##\mathbf v = P\mathbf u## where ##P## is a projector -- i.e. ##P^2 = P## and ##P = P^*##.
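(A quick numerical sketch of such a ##P##, assuming the standard construction ##P = \sum_j \mathbf e_j \mathbf e_j^*##, i.e. ##P = EE^*## where the columns of ##E## are the ##\mathbf e_j##; the particular matrices below are arbitrary NumPy examples.)

```python
import numpy as np

rng = np.random.default_rng(0)

# n = 2 orthonormal vectors in C^4, obtained via QR on a random complex matrix
A = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
E, _ = np.linalg.qr(A)                 # columns of E are orthonormal

P = E @ E.conj().T                     # P = sum_j e_j e_j^*

print(np.allclose(P @ P, P))           # True:  P^2 = P
print(np.allclose(P, P.conj().T))      # True:  P = P^*

u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = P @ u                              # projection of u onto the subspace
print(np.allclose(E.conj().T @ (u - v), 0))  # True: residual is orthogonal to the e_j
```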

so we have a generating set for this subspace and use it to write ##\mathbf v## as a linear combination of these. (Note: ignoring the zero vector: if you have an inner product, orthogonality implies linear independence, why?)

so
##\mathbf v = \sum_{j=1}^n \alpha_j \mathbf e_j##

multiply on the left by ##\mathbf e_k^*## (conjugate transpose) to see

##\mathbf e_k^*\mathbf v = \mathbf e_k^*\sum_{j=1}^n \alpha_j \mathbf e_j = \sum_{j=1}^n \alpha_j \mathbf e_k^*\mathbf e_j = \alpha_k \mathbf e_k^*\mathbf e_k = \alpha_k##

now re-write this with general inner product notation -- same result.
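For instance, with an inner product that is linear in its second slot and conjugate-linear in its first (so that ##\mathbf e_k^*\mathbf v = \langle \mathbf e_k, \mathbf v\rangle##), the same chain of steps reads
$$\langle \mathbf e_k, \mathbf v\rangle = \Big\langle \mathbf e_k, \sum_{j=1}^n \alpha_j \mathbf e_j \Big\rangle = \sum_{j=1}^n \alpha_j \langle \mathbf e_k, \mathbf e_j\rangle = \alpha_k, \qquad \text{so} \qquad \mathbf v = \sum_{j=1}^n \langle \mathbf e_j, \mathbf v\rangle\, \mathbf e_j,$$
which is exactly the formula in post #1.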
 
  • #3
StoneTemplePython said:
(Note: ignoring the zero vector: if you have an inner product, orthogonality implies linear independence, why?)
This is simple to show: take ##c_1v_1+...+c_nv_n = 0## and take the inner product with some ##v_j##. Orthogonality wipes out every term except the ##j##-th, and since ##v_j## is non-zero this forces ##c_j=0## for each ##j##, and thus linear independence.
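Written out, the computation for a fixed ##j## is
$$0 = \langle v_j,\, c_1 v_1 + \dots + c_n v_n \rangle = \sum_{i=1}^n c_i \langle v_j, v_i \rangle = c_j \langle v_j, v_j \rangle,$$
and ##\langle v_j, v_j\rangle \neq 0## because ##v_j \neq 0##, so ##c_j = 0##.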

And thanks! Good to see it.
 

1. What is the orthogonal projection formula for vector subspaces?

The orthogonal projection formula for vector subspaces gives the closest vector in a subspace to a given vector ##v##: for an orthonormal basis ##e_1,\dots,e_n## of the subspace, that closest vector is ##\sum_j \langle e_j, v\rangle e_j##. Projecting ##v## onto the subspace in this way minimizes the distance between ##v## and its projection.
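As a quick illustration of this "closest vector" property, here is a sketch in NumPy (the subspace and the vectors are random example choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal basis (columns of E) of a 2-dimensional subspace of R^5
E, _ = np.linalg.qr(rng.standard_normal((5, 2)))
v = rng.standard_normal(5)

proj = E @ (E.T @ v)   # orthogonal projection of v onto the subspace

# Any other point of the subspace is at least as far from v as the projection is
for _ in range(5):
    w = E @ rng.standard_normal(2)          # random point in the subspace
    assert np.linalg.norm(v - proj) <= np.linalg.norm(v - w) + 1e-12
print("projection is the closest among the sampled points")
```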

2. Why is it important to prove the orthogonal projection formula for vector subspaces?

Proving the orthogonal projection formula for vector subspaces is important because it provides a mathematical justification for its use in various applications. It also helps us understand the underlying principles and properties of the formula, which can aid in solving more complex problems.

3. What are the steps involved in proving the orthogonal projection formula for vector subspaces?

The steps involved in proving the orthogonal projection formula typically include choosing an orthonormal basis ##e_1,\dots,e_n## of the subspace, writing the candidate projection of ##v## as a linear combination ##\sum_j \alpha_j e_j##, using linearity and orthonormality of the inner product to identify the coefficients as ##\alpha_j = \langle e_j, v\rangle##, and finally checking that the residual ##v - \sum_j \langle e_j, v\rangle e_j## is orthogonal to every vector in the subspace.
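Assuming the inner product is linear in its second argument (as in the thread above), the final check amounts to verifying, for each ##k##,
$$\Big\langle e_k,\; v - \sum_{j} \langle e_j, v\rangle\, e_j \Big\rangle = \langle e_k, v\rangle - \sum_{j} \langle e_j, v\rangle\, \langle e_k, e_j\rangle = \langle e_k, v\rangle - \langle e_k, v\rangle = 0,$$
so the residual is orthogonal to every basis vector and hence to the whole subspace.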

4. Can the orthogonal projection formula be extended to higher dimensions?

Yes, the orthogonal projection formula can be extended to higher dimensions. In fact, the formula is often used in higher-dimensional spaces, such as in linear regression and multivariate analysis. The principles and steps for proving the formula remain the same, but the calculations may become more complex.
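For example, ordinary least squares can be viewed exactly this way: the fitted values are the orthogonal projection of the response vector onto the column space of the design matrix. A short NumPy sketch with random example data (`lstsq` is only used here as an independent check):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random 100 x 3 design matrix X and response y (example data)
X = rng.standard_normal((100, 3))
y = rng.standard_normal(100)

Q, _ = np.linalg.qr(X)       # orthonormal basis for the column space of X
y_hat = Q @ (Q.T @ y)        # projection of y onto that column space

# Compare with the fitted values from an ordinary least-squares solve
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(y_hat, X @ b))  # True: least-squares fit = orthogonal projection
```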

5. How is the orthogonal projection formula for vector subspaces used in real-world applications?

The orthogonal projection formula for vector subspaces has many real-world applications in physics, engineering, and computer graphics. It is used to find the best-fit line or plane in data analysis, to minimize approximation error in signal processing, and to render 3D objects onto a 2D screen (orthographic projection) in computer graphics. It also appears in machine learning algorithms and in solving optimization problems.
