Linear Algebra: orthogonal basis and orthogonal projection

In summary, this thread discusses orthogonal bases and orthogonal projections in linear algebra. The problem at hand is to find an orthogonal basis for a given plane and to use it to compute the orthogonal projection of a vector onto that plane. Along the way the conversation covers related concepts such as inner product, norm, orthogonality, linear independence, basis, subspace, and span. The original poster's main point of confusion is linear independence.
  • #1
infinitylord
I was placed into honors calculus III for school. I was happy about this and I consider myself to be a pretty quick learner in math. However, my teacher is using many notations and terms that I am completely unfamiliar with. Mostly, I believe, because I've never taken linear algebra. I am familiar with the idea of vectors and many parts of linear algebra from my physics courses, but I'm still not at the level I think I need. I was assigned this as a "challenge problem."

Let b=(3,1,1) and P be the plane through the origin given by x+y+2z=0.

(a) Find an orthogonal basis for P. That is, find two nonzero orthogonal vectors ##V_1, V_2 \in P##.

(b) Find the orthogonal projection of b onto P. That is, find ##\operatorname{Proj}_{V_1}b + \operatorname{Proj}_{V_2}b##.

Homework Equations


##\vec{P} = \left(\dfrac{\vec{v}\cdot\vec{a}}{\|\vec{a}\|^{2}}\right)\vec{a}## (the orthogonal projection of ##\vec{v}## onto ##\vec{a}##)

The Attempt at a Solution


I have no idea where to even begin. I tried looking up what an orthogonal basis is, but everywhere I looked they used topological symbols I wasn't the slightest bit familiar with. It's also not in my book. I know how to solve for an orthogonal projection, for the most part, by using the equation above, but I really have no idea what an orthogonal projection actually is or what its significance is. I really don't want anyone to just give me the answer; I really need to understand all of this. Thank you!
 
  • #2
Are you familiar with the concept of a basis? Then an orthogonal basis is just a basis whose vectors are mutually perpendicular. In your two-dimensional subspace that means ##\vec{v}_1 \cdot \vec{v}_2 = 0##, where ##\cdot## represents, in your case, the usual scalar product between vectors in ##\mathbb{R}^n##.
Once you have figured out a basis you can solve the projection problem.
 
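A minimal numerical sketch of the orthogonality check described above, assuming NumPy; the two vectors are hypothetical examples chosen only to illustrate that a zero dot product means the vectors are perpendicular:

```python
import numpy as np

# Hypothetical example vectors (not taken from the problem).
v1 = np.array([1.0, -1.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])

# The usual scalar (dot) product in R^n; a result of 0 means v1 and v2 are perpendicular.
print(np.dot(v1, v2))  # 0.0
```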
  • #3
Actually, I am not familiar with the concept of a basis or a projection. But I do understand the idea of orthogonality, and the fact that the dot product of two orthogonal vectors is equal to zero.
 
  • #4
I think you need to take a little break from the problem and study the concepts "inner product", "norm", "orthogonal", "linearly independent", "basis", "subspace" and "span" (as in "the subspace spanned by the set S"). Ask a question if you get stuck on something.

Are you somewhat familiar with sets, so that you understand notation like this? ##x\in A\subseteq B##. This would make it easier to explain some things.
 
  • #5
infinitylord said:
Let P be the plane through the origin given by x+y+2z=0.
The first thing to do is to recognize that any point in P can be written as (x, y, z) with x + y + 2z = 0. That equation is the same as x = -y - 2z, so any point in P can be written as (-y - 2z, y, z) = y(-1, 1, 0) + z(-2, 0, 1). That tells you that {(-1, 1, 0), (-2, 0, 1)} is a basis for P.

Now, how is the "dot product" defined in ##\mathbb{R}^3##?
 
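A small numerical check of the parameterization above, assuming NumPy: every combination y(-1, 1, 0) + z(-2, 0, 1) satisfies x + y + 2z = 0, and the two spanning vectors are linearly independent.

```python
import numpy as np

u1 = np.array([-1.0, 1.0, 0.0])      # coefficient vector of y in (-y - 2z, y, z)
u2 = np.array([-2.0, 0.0, 1.0])      # coefficient vector of z
normal = np.array([1.0, 1.0, 2.0])   # coefficients of x + y + 2z = 0

# Any combination y*u1 + z*u2 lies in the plane: its dot product with the normal is 0.
y, z = 2.5, -0.7                     # arbitrary parameter values
point = y * u1 + z * u2
print(np.isclose(np.dot(point, normal), 0.0))  # True

# {u1, u2} is linearly independent: the matrix with these rows has rank 2.
print(np.linalg.matrix_rank(np.vstack([u1, u2])))  # 2
```

Note that this particular basis is not yet orthogonal, since u1 · u2 = 2 ≠ 0, so it answers the "find a basis" step but not yet part (a) of the problem.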
  • #6
Thank you for your help.
-Inner product I am very familiar with.
-Norm is just the length or magnitude of a vector.
-Orthogonal is the same thing as perpendicular. And the inner product of two orthogonal vectors will be equal to zero.
I just looked up the rest and I believe I understand it for the most part. (tell me if I've got it)
-span is basically all of the possible linear combinations of a set of vectors.
-a subspace is where any vectors that are contained within the subspace can be added together or multiplied by some scalar and still remain in the subspace. It also must contain the zero vector.
-I know that a set of vectors has to be either linearly dependent or independent. And it's where zero is the only solution to all of the vectors? So does that mean that all of the vectors in the set are in different spans?
-And a basis is basically when a set of vectors span the vector space (which is every possible point on the graph?) and are also linearly independent.
Really my only confusion still is with the idea of linear independence.
And I understand the ∈ symbol means that x is an element of A, but I'm not sure what the other one means.
 
  • #7
infinitylord said:
I just looked up the rest and I believe I understand it for the most part. (tell me if I've got it)
-span is basically all of the possible linear combinations of a set of vectors.
It's also the smallest subspace that has that set of vectors as a subset.

infinitylord said:
-a subspace is where any vectors that are contained within the subspace can be added together or multiplied by some scalar and still remain in the subspace. It also must contain the zero vector.
I like to define it as a subset that's also a vector space. A subset is a vector space if and only if the conditions you mentioned are satisfied.

infinitylord said:
-I know that a set of vectors has to be either linearly dependent or independent. And it's where zero is the only solution to all of the vectors? So does that mean that all of the vectors in the set are in different spans?
One way of describing a linearly independent set S is to say that no element of S is a linear combination of the other elements of S.

infinitylord said:
-And a basis is basically when a set of vectors span the vector space (which is every possible point on the graph?) and are also linearly independent.
Yes, this is a good way to define it. There are many equivalent ways. Two examples: 1. It's a maximal linearly independent set (i.e. a linearly independent set that isn't a proper subset of any linearly independent set). 2. It's a minimal spanning set for the vector space (i.e. a set that spans the vector space, and doesn't have any proper subsets that span the vector space).

infinitylord said:
Really my only confusion still is with the idea of linear independence.
That's the most difficult of these concepts. I like this statement of the definition: A finite set ##\{x_1,\dots,x_n\}## is said to be linearly independent if the following implication holds for all ##a_1,\dots,a_n\in\mathbb R##.
If ##\sum_{i=1}^n a_i x_i =0## then ##a_i=0## for all ##i\in\{1,\dots,n\}##.​
An infinite set is said to be linearly independent if all of its finite subsets are.

infinitylord said:
And I understand the ∈ symbol means that x is an element of A, but I'm not sure what the other one means.
##A\subseteq B## means "A is a subset of B", i.e. that every element of A is an element of B. Many books use the symbol ##\subset## instead of ##\subseteq##.

Well done. It looks like you understand these things pretty well. I will try to answer your original questions in the next post.
 
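The definition of linear independence quoted above can also be tested numerically. Here is a minimal sketch, assuming NumPy, using the fact that "the only solution of ##\sum_i a_i x_i = 0## is ##a_i = 0## for all i" is equivalent to the matrix with the ##x_i## as columns having full column rank:

```python
import numpy as np

def is_linearly_independent(vectors):
    """True if the given equal-length vectors are linearly independent.

    The homogeneous system a_1 x_1 + ... + a_n x_n = 0 has only the trivial
    solution exactly when the matrix whose columns are the x_i has rank n.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# Illustrative examples (hypothetical vectors, not taken from the thread):
print(is_linearly_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(is_linearly_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False: second = 2 * first
```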
  • #8
Let V be a vector space. A set ##S\subseteq V## is said to be orthonormal if

a) For all ##x,y\in S## with ##x\neq y##, x is orthogonal to y.
b) For all ##x\in S##, ##\|x\|=1##.

An orthonormal basis is a basis that's also an orthonormal set.

For example, the set ##\{(1,0),(1,1)\}## is a basis for ##\mathbb R^2##, but not an orthonormal basis. ##\{(1,0),(0,1)\}## is an orthonormal basis.

Now, regarding orthogonal projections...

Let V be a finite-dimensional inner product space. Let U be a subspace of V. There's a theorem that says that for each ##x\in V##, there's a unique pair ##(y,z)## such that ##y\in U##, ##z## is orthogonal to every element of U, and ##x=y+z##. The vector y is called the orthogonal projection of x onto U.

I recently explained orthogonal projections onto a 1-dimensional subspace in another thread. This explanation may be useful to you too, even though you're interested in a 2-dimensional subspace. So I'll quote myself.

Fredrik said:
You need to know how to find the projection of an arbitrary vector ##\vec x## onto a 1-dimensional subspace U. Let ##\vec y## and ##\vec z## be the unique vectors such that ##\vec y## is in U, ##\vec z## is orthogonal (perpendicular) to every vector in U, and ##\vec x=\vec y+\vec z##.

We're looking for a formula for ##\vec y##. If ##\vec u## is a unit vector in U, there's a unique real number r such that ##\vec y=r\vec u##. So we can write ##\vec x=r\vec u+\vec z##. Now what do you get if you use this formula to compute ##\vec u\cdot\vec x##? The result is simply r. (You should verify that). So we have ##\vec y=r\vec u=(\vec u\cdot\vec x)\vec u##.
The vector ##\vec y## is by definition the orthogonal projection of ##\vec x## onto U.

Fredrik said:
Right, two vectors are perpendicular if and only if their dot product is zero. So we get
$$\vec u\cdot\vec x=\vec u\cdot (r\vec u+\vec z)=r(\vec u\cdot\vec u)+\vec u\cdot\vec z=r|\vec u|^2+0=r.$$ And since ##\vec y=r\vec u##, this means that ##\vec y=(\vec u\cdot\vec x)\vec u##.
 
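The quoted 1-dimensional projection formula ##\vec y=(\vec u\cdot\vec x)\vec u## is easy to verify numerically. A minimal sketch, assuming NumPy; the vectors below are arbitrary examples rather than the thread's b:

```python
import numpy as np

def project_onto_line(x, u):
    """Orthogonal projection of x onto the line spanned by u (u need not be a unit vector)."""
    u_hat = u / np.linalg.norm(u)   # normalize so that y = (u_hat . x) u_hat applies
    y = np.dot(u_hat, x) * u_hat    # component of x along u
    z = x - y                       # remainder, orthogonal to u
    return y, z

x = np.array([3.0, 4.0])
u = np.array([1.0, 1.0])
y, z = project_onto_line(x, u)
print(y)                            # [3.5 3.5]
print(np.isclose(np.dot(z, u), 0))  # True: the remainder is perpendicular to u
```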
  • #9
Okay, the mathematical representation you gave of linear independence makes sense to me. And orthonormal is basically orthogonal unit vectors then? Since they must be perpendicular and have a norm of 1.
The question is beginning to make more sense now, although admittedly I am still not completely sure where to begin. I basically have to find two vectors (other than the zero vector) that are contained in the two-dimensional subspace P of ##\mathbb{R}^3## and that are perpendicular to each other. Does this mean there is more than one correct answer? And while researching the equation of a plane, I found that I need to find the normal vector, which would be <1,1,2> in this case, correct?
 
  • #10
infinitylord said:
Okay, the mathematical representation you gave of linear independence makes sense to me. And orthonormal is basically orthogonal unit vectors then? Since they must be perpendicular and have a norm of 1.
The question is beginning to make more sense now, although admittedly I am still not completely sure where to begin. I basically have to find two vectors (other than the zero vector) that are contained in the two-dimensional subspace P of ##\mathbb{R}^3## and that are perpendicular to each other. Does this mean there is more than one correct answer? And while researching the equation of a plane, I found that I need to find the normal vector, which would be <1,1,2> in this case, correct?
Right, there are infinitely many orthonormal bases, and yes, that is a normal vector of that plane.

As for part b), recall that the decomposition x=y+z of a vector into a sum of one vector that's in the subspace, and one that's orthogonal to it, is unique. So if you can think of any way to write b as a sum of an element of P and one that's orthogonal to P, you will have found your orthogonal projection.
 
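One concrete way to produce such a decomposition, sketched here under the assumption that a normal vector of the plane is known (an alternative route, not the sum-of-projections method the thread is working toward): subtract from x its component along the normal, and what remains lies in the plane.

```python
import numpy as np

def decompose_against_plane(x, normal):
    """Split x as y + z, with z parallel to the plane's normal and y lying in the plane."""
    n_hat = normal / np.linalg.norm(normal)
    z = np.dot(x, n_hat) * n_hat   # component of x orthogonal to the plane
    y = x - z                      # the orthogonal projection of x onto the plane
    return y, z

# Generic check with an arbitrary vector (not the problem's b, so as not to give the answer away):
x = np.array([1.0, 2.0, 3.0])
normal = np.array([1.0, 1.0, 2.0])         # normal of x + y + 2z = 0
y, z = decompose_against_plane(x, normal)
print(np.allclose(x, y + z))               # True
print(np.isclose(np.dot(y, normal), 0.0))  # True: y really lies in the plane
```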
  • #11
Thank you very much! I believe I understand it all now.
a) For ##\vec{V_1}, \vec{V_2} \in P##: ##\vec{V_1}## can be arbitrary so long as it satisfies the condition a + b + 2c = 0, so I chose ##\vec{V_1} = (-1, 1, 0)##.
##\vec{V_2}## must be orthogonal to ##\vec{V_1}##, so I chose it to be (1, 1, 0). And that is the answer I stuck with.
b) I used the formula I wrote in the original question, once for ##\vec{V_1}## and once for ##\vec{V_2}##. I got ##\operatorname{Proj}_{\vec{V_1}}b = (1, -1, 0)## and ##\operatorname{Proj}_{\vec{V_2}}b = (2, 2, 0)##.
 
  • #12
Almost there :thumbs: Unfortunately ##\vec{V_2}## doesn't belong to your subspace, but you are definitely on the right track. Once you find the projections of b onto ##\vec{V_1}## and ##\vec{V_2}##, the projection onto the plane is the sum of the two, am I right @Fredrik?
 
  • #13
Oh! You're right, I hadn't even realized it didn't belong to the subspace. I just redid it and got ##\vec{V_2} = (1, 1, -1)##.
It satisfies a + b + 2c = 0, and ##\langle\vec{V_1},\vec{V_2}\rangle = -1 + 1 + 0 = 0##. Therefore, they are orthogonal. This would then make the projection of b onto ##\vec{V_2}## equal to (1, 1, -1).
And the sum of the projections, i.e. the projection of b onto P, should then be (2, 0, -1). Right?
 
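A quick numerical verification of the numbers in this post, assuming NumPy and using the projection formula from the original post:

```python
import numpy as np

def proj(b, v):
    """Orthogonal projection of b onto the line spanned by v: (b . v / |v|^2) v."""
    return (np.dot(b, v) / np.dot(v, v)) * v

b  = np.array([3.0, 1.0, 1.0])
v1 = np.array([-1.0, 1.0, 0.0])
v2 = np.array([1.0, 1.0, -1.0])
normal = np.array([1.0, 1.0, 2.0])

p = proj(b, v1) + proj(b, v2)
print(proj(b, v1), proj(b, v2))          # [ 1. -1.  0.] [ 1.  1. -1.]
print(p)                                 # [ 2.  0. -1.]
print(np.isclose(np.dot(p, normal), 0))  # True: the projection lies in P
print(np.isclose(np.dot(b - p, v1), 0) and np.isclose(np.dot(b - p, v2), 0))  # True: the remainder is orthogonal to P
```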
  • #14
valerioperi said:
Almost there :thumbs: Unfortunately ##\vec{V_2}## doesn't belong to your subspace, but you are definitely on the right track. Once you find the projections of b onto ##\vec{V_1}## and ##\vec{V_2}##, the projection onto the plane is the sum of the two, am I right @Fredrik?
Yes, that's right.

infinitylord, can you find an orthonormal basis for ##\mathbb R^3##, with two of the vectors in P and the third orthogonal to P, and then write b as a linear combination of these three basis vectors?
 
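One possible way to approach this closing challenge, sketched with NumPy under the assumption that the thread's vectors ##\vec{V_1} = (-1, 1, 0)##, ##\vec{V_2} = (1, 1, -1)## and the normal (1, 1, 2) are used: normalize all three to get an orthonormal basis of ##\mathbb{R}^3##, and then each coefficient of b is just a dot product.

```python
import numpy as np

b  = np.array([3.0, 1.0, 1.0])
v1 = np.array([-1.0, 1.0, 0.0])   # in P
v2 = np.array([1.0, 1.0, -1.0])   # in P, orthogonal to v1
n  = np.array([1.0, 1.0, 2.0])    # orthogonal to P

# Normalize to obtain an orthonormal basis {e1, e2, e3} of R^3.
e1, e2, e3 = (v / np.linalg.norm(v) for v in (v1, v2, n))

# For an orthonormal basis, the coefficient of b along e_i is simply b . e_i.
coeffs = [np.dot(b, e) for e in (e1, e2, e3)]
reconstruction = sum(c * e for c, e in zip(coeffs, (e1, e2, e3)))
print(np.allclose(reconstruction, b))  # True: b = (b.e1) e1 + (b.e2) e2 + (b.e3) e3
```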

1. What is an orthogonal basis in linear algebra?

An orthogonal basis in linear algebra is a set of vectors that are mutually perpendicular (orthogonal) to each other, meaning that the dot product of any two distinct vectors in the set is equal to 0. An orthogonal basis is also a basis: it spans the entire vector space, so any vector in the space can be written as a linear combination of the basis vectors.

2. How is an orthogonal basis useful in linear algebra?

An orthogonal basis is useful in linear algebra because it simplifies calculations involving vectors and matrices. The coordinates of a vector with respect to an orthogonal basis can be computed directly with inner products, the norm of a vector follows from the Pythagorean theorem applied to those coordinates, and systems of linear equations expressed in an orthogonal basis become easier to solve.

3. What is an orthogonal projection in linear algebra?

An orthogonal projection in linear algebra represents a vector as a linear combination of orthogonal vectors. It is computed by projecting the vector onto each orthogonal basis vector, that is, taking the coefficient ##\frac{\langle x, v\rangle}{\langle v, v\rangle}## for each basis vector v and multiplying it by v. The sum of these projections is the orthogonal projection of the original vector onto the subspace spanned by the orthogonal basis.

4. How is an orthogonal projection useful in linear algebra?

An orthogonal projection is useful in linear algebra because it allows us to find the closest approximation of a vector in a vector space. This is helpful for solving optimization problems and for finding the least squares solutions to systems of linear equations. It also simplifies calculations involving matrices and transformations.
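The least-squares connection mentioned above can be illustrated with a short sketch, assuming NumPy and reusing the thread's plane basis and vector b purely as an example: minimizing ##\|A\vec c-\vec b\|## over ##\vec c##, where the columns of A span P, yields ##A\vec c## equal to the orthogonal projection of b onto P.

```python
import numpy as np

b = np.array([3.0, 1.0, 1.0])
A = np.column_stack([[-1.0, 1.0, 0.0],    # a (not necessarily orthogonal) basis of the plane P
                     [-2.0, 0.0, 1.0]])

# Least-squares solution of A c ~ b; A @ c is the point of P closest to b.
c, *_ = np.linalg.lstsq(A, b, rcond=None)
print(A @ c)  # [ 2.  0. -1.], the orthogonal projection of b onto P
```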

5. Can an orthogonal basis and orthogonal projection be used in any vector space?

Yes, provided the vector space is an inner product space: the space must come with a defined inner product (such as the dot product on ##\mathbb{R}^n##), since the inner product is what determines orthogonality between vectors and makes orthogonal bases and orthogonal projections meaningful.
