Question about Gram-Schmidt orthogonalization

Thread starter: pamparana
Hello everyone,

I have a query regarding Gram-Schmidt orthogonalization:

Say I have 3 linearly independent vectors u, v, w, and I use the orthogonalization scheme to get vectors U, V, W that are orthonormal to each other.

So U, V, W are orthogonal to each other.

Is it also true that V is orthogonal to u (lowercase u), and that W is orthogonal to lowercase u and v?

In my mind, I am quite convinced it is so. However, what is the exact mathematical reason for this? I think that W is going to be orthogonal to the whole plane spanned by u and v, and V is going to be orthogonal to the line spanned by u. I am just a tad unsure why this would be.

I am trying to understand why, when we do QR factorization, we get an upper triangular matrix, and that seems to depend on the above statement being true.

Many thanks,

Luca

Edit: I thought about this a bit more and have the following explanation. Please let me know if it sounds plausible.

Consider the matrix whose rows are U and V; its row space is a 2D subspace of the higher-dimensional space. The vector W lies in the null space of this matrix, and the null space is orthogonal to the row space. Hence W is orthogonal to u and v as well, since they lie in that same row space.

Does this make sense?
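(A minimal sketch of the QR connection raised above, assuming the columns of ##A## are ##u, v, w## in that order and the columns of ##Q## are ##U, V, W##. Since ##Q## has orthonormal columns, ##R = Q^T A##, and its entries are inner products of the new vectors with the old ones:

$$R = \begin{pmatrix} U\cdot u & U\cdot v & U\cdot w \\ V\cdot u & V\cdot v & V\cdot w \\ W\cdot u & W\cdot v & W\cdot w \end{pmatrix}.$$

So ##R## is upper triangular exactly when ##V\cdot u = 0## and ##W\cdot u = W\cdot v = 0##, which is the property being asked about.)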
 
Yes, it is true. When you look at the process you can see two things going on (I'm assuming you keep the vectors in their current order u, v, w):

1) You orthogonalize your current vector by subtracting off its components in the space spanned by your previous (now orthonormal) vectors.
2) You normalize the result.


Ignore step 2. Here's your job: prove that if you have linearly independent vectors ##u_1, u_2, \dots, u_k## and you run Gram-Schmidt to get ##U_1, \dots, U_k##, then the spans of ##u_1, \dots, u_r## and of ##U_1, \dots, U_r## are the same for each ##r## (use induction; the base case is trivial).

Then, since the inner product of ##U_{r+1}## with any element of the space spanned by ##U_1, \dots, U_r## must come out to 0 (it is orthogonal to each of those vectors), ##U_{r+1}## must be orthogonal to ##u_1, \dots, u_r##, as you suspected.

If you're really and truly only interested in the case of three vectors, you don't even have to do induction and can just prove the claim directly, but it's not as satisfying.
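(A quick numerical check of this, as a hedged illustration: the gram_schmidt function and the specific vectors below are my own minimal sketch, not something from the thread.)

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of independent vectors."""
    basis = []
    for v in vectors:
        # Step 1: subtract the components lying in the span of the earlier vectors.
        w = v - sum(np.dot(v, b) * b for b in basis)
        # Step 2: normalize.
        basis.append(w / np.linalg.norm(w))
    return basis

u = np.array([1.0, 1.0, 0.0])
v = np.array([1.0, 0.0, 1.0])
w = np.array([0.0, 1.0, 1.0])

U, V, W = gram_schmidt([u, v, w])

# V is orthogonal to u, and W is orthogonal to both u and v (up to rounding):
print(np.dot(V, u))                 # ~0
print(np.dot(W, u), np.dot(W, v))   # ~0 ~0
```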
 
The Gram-Schmidt process starts with vector u and finds U, which has length 1 and is in the same direction as u. Then, given vector v, it finds v' orthogonal to U, and then V, which has length 1 and is in the same direction as v' (not v). Finally, given vector w, it finds w' orthogonal to both U and V, and then W, which has length 1 and is in the same direction as w' (not w). So U is in the same direction as u, but V and W are not necessarily in the same direction as v and w.

Of course, you could run the Gram-Schmidt process starting with v or w instead of u, which would change the result.
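(To see that order dependence concretely: a small sketch with arbitrary vectors, using NumPy's built-in np.linalg.qr. Note that NumPy computes QR via Householder reflections rather than Gram-Schmidt, so its Q can differ from the Gram-Schmidt output in the signs of its columns, but it illustrates the same point.)

```python
import numpy as np

u = np.array([1.0, 1.0, 0.0])
v = np.array([1.0, 0.0, 1.0])
w = np.array([0.0, 1.0, 1.0])

# QR of [u v w] versus [v u w]: reordering the inputs changes Q.
Q1, R1 = np.linalg.qr(np.column_stack([u, v, w]))
Q2, R2 = np.linalg.qr(np.column_stack([v, u, w]))

print(np.allclose(Q1, Q2))                          # False: the basis depends on the order
print(np.allclose(R1[np.tril_indices(3, -1)], 0.0)) # True: R is upper triangular
```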
 