
How can the orthonormal basis of four vectors be found?

  1. Jan 21, 2009 #1
    1. The problem statement, all variables and given/known data
    How can I find an orthonormal basis for the space spanned by four vectors?

    The vectors are:
    (0, 3, 0, 4), (4, 0, 3, 0), (4, 3, 3, 4) and (4, -3, 3, -4).

    3. The attempt at a solution
    I am not sure whether I should use the Gram-Schmidt process, or instead find
    eigenvalues and eigenvectors and then normalize.

    I would like to use the latter method. However, I am not sure whether the
    process works here.
    Last edited: Jan 21, 2009
  3. Jan 21, 2009 #2
    What's the purpose of Gram-Schmidt again? Perhaps you should look it up on Wikipedia...
  4. Jan 21, 2009 #3
    It is a method for orthogonalizing a set of vectors in an inner product space.
    However, the method seems error-prone because it involves many steps, which makes me unsure whether it is the best choice.

    Is it possible to solve the problem with eigenvectors and so determining the orthonormal basis?
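    For reference, here is what that orthogonalization looks like as a minimal Python/NumPy sketch (the function name and the tolerance guard are illustrative choices, not part of the problem):

    [code]
    import numpy as np

    def gram_schmidt(vectors, tol=1e-12):
        """Orthonormalize vectors, skipping any that depend on earlier ones."""
        basis = []
        for v in vectors:
            w = np.array(v, dtype=float)
            for u in basis:
                w = w - np.dot(w, u) * u   # subtract the projection onto u
            norm = np.linalg.norm(w)
            if norm > tol:                 # keep only genuinely new directions
                basis.append(w / norm)
        return basis

    vs = [(0, 3, 0, 4), (4, 0, 3, 0), (4, 3, 3, 4), (4, -3, 3, -4)]
    for u in gram_schmidt(vs):
        print(u)   # only two orthonormal vectors come out
    [/code]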
  5. Jan 21, 2009 #4


    Science Advisor, Homework Helper

    Put them into a matrix and row reduce it. No eigenvectors involved; it's simpler than that. Once you've found a spanning set, use Gram-Schmidt on it.
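    If it helps, the row reduction is easy to check with SymPy (a sketch; Matrix.rref works in exact arithmetic):

    [code]
    from sympy import Matrix

    # One vector per ROW.
    A = Matrix([[0, 3, 0, 4],
                [4, 0, 3, 0],
                [4, 3, 3, 4],
                [4, -3, 3, -4]])

    rref, pivots = A.rref()  # reduced row echelon form
    print(rref)              # the nonzero rows span the same row space
    print(pivots)            # indices of the pivot columns
    [/code]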
  6. Jan 21, 2009 #5


    Staff: Mentor

    IMO, Gram-Schmidt is much simpler than what you propose with eigenvectors.
  7. Jan 21, 2009 #6
    I have got:

    [tex]\begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}[/tex]

    This seems to mean that I only have to apply the Gram-Schmidt process to two
    vectors:

    Take [tex]v_1 = (1,\ 0,\ 1,\ -1)[/tex]. Then

    [tex]v_2 = w_2 - \frac{w_2 \cdot v_1}{v_1 \cdot v_1}\, v_1 = w_2 - 0 = w_2,[/tex]

    where [tex]w_2 = (0,\ 1,\ 1,\ 1)[/tex].

    Then, normalizing

    u1 = [tex]\frac{(1,\ 0,\ 1,\ -1)}{\sqrt{3}}[/tex]
    u2 = [tex]\frac{(0,\ 1,\ 1,\ 1)}{\sqrt{3}}[/tex]

    I can't believe that this is the answer. There may be some mistakes.
  8. Jan 21, 2009 #7
    Well, there's an easy way to check it. Take the dot product of each presumed orthonormal vector with every other one; all of those dot products should be zero, and each vector should have unit length. If so, you're done. In your case you only have two vectors, so just calculate [tex]\langle u_1, u_2 \rangle[/tex].
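    As a sketch, the check takes a couple of lines in NumPy (using the vectors from post #6; note that they pass the orthonormality test even though, as pointed out below, they span the wrong subspace):

    [code]
    import numpy as np

    u1 = np.array([1, 0, 1, -1]) / np.sqrt(3)
    u2 = np.array([0, 1, 1, 1]) / np.sqrt(3)

    print(np.dot(u1, u2))                   # 0.0 -> orthogonal
    print(np.dot(u1, u1), np.dot(u2, u2))   # both ~1.0 -> unit length
    [/code]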
  9. Jan 21, 2009 #8
    We're done!
    All dot products are zero.

    Thank you!
  10. Jan 21, 2009 #9


    Science Advisor, Homework Helper

    Yeah, fine. But can you find a linear combination of (1,0,1,-1) and (0,1,1,1) that makes (0,3,0,4)? I don't think so. I think you row-reduced it wrong.
  11. Jan 21, 2009 #10
    I cannot find one.
    Hmm... I double-checked the initial values in my calculator, and they are correct. I entered the initial vectors into my calculator with one column per vector. That could be the mistake, although I don't think each vector should be a row instead.
  12. Jan 21, 2009 #11


    User Avatar
    Science Advisor
    Homework Helper

    I can't speak to calculator problems, but I do know this: if (1,0,1,-1) and (0,1,1,1) spanned the space, then writing (0,3,0,4) = a(1,0,1,-1) + b(0,1,1,1) would force a = 0 from the first component and b = 3 from the second. And a = 0, b = 3 doesn't work: it gives (0,3,3,3).
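    One way to test this numerically is a least-squares fit (a sketch; a nonzero residual means the target vector lies outside the span):

    [code]
    import numpy as np

    # Columns are the two candidate basis vectors.
    B = np.array([[1, 0],
                  [0, 1],
                  [1, 1],
                  [-1, 1]], dtype=float)
    target = np.array([0, 3, 0, 4], dtype=float)

    coeffs, residual, rank, _ = np.linalg.lstsq(B, target, rcond=None)
    print(coeffs)    # best-fit a, b
    print(residual)  # nonzero -> (0,3,0,4) is NOT in the span
    [/code]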
  13. Jan 22, 2009 #12
    I get the same row-reduced matrix by hand as well.

    Do you mean that it is not possible to express the basis vectors as linear combinations of the initial vectors?

    What does it mean if we cannot express them that way?
  14. Jan 22, 2009 #13


    Science Advisor, Homework Helper

    I think what you are doing is putting the vectors in as columns, doing row operations, then pulling the vectors out as rows. Don't do that. Put them in as ROWS, do row operations, then pull them out as rows.
  15. Jan 22, 2009 #14
    I did it as you say.

    The answer changes:

    The matrix reduces to:

    [tex]\begin{pmatrix} 1 & 0 & 3/4 & 0 \\ 0 & 1 & 0 & 4/3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}[/tex]

    Let's use the Gram-Schmidt process again. Take [tex]v_1 = (1,\ 0,\ 3/4,\ 0)[/tex]. Then

    [tex]v_2 = w_2 - \frac{w_2 \cdot v_1}{v_1 \cdot v_1}\, v_1 = w_2 - 0 = w_2,[/tex]

    where [tex]w_2 = (0,\ 1,\ 0,\ 4/3)[/tex].

    Then, normalizing:
    u1 = (4/5)(1, 0, 3/4, 0) = (4/5, 0, 3/5, 0)
    u2 = (3/5)(0, 1, 0, 4/3) = (0, 3/5, 0, 4/5)

    This problem raised a few questions.

    Dick previously made an elegant error check. Can I compare these results against the initial vectors to make a similar check? (See the sketch below.)

    Does it matter whether we put the initial vectors into the matrix as rows or as columns?
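    On the first question, a sketch of such a check: since u1 and u2 are supposed to span the same space as the initial vectors, each initial vector must equal its own projection onto span{u1, u2} (NumPy, my own illustrative test):

    [code]
    import numpy as np

    u1 = (4/5) * np.array([1, 0, 3/4, 0])   # = (4/5, 0, 3/5, 0)
    u2 = (3/5) * np.array([0, 1, 0, 4/3])   # = (0, 3/5, 0, 4/5)

    print(np.dot(u1, u2))   # 0.0 -> orthogonal

    # Every initial vector should equal its projection onto span{u1, u2}:
    for v in [(0, 3, 0, 4), (4, 0, 3, 0), (4, 3, 3, 4), (4, -3, 3, -4)]:
        v = np.array(v, dtype=float)
        proj = np.dot(v, u1) * u1 + np.dot(v, u2) * u2
        print(np.allclose(v, proj))   # True for all four
    [/code]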
    Last edited: Jan 22, 2009
  16. Jan 22, 2009 #15


    Science Advisor, Homework Helper

    I think you have a typo: u2 = (3/5)(0, 1, 0, 4/3), right? The purpose of the initial row reduction is to reduce the initial four vectors to two linearly independent vectors that span the same subspace. That gives you a smaller set to apply Gram-Schmidt to. In doing this, always put the vectors in as rows and take them out as rows.
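    As a sketch, the whole recipe fits in a few lines of SymPy, which also ships a GramSchmidt helper:

    [code]
    from sympy import Matrix, GramSchmidt

    A = Matrix([[0, 3, 0, 4],
                [4, 0, 3, 0],
                [4, 3, 3, 4],
                [4, -3, 3, -4]])

    rref, pivots = A.rref()
    # The nonzero rows of the RREF are the minimal spanning set...
    rows = [rref.row(i).T for i in range(len(pivots))]
    # ...which Gram-Schmidt then orthonormalizes, exactly, with rationals.
    for u in GramSchmidt(rows, orthonormal=True):
        print(u.T)   # (4/5, 0, 3/5, 0) and (0, 3/5, 0, 4/5)
    [/code]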
  17. Jan 22, 2009 #16
    Can we always do the row reduction?

    For example, today I had a similar problem with the following matrix:

    [tex]A = \begin{pmatrix} 1 & 2 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 1 \end{pmatrix}[/tex]

    The question asked me to orthogonally diagonalize this symmetric matrix A, i.e. find
    an orthogonal matrix P and calculate [tex]P^{T}AP[/tex].

    I row reduced A to [tex]I_4[/tex]. This made me very unsure.
    I thought that it cannot be that easy: normalize, put the vectors in P, and
    calculate [tex]P^{T}AP[/tex] by putting the eigenvalues in.

    My answer for [tex]P^{T}AP[/tex] was [tex]I_4[/tex].

    I started to think that I should instead have subtracted [tex]\lambda[/tex] along the diagonal
    (i.e. formed [tex]A - \lambda I[/tex]), solved for the eigenvalues, and then solved for the eigenvectors.

    At home, I got different answers from the two methods. Now I am not sure
    which one is correct.

    Please, let me know your opinion.
    Last edited: Jan 22, 2009
  18. Jan 22, 2009 #17


    Science Advisor, Homework Helper

    That's a completely different problem. No, you can't row reduce every matrix and expect to get the same answer to every problem. The row reduction in the previous problem had NOTHING to do with any matrix. It was just to get a minimal spanning set.
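    For that second problem, the eigenvalue route might look like this in NumPy (a sketch; eigh is NumPy's eigensolver for symmetric matrices and returns orthonormal eigenvectors):

    [code]
    import numpy as np

    A = np.array([[1, 2, 0, 0],
                  [2, 1, 0, 0],
                  [0, 0, 1, 2],
                  [0, 0, 2, 1]], dtype=float)

    eigenvalues, P = np.linalg.eigh(A)  # columns of P are orthonormal eigenvectors
    print(eigenvalues)                  # [-1. -1.  3.  3.]
    D = P.T @ A @ P
    print(np.round(D, 10))              # diag(-1, -1, 3, 3), not the identity
    [/code]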
  19. Jan 22, 2009 #18
    So your point is: when we need to find a basis, we want the minimum number of vectors through which all the other vectors in the initial set can be expressed. For instance, the four vectors at the start of this thread were reduced to two.

    Your other point is that row reduction is only for that initial basis-finding step. In problems of finding eigenvalues and eigenvectors, we work with [tex]A - \lambda I[/tex] instead of row reducing the original matrix.

    Thank you! You really have put me on the right track :)
  20. Apr 10, 2009 #19
    Your original set of vectors is not linearly independent: the first and second add up to the third, and the second minus the first gives the fourth. This is why the row-reduced matrix has only two nonzero rows. Gram-Schmidt only works on a linearly independent list. You can use it on the first two vectors (with or without row reducing; you'll get the same result here). If you try to continue with the third and fourth vectors, the projection step leaves the zero vector, and normalizing that would mean dividing by zero.
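    Those two dependencies are easy to confirm (a NumPy sketch):

    [code]
    import numpy as np

    v1, v2, v3, v4 = (np.array(v, dtype=float) for v in
                      [(0, 3, 0, 4), (4, 0, 3, 0), (4, 3, 3, 4), (4, -3, 3, -4)])

    print(np.allclose(v1 + v2, v3))  # True: first + second = third
    print(np.allclose(v2 - v1, v4))  # True: second - first = fourth
    [/code]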