Orthogonal basis

  1. Is it correct to assume that there is no such thing as a non-orthogonal basis? The orthogonal eigenbasis is the "easiest" to work with, but in general, to be a basis a set of vectors has to be linearly independent and span the space, and being "linearly independent" means being orthogonal.
    Is that correct?
  2. jcsd
  3. Hurkyl

    Hurkyl 15,987
    Staff Emeritus
    Science Advisor
    Gold Member

    The word "orthogonal" is meaningless until you define an inner product on your vector space. There's no reason basis vectors should be orthogonal.
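    Hurkyl's point can be made concrete: the same pair of vectors can be orthogonal under one inner product and not under another. A minimal sketch with numpy (the positive-definite matrix A below is an arbitrary example chosen for illustration, not anything canonical):

```python
import numpy as np

# Two vectors in R^2 that are NOT orthogonal under the standard dot product.
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
print(np.dot(u, v))  # 1.0 -- not orthogonal

# A symmetric positive-definite matrix A defines another inner product
# <u, v>_A = u^T A v, and under it the same two vectors ARE orthogonal.
A = np.array([[1.0, -1.0],
              [-1.0, 2.0]])
print(u @ A @ v)  # 0.0 -- orthogonal under this inner product
```

    So "orthogonal" is a statement about a pair (vectors, inner product), not about the vectors alone.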
  4. To expand on what Hurkyl said:

    Take the space [tex]C^2[/tex]. The most commonly used basis for this space consists of the vectors

    [tex]|0\rangle =\left(\begin{array}{cc}1\\0\end{array}\right)[/tex]
    [tex]|1\rangle =\left(\begin{array}{cc}0\\1\end{array}\right)[/tex]

    So we can then get to any point in [tex]C^2[/tex] with the linear combination

    [tex]|anywhere\rangle = \alpha |0\rangle +\beta |1\rangle[/tex].

    Now, as a counterexample to your claim that a basis set has to be orthogonal, let me define the vectors [tex]|g\rangle , |h\rangle[/tex] as:

    [tex]|g\rangle =\left(\begin{array}{cc}1\\1\end{array}\right)[/tex]
    [tex]|h\rangle =\left(\begin{array}{cc}1\\0\end{array}\right)[/tex]

    I can still reach any point in [tex]C^2[/tex] using these vectors, but they are not orthogonal. In essence, you can consider it 'wasting' information to use a non-orthogonal basis set:

    [tex]\left(\begin{array}{cc}x\\y\end{array}\right) = a \left(\begin{array}{cc}1\\1\end{array}\right) + b \left(\begin{array}{cc}1\\0\end{array}\right)[/tex]

    Which gives

    [tex]a = y, \qquad b = x - y[/tex]

    So, as g and h aren't orthogonal, the coefficient b also carries information about the position y, not just x.

    So, basis vectors don't have to be orthogonal, but they are usually chosen to be. This is important when you start working out solutions of linear equations, or doing stuff like quantum mechanics.
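    The counterexample above can be checked in code: a sketch with numpy, where the point (x, y) = (3, 5) is an arbitrary choice for illustration.

```python
import numpy as np

g = np.array([1.0, 1.0])
h = np.array([1.0, 0.0])

# g and h are not orthogonal under the standard inner product...
print(np.dot(g, h))  # 1.0

# ...but they still form a basis: a*g + b*h = (x, y) is solvable for any (x, y).
x, y = 3.0, 5.0
B = np.column_stack([g, h])        # columns are the basis vectors
a, b = np.linalg.solve(B, [x, y])  # a = y, b = x - y
print(a, b)                        # 5.0 -2.0
```

    Note how b = x - y mixes the two coordinates, which is the 'wasted information' the post describes.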
  5. Hurkyl


    And for the sake of being explicit, James did define an inner product on his vector space -- it comes as a "standard" part of the definition of [tex]C^2[/tex].
  6. So, I cannot make the jump from linearly independent to orthogonal, but if vectors are orthogonal, they must be linearly independent. Right?
  7. HallsofIvy

    HallsofIvy 41,268
    Staff Emeritus
    Science Advisor

    Yes, if in an "inner product space" a set of vectors are all orthogonal then it must be linearly independent:

    Suppose [tex]\{v_1, v_2, \ldots, v_n\}[/tex] are orthogonal vectors and [tex]C_1v_1 + C_2v_2 + \cdots + C_nv_n = 0[/tex]. For each i between 1 and n, take the inner product of each side with [tex]v_i[/tex]. You obviously get 0 on the right -- what do you get on the left?

    Your original statement "there is no such thing as a non-orthogonal basis" is "sort of" right -- because "orthogonal" depends on your choice of inner product, and given any basis there exists an inner product such that the basis is orthogonal with respect to it. You get one like this: given the basis [tex]\{v_1, v_2, \ldots, v_n\}[/tex], define the inner product [tex]\langle u, v \rangle[/tex] of vectors u and v by writing u and v in terms of the basis,
    [tex]u = A_1v_1 + A_2v_2 + \cdots + A_nv_n, \qquad v = B_1v_1 + B_2v_2 + \cdots + B_nv_n,[/tex]
    and defining
    [tex]\langle u, v \rangle = A_1B_1 + A_2B_2 + \cdots + A_nB_n.[/tex]
    With that inner product, the basis is orthonormal.
    Last edited: Apr 20, 2005
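    HallsofIvy's construction can be sketched numerically: express u and v in the given basis and dot the coefficient vectors. The non-orthogonal basis below is an arbitrary example.

```python
import numpy as np

# An arbitrary (non-orthogonal) basis of R^2, stored as the columns of B.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, 0.0])
B = np.column_stack([v1, v2])

def inner(u, v):
    """<u, v> = dot product of the coordinate vectors of u and v in the basis."""
    a = np.linalg.solve(B, u)  # coordinates of u in the basis
    b = np.linalg.solve(B, v)  # coordinates of v in the basis
    return float(np.dot(a, b))

# Under this inner product, the chosen basis comes out orthonormal:
print(inner(v1, v1), inner(v1, v2), inner(v2, v2))  # 1.0 0.0 1.0
```

    The coordinates of v1 and v2 in their own basis are (1, 0) and (0, 1), so the construction makes them orthonormal by definition.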
  8. Hurkyl


    See if you can prove it. Suppose you're given a linear combination of orthogonal vectors that sums to zero -- can you prove the coefficients must be zero?
  9. dextercioby

    dextercioby 12,328
    Science Advisor
    Homework Helper

    Exclude the null vector and assume all vectors have finite norm.

  10. mathwonk

    mathwonk 9,957
    Science Advisor
    Homework Helper

    dextercioby is giving a counterexample to halls' statement that a set of orthogonal vectors is independent, since the zero vector could be in the set.

    so neither property, orthogonal or independent, implies the other.

    but for non zero vectors, orthogonal does imply independent.

    i.e. two non zero vectors are independent if they are not parallel. but just saying they are not parallel does not mean they are perpendicular.

    on the other hand for non zero vectors, being perpendicular does imply they are not parallel.
    Last edited: Apr 20, 2005
  11. That was the exercise I was working on when I thought of my original question.
    Basically I wrote out the linear independence equation:
    Let B = {P1, P2, P3} be an orthogonal basis for R^3 whose vectors are all non-zero, which means that
    P1 · P2 = 0
    P2 · P3 = 0
    P1 · P3 = 0.
    Then the equation looks like this:
    aP1 + bP2 + cP3 = 0.
    Next, I took the dot product of P1 with both sides of the equation:
    a(P1 · P1) + b(P1 · P2) + c(P1 · P3) = P1 · 0.
    So it turns out that
    a|P1|^2 = 0, and so a = 0, since P1 is non-zero,
    and likewise for the other coefficients.
    I see mathwonk's point, but if the basis is an eigenbasis then the zero vector is excluded, since eigenvectors are non-zero by definition.
    Thanks to all of you! I have a better idea now.
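    The argument in the post above can be checked numerically. A sketch with numpy; the orthogonal triple below is an arbitrary example (it need not be normalized, only non-zero):

```python
import numpy as np

# Three mutually orthogonal, non-zero vectors in R^3.
P1 = np.array([1.0, 1.0, 0.0])
P2 = np.array([1.0, -1.0, 0.0])
P3 = np.array([0.0, 0.0, 2.0])

# Pairwise orthogonality, as in the proof.
assert np.dot(P1, P2) == 0 and np.dot(P2, P3) == 0 and np.dot(P1, P3) == 0

# Independence: a*P1 + b*P2 + c*P3 = 0 forces a = b = c = 0,
# i.e. the matrix with these vectors as columns has full rank.
M = np.column_stack([P1, P2, P3])
print(np.linalg.matrix_rank(M))  # 3
```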
  12. mathwonk


    Although you are problem solving at the moment, and hence more flexibility is allowed, when writing it up you should perhaps say "orthogonal eigenbasis" instead of "orthogonal basis".

    I notice that physicists are fond of introducing hypotheses after the fact to get themselves out of trouble, but mathematicians require them to be stated "up front".

    (I could never solve physics problems partly for this reason, and was somewhat miffed as a college student to note that the "solution" seemed always to include an additional hypothesis that the solver stated was "obviously true" but which he had not mentioned in stating the problem.

    This also occurred when trying to read relativity books later on. The writer would state that he was going to deduce some property from some other, different one. I was unable to do so, then read the solution, which began as follows: "since we know space is homogeneous"... (which had not been assumed at all). And I seem to recall I was reading the best physicists, such as Pauli, Einstein, Planck...)

    Indeed, as bombadillo mentioned in his analysis of mathematicians' thinking, I allow myself this license when brainstorming, but not when writing up proofs. Since a proof is an attempt to communicate with others, it should leave no essential point in doubt.
    Last edited: Apr 21, 2005
  13. You're right, I sort of re-defined the problem. But the problem does not say that the basis is an eigenbasis; all it says is that the set is orthogonal.
    So in an orthogonal set, zero may be included? It's something you mentioned earlier. Doesn't that make it linearly dependent?
  14. Mathwonk: I have all sorts of fun along the lines you're mentioning. I am a Physicist (final year of my degree in the UK) and as such make assumptions along the lines you mention. However, I'm always careful to prove any such assumptions to myself: although I take them as true, I find it aids understanding on a deeper level than the problem at hand if you fully understand the framework supporting it.

    With that in mind, I'm finding it quite interesting taking a 4th year module in Quantum Computing and Quantum Information Theory that is taught by the Mathematics department (we can take this 4th year Maths module in our 3rd year of Physics), as everything is defined very formally. This is different to Physics, where there is a certain element of what seems to be hand waving but is actually saving time by telling you certain things are true. If you want to go and prove these things then that's fine!

    A case in point is a post on this sub-forum about orthogonal basis sets. I gave a counterexample to someone's claim that a Physicist would quite happily accept, but Hurkyl added that (in this instance) a certain fundamental property of what I was talking about (i.e. the space [itex]C^2[/itex] having an inner product) was needed to 'formalise' things.

    Bloody mathematicians :)

    Edit: Please ignore certain grammatical inconsistencies in the post above, but I'm slightly less than sober right now...
  15. mathwonk


    {(0,0), (1,0)} is an example of a dependent, but mutually orthogonal set.
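    mathwonk's counterexample is easy to verify directly: the pair is mutually orthogonal, yet the set is linearly dependent because it contains the zero vector.

```python
import numpy as np

z = np.array([0.0, 0.0])  # the zero vector
e = np.array([1.0, 0.0])

print(np.dot(z, e))              # 0.0 -- mutually orthogonal
M = np.column_stack([z, e])
print(np.linalg.matrix_rank(M))  # 1 -- rank < 2, so the set is dependent
```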
  16. Ok, I am not trying to be annoying, but why does Penney (the author of my textbook) define an orthogonal set as {P1, ..., Pn} with Pi != 0?
  17. It's an odd restriction that I've never seen before. If the set were orthonormal, then you'd need the condition that Pi != 0.
  18. HallsofIvy


    In order to make my post right, of course! :biggrin:
  19. matt grime

    matt grime 9,395
    Science Advisor
    Homework Helper

    No you wouldn't, that would be automatically true. But that may be semantics.
  20. Orto

    and... what about quaternions, octonions, sedenions... is the 1 in quaternions orthogonal to i, j, k???

    Are i, j, k orthonormal to 1???

    The same for octonions, sedenions... n-ions?
  21. matt grime


    Do you think they possess an inner product in which it makes sense to talk of angles?
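    One common (though not the only) answer to matt grime's question: identify the quaternions with [itex]R^4[/itex] via 1 ↔ (1,0,0,0), i ↔ (0,1,0,0), and so on, and use the standard inner product there. Under that convention the four units do come out orthonormal:

```python
import numpy as np

# Identify 1, i, j, k with the standard basis of R^4 (a common convention).
one = np.array([1.0, 0.0, 0.0, 0.0])
i   = np.array([0.0, 1.0, 0.0, 0.0])
j   = np.array([0.0, 0.0, 1.0, 0.0])
k   = np.array([0.0, 0.0, 0.0, 1.0])

# Gram matrix of pairwise inner products; the identity means orthonormal.
basis = [one, i, j, k]
G = np.array([[np.dot(a, b) for b in basis] for a in basis])
print(G)
```

    As Hurkyl noted earlier in the thread, this only makes sense once that inner product has been chosen; the quaternion multiplication itself does not supply one.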