Is it correct to assume that there is no such thing as a non-orthogonal basis? The orthogonal eigenbasis is the "easiest" to work with, but generally, to be a basis a set of vectors has to be linearly independent and span the space, and I thought being "lin. indep." means orthogonal. Is that correct? Thanks.
The word "orthogonal" is meaningless until you define an inner product on your vector space. There's no reason basis vectors should be orthogonal.
To expand on what Hyrkyl said: take the space [tex]C^2[/tex]. The most common basis for this space is the pair of vectors [tex]|0\rangle =\left(\begin{array}{c}1\\0\end{array}\right)[/tex] [tex]|1\rangle =\left(\begin{array}{c}0\\1\end{array}\right)[/tex] so we can reach any point in [tex]C^2[/tex] with the linear combination [tex]|anywhere\rangle = \alpha |0\rangle +\beta |1\rangle[/tex].

Now, as a counterexample to your claim that a basis set has to be orthogonal, let me define the vectors [tex]|g\rangle , |h\rangle[/tex] as: [tex]|g\rangle =\left(\begin{array}{c}1\\1\end{array}\right)[/tex] [tex]|h\rangle =\left(\begin{array}{c}1\\0\end{array}\right)[/tex] I can still reach any point of [tex]C^2[/tex] using these vectors, but they are not orthogonal.

In essence, you can think of a non-orthogonal basis set as 'wasting' information: [tex]\left(\begin{array}{c}x\\y\end{array}\right) = a \left(\begin{array}{c}1\\1\end{array}\right) + b \left(\begin{array}{c}1\\0\end{array}\right)[/tex] which gives [tex]x=a+b[/tex] and [tex]y=a[/tex]. Since g and h aren't orthogonal, the expression for x also contains information about the position y.

So basis vectors don't have to be orthogonal, but they are usually chosen to be. This becomes important when you start working out solutions of linear equations, or doing things like quantum mechanics.
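Here is a quick sketch of that counterexample in plain Python (the helper names are mine, and I use the real plane rather than [tex]C^2[/tex] to keep the arithmetic simple): the non-orthogonal pair g = (1, 1), h = (1, 0) still reaches every point.

```python
# Checking the non-orthogonal basis g = (1, 1), h = (1, 0) of the plane.
# Helper names (dot, coords) are illustrative, not from any library.

def dot(u, v):
    """Standard dot product of two same-length tuples."""
    return sum(a * b for a, b in zip(u, v))

g, h = (1, 1), (1, 0)

# Not orthogonal: <g, h> = 1*1 + 1*0 = 1, not 0.
print(dot(g, h))  # 1

def coords(x, y):
    """Coefficients (a, b) with (x, y) = a*g + b*h.
    From x = a + b and y = a we get a = y, b = x - y."""
    return y, x - y

a, b = coords(3, 5)
recombined = (a * g[0] + b * h[0], a * g[1] + b * h[1])
print(recombined)  # (3, 5) -- the pair spans the plane despite not being orthogonal
```

Note how the coefficient b = x - y mixes information about both coordinates, which is the 'wasting information' point above.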
And for the sake of being explicit, James did define an inner product on his vector space -- it comes as a "standard" part of the definition of C^{2}.
So, I cannot jump from linear independence to orthogonality, but if vectors are orthogonal, they must be linearly independent. Right?
Yes, in an inner product space, a set of vectors that are all orthogonal must be linearly independent. Suppose {v_{1}, v_{2}, . . . , v_{n}} are orthogonal vectors and C_{1}v_{1}+ C_{2}v_{2}+ . . .+ C_{n}v_{n}= 0. For each i between 1 and n, take the inner product of each side with v_{i}. You obviously get 0 on the right; what do you get on the left?

Your original statement "there is no such thing as a non-orthogonal basis" is "sort of" right, because "orthogonal" depends on your choice of inner product. Given any basis, there exists an inner product such that the basis is orthogonal with respect to that inner product. You get it like this: given a basis {v_{1}, v_{2}, . . . , v_{n}}, define the inner product <u, v> of vectors u and v as follows. Write u and v in terms of the basis: u= A_{1} v_{1}+ A_{2}v_{2}+ . . .+ A_{n}v_{n}, v= B_{1} v_{1}+ B_{2}v_{2}+ . . .+ B_{n}v_{n}, and define <u, v>= A_{1} B_{1}+ A_{2}B_{2}+ . . .+ A_{n}B_{n}. With that inner product, the basis is orthonormal.
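A small sketch of that construction in plain Python, assuming a 2-dimensional real space for concreteness (the helper names and the 2x2 Cramer's-rule solver are mine): the "new" inner product is just the dot product of coordinate vectors, and under it the non-orthogonal basis from earlier in the thread becomes orthonormal.

```python
# Given a basis, define <u, v> as the dot product of the coordinate
# vectors of u and v in that basis; the basis is then orthonormal.
# solve2/inner are illustrative helpers, not from any library.

def solve2(v1, v2, u):
    """Coordinates (A1, A2) with u = A1*v1 + A2*v2, by 2x2 Cramer's rule."""
    det = v1[0] * v2[1] - v2[0] * v1[1]
    A1 = (u[0] * v2[1] - v2[0] * u[1]) / det
    A2 = (v1[0] * u[1] - u[0] * v1[1]) / det
    return A1, A2

def inner(u, v, basis):
    """The inner product defined by the given basis."""
    A = solve2(*basis, u)
    B = solve2(*basis, v)
    return sum(a * b for a, b in zip(A, B))

basis = [(1, 1), (1, 0)]  # not orthogonal under the usual dot product

print(inner(basis[0], basis[1], basis) == 0)  # True: orthogonal in the new product
print(inner(basis[0], basis[0], basis) == 1)  # True: unit length in the new product
```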
See if you can prove it. Suppose you're given a linear combination of orthogonal vectors that sums to zero -- can you prove the coefficients must be zero?
dextercioby is giving a counterexample to halls' statement that a set of orthogonal vectors is independent, since the zero vector could be in the set. so neither property, orthogonal or independent, implies the other. but for non-zero vectors, orthogonal does imply independent. i.e. two non-zero vectors are independent if they are not parallel. but just saying they are not parallel does not mean they are perpendicular. on the other hand, for non-zero vectors, being perpendicular does imply they are not parallel.
That was the exercise I was working on when I thought of my original question. Basically, I wrote the linear independence equation. Let B = {P1, P2, P3} be an orthogonal basis for R^3 with all vectors non-zero, which means that

P1 (dot) P2 = 0
P2 (dot) P3 = 0
P1 (dot) P3 = 0.

Then the equation looks like this: aP1 + bP2 + cP3 = 0. Next, I took the dot product of both sides of the equation with P1:

aP1 (dot) P1 + bP1 (dot) P2 + cP1 (dot) P3 = P1 (dot) 0

So it turns out that a|P1|^2 = 0, and therefore a = 0, since P1 is non-zero; and similarly for the other coefficients.

I see mathwonk's point, but if the basis is an eigenbasis then the zero vector is excluded. Thanks to all of you! I have a better idea now.
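The key step of that argument can be checked numerically in plain Python (the example vectors P1, P2, P3 are my own choice of an orthogonal, non-zero triple in R^3): dotting the combination aP1 + bP2 + cP3 with P1 kills every term except a|P1|^2.

```python
# For orthogonal non-zero P1, P2, P3, dotting a*P1 + b*P2 + c*P3 with P1
# leaves only a * |P1|^2, which is the step that forces a = 0 when the sum is 0.

def dot(u, v):
    """Standard dot product of two same-length tuples."""
    return sum(x * y for x, y in zip(u, v))

# A pairwise-orthogonal, non-zero triple (chosen for illustration).
P1, P2, P3 = (1, 1, 0), (1, -1, 0), (0, 0, 2)
assert dot(P1, P2) == dot(P2, P3) == dot(P1, P3) == 0

a, b, c = 4, -7, 2
combo = tuple(a * x + b * y + c * z for x, y, z in zip(P1, P2, P3))

# The cross terms vanish, so <combo, P1> equals a * |P1|^2.
print(dot(combo, P1), a * dot(P1, P1))  # both 8
```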
Although you are problem solving at the moment, hence more flexibility is allowed, when writing it up, instead of "orthogonal basis" you should perhaps say "orthogonal eigenbasis". I notice that physicists are fond of introducing hypotheses after the fact to get themselves out of trouble, but mathematicians require them to be stated "up front".

(i could never solve physics problems partly for this reason, and was somewhat miffed as a college student to note that the "solution" seemed always to include an additional hypothesis that the solver stated was "obviously true" but which he had not mentioned in stating the problem. This also occurred in trying to read relativity books later on. The writer would state that he was going to deduce some property from some other, different one. I was unable to do so, then read the solution, which began as follows: "since we know space is homogeneous..." (which had not been assumed at all). And I seem to recall I was reading the best physicists, such as Pauli, Einstein, Planck...)

Indeed, as bombadillo mentioned in his analysis of mathematicians' thinking, I allow myself this license when brainstorming, but not when writing up proofs. since a proof is an attempt to communicate with others, it should leave no essential point in doubt.
You're right, I sort of re-defined the problem. But the problem does not say that the basis is an eigenbasis; all it says is that the set is orthogonal. So in an orthogonal set, zero may be included? It's something you mentioned earlier. Doesn't that make it linearly dependent?
Mathwonk: I have all sorts of fun along the lines you're mentioning. I am a physicist (final year of my degree in the UK) and as such make assumptions along the lines you mention. However, I'm always careful to prove any such assumptions to myself, as although I take them as true, I find it helps understanding on a deeper level than the problem at hand if you fully understand the framework supporting it.

With that in mind, I'm finding it quite interesting taking a 4th year module in Quantum Computing and Quantum Information Theory that is taught by the Mathematics department (we can take this 4th year Maths module in our 3rd year of Physics), as everything is defined very formally. This is different to Physics, where there is a certain element of what seems to be hand waving but is actually saving time by telling you certain things are true. If you want to go and prove these things then that's fine!

A case in point is a post on this sub-forum on orthogonal basis sets. I gave a counterexample to someone's claim that a physicist would quite happily take, but Hyrkyl added (in this instance) a certain fundamental property of what I was talking about (i.e. the space [itex]C^2[/itex] having an inner product) to 'formalise' things. Bloody mathematicians :)

Edit: Please ignore certain grammatical inconsistencies in the post above; I'm slightly less than sober right now...
Ok, I am not trying to be annoying, but why does Penney (the author of my textbook) define an orthogonal set as {P1, ..., Pn} with Pi != 0?
It's an odd restriction that I've never seen before. If the set were orthonormal, the condition Pi != 0 would be automatic, since unit vectors are non-zero; for a merely orthogonal set, excluding the zero vector is presumably there so that orthogonality implies linear independence.
Orthogonal and... what about quaternions, octonions, sedenions? Is the 1 in the quaternions orthogonal to i, j, k? Are i, j, k orthonormal to 1? And likewise in the octonions, sedenions, ... n-ions?