# Orthogonal Basis: Correct to Assume No Non-Orthogonal?

In summary, dextercioby's counterexample shows that a set of orthogonal vectors is not always linearly independent.
EvLer
Is it correct to assume that there is no such thing as a non-orthogonal basis? An orthogonal eigenbasis is the "easiest" to work with, but generally, to be a basis, a set of vectors has to be linearly independent and span the space, and being "linearly independent" means orthogonal.
Is that correct?
Thanks.

The word "orthogonal" is meaningless until you define an inner product on your vector space. There's no reason basis vectors should be orthogonal.

To expand on what Hurkyl said:

Take the space $$C^2$$. The most common basis for this space consists of the vectors

$$|0\rangle =\left(\begin{array}{c}1\\0\end{array}\right)$$
$$|1\rangle =\left(\begin{array}{c}0\\1\end{array}\right)$$

So we can then get to any point in $$C^2$$ with the linear combination

$$|anywhere\rangle = \alpha |0\rangle +\beta |1\rangle$$.

Now, as a counterexample to your claim that a basis set has to be orthogonal, let me define the vectors $$|g\rangle , |h\rangle$$ as:

$$|g\rangle =\left(\begin{array}{c}1\\1\end{array}\right)$$
$$|h\rangle =\left(\begin{array}{c}1\\0\end{array}\right)$$

I can still reach any point in $$C^2$$ using these vectors, but they are not orthogonal. In essence, you can consider it 'wasting' information to use a non-orthogonal basis set:

$$\left(\begin{array}{c}x\\y\end{array}\right) = a \left(\begin{array}{c}1\\1\end{array}\right) + b \left(\begin{array}{c}1\\0\end{array}\right)$$

Which gives

$$x=a+b$$
$$y=a$$

So, as g and h aren't orthogonal, the expression for x also contains information about the position y.
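The coordinate relations above can be checked numerically. Here is a minimal sketch (assuming NumPy is available; the target point (3, 2) is my own example) that solves for the coefficients a and b in the non-orthogonal basis {g, h}:

```python
import numpy as np

# The non-orthogonal basis from the post: g = (1, 1), h = (1, 0)
g = np.array([1.0, 1.0])
h = np.array([1.0, 0.0])
print(np.dot(g, h))  # 1.0 -- not orthogonal

# They still span the plane: solve a*g + b*h = (x, y) for the
# hypothetical example point (x, y) = (3, 2)
B = np.column_stack([g, h])      # basis vectors as columns
a, b = np.linalg.solve(B, [3.0, 2.0])
print(a, b)                      # a = y = 2.0, b = x - y = 1.0
```

This matches the relations x = a + b and y = a derived above.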

So, basis vectors don't have to be orthogonal, but they are usually chosen to be. This is important when you start working out solutions of linear equations, or doing things like quantum mechanics.

And for the sake of being explicit, James did define an inner product on his vector space -- it comes as a "standard" part of the definition of $$C^2$$.

So, I cannot make a jump from linearly indep. to orthogonal, but if vectors are orthogonal, they must be lin. indep. Right?

Yes: in an "inner product space", if a set of vectors are all orthogonal then they must be linearly independent:

Suppose {v1, v2, ..., vn} are orthogonal vectors and C1v1 + C2v2 + ... + Cnvn = 0. For each i between 1 and n, take the inner product of each side with vi. You obviously get 0 on the right; what do you get on the left?

Your original statement "there is no such thing as a non-orthogonal basis" is "sort of" right, because "orthogonal" depends on your choice of inner product. Given any basis there exists an inner product such that the basis is orthogonal with respect to that inner product. You get it like this: given a basis {v1, v2, ..., vn}, define the inner product <u, v> of vectors u and v by writing u and v in terms of the basis:
u = A1v1 + A2v2 + ... + Anvn, v = B1v1 + B2v2 + ... + Bnvn, and define
<u, v> = A1B1 + A2B2 + ... + AnBn. With that inner product, the basis is orthonormal.
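This construction can be sketched numerically. A minimal illustration, assuming NumPy, reusing the non-orthogonal pair (1, 1), (1, 0) from earlier in the thread: define the inner product through coordinates in that basis, and the basis comes out orthonormal.

```python
import numpy as np

# A basis of R^2, deliberately non-orthogonal in the usual dot product
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, 0.0])
V = np.column_stack([v1, v2])

def inner(u, w):
    # Write u and w in terms of the basis: u = A1*v1 + A2*v2, etc.
    A = np.linalg.solve(V, u)
    B = np.linalg.solve(V, w)
    # and define <u, w> = A1*B1 + A2*B2
    return float(np.dot(A, B))

print(inner(v1, v2))  # 0.0 -- orthogonal under the new inner product
print(inner(v1, v1))  # 1.0 -- unit length too, so the basis is orthonormal
```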

So, I cannot make a jump from linearly indep. to orthogonal, but if vectors are orthogonal, they must be lin. indep. Right?

See if you can prove it. Suppose you're given a linear combination of orthogonal vectors that sums to zero -- can you prove the coefficients must be zero?

Exclude the null vector and assume all vectors have finite norm.

Daniel.

dextercioby is giving a counterexample to halls' statement that a set of orthogonal vectors is independent, since the zero vector could be in the set.

so neither property, orthogonal or independent, implies the other.

but for non zero vectors, orthogonal does imply independent.

i.e. two non zero vectors are independent if they are not parallel. but just saying they are not parallel does not mean they are perpendicular.

on the other hand for non zero vectors, being perpendicular does imply they are not parallel.

Hurkyl said:
See if you can prove it. Suppose you're given a linear combination of orthogonal vectors that sums to zero -- can you prove the coefficients must be zero?
That was the exercise I was working on when I thought of my original question.
Basically I wrote the linear independence equation:
Let B = {P1, P2, P3} be an orthogonal basis for R^3 with all vectors non-zero, which means that
P1 (dot) P2 = 0
P2 (dot) P3 = 0
P1 (dot) P3 = 0.
Then the equation looks like this:
aP1 + bP2 + cP3 = 0
Next, I took the dot product of P1 with both sides of the equation:
aP1 (dot) P1 + bP1 (dot) P2 + ... = P1 (dot) 0
So it turns out that
a|P1|^2 = 0, and so a = 0, since P1 is non-zero,
and so on for the other cases.
I see mathwonk's point, but if the basis is an eigenbasis then the zero vector is excluded.
Thanks to all of you! I have a better idea now.
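The argument in the post above can be checked numerically. A small sketch, assuming NumPy, with a hypothetical orthogonal set in R^3: the Gram matrix of pairwise dot products is diagonal, so dotting the combination with each Pi isolates one coefficient, and a non-zero determinant of the Gram matrix is exactly linear independence.

```python
import numpy as np

# A hypothetical orthogonal set of non-zero vectors in R^3
P1 = np.array([1.0, 1.0, 0.0])
P2 = np.array([1.0, -1.0, 0.0])
P3 = np.array([0.0, 0.0, 2.0])
P = np.column_stack([P1, P2, P3])

# Gram matrix of all pairwise dot products: diagonal entries are |Pi|^2,
# off-diagonal entries are 0 because the set is orthogonal
G = P.T @ P
print(G)

# Dotting aP1 + bP2 + cP3 = 0 with Pi leaves (coefficient) * |Pi|^2 = 0;
# equivalently det(G) = |P1|^2 |P2|^2 |P3|^2 is non-zero, so the set
# is linearly independent
print(np.linalg.det(G))
```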

Although you are problem solving at the moment, so more flexibility is allowed, when writing it up you should perhaps say "orthogonal eigenbasis" instead of "orthogonal basis".

I notice that physicists are fond of introducing hypotheses after the fact to get themselves out of trouble, but mathematicians require them to be stated "up front".

(i could never solve physics problems partly for this reason, and was somewhat miffed as a college student, to note that the "solution" seemed always to include an additional hypothesis that the solver stated was 'obviously true" but which he had not mentioned in stating the problem.

This also occurred in trying to read relativity books later on. The writer would state that he was going to deduce some property from some other different one. I was unable to do so, then read the solution which began as follows: " since we know space is homogeneous" ... (which had not been assumed at all). And I seem to recall I was reading the best physicists, such as Pauli, Einstein, Planck...)

Indeed, as bombadillo mentioned in his analysis of mathematicians' thinking, I allow myself this license when brainstorming, but not when writing up proofs, since a proof is an attempt to communicate with others and should leave no essential point in doubt.

You're right, I sort of re-defined the problem. But the problem does not say that the basis is an eigenbasis; all it says is that the set is orthogonal.
So an orthogonal set may include the zero vector? That's something you mentioned earlier. Doesn't that make it linearly dependent?

Mathwonk: I have all sorts of fun along the lines you're mentioning. I am a physicist (in the final year of my degree in the UK) and as such make assumptions along the lines you mention. However, I'm always careful to prove any such assumptions to myself: although I take them as true, I find it helps understanding on a deeper level than the problem at hand if you fully understand the framework supporting it.

With that in mind, I'm finding it quite interesting taking a 4th-year module in Quantum Computing and Quantum Information Theory that is taught by the Mathematics department (we can take this 4th-year Maths module in our 3rd year of Physics), as everything is defined very formally. This is different to Physics, where there is a certain element of what seems to be hand-waving but is actually saving time by telling you certain things are true. If you want to go and prove these things, then that's fine!

A case in point is a post on this sub-forum on orthogonal basis sets. I gave a counterexample to someone's claim that a physicist would quite happily take, but Hurkyl added (in this instance) a certain fundamental property of what I was talking about (i.e. the space $C^2$ having an inner product) to 'formalise' things.

Bloody mathematicians :)

Edit: Please ignore certain grammatical inconsistencies in the post above, but I'm slightly less than sober right now...

{(0,0), (1,0)} is an example of a dependent, but mutually orthogonal set.

mathwonk said:
{(0,0), (1,0)} is an example of a dependent, but mutually orthogonal set.
Ok, I am not trying to be annoying, but why does Penney (author of my textbook) define orthogonal set as {P1, ..., Pn}, Pi != 0 ?

EvLer said:
Ok, I am not trying to be annoying, but why does Penney (author of my textbook) define orthogonal set as {P1, ..., Pn}, Pi != 0 ?

It's an odd restriction that I've never seen before. If the set was orthonormal, then you'd need the condition that Pi != 0.

EvLer said:
Ok, I am not trying to be annoying, but why does Penney (author of my textbook) define orthogonal set as {P1, ..., Pn}, Pi != 0 ?

In order to make my post right, of course!

Lonewolf said:
It's an odd restriction that I've never seen before. If the set was orthonormal, then you'd need the condition that Pi != 0.

No you wouldn't, that would be automatically true. But that may be semantics.

Orto

and... what about quaternions, octonions, sedenions... is the 1 in the quaternions orthogonal to i, j, k?

Are i, j, k orthonormal to 1?

idem in octonions, sedenions... n-ions?

Do you think they possess an inner product in which it makes sense to talk of angles?

quaternions

matt grime said:
Do you think they possesses an inner product in which it makes sense to talk of angles?

If you talk about quaternions, Hamilton developed them to emulate the relationship between complex numbers and the plane. He found a model that gives that relationship for quaternions in 3D, but there's something strange to me: the scalar part.

i did not understand that. what does "Pi != 0" mean?

(dave penney is a friend of mine by the way.)

but even if he meant to say pi.pj = 0, that is still quite true, but it does not imply they are independent.

The quaternions are a 4-d real vector space, so one can define an inner product on them in which 1,i,j,k are orthonormal. And if you read all the posts in this thread you'd understand why, raparacio, and also that this has no necessary bearing on "angles you can visualize", it is a formality.

And != is computer programming speak for not equal to.
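Matt's point about the quaternions can be made concrete. A sketch, assuming NumPy and using the usual identification of a quaternion a + bi + cj + dk with the vector (a, b, c, d) in R^4, checks that 1, i, j, k are orthonormal under the standard inner product there:

```python
import numpy as np

# Identify a + b*i + c*j + d*k with (a, b, c, d) in R^4
one = np.array([1.0, 0.0, 0.0, 0.0])
i   = np.array([0.0, 1.0, 0.0, 0.0])
j   = np.array([0.0, 0.0, 1.0, 0.0])
k   = np.array([0.0, 0.0, 0.0, 1.0])

basis = [one, i, j, k]
# All pairwise dot products: 1 on the diagonal, 0 off it -> orthonormal
gram = np.array([[np.dot(u, v) for v in basis] for u in basis])
print(np.array_equal(gram, np.eye(4)))  # True
```

As matt says, this is a formality of the vector-space structure; it says nothing about the quaternions' multiplication.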

I guess you could say !=~...

Thanks

matt grime said:
The quaternions are a 4-d real vector space, so one can define an inner product on them in which 1,i,j,k are orthonormal. And if you read all the posts in this thread you'd understand why, raparacio, and also that this has no necessary bearing on "angles you can visualize", it is a formality.

And != is computer programming speak for not equal to.

Thanks.

thanks matt, i thought my browser was mistranslating some symbols. it sure helps to know what the symbols mean. was that a book by dave penney, by the way? If so I could just ask him why he did it! (But I like halls' answer.)

mathwonk said:
thanks matt, i thought my browser was mistranslating some symbols. it sure helps to know what the symbols mean. was that a book by dave penney, by the way? If so I could just ask him why he did it! (But I like halls' answer.)

Oh, sorry, I did not know how to shorthand "not equal to", so I used the only notation I am familiar with. The author's full name is Richard C. Penney (he is a faculty member here).
Why don't mathematicians agree on what they teach! I think he was trying to simplify the problem for us by excluding the zero vector, but that is not helping, since I and other students take everything in the textbook as absolute truth, when it is not, as I discovered through this thread.

well the nice feature of mathematics is that the authors DO usually tell you what the words mean before they use them, whereas other people do not. You for example used != without telling me what it meant.

so beware that not all mathematicians use the same words in the same sense, and remember to look up in each book, what the words mean there.

of course mathematicians are sometimes guilty also of not defining their terms. One story that affected me was this: In a research paper I wrote I referred to a certain property as due to another mathematician and cited his paper. Then I gave it a name, but not the name he had given it. He in fact had given it no name. I did not make this clear. I said "the concept of such and such defined so and so, is due to whoever". I meant the definition itself, not the name, was due to whoever.

Apparently many people read my paper, and not his, to learn this concept. As a result the name I chose became the standard one in use for it. These people, who had learned the concept from my paper instead of his, did not realize that the name I was using was not his; they called it by the name I had given it, yet referred to his paper for it.

Therefore anyone reading those people's works, and trying to find out what the name meant, looked in whoever's paper instead of mine for something by that name, but could not find it, because the name did not occur there. Thus the people using my chosen name should have explained exactly what it meant when they reused it, rather than referring people to a source where it could not be found. And they should have checked the source they gave to see if what they were citing could actually be found there.

Your author Richard Penney, should probably have said, "we are going to give a non standard definition of the term 'orthogonal', so beware that other authors do not include the property that all vectors are !=0" (where the symbol "!=0" means not equal to zero.)

Of course since he resides at your school, to find out the answer to the mystery, you only have to ask him yourself. try it. it is often worthwhile to at least speak to the professors.

This thread is already kind of long, but for those interested: I took mathwonk's advice and contacted the professor (he actually is not the one teaching the class, though). No, there is no reason for excluding the zero vector in the definition of an orthogonal set other than to make it look linearly independent for certain exercises; he also mentioned he wished he had done it the other way. I did not realize that mathematics is a science where you can condition your own definitions, to some extent.
Thanks all for the input, even though some questions that were brought up here were over my head (-nions)

Definitions, to some extent, are very fluid, and they often reflect the author's predisposition, but once stated they are fixed for that instance. There are several interpretations of even such a simple thing as RING. Strictly speaking a ring is not necessarily commutative or with 1; however, almost all books will assume at least one of those is true, hopefully by expressly saying so at the beginning. So someone opening the book halfway through and seeing "show that if R is a ring and S a simple R-module then Hom(S,S) is a field" would be unable to prove it, since it is false in general, but it is true if the book states at the start "All rings will be assumed to be commutative". This happened in this very forum recently.

Anyway, don't worry about the *-ions bit - the quaternions are just a 4-dimensional real vector space that also possesses multiplication and division operations (though multiplication is not commutative). As with any finite-dimensional real vector space, they possess an inner product that allows us to formally define angles. Indeed any space that has a pairing ( , ) satisfying some rules can be used to formally define an angle between things, even if it seems an odd thing to do.

In particular, given the set of continuous real valued functions on, say, the interval [0,1] I can define the pairing

$$(f,g) = \int_0^1 f(x)g(x)dx$$

and i can declare the length of f to be $|f| = \sqrt{(f,f)}$

then the "angle" between f and g is defined to be the angle theta in the range [0, pi] satisfying cos(theta) = (f,g)/(|f||g|).
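This function-space pairing can be approximated numerically. A sketch, assuming NumPy, with a trapezoidal rule on [0, 1]; the test functions f(x) = 1 and g(x) = x are my own example, giving (f, g) = 1/2, |f| = 1, |g| = 1/sqrt(3), hence cos(theta) = sqrt(3)/2 and theta = 30 degrees.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]

def pair(f, g):
    # (f, g) = integral over [0, 1] of f(x) g(x) dx, by the trapezoid rule
    y = f(x) * g(x)
    return float(np.sum(y[:-1] + y[1:]) * 0.5 * dx)

def angle_deg(f, g):
    # cos(theta) = (f, g) / (|f| |g|), with |f| = sqrt((f, f))
    c = pair(f, g) / np.sqrt(pair(f, f) * pair(g, g))
    return float(np.degrees(np.arccos(c)))

f = lambda t: np.ones_like(t)   # f(x) = 1
g = lambda t: t                 # g(x) = x
print(angle_deg(f, g))          # approximately 30.0
```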

## 1. What is an orthogonal basis?

An orthogonal basis is a set of basis vectors in a vector space that are all mutually perpendicular to each other. This means that the dot product of any two distinct vectors in the basis is equal to 0.

## 2. Why is it important for a basis to be orthogonal?

Having an orthogonal basis allows for easier calculations and simplification of equations. It also makes it easier to find the coordinates of a vector in the basis, as the dot product can be used to find the coefficients.

## 3. How do you determine if a basis is orthogonal?

To determine if a basis is orthogonal, take the dot product of each pair of distinct vectors in the basis. If all of these dot products equal 0, then the basis is orthogonal.

## 4. Can a basis be both orthogonal and non-orthogonal?

With respect to a fixed inner product, no: a basis is either orthogonal or it is not. But as discussed above, the same basis can be orthogonal with respect to one inner product and non-orthogonal with respect to another.

## 5. Is it always correct to assume that a basis is non-orthogonal?

No, it is not always correct to assume that a basis is non-orthogonal. In fact, many commonly used bases, such as the standard basis in Euclidean space, are orthogonal. It is important to check if a basis is orthogonal or not before making any assumptions.
