# How can the orthonormal basis of four vectors be found?

soopo

In summary: to find an orthonormal basis for the span of four vectors, first row reduce the matrix whose rows are the given vectors to extract a linearly independent spanning set, then apply the Gram-Schmidt process to orthogonalize that set and normalize the result. When row reducing, put the initial vectors in as rows and take them out as rows. To check for errors, take the dot product of each presumed orthonormal vector with every other presumed orthonormal vector; all such dot products should be zero, and each vector should have unit length.

## Homework Statement

How can I find the orthonormal basis of four vectors?

The vectors are:
(0, 3, 0, 4), (4, 0, 3, 0), (4, 3, 3, 4) and (4, -3, 3, -4).

## The Attempt at a Solution

I am not sure whether I should use the Gram-Schmidt process, or instead find
eigenvalues and eigenvectors and then normalize.

I would like to use the latter method. However, I am not sure whether the
process works here.

What's the purpose of Gram-Schmidt again? Perhaps you should look it up on Wikipedia...

phreak said:
What's the purpose of Gram-Schmidt again? Perhaps you should look it up on Wikipedia...

It is a method for orthogonalizing a set of vectors in an inner product space.
However, the method involves many steps, so it is error-prone by hand. That makes me unsure whether it is the best method or not.

Is it possible to solve the problem with eigenvectors and so determining the orthonormal basis?

Put them into a matrix and row reduce it. No eigenvectors involved; it's simpler than that. Once you've found a spanning basis, use Gram-Schmidt on it.

IMO, Gram-Schmidt is much simpler than what you propose with eigenvectors.

Dick said:
Put them into a matrix and row reduce it. No eigenvectors involved; it's simpler than that. Once you've found a spanning basis, use Gram-Schmidt on it.

I have got:

(1 0 1 -1
0 1 1 1
0 0 0 0
0 0 0 0)

This seems to mean that I have to use the Gram-Schmidt process only for two
vectors, like

Select v1 = (1 0 1 -1)

v2 = w2 - ((w2 · v1) / (v1 · v1)) v1
= w2 - 0
= w2,
where w2 = (0 1 1 1).

Then, normalizing

u1 = $$\frac {(1\ 0\ 1\ -1)} {\sqrt3}$$
u2 = $$\frac {(0\ 1\ 1\ 1)} {\sqrt3}$$

I can't believe that this is the answer. There may be some mistakes.

Well, there's an easy way to check it. Take the dot product of each presumed orthonormal vector with every other presumed orthonormal vector. All dot products should be zero. If this is true, then you're done. In your case, you only have two vectors, so just calculate $$\langle u_1,u_2\rangle$$.
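That orthonormality check is easy to run numerically. A minimal pure-Python sketch (not from the thread), using the two presumed orthonormal vectors from the post above:

```python
import math

# Presumed orthonormal vectors: u1 = (1,0,1,-1)/sqrt(3), u2 = (0,1,1,1)/sqrt(3)
s = 1 / math.sqrt(3)
u1 = [s, 0.0, s, -s]
u2 = [0.0, s, s, s]

def dot(a, b):
    """Standard dot product on R^4."""
    return sum(x * y for x, y in zip(a, b))

print(dot(u1, u2))               # should be (close to) 0: orthogonal
print(dot(u1, u1), dot(u2, u2))  # should both be (close to) 1: unit length
```

Note that this check only verifies orthonormality; as the next posts show, it cannot detect whether the vectors span the right subspace.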

phreak said:
Well, there's an easy way to check it. Take the dot product of each presumed orthonormal vector with every other presumed orthonormal vector. All dot products should be zero. If this is true, then you're done. In your case, you only have two vectors, so just calculate $$\langle u_1,u_2\rangle$$.

We're done!
All dot products are zero.

Thank you!

Yeah, fine. But can you find a linear combination of (1,0,1,-1) and (0,1,1,1) that makes (0,3,0,4)? I don't think so. I think you row reduced it wrong.

Dick said:
Yeah, fine. But can you find a linear combination of (1,0,1,-1) and (0,1,1,1) that makes (0,3,0,4)? I don't think so. I think you row reduced it wrong.

I cannot find that.
Hmm... I double-checked the initial values in my calculator, and they are correct. I entered the initial vectors into the calculator with one column per vector. That could be the mistake, though I didn't think each vector had to go in as a row.

I can't speak to calculator problems, but I do know that if you claim that a*(1,0,1,-1) and b*(0,1,1,1) span the space, and (0,3,0,4) is in it then b=3 and a=0. And that doesn't work.

Dick said:
I can't speak to calculator problems, but I do know that if you claim that a*(1,0,1,-1) and b*(0,1,1,1) span the space, and (0,3,0,4) is in it then b=3 and a=0. And that doesn't work.

I get the same row reduced matrix also by hand.

Do you mean that it is not possible to present the vectors of the basis as a linear combination of the initial vectors?

What does it mean if we cannot present the vectors as a linear combination of the initial vectors?

soopo said:
I get the same row reduced matrix also by hand.

Do you mean that it is not possible to present the vectors of the basis as a linear combination of the initial vectors?

What does it mean if we cannot present the vectors as a linear combination of the initial vectors?

I think what you are doing is putting the vectors in as columns, doing row operations, then pulling the vectors out as rows. Don't do that. Put them in as ROWS, do row operations, then pull them out as rows.
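This recipe (rows in, row reduce, rows out) can be sketched in a few lines of Python. The `rref` helper below is hypothetical, written here for illustration; it uses exact rational arithmetic to avoid rounding surprises:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over exact rationals (Gauss-Jordan)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # scale so the pivot entry is 1
        pv = m[pivot_row][col]
        m[pivot_row] = [x / pv for x in m[pivot_row]]
        # eliminate this column from every other row
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# The four vectors from the problem, entered as ROWS:
vectors = [(0, 3, 0, 4), (4, 0, 3, 0), (4, 3, 3, 4), (4, -3, 3, -4)]
for row in rref(vectors):
    print(row)
```

The nonzero output rows are the minimal spanning set to feed into Gram-Schmidt.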

Dick said:
I think what you are doing is putting the vectors in as columns, doing row operations, then pulling the vectors out as rows. Don't do that. Put them in as ROWS, do row operations, then pull them out as rows.

Dick said:
I think what you are doing is putting the vectors in as columns, doing row operations, then pulling the vectors out as rows. Don't do that. Put them in as ROWS, do row operations, then pull them out as rows.

I did that like you say.

The matrix reduces to:
(1 0 3/4 0
0 1 0 4/3
0 0 0 0
0 0 0 0)

Let's use Gram-Schmidt process again:
Select v1 = (1 0 3/4 0)

v2 = w2 - (w2 * v1) / (v1 * v1) *v1
= w2 - 0
= w2

Then, normalizing
u1 = (4/5) [1 0 3/4 0]
u2 = (3/5) [0 1 0 4/3]

This problem raised a few questions.

Dick made an elegant error check earlier. Can I also check these results for errors by comparing them against the initial vectors?

Does it matter how we put the initial vectors into the matrix? That is, can we enter the vectors either as rows or as columns?
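The computation in this post can be reproduced numerically. A minimal pure-Python sketch (plain floating point, not from the thread), starting from the two nonzero rows of the reduced matrix:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

v1 = [1, 0, 3 / 4, 0]
w2 = [0, 1, 0, 4 / 3]

# Gram-Schmidt step: subtract from w2 its projection onto v1
coef = dot(w2, v1) / dot(v1, v1)   # here 0, since v1 and w2 are already orthogonal
v2 = [w - coef * v for w, v in zip(w2, v1)]

u1 = normalize(v1)   # (4/5) * (1, 0, 3/4, 0)
u2 = normalize(v2)   # (3/5) * (0, 1, 0, 4/3)
print(u1, u2)
```

This confirms the answer above: u1 = (4/5, 0, 3/5, 0) and u2 = (0, 3/5, 0, 4/5), up to floating-point rounding.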

I think you have a typo, u2=(3/5)[0,1,0,4/3], right? The purpose of the initial row reduction is to reduce the initial four vectors to the two vectors which are linearly independent and span the whole subspace. That gives you a smaller set to apply Gram-Schmidt to. In doing this always put the vectors in and take them out as rows.

Dick said:
The purpose of the initial row reduction is to reduce the initial four vectors to the two vectors which are linearly independent and span the whole subspace. That gives you a smaller set to apply Gram-Schmidt to. In doing this always put the vectors in and take them out as rows.

Can we always do the row reduction?

For example, I today had a similar problem with the following matrix:
(1 2 0 0
2 1 0 0
0 0 1 2
0 0 2 1)

The question asked me to orthogonally diagonalize the symmetric matrix A above, that is, to find an orthogonal matrix P and calculate $$P^{T}AP$$.

I reduced it to $$I_4$$. This made me very unsure.
I thought that it cannot be that easy: normalize, put the vectors in P, and
calculate $$P^{T}AP$$ by putting the eigenvalues in.

My answer for $$P^{T}AP$$ was $$I_4$$.

I started to think that I should instead have subtracted $$\lambda$$ along the diagonal,
solved for the eigenvalues, and then solved for the eigenvectors.

At home, I got different answers from the two methods. Now I am not sure
which method is the correct one.

That's a completely different problem. No, you can't row reduce every matrix and expect to get the same answer to every problem. The row reduction in the previous problem had NOTHING to do with any matrix. It was just to get a minimal spanning set.

Dick said:
That's a completely different problem. No, you can't row reduce every matrix and expect to get the same answer to every problem. The row reduction in the previous problem had NOTHING to do with any matrix. It was just to get a minimal spanning set.

So your point is: when we need a basis, we want the minimal set of vectors that can represent every other vector in the initial set. For instance, the four vectors at the start of this thread were reduced to two.

Your other point is: use row reduction only at the start, to find a basis. In eigenvalue and eigenvector problems, for example, we substitute the lambdas in and do not row reduce.

Thank you! You really have put me on the right track :)

Your original set of vectors is not linearly independent: the first and second add up to the third, and the second minus the first gives the fourth. That is why the row-reduced matrix has only two nonzero rows. Gram-Schmidt only works on a linearly independent list. You can apply it to the first two vectors (with or without row reducing first; either way you get an orthonormal basis of the same subspace). If you try to continue with the third and fourth vectors, the subtraction step produces the zero vector, which cannot be normalized.
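These dependence relations are easy to confirm with a two-line check (not from the thread, just componentwise arithmetic on the given vectors):

```python
# The four vectors from the problem statement:
a = (0, 3, 0, 4)
b = (4, 0, 3, 0)
c = (4, 3, 3, 4)
d = (4, -3, 3, -4)

# c = a + b, and d = b - a, so only two of the four vectors are independent
print(tuple(x + y for x, y in zip(a, b)) == c)  # True
print(tuple(y - x for x, y in zip(a, b)) == d)  # True
```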

## 1. What is an orthonormal basis?

An orthonormal basis is a set of vectors that are mutually perpendicular (orthogonal) and have a length of 1 (normalized). This means that each vector in the set is perpendicular to all the other vectors and they all have a unit length.

## 2. Why is an orthonormal basis important?

An orthonormal basis is important because it simplifies many mathematical calculations, particularly in linear algebra. It allows for easy representation of vectors in a coordinate system and makes it easier to solve systems of equations.

## 3. How can the orthonormal basis of four vectors be found?

To find the orthonormal basis, you can use the Gram-Schmidt process. Keep the first vector as it is and normalize it. Then subtract from the second vector its projection onto the first, and normalize the result to get the second orthonormal vector. Repeat for each remaining vector, subtracting its projections onto all the orthonormal vectors found so far, to build up the full orthonormal basis.
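The process described above can be sketched as a small function. This is a hypothetical pure-Python helper (classical Gram-Schmidt, assuming the input vectors are linearly independent):

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors.

    Classical Gram-Schmidt: subtract from each vector its projections
    onto the previously built orthonormal vectors, then normalize.
    """
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            c = sum(wi * ui for wi, ui in zip(w, u))   # <w, u>, with u a unit vector
            w = [wi - c * ui for wi, ui in zip(w, u)]  # remove the component along u
        n = math.sqrt(sum(wi * wi for wi in w))
        if n < 1e-12:
            raise ValueError("vectors are linearly dependent")
        basis.append([wi / n for wi in w])
    return basis
```

For example, `gram_schmidt([(0, 3, 0, 4), (4, 0, 3, 0)])` on the two independent vectors from this thread returns (0, 3/5, 0, 4/5) and (4/5, 0, 3/5, 0), matching the basis found above.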

## 4. Can an orthonormal basis of four vectors always be found?

Yes, an orthonormal basis of four vectors can always be found as long as the vectors are linearly independent. This means that none of the vectors can be written as a combination of the others.

## 5. How is the orthonormal basis of four vectors used in applications?

The orthonormal basis of four vectors is used in many applications, particularly in computer graphics and physics. It is used to represent rotations and translations in 3D space, as well as in solving systems of equations in physics problems.
