How Can You Complete a Set of Vectors to Form a Basis in R^5?

ELB27

Homework Statement


Consider in the space ##\mathbb{R}^5## vectors ##\vec{v}_1 = (2,1, 1, 5, 3)^T## , ##\vec{v}_2 = (3, 2, 0, 0, 0)^T## , ##\vec{v}_3 = (1, 1, 50, 921, 0)^T##.
a) Prove that these vectors are linearly independent.
b) Complete this system of vectors to a basis.
If you do part b) first you can do everything without any computation.

Homework Equations

The Attempt at a Solution


If I were to do a) first, I would put the 3 vectors in a matrix, get it to echelon form by row reduction and note that there is a pivot in every column. Even better, I could do the row reduction with two additional arbitrary vectors and choose their components such that the final echelon form has a pivot in every row and column. However, this method is cumbersome and requires tedious calculations. The question clearly suggests I do b) first to avoid all calculations (that's probably the reason for the hint and the ugly numbers in ##\vec{v}_3##).
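(For concreteness, this check is only a few lines with sympy's row reduction, though by hand it is exactly the tedious computation I would rather avoid. A minimal sketch, with the vectors placed as columns:)

```python
from sympy import Matrix

# The three given vectors as the columns of a 5x3 matrix.
A = Matrix([[2, 3, 1],
            [1, 2, 1],
            [1, 0, 50],
            [5, 0, 921],
            [3, 0, 0]])

# rref() returns the reduced matrix and the tuple of pivot columns;
# a pivot in every column proves linear independence.
R, pivots = A.rref()
print(pivots)   # (0, 1, 2) -> all three columns are pivot columns
```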
However, I do not see a way to choose two more vectors not belonging to ##\text{span}(\vec{v}_1,\vec{v}_2,\vec{v}_3)## that would complete the system to a basis without guessing or using the tedious row reduction suggested earlier (I could do it, but I prefer to find a more elegant approach).
Any suggestions on the best method to solve this one?

Any suggestions or comments will be greatly appreciated!
 
There isn't much to it. Row reducing the vectors to prove they are linearly independent will show that ##x_4## and ##x_5## are free.

Writing out the solution set will show it is spanned by two linearly independent vectors, which complete the basis.

I think it would be difficult to see this basis directly, unless you assume the conclusion of a) is true at the outset of the problem. Then you would know how to assume the form of the linearly independent spanning vectors.
 
Zondrina said:
There isn't much to it. Row reducing the vectors to prove they are linearly independent will show that ##x_4## and ##x_5## are free.

Writing out the solution set will show it is spanned by two linearly independent vectors, which complete the basis.

I think it would be difficult to see this basis directly, unless you assume the conclusion of a) is true at the outset of the problem. Then you would know how to assume the form of the linearly independent spanning vectors.
Alright then, I guess I will have to do some dirty work :p.
Thank you for the reply!
 
ELB27 said:
Alright then, I guess I will have to do some dirty work :p.
Thank you for the reply!

I want to clarify what I said earlier; I feel as if I was a little ambiguous.

If you know the vectors are linearly independent, then you know what the final form of the matrix will look like when you reduce ##A \vec x = \vec 0## before you even reduce it.

If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.

Since the vectors are linearly independent, only the trivial solution exists for the independent variables, i.e., you can comfortably place ##0's## in many of the vector indices for the solution set without much thought.

All that would be left to do is to place a ##1## in the index of each free variable for their respective vector in the solution set.

These vectors will then complete the basis without the need to row reduce.
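For what it's worth, here is a minimal sympy sketch of this construction, assuming the given vectors are placed as the rows of ##A## (sympy's nullspace() puts a ##1## in each free-variable slot, as described above):

```python
from sympy import Matrix

v1 = [2, 1, 1, 5, 3]
v2 = [3, 2, 0, 0, 0]
v3 = [1, 1, 50, 921, 0]

A = Matrix([v1, v2, v3])   # the given vectors as rows of a 3x5 matrix

# Each null-space basis vector has a 1 in one free-variable slot and is
# orthogonal to v1, v2, v3, so the two of them complete the set to a basis.
extras = A.nullspace()     # two vectors, since rank(A) = 3
B = Matrix.hstack(*(Matrix(v) for v in (v1, v2, v3)), *extras)
print(B.rank())            # 5 -> the five vectors form a basis of R^5
```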
 
Zondrina said:
I want to clarify what I said earlier, I feel as if I was a little ambiguous.

If you know the vectors are linearly independent, then you know what the final form of the matrix will look like when you reduce ##A \vec x = \vec 0## before you even reduce it.

If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.

Since the vectors are linearly independent, only the trivial solution exists for the independent variables, i.e., you can comfortably place ##0's## in many of the vector indices for the solution set without much thought.

All that would be left to do is to place a ##1## in the index of each free variable for their respective vector in the solution set.

These vectors will then complete the basis without the need to row reduce.
I think I get it now. Basically, assuming linear independence of the first 3 vectors, the two remaining vectors can be taken from the standard basis in ##\mathbb{R}^5##, and all that's left is to find which ones?
 
Zondrina said:
If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.
You will have at least 2 free variables, since the three vectors might be linearly dependent (coplanar or even collinear). In the previous paragraph you made the assumption that the three vectors were linearly independent, in which case the sentence above is correct, but I wasn't sure if that assumption still held in the next paragraph. For clarity, you might have written, "If you have ##3## linearly independent vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables..."
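A minimal sympy illustration of this caveat, using a made-up dependent triple rather than the problem's vectors:

```python
from sympy import Matrix

# A deliberately dependent triple: the third row equals the first plus the second.
A = Matrix([[1, 0, 0, 0, 0],
            [0, 1, 0, 0, 0],
            [1, 1, 0, 0, 0]])

print(A.cols - A.rank())   # 3 free variables here, not 2
```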
 
ELB27 said:
If I were to do a) first, I would put the 3 vectors in a matrix, get it to echelon form by row reduction and note that there is a pivot in every column.
A better approach would be to start with the definition of linear independence and think more generally about how to solve the system of equations rather than resorting to using matrices. To show linear independence, you want to solve
$$c_1\begin{pmatrix} 2\\1\\1\\5\\3\end{pmatrix} + c_2 \begin{pmatrix} 3\\2\\0\\0\\0 \end{pmatrix} + c_3 \begin{pmatrix} 1\\1\\50\\921\\0\end{pmatrix} = 0.$$ You should be able to see by inspection that ##c_1=0##. And it's pretty easy to show ##c_2 = c_3 = 0## follows with virtually no calculating.
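For the record, spelled out component by component (fifth, then fourth, then first), the inspection gives
$$\begin{aligned}
3c_1 &= 0 &&\Rightarrow\ c_1 = 0,\\
5c_1 + 921c_3 &= 0 &&\Rightarrow\ c_3 = 0,\\
2c_1 + 3c_2 + c_3 &= 0 &&\Rightarrow\ c_2 = 0,
\end{aligned}$$
so only the trivial combination exists and the vectors are linearly independent.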
 
vela said:
A better approach would be to start with the definition of linear independence and think more generally about how to solve the system of equations rather than resorting to using matrices. To show linear independence, you want to solve
$$c_1\begin{pmatrix} 2\\1\\1\\5\\3\end{pmatrix} + c_2 \begin{pmatrix} 3\\2\\0\\0\\0 \end{pmatrix} + c_3 \begin{pmatrix} 1\\1\\50\\921\\0\end{pmatrix} = 0.$$ You should be able to see by inspection that ##c_1=0##. And it's pretty easy to show ##c_2 = c_3 = 0## follows with virtually no calculating.
Ah, that's the simplicity I was looking for! (Is 'by inspection' a valid formal argument?) ##c_1=0## because of the ##3## in the bottom of ##\vec{v}_1##, ##c_3=0## because of the ##921## and ##c_2=0## because it's the last left standing, right? As for a possible completion to a basis, by inspection I think that these two will work: ##\vec{v}_4 = (0,0,1,0,0)^T ; \vec{v}_5 = (0,1,0,0,0)^T## or ##\vec{v}_5 = (1,0,0,0,0)^T##.
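(A quick sympy check of this proposed completion, as a sketch; either choice of ##\vec{v}_5## works:)

```python
from sympy import Matrix

v1 = Matrix([2, 1, 1, 5, 3])
v2 = Matrix([3, 2, 0, 0, 0])
v3 = Matrix([1, 1, 50, 921, 0])
v4 = Matrix([0, 0, 1, 0, 0])   # the proposed e_3
v5 = Matrix([0, 1, 0, 0, 0])   # the proposed e_2 (e_1 works as well)

M = Matrix.hstack(v1, v2, v3, v4, v5)
print(M.det())   # -8289, nonzero -> the five vectors form a basis of R^5
```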
 
