How Can You Complete a Set of Vectors to Form a Basis in R^5?

  • Thread starter: ELB27
  • Tags: Basis

Homework Help Overview

The discussion revolves around completing a set of vectors in the space ##\mathbb{R}^5## to form a basis. The original poster presents three vectors and seeks to prove their linear independence while also exploring how to add two more vectors to complete the basis without extensive calculations.

Discussion Character

  • Exploratory, Assumption checking, Problem interpretation

Approaches and Questions Raised

  • Participants discuss methods for proving linear independence, including row reduction and inspection of the vectors. There is a suggestion to consider the implications of linear independence on the number of free variables when completing the basis.

Discussion Status

Some participants have offered insights on the relationship between the number of vectors and the dimensionality of the space, noting that if the three vectors are linearly independent, there will be two free variables. Others are exploring the implications of assuming linear independence and how that affects the choice of additional vectors to complete the basis.

Contextual Notes

There is an ongoing discussion about the assumptions regarding the linear independence of the given vectors and the implications of those assumptions on the completion of the basis. The original poster expresses a desire to avoid tedious calculations, indicating a preference for more elegant solutions.

ELB27

Homework Statement


Consider in the space ##\mathbb{R}^5## vectors ##\vec{v}_1 = (2,1, 1, 5, 3)^T## , ##\vec{v}_2 = (3, 2, 0, 0, 0)^T## , ##\vec{v}_3 = (1, 1, 50, 921, 0)^T##.
a) Prove that these vectors are linearly independent.
b) Complete this system of vectors to a basis.
If you do part b) first you can do everything without any computation.

Homework Equations

The Attempt at a Solution


If I were to do a) first, I would put the 3 vectors in a matrix, get it to echelon form by row reduction and note that there is a pivot in every column. Even better - I could do the row reduction with additional two arbitrary vectors and choose their components such that the final echelon form has a pivot in every row and column. However, this method is cumbersome and requires tedious calculations. The question clearly suggests I do b) first to avoid all calculations (that's probably the reason for the hint and the ugly numbers in ##\vec{v}_3##).
However, I do not see a way to choose two more vectors outside ##\text{span}(\vec{v}_1, \vec{v}_2, \vec{v}_3)## to complete a basis without guessing or using the tedious row reduction suggested earlier (I could do it, but I would prefer a more elegant approach).
Any suggestions on the best method to solve this one?

Any suggestions or comments will be greatly appreciated!
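For anyone who wants a quick numerical sanity check of part a) alongside the hand calculation (a NumPy sketch, purely illustrative — the exercise is meant to be done without a computer):

```python
import numpy as np

# The three given vectors as the rows of a 3x5 matrix.
A = np.array([
    [2, 1, 1,  5,   3],
    [3, 2, 0,  0,   0],
    [1, 1, 50, 921, 0],
])

# Three vectors are linearly independent iff this matrix has rank 3.
rank = np.linalg.matrix_rank(A)
print(rank)  # 3, so the vectors are linearly independent
```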
 
There isn't much to it. Row reducing the vectors to prove they are linearly independent will show that ##x_4## and ##x_5## are free.

Writing out the solution set will show that it is spanned by two linearly independent vectors; those two vectors complete the basis.

I think it would be difficult to see this basis directly, unless you assume the conclusion of a) is true at the outset of the problem. Then you would know how to assume the form of the linearly independent spanning vectors.
 
Zondrina said:
There isn't much to it. Row reducing the vectors to prove they are linearly independent will show that ##x_4## and ##x_5## are free.

Writing out the solution set will show that it is spanned by two linearly independent vectors; those two vectors complete the basis.

I think it would be difficult to see this basis directly, unless you assume the conclusion of a) is true at the outset of the problem. Then you would know how to assume the form of the linearly independent spanning vectors.
Alright then, I guess I will have to do some dirty work :p.
Thank you for the reply!
 
ELB27 said:
Alright then, I guess I will have to do some dirty work :p.
Thank you for the reply!

I want to clarify what I said earlier, I feel as if I was a little ambiguous.

If you know the vectors are linearly independent, then you know what the final form of the matrix will look like when you reduce ##A \vec x = \vec 0## before you even reduce it.

If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.

Since the vectors are linearly independent, only the trivial solution exists for the independent variables, i.e. you can comfortably place ##0's## in many of the vector indices for the solution set without much thought.

All that would be left to do is to place a ##1## in the index of each free variable for their respective vector in the solution set.

The span of these vectors will form the basis without the need to row reduce.
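As a sketch of this null-space idea (using SymPy for exact arithmetic — an illustration, not part of the original post): the null space of the matrix whose rows are ##\vec{v}_1, \vec{v}_2, \vec{v}_3## is the orthogonal complement of their span in ##\mathbb{R}^5##, so its two basis vectors complete the basis.

```python
from sympy import Matrix

# Rows are the three given vectors; the null space of this matrix
# is the orthogonal complement of span(v1, v2, v3) in R^5.
A = Matrix([
    [2, 1, 1,  5,   3],
    [3, 2, 0,  0,   0],
    [1, 1, 50, 921, 0],
])

null_basis = A.nullspace()  # two vectors, one per free variable (x4, x5)

# Stack v1, v2, v3 (as columns) next to the two null-space vectors:
# the resulting 5x5 matrix has rank 5, so all five form a basis.
full = A.T.row_join(Matrix.hstack(*null_basis))
print(len(null_basis), full.rank())  # 2 5
```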
 
Zondrina said:
I want to clarify what I said earlier, I feel as if I was a little ambiguous.

If you know the vectors are linearly independent, then you know what the final form of the matrix will look like when you reduce ##A \vec x = \vec 0## before you even reduce it.

If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.

Since the vectors are linearly independent, only the trivial solution exists for the independent variables, i.e. you can comfortably place ##0's## in many of the vector indices for the solution set without much thought.

All that would be left to do is to place a ##1## in the index of each free variable for their respective vector in the solution set.

The span of these vectors will form the basis without the need to row reduce.
I think I get it now. Basically, assuming linear independence of the first 3 vectors, the two remaining vectors must be from the standard basis in ##\mathbb{R}^5## and all that's left is to find which ones?
 
Zondrina said:
If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.
You will have at least 2 free variables, since the three vectors might be linearly dependent (coplanar or even collinear). In the previous paragraph you made the assumption that the three vectors were linearly independent, in which case the sentence above is correct, but I wasn't sure if that assumption still held in the next paragraph. For clarity, you might have written, "If you have ##3## linearly independent vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables..."
 
ELB27 said:
If I were to do a) first, I would put the 3 vectors in a matrix, get it to echelon form by row reduction and note that there is a pivot in every column.
A better approach would be to start with the definition of linear independence and think more generally about how to solve the system of equations rather than resorting to using matrices. To show linear independence, you want to solve
$$c_1\begin{pmatrix} 2\\1\\1\\5\\3\end{pmatrix} + c_2 \begin{pmatrix} 3\\2\\0\\0\\0 \end{pmatrix} + c_3 \begin{pmatrix} 1\\1\\50\\921\\0\end{pmatrix} = 0.$$ You should be able to see by inspection that ##c_1=0##. And it's pretty easy to show ##c_2 = c_3 = 0## follows with virtually no calculating.
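Written out component by component, the fifth, fourth, and first rows of this vector equation give, in turn,
$$3c_1 = 0 \;\Rightarrow\; c_1 = 0, \qquad 5c_1 + 921c_3 = 0 \;\Rightarrow\; c_3 = 0, \qquad 2c_1 + 3c_2 + c_3 = 0 \;\Rightarrow\; c_2 = 0.$$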
 
vela said:
A better approach would be to start with the definition of linear independence and think more generally about how to solve the system of equations rather than resorting to using matrices. To show linear independence, you want to solve
$$c_1\begin{pmatrix} 2\\1\\1\\5\\3\end{pmatrix} + c_2 \begin{pmatrix} 3\\2\\0\\0\\0 \end{pmatrix} + c_3 \begin{pmatrix} 1\\1\\50\\921\\0\end{pmatrix} = 0.$$ You should be able to see by inspection that ##c_1=0##. And it's pretty easy to show ##c_2 = c_3 = 0## follows with virtually no calculating.
Ah, that's the simplicity I was looking for! (Is 'by inspection' a valid formal argument?) ##c_1=0## because of the ##3## in the bottom of ##\vec{v}_1##, ##c_3=0## because of the ##921## and ##c_2=0## because it's the last left standing, right? As for a possible completion to a basis, by inspection I think that these two will work: ##\vec{v}_4 = (0,0,1,0,0)^T ; \vec{v}_5 = (0,1,0,0,0)^T## or ##\vec{v}_5 = (1,0,0,0,0)^T##.
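Both proposed completions do work; a quick rank check (a NumPy sketch, illustrative only — the thread's point is that this can be seen by inspection) confirms that either choice gives five independent vectors:

```python
import numpy as np

v1 = [2, 1, 1,  5,   3]
v2 = [3, 2, 0,  0,   0]
v3 = [1, 1, 50, 921, 0]
e1 = [1, 0, 0, 0, 0]
e2 = [0, 1, 0, 0, 0]
e3 = [0, 0, 1, 0, 0]

# Five vectors form a basis of R^5 iff the 5x5 matrix they make up
# has rank 5 (equivalently, nonzero determinant).
rank_a = np.linalg.matrix_rank(np.array([v1, v2, v3, e3, e2]))
rank_b = np.linalg.matrix_rank(np.array([v1, v2, v3, e3, e1]))
print(rank_a, rank_b)  # 5 5
```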
 
