Determine whether 3 vectors form a basis of R^4

The discussion centers on determining whether the vectors a1=(1,-1,0,1), a2=(2,3,-1,0), and a3=(4,1,-1,4) form a basis of R^4. It is established that these vectors are linearly independent (rank 3), but they cannot span R^4 because a basis for R^4 requires four linearly independent vectors. Participants clarify that the n in R^n is the dimension of the vector space, so vectors in R^4 have four components. The conversation also touches on the relationship between the number of independent vectors and the dimension of the subspace they span. Ultimately, the conclusion is that while the three vectors form a basis for a three-dimensional subspace of R^4, they do not form a basis for R^4 itself.
Deimantas

Homework Statement



Is the system of vectors a1=(1,-1,0,1), a2=(2,3,-1,0), a3=(4,1,-1,4) linearly independent? Do these vectors form a basis in the vector space R^4? State why.

Homework Equations





The Attempt at a Solution



I have done the first part of the exercise: the system of vectors is linearly independent and the rank is 3. I think that if there are 3 linearly independent vectors, and the rank is 3, then the vectors form a basis of the vector space R^3. What about R^4? How do I prove whether they form a basis?
 
Try to see whether they can span R^4. The answer is no, they won't span it. You need 4 vectors for a basis of R^4 because the dimension is 4, but that's a theorem; try to figure out why there must be a vector that is not a linear combination of these 3 vectors. (Looking at the corresponding system of linear equations will give you some insight.)
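If it helps to see this concretely, here is a quick numerical sketch (using numpy; augmenting with e1 = (1, 0, 0, 0) is just one convenient choice of test vector):

```python
import numpy as np

# The three given vectors, stacked as the rows of a matrix
a1 = np.array([1, -1, 0, 1])
a2 = np.array([2, 3, -1, 0])
a3 = np.array([4, 1, -1, 4])
A = np.vstack([a1, a2, a3])

print(np.linalg.matrix_rank(A))  # 3: the three vectors are linearly independent

# A basis for R^4 must span all of R^4. Adjoin e1 = (1, 0, 0, 0):
# the rank grows to 4, so e1 is NOT a linear combination of a1, a2, a3,
# and the three vectors do not span R^4.
e1 = np.array([1, 0, 0, 0])
print(np.linalg.matrix_rank(np.vstack([A, e1])))  # 4
```

Solving c1*a1 + c2*a2 + c3*a3 = e1 by hand leads to the same contradiction (the third and fourth components force c1 = c2 = c3 = 0, but then the first component gives 0 = 1).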
 
Deimantas said:

I have done the first part of the exercise: the system of vectors is linearly independent and the rank is 3. I think that if there are 3 linearly independent vectors, and the rank is 3, then the vectors form a basis of the vector space R^3.
No, they can't possibly form a basis for R^3. They are vectors in R^4. They form a basis for a three-dimensional subspace of R^4, but that is very different from R^3.
Deimantas said:
What about R^4? How do I prove whether they form a basis?
The vectors can't form a basis for R^4 since there aren't enough of them.
 
I never quite understood the R^n or R^m concept. I tried but gave up. What do you mean when you say "since there aren't enough of them"? Does R^n represent the total number of rows or columns, or the number of linearly independent vectors?
 
Vectors in R, or R^1, have one component (a single real number). Any basis for this vector space contains one vector.
Vectors in R^2 have two components (e.g., <1, 3>). Any basis for this vector space contains two vectors.
Vectors in R^3 have three components (e.g., <1, 3, -2>). Any basis for this vector space contains three vectors.
And so on...

In the problem in this thread we're given three vectors in R^4. Being vectors in R^4, they have four components, so they don't belong to, for example, R^3 or R^2 or R^5. A basis for R^4 must be a linearly independent set of four vectors.
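To make the counting concrete, here is a small numpy sketch (the example vectors below are arbitrary illustrations, not taken from the problem):

```python
import numpy as np

# A basis for R^n is a set of n linearly independent vectors in R^n.
# For R^2: two independent vectors, each with two components.
B2 = np.array([[1, 3],
               [2, 1]])
print(np.linalg.matrix_rank(B2))  # 2 == dim(R^2), so the rows form a basis for R^2

# A vector with three components lives in R^3; it cannot belong to any
# basis for R^2, regardless of what its entries are.
v = np.array([1, 3, -2])
print(len(v))  # 3 components, so v is a vector in R^3
```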
 
OK, as an example, if a 4x5 matrix has only 2 linearly independent vectors, then these 4 row vectors form a basis in R^2, corresponding to the number of pivot rows or pivot columns only. Correct?
 
sharks said:
OK, as an example, if a 4x5 matrix has only 2 linearly independent vectors, then these 4 row vectors form a basis in R^2, corresponding to the number of pivot rows or pivot columns only. Correct?
No.

A 4 x 5 matrix is a map from R^5 to R^4. The rows are vectors in R^5, and have absolutely nothing to do with R^2.
 
Mark44 said:
No.

A 4 x 5 matrix is a map from R^5 to R^4. The rows are vectors in R^5, and have absolutely nothing to do with R^2.

I'm sorry, but I don't understand. What is the effect of the number of linearly independent vectors on R^n or R^m?

Is there a good book or paper that I could refer to? I just can't wrap my head around those R^n and R^m numbers. But I do know the fundamental theorem of linear algebra, part 1, which deals with dimensions.

EDIT: OK, I think I understand. The R^{value} is always equal to the number of components in the row(s) corresponding to the linearly independent vector(s). So the rule is: always count the number of components in the rows, not in the independent columns. Correct?
 
sharks said:
I'm sorry, but I don't understand. What is the effect of the number of linearly independent vectors on R^n or R^m?
Now I don't understand what you're asking.

Let's go back to your example of a 4 x 5 matrix.

Suppose you ended up with these vectors:
<1, 0, 1, 1, 1>
<0, 1, 1, 1, 1>
You asked whether these could be a basis for R^2. They couldn't possibly be, because these two vectors belong to R^5 (they have 5 components). A basis for R^2 would have to consist of vectors in R^2, such as <1, 1> and <0, 1>.

The two vectors in R^5 above are a basis for a two-dimensional subspace of R^5. This subspace looks like a plane, but it's a plane embedded in 5-dimensional space. The fact that this subspace of R^5 is a plane has nothing to do with the R^2 vector space.
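Using the two vectors above, you can verify numerically which vectors lie in that plane (a sketch; e3 is just an arbitrary test vector I picked for illustration):

```python
import numpy as np

# The two row vectors from the example; they span a 2-D subspace
# (a "plane") sitting inside R^5.
u = np.array([1, 0, 1, 1, 1])
w = np.array([0, 1, 1, 1, 1])
S = np.vstack([u, w])
print(np.linalg.matrix_rank(S))  # 2: a two-dimensional subspace of R^5

# u + w lies in the plane, so adjoining it does not raise the rank...
print(np.linalg.matrix_rank(np.vstack([S, u + w])))  # still 2

# ...but e3 = (0, 0, 1, 0, 0) is not in the plane, so the rank goes up.
e3 = np.array([0, 0, 1, 0, 0])
print(np.linalg.matrix_rank(np.vstack([S, e3])))  # 3
```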
sharks said:
Is there a good book or paper that I could refer to? I just can't wrap my head around those R^n and R^m numbers. But I do know the fundamental theorem of linear algebra, part 1, which deals with dimensions.

It's just a matter of realizing that the n and m in the symbols R^n and R^m refer to vector spaces of dimension n and m, respectively. n and m are just placeholders for some integers.

You aren't going to be able to come up with a mental image of 5-dimensional space (or 4- or 6- or n-dimensional), but that's OK; you don't need to. You can visualize a 1-dimensional subspace as a line in some blob that represents the higher-dimensional space. A 2-dimensional subspace is just a plane in the higher-dimensional space.

What is it about R^n that you're having trouble with?
 
You made your edit after I had already started a response.
sharks said:
EDIT: OK, i think i understand. The R^{value} is always equal to the number of components in the row/s corresponding to the linearly independent vector/s.
That's more complicated than it needs to be. The "value" is equal to the number of components in a vector in R^{value}.
sharks said:
So the rule is: always count the number of components in the rows, not in the independent columns. Correct?

In your example of a 4 x 5 matrix that row-reduces to a matrix with two non-zero rows, rank(A) = 2, nullity(A) = dim(Kernel(A)) = 3, and n = 5, where n is the dimension of the domain, R^5.
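As a sanity check of those numbers, here is a hypothetical 4 x 5 matrix constructed to have exactly two independent rows:

```python
import numpy as np

# A 4 x 5 matrix whose rows reduce to two non-zero rows (rank 2):
# rows 3 and 4 are linear combinations of rows 1 and 2.
r1 = np.array([1, 0, 1, 1, 1])
r2 = np.array([0, 1, 1, 1, 1])
A = np.vstack([r1, r2, r1 + r2, 2 * r1])

rank = np.linalg.matrix_rank(A)
n = A.shape[1]           # dimension of the domain, R^5
nullity = n - rank       # rank-nullity theorem: rank(A) + nullity(A) = n
print(rank, nullity, n)  # 2 3 5
```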
 
Mark44 said:
In your example of a 4 x 5 matrix that row-reduces to a matrix with two non-zero rows, rank(A) = 2, nullity(A) = dim(Kernel(A)) = 3, and n = 5, where n is the dimension of the domain, R^5.

That's the best way that I can understand and remember it. It's just the rank-nullity theorem.
 
