Generalizing Linear Independence: Beyond R^n and into Matrix Spaces

Summary
To determine whether n vectors in R^n are linearly independent, one can take the determinant of the n×n matrix formed by those vectors. The idea extends to m×n matrices: flatten each matrix into a vector of length mn, and the same determinant approach applies to the resulting (mn)×(mn) matrix. For checking linear dependence of an arbitrary set of vectors, reduced row echelon form gives the rank of the matrix, which counts the linearly independent vectors in the set. Linear independence can also be framed as the question of whether the equation c_1 v_1 + ... + c_n v_n = 0 has only the trivial solution. The discussion highlights that while the determinant method is specific to R^n, the definition of linear independence adapts to any vector space.
schaefera
To test whether n vectors in R^n are linearly independent, you can put those vectors as the columns of an n×n matrix and take its determinant: the vectors are independent if and only if the determinant is nonzero.

How can this be generalized beyond vectors in R^n, say to the space R^(m×n) of m×n matrices?
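For concreteness, here is a minimal sketch of the determinant test in ##\mathbb{R}^3##, assuming NumPy as the tool (the thread itself names no library):

```python
import numpy as np

# Three vectors in R^3, stacked as the columns of a 3x3 matrix.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])   # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])
print(np.linalg.det(A))  # 0.0 (up to rounding): linearly dependent
```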
 
Hey schaefera.

In this new space do you have mn vectors in an mn-dimensional space? If so, you do exactly the same thing, except you flatten each matrix into a vector of length mn, and your test matrix is (mn)×(mn).
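A sketch of that flattening for 2×2 matrices (so mn = 4), again assuming NumPy, with the standard basis of R^(2×2) as the example set:

```python
import numpy as np

M1 = np.array([[1.0, 0.0], [0.0, 0.0]])
M2 = np.array([[0.0, 1.0], [0.0, 0.0]])
M3 = np.array([[0.0, 0.0], [1.0, 0.0]])
M4 = np.array([[0.0, 0.0], [0.0, 1.0]])  # standard basis of R^(2x2)

# Flatten each matrix to a length-4 vector; stack as columns of a 4x4 matrix.
A = np.column_stack([M.flatten() for M in (M1, M2, M3, M4)])
print(np.linalg.det(A))  # 1.0: nonzero, so the four matrices are independent
```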

If you want to check whether any set of vectors is linearly dependent (with fewer vectors than the dimension of the space), simply put the vectors in a matrix, row-reduce it to reduced row echelon form, and see what its rank is. The rank gives you the number of linearly independent vectors in the set you entered.
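A sketch of the rank test, using SymPy's rref() since NumPy has no RREF routine (numpy.linalg.matrix_rank, which works via the SVD, would give the same count):

```python
from sympy import Matrix

# Three flattened 2x2 matrices as the columns of a 4x3 matrix;
# the third column is the sum of the first two.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 0, 0],
            [2, 1, 3]])

rref_form, pivots = A.rref()
print(len(pivots))  # 2: only two of the three vectors are independent
```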
 
How about for the space of continuous functions? Polynomials? Does the method ever break down?
 
The question of linear independence of a finite number of vectors can be thought of as asking about solutions to the equation ##c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}##, where ##\mathbf{v}_i \in V## are the vectors you are testing for linear independence and ##c_i \in F## are scalars in your field ##F##. The vectors ##\mathbf{v}_i## are linearly independent if and only if the only solution is ##c_1 = c_2 = \cdots = c_n = 0##. Said another way: ##c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0} \Rightarrow c_1 = c_2 = \cdots = c_n = 0##.

So to test a set of vectors for linear independence, you set up the above equation and check whether all the scalars ##c_i## must be zero. How exactly you go about checking that depends on which vector space you're dealing with. As for the case of infinitely many vectors I'm not 100% sure, so I won't comment.
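To illustrate this in a vector space other than ##\mathbb{R}^n##, here is a sketch for polynomials of degree at most 2, encoded by their coefficient vectors (an encoding chosen here for illustration; the thread fixes no particular representation):

```python
import numpy as np

# p1 = 1 + x, p2 = 1 - x, p3 = x^2, as (constant, x, x^2) coefficients.
p1 = np.array([1.0, 1.0, 0.0])
p2 = np.array([1.0, -1.0, 0.0])
p3 = np.array([0.0, 0.0, 1.0])

# c1*p1 + c2*p2 + c3*p3 = 0 has only the trivial solution
# exactly when the coefficient matrix has full column rank.
A = np.column_stack([p1, p2, p3])
print(np.linalg.matrix_rank(A))  # 3: independent, only the trivial solution
```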
 
What Gamble93 gives is the usual definition of "linear independence". Requiring that the matrix formed from the vectors have a non-zero determinant is a more specific property.
 
I realized after my post that I should have included that a non-zero determinant implying linear independence is a specific case that applies to ##\mathbb{R}^n##, but I was late for class so it slipped my mind. Excuse my ignorance.
 