Complicated definitions of linear independence

In summary: my teacher gave us an intuitive idea of what it means for two vectors in \mathbb{R}^2 to be linearly independent (they aren't multiples of each other) and for three vectors in \mathbb{R}^3 (they don't all lie in the same plane). However, the book has generalized the idea of linear independence to n dimensions rather than just two or three, and the definitions are hard to make sense of.
  • #1
samh
My teacher gave us an intuitive idea of what it means for two vectors in [tex]\mathbb{R}^2[/tex] to be linearly independent (they aren't multiples of each other) and for three vectors in [tex]\mathbb{R}^3[/tex] (they aren't on the same plane).

Now the book has generalized the idea of linear independence to n dimensions rather than just two or three, and the definitions are hard to make sense of. I've found two definitions; here they are:

  • Definition 1: A set of vectors v1, ..., vn is linearly independent iff every element of Span{v1, ..., vn} has only ONE representation as a linear combination of v1, ..., vn.
  • Definition 2: A set of vectors v1, ..., vn is linearly independent iff the ONLY choice of scalars a1, ..., an that makes a1v1 + ... + anvn equal 0 is a1 = ... = an = 0.

First off, I have NO idea how these two definitions are equivalent. How can every element of the span having only one representation have anything to do with there being only one way to make a linear combination equal 0?

What's also driving me crazy is that I can't see how you'd get these definitions from the one I gave in my first paragraph. I mean HISTORICALLY speaking people must have thought of linear independence as I described in my first paragraph and from THAT wrote a generalized definition. How would they have gone from there to Definition 1 and/or Definition 2?
 
Last edited:
  • #2
Probably your textbook will give proofs of these things, but I can summarize. First I will show that the second definition matches the intuitive idea. Suppose that a1v1 + ... + anvn = 0 with not all of the a's equal to 0; relabeling the vectors if necessary, say a1 ≠ 0. Rearrange this equation to:
-a1v1 = a2v2 + ... + anvn
Now for [tex]\mathbb{R}^2[/tex] it is obvious that this means the first vector is a multiple of the other: just divide by -a1, which you can do since it is not 0. For [tex]\mathbb{R}^3[/tex] this equation says that the first vector is a linear combination of the other two, a.k.a. they lie on the same plane (I don't know how much geometry you know, I can elaborate if this is not clear to you).

Now to show that the two definitions are equivalent. Suppose we had some element of the span that could be expressed in two ways as a linear combination:
v = a1v1 + ... + anvn = b1v1 + ... + bnvn
Subtract the rightmost part of the equation:
a1v1 + ... + anvn - (b1v1 + ... + bnvn) = 0
(a1 - b1)v1 + ... + (an - bn)vn = 0
So if the a's are not all equal to the corresponding b's, then there is a linear combination of the vectors that gives 0 without all the coefficients being 0. In other words, if Definition 1 fails then Definition 2 fails, which (taking the contrapositive) shows 2 -> 1.

To show 1 -> 2 (again by contrapositive), notice that if you had
0 = a1v1 + ... + anvn
with the a's not all 0, and you had
v = b1v1 + ... + bnvn,
then you could add the first equation to the second and you would still have v on the left, but a new representation on the right.
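
(Not part of the original post, but here is a minimal numerical sketch of that argument in Python with NumPy, using made-up vectors: a nontrivial combination that equals 0 is exactly what lets you turn one representation of v into a different one.)

[code]
import numpy as np

# three dependent vectors in R^3 (made up): v3 = v1 + v2
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

# a nontrivial combination that gives 0:  1*v1 + 1*v2 + (-1)*v3 = 0
print(1*v1 + 1*v2 - 1*v3)              # [0. 0. 0.]

# take any representation of some v in the span ...
v = 2*v1 + 3*v2 + 0*v3
# ... and add the zero combination to it: same v, different coefficients
v_again = (2 + 1)*v1 + (3 + 1)*v2 + (0 - 1)*v3
print(np.allclose(v, v_again))         # True
[/code]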

It is a good habit to be interested in the proofs of these things; it helps a lot with understanding. Hopefully you have a good textbook.
 
  • #3
technically speaking, it is easier to rephrase things in terms of zero, since zero is easier to recognize and work with.

e.g. a pair of vectors is dependent if one is a multiple of the other, i.e. v = cw, but this can be rephrased as v - cw = 0. so we get the statement that there exist numbers a, b such that av + bw = 0.

however if we use that definition we are stuck a bit, since then 0v + 0w = 0 too, even if v and w are independent.

to be able to go from av + bw = 0 back to v = cw, we need to solve for v, so we need, say, a not 0. then av + bw = 0 gives v + (b/a)w = 0, so v = -(b/a)w. or to put it backwards: the only way you cannot solve for some one of them is if all the coefficients are zero.

so we get the definition: v1, ..., vn are independent if there is no relation a1v1 + ... + anvn = 0 from which you can solve for one of the v's in terms of the others,

i.e. if the only way that a1v1 + ... + anvn = 0 is if all the ai = 0. and so on...
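
A concrete instance of that rearrangement, with numbers made up purely for illustration:
[tex]3v + 6w = 0 \;\Rightarrow\; v = -\tfrac{6}{3}w = -2w,[/tex]
so v and w are dependent; from 0v + 0w = 0, on the other hand, you cannot solve for either vector, which is why the definition has to single out the all-zero coefficients.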

does this help?
 
  • #4
I would like to add:

in many cases, when we have an addition operation at hand, saying two things are equal is the same as saying that the difference between them is zero.

This is just restating what mathwonk said, I guess.

I tend to call this idea 'homogenization', though I doubt that that is a good term.

You meet this idea all the time, such as when solving linear simultaneous equations.
 
  • #5
Thanks guys it makes more sense now.
 
  • #6
Well, again I've run into another issue... I'm running into a lot of problems where I'm required to find if some vectors are independent and often the criteria I see used for solving the problem is taking a determinant.

It's said that if you make a matrix out of some vectors and find that the determinant of that matrix is nonzero, then those vectors are independent (example). But I can't figure out why this is! I've been working at it and I'm thinking it has something to do with the fact that matrix inverses exist iff the determinant is nonzero, but I'm not seeing the connection. Why is this true? Can you prove that the vectors are independent if and only if the determinant is nonzero? I can't find an explanation or proof anywhere and I'm starting to go crazy. Thanks to anyone that helps.
 
  • #7
Take n vectors in R^n. Let M be the nxn matrix formed by using them as the columns (it has to be nxn for this to make sense). If the vectors are dependent, there is a relation amongst them. You can use the relation to find a vector v=/=0 so that Mv=0, which is if and only if det(M)=0.
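
(A small illustration of this, not from the original post; the specific columns and relation are made up. NumPy's determinant is floating-point, so "zero" shows up only up to rounding.)

[code]
import numpy as np

# three columns in R^3 with a relation: c3 = 2*c1 - c2
c1 = np.array([1.0, 0.0, 1.0])
c2 = np.array([0.0, 1.0, 1.0])
c3 = 2*c1 - c2

M = np.column_stack([c1, c2, c3])

# the relation 2*c1 - c2 - c3 = 0 packaged as a nonzero vector v with Mv = 0
v = np.array([2.0, -1.0, -1.0])
print(M @ v)               # [0. 0. 0.]
print(np.linalg.det(M))    # 0.0 (up to floating-point rounding)
[/code]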
 
  • #8
samh, the determinant issue is fairly easy to understand. We assume you have as many vectors as the dimension (so you have a square matrix). Note that if you have more vectors than the dimension, they are automatically dependent.

So, you set up your matrix and want to calculate the determinant. Now, you know that you can add a multiple of one row to another and still get the same value, right?

If the vectors are linearly dependent, then that means that one of them can be expressed as a linear combination of the others. This was discussed above.

The conclusion of this is that by adding multiples of rows to other rows you can make one of the rows all 0's. Do you follow this? That row, or vector, is a combination of the others, which means that by adding some combination of multiples of the other rows you can reproduce it exactly; then just subtract instead of add, of course, and that row becomes all 0's.

If one row is all 0's, the determinant is automatically 0 (basic linear algebra).

Since your row operations didn't change the value of the determinant, it must have been 0 to start with for any set of dependent vectors.

Does this make sense?

I would like to point out that taking the determinant without a calculator is not always the most efficient test with big matrices, in my opinion, since the calculations can get out of hand. You can instead manipulate the matrix directly using elementary row operations, reducing it to row echelon form or reduced row echelon form and checking whether any row becomes all 0's. This method will work even if you have fewer vectors than your dimension (the determinant only works if they are equal).

But for 2x2 and 3x3 it's pretty quick to take the determinant.
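
(A rank-based sketch of that idea in Python/NumPy, not from the original post: numerically, matrix_rank plays the role of counting the nonzero rows you would get from row reduction. The example vectors are made up.)

[code]
import numpy as np

# two made-up vectors in R^3, stacked as the rows of a matrix
vectors = np.array([
    [1.0, 2.0, 3.0],
    [0.0, 1.0, 1.0],
])

# rank = number of independent rows; it equals the number of vectors
# exactly when the vectors are linearly independent
print(np.linalg.matrix_rank(vectors) == len(vectors))   # True
[/code]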
 
  • #9
the determinant of a matrix is plus or minus the n-dimensional volume of the block spanned by the columns. This follows immediately from the change of variables formula from several-variable calculus (and may be used to prove that formula).

Anyway, if the columns are dependent, the block they span has dimension lower than n, hence has n-dimensional volume zero.

that's why the determinant being zero tells you whether they are dependent.

you might check that for a pair of vectors in the plane, the parallelogram they span has area equal to the absolute value of the determinant of the matrix with them as columns.

formulas for actually computing determinants are very complicated, but the idea is very simple.
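
(A quick numerical check of the parallelogram statement, with vectors chosen so the area is easy to see by hand; this is just an illustration, not part of the original post.)

[code]
import numpy as np

# parallelogram spanned by (3, 0) and (1, 2): base 3 along the x-axis,
# height 2, so its area is 3 * 2 = 6
M = np.array([[3.0, 1.0],
              [0.0, 2.0]])          # the two vectors as columns
print(abs(np.linalg.det(M)))        # 6.0
[/code]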

there is also an algebraic version of this, where determinants are thought of as a sort of n variable multiplication which is "alternating" (see the thread on differential forms).

that means when you interchange two arguments, the determinant changes sign.

it follows that when two arguments are equal the determinant is always zero, and, since you can also subtract one argument from another without changing the value, that it is zero also if one of the arguments is zero. now dependency of the family of arguments leads to being able to express one in terms of the others, and then to make one argument zero.

so the sign-change property of determinants also forces them to be zero on dependent sets of arguments.
 
  • #10
actually computing determinants is often done fastest by diagonalizing the matrix, or rather just triangulating it, by row and column operations.

i.e. to compute determinants, first learn that the det of a triangular matrix equals the product of the diagonal entries, then learn to use gauss elimination to triangulate any matrix.

if there are any zeroes on the diagonal at the end the det is zero, otherwise not.

to actually calculate the precise det, not just whether it is zero or not, you must keep a record of the steps used to triangulate the matrix.
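
(A rough Python sketch of that recipe, written for this thread rather than taken from it: triangulate by gaussian elimination, keep track of the sign flips from row swaps, then multiply the diagonal. Adding a multiple of one row to another leaves the determinant unchanged; swapping two rows flips its sign.)

[code]
import numpy as np

def det_by_elimination(A):
    """Determinant via gaussian elimination (triangulate, then multiply the diagonal)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        # choose a pivot: a row at or below k with a nonzero entry in column k
        pivot = k + np.argmax(np.abs(A[k:, k]))
        if np.isclose(A[pivot, k], 0.0):
            return 0.0                      # the whole column is 0 below the diagonal -> det is 0
        if pivot != k:
            A[[k, pivot]] = A[[pivot, k]]   # row swap: determinant changes sign
            sign = -sign
        for i in range(k + 1, n):
            A[i] -= (A[i, k] / A[k, k]) * A[k]   # row operation: determinant unchanged
    return sign * np.prod(np.diag(A))

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(det_by_elimination(A), np.linalg.det(A))   # both should print 8.0 (up to rounding)
[/code]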
 
  • #11
I'm pretty dumb...I've got a few more questions. My brain doesn't handle matrices very well..

I see your point, gonzo, about how the determinant must be 0 when the columns are dependent. So in other words I can see that dependent columns imply a zero determinant. But... how can I show that a zero determinant implies dependent columns? How do I know beyond any doubt that there is absolutely NO situation in which you can get a zero determinant and still have no dependent columns?

mathwonk: I'm only looking for algebraic explanations but that geometric one was interesting so thanks anyway.

matt grime: I don't understand how you got this:
You can use the relation to find a vector v=/=0 so that Mv=0, which is if and only if det(M)=0.
Why is it if and only if det(M)=0?

----------------------------------
Thanks for helping me out.
 
  • #12
Samh, the reverse is easy too. Besides adding a multiple of one row to another, you can also switch two rows, or multiply a row by a non-zero constant, without changing the "zeroness" of the determinant.

This means you can easily triangulate the matrix without changing the "zeroness". The determinant is then the product of the diagonal entries, which is zero only if one of those entries is zero, which (continuing the reduction) means one whole row becomes zero.
 
  • #13
samh said:
matt grime: I don't understand how you got this: Why is it if and only if det(M)=0?
I have to ask you what definition of determinant you are using if you don't see it is immediate that:

(there is a non zero v with Mv=0)

(det(M)=0)

are equivalent statements.

And that they are both equivalent to

(M is not invertible).

Do you see that when you multiply a matrix by a vector, Mv, the result is a linear combination of the columns of M? If not then look a bit harder. From this it is clear that linear dependence of the columns (and hence, for a square matrix, of the rows) is equivalent to the first statement above, which is equivalent to the other two.
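
(To see the "Mv is a combination of the columns" point concretely, here is a tiny made-up example in Python/NumPy; it is an illustration, not part of the post above.)

[code]
import numpy as np

M = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
v = np.array([10.0, -2.0])

# M @ v is exactly v[0] * (first column) + v[1] * (second column)
print(M @ v)
print(v[0]*M[:, 0] + v[1]*M[:, 1])   # the same vector
[/code]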
 
Last edited:
  • #14
samh said:
How do I know beyond any doubt that there is absolutely NO situation in which you can get a zero determinant and still have no dependent columns?

AA^-1=I

det(AA^-1)=det(I)
det(A)*det(A^-1)=det(I)
det(A)*det(A^-1)=1

therefore det(A) must be different from zero, since if it were zero the left-hand side would be zero rather than one. (In other words: if A is invertible then det(A) ≠ 0; taking the contrapositive, det(A) = 0 means A is not invertible.)
 
  • #15
well algebraically, independent columns means you can do row and column operations until the matrix is the identity. then the determinant is 1, but the operations you did caused the determinant to change at worst by multiplication by a nonzero scalar, so it was nonzero before.
 
  • #16
mathwonk said:
well algebraically, independent columns means you can do row and column operations until the matrix is the identity. then the determinant is 1, but the operations you did caused the determinant to change at worst by multiplication by a nonzero scalar, so it was nonzero before.
i don't get it...
is the proof of the equality det(AB) = det(A)*det(B) under the assumption that det(A) and det(B) are different from zero?
cuz if it isn't, i can't see why it wouldn't be valid?
 
  • #17
TuviaDaCat said:
i don't get it...
is the proof of the equality det(AB) = det(A)*det(B) under the assumption that det(A) and det(B) are different from zero?
cuz if it isn't, i can't see why it wouldn't be valid?
No, det(AB)= det(A)det(B) is true for any matrices A, B such that AB is defined.
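
(A quick numerical spot-check, not from the post above: the product rule holds even when one of the determinants is zero. The matrices are made up for illustration.)

[code]
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # dependent rows, so det(A) = 0
B = np.array([[0.0, 1.0],
              [3.0, 5.0]])    # det(B) = -3

print(np.linalg.det(A @ B))                  # ~0.0
print(np.linalg.det(A) * np.linalg.det(B))   # ~0.0 as well
[/code]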
 
  • #18
HallsofIvy said:
No, det(AB)= det(A)det(B) is true for any matrices A, B such that AB is defined.
so we know that A*A^-1 = I is valid for any regular (invertible) A,
and that det(AA^-1) = det(A)*det(A^-1) = 1 for any regular A,
so det(A) cannot be zero.

i don't see how this proof is assuming in the first place that det(A) is zero?
 
Last edited:

1. What is the definition of linear independence?

Linear independence is a concept in linear algebra that describes the relationship between a set of vectors. It means that no vector in the set can be written as a linear combination of the other vectors in the set.

2. How is linear independence different from linear dependence?

Linear dependence occurs when one or more vectors in a set can be expressed as a linear combination of the other vectors in the set. This means that the vectors are not independent: at least one of them can be written as a linear combination of the others.

3. Can you give an example of a set of vectors that are linearly independent?

One example of a linearly independent set of vectors is {(1,0,0), (0,1,0), (0,0,1)}. Each vector is unique and cannot be written as a combination of the others.

4. How does linear independence relate to the dimension of a vector space?

The dimension of a vector space is the number of vectors in a basis, that is, in a linearly independent set that spans the space. Equivalently, it is the maximum number of linearly independent vectors the space can contain.

5. Why is linear independence an important concept in mathematics and science?

Linear independence is crucial in many areas of mathematics and science, particularly in linear algebra and differential equations. It allows us to understand the relationships between vectors and solve complex problems by breaking them down into simpler, independent components.
