# Complicated definitions of linear independence

1. Sep 22, 2006

### samh

My teacher gave us an intuitive idea of what it means for two vectors in $$\mathbb{R}^2$$ to be linearly independent (they aren't multiples of each other) and for three vectors in $$\mathbb{R}^3$$ (they aren't on the same plane).

Now the book has generalized the idea of linear independence to n dimensions rather than just two or three, and the definitions are hard to make sense of. I've found two definitions; here they are:

• Definition 1: A set of vectors v1,...,vn is linearly independent iff every element of Span{v1,...,vn} has only ONE representation as a linear combination of v1,...,vn.
• Definition 2: A set of vectors v1,...,vn is linearly independent iff the ONLY choice of scalars a1,...,an that makes a1v1 + ... + anvn equal 0 is a1 = ... = an = 0.

First off, I have NO idea how these two definitions are equivalent. How can the linear combinations having only one representation have anything to do with there being only one way to make a linear combination equal 0?

What's also driving me crazy is that I can't figure out how you'd get these definitions from the one I gave in my first paragraph. I mean HISTORICALLY speaking people must have thought of linear independence as I described in my first paragraph and from THAT wrote a generalized definition. How would they have gone from there to Definition 1 and/or Definition 2?

Last edited: Sep 22, 2006
2. Sep 22, 2006

### LeonhardEuler

Probably your textbook will give proofs of these things, but I can summarize. First I will show that the second definition matches the intuitive idea. Suppose that a1v1 + ... + anvn = 0 with not all of the a's equal to 0; say a1 ≠ 0 (relabel the vectors if necessary). Rearrange this equation to:
-a1v1 = a2v2 + ... + anvn
Now for $$\mathbb{R}^2$$ it is obvious that this means one vector is a multiple of the other: just divide by -a1, which you can do since it is not 0. For $$\mathbb{R}^3$$ this equation says that the first vector is a linear combination of the other two, i.e. they lie on the same plane (I don't know how much geometry you know; I can elaborate if this is not clear to you).

Now to show that the two definitions are equivalent. Suppose we had some element of the span that could be expressed in two ways as a linear combination:
v = a1v1 + ... + anvn = b1v1 + ... + bnvn
Subtract the rightmost part of the equation:
a1v1 + ... + anvn - (b1v1 + ... + bnvn) = 0
(a1-b1)v1 + ... + (an-bn)vn = 0
So if the a's are not all equal to the b's, then there is a linear combination of the vectors that gives 0 without all the coefficients being 0. By contraposition, this shows that Definition 2 implies Definition 1.

For the other direction, notice that if you had
0 = a1v1 + ... + anvn
with not all of the a's equal to 0, and you had
v = b1v1 + ... + bnvn
then you could add the first equation to the second and you would still have v on the left, but a new representation on the right. By contraposition, Definition 1 implies Definition 2.

It is a good habit to be interested in the proofs of these things, it helps a lot with understanding. Hopefully you have a good textbook.
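The equivalence above can be seen concretely with numbers. Here is a minimal Python sketch (the example vectors and helper names are mine, not from the thread): for the dependent pair v1 = (1, 2), v2 = (2, 4), the same vector in the span has two different coefficient representations (failing Definition 1), and subtracting them gives a nontrivial combination equal to zero (failing Definition 2).

```python
# Dependent pair: v2 = 2 * v1, so both definitions of independence fail.

def lin_comb(coeffs, vectors):
    """Return sum of coeffs[i] * vectors[i], componentwise, as a tuple."""
    dim = len(vectors[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors))
                 for i in range(dim))

v1, v2 = (1, 2), (2, 4)

rep_a = lin_comb([1, 0], [v1, v2])    # 1*v1 + 0*v2
rep_b = lin_comb([-1, 1], [v1, v2])   # -1*v1 + 1*v2

print(rep_a, rep_b)   # both are (1, 2): two representations of one vector

# Subtracting the representations gives 2*v1 - 1*v2 = 0 with nonzero
# coefficients, which is exactly the failure of Definition 2.
print(lin_comb([2, -1], [v1, v2]))    # (0, 0)
```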

3. Sep 23, 2006

### mathwonk

technically speaking, it is easier to rephrase things in terms of zero, since zero is easier to recognize and work with.

e.g. a pair of vectors is dependent if one is a multiple of the other, meaning that v = cw, but this can be rephrased as v - cw = 0. so we get the statement that there exist numbers a, b such that av + bw = 0.

however if we use that definition we are stuck a bit, since then 0v + 0w = 0 too, even if v and w are independent.

to be able to go from av + bw = 0 to v = cw, we need to solve for v, so we need, say, a not 0; then av + bw = 0 gives v + (b/a)w = 0, so v = -(b/a)w.

or to put it backwards, the only way you cannot solve for some one of them is if all the coefficients are zero.

so we get the definition: v1,...,vn are independent if there is no linear combination a1v1 + ... + anvn = 0 where you can solve for one of the v's in terms of the others,

i.e. if the only way that a1v1 + ... + anvn = 0 is if all the ai = 0.

and so on....

does this help?

4. Sep 23, 2006

### matt grime

in many cases, when we have an addition operation at hand, saying two things are equal is the same as saying that the difference between them is zero.

This is just restating what mathwonk said, I guess.

I tend to call this idea 'homogenization', though I doubt that that is a good term.

You meet this idea all the time, such as when solving linear simultaneous equations.

5. Sep 25, 2006

### samh

Thanks guys it makes more sense now.

6. Oct 5, 2006

### samh

Well, again I've run into another issue... I'm running into a lot of problems where I'm required to determine whether some vectors are independent, and often the criterion I see used for solving the problem is taking a determinant.

It's said that if you make a matrix out of some vectors and find that the determinant of that matrix is nonzero, then those vectors are independent (example). But I can't figure out why this is! I've been working at it and I'm thinking it has something to do with the fact that matrix inverses exist iff the determinant is nonzero, but I'm not seeing the connection. Why is this true? Can you prove that the vectors are independent if and only if the determinant is nonzero? I can't find an explanation or proof anywhere and I'm starting to go crazy... thanks to anyone that helps.

7. Oct 5, 2006

### matt grime

Take n vectors in R^n. Let M be the nxn matrix formed by using them as the columns (it has to be nxn for this to make sense). If the vectors are dependent, there is a relation amongst them. You can use the relation to find a vector v ≠ 0 such that Mv = 0, which happens if and only if det(M) = 0.
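This observation is easy to check numerically; here is a minimal 2x2 sketch in Python (the example matrix and helper names are mine, not from the thread): the dependence relation among the columns becomes a nonzero null vector, and the determinant is 0.

```python
# Columns (1, 3) and (2, 6) satisfy the relation 2*(1,3) - 1*(2,6) = 0,
# so M @ (2, -1) = 0 and det(M) = 0.

def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matvec(M, v):
    """Matrix-vector product: a linear combination of the columns of M."""
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

M = [[1, 2],
     [3, 6]]            # dependent columns

print(matvec(M, (2, -1)))   # (0, 0): the relation, packaged as a null vector
print(det2(M))              # 0
```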

8. Oct 5, 2006

### gonzo

samh, the determinant issue is fairly easy to understand. We assume the number of vectors equals the dimension (so you have a square matrix). Note that if you have more vectors than the dimension, they are automatically dependent.

So, you set up your matrix and want to calculate the determinant. Now, you know that you can add a multiple of one row to another and still get the same value, right?

If the vectors are linearly dependent, then that means that one of them can be expressed as a linear combination of the others. This was discussed above.

The conclusion of this is that by adding multiples of rows to other rows you can make one of the rows all 0's. Do you follow this? That row, or vector, is a combination of the others, so some combination of multiples of the other rows gives that same row (then just subtract instead of add, of course), and this leaves all 0's in that row.

If one row is all 0's, the determinant is automatically 0 (basic linear algebra).

Since your row operations didn't change the value of the determinant, it must have been 0 to start with for any set of dependent vectors.

Does this make sense?

I would like to point out that taking the determinant without a calculator is not always the most efficient test with big matrices, in my opinion, since the calculations can get out of hand. You can instead manipulate the matrix directly using elementary row operations to see if you can get it into row echelon form or reduced row echelon form. This method will work even if you have fewer vectors than the dimension (the determinant only works if they are equal).

But for 2x2 and 3x3 it's pretty quick to take the determinant.
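The row-operation test described above can be sketched in code. This is a minimal Python illustration (the implementation and names are mine, not from the thread): eliminate below each pivot and count the nonzero rows; the vectors are independent exactly when that count (the rank) equals the number of vectors. `Fraction` keeps the arithmetic exact.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination; rows is a list of equal-length lists."""
    A = [[Fraction(x) for x in row] for row in rows]
    r, ncols = 0, len(A[0])
    for col in range(ncols):
        # Find a row at or below r with a nonzero entry in this column.
        pivot = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # Adding a multiple of one row to another preserves dependence.
        for i in range(r + 1, len(A)):
            factor = A[i][col] / A[r][col]
            A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

independent = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
dependent = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # row 2 = 2 * row 1

print(rank(independent))   # 3: independent
print(rank(dependent))     # 2: a zero row appears, so dependent
```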

9. Oct 5, 2006

### mathwonk

the determinant of a matrix is plus or minus the n-dimensional volume of the block spanned by the columns. This follows immediately from the change-of-variables formula from several-variable calculus (and may be used to prove that formula).

Anyway, if the columns are dependent, the block they span has lower dimension than n, hence has n-dimensional volume zero.

that's why the determinant being zero tells you whether they are dependent.

you might check that for a pair of vectors in the plane, the parallelogram they span has area equal to the absolute value of the determinant of the matrix with them as columns.
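That check is quick to carry out numerically; here is a small Python sketch (the example vectors are mine): v = (3, 0) and w = (1, 2) span a parallelogram with base 3 along the x-axis and height 2, so area 3 * 2 = 6, matching |det|.

```python
def det2(v, w):
    """Determinant of the 2x2 matrix with columns v and w."""
    return v[0] * w[1] - v[1] * w[0]

v, w = (3, 0), (1, 2)
print(abs(det2(v, w)))             # 6, the parallelogram's area

# Degenerate case: parallel vectors span a segment, which has area 0.
print(abs(det2((1, 2), (2, 4))))   # 0
```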

formulas for actually computing determinants are very complicated, but the idea is very simple.

there is also an algebraic version of this, where determinants are thought of as a sort of n variable multiplication which is "alternating" (see the thread on differential forms).

that means when you interchange two arguments, the determinant changes sign.

it follows that when two arguments are equal the determinant is always zero, and since you can also subtract one argument from another without changing the value, it is zero also if one of the arguments is zero. now dependency of the family of arguments leads to being able to express one in terms of the others, and then to make one argument zero.

so the sign change property of determinants also forces them to be zero on dependent sets of arguments.

10. Oct 5, 2006

### mathwonk

actually computing determinants is often done fastest by diagonalizing the matrix, or rather just triangulating it, by row and column operations.

i.e. to compute determinants, first learn that the det of a triangular matrix equals the product of the diagonal entries, then learn to use gaussian elimination to triangulate any matrix.

if there are any zeroes on the diagonal at the end the det is zero, otherwise not.

to actually calculate the precise det, not just whether it is zero or not, you must keep a record of the steps used to triangulate the matrix.
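The procedure just described can be sketched as follows (the implementation is mine, not from the thread): triangulate by Gaussian elimination, recording row swaps, since each swap flips the determinant's sign, while adding a multiple of one row to another leaves it unchanged; the determinant is then the signed product of the diagonal.

```python
from fractions import Fraction

def det(rows):
    """Determinant of a square matrix via triangulation, with exact arithmetic."""
    A = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(A), 1
    for col in range(n):
        # Find a pivot at or below the diagonal in this column.
        pivot = next((i for i in range(col, n) if A[i][col] != 0), None)
        if pivot is None:
            return Fraction(0)      # zero on the diagonal: det is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign            # a row swap flips the sign
        # Eliminate below the pivot; this does not change the determinant.
        for i in range(col + 1, n):
            factor = A[i][col] / A[col][col]
            A[i] = [a - factor * b for a, b in zip(A[i], A[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]           # product of the diagonal entries
    return result

print(det([[0, 1], [1, 0]]))                   # -1 (one swap)
print(det([[2, 1, 0], [1, 2, 1], [0, 1, 2]]))  # 4
print(det([[1, 2], [2, 4]]))                   # 0: dependent rows
```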

11. Oct 6, 2006

### samh

I'm pretty dumb... I've got a few more questions. My brain doesn't handle matrices very well...

I see your point, gonzo, about how the determinant must be 0 when the columns are dependent. So in other words I can see that dependent columns imply a zero determinant. But... how can I show that a zero determinant implies dependent columns? How do I know beyond any doubt that there is absolutely NO situation in which you can get a zero determinant and still have no dependent columns?

mathwonk: I'm only looking for algebraic explanations but that geometric one was interesting so thanks anyway.

matt grime: I don't understand how you got the last step: why is Mv = 0 (for some v ≠ 0) if and only if det(M) = 0?

----------------------------------
Thanks for helping me out.

12. Oct 7, 2006

### gonzo

Samh, the reverse is easy too. Besides adding a multiple of one row to another, you can also switch two rows without changing the "zeroness" of the determinant, and multiply one row by a non-zero constant without changing the "zeroness" of the determinant.

This means you can easily diagonalize the matrix without changing the "zeroness". The determinant is then the product of the diagonal entries, which is only zero if one of these entries is zero, which means one whole row is zero.

13. Oct 7, 2006

### matt grime

I have to ask what definition of determinant you are using if you don't see immediately that:

(there is a non zero v with Mv=0)

(det(M)=0)

are equivalent statements.

And that they are both equivalent to

(M is not invertible).

Do you see that when you multiply a matrix by a vector, Mv, the result is a linear combination of the columns of M? If not, then look a bit harder. From this it is clear that linear dependence of the columns (and hence, for a square matrix, of the rows) is equivalent to the first statement above, which is equivalent to the other two.

Last edited: Oct 7, 2006
14. Oct 7, 2006

AA^-1 = I

det(AA^-1) = det(I)
det(A)*det(A^-1) = det(I)
det(A)*det(A^-1) = 1

therefore det(A) must be different from zero, since if it were zero the left-hand side would be zero, not one.
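The argument can be checked on a concrete 2x2 example (the matrix and helper names are mine, not from the thread):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    """Inverse of an invertible 2x2 matrix via the adjugate formula."""
    d = det2(M)
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

A = [[2.0, 1.0],
     [1.0, 1.0]]        # det(A) = 2*1 - 1*1 = 1

# det(A) * det(A^-1) = det(I) = 1, so det(A) cannot be zero.
print(det2(A) * det2(inv2(A)))    # 1.0
```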

15. Oct 7, 2006

### mathwonk

well, algebraically, independent columns means you can do row and column operations until the matrix is the identity. then the determinant is 1, but the operations you did changed the determinant at worst by multiplication by a nonzero scalar, so it was nonzero before.

16. Oct 7, 2006

i don't get it...
is the proof of the equality det(AB) = det(A)*det(B) done under the assumption that det(A) and det(B) are different from zero?
cuz if it isn't, i can't see how it is valid.

17. Oct 7, 2006

### HallsofIvy

No, det(AB)= det(A)det(B) is true for any matrices A, B such that AB is defined.
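A quick numeric check of this for 2x2 matrices (the example matrices are mine), including the case det(A) = 0 asked about above:

```python
def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [2, 4]]     # singular: det(A) = 0
B = [[3, 1], [0, 2]]     # det(B) = 6

# The identity det(AB) = det(A)*det(B) holds even when det(A) = 0.
print(det2(matmul2(A, B)), det2(A) * det2(B))   # 0 0
```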
