Is Linear Independence Over Z the Same as Linear Independence Over R?

In summary: if A is the matrix whose columns are the v's, then through a series of elementary integer matrix multiplications we get a diagonal matrix of the form [tex]A' = \left(\begin{array}{cc}D&0\\0&0\end{array}\right)[/tex] where D is a diagonal matrix in which each entry divides the next (this is by the theorem). Then A' must have the same rank as A, i.e. there must be n diagonal entries in D (elementary integer matrices preserve rank). Using elementary matrix operations over R, we can then turn this into [tex]A'' = \left(\begin{array}{cc}I_n&0\\0&0\end{array}\right)[/tex], which has rank n, so the columns of A are linearly independent over R.
  • #1
T-O7
Hey all,
I need to show whether or not the following statement is true:

For [tex]v_1,...,v_n\in Z^m[/tex], the set [tex]\{v_1,...,v_n\}[/tex] is linearly independent over Z [tex]\Leftrightarrow[/tex] it is linearly independent over R.

The reverse direction is true of course, but I'm having some trouble showing whether or not the forward direction is true. I'm pretty sure that if R were replaced by Q, then the statement would be true, so the question boils down to irrational coefficients, I think. I'm stuck at this point. Any thoughts? :smile:
 
  • #2
Try a simpler case first. What about m = 1? m = 2?
 
  • #3
okay, i get a little confused even with the case m=1.
For the case m=1 we consider the vectors v as just elements in Z, and the question is: if they're linearly independent over Z, are they linearly independent over R? My issue right now though is that they can't even be linearly independent over Z to begin with! Just taking two vectors v1 and v2, the linear combination
[tex]v_2(v_1) - v_1(v_2) = 0[/tex]
shows they're dependent. So...for the case m=1, does that mean n must be precisely 1? In that case, if {v1} is linearly independent over Z (i.e. v1 ≠ 0), then it's of course linearly independent over R. Does this make any sense?
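For a concrete instance of that combination (an illustrative check, not from the original post): with v1 = 4 and v2 = 6 we get [tex]6\cdot 4 - 4\cdot 6 = 0[/tex], a nontrivial integer dependence even though neither integer is zero.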
 
  • #4
looks good. now try it for vectors in Z^2. i think i got it for two vectors there anyway.
 
  • #5
Think about the matrix whose columns are the vectors. The row operations you do to see if they are linearly independent can all be done over Z, if the entries are all in Z.
 
  • #6
Ah, I hadn't thought of that!
So following what matt suggested, if A is the matrix whose columns are the v's, we get a diagonal matrix through a series of elementary matrix multiplications, which don't affect the rank of A, and so if the columns are independent (over Z), then A must have determinant ±1 if it's to be invertible (over Z). Since this also implies invertibility over R (det A = ±1 ≠ 0), the columns are linearly independent over R. I think that works? Thanks for the tip!
 
  • #7
No, that isn't it. The two vectors (1,0) and (0,2) are LI in Z, and in R, but the matrix formed by them is not invertible as a matrix over Z.
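A quick numerical look at that example (a minimal Python sketch assuming sympy is available; this is an added illustration, not part of the original posts):

[code]
# Post #7's example: the columns (1,0) and (0,2) are independent over Z and over R,
# yet the matrix they form is not invertible over Z (its determinant is 2, not ±1).
from sympy import Matrix

A = Matrix([[1, 0],
            [0, 2]])

print(A.det())    # 2  -> nonzero, so the columns are linearly independent over R
print(A.rank())   # 2  -> full rank
print(A.inv())    # Matrix([[1, 0], [0, 1/2]]) -> an inverse exists over Q, but not over Z
[/code]

So full rank (independence) and invertibility over Z are genuinely different conditions.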
 
  • #8
...hmph. So the fact that an n×n matrix is invertible iff it has rank n is false over Z? I think i might've also made the mistake of assuming m=n in my argument above...
 
  • #9
Invertible over Z is different from having full rank, yes - you may not divide rows by arbitrary numbers anymore to get a matrix of determinant 1.
 
  • #10
that's a nice idea. if i get it, i think Matt is saying this:

suppose an integer matrix has Z independent rows. then no row operations using only integers can ever eliminate a row. (you have to prove this.)

But there are row operations using only integers that put the matrix in echelon form, not necessarily reduced echelon form, since no dividing is allowed. (You have to prove this too.)

It is then clear that the rows are also R independent. (You also have to prove this.)
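One way to realize that outline concretely (an added Python sketch using only the standard library; the function name and the example matrix are my own, not from the thread) is integer-only row reduction: the only operations used are row swaps and adding an integer multiple of one row to another, so no division ever happens.

[code]
# Integer-only row reduction to (unreduced) echelon form.  Every intermediate
# matrix still has integer entries, and the operations used are invertible over Z.
def integer_echelon(rows):
    A = [list(r) for r in rows]
    m = len(A)
    n = len(A[0]) if m else 0
    pivot_row = 0
    for col in range(n):
        if pivot_row >= m:
            break
        while True:
            nonzero = [i for i in range(pivot_row, m) if A[i][col] != 0]
            if len(nonzero) <= 1:
                break
            # Euclidean-style step: reduce the other rows by the row whose entry
            # in this column has the smallest nonzero absolute value.
            i_min = min(nonzero, key=lambda i: abs(A[i][col]))
            for i in nonzero:
                if i != i_min:
                    q = A[i][col] // A[i_min][col]
                    A[i] = [a - q * b for a, b in zip(A[i], A[i_min])]
        nonzero = [i for i in range(pivot_row, m) if A[i][col] != 0]
        if nonzero:
            A[pivot_row], A[nonzero[0]] = A[nonzero[0]], A[pivot_row]  # row swap
            pivot_row += 1
    return A

print(integer_echelon([[2, 4, 1], [3, 6, 2], [1, 2, 3]]))
# -> [[1, 2, 3], [0, 0, -1], [0, 0, 0]]: a zero row appeared, so these three
#    rows are Z-dependent (the rank is 2).
[/code]

Since the new rows are always integer combinations of the old rows (and vice versa), a zero row appears exactly when the original rows were Z-dependent, which is the content of the first and third steps above.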
 
  • #11
Hmm...now that i think about it, i don't think i can give an argument using determinants anyway (i.e. showing that the determinant is non-zero), since i can't assume that m=n.
 
  • #12
But there's a theorem at my disposal:
if we assume the v's are independent over Z, and A is the matrix whose columns are the v's, then through a series of elementary integer matrix multiplications we get a diagonal matrix, of the form
[tex]A' = \left(\begin{array}{cc}D&0\\0&0\end{array}\right)[/tex]
where D is a diagonal matrix such that each entry divides the next
(this is by the theorem).
Then A' must have the same rank as A, i.e. there must be n diagonal entries in D. (elementary integer matrices preserve rank?)
Now using elementary matrix operations over R, we can turn this into:
[tex]A'' = \left(\begin{array}{cc}I_n&0\\0&0\end{array}\right)[/tex]
(just divide each nonzero row by its diagonal entry)
A'' has rank n, which is the rank of A, so the columns of A (the v's) are linearly independent over R.
This sounds a little weird...have I done something horribly wrong?
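For what it's worth, here is a small check of what that theorem produces in practice (an added sketch assuming sympy's smith_normal_form from sympy.matrices.normalforms behaves as described for a rectangular integer matrix; the specific matrix is just an illustration):

[code]
# Two Z-independent columns (2,4,6) and (4,2,6) in Z^3.  The Smith normal form
# has the D-and-zeros shape from post #12, with each diagonal entry dividing the next.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4],
            [4, 2],
            [6, 6]])

print(A.rank())                         # 2 = n, so the columns are independent over R
print(smith_normal_form(A, domain=ZZ))  # expected: Matrix([[2, 0], [0, 6], [0, 0]])
[/code]

Dividing each nonzero row by its diagonal entry (a row operation that is allowed over R, though not over Z) then gives the rank-n identity block A'' described above.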
 
  • #13
no. you are modifying what i said above, but that's all right. Matt's idea as i understood it was to use only row operations, and get an echelon form, and instead you are also using column operations and getting a diagonal form, but that is ok.

it would have been nice however if you had done the reasoning yourself, which is what i was trying to lead you to do, instead of just invoking a "theorem".

it really isn't very hard to prove if you know what the elementary row operations are. but maybe you do know how to prove the theorem?

I.e. row operations let you clear out the columns above and below the "pivot variables". if you also use column operations, you can also clear out the rows to the right of the pivot variables and be left with only the pivot variables, i.e. a diagonal matrix with as many pivots as the rank.

in a way that is actually cleaner than my version of Matt's suggestion, but more work too. I don't know which one Matt meant to suggest.
 
  • #14
Yeah, i should have explained earlier, this theorem was discussed in class, and I suppose this question was asked in hopes of understanding that the theorem was applicable.
 
  • #15
you also have a harder time understanding why this implies your result, since using both row and column operations muddles the idea of linear combinations.

I.e. if you only use row operations, then the new rows are still linear combinations of the old rows, so it is clear that you can get a zero row if and only if the old rows were linearly dependent.


using both types of operations you have to know that what is being preserved is the map defined by the matrix, but in a different basis of both domain and range.


so then you need to know that independent columns correspond to an injective map. this is to me a little more sophisticated.

so i would still recommend to you the outline argument I gave above, right after Matt's original suggestion.
 
  • #16
Ok, i think i see what you're saying...in getting the diagonal matrix A'' from A, i had tacitly assumed that the elementary integer matrices are rank-preserving over Z...and i really can't find a way to convince myself whether or not that's true.

I'm going to try your outline now, i think it might be easier after all, thanks :smile:
 
  • #17
they are rank preserving, but what does that have to do with linear independence of the columns? the new columns are not linear combinations of the old columns unless you use only column operations, and you used both row and column operations.
 
  • #18
So if they are rank preserving, then the diagonal matrix A'' will have the same rank as A over Z, which is n (since the v's are linearly independent over Z). And since A'' clearly has rank n over R also, applying the inverses of the same integer elementary operations will give me back A, and I can once again say that A has rank n over R. Is that not correct?
 
  • #19
what is your definition of rank? and how do you prove those operations preserve rank? you seem to keep trying to invoke some magic recipe instead of just going back to first principles and simply proving it.
 
  • #20
By rank i mean the maximum number of linearly independent rows/columns of the matrix.
I guess you have to be careful about terminology when you're dealing with modules instead of vector spaces, I just find it a little confusing...I understand your point though, and i'll try to follow your line of reasoning. Thanks.
 
  • #21
ok suppose rank is the maximum number of linearly independent columns. then suppose you perform some row operations on the matrix. then the new columns are no longer linear combinations of the old ones, so how do you know there is still the same number of linearly independent columns?

i.e. how do you show the number of independent rows equals the number of independent columns?

you could define a free Z module of rank r as a subset of Z^n closed under Z linear combinations and spanned by some finite set of r independent vectors, but then you have to prove rank is well defined.

then you can view a matrix with r independent columns as defining a map from Z^r to a free rank-r Z submodule of Z^n. Then you can prove that row and column operations are isomorphisms, but you have to prove isomorphisms preserve rank.


the suggestions above do all this for you, or rather they avoid all this. unless I am missing something, which I frequently am.
 
  • #22
here is a problem for you related to all this stuff. you have shown there cannot be two Z-independent integers. how do you prove there cannot be three Q-independent vectors in Q^2?

or how do you prove that the row rank and column rank of a matrix of rationals are the same?
 
  • #23
mathwonk said:
in a way that is actually cleaner than my version of Matt's suggestion, but more work too. I don't know which one Matt meant to suggest.

I've missed a lot of debate here. I meant to put in "echelon" form, since we only care about (in)dependence, rather than the actual Z linear span of the vectors.
 
  • #24
that's what I thought.

by the way, i have a conceptual proof, no doubt well known,

that the reduced echelon form of an m by n matrix A (over a field) is unique.

Convention: In an echelon matrix, the first nonzero entry in each nonzero row is called a "pivot", and the column it occurs in is called a pivot column. In a matrix of coefficients for a linear system of equations, the corresponding variable is called a pivot variable. Hence the number of pivots equals the rank of the coefficient matrix.

Consider the system A.X = 0, and its set S of solutions in k^n.

Then the existence of the reduced echelon form of A, reveals that S is simply the graph of a linear function, with domain the subspace of k^n spanned by the "free" variables, and range the subspace of k^n spanned by the "pivot" or "fixed" variables.

Hence if you know which variables are pivots and which are free, then the set S determines the graph of the function expressing the pivot variables in terms of the free ones, and hence determines the reduced echelon matrix.

So choose as free variables the lexicographically largest possible subsequence of the n variables, such that S projects isomorphically onto the subspace they span. That will be the domain of the function with graph S.

Then the function whose graph S determines in the resulting product gives the reduced echelon form.

I.e. the column (or the part of the column above the zeroes) of that reduced matrix, corresponding to a given free variable Xj, is the value of the function with graph S, evaluated at the jth standard basis vector ej.

Is this indeed well known? standard books seem to give more tedious proofs of this result, which of course implies that rank is well defined (over a field).

But of course since this is true, and visible to anyone who thinks enough about it and knows what a graph is, it surely has been noticed before, probably by Gauss or Euler. Still I kind of hope it is news to somebody besides me.

One can also argue the rank well defined, by observing that a variable is a pivot variable if and only if the value of that variable (in the entries of every solution), is determined by the values of the later variables. Hence the set of solutions determines the subset of pivot variables, in particular the number of them, i.e. the rank.

The easier argument that a column is a pivot column if and only if the system defined by the part of the matrix to the left of that column is consistent, (with that column as desired solution vector), implies that the pivot variables are well defined, hence so is the rank, also over the integers.

My apologies to the beginners, this query was directed at the other teachers out there.
 
  • #25
i made a minus sign error: I should have said:

"I.e. the column (or the part of the column above the zeroes) of that reduced matrix, corresponding to a given free variable Xj, is MINUS the value of the function with graph S, evaluated at the jth standard basis vector ej."
 

1. What is "Linear independence over Z"?

"Linear independence over Z" refers to the concept of linearly independent vectors in a vector space over the integers (Z). This means that no vector in the set can be written as a linear combination of the other vectors in the set using only integer coefficients.

2. How is "Linear independence over Z" different from "Linear independence over R"?

The difference lies in which coefficients are allowed in a linear combination. For linear independence over Z only integer coefficients are allowed, while over R any real coefficients are allowed. Since every integer combination is also a real combination, independence over R immediately implies independence over Z; the point of the discussion above is that for integer vectors the converse holds as well.

3. Why is "Linear independence over Z" important in linear algebra?

"Linear independence over Z" is important in linear algebra because it helps us understand the structure and properties of vector spaces. It allows us to determine whether a set of vectors is a basis for the vector space, which is crucial for solving linear systems of equations and performing other operations in linear algebra.

4. How do you determine if a set of vectors is linearly independent over Z?

To determine if a set of vectors is linearly independent over Z, write the vectors as the columns of a matrix and reduce it to echelon form (over the integers this can be done using only row swaps and adding integer multiples of rows, as discussed above). The vectors are linearly independent exactly when the rank equals the number of vectors, i.e. when every column contains a pivot. If the matrix happens to be square, a nonzero determinant is an equivalent test.
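As a small sketch of that check in practice (assuming sympy; the specific vectors are just an illustration):

[code]
# Put the vectors into the columns of a matrix and compare the rank with the number
# of vectors; for integer vectors this settles independence over Z and over R alike.
from sympy import Matrix

vectors = [(1, 0, 2), (0, 2, 1)]   # two vectors in Z^3
A = Matrix(vectors).T              # columns of A are the vectors
print(A.rank() == len(vectors))    # True: rank 2 = number of vectors
[/code]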

5. Can a set of vectors be linearly independent over Z but dependent over R?

No, not for vectors with integer entries. Since every integer linear combination is also a real linear combination, independence over R implies independence over Z, and the discussion above shows the converse as well: a set of vectors in Z^m that is linearly independent over Z is also linearly independent over R. (The two notions can differ in other settings; for example, the real numbers 1 and √2 are Z-independent but R-dependent in R^1.)
