Linear Algebra Dimension Proof


Homework Help Overview

The discussion centers around proving that any set of S vectors in a subspace V of R^n is linearly dependent if S exceeds the dimension D of V. Participants explore the implications of the rank-nullity theorem and the properties of linear combinations in relation to vector spaces.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss forming linear combinations of vectors and the conditions under which these combinations yield non-trivial solutions. There are considerations of constructing matrices from the vectors and analyzing their ranks. Some participants question the necessity of using augmented matrices versus simpler linear systems.

Discussion Status

There is an ongoing exploration of different approaches to demonstrate linear dependence, with various participants suggesting methods involving matrix rank and linear combinations. Some participants express uncertainty about the differences between methods, while others affirm the connections between rank and linear dependence.

Contextual Notes

Participants reference the dimensionality of vector spaces and the implications of introducing additional vectors beyond the dimension of the space. There is mention of the need for clarity on definitions and theorems related to linear algebra.

Screwdriver

Homework Statement



Let V \subseteq R^n be a subspace with dim(V) = D. Prove that any S vectors in V are linearly dependent if S > D.

Homework Equations



Rank-nullity theorem?

The Attempt at a Solution



dim(V) = D implies that a basis for V contains D vectors. If S > D, then the set has at least one more vector than any basis, which will produce a free variable and hence infinitely many solutions, implying that the set is linearly dependent.
 
For the sake of illustration, suppose we are working in V = R^2, D=2, and suppose you choose S=3 vectors. You desire to prove that these three vectors must be linearly dependent in R^2.

Form a linear combination of these vectors. You desire to verify whether or not

a_1v_1 + a_2v_2 + a_3v_3 = 0

holds true when not all a_i = 0.

Form a matrix where the columns of the matrix are your S vectors from V. The matrix will then have n rows and S columns. The linear combination above can then be expressed (in the case of our specific example) as:

\left(\begin{array}{ccc} v_{11} & v_{12} & v_{13} \\ v_{21} & v_{22} & v_{23} \end{array}\right) \cdot \left(\begin{array}{c} a_1 \\ a_2 \\ a_3 \end{array}\right) = 0

where

v_i = v_{1i}\hat{e_1} + v_{2i}\hat{e_2}

for some basis consisting of D vectors.

You can easily generalize this to a matrix with n rows and S columns.

The question then arises: for which values of the coefficients a_i does the above linear system have solutions?

Clearly, if a_1=a_2=a_3=0, the system above holds true.

The question is, does it hold true when not all a_i = 0?

If so, then the three vectors must be linearly dependent, which is what you desire to prove.

The direction you were heading in with the rank of a matrix is probably the right direction to go.
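To see the dependence concretely, here is a minimal numerical sketch of the R^2 example, not from the thread; it assumes numpy is available, and the specific vectors are invented for illustration.

```python
import numpy as np

# Three vectors in R^2 (S = 3 > D = 2), placed as the columns of A.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])

print(np.linalg.matrix_rank(A))   # 2: at most D independent columns

# Right-singular vectors beyond the rank span the null space of A,
# so the last row of V^T gives a nontrivial coefficient vector.
_, s, vt = np.linalg.svd(A)
a = vt[-1]
print(a)                          # a nonzero vector (a_1, a_2, a_3)
print(np.allclose(A @ a, 0))      # True: a_1 v_1 + a_2 v_2 + a_3 v_3 = 0
```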
 
Thank you. So say V = R^n, D = n and S > n. I'm trying to determine whether there exists a non-trivial solution to the augmented matrix:

\left[\begin{array}{cccc|c} v_{11} & v_{12} & \cdots & v_{1S} & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ v_{n1} & v_{n2} & \cdots & v_{nS} & 0 \end{array}\right]

This will row-reduce to:

\left[\begin{array}{cccc|c} 1 & 0 & \cdots & x & 0 \\ 0 & 1 & \cdots & y & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 1 & z & 0 \end{array}\right]


where [x, y, \dots, z] is some non-zero vector that arises because S > n. Then the solution set will be

[a_1, a_2, \dots, a_S] = [x, y, \dots, z]

And the set will be linearly dependent as a result. Or is that just what I had originally?
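For what it's worth, the row reduction can be checked mechanically. The following is an added sketch, not from the thread, using sympy's rref on arbitrarily chosen vectors; the augmented zero column is omitted since row operations never change it.

```python
import sympy as sp

# S = 3 column vectors in R^2 (n = 2), so S > n.
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])

R, pivots = A.rref()
print(R)              # Matrix([[1, 0, 2], [0, 1, 3]])
print(pivots)         # (0, 1): column 3 has no pivot, i.e. a_3 is free

# Any basis vector of the null space is a nontrivial solution a.
print(A.nullspace())  # [Matrix([[-2], [-3], [1]])]
```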
 
Given two vector (sub)spaces V \subseteq W, we have \dim(V) \le \dim(W). You can use this fact to derive a contradiction.
 
I wouldn't even mess around with an augmented matrix, or try to actually, "physically" reduce the matrix into some kind of row-echelon form.

Just consider the linear system:

A \cdot x = 0

A is a k-by-n matrix; that is, it has k rows and n columns.

x is an n-dimensional vector.

0 is the k-dimensional 0-vector.

This equation is a statement on the linear dependence of the vectors which make up the columns of the matrix A.

If the only "solution" to this equation is the "trivial" solution x=0, then by definition the vectors which make up the columns of A are linearly independent.

If there exist "non-trivial" solutions to this linear system, where x \neq 0, then by definition the vectors which make up the columns of A are linearly dependent.

In your problem statement, you have a vector space where dim(V)=k, and you are selecting n vectors from this space where n > k. You want to show that these n vectors must be linearly dependent. You can model your problem in the above system: since dim(V)=k, every vector v in V can be expressed as a linear combination of k basis vectors. Choose any basis of k linearly independent vectors, and construct the matrix A as above.

You will have, by construction of your problem, a linear system where k < n.

By construction, you know that rank(A) <= k < n.

If you have a matrix with n columns and k rows, and you know that k < n (so that the rank of the matrix is < n), what can you conclude about the linear dependence of the columns making up the matrix?

I don't want to give away the *whole* answer, but it's a standard result from linear algebra; it's probably in your textbook somewhere.

Shilov treats it at the start of Chapter 3 in his classic on linear algebra.
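For reference, here is an added numerical sketch (not part of the original exchange, assuming numpy, with made-up random data) of the rank bound and its rank-nullity consequence: for any k-by-n matrix with k < n, the rank is at most k, so the null space has dimension at least n - k > 0.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5                        # dim(V) = k, and n > k chosen vectors
A = rng.standard_normal((k, n))    # columns = coordinates of the n vectors

r = np.linalg.matrix_rank(A)
print(r <= k < n)                  # True: rank(A) <= k < n

# Rank-nullity: nullity = n - rank >= n - k > 0, so Ax = 0 has
# nontrivial solutions and the columns are linearly dependent.
_, s, vt = np.linalg.svd(A)
x = vt[-1]                         # a null-space vector
print(np.allclose(A @ x, 0))       # True
```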
 
Outlined said:
given two vector-(sub)spaces V \subseteq W then \dim(V) \le \dim(W). You can use this fact to derive a contradiction.

I just realized you might need the statement from the first post to prove this claim. Not sure, though.
 
psholtz said:
If you have a matrix with n columns and k rows, and you know that k < n, i.e., that the rank of the matrix is < n, what you can conclude about the linear dependence of the columns making up the matrix?

I can conclude that the columns must be linearly dependent. I know that the rank is the number of linearly independent columns, so if I've got k of those but k < n columns in total, the leftover column(s) will have to be a non-trivial linear combination of the others. That then implies that the x vector in Ax = 0 will be non-trivial as well, right?

I'm still not sure how that's different from just making an augmented matrix though (sorry; I'm useless at Linear Algebra).
 
This is merely a question of knowing basic vocabulary and definitions.

Proof: Given any basis {v_1, v_2, \dots, v_D} of the D-dimensional space V, the set is both linearly independent and spans V. Now, if you introduce a new vector from V, say v_K, then Span({v_1, v_2, \dots, v_D, v_K}) \subseteq Span({v_1, v_2, \dots, v_D}), where this relation holds because the second set already spans all of V. And because the basis spans the space, v_K can be written as a linear combination of the D basis vectors:

a_1v_1 + a_2v_2 + \cdots + a_{D-1}v_{D-1} + a_Dv_D = v_K \qquad (****)

where a_1, \dots, a_D are scalars from the underlying field. Now take that extra vector and adjoin it to the basis to construct the set in question:

{v_1, v_2, \dots, v_D, v_K}

Is it true that c_1v_1 + c_2v_2 + \cdots + c_Dv_D + c_{D+1}v_K = 0 only when c_1 = c_2 = \cdots = c_D = c_{D+1} = 0? No! Rearranging (****) gives a_1v_1 + \cdots + a_Dv_D + (-1)v_K = 0, a dependence relation with a nonzero coefficient, so the set is linearly dependent.
QED.
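For a concrete instance of (****), with vectors invented for illustration: take V = R^2 with basis v_1 = (1, 0), v_2 = (0, 1), and v_K = (2, 3). Then v_K = 2v_1 + 3v_2, so

2v_1 + 3v_2 + (-1)v_K = 0

is a dependence relation with nonzero coefficients, and {v_1, v_2, v_K} is linearly dependent.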
 
Screwdriver said:
I can conclude that the columns must be linearly dependent. I know that the rank is the number of linearly independent columns, so if I've got k of those but k < n columns in total, the leftover column(s) will have to be a non-trivial linear combination of the others. That then implies that the x vector in Ax = 0 will be non-trivial as well, right?
Yes, essentially.

You mentioned in the OP something about a "rank-nullity" theorem.

I'm not sure exactly what you meant by the "rank-nullity" theorem, but yes, the rank of a matrix is the number of linearly independent columns (or rows), so if the number of rows is k, then you must have rank(A) <= k. And if you know k < n (strictly less than), where n is the number of columns, then you can conclude that the columns of the matrix must be linearly dependent.

You can also think of it as follows: suppose you are working in an n-dimensional vector space, and you have a basis (x1,...,xn) consisting of n vectors. That means that if we have:

a_1x_1 + ... + a_nx_n = 0

then necessarily

a_1 = ... = a_n = 0

Now consider another vector x0 from this same vector space, and consider the linear combination:

a_0x_0 + a_1x_1 + ... + a_nx_n = 0

Suppose a_0=0. Then we must necessarily have a_1 = ... = a_n = 0, since the basis is linearly independent.

Suppose now that a_0 \neq 0. Then we can rewrite this equation as:

x_0 = - \left(\frac{a_1}{a_0}x_1 + \cdots + \frac{a_n}{a_0}x_n\right)

That is, x_0 is a linear combination of the basis vectors.

In other words, the set consisting of x_0 together with the basis vectors is linearly dependent.
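A quick numerical check of this last argument (an added sketch assuming numpy; the basis and x_0 are invented): solving Bc = x_0 recovers the coordinates of x_0 in the basis, which is exactly a dependence relation with a nonzero coefficient on x_0.

```python
import numpy as np

# A basis of R^2 as the columns of B, plus one extra vector x0.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
x0 = np.array([3.0, 4.0])

# Solve B c = x0 for the coordinates of x0 in this basis.
c = np.linalg.solve(B, x0)
print(c)                        # [1. 2.]
print(np.allclose(B @ c, x0))   # True

# So x0 - 1*x1 - 2*x2 = 0: a dependence relation with a nonzero
# coefficient on x0, hence {x0, x1, x2} is linearly dependent.
```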
 
I see now. Thank you so much to everyone. :)
 
