What is the proof of this theorem in vector spaces?


Discussion Overview

The discussion centers on the proof of a theorem in vector spaces: if one set of vectors spans a vector space and another set consists of linearly independent vectors, then the spanning set contains at least as many vectors as the linearly independent set. The scope includes theoretical aspects of linear algebra and dimensionality.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that the theorem can be proven using the concept that a linear homogeneous equation system with more unknowns than equations has nontrivial solutions.
  • Others suggest a proof involving the construction of sets that span the vector space and maintaining linear independence throughout the process.
  • A participant mentions that if the vectors in the spanning set are linearly independent, the dimension of the vector space is equal to the number of vectors in that set.
  • Another viewpoint emphasizes that without the theorem, one cannot definitively discuss the dimension of a vector space, as it establishes the relationship between linear independence and spanning sets.
  • Some participants challenge the definitions of dimension, arguing that a specific theorem is necessary to ensure that a linearly independent set can be extended appropriately.
  • One participant presents an alternative proof approach using a contradiction by assuming the opposite of the theorem and demonstrating the resulting inconsistencies.

Areas of Agreement / Disagreement

Participants express differing opinions on the necessity of the theorem for discussing vector space dimensions, with some asserting that it is essential while others believe alternative definitions suffice. Multiple competing views on the proof methods and their validity remain unresolved.

Contextual Notes

Some limitations in the discussion include the dependence on definitions of dimension and the assumptions regarding the linear independence of the sets involved. The proofs presented rely on various interpretations and applications of linear algebra concepts, which may not be universally accepted.

Maths Lover
Theorem:
If S = {v1, ..., vn} spans the vector space V, and L = {w1, ..., wm} is a set of linearly independent vectors in V, then n ≥ m.

How can we prove this?

_____________

I read this theorem as an important note, but the proof was omitted.
 
Hey Maths Lover.

What can you say about the span of the space and a set of linear independent vectors with regard to the dimensionality of the space?

(Hint: How can you relate span to dimensionality and linear independence to said dimensionality of your vector space)?
 
Maths Lover said:
Theorem:
If S = {v1, ..., vn} spans the vector space V, and L = {w1, ..., wm} is a set of linearly independent vectors in V, then n ≥ m.

How can we prove this?

_____________

I read this theorem as an important note, but the proof was omitted.
Chiro, I think the results you suggest using are consequences of this theorem.

I know of two different proofs. One is based upon the theorem which says that a homogeneous linear system with more unknowns than equations has a nontrivial solution.
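That first proof idea can be sketched numerically (my own illustration, not part of the original post; the matrix `A` below is a made-up example in which column j holds the coefficients of w_j expressed in terms of v1, ..., vn). With more unknowns (m columns) than equations (n rows), A c = 0 has a nontrivial solution, which translates into a dependency among the w_j:

```python
import numpy as np

# Made-up example: n = 2 spanning vectors, m = 3 candidate independent vectors.
# Column j of A holds the coefficients of w_j expressed in v1, v2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# A c = 0 is a homogeneous system with more unknowns (3) than equations (2),
# so it has a nontrivial solution; the last right-singular vector gives one.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]

assert not np.allclose(c, 0)   # the solution is nontrivial ...
assert np.allclose(A @ c, 0)   # ... yet sum_j c_j w_j = 0: the w_j are dependent
```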

Another proof goes like this:

For each k (0 ≤ k ≤ m), let Sk = {w1, w2, ..., wk, v1, v2, ..., vn}.
Each Sk spans V, since S does.

Now, for each k (0 ≤ k ≤ m), let Tk be the result when we remove from Sk all vectors which are linear combinations of the previous vectors in Sk. Then each Tk is linearly independent. Since L is linearly independent, no wi will ever be removed when we form Tk, so w1, w2, ..., wk lie in Tk for all k (0 ≤ k ≤ m).

But when we form T(k+1) (0 ≤ k < m), we must remove at least all the vectors from S(k+1) that we removed when forming Tk from Sk. And this is not enough: if we removed only those vectors from S(k+1) to form T(k+1), then w(k+1) would still be a linear combination of the other vectors in T(k+1), i.e. of the vectors in Tk, which contradicts that T(k+1) is linearly independent. Thus, we must remove at least one more vector to obtain T(k+1).

It follows that for each k (1 ≤ k ≤ m), we must remove at least k vectors from Sk to form Tk, but none of the wi will be removed. In particular, Tm contains all the vectors w1, w2, ..., wm, and yet it contains at least m vectors fewer than Sm, which contains m + n vectors. It follows that m ≤ (m + n) − m = n, that is: m ≤ n.
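The Sk/Tk construction can be tested on a small made-up example (my own sketch, not from the post; `prune_dependent` is a hypothetical helper that uses a rank test to drop vectors depending on earlier ones):

```python
import numpy as np

def prune_dependent(vectors):
    """Keep a vector only if it is not a linear combination of the vectors
    already kept (detected by checking that the rank increases)."""
    kept = []
    for v in vectors:
        if np.linalg.matrix_rank(np.array(kept + [v])) == len(kept) + 1:
            kept.append(v)
    return kept

# Made-up data in R^3: L = {w1, w2} is independent, S = {v1, v2, v3} spans.
w1, w2 = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])
v1, v2, v3 = np.eye(3)

S2 = [w1, w2, v1, v2, v3]   # this is S_2 from the proof (k = 2)
T2 = prune_dependent(S2)    # the pruned, linearly independent set

assert len(T2) <= len(S2) - 2                   # at least k = 2 vectors removed
assert any(np.array_equal(t, w1) for t in T2)   # no w_i is ever removed
assert any(np.array_equal(t, w2) for t in T2)
```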
 
"... if we only remove these vectors from S(k+1) to form T(k+1), w(k+1) would still be a linear combination of the other vectors in T(k+1), i.e. of the vectors in Tk,..."

It might help to state why this claim is true, i.e. that Tk is not only independent but also spans.

(This proof is due to Riemann, in the special case of proving the invariance of the rank of homology groups of a surface, but it is usually attributed to Steinitz.)
 
You can get lost in a sea of notation here. If we are talking about finite-dimensional vector spaces (which seems a reasonable assumption, given the notation of "m" and "n" vectors), then:

If the vectors in S are linearly independent, the dimension of V is n, and there cannot be more than n independent vectors in L.

If the vectors in S are not linearly independent, remove vectors from S one at a time until you are left with a set of k < n independent vectors that still spans V. Repeating the previous argument, there cannot be more than k independent vectors in L.
 
AlephZero said:
If the vectors in S are linearly independent, the dimension of V is n, and there cannot be more than n independent vectors in L.
But you cannot prove this without the theorem in the OP. It is this theorem which makes it possible to talk about "the dimension" of a vector space. Without it, we wouldn't know that all bases have the same number of elements.
 
The definition of "dimension" that I use is that dim V=n if V contains a linearly independent set with cardinality n, but no linearly independent set with cardinality n+1. With this definition, we don't need a theorem to make it possible to talk about "the dimension" of a vector space.
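This definition can be illustrated numerically in R^2 (a sketch of my own; the rank computation is just a convenient stand-in for a linear-independence check, and its theory of course ultimately rests on results like the one in this thread):

```python
import numpy as np

# In R^2 there is a linearly independent set of cardinality 2 ...
pair = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
assert np.linalg.matrix_rank(pair) == 2

# ... but no independent set of cardinality 3: any 3 vectors in R^2
# form a 3 x 2 matrix, whose rank is at most 2.
rng = np.random.default_rng(0)
triple = rng.standard_normal((3, 2))
assert np.linalg.matrix_rank(triple) < 3

# So under this definition, dim R^2 = 2.
```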
 
Fredrik said:
The definition of "dimension" that I use is that dim V=n if V contains a linearly independent set with cardinality n, but no linearly independent set with cardinality n+1. With this definition, we don't need a theorem to make it possible to talk about "the dimension" of a vector space.
OK, we can have such a definition. But it does not a priori exclude that there exists a linearly independent set with k < n elements which cannot be extended to a linearly independent set with k + 1 elements. We need a theorem like the one above, with a complex proof like the one above, to ensure this.
 
Erland said:
OK, we can have such a definition. But it does not a priori exclude that there exists a linearly independent set with k < n elements which cannot be extended to a linearly independent set with k + 1 elements.

If a linearly independent set of vectors S spans a vector space, then every vector in the space has a unique representation as a linear combination of the vectors in S. (If there were two different representations, you could subtract them and get a linear dependency between the vectors in S).

You can use that fact to make a proof of the OP's theorem, by translating the argument about homogeneous linear equations (which you mentioned) into the language of vector algebra.

People may have different opinions about whether that is a "simpler" proof than your (or Riemann's) proof, of course.
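The uniqueness fact is easy to check numerically (my own sketch with a made-up basis of R^2, not part of the original post):

```python
import numpy as np

# Made-up basis of R^2: the columns of B are independent AND span.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([3.0, 2.0])

# Since B is square with independent columns, B c = x has exactly one
# solution: the coefficients of x in this basis are unique.
c = np.linalg.solve(B, x)

assert np.allclose(B @ c, x)        # c does represent x
assert np.allclose(c, [1.0, 2.0])   # and is the unique such representation
```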
 
You can try this proof (apagoge, i.e. proof by contradiction).

Suppose m > n.

Each vector of L is a linear combination of the vectors v1, ..., vn; in particular:

w1 = a1(v1) + ... + an(vn)

At least one coefficient a_i is nonzero (WLOG, suppose a1 ≠ 0). Rearranging, and then multiplying both sides by (a1)^(-1):

a1(v1) = w1 - a2(v2) - ... - an(vn)

[(a1)^(-1)](a1)(v1) = [(a1)^(-1)][w1 - a2(v2) - ... - an(vn)]

v1 = [(a1)^(-1)][w1 - a2(v2) - ... - an(vn)] (*)

Each vector v of V is a linear combination of the vectors v1, ..., vn:

v = b1(v1) + ... + bn(vn)

Substituting the result (*) for v1:

v = b1(v1) + ... + bn(vn) = (b1)[(a1)^(-1)][w1 - a2(v2) - ... - an(vn)] + b2(v2) + ... + bn(vn)

which is a linear combination of the vectors in L1 = {w1, v2, ..., vn}. So L1 is another spanning set of V.

Let's express w2 as a linear combination of the vectors in L1:

w2 = c1(w1) + c2(v2) + ... + cn(vn)

There is at least one nonzero coefficient among c2, ..., cn (otherwise w2 would be a multiple of w1, contradicting the linear independence of L). We can reiterate the procedure used to obtain (*), so L2 = {w1, w2, v3, ..., vn} is another spanning set of V.

Repeat this procedure n times; then Ln = {w1, ..., wn} is a spanning set of V.
Then each vector of V is a linear combination of the vectors of Ln, so

v = h1(w1) + ... + hn(wn)

Since m > n, the vector w_(n+1) exists, and in particular

w_(n+1) = k1(w1) + ... + kn(wn)

But we supposed L to be a set of linearly independent vectors, so the last result is impossible, and the theorem is proved.
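The replacement steps in this proof can be traced on a small made-up example (my own sketch, not from the post; `spans_R3` uses a rank test as a stand-in for "spans V"):

```python
import numpy as np

def spans_R3(vecs):
    """A finite set spans R^3 exactly when its matrix has rank 3."""
    return np.linalg.matrix_rank(np.array(vecs)) == 3

# Made-up data: S = {v1, v2, v3} spans R^3, and w1, w2, w3 are independent.
v1, v2, v3 = np.eye(3)
w1 = np.array([1.0, 1.0, 0.0])   # w1 = v1 + v2: nonzero v1 coefficient
w2 = np.array([0.0, 1.0, 1.0])   # w2 = v2 + v3: nonzero v2 coefficient in L1
w3 = np.array([1.0, 0.0, 1.0])   # w3 = w1 - w2 + 2 v3: nonzero v3 coefficient in L2

# Each exchange swaps one w in and drops a v with a nonzero coefficient,
# and the resulting set still spans.
L1 = [w1, v2, v3]
L2 = [w1, w2, v3]
L3 = [w1, w2, w3]

assert all(spans_R3(s) for s in ([v1, v2, v3], L1, L2, L3))
```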
 
The problem with Fredrik's definition is that, without this theorem, it is not clear that R^n has dimension n.
 
