## What is the proof of this theorem in Vector Spaces?

Theorem:
If S = {v1, ..., vn} spans the vector space V, and L = {w1, ..., wm} is a set of linearly independent vectors in V, then n ≥ m.

How can we prove this?

_____________

I read this theorem as an important note, but the proof was omitted.
Hey Maths Lover. What can you say about the span of the space and a set of linearly independent vectors with regard to the dimensionality of the space? (Hint: how can you relate span to dimensionality, and linear independence to the dimensionality of your vector space?)

Quote by Maths Lover: Theorem: If S = {v1, ..., vn} spans the vector space V, and L = {w1, ..., wm} is a set of linearly independent vectors in V, then n ≥ m. How can we prove this? I read this theorem as an important note, but the proof was omitted.
Chiro, I think the results you suggest to be used are consequences of this theorem.

I know of two different proofs. One is based upon the theorem which says that a homogeneous linear system with more unknowns than equations has a nontrivial solution.
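For completeness, that first argument can be sketched as follows. (This is a standard sketch; the coefficient names $a_{ij}$ and $c_j$ are just illustrative choices, not notation from the thread.)

```latex
Suppose, for contradiction, that $m > n$. Since $S$ spans $V$, each $w_j$
can be written as
\[
  w_j \;=\; \sum_{i=1}^{n} a_{ij}\, v_i , \qquad j = 1, \dots, m .
\]
Now consider the homogeneous linear system
\[
  \sum_{j=1}^{m} a_{ij}\, c_j \;=\; 0 , \qquad i = 1, \dots, n ,
\]
in the unknowns $c_1, \dots, c_m$. It has $n$ equations and $m > n$
unknowns, so it has a nontrivial solution $(c_1, \dots, c_m) \neq 0$.
But then
\[
  \sum_{j=1}^{m} c_j\, w_j
  \;=\; \sum_{i=1}^{n} \Big( \sum_{j=1}^{m} a_{ij}\, c_j \Big) v_i
  \;=\; 0 ,
\]
a nontrivial linear dependence among $w_1, \dots, w_m$, contradicting the
linear independence of $L$. Hence $m \le n$.
```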

Another proof goes like this:

For each k (0 ≤ k ≤ m), let Sk = {w1, w2, ..., wk, v1, v2, ..., vn}.
Each Sk spans V, since S does.

Now, for each k (0 ≤ k ≤ m), let Tk be the result of removing from Sk every vector which is a linear combination of the previous vectors in Sk. Then each Tk is linearly independent, and each Tk still spans V, since we only removed vectors that were redundant for the span. Since L is linearly independent, no wi is ever removed when we form Tk, so w1, w2, ..., wk all lie in Tk for each k (0 ≤ k ≤ m). But when we form T(k+1) (0 ≤ k < m), we must remove at least all the vectors from S(k+1) which we removed when forming Tk from Sk. And this is not enough: if we removed only those vectors from S(k+1) to form T(k+1), then T(k+1) would consist of w(k+1) together with the vectors of Tk, and since Tk spans V, w(k+1) would be a linear combination of the vectors in Tk, contradicting the linear independence of T(k+1). Thus we must remove at least one further vector to obtain T(k+1).

It follows that for each k (1 ≤ k ≤ m), at least k vectors are removed from Sk to form Tk, and none of the wi are among them. In particular, Tm contains all the vectors w1, w2, ..., wm, yet it has at least m vectors fewer than Sm, which contains m+n vectors. Hence m ≤ |Tm| ≤ (m+n) − m = n, that is: m ≤ n.
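As a sanity check (not a proof), the removal procedure above can be simulated numerically. This is a sketch assuming NumPy is available; the vectors in R^3 and the helper name `prune` are made up for the illustration, with `prune` playing the role of forming Tk from Sk:

```python
# Numerical illustration of the removal step in the proof above.
# A vector is removed iff it is a linear combination of the kept
# vectors preceding it; we detect this with a rank test.
import numpy as np

def prune(vectors):
    """Keep each vector only if it is NOT a linear combination of the
    previously kept ones (i.e. if adding it raises the rank)."""
    kept = []
    for v in vectors:
        if np.linalg.matrix_rank(np.array(kept + [v])) > len(kept):
            kept.append(v)
    return kept

# S: four vectors spanning R^3 (so n = 4 in the theorem's notation)
S = [np.array([1., 0., 0.]), np.array([0., 1., 0.]),
     np.array([0., 0., 1.]), np.array([1., 1., 1.])]
# L: two linearly independent vectors (m = 2)
L = [np.array([1., 1., 0.]), np.array([0., 1., 1.])]

# Form S2 = {w1, w2, v1, ..., v4} and prune it, as in the proof
T2 = prune(L + S)

print(len(T2))  # 3: a basis of R^3 remains
print(all(any((w == t).all() for t in T2) for w in L))  # True: no wi removed
```

As the proof predicts, both w's survive the pruning, only v's are discarded, and m = 2 ≤ n = 4.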


"... if we only remove these vectors from S(k+1) to form T(k+1), w(k+1) would still be a linear combination of the other vectors in T(k+1), i.e. of the vectors in Tk,..."

it might help to state why this claim is true, i.e. that Tk is not only linearly independent but also spans V.

(This proof is due to Riemann, in the special case of proving the invariance of the rank of homology groups of a surface, but it is usually attributed to Steinitz.)
You can get lost in a sea of notation here. If we are talking about finite-dimensional vector spaces (which seems a reasonable assumption, given the notation of "m" and "n" vectors), then:

If the vectors in S are linearly independent, the dimension of V is n, and there cannot be more than n independent vectors in L.

If the vectors in S are not linearly independent, remove vectors from S one at a time until you are left with a set of k < n independent vectors that still spans V. Repeating the previous argument, there cannot be more than k independent vectors in L.

Quote by AlephZero: If the vectors in S are linearly independent, the dimension of V is n, and there cannot be more than n independent vectors in L.
But you cannot prove this without the theorem in the OP. It is this theorem which makes it possible to talk about "the dimension" of a vector space. Without it, we wouldn't know that all bases have the same number of elements.
The definition of "dimension" that I use is that dim V = n if V contains a linearly independent set with cardinality n, but no linearly independent set with cardinality n+1. With this definition, we don't need a theorem to make it possible to talk about "the dimension" of a vector space.

 Quote by Fredrik The definition of "dimension" that I use is that dim V=n if V contains a linearly independent set with cardinality n, but no linearly independent set with cardinality n+1. With this definition, we don't need a theorem to make it possible to talk about "the dimension" of a vector space.
OK, we can have such a definition. But it does not a priori exclude the possibility that there exists a linearly independent set with k < n elements which cannot be extended to a linearly independent set with k+1 elements. We need a theorem like the one above, with a proof like the one above, to ensure this.
