System of vectors, linear dependence

nuuskur

Homework Statement


Prove that if every vector ##a_i## in a system of vectors ##S_a = \{a_1, a_2, \ldots, a_n\}## is a linear combination of the vectors of a system ##S_b = \{b_1, b_2, \ldots, b_m\}##, then ##\mathrm{span}(S_a)\subseteq \mathrm{span}(S_b)##.
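(For concreteness, a small instance with vectors of my own choosing: take ##S_a = \{(2,0), (1,1)\}## and ##S_b = \{(1,0), (0,1)\}## in ##\mathbb{R}^2##. Each ##a_i## is a linear combination of the ##b_j##, and the claim is that every linear combination of the ##a_i## is then also a linear combination of the ##b_j##.)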

Homework Equations

The Attempt at a Solution


We know, due to ##a_j## being a linear combination, that every ##a_j\in S_a = \sum\limits_{j=1}^m c_j\cdot b_j## where ##b_j\in S_b##, ##c_j\in\mathbb{R}\setminus\{0\}##.
But where should I go from here? Suggestions?
 
I suggest taking a vector in ##\mathrm{span}(S_a)## and showing that it is necessarily in ##\mathrm{span}(S_b)##.
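(This is the standard template for proving a set inclusion ##X\subseteq Y##: take an arbitrary ##x\in X##, unpack what membership in ##X## means, and deduce ##x\in Y##.)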
 
nuuskur said:

The Attempt at a Solution


We know, due to ##a_j## being a linear combination, that every ##a_j\in S_a = \sum\limits_{j=1}^m c_j\cdot b_j## where ##b_j\in S_b##, ##c_j\in\mathbb{R}\setminus\{0\}##.
The ##c_j## can be zero. The span of a set S is the set of all linear combinations of elements of S, including linear combinations where one or more (maybe all) of the coefficients are zero.

I would do what Orodruin said, and avoid notations like
every ##a_j\in S_a = \sum\limits_{j=1}^m c_j\cdot b_j## where...
It's ##a_j## that's equal to a linear combination, not ##S_a##. Oddly enough, the phrase
every ##a_j\in S_a## is equal to ##\sum\limits_{j=1}^m c_j\cdot b_j## where...
would be considered acceptable.
 
Alright. Let's denote the systems:
##A = \{a_1, a_2, \ldots, a_n\},\quad B = \{b_1, b_2, \ldots, b_m\}##
Let's denote the linear spans of these systems by ##L(A)## and ##L(B)## respectively:
##L(A) = \left\{a\ |\ a = \sum\limits_{k=1}^n \lambda_k\cdot a_k,\ \lambda_k\in\mathbb{R},\ a_k\in A \right\}##
##L(B) = \left\{b\ |\ b = \sum\limits_{k=1}^m \lambda_k\cdot b_k,\ \lambda_k\in\mathbb{R},\ b_k\in B \right\}##
We know that every vector ##a\in A## is a linear combination of the vectors in system ##B##, that is:
##a = \sum\limits_{k=1}^m\lambda_k\cdot b_k## where ##\lambda_k\in\mathbb{R}##, ##b_k\in B##.
Considering that a linear span is a vector space, it is closed under multiplication by a scalar. Therefore, every ##a\in L(A)## implies ##a\in L(B) \Leftrightarrow L(A)\subseteq L(B)##. ##\square##
 
nuuskur said:
Alright. Let's denote the systems:
##A = \{a_1, a_2, \ldots, a_n\},\quad B = \{b_1, b_2, \ldots, b_m\}##
Let's denote the linear spans of these systems by ##L(A)## and ##L(B)## respectively:
##L(A) = \left\{a\ |\ a = \sum\limits_{k=1}^n \lambda_k\cdot a_k,\ \lambda_k\in\mathbb{R},\ a_k\in A \right\}##
##L(B) = \left\{b\ |\ b = \sum\limits_{k=1}^m \lambda_k\cdot b_k,\ \lambda_k\in\mathbb{R},\ b_k\in B \right\}##
We know that every vector ##a\in A## is a linear combination of the vectors in system ##B##, that is:
##a = \sum\limits_{k=1}^m\lambda_k\cdot b_k## where ##\lambda_k\in\mathbb{R}##, ##b_k\in B##.
Writing this down is a good start, but I don't follow your argument here:

nuuskur said:
Considering that a linear span is a vector space, it is closed under multiplication by a scalar. Therefore, every ##a\in L(A)## implies ##a\in L(B) \Leftrightarrow L(A)\subseteq L(B)##. ##\square##
I would just start with a simple statement like "Let ##x\in L(A)##." Then you can use the definition of ##L(A)## to say something about ##x##. This statement will involve the ##a_k##. Then you can use what you know about the ##a_k## to say something else. And so on. At some point you should be able to conclude that ##x\in L(B)##. Then you will have proved that ##L(A)\subseteq L(B)##.

Be careful with your statements. The quoted statement above is saying that every vector in the subspace ##L(A)## implies some statement. Statements are implied by other statements, not by vectors.
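
(For reference, one way the argument outlined above can be completed; the coefficient names ##\mu_k## and ##\lambda_{kj}## are my own choice, not from the thread. Let ##x\in L(A)##. By definition of ##L(A)## there exist ##\mu_1, \ldots, \mu_n\in\mathbb{R}## such that ##x = \sum\limits_{k=1}^n \mu_k\cdot a_k##. Each ##a_k## is in turn a linear combination of the vectors of ##B##, say ##a_k = \sum\limits_{j=1}^m \lambda_{kj}\cdot b_j##. Substituting and exchanging the finite sums,
##x = \sum\limits_{k=1}^n \mu_k \sum\limits_{j=1}^m \lambda_{kj}\cdot b_j = \sum\limits_{j=1}^m \left(\sum\limits_{k=1}^n \mu_k\lambda_{kj}\right)\cdot b_j,##
which exhibits ##x## as a linear combination of ##b_1, \ldots, b_m##, so ##x\in L(B)##. Hence ##L(A)\subseteq L(B)##.)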
 