Characterizing linear independence in terms of span

psie
TL;DR Summary
I'm reading Linear Algebra by Friedberg, Insel and Spence. Prior to a theorem, they make a statement about linear independence which they claim can be deduced from the theorem.
Throughout, let ##\mathsf V## be a vector space (the concept of dimension has not been introduced yet). The statement that precedes the theorem below is that if no proper subset of ##T\subset \mathsf V## generates the span of ##T## (where, if I'm not mistaken, ##T## consists of two or more vectors), then ##T## must be linearly independent. Taking the contrapositive:

Claim: If ##T\subset\mathsf V## is linearly dependent, then there is some proper subset ##S\subset T## such that ##\operatorname{span}(S)=\operatorname{span}(T)##.
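To make the claim concrete, here is a small example of my own (not from the book): in ##\mathbb{R}^2##, take ##T=\{(1,0),(0,1),(1,1)\}##, which is linearly dependent since $$1\cdot(1,0)+1\cdot(0,1)+(-1)\cdot(1,1)=(0,0).$$ The proper subset ##S=\{(1,0),(0,1)\}## satisfies ##\operatorname{span}(S)=\operatorname{span}(T)=\mathbb{R}^2##, as the claim predicts.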

Theorem: Let ##S\subset \mathsf V## be linearly independent and let ##v\in\mathsf V\setminus S##. Then ##S\cup\{v\}## is linearly dependent if and only if ##v\in\operatorname{span}(S)##.
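As a quick sanity check on the theorem (again my own example): take ##S=\{(1,0)\}\subset\mathbb{R}^2##, which is linearly independent. For ##v=(2,0)## we have ##v\in\operatorname{span}(S)##, and indeed ##S\cup\{v\}=\{(1,0),(2,0)\}## is linearly dependent, since $$2\cdot(1,0)+(-1)\cdot(2,0)=(0,0).$$ For ##v=(0,1)## we have ##v\notin\operatorname{span}(S)##, and ##S\cup\{v\}=\{(1,0),(0,1)\}## is linearly independent.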

I'm trying to prove the claim from the theorem. What I struggle with is that I only seem to be able to prove the claim when ##T=S\cup\{v\}##, where ##S## is linearly independent and ##v\notin S##. Then the theorem tells us that ##v\in\operatorname{span}(S)##. This in turn implies ##\operatorname{span}(S)=\operatorname{span}(S\cup\{v\})## (see below for a proof of why this is implied by ##v\in\operatorname{span}(S)##). Hence we can take the proper subset ##S\subset S\cup\{v\}=T## as the set in the claim preceding the theorem. But what if ##T## is not of the form ##S\cup\{v\}##? I feel like I've only proved a very special case.

Regarding ##v\in\operatorname{span}(S)\implies\operatorname{span}(S)=\operatorname{span}(S\cup\{v\})##, the inclusion ##\operatorname{span}(S)\subset\operatorname{span}(S\cup\{v\})## always holds. The reverse inclusion follows since any ##w\in \operatorname{span}(S\cup\{v\})## can be written as ##w=a_1u_1+\cdots +a_nu_n+bv##, where ##u_1,\ldots,u_n\in S##. Since ##v\in\operatorname{span}(S)##, ##w## is actually a linear combination of vectors in ##S##.
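Spelling out that last step: if ##v=c_1s_1+\cdots+c_ms_m## with ##s_1,\ldots,s_m\in S##, then $$w=a_1u_1+\cdots+a_nu_n+b(c_1s_1+\cdots+c_ms_m)=a_1u_1+\cdots+a_nu_n+bc_1s_1+\cdots+bc_ms_m\in\operatorname{span}(S).$$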
 
What definition of linear independence and linear dependence are you using?
 
PeroK said:
What definition of linear independence and linear dependence are you using?
A subset ##S## of a vector space ##\mathsf V## is linearly dependent if there exist a finite number of distinct vectors ##u_1,\ldots,u_n## in ##S## and scalars ##a_1,\ldots,a_n##, not all zero, such that $$a_1u_1+\cdots+a_nu_n=0.$$ A subset ##S## of ##\mathsf V## is linearly independent if it is not linearly dependent.
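For instance (my own illustration of the definition): in ##\mathbb{R}^2## the set ##\{(1,0),(0,1)\}## is linearly independent, because $$a(1,0)+b(0,1)=(a,b)=(0,0)$$ forces ##a=b=0##, so no choice of scalars that are not all zero produces the zero vector.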

It was a bit of a mess in post #1, but I think I've figured it out. :smile:
 
The idea is to approach the concept of a basis as a maximal set of linearly independent vectors, without requiring that it be finite. Add one nonzero vector to such a set and it is no longer linearly independent, which in turn automatically yields a linear expression for that vector in terms of basis vectors. Note that such a linear expression is always finite. There are no infinite series, as in general we have no concept of convergence! That would require a topological vector space, e.g. one equipped with a metric.
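A standard illustration of the finiteness point (my addition): in the space of all real polynomials, the set ##\{1,x,x^2,\ldots\}## is linearly independent and infinite, and every polynomial is a finite linear combination $$p(x)=a_0+a_1x+\cdots+a_nx^n$$ of its elements. A formal infinite sum such as ##1+x+x^2+\cdots## is not a polynomial, and without a topology there is no way to make sense of it as a limit.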
 