Undergrad: Characterizing linear independence in terms of span

SUMMARY

This discussion focuses on characterizing linear independence in vector spaces through the relationship between a set of vectors and its span. The claim under discussion is that if a set \( T \) is linearly dependent, then there is a proper subset \( S \subset T \) such that \( \operatorname{span}(S) = \operatorname{span}(T) \). The theorem presented states that for a linearly independent set \( S \) and a vector \( v \notin S \), the union \( S \cup \{v\} \) is linearly dependent if and only if \( v \in \operatorname{span}(S) \). The discussion centers on how to deduce the claim from the theorem, in particular when \( T \) is not of the form \( S \cup \{v\} \) with \( S \) linearly independent.

PREREQUISITES
  • Understanding of vector spaces and their properties
  • Familiarity with the concepts of linear independence and linear dependence
  • Knowledge of vector spans and their implications
  • Basic proficiency in mathematical proofs and logic
NEXT STEPS
  • Study the definitions and properties of vector spaces in detail
  • Learn about the concept of bases and their role in linear algebra
  • Explore the implications of linear combinations in vector spaces
  • Investigate the relationship between linear independence and dimension in vector spaces
USEFUL FOR

Students and educators in mathematics, particularly those focusing on linear algebra, as well as researchers interested in the foundational aspects of vector spaces and their properties.

psie
TL;DR
I'm reading Linear Algebra by Friedberg, Insel and Spence. Prior to a theorem, they make a statement about linear independence, which they claim can be deduced from the theorem.
Throughout, let ##\mathsf V## be a vector space (the concept of dimension has not been introduced yet). The statement preceding the theorem below is that if no proper subset of ##T\subset \mathsf V## generates the span of ##T## (where, if I'm not mistaken, ##T## consists of two or more vectors), then ##T## must be linearly independent. Taking the contrapositive:

Claim: If ##T\subset\mathsf V## is linearly dependent, then there is some proper subset ##S\subset T## such that ##\operatorname{span}(S)=\operatorname{span}(T)##.

Theorem: Let ##S\subset \mathsf V## be linearly independent and let ##v\notin S##. Then ##S\cup\{v\}## is linearly dependent if and only if ##v\in\operatorname{span}(S)##.

I'm trying to prove the claim from the theorem. What I struggle with is that I only seem to be able to prove the claim when ##T=S\cup\{v\}##, where ##S## is linearly independent and ##v\notin S##. Then the theorem tells us that ##v\in\operatorname{span}(S)##. This in turn implies ##\operatorname{span}(S)=\operatorname{span}(S\cup\{v\})## (see below for a proof of why this is implied by ##v\in\operatorname{span}(S)##). Hence we can take the proper subset ##S\subset S\cup\{v\}=T## as the set in the claim preceding the theorem. But what if ##T## is not of the form ##S\cup\{v\}##? I feel like I've only proved a very special case.
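As a quick numerical sanity check of this special case (a made-up example in ##\mathbb R^3##; matrix rank stands in for the span comparisons, so this only illustrates the coordinate setting and is not part of the proof):

```python
# S = {u1, u2} is linearly independent and v = u1 + 2*u2 lies in span(S).
# By the theorem, S ∪ {v} should be linearly dependent with the same span;
# for nested sets of columns, equal span shows up as equal matrix rank.
import numpy as np

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
v = u1 + 2 * u2

S = np.column_stack([u1, u2])        # columns u1, u2
T = np.column_stack([u1, u2, v])     # columns u1, u2, v

# span(S) ⊆ span(T) holds automatically since S's columns are among T's,
# so equal rank forces span(S) = span(T); rank(T) < 3 confirms dependence.
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(T))   # prints: 2 2
```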

Regarding ##v\in\operatorname{span}(S)\implies\operatorname{span}(S)=\operatorname{span}(S\cup\{v\})##: the inclusion ##\operatorname{span}(S)\subset\operatorname{span}(S\cup\{v\})## always holds. The reverse inclusion follows since any ##w\in \operatorname{span}(S\cup\{v\})## can be written as ##w=a_1u_1+\cdots +a_nu_n+bv## with ##u_1,\ldots,u_n\in S##. Since ##v\in\operatorname{span}(S)##, substituting a linear combination of vectors in ##S## for ##v## shows that ##w## is itself a linear combination of vectors in ##S##.
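The substitution step can also be checked symbolically, e.g. with ##n=2## and toy coordinate vectors (my own sketch, not from the book), where the coefficients visibly collapse:

```python
# If v = c1*u1 + c2*u2, then w = a1*u1 + a2*u2 + b*v is again a linear
# combination of u1 and u2, with coefficients a1 + b*c1 and a2 + b*c2.
import sympy as sp

a1, a2, b, c1, c2 = sp.symbols('a1 a2 b c1 c2')
u1 = sp.Matrix([1, 0, 0])
u2 = sp.Matrix([0, 1, 0])

v = c1*u1 + c2*u2                 # the hypothesis v ∈ span(S), S = {u1, u2}
w = a1*u1 + a2*u2 + b*v           # a generic element of span(S ∪ {v})

# w equals (a1 + b*c1)*u1 + (a2 + b*c2)*u2 exactly:
assert w == (a1 + b*c1)*u1 + (a2 + b*c2)*u2
print(w.T)                        # Matrix([[a1 + b*c1, a2 + b*c2, 0]])
```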
 
What definition of linear independence and linear dependence are you using?
 
PeroK said:
What definition of linear independence and linear dependence are you using?
A subset ##S## of a vector space ##\mathsf V## is linearly dependent if there exist finitely many distinct vectors ##u_1,\ldots,u_n## in ##S## and scalars ##a_1,\ldots,a_n##, not all zero, such that $$a_1u_1+\cdots+a_nu_n=0.$$ A subset ##S## of ##\mathsf V## is linearly independent if it is not linearly dependent.
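To make the definition concrete, here is a small numerical sketch (made-up vectors; the SVD is just one convenient way to find such scalars in coordinates):

```python
# A finite set of coordinate vectors is linearly dependent exactly when the
# matrix A with those vectors as columns satisfies A @ a = 0 for some a ≠ 0.
import numpy as np

# u3 = u1 + u2, so {u1, u2, u3} should be linearly dependent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])            # columns u1, u2, u3

# A (numerically) zero smallest singular value signals a nontrivial null
# space; the matching right-singular vector gives the scalars a1, a2, a3.
_, svals, Vt = np.linalg.svd(A)
if svals[-1] < 1e-12:
    a = Vt[-1]                             # not all zero by construction
    print(a, "->", A @ a)                  # A @ a ≈ [0, 0, 0]
```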

It was a bit of a mess in post #1, but I think I've figured it out. :smile:
 
The idea is to approach the concept of a basis as a maximal set of linearly independent vectors, without requiring that it be finite. Add one non-zero vector outside the set and it is no longer linearly independent, which in turn automatically yields a linear expression for that vector in terms of basis vectors. Note that such a linear expression is always finite: there are no infinite series, since in general we have no concept of convergence! That would require a topological vector space, e.g. one whose topology comes from a metric.
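A tiny coordinate sketch of that last point (my own made-up example in ##\mathbb R^2##, where a maximal linearly independent set is just a basis):

```python
# Adjoin a vector v to a basis of R^2: the resulting linear dependence yields
# a *finite* linear expression for v in terms of the basis vectors.
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # columns b1 = (1,0), b2 = (1,1) form a basis
v = np.array([3.0, 2.0])         # any further vector makes the set dependent

coeffs = np.linalg.solve(B, v)   # solve x1*b1 + x2*b2 = v for x1, x2
print(coeffs)                    # [1. 2.], i.e. v = 1*b1 + 2*b2
```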
 