Linear Dependence and Subsets: Proving Linear Dependence in Sets of Vectors

Homework Help Overview

The discussion revolves around proving that if a set of vectors E is linearly dependent, then a superset F containing E must also be linearly dependent. The problem is situated within the context of linear algebra, specifically focusing on concepts of linear dependence and vector spans.

Discussion Character

  • Conceptual clarification, Assumption checking, Mathematical reasoning

Approaches and Questions Raised

  • Participants explore the implications of linear dependence, questioning the validity of certain assumptions about the zero vector's presence in the sets. They discuss the relationship between the spans of E and F and how this affects the proof of linear dependence.

Discussion Status

The conversation is ongoing, with participants revising their proofs and clarifying their reasoning. Some guidance has been offered regarding the notation and structure of the proof, but there is no explicit consensus on the final form of the argument.

Contextual Notes

Participants note the importance of correctly identifying elements within the sets and the implications of ordering in the context of sets. There is also a mention of a related question regarding the relationship between the sums and intersections of spans, indicating a broader exploration of linear algebra concepts.

TranscendArcu

Homework Statement



Suppose that E,F are sets of vectors in V with [itex]E \subseteq F[/itex]. Prove that if E is linearly dependent, then so is F.

The Attempt at a Solution

Read post #2. This proof, I think, was incorrect.

If we suppose that E is linearly dependent, then we know that there exist distinct vectors [itex]E_1,...,E_n[/itex] such that
[itex]e_1 E_1 + ... + e_n E_n = \vec0[/itex], where [itex]e_1,...,e_n[/itex] are numbers, not all zero. Thus, we know [itex]\vec0 \in E[/itex]. Since [itex]E \subseteq F[/itex], [itex]\vec0 \in F[/itex]. Any set containing the zero vector must certainly be linearly dependent since [itex]n \vec0 = \vec0[/itex], where n is any number.

Thus, if E is linearly dependent, then F is linearly dependent.

Sound about right?
 
Actually, I have to revise my proof. I don't think I can write that [itex]\vec0 \in E[/itex]. In any case, we know that [itex]e_1 E_1 + ...+ e_n E_n = \vec0[/itex]. We can therefore say (I think) that [itex]\vec0 \in span{E}[/itex]. [itex]E \subseteq F[/itex] implies [itex]spanE \subseteq spanF[/itex]. Thus, [itex]\vec0 \in spanF[/itex]. Since E has a nontrivial solution to [itex]e_1 E_1 + ...+ e_n E_n = \vec0[/itex] and since [itex]\vec0 \in spanF[/itex], F must also have a nontrivial solution to get the zero vector. In one such case, F's nontrivial solution is identical to E's.

Is that better?
 
TranscendArcu said:
Actually, I have to revise my proof. I don't think I can write that [itex]\vec0 \in E[/itex]. In any case, we know that [itex]e_1 E_1 + ...+ e_n E_n = \vec0[/itex]. We can therefore say (I think) that [itex]\vec0 \in span{E}[/itex]. [itex]E \subseteq F[/itex] implies [itex]spanE \subseteq spanF[/itex]. Thus, [itex]\vec0 \in spanF[/itex]. Since E has a nontrivial solution to [itex]e_1 E_1 + ...+ e_n E_n = \vec0[/itex] and since [itex]\vec0 \in spanF[/itex], F must also have a nontrivial solution to get the zero vector. In one such case, F's nontrivial solution is identical to E's.

Is that better?

Nope. The zero vector is in ANY span. F is a set of vectors. E is a subset of those vectors. Let's list the elements of F as [itex]F=\{F_1,F_2,...,F_n\}[/itex] and let's say that the first m of those vectors are the ones that also belong to E, so [itex]E=\{F_1,F_2,...,F_m\}[/itex] where m<=n. Try expressing a proof using that notation.
 
Dick said:
Nope. The zero vector is in ANY span. F is a set of vectors. E is a subset of those vectors. Let's list the elements of F as [itex]F=\{F_1,F_2,...,F_n\}[/itex] and let's say that the first m of those vectors are the ones that also belong to E, so [itex]E=\{F_1,F_2,...,F_m\}[/itex] where m<=n. Try expressing a proof using that notation.
So if I write [itex]F=\{F_1,F_2,...,F_n\}[/itex] and [itex]E=\{F_1,F_2,...,F_m\}[/itex] then in E there exists [itex]F_1,...,F_m[/itex] distinct vectors such that [itex]f_b F_b + ... + f_m F_m = \vec0[/itex], where [itex]f_1,...,f_m[/itex] are not all zero vectors. Because [itex]E \subseteq F[/itex], F also contains these distinct vectors. Thus, in F we can say that there exist [itex]F_1,...,F_m[/itex] distinct vectors such that [itex]f_b F_b + ... + f_m F_m = \vec0[/itex]. Hence, we see that F is also linearly dependent.
 
TranscendArcu said:
So if I write [itex]F=\{F_1,F_2,...,F_n\}[/itex] and [itex]E=\{F_1,F_2,...,F_m\}[/itex] then in E there exists [itex]F_1,...,F_m[/itex] distinct vectors such that [itex]f_b F_b + ... + f_m F_m = \vec0[/itex], where [itex]f_1,...,f_m[/itex] are not all zero vectors. Because [itex]E \subseteq F[/itex], F also contains these distinct vectors. Thus, in F we can say that there exist [itex]F_1,...,F_m[/itex] distinct vectors such that [itex]f_b F_b + ... + f_m F_m = \vec0[/itex]. Hence, we see that F is also linearly dependent.

The phrasing is still clumsy. Why are you starting with [itex]f_b[/itex]? Why not start with [itex]f_1[/itex]? And the f's aren't vectors, they are scalars. You've got the right idea though.
 
Dick said:
The phrasing is still clumsy. Why are you starting with [itex]f_b[/itex]? Why not start with [itex]f_1[/itex]? And the f's aren't vectors, they are scalars. You've got the right idea though.
I guess it doesn't matter whether I start with [itex]f_1[/itex] or [itex]f_b[/itex]. I thought [itex]f_b[/itex] gave the impression that E didn't necessarily have to start with the first element of F. But I seem to recall that ordering doesn't matter in sets anyway (isn't that right?)

Besides these problems (which I think will be pretty trivial to fix) is there anything else that is making the proof clumsy?
 
TranscendArcu said:
I guess it doesn't matter whether I start with [itex]f_1[/itex] or [itex]f_b[/itex]. I thought [itex]f_b[/itex] gave the impression that E didn't necessarily have to start with the first element of F. But I seem to recall that ordering doesn't matter in sets anyway (isn't that right?)

Besides these problems (which I think will be pretty trivial to fix) is there anything else that is making the proof clumsy?

In setting it up we designated that the first m elements of F were to be the elements of [itex]F \cap E[/itex]. Sure, sets are unordered, but our labeling of the elements of the sets has some useful order. You kept the last element of E as [itex]F_m[/itex], after all. Don't ponder this deeply; just change the 'b' to '1' and see if it reads more simply.
 
Okay. Thanks!

I won't rewrite the work here since there isn't much need if the changes you recommend are so easy to make.
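[For reference, the completed argument — a nontrivial relation among the vectors of E extends to one on F by giving the extra vectors zero coefficients — can be checked numerically. A minimal sketch in Python; the vectors and coefficients below are made up for illustration, not taken from the thread:]

```python
# Sketch: a nontrivial dependence relation among the vectors of E
# extends to F by assigning zero coefficients to the extra vectors.
# The concrete vectors in R^3 below are illustrative only.

def linear_combination(coeffs, vectors):
    """Return the sum of c*v over paired coefficients and vectors."""
    dim = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(dim)]

# F = {F_1, ..., F_n}; E = {F_1, ..., F_m} with m <= n.
F_vectors = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
E_vectors = F_vectors[:2]          # E = {F_1, F_2}, linearly dependent

f_coeffs = [2.0, -1.0]             # nontrivial: 2*F_1 - 1*F_2 = 0
assert linear_combination(f_coeffs, E_vectors) == [0.0, 0.0, 0.0]

# Pad with zeros for F_3, ..., F_n: the relation is still nontrivial and
# still sums to the zero vector, so F is linearly dependent as well.
padded = f_coeffs + [0.0] * (len(F_vectors) - len(E_vectors))
assert any(c != 0 for c in padded)
assert linear_combination(padded, F_vectors) == [0.0, 0.0, 0.0]
```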

On the other hand, I also want to ask another question regarding sets. If I have a statement like [itex]spanE + spanF[/itex], is this just the same as [itex]spanE \cap spanF[/itex]? I ask because I was doing some practice problems and one of them asked to show that [itex]L(E \cap F) \subseteq L(E) \cap L(F)[/itex]. The answer to this problem showed that [itex]L(E \cap F) \subseteq spanE + spanF[/itex], but stopped there. It isn't immediately obvious to me how the stuff on the right hand side of the subset sign relates to what I was asked to show.
 
TranscendArcu said:
Okay. Thanks!

I won't rewrite the work here since there isn't much need if the changes you recommend are so easy to make.

On the other hand, I also want to ask another question regarding sets. If I have a statement like [itex]spanE + spanF[/itex], is this just the same as [itex]spanE \cap spanF[/itex]? I ask because I was doing some practice problems and one of them asked to show that [itex]L(E \cap F) \subseteq L(E) \cap L(F)[/itex]. The answer to this problem showed that [itex]L(E \cap F) \subseteq spanE + spanF[/itex], but stopped there. It isn't immediately obvious to me how the stuff on the right hand side of the subset sign relates to what I was asked to show.

No. [itex]spanE + spanF[/itex] is defined as the set of all vectors e+f where e is in span(E) and f is in span(F). Pick a basis for [itex]spanE \cap spanF[/itex] and extend it to a basis for span(E) and for span(F) to see the relation.
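[A hypothetical example in R^2 makes the difference concrete. Take E = {(1,0)} and F = {(0,1)}, chosen for illustration (they are not from the thread): the sum of the spans is all of R^2, while the intersection of the spans is just {0}:]

```python
# Hypothetical example in R^2 showing span(E) + span(F) != span(E) ∩ span(F),
# with E = {(1, 0)} and F = {(0, 1)}.

def in_span_of_single(v, basis_vec):
    """Is v a scalar multiple of a single nonzero vector in R^2?"""
    # v = t * basis_vec for some scalar t  <=>  the 2x2 determinant vanishes
    return v[0] * basis_vec[1] - v[1] * basis_vec[0] == 0

e1, e2 = (1.0, 0.0), (0.0, 1.0)
w = (1.0, 1.0)

# w lies in span(E) + span(F), since w = 1*e1 + 1*e2.
assert (1 * e1[0] + 1 * e2[0], 1 * e1[1] + 1 * e2[1]) == w

# But w lies in neither span alone, so it is not in span(E) ∩ span(F).
assert not in_span_of_single(w, e1)
assert not in_span_of_single(w, e2)
```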
 
