How Can We Prove Linear Dependence in Vector Spaces?


Discussion Overview

The discussion revolves around proving that a set of two nonzero vectors in a vector space is linearly dependent if and only if one vector is a scalar multiple of the other. Participants work through both directions of the proof, focusing on how the definition of linear dependence constrains the coefficients.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant attempts to prove that the set {u₁, u₂} is linearly dependent if and only if u₁ is a scalar multiple of u₂ or vice versa, but expresses difficulty in the forward direction of the proof.
  • Another participant points out that the definition of linear dependence requires that not both coefficients α₁ and α₂ can be zero, which is crucial for completing the proof.
  • Some participants suggest that assuming u₁ = -u₂ is not valid for proving the reverse direction and emphasize that one should assume u₁ = cu₂ for some scalar c.
  • Discussion includes the need to explicitly state that one of the coefficients must be non-zero to apply the definition of linear dependence effectively.
  • There is a suggestion that the proof could be more complex when considering sets of more than two elements, which could provide better practice for understanding linear dependence.
  • A participant raises a hypothetical scenario where one of the vectors could be the zero vector, leading to a different conclusion about linear dependence, indicating that the original statement may not hold in that case.

Areas of Agreement / Disagreement

Participants generally agree on the definitions and conditions for linear dependence but exhibit disagreement on specific assumptions and approaches to the proof. The discussion remains unresolved regarding the best method to articulate the proof effectively.

Contextual Notes

Participants note that the proof's validity hinges on the assumption that both vectors are non-zero. If either vector were zero, the conditions for linear dependence would change, complicating the proof.

autre
I have to prove:

Let [itex]u_{1}[/itex] and [itex]u_{2}[/itex] be nonzero vectors in vector space [itex]U[/itex]. Show that {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} is linearly dependent iff [itex]u_{1}[/itex] is a scalar multiple of [itex]u_{2}[/itex] or vice-versa.

My attempt at a proof:

([itex]\rightarrow[/itex]) Let {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} be linearly dependent. Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1} \not= \alpha_{2}[/itex]...I'm stuck here in this direction

([itex]\leftarrow[/itex]) Fairly trivial. Let [itex]u_{1} = -u_{2}[/itex]. Then [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] but [itex]\alpha_{1} \not= \alpha_{2}[/itex].

Any ideas?
 
autre said:
Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1} \not= \alpha_{2}[/itex]...I'm stuck here in this direction
Look at the definition of linear dependence again. That's not what linear dependence tells you about the scalars. It tells you that [itex]\alpha_1[/itex] and [itex]\alpha_2[/itex] are not both...? Fixing this definition will also help finish the proof.

([itex]\leftarrow[/itex]) Fairly trivial. Let [itex]u_{1} = -u_{2}[/itex]. Then [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] but [itex]\alpha_{1} \not= \alpha_{2}[/itex].

Maybe I'm missing something, but you can't just assume that [itex]u_1 = -u_2[/itex] to prove the reverse direction. You're only given that one is a scalar multiple of the other, so you only know [itex]u_1 = c u_2[/itex] for some scalar c.
 
"[itex]\rightarrow[/itex]"

[itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex]

looking at the definition, what is the condition on [itex]\alpha_{1}[/itex] and [itex]\alpha_{2}[/itex] for {[itex]u_{1}, u_{2}[/itex]} to be linearly dependent?

"[itex]\leftarrow[/itex]"

in this part you have to assume [itex]u_{1} = c u_{2}[/itex], perhaps the negative you put in your original will give you a hint as to what to do for the first part.
 
Thanks for the input guys.

Look at the definition of linear dependence again.

([itex]\rightarrow[/itex]) Let {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} be linearly dependent. Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1}, \alpha_{2}[/itex] are not both [itex]0[/itex]. Therefore, if [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex], [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex]. Is that good?

For part 2:

([itex]\leftarrow[/itex]) Let [itex]u_{1} = cu_{2}[/itex]. Why does that mean that the coefficients aren't both [itex]0[/itex]?
 
now you're getting somewhere for the "[itex]\rightarrow[/itex]" part.
so if [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex] where either [itex]\alpha_{1}[/itex] or [itex]\alpha_{2}[/itex] is non-zero (or maybe both are non-zero), what can you do now that you couldn't before?

for the second part, you have to somehow relate [itex]u_{1} = c u_{2}[/itex] to your findings from part one.
 
If [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex], then one is a scalar multiple of another as required by the direction, right? What more do I need to do?
 
I know it seems obvious, but you have to explicitly state:
assume one of [itex]\alpha_{1}, \alpha_{2}[/itex] is non-zero (by the definition of linear dependence). for the sake of argument we take [itex]\alpha_{1}[/itex] to be the non-zero coefficient, and since it is non-zero we can divide both sides by it.
which leads us to: [itex]u_{1} = \frac{-\alpha_{2}u_{2}}{\alpha_{1}}[/itex]. therefore if the set {[itex]u_{1}, u_{2}[/itex]} is linearly dependent, one must be a scalar multiple of the other, as desired.

the "[itex]\leftarrow[/itex]" is just a reversal of "[itex]\rightarrow[/itex]"
it's a lot more powerful to prove this for a set of [itex]n[/itex] elements than for just 2; if you're looking for good practice, I'd suggest trying that.
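The completed (→) derivation above can be sanity-checked numerically. A minimal sketch in plain Python; the helper functions `scale` and `add`, and the example vectors and coefficients, are made up for illustration and are not from the thread:

```python
# Forward direction: if a1*u1 + a2*u2 = 0 with a1 != 0,
# then u1 = (-a2/a1) * u2, i.e. u1 is a scalar multiple of u2.

def scale(c, v):
    """Scalar multiple c*v of a vector v (represented as a list)."""
    return [c * x for x in v]

def add(v, w):
    """Component-wise sum v + w."""
    return [x + y for x, y in zip(v, w)]

u2 = [3.0, -6.0, 9.0]       # arbitrary nonzero example vector
a1, a2 = 2.0, 4.0           # not both zero, and a1 != 0
u1 = scale(-a2 / a1, u2)    # constructed so that a1*u1 + a2*u2 = 0

# The dependence relation really holds:
assert add(scale(a1, u1), scale(a2, u2)) == [0.0, 0.0, 0.0]

# And u1 is the scalar multiple c = -a2/a1 of u2, matching the derivation:
c = -a2 / a1
assert u1 == scale(c, u2)
```

Dividing by the non-zero coefficient [itex]\alpha_{1}[/itex] is exactly the step `c = -a2 / a1` above.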
 
autre said:
For part 2:

([itex]\leftarrow[/itex]) Let [itex]u_{1} = cu_{2}[/itex]. Why does that mean that the coefficients aren't both [itex]0[/itex]?

So now you need to find scalars [itex]\alpha_1, \alpha_2[/itex] not both zero such that [itex]\alpha_1 u_1 + \alpha_2 u_2 =0[/itex]. Can you see a way to use the information [itex]u_1 = c u_2[/itex] to choose scalars so this is true? Try rearranging the equation in your post.
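The hint above amounts to rearranging [itex]u_1 = c u_2[/itex] into [itex]1 \cdot u_1 + (-c) u_2 = 0[/itex], so [itex]\alpha_1 = 1, \alpha_2 = -c[/itex] works. A small illustrative check in plain Python (example vector and scalar are made up, not from the thread):

```python
# Reverse direction: if u1 = c*u2, then choosing a1 = 1, a2 = -c
# gives a dependence relation with the coefficients not both zero.

def scale(c, v):
    """Scalar multiple c*v of a vector v (represented as a list)."""
    return [c * x for x in v]

def add(v, w):
    """Component-wise sum v + w."""
    return [x + y for x, y in zip(v, w)]

u2 = [1.0, 2.0]
c = -1.5
u1 = scale(c, u2)           # hypothesis: u1 is a scalar multiple of u2

a1, a2 = 1.0, -c            # not both zero, since a1 = 1
assert add(scale(a1, u1), scale(a2, u2)) == [0.0, 0.0]
```

Note that [itex]\alpha_1 = 1 \neq 0[/itex] regardless of the value of c, which is why the coefficients are never both zero.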
 
as gordonj005 pointed out, the proof of (→) breaks down into 2 cases.

you can avoid this difficulty by noting that, in point of fact:

[itex]\alpha_1u_1 = -\alpha_2u_2[/itex] (with [itex]\alpha_1, \alpha_2[/itex] not both zero) [itex]\implies \alpha_1,\alpha_2 \neq 0[/itex] since, for example:

[itex]\alpha_1 = 0 \implies -\alpha_2u_2 = 0 \implies \alpha_2 = 0[/itex] since [itex]u_2 \neq 0[/itex].

so you are free to divide by α1 or α2.

you almost had the (←) in your first go-round. your mistake was fixing a particular scalar: you assumed u1 = -u2, i.e. c = -1. just use "c", where c is the scalar with u1 = cu2.

why do you know that c ≠ 0 (because u1 is _______)?
 
why do you know that c ≠ 0 (because u1 is _______)?

a non-zero vector! Just curious, how would I go about this part of the proof if it weren't specified that u1, u2 were nonzero vectors?
 
suppose u1 = 0. then {u1,u2} is linearly dependent no matter what u2 is:

au1 + 0u2 = 0, for any non-zero value of a.

the same goes if u2 = 0.

so the statement: {u1,u2} is linearly dependent iff u1 is a scalar multiple of u2 (or vice-versa), is no longer true.

however, in actual practice, no one ever tries to decide if the 0-vector is part of a basis, because including it automatically makes a set linearly dependent. so one just wants to decide if a set of non-zero vectors is linearly independent or not.
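The zero-vector observation above can be made concrete with the same kind of numeric sketch used earlier (helpers and example vector are illustrative, not from the thread):

```python
# Any set containing the zero vector is linearly dependent:
# take a1 = 1 (nonzero) and a2 = 0, and the relation holds
# no matter what u2 is.

def scale(c, v):
    """Scalar multiple c*v of a vector v (represented as a list)."""
    return [c * x for x in v]

def add(v, w):
    """Component-wise sum v + w."""
    return [x + y for x, y in zip(v, w)]

u1 = [0.0, 0.0, 0.0]        # the zero vector
u2 = [7.0, -1.0, 2.0]       # arbitrary example vector
a1, a2 = 1.0, 0.0           # not both zero

assert add(scale(a1, u1), scale(a2, u2)) == [0.0, 0.0, 0.0]
```

Yet u2 here is not a scalar multiple of u1 (nothing times the zero vector gives a nonzero vector), which is exactly why the iff statement fails without the nonzero hypothesis.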
 
