Proving linear dependence

  • #1
I have to prove:

Let [itex]u_{1}[/itex] and [itex]u_{2}[/itex] be nonzero vectors in vector space [itex]U[/itex]. Show that {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} is linearly dependent iff [itex]u_{1}[/itex] is a scalar multiple of [itex]u_{2}[/itex] or vice-versa.

My attempt at a proof:

([itex]\rightarrow[/itex]) Let {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} be linearly dependent. Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1} \not= \alpha_{2} [/itex]...I'm stuck here in this direction

([itex]\leftarrow[/itex]) Fairly trivial. Let [itex]u_{1} = -u_{2}[/itex]. Then [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] but [itex]\alpha_{1} \not= \alpha_{2} [/itex].

Any ideas?
 
  • #2
Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1} \not= \alpha_{2} [/itex]...I'm stuck here in this direction
Look at the definition of linear dependence again. That's not what linear dependence tells you about the scalars. It tells you that [itex] \alpha_1[/itex] and [itex] \alpha_2 [/itex] are not both...? Fixing this definition will also help finish the proof.

([itex]\leftarrow[/itex]) Fairly trivial. Let [itex]u_{1} = -u_{2}[/itex]. Then [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] but [itex]\alpha_{1} \not= \alpha_{2} [/itex].

Maybe I'm missing something, but you can't just assume that [itex] u_1 = -u_2[/itex] to prove the reverse direction. You're only given that one is a scalar multiple of the other, so you only know [itex] u_1 = c u_2 [/itex] for some scalar c.
 
  • #3
"[itex]\rightarrow[/itex]"

[itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex]

looking at the definition, what is the condition on [itex]\alpha_{1}[/itex] and [itex]\alpha_{2}[/itex] for {[itex]u_{1}, u_{2}[/itex]} to be linearly dependent?

"[itex]\leftarrow[/itex]"

in this part you have to assume [itex]u_{1} = c u_{2}[/itex]; perhaps the negative you put in your original will give you a hint as to what to do for the first part.
 
  • #4
Thanks for the input guys.

Look at the definition of linear dependence again.

([itex]\rightarrow[/itex]) Let {[itex]u_{1}[/itex],[itex]u_{2}[/itex]} be linearly dependent. Then, [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex] where [itex]\alpha_{1}, \alpha_{2}[/itex] are not both [itex]0[/itex]. Therefore, if [itex]\alpha_{1}u_{1}+ \alpha_{2}u_{2}=0[/itex], [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex]. Is that good?

For part 2:

([itex]\leftarrow[/itex]) Let [itex]u_{1} = cu_{2}[/itex]. Why does that mean that the coefficients aren't both [itex]0[/itex]?
 
  • #5
now you're getting somewhere for the "[itex]\rightarrow[/itex]" part.
so if [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex] where either [itex]\alpha_{1}[/itex] or [itex]\alpha_{2}[/itex] is nonzero (or maybe both are nonzero), what can you do now that you couldn't before?

for the second part, you have to somehow relate [itex]u_{1} = c u_{2}[/itex] to your findings from part one.
 
  • #6
If [itex]\alpha_{1}u_{1} = -\alpha_{2}u_{2}[/itex], then one is a scalar multiple of the other, as required by this direction, right? What more do I need to do?
 
  • #7
I know it seems obvious, but you have to explicitly state:
assume one of [itex]\alpha_{1}, \alpha_{2}[/itex] is nonzero (by the definition of linear dependence). for the sake of argument we take [itex]\alpha_{1}[/itex] to be the nonzero coefficient, and since it is nonzero we can divide both sides by it.
which leads us to: [itex] u_{1} = \frac{-\alpha_{2}u_{2}}{\alpha_{1}} [/itex]. therefore if the set {[itex]u_{1}, u_{2}[/itex]} is linearly dependent, one must be a scalar multiple of the other, as desired.

the "[itex]\leftarrow[/itex]" is just a reversal of "[itex]\rightarrow[/itex]"
it's a lot more powerful to prove the analogous result for a set of [itex]n[/itex] vectors rather than just 2; if you're looking for good practice i'd suggest trying that.
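As a quick numerical sanity check of this equivalence (a sketch using numpy; the vectors and the scalar c = -3 are made-up examples, not part of the proof): if u1 = c·u2, the matrix whose columns are u1 and u2 has rank 1, i.e. the set is linearly dependent, and the coefficients α1 = 1, α2 = -c give a nontrivial relation.

```python
import numpy as np

# Made-up example vectors: u1 is a scalar multiple (c = -3) of u2.
u2 = np.array([1.0, 2.0, 5.0])
u1 = -3.0 * u2

# Stack the vectors as columns; rank < 2 means {u1, u2} is linearly dependent.
A = np.column_stack([u1, u2])
print(np.linalg.matrix_rank(A))  # 1, so the set is linearly dependent

# Explicit witness: alpha1 = 1, alpha2 = -c = 3 gives alpha1*u1 + alpha2*u2 = 0.
print(np.allclose(1.0 * u1 + 3.0 * u2, 0))  # True
```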
 
  • #8
For part 2:

([itex]\leftarrow[/itex]) Let [itex]u_{1} = cu_{2}[/itex]. Why does that mean that the coefficients aren't both [itex]0[/itex]?

So now you need to find scalars [itex] \alpha_1, \alpha_2[/itex] not both zero such that [itex] \alpha_1 u_1 + \alpha_2 u_2 =0 [/itex]. Can you see a way to use the information [itex] u_1 = c u_2 [/itex] to choose scalars so this is true? Try rearranging the equation in your post.
 
  • #9
as gordonj005 pointed out, the proof of (→) breaks down into 2 cases.

you can avoid this difficulty by noting that, in point of fact:

[itex]\alpha_1u_1 = -\alpha_2u_2[/itex] (with [itex]\alpha_1,\alpha_2[/itex] not both zero) forces [itex]\alpha_1,\alpha_2 \neq 0[/itex] since, for example:

[itex]\alpha_1 = 0 \implies -\alpha_2u_2 = 0 \implies \alpha_2 = 0[/itex] since [itex]u_2 \neq 0[/itex].

so you are free to divide by α1 or α2.

you almost had the (←) in your first go-round. your mistake was assuming a specific multiple, u1 = -u2. just use "c", where c is the scalar with u1 = c·u2.

why do you know that c ≠ 0 (because u1 is _______)?
 
  • #10
why do you know that c ≠ 0 (because u1 is _______)?

a non-zero vector! Just curious, how would I go about this part of the proof if it weren't specified that u1, u2 were nonzero vectors?
 
  • #11
suppose u1 = 0. then {u1,u2} is linearly dependent no matter what u2 is:

au1 + 0u2 = 0, for any non-zero value of a.

the same goes if u2 = 0.

so the statement "{u1,u2} is linearly dependent iff u1 is a scalar multiple of u2 (and vice-versa)" is no longer true.

however, in actual practice, no one ever tries to decide if the 0-vector is part of a basis, because including it automatically makes a set linearly dependent. so one just wants to decide if a set of non-zero vectors is linearly independent or not.
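The zero-vector point above can be checked numerically too (a numpy sketch with a made-up u2): a set containing the zero vector is automatically dependent, because a·0 + 0·u2 = 0 holds with a nonzero coefficient a.

```python
import numpy as np

# If u1 is the zero vector, {u1, u2} is linearly dependent for ANY u2:
# a*u1 + 0*u2 = 0 holds with a nonzero coefficient a.
u1 = np.zeros(3)
u2 = np.array([1.0, 4.0, 7.0])  # made-up example vector

a = 5.0  # any nonzero scalar works
print(np.allclose(a * u1 + 0.0 * u2, 0))  # True: a nontrivial relation exists

# The rank test agrees: 2 column vectors with rank < 2 are dependent.
A = np.column_stack([u1, u2])
print(np.linalg.matrix_rank(A))  # 1
```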
 
