Linear Algebra: linear independence

  • Thread starter Fanta
  • #1
Fanta
If u, v and w are three linearly independent vectors of some vector space V, show that u + v, u - v and u - 2v + w are also linearly independent.


Okay, first of all, I know that:
[tex]\lambda_{1} u + \lambda_{2} v + \lambda_{3} w = 0[/tex]

admits only the solution in which all the lambdas are 0. But how can I prove that the new combinations are linearly independent, knowing so little?
 

Answers and Replies

  • #2
Jademonkey
I'd say to set it up in matrix form and check whether the determinant is non-zero, or row-reduce: if there is no row of all zeros at the end, the vectors are linearly independent.

Edit: If matrices aren't allowed, write out the system in which unknown constants multiply your u, v and w coefficients, and show that each constant must be zero.

e.g.

Solve

a + b = 0
a - b = 0
a - 2b + c = 0
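
As a quick numerical check of the determinant route, here is a sketch using numpy (not part of the original reply); the rows are the coefficient tuples of u + v, u - v and u - 2v + w with respect to u, v, w:

[code]
import numpy as np

# Rows: coefficients of the new vectors with respect to (u, v, w).
# u + v       -> (1,  1, 0)
# u - v       -> (1, -1, 0)
# u - 2v + w  -> (1, -2, 1)
A = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [1.0, -2.0, 1.0]])

# A nonzero determinant means the rows are independent, and hence
# so are the three combinations of u, v and w.
print(np.linalg.det(A))  # approximately -2.0 => linearly independent
[/code]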
 
  • #3
Show that the equation c1(u + v) + c2(u - v) + c3(u - 2v + w) = 0 has only the trivial solution c1 = c2 = c3 = 0, using the fact that u, v, and w are linearly independent.
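
Spelled out, the suggested computation groups the terms by u, v and w:

[tex]c_1(u + v) + c_2(u - v) + c_3(u - 2v + w) = (c_1 + c_2 + c_3)u + (c_1 - c_2 - 2c_3)v + c_3 w = 0[/tex]

Since u, v and w are linearly independent, each coefficient must vanish: c1 + c2 + c3 = 0, c1 - c2 - 2c3 = 0 and c3 = 0, which forces c1 = c2 = c3 = 0.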
 
  • #4
Fanta
That's the process you would normally use when dealing with coordinates.
But since we are dealing with whole vectors (instead of each vector's coordinates), would that really work?

For example, if I wanted to prove that vectors a, b and c were linearly independent:
given a = (1,0,0), b = (0,1,0) and c = (0,0,1), we'd just do that same process, dealing with coordinates: c1(1,0,0) + c2(0,1,0) + c3(0,0,1) = (0,0,0).

The confusion arises because we are multiplying constants by vectors, not coordinates.
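
(As an aside, a quick numerical check of that concrete example, sketched with numpy, where full rank means linear independence:)

[code]
import numpy as np

# The concrete example above: the standard basis vectors of R^3.
a = np.array([1, 0, 0])
b = np.array([0, 1, 0])
c = np.array([0, 0, 1])

# Stack as rows; rank 3 means the only solution of
# c1*a + c2*b + c3*c = (0, 0, 0) is c1 = c2 = c3 = 0.
M = np.vstack([a, b, c])
print(np.linalg.matrix_rank(M))  # 3 => linearly independent
[/code]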
 
  • #5
That's the process you would normally use when dealing with coordinates.
This definition applies whether you know the coordinates or not.

The confusion arises because we are multiplying constants by vectors, not coordinates.
Again, you are making a false distinction. Try what I suggested.
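
To see that the abstract computation works the same way, here is a small symbolic sketch (assuming sympy) that solves the system obtained by grouping c1(u + v) + c2(u - v) + c3(u - 2v + w) = 0 by u, v and w:

[code]
import sympy as sp

c1, c2, c3 = sp.symbols('c1 c2 c3')

# Grouping c1*(u+v) + c2*(u-v) + c3*(u-2v+w) = 0 by u, v, w;
# each coefficient must vanish because u, v, w are independent:
system = [c1 + c2 + c3,    # coefficient of u
          c1 - c2 - 2*c3,  # coefficient of v
          c3]              # coefficient of w
print(sp.solve(system, [c1, c2, c3]))  # {c1: 0, c2: 0, c3: 0}
[/code]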
 
  • #6
Fanta
I didn't know it applied to vectors too. Thanks.
Is there anywhere I can read up on that to get a better feel for the theory behind it?

And could I use the same principle to prove linear dependence in a problem, again with three vectors (u, v and w), not necessarily linearly independent, such that w = 2u + v?
 
  • #7
Presumably you have a textbook. Look up the definitions of linear independence and linear dependence.
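
As a sketch of how the definition applies to the dependence question above: the relation w = 2u + v rearranges to

[tex]2u + v - w = 0[/tex]

a linear combination equal to zero whose coefficients (2, 1, -1) are not all zero, which is exactly the definition of linear dependence for u, v and w.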
 
