Linear Algebra: linear independence

  • Thread starter Fanta
  • #1
If u, v and w are three linearly independent vectors of some vector space V, show that u + v, u - v and u - 2v + w are also linearly independent.


Okay, first of all, I know that

[tex]\lambda_1 u + \lambda_2 v + \lambda_3 w = \vec{0}[/tex]

admits only the solution in which all the lambdas are 0, but how can I prove that the new vectors are linearly independent, knowing so little?
 

Answers and Replies

  • #2
I'd say to set it up in matrix form and check whether the determinant is nonzero, or row-reduce: if no row comes out all zeros at the end, the vectors are linearly independent.

Edit: If matrices aren't allowed, set a linear combination of the new vectors equal to zero, collect the coefficients of u, v and w, and show that each constant must be zero.

e.g.

Solve

a + b = 0
a - b = 0
a - 2b + c = 0
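
If you want to sanity-check that numerically, here is a minimal sketch using numpy (assuming a computational check is acceptable here; note the equations above use the transposed matrix, but the determinant is the same either way):

[code]
import numpy as np

# Coordinates of u+v, u-v and u-2v+w relative to the basis (u, v, w),
# written as the columns of a matrix.
M = np.array([[1,  1,  1],
              [1, -1, -2],
              [0,  0,  1]])

# A nonzero determinant means M c = 0 has only the solution c = 0,
# i.e. the three combinations are linearly independent.
print(np.linalg.det(M))  # -2.0 (up to floating point), nonzero
[/code]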
 
  • #3
Show that the equation c1(u + v) + c2(u - v) + c3(u - 2v + w) = 0 has only the trivial solution c1 = c2 = c3 = 0, using the fact that u, v, and w are linearly independent.
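
To see how that plays out, expand and collect the coefficients of u, v and w:

[tex]c_1(u + v) + c_2(u - v) + c_3(u - 2v + w) = (c_1 + c_2 + c_3)u + (c_1 - c_2 - 2c_3)v + c_3 w[/tex]

Linear independence of u, v and w forces each of those coefficients to be zero, which gives you a small system to solve for c1, c2 and c3.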
 
  • #4
That's the process you would normally use when dealing with coordinates.
But since we are dealing with whole vectors (instead of each vector's coordinates), would that really work?

For example, if I wanted to prove that vectors a, b and c were linearly independent:
Given a = (1,0,0), b = (0,1,0) and c = (0,0,1), we'd just do that same process, dealing with coordinates: c1(1,0,0) + c2(0,1,0) + c3(0,0,1) = (0,0,0).

The confusion arises because we are multiplying constants by whole vectors, not coordinates.
 
  • #5
"That's the process you would normally use when dealing with coordinates."

This definition applies whether you know the coordinates or not.

"The confusion arises because we are multiplying constants by whole vectors, not coordinates."

Again, you are making a false distinction. Scalar multiplication and vector addition are defined in any vector space, so an equation such as c1(u + v) + c2(u - v) + c3(u - 2v + w) = 0 makes sense whether or not you have coordinates for the vectors. Try what I suggested.
 
  • #6
I didn't know it applied to abstract vectors too. Thanks.
Is there anywhere I can read up on that to get a better feel for the theory behind it?

And could I use the same principle to prove linear dependence in a problem, again with three vectors (u, v and w), but not necessarily linearly independent, such that w = 2u + v?
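
If I rearrange that relation, I get

[tex]2u + v - w = \vec{0}[/tex]

a linear combination with coefficients 2, 1 and -1, which are not all zero. I think that is exactly the definition of linear dependence. Is that right?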
 
  • #7
Presumably you have a textbook. Look up the definitions of linear independence and linear dependence.
 
