Linear algebra: linear independence

In summary: the conversation discusses linear independence and how to prove that u + v, u - v, and u - 2v + w are linearly independent, given that u, v, and w are linearly independent vectors in a vector space V. The suggested methods are to set up the equation in matrix form and check the determinant, or to multiply the vectors by unknown constants and show that the only solution is the one in which every constant is zero. A key point is that the definition applies to vectors in any vector space, not just to coordinate tuples. Linear dependence is also briefly discussed.
Fanta
If u, v and w are three linearly independent vectors of some vector space V, show that u + v, u - v and u - 2v + w are also linearly independent.

Okay, first of all, I know that:
$$\lambda_{1} u + \lambda_{2} v + \lambda_{3} w = \mathbf{0}$$

admits only the solution in which all the lambdas are zero, but how can I prove that the new vectors are linearly independent, knowing so little?

I'd say to set it up in matrix form and check whether the determinant is non-zero, or row-reduce: if there is no row of all zeros at the end, the set is linearly independent.

Edit: If matrices aren't allowed, write a linear combination of the new vectors with unknown constants, collect the coefficients of u, v and w, and show that each constant must be zero.

e.g.

Solve

a + b + c = 0 (coefficient of u)
a - b - 2c = 0 (coefficient of v)
c = 0 (coefficient of w)
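If matrices are allowed, the determinant test can also be run numerically. A minimal sketch using NumPy (the library choice is mine, not from the thread); the columns are the coordinates of the three new vectors with respect to u, v, w:

```python
import numpy as np

# Columns: coordinates of u+v, u-v, u-2v+w with respect to u, v, w.
M = np.array([
    [1,  1,  1],   # coefficients of u
    [1, -1, -2],   # coefficients of v
    [0,  0,  1],   # coefficients of w
])

# A non-zero determinant means M x = 0 has only the solution x = 0,
# so the three combinations are linearly independent.
print(round(np.linalg.det(M)))  # -2
```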

Fanta said:
If u, v and w are three linearly independent vectors of some vector space V, show that u + v, u - v and u - 2v + w are also linearly independent.

Okay, first of all, I know that:
$$\lambda_{1} u + \lambda_{2} v + \lambda_{3} w = \mathbf{0}$$

admits only the solution in which all the lambdas are zero, but how can I prove that the new vectors are linearly independent, knowing so little?
Show that the equation c1(u + v) + c2(u - v) + c3(u - 2v + w) = 0 has only the trivial solution c1 = c2 = c3 = 0, using the fact that u, v, and w are linearly independent.
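Carrying that suggestion out: grouping the coefficients of u, v, and w gives

$$c_{1}(u+v) + c_{2}(u-v) + c_{3}(u-2v+w) = (c_{1}+c_{2}+c_{3})u + (c_{1}-c_{2}-2c_{3})v + c_{3}w = \mathbf{0}.$$

Since u, v, and w are linearly independent, each grouped coefficient must be zero: c3 = 0, and then c1 + c2 = 0 together with c1 - c2 = 0 force c1 = c2 = 0. The only solution is the trivial one, which is exactly the definition of linear independence.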

That's the process you would normally use when dealing with coordinates.
But since we are dealing with whole vectors (instead of each vector's coordinates), would that really work?

For example, if I wanted to prove that vectors a, b and c were linearly independent: given a = (1,0,0), b = (0,1,0) and c = (0,0,1), we'd just do that same process, dealing with coordinates: c1(1,0,0) + c2(0,1,0) + ... = (0, 0, 0).

The confusion arises because we are multiplying constants by whole vectors, not by coordinates.

Fanta said:
That's the process you would normally use when dealing with coordinates.
This definition applies whether you know the coordinates or not.
Fanta said:
But since we are dealing with whole vectors (instead of each vector's coordinates), would that really work?

For example, if I wanted to prove that vectors a, b and c were linearly independent: given a = (1,0,0), b = (0,1,0) and c = (0,0,1), we'd just do that same process, dealing with coordinates: c1(1,0,0) + c2(0,1,0) + ... = (0, 0, 0).

The confusion arises because we are multiplying constants by whole vectors, not by coordinates.
Again, you are making a false distinction. Try what I suggested.

I didn't know it applied to abstract vectors too. Thanks.
Is there anywhere I can read more about that to get a better feel for the theory behind it?

And could I use the same principle to prove linear dependence in a problem, again with three vectors (u, v and w), not necessarily linearly independent, such that w = 2u + v?
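For reference, the definition settles that dependence example directly: the relation w = 2u + v rearranges to a nontrivial linear combination equal to the zero vector,

$$2u + v - w = \mathbf{0},$$

and since the coefficients (2, 1, -1) are not all zero, u, v, and w are linearly dependent by definition.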

Presumably you have a textbook. Look up the definitions of linear independence and linear dependence.

1. What is linear independence in linear algebra?

Linear independence refers to a set of vectors in a vector space in which no vector can be written as a linear combination of the others. Equivalently, the only linear combination of the vectors that equals the zero vector is the one in which every coefficient is zero.

2. How is linear independence determined?

Linear independence is determined through its complement, linear dependence: a set of vectors is linearly dependent if some nontrivial linear combination of them equals the zero vector, or equivalently if at least one of the vectors can be written as a linear combination of the others. If no such nontrivial combination exists, the set is linearly independent.

3. What is the significance of linear independence in linear algebra?

Linear independence is crucial in linear algebra because it tells us when a system of linear equations has a unique solution. It also lets us determine the dimension of a vector space and identify a basis for that space. Additionally, linear independence determines whether a square matrix is invertible: a matrix is invertible exactly when its columns are linearly independent.

4. How can linear independence be tested?

To test n vectors in an n-dimensional space, form the matrix whose columns are the vectors: if its determinant is non-zero, the vectors are linearly independent. More generally, Gaussian elimination works for any number of vectors: reduce the matrix to row-echelon form, and the vectors are linearly independent exactly when every column contains a pivot, i.e. when the rank equals the number of vectors.
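Both tests can be illustrated numerically. Here is a small sketch using NumPy (the library is an assumption on my part, not part of the FAQ); the determinant test applies only in the square case, while the rank test works for any number of vectors:

```python
import numpy as np

# Three vectors in R^3, stacked as columns.
indep = np.column_stack([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
dep   = np.column_stack([(1, 0, 0), (0, 1, 0), (1, 1, 0)])  # third = first + second

# Determinant test (square case): non-zero <=> linearly independent.
print(np.linalg.det(indep))  # 1.0 -> independent
print(np.linalg.det(dep))    # 0.0 -> dependent

# Rank test (any number of vectors): full column rank <=> independent.
print(np.linalg.matrix_rank(indep))  # 3
print(np.linalg.matrix_rank(dep))    # 2
```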

5. Can a set of vectors be linearly independent in one vector space but not in another?

Not in the usual sense: linear independence is a property of the vectors themselves, so a set that is linearly independent in a subspace remains independent in any larger space containing it. What does depend on the space is how many independent vectors are possible: any set containing more vectors than the dimension of the space is automatically linearly dependent. For example, no set of four vectors in a three-dimensional space can be linearly independent.
