Need help with linear independence proof

In summary, the conversation discusses two problems. The first asks to show that a list of difference vectors built from a linearly independent list is itself linearly independent; the suggested proof is by contradiction, expanding each difference vector to exhibit a nontrivial dependence among the original vectors. The second asks to prove that the set of noninvertible operators on a finite-dimensional space V with dim V > 1 is not a subspace of L(V); the hint offered is to think about matrices.
  • #1
dyanmcc
Hi,

I don't know how to do the following proof:

If (v_1, ..., v_n) is linearly independent in V, then so is the list (v_1 - v_2, v_2 - v_3, ..., v_{n-1} - v_n, v_n).

I can do the proof if I replace 'linearly independent' with 'spans V' ...so what connection am I missing?

Thanks much!
 
  • #2
Prove it by contradiction. If the second list of vectors were not linearly independent, then you could write 0 as a nontrivial linear combination of them. Then simply expand out each vector to show that this would make v_1, ..., v_n linearly dependent as well, contradicting the hypothesis.
 
  • #3
Great, thanks. Here's another one for you... Prove that if V is finite-dimensional with dim V > 1, then the set of noninvertible operators on V is not a subspace of L(V).
 
  • #4
What have you done for that second problem, dyanmcc? Start by thinking about matrices.
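Since the thread does not return to this problem, here is one standard way the matrix hint can be cashed out, written for dim V = 2 (the specific matrices are my own illustration, not from the thread). Take

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

Both A and B are noninvertible (each has determinant 0), yet A + B = I is invertible, so the set of noninvertible operators is not closed under addition and therefore cannot be a subspace of L(V).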
 
  • #5
0rthodontist said:
Prove it by contradiction. If the second list of vectors were not linearly independent, then you could write 0 as a nontrivial linear combination of them. Then simply expand out each vector to show that this would make v_1, ..., v_n linearly dependent as well, contradicting the hypothesis.
Also be sure to prove that there is still a nonzero coefficient after you expand the vectors out (look at the first nonzero coefficient before expansion).
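To make the hint concrete, here is a sketch of the regrouping (the coefficient names a_1, ..., a_n are my own labels, not from the thread). Suppose

$$a_1(v_1 - v_2) + a_2(v_2 - v_3) + \dots + a_{n-1}(v_{n-1} - v_n) + a_n v_n = 0.$$

Collecting the coefficient of each v_k gives

$$a_1 v_1 + (a_2 - a_1) v_2 + \dots + (a_n - a_{n-1}) v_n = 0,$$

and linear independence of v_1, ..., v_n forces a_1 = 0, a_2 - a_1 = 0, ..., a_n - a_{n-1} = 0, hence every a_k = 0. In particular, if a_k were the first nonzero coefficient before expansion, the coefficient of v_k after expansion would be a_k - a_{k-1} = a_k ≠ 0, which is the contradiction.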
 

What is linear independence?

Linear independence is a concept in linear algebra: a set of vectors is linearly independent if none of them can be written as a linear combination of the others. Equivalently, the only linear combination of the vectors that equals the zero vector is the one in which every coefficient is zero. (Independence does not by itself require the set to span the space; spanning is a separate condition.)
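In symbols (my own restatement of the definition above): the list (v_1, ..., v_n) is linearly independent when

$$a_1 v_1 + a_2 v_2 + \dots + a_n v_n = 0 \quad\Longrightarrow\quad a_1 = a_2 = \dots = a_n = 0.$$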

Why is proving linear independence important?

Proving linear independence is important because it is one of the two conditions, along with spanning, for a set of vectors to form a basis of a vector space. This is crucial in many areas of mathematics and science, such as solving systems of equations and understanding the behavior of physical systems.

What is the process for proving linear independence?

The process for proving linear independence involves setting a linear combination of the vectors in the set equal to the zero vector and then showing that the only solution is the trivial one, in which all of the coefficients are zero. If there is any other solution, then the vectors are linearly dependent.
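As a small worked example (my own, not from the thread): to check that (1, 0) and (1, 1) are linearly independent in R^2, suppose

$$a(1, 0) + b(1, 1) = (0, 0).$$

That gives a + b = 0 and b = 0, hence a = b = 0, so the only solution is the trivial one and the two vectors are independent.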

What are some common techniques for proving linear independence?

Some common techniques for proving linear independence include Gaussian elimination (row reduction), computing a determinant when the number of vectors equals the dimension of the space, and checking the rank of the matrix whose columns are the vectors; the Gram-Schmidt process can also expose dependence, since a dependent vector reduces to the zero vector during orthogonalization. These techniques can simplify the work of deciding whether a set of vectors is linearly independent.
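For a quick computational check of this kind, here is a minimal sketch in Python using NumPy (the function name check_independence and the sample vectors are my own choices; this is an illustration, not a prescribed method). It stacks the vectors as columns of a matrix and compares the rank of that matrix with the number of vectors.

import numpy as np

def check_independence(vectors):
    # Stack the vectors as columns; they are linearly independent
    # exactly when the rank of the matrix equals the number of vectors.
    matrix = np.column_stack(vectors)
    return np.linalg.matrix_rank(matrix) == len(vectors)

# (1, 0) and (1, 1) are independent; (1, 2) and (2, 4) are not.
print(check_independence([np.array([1, 0]), np.array([1, 1])]))  # True
print(check_independence([np.array([1, 2]), np.array([2, 4])]))  # False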

How can I apply the concept of linear independence in real life?

The concept of linear independence has many real-world applications in engineering, physics, and computer science. In physics, for example, it is used to determine whether a system of equations describing the forces acting on an object has a unique solution. In computer science, linear independence appears in data analysis and machine learning, for example when checking whether the features of a dataset are redundant.
