Two Linear Algebra Proofs about Linear Independence

In summary, the conversation discusses two proofs related to linear independence. The first asks to show that a set S is linearly independent if and only if the system Ax = 0 has only the trivial solution, where the columns of A are the vectors in S. The second asks to prove that if {v1, v2, ... vp}, p >= 2, is a linearly independent set, then vp is not an element of Span{v1, v2, ... vp-1}. The poster needs a starting point for the first proof; the second proof appears to be valid.
  • #1 alec_tronn

Homework Statement


Proof 1:
Show that S = {v1, v2, ... vp} is a linearly independent set iff Ax = 0 has only the trivial solution, where the columns of A are the vectors in S. Be sure to state the relationship of the vector x to the vectors in S.

The attempt at a solution
As far as I can tell (from my book and class), the actual definition of linear independence is
"A set of vectors is linearly independent iff there exists only the trivial solution to c1v1 + c2v2+ ... + cpvp = 0, or similarly only a trivial solution exists to the equation Ax = 0 when the columns of A are composed of the vectors in S. "

I know many implications of this property, but not the steps between the term and the definition. Is there a more basic definition that I need to start this proof from? Are the two formulations I gave in my definition of linear independence really different enough to merit in-between steps?
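For what it's worth, the two formulations really are the same equation: by the definition of matrix-vector multiplication, Ax is the linear combination of the columns of A with the entries of x as the weights, so x is precisely the vector of coefficients (c1, ..., cp). A minimal NumPy sketch of this identity, with made-up vectors:

```python
import numpy as np

# Three arbitrary vectors in R^3, standing in for the vectors of S
# (hypothetical example data, not from the problem).
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])

A = np.column_stack([v1, v2, v3])  # the columns of A are the vectors of S
x = np.array([2.0, -1.0, 3.0])     # the entries of x are the weights c1, c2, c3

# A @ x is, by the definition of matrix-vector multiplication, exactly
# the linear combination c1*v1 + c2*v2 + c3*v3. So "Ax = 0 has only the
# trivial solution" and "c1v1 + ... + cpvp = 0 forces all ci = 0" are
# the same statement.
assert np.allclose(A @ x, 2.0*v1 - 1.0*v2 + 3.0*v3)
```

So a proof can simply consist of unpacking the product Ax column by column and observing that the two statements coincide.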

Homework Statement

Proof 2:
Prove or disprove: If {v1, v2, ... vp}, p >= 2, is a linearly independent set, then vp is not an element of Span{v1, v2, ... vp-1}.

The attempt at a solution

I just took a guess at this one:

Assume that {v1, v2, ... vp}, p >= 2, is linearly independent, and let vp be an element of Span{v1, v2, ... vp-1}.

By definition of spanning, vp is a linear combination of {v1, v2, ... vp-1} .

By definition of linear combination, there exist scalars c1, c2, ... cp-1 such that c1v1 + c2v2 + ... + cp-1vp-1 = vp.

Algebra gives us:
c1v1 + c2v2 + ... + cp-1vp-1 - vp = 0.

Setting cp = -1, this gives a solution other than the trivial solution (where c1, c2, ... cp are all zero) to the equation c1v1 + c2v2 + ... + cpvp = 0, because cp = -1 is nonzero.

This is a contradiction: {v1, v2, ... vp}, p >= 2, cannot be a linearly independent set while vp is an element of Span{v1, v2, ... vp-1}. So if {v1, v2, ... vp}, p >= 2, is a linearly independent set, then vp is not an element of Span{v1, v2, ... vp-1}, as desired. Q.E.D.

Any help or hints on either of these would be greatly appreciated. For the first one I just need a starting point, and for the second I need to know whether I made any mistakes or whether the proof is valid. Thanks a lot.
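As a numerical sanity check on the contradiction step (not a substitute for the proof), here is a small NumPy example with made-up vectors: if v3 is deliberately placed in Span{v1, v2}, the coefficient vector (c1, c2, -1) constructed in the proof is a nontrivial solution of Ax = 0.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = 4.0*v1 - 2.0*v2             # v3 lies in Span{v1, v2} by construction

A = np.column_stack([v1, v2, v3])
c = np.array([4.0, -2.0, -1.0])  # the (c1, ..., cp-1, -1) from the proof

# c is a nontrivial solution of Ax = 0, so {v1, v2, v3} is dependent.
assert np.allclose(A @ c, np.zeros(3))
```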
 
  • #2
I agree with the second proof. In the first problem, the statement to be proved and the definition seem to be identical; I am not clear on what is being asked.
 
  • #3
Regarding proof 1, do you know anything about the dimension of the vector space of solutions to a homogeneous system? How does it relate to the rank of the system matrix A? What *is* the rank of your matrix A, given that its columns are p linearly independent vectors?
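To spell the hint out a little (a sketch, with arbitrary example vectors): the rank-nullity theorem says dim(null space of A) = (number of columns of A) - rank(A), so p independent columns give rank p and a zero-dimensional solution space, i.e. only the trivial solution.

```python
import numpy as np

# Three independent vectors in R^4 as columns (arbitrary illustrative data).
A = np.column_stack([
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank       # rank-nullity theorem

# nullity == 0 means Ax = 0 has only the trivial solution.
print(rank, nullity)              # prints: 3 0
```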
 

1. What is linear independence in linear algebra?

Linear independence in linear algebra refers to a set of vectors in which no vector can be written as a linear combination of the others. In other words, no vector in the set is redundant, and each vector makes a unique contribution to the span of the set.

2. How do you prove linear independence?

To prove linear independence, start from the definition: set c1v1 + c2v2 + ... + cpvp = 0 and solve the resulting homogeneous system, for example by Gaussian elimination. If the only solution is the trivial one (all coefficients equal to 0), then the vectors are linearly independent.
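As an illustration of that recipe (a sketch using SymPy, with made-up vectors as the columns), row reduction tells you directly whether Ax = 0 has only the trivial solution: the columns are independent exactly when every column is a pivot column.

```python
from sympy import Matrix

# Columns are the vectors under test (arbitrary example data; here the
# third column is the sum of the first two, so the set is dependent).
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [2, 1, 3]])

rref_form, pivot_cols = A.rref()   # exact Gaussian elimination

# Independent iff every column is a pivot column, i.e. Ax = 0 has
# only the trivial solution.
independent = len(pivot_cols) == A.cols
print(independent)                 # prints: False
```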

3. What is a basis in linear algebra?

A basis in linear algebra is a set of linearly independent vectors that span a vector space. This means that any vector in the space can be written as a unique linear combination of the basis vectors. The number of vectors in the basis is called the dimension of the vector space.
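A quick NumPy illustration of the uniqueness claim (the basis vectors below are made up for the example): because basis vectors are independent and span the space, the matrix with them as columns is invertible, so the coordinates of any vector come from a single linear solve.

```python
import numpy as np

# A made-up basis of R^3, stored as the columns of B.
B = np.column_stack([
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0],
])

w = np.array([2.0, 3.0, 5.0])   # any target vector in R^3

# B is invertible, so the coordinate vector c with B @ c = w is unique.
c = np.linalg.solve(B, w)
assert np.allclose(B @ c, w)
```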

4. Can a set of vectors be linearly independent and dependent at the same time?

No, a set of vectors cannot be both linearly independent and dependent at the same time. The two properties are negations of each other: independence says the equation c1v1 + ... + cpvp = 0 has only the trivial solution, while dependence says it has some nontrivial solution, in which case at least one vector can be written as a linear combination of the others, making it redundant.

5. Why is linear independence important in linear algebra?

Linear independence is important in linear algebra because it allows us to understand the structure and properties of vector spaces. It helps us determine the dimension of a space, find a basis, and solve systems of equations. Linear independence also plays a crucial role in other areas of mathematics and physics, such as differential equations, optimization, and quantum mechanics.
