Question about vector independence

  • Context: Undergrad
  • Thread starter: MarcL
  • Tags: Independence, Vector
SUMMARY

The discussion centers on linear dependence and independence of vectors in ℝ³. It establishes that if a set S contains more vectors than the dimension of the space (r > n), then the set is linearly dependent. The worked example shows that two vectors can be either dependent or independent, depending on whether one is a scalar multiple of the other, and demonstrates how to decide this with augmented matrices and row reduction. The theorem clarifies that while two vectors in ℝ³ can be independent, any set of four or more vectors in ℝ³ must be dependent.

PREREQUISITES
  • Understanding of linear algebra concepts, specifically vector spaces.
  • Familiarity with the definitions of linear dependence and independence.
  • Knowledge of matrix operations, including row reduction and echelon forms.
  • Ability to work with augmented matrices in solving linear equations.
NEXT STEPS
  • Study the process of row reduction to echelon form in matrices.
  • Learn about the implications of the rank of a matrix on linear independence.
  • Explore examples of linear dependence in higher-dimensional vector spaces.
  • Investigate the geometric interpretation of linear independence and dependence in ℝ³.
USEFUL FOR

Students of linear algebra, mathematicians, and educators seeking to deepen their understanding of vector independence and its applications in various mathematical contexts.

MarcL
So I was reading my textbook and I confused myself about a theorem:

If ##S = \{v_1, v_2, \dots, v_r\}## is a set of vectors in ##\mathbb{R}^n## and ##r > n##, then ##S## is linearly dependent.

It doesn't make sense to me, because if we look at 2 vectors in ##\mathbb{R}^3## (let's say ##u## and ##v##),

we have ##u = (u_1, u_2, u_3)## and ##v = (v_1, v_2, v_3)##.
So I set up ##k_1(u_1, u_2, u_3) + k_2(v_1, v_2, v_3) = 0##.
If I use an augmented matrix:

$$\begin{pmatrix} u_1 & v_1 & 0 \\ u_2 & v_2 & 0 \\ u_3 & v_3 & 0 \end{pmatrix}$$

Then it would seem to me that the two vectors would be linearly dependent, no?
 
MarcL said:
So I was reading my textbook and I confused myself about a theorem:

If ##S = \{v_1, v_2, \dots, v_r\}## is a set of vectors in ##\mathbb{R}^n## and ##r > n##, then ##S## is linearly dependent.

It doesn't make sense to me, because if we look at 2 vectors in ##\mathbb{R}^3## (let's say ##u## and ##v##)

The theorem doesn't say that 2 vectors in ##\mathbb{R}^3## cannot be dependent. It doesn't say that there is dependence "if and only if ##r > n##".
 
But if I solve the system for a 3×2 matrix, I would end up with a free variable, no? (The third row, after I reduce the matrix to REF or RREF.) So that would make it linearly dependent.
 
MarcL said:
But if I solve the system for a 3×2 matrix, I would end up with a free variable, no? (The third row, after I reduce the matrix to REF or RREF.) So that would make it linearly dependent.

The theorem does not say that 2 vectors in ##\mathbb{R}^3## must be independent. There is no contradiction.
 
Well, I'm not sure I'm claiming a contradiction; it's more of a "what is wrong in my reasoning?" type of thing. I'll give a concrete example I just did; maybe my question will be clearer.

Is this set of vectors linearly dependent?

##(8,-1,3), (4,0,1)##

$$\begin{pmatrix} 8 & 4 & 0 \\ -1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix}$$

Interchange row 2 with row 1, then row 3 with the "new" row 2:

$$\begin{pmatrix} -1 & 0 & 0 \\ 3 & 1 & 0 \\ 8 & 4 & 0 \end{pmatrix}$$

We end up with two rows with leading ones, and one row that I can easily reduce to ##(0\ 0\ 0)## (the third one), so how isn't that linearly dependent?
 
MarcL said:
so how isn't that linearly dependent?

The two vectors ##(8,-1,3)## and ##(4,0,1)## in your example are not linearly dependent.

To determine whether two 3-dimensional vectors ##(u_1,u_2,u_3)## and ##(v_1,v_2,v_3)## are dependent, you need to find solutions ##x## and ##y##, not both zero, to the vector equation:

$$x(u_1,u_2,u_3) + y(v_1,v_2,v_3) = 0$$

Written in matrix form, this can be expressed as:

$$\begin{pmatrix} u_1 & v_1 \\ u_2 & v_2 \\ u_3 & v_3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$

Apparently you analyze this equation by writing the "augmented matrix" for it (where the constants on the right-hand side are put into the matrix), so we are dealing with:

$$\begin{pmatrix} u_1 & v_1 & 0 \\ u_2 & v_2 & 0 \\ u_3 & v_3 & 0 \end{pmatrix}$$

Suppose we row-reduce the matrix and the bottom row comes out all zeroes. What does that tell us?

If it reduces to $$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$ this implies that ##x = 0,\ y = 0## is the unique solution to the equations, so the two vectors are linearly independent.

If it reduces to something like $$\begin{pmatrix} 1 & -2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$ this implies there are solutions like ##x = 2, y = 1## and ##x = 4, y = 2## etc., so the two vectors are dependent.
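For the concrete pair above, this is easy to check with a computer algebra system; here is a minimal sketch, assuming sympy is available:

```python
from sympy import Matrix

# Columns of the coefficient matrix are u = (8, -1, 3) and v = (4, 0, 1).
A = Matrix([[8, 4],
            [-1, 0],
            [3, 1]])

# rref() returns the reduced row echelon form and the pivot columns.
R, pivots = A.rref()
print(R)       # Matrix([[1, 0], [0, 1], [0, 0]])
print(pivots)  # (0, 1): a pivot in every column, so x = y = 0 is the
               # only solution and the two vectors are independent.
```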
 
Oh I didn't know you guys had a matrix format that I could use... sorry about that

So, whether we get only the trivial solution is determined by whether each variable (##x## or ##y## in this case) ends up with a leading one? It just seems kind of odd that one row could be all zeros without affecting the dependence of the vectors.
 
MarcL said:
It just seems kind of odd that one row could be all zeros without affecting the dependence of the vectors.

You might be confusing two different techniques for testing independence. Another (and more common) technique for testing the independence of a set of n-dimensional vectors is to write the vectors as rows in a matrix and try to row-reduce the matrix toward the identity. If you get a row of zeros with that technique, you have managed to express the zero vector as a linear combination of the other rows, so the vectors are dependent.

If you row-reduce $$\begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix}$$ to something like $$\begin{pmatrix} 1 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$ then the vectors are dependent.

If you row-reduce it to something like $$\begin{pmatrix} 1 & 0 & -2 \\ 0 & 1 & 7 \end{pmatrix},$$ the vectors are independent.
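The same rows-as-vectors test can be sketched in sympy (again assuming it is available); a zero row after reduction signals dependence:

```python
from sympy import Matrix

# Independent pair from the thread, written as rows.
M = Matrix([[8, -1, 3],
            [4, 0, 1]])
print(M.rref()[0])  # Matrix([[1, 0, 1/4], [0, 1, -1]]): no zero row,
                    # so the vectors are independent.

# A dependent pair for contrast: the second row is twice the first.
D = Matrix([[1, -2, 0],
            [2, -4, 0]])
print(D.rref()[0])  # Matrix([[1, -2, 0], [0, 0, 0]]): a zero row,
                    # so the vectors are dependent.
```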
 
If you have two vectors to consider, it's very easy to determine whether they are linearly independent. If neither one is a constant multiple of the other, the two vectors are linearly independent.

It's a lot harder when you have three or more vectors. Even if no one vector is a multiple of any of the others, the set can still be linearly dependent. For example, consider S = {<1, 1, 0>, <1, -1, 0>, <2, 0, 0>}. No one vector is a multiple of any of the others, but the equation ##c_1 v_1 + c_2 v_2 + c_3 v_3 = 0## has a solution where not all of the constants are zero (for instance, ##c_1 = 1, c_2 = 1, c_3 = -1##), as the sketch below confirms.
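One way to see this concretely is to compute the nullspace of the matrix whose columns are the three vectors; a minimal sketch, assuming sympy is available:

```python
from sympy import Matrix

# Columns are v1 = <1, 1, 0>, v2 = <1, -1, 0>, v3 = <2, 0, 0>.
A = Matrix([[1, 1, 2],
            [1, -1, 0],
            [0, 0, 0]])

# A nontrivial nullspace vector gives constants c1, c2, c3, not all
# zero, with c1*v1 + c2*v2 + c3*v3 = 0, i.e. the set is dependent.
print(A.nullspace())  # [Matrix([[-1], [-1], [1]])], so
                      # c1 = c2 = 1, c3 = -1 works (up to sign).
```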
 
  • #10
MarcL said:
if we look at 2 vectors in ##\mathbb{R}^3## (let's say ##u## and ##v##),

we have ##u = (u_1, u_2, u_3)## and ##v = (v_1, v_2, v_3)##.
So I set up ##k_1(u_1, u_2, u_3) + k_2(v_1, v_2, v_3) = 0##.
That is not always possible: if ##u = (1,0,0)## and ##v = (0,1,0)##, it is not possible to find nonzero ##k_1## and ##k_2## that will give ##0##.
But it can certainly be true: if ##u = (1,0,0)## and ##v = (-1,0,0)##, clearly ##k_1 = 1## and ##k_2 = 1## would give ##0##.
So as long as the number of vectors is less than or equal to the dimension of the space, it can go either way.

However, in ##\mathbb{R}^3## it is impossible to find 4 vectors that form a linearly independent set. That is what the theorem says.
 
  • #11
The theorem says that for all positive integers ##n##, every subset of ##\mathbb R^n## with cardinality (=number of elements) greater than ##n## is linearly dependent. So in particular, it says that every subset of ##\mathbb R^3## with at least four elements is linearly dependent. It says nothing about sets with only two elements, like your ##\{u,v\}##.

Some subsets of ##\mathbb R^3## with two elements are linearly dependent, and some are linearly independent. ##\{(1,0,0),(2,0,0)\}## is an example of the former, and ##\{(1,0,0),(0,1,0)\}## is an example of the latter.
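
The theorem itself is easy to spot-check numerically: any four vectors in ##\mathbb R^3##, stacked as rows of a ##4\times 3## matrix, give a matrix of rank at most 3, so the rows can never be independent. A minimal sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four random vectors in R^3, stacked as the rows of a 4x3 matrix.
vectors = rng.standard_normal((4, 3))

# rank <= min(4, 3) = 3 < 4, so the four rows can never form an
# independent set, exactly as the theorem predicts.
print(np.linalg.matrix_rank(vectors))  # at most 3
```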
 
