Conflicting definitions of linear independence

Thread starter: Brian_D (Gold Member)
Homework Statement
Determine whether the vectors <1,0,0,0>, <0,1,0,0>, and <0,0,1,0> in ##\mathbb R^4## are linearly independent.
Relevant Equations
Not applicable.
I am confused about two apparently contradictory definitions of linear independence of a set of vectors. One definition is when the only solution to a homogeneous system is the trivial solution. However, it is also said that a set of equations cannot be independent if it contains a zero vector or is underdetermined. These definitions appear to be in conflict in the above problem. On the one hand, the homogeneous system in this case has only the trivial solution, indicating that the system is independent, which is the answer given by the textbook. On the other hand, however, the matrix contains a column of zeros and is underdetermined, indicating that it has infinitely many solutions and the vectors are not independent. Can anyone explain this paradox?
 
What precisely is "underdetermined" in this case?
 
A set of vectors is linearly dependent if it contains the zero vector, or if you can produce the zero vector by making linear combinations of the vectors. A zero column only implies that you can produce a zero row if the dimension of the vector space (number of elements) is equal to the number of vectors. (Prove this with Gaussian elimination: you will get a zero pivot for the last row, but only if there are enough rows.)
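A quick way to see those pivots, as a minimal sketch (using SymPy's row reduction, just one possible tool):

```python
from sympy import Matrix

# The three given vectors as rows; row-reduce to find the pivots.
A = Matrix([[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0]])

rref_form, pivots = A.rref()
print(pivots)  # (0, 1, 2): every row has a pivot, so no row reduces to zero
```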
 
PeroK said:
What precisely is "underdetermined" in this case?
"Underdetermined" in this case means that the fourth variable can be any real number, because the coefficients in the fourth column are zero. So the solution is x1=x2=x3=0 and x4=any real number. This would indicate that the trivial solution is not the only solution, and the system is therefore not linearly independent. In claiming that the trivial solution is the only solution, the textbook considered only the first three variables. But what about the fourth variable?
 
Brian_D said:
I am confused about two apparently contradictory definitions of linear independence of a set of vectors. One definition is when the only solution to a homogeneous system is the trivial solution. However, it is also said that a set of equations cannot be independent if it contains a zero vector or is underdetermined. These definitions appear to be in conflict
The second statement is not a "definition" of independence, it is a statement of (a proposed) fact about independence.
 
If you turn your vectors into an ##n \times m## matrix with ##n \neq m##, then the vectors are dependent iff (by definition) the matrix doesn't have maximal rank. In our case, that means they are dependent if the rank is less than 3.
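As a numerical illustration of the rank criterion (a minimal sketch with NumPy, taking the three given vectors as the rows of a ##3\times 4## matrix):

```python
import numpy as np

# The three given vectors as the rows of a 3x4 matrix.
A = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

# Maximal rank here means rank 3 (the number of vectors).
print(np.linalg.matrix_rank(A))  # 3, so the vectors are linearly independent
```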
 
willem2 said:
A set of vectors is linearly dependent if it contains the zero vector, or if you can produce the zero vector by making linear combinations of the vectors. A zero column only implies that you can produce a zero row if the dimension of the vector space (number of elements) is equal to the number of vectors. (Prove this with Gaussian elimination: you will get a zero pivot for the last row, but only if there are enough rows.)
OK, so if you apply this to the problem I stated, do you conclude that the homogeneous system in my problem is linearly dependent or independent?
 
WWGD said:
If you turn your vectors into an ##n \times m## matrix with ##n \neq m##, then the vectors are dependent iff (by definition) the matrix doesn't have maximal rank. In our case, that means they are dependent if the rank is less than 3.
I believe I understand what you have said. But I can't reconcile this with a column of zeros, which seems to indicate that the fourth variable can be any real number, and the homogeneous system therefore has infinitely many solutions.
 
FactChecker said:
The second statement is not a "definition" of independence, it is a statement of (a proposed) fact about independence.
Thank you for the clarification, but it does not materially change my question, unless the "proposed fact" is false.
 
  • #10
Brian_D said:
"Underdetermined" in this case means that the fourth variable can be any real number, because the coefficients in the fourth column are zero. So the solution is x1=x2=x3=0 and x4=any real number. This would indicate that the trivial solution is not the only solution, and the system is therefore not linearly independent. In claiming that the trivial solution is the only solution, the textbook considered only the first three variables. But what about the fourth variable?
You are given three vectors, which are linearly independent. If you add a 4th vector, the zero vector, then that extended set of vectors is linearly dependent.

You must work with the vectors you are given!
 
  • #11
Brian_D said:
Homework Statement: Determine whether the vectors <1,0,0,0>, <0,1,0,0>, and <0,0,1,0> in ##\mathbb R^4## are linearly independent.

I am confused about two apparently contradictory definitions of linear independence of a set of vectors.
The definition of linear independence of vectors is not contradictory, but many new students of linear algebra are nevertheless confused by it. If we call the vectors in your statement ##\vec v_1,\vec v_2##, and ##\vec v_3##, these vectors are linearly independent iff the only solution to the equation ##c_1\vec v_1 + c_2\vec v_2 + c_3\vec v_3=\vec 0## occurs when ##c_1=c_2=c_3=0##.

Note that for any set of vectors, there is always a solution in which the coefficients are all zero, whether or not the vectors are linearly independent. The difference between linear independence and linear dependence is whether that all-zero solution for the coefficients is the only one.
 
  • #12
PeroK said:
You are given three vectors, which are linearly independent. If you add a 4th vector, the zero vector, then that extended set of vectors is linearly dependent.

You must work with the vectors you are given!
Thank you. However, just considering the three given vectors, each of them gives zero as the value of the fourth coefficient. This seems to entail the following: (1) the fourth variable can be any real number; therefore (2) the homogeneous system (again, considering only the three given vectors) has infinitely many solutions; therefore, (3) the three equations are NOT linearly independent. Your thoughts?
 
  • #13
Brian_D said:
Thank you. However, just considering the three given vectors, each of them gives zero as the value of the fourth coefficient. This seems to entail the following: (1) the fourth variable can be any real number; therefore (2) the homogeneous system (again, considering only the three given vectors) has infinitely many solutions; therefore, (3) the three equations are NOT linearly independent. Your thoughts?
There is no 4th coefficient if there are only three vectors.
 
  • #14
Mark44 said:
The definition of linear independence of vectors is not contradictory, but many new students of linear algebra are nevertheless confused by it. If we call the vectors in your statement ##\vec v_1,\vec v_2##, and ##\vec v_3##, these vectors are linearly independent iff the only solution to the equation ##c_1\vec v_1 + c_2\vec v_2 + c_3\vec v_3=\vec 0## occurs when ##c_1=c_2=c_3=0##.

Note that for any set of vectors, there is always a solution in which the coefficients are all zero, whether or not the vectors are linearly independent. The difference between linear independence and linear dependence is whether that all-zero solution for the coefficients is the only one.
Thank you. However, it is NOT the case here "that
the only solution to the equation ##c_1\vec v_1 + c_2\vec v_2 + c_3\vec v_3=\vec 0## occurs when ##c_1=c_2=c_3=0##." When this condition is satisfied for the first three variables, the fourth variable can be any real number, because the fourth element in each of the three vectors is zero. This means that the homogeneous system has infinitely many solutions, and the solution set for all four coefficients is NOT unique.
 
  • #15
PeroK said:
There is no 4th coefficient if there are only three vectors.
Doesn't the fourth element of each vector represent a coefficient, namely zero?
 
  • #16
Brian_D said:
(1) the fourth variable can be any real number; therefore (2) the homogeneous system (again, considering only the three given vectors) has infinitely many solutions; therefore, (3) the three equations are NOT linearly independent. Your thoughts?
(3) is obviously not true. According to your definition no set of 3 vectors with 4 components can be linearly independent.
 
  • #17
Brian_D said:
Doesn't the fourth element of each vector represent a coefficient, namely zero?
Those are components of the vector. The coefficients are the scalar multiples of the vectors in a linear combination. In any case:
$$a\langle1,0,0,0 \rangle + b\langle 0,1,0,0 \rangle +c \langle 0,0,1,0 \rangle = \langle a, b, c, 0\rangle$$
And that final vector is only zero if ##a=b=c = 0##. Therefore, the vectors are linearly independent.

Moreover, you are never going to see a more manifestly linearly independent set! You should see immediately that they are linearly independent. They are three of the four standard basis vectors.
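If you want to see that combination computed symbolically, here is a minimal sketch with SymPy (the symbols a, b, c are just the coefficients above):

```python
from sympy import symbols, Matrix

a, b, c = symbols('a b c')

v1 = Matrix([1, 0, 0, 0])
v2 = Matrix([0, 1, 0, 0])
v3 = Matrix([0, 0, 1, 0])

# The general linear combination of the three given vectors.
combo = a*v1 + b*v2 + c*v3
print(combo.T)  # Matrix([[a, b, c, 0]]): the zero vector only when a = b = c = 0
```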
 
  • #18
Brian_D said:
Doesn't the fourth element of each vector represent a coefficient, namely zero?
As already pointed out, but I mention it for emphasis, the coefficients are the scalars that multiply the vectors, not the coordinates of the vectors themselves.

For your example, there are three vectors. Therefore, in a test for linear independence the equation will have three coefficients. It doesn't matter that the vectors are in ##\mathbb R^4##.
 
  • #19
PeroK said:
Those are components of the vector. The coefficients are the scalar multiples of the vectors in a linear combination. In any case:
$$a\langle1,0,0,0 \rangle + b\langle 0,1,0,0 \rangle +c \langle 0,0,1,0 \rangle = \langle a, b, c, 0\rangle$$
And that final vector is only zero if ##a=b=c = 0##. Therefore, the vectors are linearly independent.

Moreover, you are never going to see a more manifestly linearly independent set! You should see immediately that they are linearly independent. They are three of the four standard basis vectors.
Thank you, PeroK, this helps a lot. Without a fourth row vector, there cannot be a fourth coefficient. But when you say "And that final [column] vector is only zero if ##a=b=c = 0##," isn't that the same as saying that it is linearly dependent on the first three column vectors? Why does that not make the whole set of vectors linearly dependent?

As for the standard basis vectors, yes, I recognized them immediately but thought that the system is still underdetermined because of the final column of zeros.
 
  • #20
Mark44 said:
As already pointed out, but I mention it for emphasis, the coefficients are the scalars that multiply the vectors, not the coordinates of the vectors themselves.

For your example, there are three vectors. Therefore, in a test for linear independence the equation will have three coefficients. It doesn't matter that the vectors are in ##\mathbb R^4##.
Thank you, Mark44, this is very helpful. But I'm still trying to understand why it doesn't matter that the vectors are in R4.
 
  • #21
Brian_D said:
Thank you, Mark44, this is very helpful. But I'm still trying to understand why it doesn't matter that the vectors are in R4.
That was in reference to your example. If you have 5 vectors in ##\mathbb R^4##, they can't possibly be a linearly independent set. But if you have the same number or fewer vectors than the dimension of the space they belong to, the vectors may or may not be linearly independent. In your example you had 3 vectors in ##\mathbb R^4##, so the equation you would set up to check for linear independence will involve only those three vectors and their coefficients. You don't need (and shouldn't have) an equation with four coefficients. As someone else said, your equation should include only the vectors you have.
 
  • #22
From post #1:
Brian_D said:
I am confused about two apparently contradictory definitions of linear independence of a set of vectors. One definition is when the only solution to a homogeneous system is the trivial solution. However, it is also said that a set of equations cannot be independent if it contains a zero vector or is underdetermined. These definitions appear to be in conflict in the above problem.
I believe you are conflating two different concepts: independent equations and linearly independent vectors.
If you have a matrix equation of the form ##A\vec x = \vec 0##, where A is a matrix with 3 rows and 4 columns, with the rows being the vectors in your example; i.e. <1, 0, 0, 0>, <0, 1, 0, 0>, and <0, 0, 1, 0>, the rows are obviously linearly independent. The columns, however, are obviously linearly dependent as 1) there are four vectors in ##\mathbb R^3##, and 2) the last column vector is the zero vector. This system is underdetermined.

The three equations that correspond to the matrix equation above are:
x = 0
y = 0
z = 0
In each of them the coefficient of w is 0, i.e. 0w = 0, meaning that w is arbitrary.

Geometrically, the solution represents a line in ##\mathbb R^4## that passes through the origin.
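A small check of that solution set (a sketch with SymPy; the null space of the ##3\times 4## matrix is exactly the line described above):

```python
from sympy import Matrix

# The 3x4 coefficient matrix A whose rows are the three given vectors.
A = Matrix([[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0]])

# Solutions of A*x = 0: x = y = z = 0 with w free, a line through the origin.
print(A.nullspace())  # one basis vector, (0, 0, 0, 1): w is the only free direction
```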
 
  • #23
Mark44 said:
That was in reference to your example. If you have 5 vectors in ##\mathbb R^4##, they can't possibly be a linearly independent set. But if you have the same number or fewer vectors than the dimension of the space they belong to, the vectors may or may not be linearly independent. In your example you had 3 vectors in ##\mathbb R^4##, so the equation you would set up to check for linear independence will involve only those three vectors and their coefficients. You don't need (and shouldn't have) an equation with four coefficients. As someone else said, your equation should include only the vectors you have.
Thank you, this gives me a better understanding of it.
 
  • #24
Mark44 said:
From post #1:
I believe you are conflating two different concepts: independent equations and linearly independent vectors.
If you have a matrix equation of the form ##A\vec x = \vec 0##, where A is a matrix with 3 rows and 4 columns, with the rows being the vectors in your example; i.e. <1, 0, 0, 0>, <0, 1, 0, 0>, and <0, 0, 1, 0>, the rows are obviously linearly independent. The columns, however, are obviously linearly dependent as 1) there are four vectors in ##\mathbb R^3##, and 2) the last column vector is the zero vector. This system is underdetermined.

The three equations that correspond to the matrix equation above are:
x = 0
y = 0
z = 0
In each of them the coefficient of w is 0, i.e. 0w = 0, meaning that w is arbitrary.

Geometrically, the solution represents a line in ##\mathbb R^4## that passes through the origin.
Thank you, Mark44, this is also very helpful. Your explanations and those of PeroK and others in this forum have given me as good an understanding of linear independence as I think I can have at this stage of my study of linear algebra. Much appreciated, all!
 
  • #25
Brian_D said:
One definition is when the only solution to a homogeneous system is the trivial solution.
This cannot be a definition; there is only the trivial solution precisely when the rank of the system equals the number of variables.
Brian_D said:
However, it is also said that a set of equations cannot be independent if it contains a zero vector or is underdetermined.
Undetermined? Otherwise correct: any set of vectors that contains the zero vector is linearly dependent.
Brian_D said:
These definitions appear to be in conflict in the above problem. On the one hand, the homogenous system in this case has only the trivial solution, indicating that the system is independent, which is the answer given by the textbook.
I'm curious which textbook that is.
Brian_D said:
On the other hand, however, the matrix contains a column of zeros and is underdetermined, indicating that it has infinitely many solutions and the vectors are not independent. Can anyone explain this paradox?
What does undetermined mean?
Brian_D said:
"Underdetermined" in this case means that the fourth variable can be any real number, because the coefficients in the fourth column are zero.
Ah, I see: undetermined = free variable.

If there's a column of zeros, it simply means the respective variable is redundant. Formally, the number of free variables is equal to the number of variables minus the rank of the system. If you added a column of zeros, you added a free variable. There is no paradox.
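In code, that count of free variables looks like this (a minimal sketch with NumPy, using the matrix with the zero column that prompted the question):

```python
import numpy as np

# The coefficient matrix with the column of zeros that prompted the question.
A = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

n_variables = A.shape[1]             # 4 variables x1, x2, x3, x4
rank = np.linalg.matrix_rank(A)      # 3
print(n_variables - rank)            # 1 free variable: exactly the redundant x4
```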
 
  • #26
Thank you, nuuskur. My "textbook" is a McGraw Hill "Practice Makes Perfect" study guide, _Linear Algebra_, p. 81, problem c. I don't understand from your comments whether you agree or disagree with the authors that the system is linearly independent. I have been using the term "underdetermined," not "undetermined." Your comment that the fourth variable is redundant is consistent with something that PeroK said previously: "And that final [column] vector is only zero if ##a=b=c = 0##." So I have the same question for you that I had for PeroK, namely, isn't that the same as saying that the fourth column vector is linearly dependent on the first three column vectors? Why does that not make the whole set of vectors linearly dependent?
Brian_D said:
Thank you, PeroK, this helps a lot. Without a fourth row vector, there cannot be a fourth coefficient. But when you say "And that final [column] vector is only zero if ##a=b=c = 0##," isn't that the same as saying that it is linearly dependent on the first three column vectors? Why does that not make the whole set of vectors linearly dependent?

As for the standard basis vectors, yes, I recognized them immediately but thought that the system is still underdetermined because of the final column of zeros.
 
  • #27
The column of zeros would make the set of column vectors linearly dependent. But that does not imply the set of row vectors would be linearly dependent.

Brian_D said:
isn't that the same as saying that the fourth column vector is linearly dependent on the first three column vectors?
There is no designation like vector x is linearly dependent on vectors y,z,w. A system of vectors is either linearly dependent or linearly independent. You could say the column of zeros is a linear combination of the three other column vectors and therefore the set of column vectors is linearly dependent.
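One way to see the two different questions side by side (a minimal sketch with NumPy; "rows" and "cols" are just the two sets of vectors being compared):

```python
import numpy as np

A = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

rows = list(A)        # three vectors in R^4
cols = list(A.T)      # four vectors in R^3 (the last one is the zero vector)

print(np.linalg.matrix_rank(np.array(rows)))  # 3: the three row vectors are independent
print(np.linalg.matrix_rank(np.array(cols)))  # 3 < 4: the four column vectors are dependent
```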
 
  • #28
Brian_D said:
Thank you, nuuskur. My "textbook" is a McGraw Hill "Practice Makes Perfect" study guide, _Linear Algebra_, p. 81, problem c. I don't understand from your comments whether you agree or disagree with the authors that the system is linearly independent. I have been using the term "underdetermined," not "undetermined." Your comment that the fourth variable is redundant is consistent with something that PeroK said previously: "And that final [column] vector is only zero if ##a=b=c = 0##." So I have the same question for you that I had for PeroK, namely, isn't that the same as saying that the fourth column vector is linearly dependent on the first three column vectors? Why does that not make the whole set of vectors linearly dependent?
The definition of underdetermined is fewer equations than unknowns.

Your technique of putting the vectors into a 4x4 matrix is the problem. For three vectors, you should have a 4x3 matrix, with the vectors as columns. That's four equations in three coefficients/variables/unknowns. That is not underdetermined. Although, in this case, the 4th equation is trivial, so we really only have three equations.

Putting three vectors in a 4x4 matrix effectively adds a 4th vector, the zero vector, and makes the set linearly dependent. That's the mistake.

In general, if you have four vectors, you put them in a 4x4 matrix and have four equations in four unknowns. Again, that's not underdetermined.

If, however, you had five vectors, you would put them into a 5x4 matrix. That would be four equations in five unknowns. That would be underdetermined. In other words, any five vectors in ##\mathbb R^4## are linearly dependent. Whereas, with three or four vectors, the condition of being underdetermined is never satisfied, so you have to use the normal definition to determine linear dependence or independence.
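To make the shapes concrete, here is a minimal sketch (NumPy, with the three vectors as the columns of a ##4\times 3## matrix as described above):

```python
import numpy as np

# The three vectors as the COLUMNS of a 4x3 matrix:
# four equations in three coefficients, so not underdetermined.
A = np.column_stack(([1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 1, 0]))

print(A.shape)                   # (4, 3)
print(np.linalg.matrix_rank(A))  # 3 = number of vectors, so they are independent
```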
 
  • #29
To be honest, I can't see the point of this alternative condition. Note that it's not a definition of linear dependence. It's a sufficient condition for linear dependence but not necessary. Two vectors in ##\mathbb R^4## could be linearly dependent or not. Looking for an underdetermined set of equations doesn't help, as far as I can see.
 
  • #30
PeroK said:
The definition of underdetermined is fewer equations than unknowns.

Your technique of putting the vectors into a 4x4 matrix is the problem. For three vectors, you should have a 4x3 matrix, with the vectors as columns. That's four equations in three coefficients/variables/unknowns. That is not underdetermined. Although, in this case, the 4th equation is trivial, so we really only have three equations.

Putting three vectors in a 4x4 matrix effectively adds a 4th vector, the zero vector, and makes the set linearly dependent. That's the mistake.

In general, if you have four vectors, you put them in a 4x4 matrix and have four equations in four unknowns. Again, that's not underdetermined.

If, however, you had five vectors, you would put them into a 5x4 matrix. That would be four equations in five unknowns. That would be underdetermined. In other words, any five vectors in ##\mathbb R^4## are linearly dependent. Whereas, with three or four vectors, the condition of being underdetermined is never satisfied, so you have to use the normal definition to determine linear dependence or independence.
Thank you. To avoid confusion, I should have posted the matrix I was using, but for some reason when I post LaTeX code with my computer on this website, I only see the code, not the "finished product."

I was not putting the three vectors into a 4x4 matrix, but rather into a 3x4 matrix. So when I was referring to "the fourth column vector," I was referring to the fourth column vector in a 3x4 matrix. But I don't see why using a 3x4 matrix or a 4x3 matrix makes any difference. If we use a 4x3 matrix, then we have a row of zeros at the bottom instead of a column of zeros at the right. In either case, don't we have a matrix that is linearly dependent?

You said, "Although, in this case [a 4x3 matrix], the 4th equation is trivial, so we really only have three equations." I don't follow that. If the 4th equation is trivial because it can be expressed as a linear combination of the other 3 equations, doesn't that make the whole system linearly dependent?
 
  • #31
Brian_D said:
Thank you. To avoid confusion, I should have posted the matrix I was using, but for some reason when I post LaTeX code with my computer on this website, I only see the code, not the "finished product."
You have to keep refreshing the page.
Brian_D said:
I was not putting the three vectors into a 4x4 matrix, but rather into a 3x4 matrix.
The vectors should be columns in your matrix. It doesn't work if you make the vectors the rows. You should check this yourself.
 