Are the Vectors (1,x) and (1,y) Linearly Independent in R^2?

In summary, the vectors (1,x) and (1,y) are linearly independent in R^2 exactly when x does not equal y. Likewise, the vectors (1,x,x^2), (1,y,y^2), and (1,z,z^2) are linearly independent in R^3 exactly when x, y, and z are pairwise distinct. This extends to R^n: the n vectors (1, x_i, x_i^2, ..., x_i^(n-1)), for i = 1, ..., n, are linearly independent exactly when the parameters x_1, ..., x_n are all different.
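As a quick numerical sanity check of that summary (a minimal sketch in Python with NumPy, not part of the original thread), the 2x2 and 3x3 determinants vanish exactly when two of the parameters coincide:
Code:
import numpy as np

# 2x2 case: det [[1, x], [1, y]] = y - x, which is zero exactly when x == y.
x, y = 2.0, 5.0
A = np.array([[1.0, x],
              [1.0, y]])
print(np.linalg.det(A))    # approximately 3 (nonzero), so (1,x) and (1,y) are independent

# 3x3 case: rows (1, t, t^2); the determinant factors as (y-x)(z-x)(z-y).
x, y, z = 2.0, 5.0, 5.0    # two equal parameters
B = np.array([[1.0, x, x**2],
              [1.0, y, y**2],
              [1.0, z, z**2]])
print(np.linalg.det(B))    # approximately 0, so the three vectors are dependent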
  • #1
fk378

Homework Statement


When are the vectors (1,x) and (1,y) linearly independent in R^2? When are the vectors (1,x,x^2), (1,y,y^2), and (1,z,z^2) linearly independent? Generalize to R^n.


The Attempt at a Solution


At first, I think I misconstrued the question and I ended up finding basis vectors. ie, this is what I did for R^2:

In order for the vectors (1,x) and (1,y) to be linearly independent, there must exist some c1,c2 such that
c1(1,x)+ c2(1,y)= (0,0)
then we have c1=-c2
c1x=-c2y

Using the first equation you have C=(c1,c2)=c2(-1,1) and so the basis vector is (-1,1).

I don't really know why i did this honestly. I just saw linearly independent, and from there I got to my last step. Then upon actually reading the question again, I realized that for them to be linearly independent their dot product should be 0. Also I guess my initial way of solving the problem was wrong since I should have gotten that c1=c2=0, right?

So anyway if the dot product=0 then
<(1,x),(1,y)> = 1+xy and we want this equal to 0
--> xy=-1.

Any help?
 
  • #2
fk378 said:
In order for the vectors (1,x) and (1,y) to be linearly independent, there must exist some c1,c2 such that
c1(1,x)+ c2(1,y)= (0,0)
then we have c1=-c2
c1x=-c2y

If c1 = -c2 with c2 != 0, then c1x = -c2y becomes -c2x = -c2y, which implies that x = y.
In that case, you have found a nontrivial solution to the equation c1(1, x) + c2(1, y) = (0,0).
This means that these vectors are linearly dependent. In fact, they are the same vector.

If x != y, then (1, x) and (1, y) point in different directions, and so must be linearly independent.

Can you extend these ideas to the rest of your problem? It's very simple to tell whether two vectors form a dependent set: one of them is a constant multiple of the other. It's not as easy to tell when you have more than two vectors, but you can always go back to the definition of linear dependence; namely, that c1*v1 + c2*v2 + ... + cn*vn = 0 has exactly one solution (a linearly independent set of vectors) or infinitely many solutions (a linearly dependent set).
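To make that definition concrete, here is a small illustrative sketch (my own addition, in Python with NumPy; the helper name is_independent is made up) that tests whether c1*v1 + ... + cn*vn = 0 has only the trivial solution by checking the rank of the matrix whose columns are the vectors:
Code:
import numpy as np

def is_independent(vectors):
    """Return True if the given vectors form a linearly independent set.

    c1*v1 + ... + cn*vn = 0 has only the trivial solution exactly when
    the matrix with the vectors as columns has rank n.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_independent([(1, 2), (1, 3)]))   # True:  x = 2, y = 3, x != y
print(is_independent([(1, 2), (1, 2)]))   # False: x = y, the same vector twice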
 
  • #3
Does it have to do with being orthogonal?
 
  • #4
Not in this problem. Vectors that are orthogonal have dot products that are zero, and I don't see any of that going on here.
 
  • #5
Well one vector cannot be a multiple of the other so we cannot have
c1(1,x)=c2(1,y)
?
 
  • #6
Why can't a vector be a multiple of another? If x = y, then they are the same vector, hence each one is 1 times the other, and so the two are linearly dependent.

If x != y, they are different vectors; since their first components are both 1, neither is a scalar multiple of the other, hence they form a linearly independent set.
 
  • #7
So x cannot equal y. (Is that what != means?)
 
  • #8
Yes. It's notation that comes from the "C" programming language.
 
  • #9
But is that the only answer? That x cannot = y?
 
  • #10
It's the only answer to this question:
When are the vectors (1,x) and (1,y) linearly independent in R^2?

Think about it graphically. I.e., graph these two vectors for two cases: when x = y and when x != y.
 
  • #11
So then for R^3 it would be when x!=y!=z and x^2 != y^2 != z^2?
 
  • #12
fk378 said:
So then for R^3 it would be when x!=y!=z and x^2 != y^2 != z^2?

Maybe, but maybe not. Do the same thing you did for the vectors (1, x) and (1, y); that is, solve the equation c1*(1, x, x^2) + c2*(1, y, y^2) + c3*(1, z, z^2) = 0 for the constants c1, c2, and c3. Clearly c1 = c2 = c3 = 0 is a solution, which is always the case whether the set of vectors is linearly dependent or linearly independent. This set of three vectors is linearly dependent if there is a nontrivial solution (one for which at least one of the ci's is not zero).

It's very easy to tell if two vectors are linearly dependent/independent. If dependent, one of them will be a multiple of the other. When you have more than two vectors, you can't tell as easily.
 
  • #13
Okay, when I solved
c1(1,x,x^2) + c2(1,y,y^2) + c3(1,z,z^2)=0

I got C=(c1,c2,c3)=c2(-1,1,0) + c3(-1,0,1).

But aren't these the basis vectors?
 
  • #14
fk378 said:
Okay, when I solved
c1(1,x,x^2) + c2(1,y,y^2) + c3(1,z,z^2)
You can't solve that--it's not an equation! Look at the equation that I wrote. From it you should get three equations in c1, c2, and c3.
fk378 said:
I got C=(c1,c2,c3)=c2(-1,1,0) + c3(-1,0,1).

But aren't these the basis vectors?
This doesn't make any sense to me. You're trying to determine the conditions on x, y, and z for the three vectors to be linearly dependent. The goal is not to find a basis for <whatever>.
 
  • #15
For R^3 I got
c1= [-c3(1,z,z^2) - c2(1,y,y^2)] / (1,x,x^2)
 
  • #16
And how do you propose to divide by (1, x, x^2)? Look at what I said in post 14.
 
  • #17
Okay, so
(c1, c1x, c1x^2)+(c2,c2y,c2y^2)+(c3,c3z,c3z^2)=0
(c1+c2+c3),(c1x+c2y+c3z),(c1x^2+c2y^2+c3z^2)=0
c1=-c2-c3?

Is that right at all? If not, I don't understand what you're asking.
 
  • #18
fk378 said:
Okay, so
(c1, c1x, c1x^2)+(c2,c2y,c2y^2)+(c3,c3z,c3z^2)=0
OK, I buy this (above).
fk378 said:
(c1+c2+c3),(c1x+c2y+c3z),(c1x^2+c2y^2+c3z^2)=0
This I don't buy, and I don't even know what it means. Recall that I asked you to write three equations. Each of them will have c1, c2, and c3, so you should be able to solve this system for these numbers. Clearly, if c1=c2=c3=0, that works, but what we're interested in is other solutions where not all of the ci's are zero.

In the first equation above, each component of the vector sum must equal zero. Group together the constant terms, the first-degree terms, and the second-degree terms; you should get three equations.
fk378 said:
c1=-c2-c3?

Is that right at all? If not, I don't understand what you're asking.
 
  • #19
Okay, doing what I did for c1 and c2, like you said to do, I got
C=(c1,c2,c3)=c2(-1,1,0) + c3(-1,0,1)
 
  • #20
fk378 said:
Okay, doing what I did for c1 and c2, like you said to do, I got
C=(c1,c2,c3)=c2(-1,1,0) + c3(-1,0,1)
The three equations I was talking about are
c1 + c2 + c3 = 0
c1x + c2y + c3z = 0
c1x^2 + c2y^2 + c3z^2 = 0

If you put this system into an augmented matrix, you have:
Code:
[1    1    1    | 0]
[x    y    z    | 0]
[x^2  y^2  z^2  | 0]
You don't actually need the far-right column, since it will never change, so I will omit it in further work.
This matrix can be row-reduced to echelon form, like so:
Code:
[1   1      1     ]
[0  y-x   z-x     ]
[0   0  (z-x)(z-y)]
(You should check my work.)

The matrix just above tells us about the solutions c1, c2, and c3. Under what conditions will there be a solution for c1, c2, c3, where at least one of these numbers is not zero? Those are exactly the same conditions for the set of vectors {(1, x, x^2), (1, y, y^2), (1, z, z^2)} to be linearly dependent.
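One way to check that row reduction (a sketch added here, using SymPy; not part of the original post) is to let a computer algebra system compute and factor the determinant of the coefficient matrix, which vanishes exactly when two of the parameters coincide:
Code:
import sympy as sp

x, y, z = sp.symbols('x y z')
M = sp.Matrix([[1,    1,    1   ],
               [x,    y,    z   ],
               [x**2, y**2, z**2]])

# The determinant factors into (y - x), (z - x), (z - y) (up to sign),
# so a nontrivial solution for (c1, c2, c3) exists exactly when
# x = y, x = z, or y = z.
print(sp.factor(M.det()))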
 
  • #21
It would have to be that the determinant of the matrix does not equal 0.
But I don't see a pattern between the equation for this matrix and the equation for a 2x2 matrix to find the answer for R^n.
 
  • #22
fk378 said:
It would have to be that the determinant of the matrix does not equal 0.
No. The question was "Under what conditions will there be a solution for c1, c2, c3, where at least one of these numbers is not zero? Those are exactly the same conditions for the set of vectors {(1, x, x^2), (1, y, y^2), (1, z, z^2)} to be linearly dependent."
 
  • #23
If any of the c1,c2,c3 is not equal to zero, then that means the matrix cannot be in reduced-echelon form (i.e. every pivot position does *not* have a zero above it).
 
  • #24
fk378 said:
If any of the c1,c2,c3 is not equal to zero, then that means the matrix cannot be in reduced-echelon form (i.e. every pivot position does *not* have a zero above it).

Relative to the matrix below, "Under what conditions will there be a solution for c1, c2, c3, where at least one of these numbers is not zero?"

Code:
[1   1      1     ]
[0  y-x   z-x     ]
[0   0  (z-x)(z-y)]

By "under what conditions" I meant what constraints are there on x, y, and z for there to be nonzero solutions for c1, c2, and c3?
 
  • #25
If y=x or z=x then there would be nonzero solutions.
 
  • #26
Also if z = y.

Good!
Now, what does this mean? Remember that the problem statement was: When are the vectors (1, x, x^2), (1, y, y^2), and (1, z, z^2) linearly independent?
What you just said was that these vectors will be linearly dependent if x = y or x = z (and I added, if z = y).
The vectors will be linearly independent if it's not true that (x = y or x = z or z = y). This is equivalent to saying that x != y and x != z and z != y. More simply, the vectors are linearly independent if x, y, and z are all different.

Geometrically, we have three vectors pointing to different places in R^3. No one vector is a multiple of (or even equal to) any other vector. And no one vector is a linear combination of the other two.

In the first problem, it asked when the vectors (1, x) and (1, y) were linearly independent in R^2. That happens as long as x != y. IOW, as long as neither vector is a multiple of (or equal to) the other.

Look at what happened when we went from a set of two vectors in R^2 to a set of three vectors in R^3. There are only two variables (x and y) and one relationship in the vectors in R^2. Adding a dimension increased the complexity since there are now three variables (I'm not counting the squared variables since if you know x, y, and z, you know their squares) and three relationships to keep track of. Adding one more dimension will generate more than one more relationship among the four variables.

As an analogy, consider people and friendships. You and a friend are two people, with friendship as a single connection between you. If another person comes into the group, we can connect each person to one other as a friend. Three people: three friendships.
If we add another person, now there are six friendships. If you don't understand this, draw four dots and draw lines between them. Four dots, six connections. Each time another dot is added, there are quite a few more connectors that develop.
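To see the same pattern one dimension up, here is a short sketch (my own, using SymPy; the symbol names w, x, y, z are just illustrative) with four vectors (1, t, t^2, t^3) in R^4. The determinant factors into the six pairwise differences, matching the six "friendships" in the analogy above:
Code:
import sympy as sp

# Four vectors (1, t, t^2, t^3) in R^4, one for each parameter w, x, y, z.
w, x, y, z = sp.symbols('w x y z')
vectors = [sp.Matrix([1, t, t**2, t**3]) for t in (w, x, y, z)]
M = sp.Matrix.hstack(*vectors)

# The determinant factors into the six pairwise differences (up to sign),
# so the four vectors are independent exactly when w, x, y, z are all different.
print(sp.factor(M.det()))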
 
  • #27
Thank you for your help!
 

1) What does it mean for vectors to be linearly independent?

Linearly independent vectors are vectors that cannot be written as linear combinations of one another: no vector in the set can be expressed as a sum of scalar multiples of the other vectors. Equivalently, the only linear combination of the vectors that equals the zero vector is the one in which every coefficient is zero.

2) How do you determine if a set of vectors is linearly independent?

To determine if a set of vectors is linearly independent, you can use the determinant method or the rank method. The determinant method applies when the number of vectors equals the dimension of the space: form the square matrix with the given vectors as columns and compute its determinant. If the determinant is zero, the vectors are linearly dependent; if it is nonzero, they are linearly independent. The rank method works for any number of vectors: form a matrix with the vectors as rows and reduce it to row-echelon form. If there are no rows of zeros, the vectors are linearly independent.
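As an illustration of both methods (a sketch assuming Python with NumPy is available; the example vectors are made up), take the vectors (1, t, t^2) with t = 2, 3, 5:
Code:
import numpy as np

vectors = [(1, 2, 4), (1, 3, 9), (1, 5, 25)]   # (1, t, t^2) with t = 2, 3, 5

# Determinant method (square case: as many vectors as coordinates):
A = np.array(vectors).T                 # vectors as columns
print(np.linalg.det(A))                 # nonzero, so the set is linearly independent

# Rank method (works for any number of vectors):
B = np.array(vectors)                   # vectors as rows
print(np.linalg.matrix_rank(B) == len(vectors))   # True, so the set is independent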

3) Can a set of two vectors be linearly independent?

Yes, a set of two vectors can be linearly independent. Two vectors are linearly independent as long as neither is a scalar multiple of the other. If they are collinear (lie on the same line through the origin), then one is a scalar multiple of the other and they are linearly dependent.

4) What is the significance of linear independence in linear algebra?

Linear independence is an important concept in linear algebra because it allows us to form a basis for a vector space. A basis is a set of linearly independent vectors that can be used to represent any vector in that space. Linear independence also helps us solve systems of linear equations and perform other operations with matrices.

5) How does linear independence relate to span and dimension?

The span of a set of vectors is the set of all possible linear combinations of those vectors. If n vectors in an n-dimensional space are linearly independent, their span is the entire vector space; a smaller independent set spans only a lower-dimensional subspace. If the vectors are linearly dependent, their span has dimension smaller than the number of vectors. The dimension of a vector space is the number of vectors in any basis for that space, which is also the largest number of linearly independent vectors the space contains.
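As a small illustration of the span/dimension relationship (a sketch in Python with NumPy; the example vectors are made up), the rank of the matrix whose rows are the vectors gives the dimension of their span:
Code:
import numpy as np

# Three vectors in R^3; the third is the sum of the first two, so the set is
# dependent and its span is only a 2-dimensional subspace of R^3.
v1 = np.array([1.0, 2.0, 4.0])
v2 = np.array([1.0, 3.0, 9.0])
v3 = v1 + v2

dim_of_span = np.linalg.matrix_rank(np.vstack([v1, v2, v3]))
print(dim_of_span)   # 2, not 3: dependent vectors span a smaller subspace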
