Finding eigenvectors of [[1,-1,-1],[-1,1,-1],[-1,-1,1]]

  • Thread starter: thetrystero
  • Tags: Eigenvectors

Homework Help Overview

The discussion revolves around finding the eigenvectors of the 3x3 matrix [[1,-1,-1],[-1,1,-1],[-1,-1,1]] after determining its eigenvalues, which are 2, 2, and -1. Participants are exploring how to compute the eigenvectors corresponding to these eigenvalues.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants discuss the process of solving the linear system derived from the eigenvalue equations. Some express confusion about the nature of the solutions, particularly regarding the linear independence of eigenvectors and the significance of specific choices for parameters.

Discussion Status

There is an ongoing exploration of different approaches to finding eigenvectors, with some participants suggesting specific forms of solutions and questioning the uniqueness of certain choices. Guidance has been offered regarding the need for linearly independent eigenvectors, and the discussion reflects a productive examination of the problem without reaching a consensus.

Contextual Notes

Participants note that there are infinitely many possible eigenvectors for the eigenvalue 2, and they discuss the implications of choosing particular values for parameters in their solutions. There is a recognition of the constraints imposed by the requirement for linear independence among eigenvectors.

thetrystero
The eigenvalues of the 3x3 matrix [[1,-1,-1],[-1,1,-1],[-1,-1,1]] are 2, 2, and -1.
How can I compute the eigenvectors?
For the case lambda = 2, for example, I end up with the augmented matrix [[-1,-1,-1,0],[-1,-1,-1,0],[-1,-1,-1,0]], so I'm stuck at this point.


much appreciated.
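As a quick numerical cross-check (an editorial addition using NumPy, not part of the original thread), one can verify the stated eigenvalues and confirm that valid eigenvectors exist:

```python
import numpy as np

# The matrix from the problem statement
A = np.array([[1, -1, -1],
              [-1, 1, -1],
              [-1, -1, 1]], dtype=float)

# Numerical eigendecomposition; columns of `vecs` are unit eigenvectors
vals, vecs = np.linalg.eig(A)

# The eigenvalues come out as 2, 2, and -1 (in some order)
print(np.sort(vals))

# Sanity check: A v = lambda v for every returned pair
for i in range(3):
    assert np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])
```

This only confirms the eigenvalues numerically; the replies below show how to find the eigenvectors by hand.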
 
thetrystero said:
The eigenvalues of the 3x3 matrix [[1,-1,-1],[-1,1,-1],[-1,-1,1]] are 2, 2, and -1.
How can I compute the eigenvectors?
For the case lambda = 2, for example, I end up with the augmented matrix [[-1,-1,-1,0],[-1,-1,-1,0],[-1,-1,-1,0]], so I'm stuck at this point.


much appreciated.

So, you need to solve the linear system
[tex]x_1 + x_2 + x_3 = 0\\ x_1 + x_2 + x_3 = 0\\ x_1 + x_2 + x_3 = 0[/tex]
There are lots of solutions. In fact, you should be able to find two linearly independent solution vectors, corresponding to the double eigenvalue 2.

RGV
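The claim above — that the repeated equation x1 + x2 + x3 = 0 admits two linearly independent solution vectors, each an eigenvector for eigenvalue 2 — can be sketched numerically (editorial addition; the two vectors below are one possible choice):

```python
import numpy as np

A = np.array([[1., -1., -1.],
              [-1., 1., -1.],
              [-1., -1., 1.]])

# Two hand-picked solutions of x1 + x2 + x3 = 0
v1 = np.array([1., 0., -1.])
v2 = np.array([0., 1., -1.])

# Both satisfy A v = 2 v, so both are eigenvectors for eigenvalue 2
assert np.allclose(A @ v1, 2 * v1)
assert np.allclose(A @ v2, 2 * v2)

# And they are linearly independent: the 3x2 matrix [v1 v2] has rank 2
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
```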
 
I prefer to work from the basic definitions (perhaps I just never learned these more sophisticated methods!):
Saying that 2 is an eigenvalue of this matrix means there exists a non-zero vector such that
[tex]\begin{bmatrix}1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}= \begin{bmatrix}x - y - z \\ -x + y - z \\ -x - y + z\end{bmatrix}= \begin{bmatrix}2x \\ 2y \\ 2z\end{bmatrix}[/tex]
which gives the three equations x - y - z = 2x, -x + y - z = 2y, -x - y + z = 2z, which are, of course, equivalent to -x - y - z = 0, -x - y - z = 0, -x - y - z = 0. Those three equations are the same. We can, for example, say that z = -x - y, so that any vector of the form <x, y, -x-y> = <x, 0, -x> + <0, y, -y> = x<1, 0, -1> + y<0, 1, -1> is an eigenvector. Notice that the eigenvalue 2 not only has algebraic multiplicity 2 (it is a double root of the characteristic equation) but also geometric multiplicity 2 (the space of all corresponding eigenvectors is 2-dimensional).

Similarly, the fact that -1 is an eigenvalue means there are x, y, z satisfying x - y - z = -x, -x + y - z = -y, -x - y + z = -z, which are, of course, equivalent to 2x - y - z = 0, -x + 2y - z = 0, -x - y + 2z = 0. If we subtract the second equation from the first, we eliminate z to get 3x - 3y = 0, so y = x. Putting that into the third equation, -2x + 2z = 0, so z = x.
Any eigenvector corresponding to eigenvalue -1 is of the form <x, x, x> = x<1, 1, 1>.
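The multiplicities worked out by hand above can be confirmed exactly with a symbolic check (an editorial addition using SymPy):

```python
import sympy as sp

A = sp.Matrix([[1, -1, -1],
               [-1, 1, -1],
               [-1, -1, 1]])

# Exact eigenvalues with algebraic multiplicities: {2: 2, -1: 1}
assert A.eigenvals() == {2: 2, -1: 1}

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis);
# every basis vector really satisfies A v = lambda v, and the basis
# for eigenvalue 2 has two vectors (geometric multiplicity 2)
for val, mult, basis in A.eigenvects():
    if val == 2:
        assert len(basis) == 2
    for v in basis:
        assert A * v == val * v
```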
 
HallsofIvy said:
I prefer to work from the basic definitions (perhaps I just never learned these more sophisticated methods!):
Saying that 2 is an eigenvalue of this matrix means there exists a non-zero vector such that
[tex]\begin{bmatrix}1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}= \begin{bmatrix}x - y - z \\ -x + y - z \\ -x - y + z\end{bmatrix}= \begin{bmatrix}2x \\ 2y \\ 2z\end{bmatrix}[/tex]
which gives the three equations x - y - z = 2x, -x + y - z = 2y, -x - y + z = 2z, which are, of course, equivalent to -x - y - z = 0, -x - y - z = 0, -x - y - z = 0. Those three equations are the same. We can, for example, say that z = -x - y, so that any vector of the form <x, y, -x-y> = <x, 0, -x> + <0, y, -y> = x<1, 0, -1> + y<0, 1, -1> is an eigenvector. Notice that the eigenvalue 2 not only has algebraic multiplicity 2 (it is a double root of the characteristic equation) but also geometric multiplicity 2 (the space of all corresponding eigenvectors is 2-dimensional).

Similarly, the fact that -1 is an eigenvalue means there are x, y, z satisfying x - y - z = -x, -x + y - z = -y, -x - y + z = -z, which are, of course, equivalent to 2x - y - z = 0, -x + 2y - z = 0, -x - y + 2z = 0. If we subtract the second equation from the first, we eliminate z to get 3x - 3y = 0, so y = x. Putting that into the third equation, -2x + 2z = 0, so z = x.
Any eigenvector corresponding to eigenvalue -1 is of the form <x, x, x> = x<1, 1, 1>.

By that reasoning, can I not have <1,-1,0> and <-1,1,0> as my two solutions for eigenvalue 2? But wolframalpha says I need to have the case where y=0.

Also, I think the solution for eigenvalue -1 is <1,1,1>.
 
Your two listed vectors (for eigenvalue 2) are just multiples of each other. You need two *linearly independent* eigenvectors, such as <1,-1,0> and <1,0,-1>, or <0,-1,1> and <1,-1/2, -1/2>, etc. There are infinitely many possible pairs of vectors <x1,y1,z1> and <x2,y2,z2> that are linearly independent and satisfy the equation x+y+z=0. Any such pair will do.

RGV
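The dependence pointed out here — that <1,-1,0> and <-1,1,0> are multiples of each other, while swapping in another solution of x+y+z=0 gives a valid pair — can be checked with a rank computation (editorial sketch, NumPy):

```python
import numpy as np

# The two vectors proposed in the question are multiples of each other:
u = np.array([1., -1., 0.])
w = np.array([-1., 1., 0.])   # w = -u
assert np.linalg.matrix_rank(np.column_stack([u, w])) == 1  # dependent

# Replacing w with a different solution of x + y + z = 0 fixes this:
v = np.array([1., 0., -1.])
assert np.linalg.matrix_rank(np.column_stack([u, v])) == 2  # independent
```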
 
Ray Vickson said:
Your two listed vectors (for eigenvalue 2) are just multiples of each other. You need two *linearly independent* eigenvectors, such as <1,-1,0> and <1,0,-1>, or <0,-1,1> and <1,-1/2, -1/2>, etc. There are infinitely many possible pairs of vectors <x1,y1,z1> and <x2,y2,z2> that are linearly independent and satisfy the equation x+y+z=0. Any such pair will do.

RGV
Yes, I had thought of that, but found it uncomfortable that, of all the many possibilities, both my professor and wolframalpha chose the cases y=0 and z=0 as solutions, so I was wondering what made these two special compared to the others.
 
thetrystero said:
Yes, I had thought of that, but found it uncomfortable that, of all the many possibilities, both my professor and wolframalpha chose the cases y=0 and z=0 as solutions, so I was wondering what made these two special compared to the others.

There is nothing special about these choices, except that each has one component equal to 0, so they are, in a sense, the simplest possible. However, you could equally well take the cases x=0 and y=0, or x=0 and z=0.

RGV
 
If you have a vector that depends upon parameters, say, <x, y, -x- y> as I have above, then choosing x= 0, y= 1 gives you <0, 1, -1> and choosing x= 1, y= 0 gives <1, 0, -1>. That is, in effect, the same as writing <x, y, -x- y>= <x, 0, -x>+ <0, y, -y>= x<1, 0, -1>+ y<0, 1, -1>, showing that any such vector is a linear combination of <1, 0, -1> and <0, 1, -1>.
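The decomposition described above — any parameterized solution <x, y, -x-y> is automatically a linear combination of the two basis vectors obtained from (x,y) = (1,0) and (0,1) — can be sketched for an arbitrary parameter choice (editorial addition; the random parameters are just one example):

```python
import numpy as np

b1 = np.array([1., 0., -1.])   # from x = 1, y = 0
b2 = np.array([0., 1., -1.])   # from x = 0, y = 1

# Any choice of the parameters x, y gives a vector in the span of b1, b2
rng = np.random.default_rng(0)
x, y = rng.standard_normal(2)
v = np.array([x, y, -x - y])
assert np.allclose(v, x * b1 + y * b2)
```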
 
