Undergrad Eigenvectors for degenerate eigenvalues

Summary:
The discussion revolves around finding eigenvectors for a matrix with degenerate eigenvalues, specifically focusing on the repeated eigenvalue of 1. The user encounters difficulties using back substitution to derive eigenvectors, ultimately realizing that they have a single equation with three unknowns due to the redundancy in the equations. The solution involves expressing one variable in terms of two arbitrary constants, leading to a linear combination of two vectors. It is clarified that while orthogonal eigenvectors are preferred in quantum mechanics, they are not strictly necessary, and the provided vectors can be made orthogonal if desired. The conversation concludes with the understanding that the two vectors define a plane in R3, representing an infinite set of solutions.
dyn
I am looking at some linear algebra notes written for maths students, mainly to improve my quantum mechanics. I came across the following example - $$ \begin{pmatrix} 2 & -3 & 1 \\ 1 & -2 & 1 \\ 1 & -3 & 2 \end{pmatrix} $$
The example then gives the eigenvalues as 0 and 1 (doubly degenerate). It then calculates the eigenvectors using Gaussian elimination. This is where my problem arises: coming from a physics background I tried to find the eigenvectors for the repeated eigenvalue 1 using back substitution, but it doesn't seem to produce a solution this way. Am I doing something wrong, or is it possible for back substitution not to work while Gaussian elimination works?
The answer given for the eigenvectors is a linear combination of the two vectors ##(3, 1, 0)^T## and ##(-1, 0, 1)^T##. The quantum mechanics textbook I am using says that for degenerate eigenvalues one should choose two mutually orthogonal vectors, but the two vectors listed above are not orthogonal. Is the orthogonality just a preference in QM and not a requirement?
Thanks
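For anyone who wants to check the quoted numbers, here is a minimal numpy sketch (not part of the original notes, just a verification assuming the matrix is exactly as written above):

```python
import numpy as np

# The matrix from the example.
A = np.array([[2, -3, 1],
              [1, -2, 1],
              [1, -3, 2]], dtype=float)

# The eigenvalues should come out as 0 and 1 (with 1 repeated), up to rounding.
print(np.round(np.linalg.eigvals(A), 10))

# The two quoted vectors for eigenvalue 1: A v should equal v.
v1 = np.array([3.0, 1.0, 0.0])
v2 = np.array([-1.0, 0.0, 1.0])
print(np.allclose(A @ v1, v1), np.allclose(A @ v2, v2))  # True True
```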
 
You can turn them into an orthogonal pair by subtracting from one of them its projection onto the other.

Given two linearly independent vectors ##\vec u,\vec v##, the pair ##\vec u-\frac{\vec u\cdot \vec v}{\vec v\cdot\vec v}\vec v, \vec v## is orthogonal. You can check that by calculating ##(\vec u-\frac{\vec u\cdot \vec v}{\vec v\cdot\vec v}\vec v)\cdot \vec v##, which comes out to zero.
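As a concrete illustration of that projection step (a sketch only, using numpy and the two eigenvectors quoted in the question):

```python
import numpy as np

u = np.array([3.0, 1.0, 0.0])   # first quoted eigenvector
v = np.array([-1.0, 0.0, 1.0])  # second quoted eigenvector

# Subtract from u its projection onto v; the result is orthogonal to v.
u_orth = u - (u @ v) / (v @ v) * v

print(u_orth)       # [1.5 1.  1.5]
print(u_orth @ v)   # 0.0, so the new pair is orthogonal
```

The new vector is still an eigenvector for eigenvalue 1, since any linear combination of eigenvectors belonging to the same eigenvalue is again an eigenvector.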
 
So choosing the eigenvectors as orthogonal is just a matter of preference. Thanks. Any thoughts on why I can't calculate the eigenvectors by back substitution, but it can be done by Gaussian elimination?
 
If I apply a general vector ##(a, b, c)^T## to the eigenvalue equation with eigenvalue 1, I end up with 3 identical equations: ##a - 3b + c = 0##. How do I then proceed to end up with the given answer, which is equivalent to ##(3x - y, x, y)^T##?
 
The equation ##a - 3b + c = 0## can be written as ##a = 3b - c##, which just says that for any eigenvector with eigenvalue 1 whose 2nd and 3rd components are b and c, the first component is 3b - c.

Relabel a,b,c as x,y,z and you have the given answer.
 
dyn said:
If I apply a general vector ##(a, b, c)^T## to the eigenvalue equation with eigenvalue 1, I end up with 3 identical equations: ##a - 3b + c = 0##. How do I then proceed to end up with the given answer, which is equivalent to ##(3x - y, x, y)^T##?
Elaborating on what andrewkirk said, relabel the equation above as x - 3y + z = 0.

Then
x = 3y - z
y = y
z = z
If you look at the right-hand sides as a sum of two vectors, you get
##\begin{bmatrix} x \\ y \\ z \end{bmatrix} = y\begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix} + z\begin{bmatrix} -1 \\ 0 \\ 1\end{bmatrix}##

Here y and z on the right side can be considered arbitrary constants.
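If a numerical check helps, here is a short sketch (assuming numpy) showing that ##y(3, 1, 0)^T + z(-1, 0, 1)^T## satisfies ##A\vec x = \vec x## for arbitrary y and z:

```python
import numpy as np

A = np.array([[2, -3, 1],
              [1, -2, 1],
              [1, -3, 2]], dtype=float)

rng = np.random.default_rng(0)
for _ in range(3):
    y, z = rng.standard_normal(2)          # arbitrary constants
    x = y * np.array([3.0, 1.0, 0.0]) + z * np.array([-1.0, 0.0, 1.0])
    print(np.allclose(A @ x, x))           # True every time
```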
 
Thanks for your replies. So essentially, because I end up with 3 equations that are the same, I really have just one equation with 3 unknowns. So I take 2 of those unknowns to have arbitrary values and then express the remaining unknown in terms of the 2 arbitrary values.
 
dyn said:
Thanks for your replies. So essentially, because I end up with 3 equations that are the same, I really have just one equation with 3 unknowns. So I take 2 of those unknowns to have arbitrary values and then express the remaining unknown in terms of the 2 arbitrary values.
Yes. In the work I showed, you can take y = 1 and z = 0, and get one solution, and you can take y = 0, z = 1, to get another solution. Since y and z are completely arbitrary, you get a double infinity of solutions.

Geometrically, the two vectors I showed determine a plane in R3. Every point in this plane is some linear combination of those two vectors.
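A quick numerical way to see both points (again just a sketch with numpy): ##A - I## has rank 1, i.e. only one independent equation, so its null space (the eigenspace for eigenvalue 1) is a 2-dimensional plane in R3.

```python
import numpy as np

A = np.array([[2, -3, 1],
              [1, -2, 1],
              [1, -3, 2]], dtype=float)

r = np.linalg.matrix_rank(A - np.eye(3))
print(r)        # 1: the three equations collapse to a single one
print(3 - r)    # 2: two free parameters (y and z), i.e. a plane of solutions
```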
 