Finding Eigenvectors of Matrix A


t_n_p

Homework Statement

I have a matrix A = [3 -2; -3 2],
then I perform det(A-λI) = 0 and find the solutions λ=0 and λ=5.

For the first case, λ=0, I perform A-0I, which of course is just A = [3 -2; -3 2]. I then reduce to row echelon form and find the values V1 = 2 and V2 = 3; therefore for λ=0 my eigenvector is [2; 3].

For the second case λ=5, I perform A-5I, which gives [-2 -2; -3 -3]. Reducing to row echelon form yields [-6 -6; 0 0][V1; V2] = [0; 0]

In linear form, the equation is as follows: -6V1 - 6V2 = 0

The answer provided is [V1; V2] = [-1; 1]. My question is: in a case where the coefficients are the same, how do you know that the eigenvector will be [-1; 1] rather than [1; -1], since -6(1) - 6(-1) = 0 also satisfies the equation?

Thanks
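(As a quick numerical cross-check, here is a sketch using numpy. Note that numpy.linalg.eig normalizes eigenvectors to unit length, so the columns it returns come back as scalar multiples of [2; 3] and [-1; 1] rather than those exact vectors.)

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [-3.0, 2.0]])

# det(A - lambda*I) = 0 has solutions lambda = 0 and lambda = 5.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Sort so the order is predictable: lambda = 0 first, then lambda = 5.
order = np.argsort(eigenvalues)
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

v0 = eigenvectors[:, 0]   # eigenvector for lambda = 0, a multiple of [2, 3]
v5 = eigenvectors[:, 1]   # eigenvector for lambda = 5, a multiple of [-1, 1]

# Check the defining equation A v = lambda v for both.
assert np.allclose(A @ v0, 0.0 * v0)
assert np.allclose(A @ v5, 5.0 * v5)
```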
 
There are infinitely many eigenvectors for every eigenvalue, though they need not be linearly independent. (1,-1) and (-1,1) are scalar multiples of each other; they differ by a factor of -1 alone. Any nonzero multiple of this particular eigenvector also satisfies the equation, because you can cancel the constants on both sides of the matrix equation [tex]Ax=\lambda x[/tex]
 
If v is an eigenvector, then c*v is also an eigenvector for ANY nonzero constant c, since A(cv)=cA(v)=c*lambda*v=lambda*(cv). So [-1,1] and [1,-1] are both eigenvectors; you can choose either one. For the eigenvalue 0 you could also have found the eigenvector to be [-2,-3]. That's also fine.
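That closure under scaling is easy to verify numerically; a minimal sketch with numpy, using the eigenvector [-1, 1] for λ=5 from above:

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [-3.0, 2.0]])

v = np.array([-1.0, 1.0])   # an eigenvector for lambda = 5
lam = 5.0

# A(c*v) = c*A(v) = c*lam*v = lam*(c*v) for any nonzero c,
# so every nonzero multiple of v is again an eigenvector.
for c in [1.0, -1.0, 2.5, -100.0]:
    w = c * v
    assert np.allclose(A @ w, lam * w)
```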
 
Thanks.
 
I have another question. I have a matrix A = [0 -1 0; -4 0 0; 0 0 2]. I have found the characteristic equation to be (-1)^3(λ+2)(λ-2)^2 = 0, and hence my eigenvalues are λ=-2 and 2. In the answers it says the eigenvectors are [1; 2; 0], [1; -2; 0] and [0; 0; 1].

How can you have three eigenvectors from two eigenvalues? Yes, I realize there are two solutions λ=2, but surely they provide the same eigenvectors?

Note: for λ=2 I found my eigenvector to be [1; -2; 0], and hence I am suspicious about the [0; 0; 1] eigenvector.
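(A numerical sanity check, sketched with numpy: for this matrix, eig returns the eigenvalue 2 twice and -2 once, together with three linearly independent eigenvector columns, matching the three answers given.)

```python
import numpy as np

A = np.array([[0.0, -1.0, 0.0],
              [-4.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalue 2 is repeated, -2 is simple.
assert np.allclose(np.sort(eigenvalues), [-2.0, 2.0, 2.0])

# The three eigenvector columns are linearly independent:
# the matrix whose columns they are has full rank.
assert np.linalg.matrix_rank(eigenvectors) == 3
```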
 
There can be more than one linearly independent eigenvector for each eigenvalue. There's nothing wrong with that.
 
Can you explain how you get the eigenvector [0; 0; 1]?
 
Try it! Exactly like you did before.

Although I prefer the more fundamental:
[tex]\left[\begin{array}{ccc}0 & -1 & 0 \\ -4 & 0 & 0\\ 0 & 0 & 2\end{array}\right]\left[\begin{array}{c} x \\ y \\ z\end{array}\right]= \left[\begin{array}{c} 2x \\ 2y \\ 2z \end{array}\right][/tex]
directly from the definition of "eigenvector".

That gives three equations: -y= 2x, -4x= 2y, 2z= 2z. The last equation is true for all x, y, and z. From the first, y= -2x, so -4x= 2(-2x)= -4x is true for any x, y satisfying y= -2x. That is, <x, -2x, z> is an eigenvector for any x and z. In particular, if you take x= 1, z= 0, you get <1, -2, 0> and if you take x= 0, z= 1, you get <0, 0, 1>.
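The free choices of x and z above are exactly a basis of the null space of A - 2I; if sympy is available, the same computation can be sketched directly (nullspace returns the basis vectors, up to scaling):

```python
import sympy as sp

A = sp.Matrix([[0, -1, 0],
               [-4, 0, 0],
               [0, 0, 2]])

# Eigenvectors for lambda = 2 form the null space of A - 2I.
basis = (A - 2 * sp.eye(3)).nullspace()

# Two basis vectors: scalar multiples of <1, -2, 0> and <0, 0, 1>.
assert len(basis) == 2
for v in basis:
    assert (A - 2 * sp.eye(3)) * v == sp.zeros(3, 1)
```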
 
t_n_p said:
Yes, I realize there are two solutions λ=2, but surely they provide the same eigenvectors?

Hi t_n_p! :smile:

No. An n-repeated eigenvalue will have a whole n-dimensional subspace of eigenvectors.

For example the nxn matrix In has n eigenvalues = 1, but obviously any nx1 vector will be an eigenvector! :smile:
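(A quick numerical sketch of that: for the identity matrix, I v = 1·v holds for every vector v, so any nonzero vector is an eigenvector for the n-fold eigenvalue 1.)

```python
import numpy as np

n = 4
I_n = np.eye(n)

rng = np.random.default_rng(0)
v = rng.standard_normal(n)   # an arbitrary vector

# I_n has the single eigenvalue 1 repeated n times, and
# every nonzero vector is an eigenvector for it.
assert np.allclose(I_n @ v, 1.0 * v)
```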
t_n_p said:
For λ=2 I found my eigenvector to be [1; -2; 0] and hence I am suspicious about the [0; 0; 1] eigenvector

How did you get [1; -2; 0]? You'll probably find that the same method gave you [0; 0; 1], and you threw it away for some reason. :smile:
 
  • #10
HallsofIvy - in the notes we don't learn it that way.

Tiny tim - using the same method as I did before, obviously I get the same eigenvector if I'm using the same eigenvalue.

Here's a recap of what I tried.
-Got the characteristic equation as shown above
-Subbed the value λ=2 into my matrix and set up the following:

[-2 -1 0; -4 -2 0; 0 0 0][V1; V2; V3] = [0; 0; 0]

-Then reduced to row echelon form, giving -4V1 - 2V2 + 0V3 = 0

This is how I got the eigenvector [1; -2; 0]. Obviously I can see that [0; 0; 1] satisfies the equation; in fact [0; 0; x], where x is any number, will satisfy the equation. But how you get it using the way I found the eigenvector [1; -2; 0] is what I wish to know.

Thanks
 
  • #11
What definition of "eigenvector" did you learn?

-Then reduced to row echelon form, giving -4V1 - 2V2 + 0V3 = 0
Yes, that gives V2= -2V1. But do you see that V3 can be anything and still satisfy that equation? You can choose V1 and V3 to be anything, and then V2= -2V1. That's why the "eigenspace" corresponding to [itex]\lambda= 2[/itex] is two dimensional. As I said, you can choose V1 and V3 to be anything you like. "1" and "0" are particularly easy, and if you choose one to be 0 and the other to be 1, you are sure to get independent vectors. If you take V1= 1 and V3= 0 you get <1, -2, 0>, and if you take V1= 0, V3= 1, you get <0, 0, 1>.
 
  • #12
tiny-tim said:
Hi t_n_p! :smile:

No. An n-repeated eigenvalue will have a whole n-dimensional subspace of eigenvectors.

For example the nxn matrix In has n eigenvalues = 1, but obviously any nx1 vector will be an eigenvector! :smile:

tiny-tim, that's not necessarily true. For example, the matrix
[tex]\left[\begin{array}{cc}2 & 1 \\ 0 & 2\end{array}\right][/tex]
has 2 as a "double eigenvalue", but all eigenvectors are of the form <x, 0>: the eigenspace is only one-dimensional. That's why it cannot be "diagonalized".
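Numerically the defect is visible too; in a quick sketch with numpy, eig reports the double eigenvalue 2, but the two eigenvector columns it returns are parallel, so they span only a one-dimensional eigenspace:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
assert np.allclose(eigenvalues, [2.0, 2.0])   # double eigenvalue

# Both columns are (numerically) multiples of [1, 0]: the eigenvector
# matrix has rank 1, so A has no basis of eigenvectors and cannot be
# diagonalized.
assert np.linalg.matrix_rank(eigenvectors, tol=1e-8) == 1
```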
 
  • #13
HallsofIvy said:
What definition of "eigenvector" did you learn?


Yes, that gives V2= -2V1. But do you see that V3 can be anything and still satisfy that equation? You can choose V1 and V3 to be anything, and then V2= -2V1. That's why the "eigenspace" corresponding to [itex]\lambda= 2[/itex] is two dimensional. As I said, you can choose V1 and V3 to be anything you like. "1" and "0" are particularly easy, and if you choose one to be 0 and the other to be 1, you are sure to get independent vectors. If you take V1= 1 and V3= 0 you get <1, -2, 0>, and if you take V1= 0, V3= 1, you get <0, 0, 1>.

Yes, I see what you mean. My next question is whether there is a more "mathematical" basis for obtaining the eigenvector <0, 0, x>, where x can be any number. I can see "by eye" that <0, 0, x> satisfies the equation; I'm just wondering whether that is "logical" reasoning rather than "mathematical" reasoning. I hope you understand what I'm trying to say :-p
 
  • #14
I think what you are trying to say is exactly what I said before:

The definition of "eigenvector" is: v is an eigenvector of A, corresponding to eigenvalue [itex]\lambda[/itex], if and only if [itex]Av= \lambda v[/itex].

Using your example, that leads to
[tex]\left[\begin{array}{ccc}0 & -1 & 0 \\ -4 & 0 & 0\\ 0 & 0 & 2\end{array}\right]\left[\begin{array}{c} x \\ y \\ z\end{array}\right]= \left[\begin{array}{c} 2x \\ 2y \\ 2z \end{array}\right][/tex]
and, as I said before, the equations -y= 2x, -4x= 2y, 2z= 2z. The first two are satisfied as long as y= -2x, the third for all z. Any eigenvector is of the form <x, -2x, z>. You are free to choose any values for x and z. You can "separate" the two by taking x and z, in turn, to be 0: <0, 0, z> and <x, -2x, 0>. Even better, take z= 1 in the first and x= 1 in the second: <0, 0, 1> and <1, -2, 0>.


You should have done that sort of thing before. For example: find a basis for the vector space of <x, y, z> satisfying x- y+ 2z= 0. I can solve that quickly for x as a function of y and z: x= y- 2z. Taking y= 1, z= 0, x= 1- 2(0)= 1, so <1, 1, 0> is in that space. Taking y= 0, z= 1, x= 0- 2(1)= -2, so <-2, 0, 1> is in that space, and by using 0 and 1, I have guaranteed that they are independent: a basis is <-2, 0, 1> and <1, 1, 0>.
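The same basis can be read off with sympy's nullspace, treating the plane as the null space of the 1×3 matrix [1, -1, 2] (a sketch; the basis vectors come back only up to scaling and ordering):

```python
import sympy as sp

M = sp.Matrix([[1, -1, 2]])   # the single equation x - y + 2z = 0

basis = M.nullspace()

# Two independent vectors spanning the plane, e.g. <1, 1, 0> and <-2, 0, 1>.
assert len(basis) == 2
for b in basis:
    assert M * b == sp.zeros(1, 1)
```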
 
