Eigenvalues of a 2 by 2 matrix

  • #1
Benny
Hi, I'm wondering if there is some kind of shortcut for finding the eigenvalues and eigenvectors of the following matrix.

[tex]
C = \left[ {\begin{array}{*{20}c}
{0.8} & {0.3} \\
{0.3} & {0.7} \\
\end{array}} \right]
[/tex]

Solving the equation [tex]\det \left( {C - \lambda I} \right) = 0[/tex], I get [tex]\lambda = \frac{{15 \pm \sqrt {37} }}{{20}}[/tex] and then if I try to find the eigenvectors by finding the solution space of [C - (eigenvalues)I] I end up with what seems like an endless/tedious/time consuming/frustrating bunch of row operations which don't appear to lead anywhere. I'm wondering if there is an easier way to find the eigenvalues, perhaps that might reveal an easier way to find the eigenvectors. Thanks...
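Not part of the original post, but for anyone checking the arithmetic by machine, here is a small sketch in Python with sympy (assuming sympy is available) that confirms those eigenvalues:

```python
from sympy import Matrix, Rational, sqrt, simplify, eye

# The matrix from the question, entered exactly (tenths, not floating point).
C = Matrix([[Rational(8, 10), Rational(3, 10)],
            [Rational(3, 10), Rational(7, 10)]])

print(C.eigenvals())    # the two eigenvalues, each with multiplicity 1
print(C.eigenvects())   # eigenvalue, multiplicity, and a basis for each eigenspace

# Check one root against (15 + sqrt(37))/20 from the characteristic equation:
lam = (15 + sqrt(37)) / 20
print(simplify((C - lam * eye(2)).det()))   # 0, so lam really is an eigenvalue
```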
 
  • #2
The hard part is finding the eigenvalues!

Finding the corresponding eigenvectors is the same as solving [itex]Ax = \lambda x[/itex] for each eigenvalue [itex]\lambda[/itex].

In this case solve the equation
[tex] .8x+ .3y= \frac{{15 \pm \sqrt {37} }}{{20}}x[/tex]
You don't need to worry about the second equation, it will be automatically satisfied by your solution to this. Of course, you can only find y in terms of x: any multiple of an eigenvector is an eigenvector.
Since the entries in the matrix are given as decimals to one decimal place, I would recommend using 1.0 and 0.4 in place of
[tex]\frac{{15 +\sqrt {37} }}{{20}}[/tex] and
[tex]\frac{{15 - \sqrt {37} }}{{20}}[/tex] respectively.
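Spelling that single equation out (an added worked step, with lambda either of the eigenvalues found above), it already fixes the direction of the eigenvector up to a scalar multiple:

[tex]
0.3y = \left( \lambda - 0.8 \right)x
\quad \Rightarrow \quad
\left(\begin{array}{c} x \\ y \end{array}\right) \propto \left(\begin{array}{c} 0.3 \\ \lambda - 0.8 \end{array}\right),
\qquad \lambda = \frac{15 \pm \sqrt{37}}{20}
[/tex]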
 
  • #3
Hmm...I should've seen that.

Thanks very much for your help. So the key idea is that the multiplicity of each of the eigenvalues is 1, so the eigenspace of each one must be one-dimensional? If so, then I can see why it would be fairly easy to extract appropriate eigenvectors.

Edit: The question is from a non-calculator exam so I can't take decimal approximations. :(
 
  • #4
Benny said:
Edit: The question is from a non-calculator exam so I can't take decimal approximations. :(

When you see all those multiples of 0.1, simply multiply everything by 10. You'll be looking for eigenvalues of (15 +- sqrt(37))/2. Try (A,B) and get the equation:

8A + 3B = (15 +- sqrt(37))/2 A

Multiply by 2 to get

16A + 6B = 15A +- sqrt(37) A,

or

6B = (+-sqrt(37) - 1) A.

And the answers are the vectors:

[tex]\left(\begin{array}{c}
6 \\ \pm \sqrt{37}-1\end{array}\right)[/tex]

The trick to remember is to put the eigenvalue equation into "kA = mB" form, and then the eigenvectors are (m, k).

Carl
 
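A quick symbolic check of those vectors, sketched in Python with sympy (not part of the original post):

```python
from sympy import Matrix, sqrt, simplify

# 10 times the original matrix, as suggested above, so the entries are integers.
A = Matrix([[8, 3],
            [3, 7]])

for sign in (+1, -1):
    lam = (15 + sign * sqrt(37)) / 2          # eigenvalues of 10*C
    v = Matrix([6, sign * sqrt(37) - 1])      # the vectors from the post
    print(simplify(A * v - lam * v))          # Matrix([[0], [0]]) both times
```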
  • #5
Thanks Carl. I haven't been using a calculator to do math problems for quite a while anyway, so I'm pretty used to using the traditional ways to solve equations.

Anyway, suppose that I've been given a matrix A and I'm told to find an invertible matrix P and a diagonal matrix D such that A = PDP^-1. Then the implication is that A is definitely diagonalizable right? Suppose A is a 3 by 3 matrix.

A condition for A to be diagonalizable is that the algebraic multiplicity of each eigenvalue is the same as the dimension of the eigenspace corresponding to that eigenvalue. So say that I find an eigenvalue of A, call it lambda sub one, which has a multiplicity of 1. To find the corresponding eigenspace I need to solve the system:

[tex]
\left[ {A - \lambda _1 I} \right]\vec{x} = \vec{0}
[/tex]

If I have:

[tex]
\left[ {A - \lambda _1 I} \right] = \left[ {\begin{array}{*{20}c}
a & b & c \\
d & e & f \\
g & h & i \\
\end{array}} \right]
[/tex]

Then, since I know that the multiplicity of the eigenvalue is one, and by the implications of the question (asking for A = PDP^-1), can I just straight away ignore/wipe out one of the rows of the above matrix, because I know the dimension of the eigenspace must be one?
 
  • #6
Benny said:
Then, since I know that the multiplicity of the eigenvalue is one, and by the implications of the question (asking for A = PDP^-1), can I just straight away ignore/wipe out one of the rows of the above matrix, because I know the dimension of the eigenspace must be one?

If you had all three eigenvectors, then, as you know, you'd also have P and P inverse. (The three eigenvectors make the columns of P inverse and the rows of P, if I recall.)

If you've only got one eigenvector, then you can reduce the size of A by finding any two more vectors that are perpendicular to your eigenvector and perpendicular to each other. When you assemble a P inverse and P from that set of three vectors, the effect is that the middle factor D becomes a matrix with your eigenvalue in the first diagonal entry followed by a 2x2 block. Is that what you're getting at?

Carl
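Here is a small sketch of that reduction in Python with sympy, using a made-up 3x3 example (not Benny's matrix) for which (1, 1, 1) is a known eigenvector with eigenvalue 3:

```python
from sympy import Matrix, GramSchmidt, simplify

# Hypothetical example: each row of A sums to 3, so (1, 1, 1) is an
# eigenvector with eigenvalue 3.
A = Matrix([[2, 1, 0],
            [0, 2, 1],
            [1, 0, 2]])

v = Matrix([1, 1, 1])                               # the one eigenvector we know
extra = [Matrix([1, -1, 0]), Matrix([0, 1, -1])]    # any two independent extra vectors
basis = GramSchmidt([v] + extra, orthonormal=True)  # make them mutually perpendicular

P = Matrix.hstack(*basis)             # eigenvector first, then the two others
print(simplify(P.inv() * A * P))      # first column is (3, 0, 0)^T: the eigenvalue
                                      # splits off, leaving a 2x2 block to handle
```

With a symmetric A the top row of that result would also be zero, giving a genuinely block-diagonal matrix.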
 
  • #7
CarlB said:
If you had all three eigenvectors, then, as you know, you'd also have P and P inverse. (The three eigenvectors make the columns of P inverse and the rows of P, if I recall.)
The eigenvectors are the columns of P, but P inverse isn't necessarily the transpose (so the eigenvectors in the rows); this is only the case if you normalized/orthogonalized the eigenvectors so that you were dealing with an orthogonal matrix. (If I recall correctly...) If A is symmetric though (which is the case here), the eigenvectors are indeed orthogonal vectors.
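A sympy sketch of that point for the symmetric matrix from post #1 (an added check, not part of the reply): with normalized eigenvectors as columns, P is orthogonal, so P transpose really does act as P inverse.

```python
from sympy import Matrix, Rational, simplify

C = Matrix([[Rational(8, 10), Rational(3, 10)],
            [Rational(3, 10), Rational(7, 10)]])

# eigenvects() returns (eigenvalue, multiplicity, [basis vectors]) triples.
columns = [triple[2][0].normalized() for triple in C.eigenvects()]
P = Matrix.hstack(*columns)

print(simplify(P.T * P))        # identity: C is symmetric and the columns are normalized
print(simplify(P.T * C * P))    # diagonal matrix of the eigenvalues
```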
 
  • #8
What I mean is, if one of my eigenvalues is lambda sub one, then to find an eigenvector corresponding to that eigenvalue, I would find the solution space of [tex]\left[ {A - \lambda _1 I} \right][/tex], which is just the eigenspace corresponding to the eigenvalue lambda sub one. From that, I can extract an eigenvector and put it into P as one of P's columns. That's the procedure I used for finding eigenvectors.

My original question rests on actually finding the eigenspace. As I mentioned before, assume that the multiplicity of the eigenvalue lambda sub one is exactly one. Then, since A is diagonalizable, the dimension of the eigenspace corresponding to that eigenvalue must also be one. Finding the eigenspace comes down to finding the solution space of the matrix that I referred to in the above paragraph. Since there is one 'parameter' in the solution, doesn't that mean that one of the rows can be ignored, since whatever solution I get from the other two rows of the matrix must satisfy the equation represented by the 'ignored' row? I know what I've said is quite confusing, but I'm pretty sure that I can do what I mentioned. I just wanted some clarification.
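For what it's worth, here is the same point sketched with sympy on the 2 by 2 matrix from post #1: since C minus lambda I is singular whenever lambda is an eigenvalue, one of its rows is a combination of the others and can be dropped.

```python
from sympy import Matrix, Rational, sqrt, eye

C = Matrix([[Rational(8, 10), Rational(3, 10)],
            [Rational(3, 10), Rational(7, 10)]])

lam = Rational(3, 4) + sqrt(37) / 20    # one eigenvalue, i.e. (15 + sqrt(37))/20
M = C - lam * eye(2)

print(M.rank())        # 1: the second row is a multiple of the first,
                       # so only one equation actually constrains the eigenvector
print(M.nullspace())   # a single basis vector -> a one-dimensional eigenspace
```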
 
  • #9
Benny, I think you understand this well, but it doesn't hurt to spend a certain amount of time solving matrix equations. Here's (a not terribly hard) one that I've been working on:

[tex]\left(\begin{array}{cccc}
+c_pc_t & +c_ps_t & -s_pc_t & -s_ps_t \\
-c_ps_t & +c_pc_t & +s_ps_t & -s_pc_t \\
+s_pc_t & -s_ps_t & +c_pc_t & +c_ps_t \\
-s_ps_t & -s_pc_t & -c_ps_t & +c_pc_t
\end{array}\right)[/tex]

where [tex]c_p = \cosh(\alpha_p)[/tex], [tex]s_p = \sinh(\alpha_p)[/tex], [tex]c_t = \cos(\alpha_t)[/tex], [tex]s_t = \sin(\alpha_t)[/tex] and
[tex]\alpha_p[/tex] and [tex]\alpha_t[/tex] are real numbers. The problem arises when I try to parameterize symmetry breaking of parity and time reversal symmetries by modifying the Dirac equation, more or less. I'm putting it here to show that this sort of eigenvector problem is something that you may find useful in your later studies in physics. It's also a very important part of engineering and mathematics in general.

By the way, TD is right: to get P and P inverse from the eigenvectors in that way does require that the eigenvectors be normalized and orthogonal. In your situation, since A is symmetric and the eigenvalues have multiplicity one, the orthogonality is automatic.

Carl
 
  • #10
Thanks for taking the time to answer my questions today. I'll be off now, need some sleep.
 

1. What are eigenvalues of a 2 by 2 matrix?

The eigenvalues of a 2 by 2 matrix are the numbers λ such that subtracting λ times the identity matrix from the original matrix produces a singular matrix (one with determinant zero). Equivalently, λ is an eigenvalue if Av = λv for some nonzero vector v.

2. How do I find the eigenvalues of a 2 by 2 matrix?

To find the eigenvalues of a 2 by 2 matrix, you can use the characteristic equation: det(A-λI)=0, where A is the original matrix and λ is the eigenvalue. This will result in a quadratic equation that can be solved for λ.
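For a general 2 by 2 matrix this quadratic can be written directly in terms of the trace and determinant (a standard formula, added here for reference):

[tex]
A = \left[ {\begin{array}{*{20}c} a & b \\ c & d \\ \end{array}} \right],
\qquad
\det \left( {A - \lambda I} \right) = \lambda ^2 - (a + d)\lambda + (ad - bc) = 0,
\qquad
\lambda = \frac{(a + d) \pm \sqrt{(a + d)^2 - 4(ad - bc)}}{2}
[/tex]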

3. What is the significance of eigenvalues in a 2 by 2 matrix?

Eigenvalues of a 2 by 2 matrix are important because each one is the factor by which the matrix scales its corresponding eigenvector. They also provide information about the behavior of the matrix; for example, the matrix is singular exactly when 0 is an eigenvalue, and invertible otherwise.

4. How many eigenvalues does a 2 by 2 matrix have?

A 2 by 2 matrix always has two eigenvalues, counted with multiplicity, and they can be real or complex. This is because the characteristic equation of a 2 by 2 matrix is a quadratic, which has two roots (possibly repeated or complex).

5. Can the eigenvalues of a 2 by 2 matrix be negative?

Yes, the eigenvalues of a 2 by 2 matrix can be negative. Their signs depend on the entries of the original matrix, and they can be positive, negative, or zero.
