Proving the Diagonalization of a Real Matrix with Distinct Eigenvalues

  • Thread starter: gtfitzpatrick
  • Tags: Eigenvalues

Homework Help Overview

The discussion revolves around proving the diagonalization of a real matrix A with distinct eigenvalues. The matrix A is given in a specific form, and participants are exploring the implications of its eigenvalues and eigenvectors in the context of matrix diagonalization.

Discussion Character

  • Mixed

Approaches and Questions Raised

  • Participants are attempting to calculate the product P⁻¹AP to demonstrate that it results in a diagonal matrix D. There are discussions about the correct form of the matrix P and its inverse, as well as the characteristic equation associated with matrix A. Some participants question their calculations and the assumptions made regarding the eigenvalues and eigenvectors.

Discussion Status

The discussion is ongoing, with several participants providing calculations and questioning their accuracy. Some have expressed confusion about the correct formulation of the matrices involved, while others suggest alternative approaches, such as proving that the columns of P are the eigenvectors of A. There is no explicit consensus, but various lines of reasoning are being explored.

Contextual Notes

Participants have noted issues with the determinant factor in forming the inverse matrix P⁻¹. There is also mention of the characteristic equation that the eigenvalues satisfy, which is a key aspect of the problem.

gtfitzpatrick
The real matrix
A = \begin{pmatrix}\alpha & \beta \\ 1 & 0 \end{pmatrix}
has distinct eigenvalues \lambda_1 and \lambda_2. If
P = \begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 0 \end{pmatrix}

prove that P^{-1}AP = D = \mathrm{diag}(\lambda_1, \lambda_2).

Deduce that, for every positive integer m, A^m = P D^m P^{-1}.
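Before diving into the algebra, here is a quick numerical sanity check of the A^m = PD^mP^{-1} claim (a sketch with illustrative values \alpha = 3, \beta = -2, using numpy's own eigendecomposition rather than the specific P above):

```python
import numpy as np

alpha, beta = 3.0, -2.0                      # illustrative values; eigenvalues are 1 and 2
A = np.array([[alpha, beta], [1.0, 0.0]])
evals, P = np.linalg.eig(A)                  # columns of P are eigenvectors of A
D = np.diag(evals)
m = 5
lhs = np.linalg.matrix_power(A, m)
rhs = P @ np.linalg.matrix_power(D, m) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))  # -> True
```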


So I just tried to multiply the whole lot out (P^{-1} is easy to find: just swap and change signs), and I got

\begin{pmatrix}\lambda_1(\alpha - \lambda_2) + \beta & \lambda_2(\alpha - \lambda_2) + \beta \\ \lambda_1(-\alpha + \lambda_1) - \beta & \lambda_2(-\alpha + \lambda_1) - \beta \end{pmatrix}

Am I going the right road with this, or should I be approaching it differently?
 


Just doing the calculation should show that, but I don't get the matrix you got.

If
P = \begin{bmatrix}\lambda_1 & \lambda_2 \\ 1 & 0\end{bmatrix}
then
P^{-1} = \begin{bmatrix}0 & 1 \\ \frac{1}{\lambda_2} & -\frac{\lambda_1}{\lambda_2}\end{bmatrix}

Is that what you got?

You will also want to use the fact that the characteristic equation for A is x^2- \alpha x- \beta= 0 so \lambda_1^2- \alpha \lambda_1- \beta= 0 and \lambda_2^2- \alpha \lambda_2- \beta= 0.
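That characteristic equation can be checked symbolically (a sketch; the symbol names here are just for illustration):

```python
import sympy as sp

alpha, beta, x = sp.symbols('alpha beta x')
A = sp.Matrix([[alpha, beta], [1, 0]])
# det(xI - A) expands to x**2 - alpha*x - beta, as stated
p = A.charpoly(x).as_expr()
print(sp.expand(p - (x**2 - alpha*x - beta)))  # -> 0
```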
 


HallsofIvy said:
Just doing the calculation should show that, but I don't get the matrix you got.

If
P = \begin{bmatrix}\lambda_1 & \lambda_2 \\ 1 & 0\end{bmatrix}
then
P^{-1} = \begin{bmatrix}0 & 1 \\ \frac{1}{\lambda_2} & -\frac{\lambda_1}{\lambda_2}\end{bmatrix}

Is that what you got?

Nice reply!

HallsofIvy said:
You will also want to use the fact that the characteristic equation for A is x^2- \alpha x- \beta= 0 so \lambda_1^2- \alpha \lambda_1- \beta= 0 and \lambda_2^2- \alpha \lambda_2- \beta= 0.

Perhaps something like:

\lambda_{1} = \frac{\alpha + \sqrt{\alpha^{2}+4\beta}}{2}

\lambda_{2} = \frac{\alpha - \sqrt{\alpha^{2}+4\beta}}{2}

Not sure, though, how it will solve the problem.
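Those two roots can be recovered directly from the characteristic equation (a sketch; the Vieta checks below confirm their sum is \alpha and their product is -\beta):

```python
import sympy as sp

alpha, beta, x = sp.symbols('alpha beta x')
# Roots of the characteristic equation x**2 - alpha*x - beta = 0
roots = sp.solve(x**2 - alpha*x - beta, x)
# Sum of roots should be alpha, product should be -beta (Vieta)
print(sp.simplify(roots[0] + roots[1]))   # -> alpha
print(sp.simplify(roots[0] * roots[1]))   # -> -beta
```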
 


Couldn't you prove it by showing that the columns of P are the eigenvectors of A?
 


Yes, if you already have that theorem. But the obvious way to do it is to just do the product P^{-1}AP.
 


Thanks for all the replies. I had it wrong when getting P^{-1}: I only went and forgot the 1/\det(P) factor!

So for P^{-1}AP I get
\begin{bmatrix}\lambda_1 & \lambda_1 \\ \alpha-\lambda_1^2/\lambda_2+\alpha/\lambda_1 & \alpha - \lambda_1^2/\lambda_2\end{bmatrix}

I thought \mathrm{diag}(\lambda_1,\lambda_2) =
D = \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}

I still think I am doing something wrong. Confused!
 


After repeatedly trying and getting nowhere, I realised I had the question wrong: P should read
\begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 1 \end{pmatrix}

So I am off to try this new version.

But if I wanted to prove it like you said, Random Variable, how would I go about that?
Take
\lambda_1^2 - \alpha \lambda_1 - \beta = 0 and \lambda_2^2 - \alpha \lambda_2 - \beta = 0
and let them give the columns of P?
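One way to make that idea precise (a sketch, with a generic eigenvalue symbol lam standing for either \lambda_1 or \lambda_2): each column (\lambda_i, 1)^T of the corrected P satisfies A v = \lambda_i v precisely because \lambda_i^2 - \alpha\lambda_i - \beta = 0:

```python
import sympy as sp

alpha, beta, lam = sp.symbols('alpha beta lam')
A = sp.Matrix([[alpha, beta], [1, 0]])
v = sp.Matrix([lam, 1])                     # candidate eigenvector, a column of the corrected P
resid = sp.expand(A * v - lam * v)          # first entry is alpha*lam + beta - lam**2
# Substituting the characteristic equation lam**2 = alpha*lam + beta kills the residual
print(resid.subs(lam**2, alpha * lam + beta))  # -> Matrix([[0], [0]])
```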
 


Right, so now I got
\frac{1}{\lambda_1-\lambda_2} \begin{bmatrix}\lambda_1(\alpha - \lambda_2) + \beta & \lambda_2(\alpha - \lambda_2) + \beta \\ \lambda_1(-\alpha + \lambda_1) - \beta & \lambda_2(-\alpha + \lambda_1) - \beta\end{bmatrix}

Not sure where to go from here...
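From here the characteristic equation finishes it: substituting \beta = \lambda_1^2 - \alpha\lambda_1 makes the (2,1) entry vanish, \beta = \lambda_2^2 - \alpha\lambda_2 makes the (1,2) entry vanish, and the diagonal entries reduce to \lambda_1(\lambda_1 - \lambda_2) and \lambda_2(\lambda_1 - \lambda_2), so dividing by \lambda_1 - \lambda_2 gives \mathrm{diag}(\lambda_1, \lambda_2). A sympy sketch of the whole product, encoding the characteristic equation via Vieta's formulas (\alpha = \lambda_1 + \lambda_2, \beta = -\lambda_1\lambda_2):

```python
import sympy as sp

l1, l2 = sp.symbols('lambda1 lambda2')
alpha = l1 + l2                              # Vieta: sum of the roots of x**2 - alpha*x - beta
beta = -l1 * l2                              # Vieta: product of the roots is -beta
A = sp.Matrix([[alpha, beta], [1, 0]])
P = sp.Matrix([[l1, l2], [1, 1]])            # corrected P, second row (1, 1)
D = sp.simplify(P.inv() * A * P)
print(D)  # -> Matrix([[lambda1, 0], [0, lambda2]])
```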
 
