Proving the Diagonalization of a Real Matrix with Distinct Eigenvalues

gtfitzpatrick
The real matrix
A = \begin{pmatrix}\alpha & \beta \\ 1 & 0 \end{pmatrix}
has distinct eigenvalues \lambda_1 and \lambda_2. If
P = \begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 0 \end{pmatrix}

prove that P^{-1}AP = D = \mathrm{diag}(\lambda_1, \lambda_2).

Deduce that, for every positive integer m, A^m = PD^mP^{-1}.


So I just tried to multiply the whole lot out (P^{-1} is easy to find: just swap, change signs) and I got
\begin{pmatrix}\lambda_1(\alpha - \lambda_2) + \beta & \lambda_2(\alpha - \lambda_2) + \beta \\ \lambda_1(-\alpha + \lambda_1) - \beta & \lambda_2(-\alpha + \lambda_1) - \beta \end{pmatrix}

Am I going the right road with this, or should I be approaching it differently?
 


Just doing the calculation should show that, but that's not what I get for the calculation.

If
P= \begin{bmatrix}\lambda_1 & \lambda_2 \\ 1 & 0\end{bmatrix}
then
P^{-1}= \begin{bmatrix}0 & 1 \\ \frac{1}{\lambda_2} & -\frac{\lambda_1}{\lambda_2}\end{bmatrix}

Is that what you got?

You will also want to use the fact that the characteristic equation for A is x^2- \alpha x- \beta= 0 so \lambda_1^2- \alpha \lambda_1- \beta= 0 and \lambda_2^2- \alpha \lambda_2- \beta= 0.
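For reference, that characteristic equation comes straight from the 2×2 determinant, a quick check of the claim above:

\det(A - xI) = \begin{vmatrix}\alpha - x & \beta \\ 1 & -x\end{vmatrix} = (\alpha - x)(-x) - \beta = x^2 - \alpha x - \beta,

so each eigenvalue satisfies \lambda_i^2 = \alpha\lambda_i + \beta.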
 


HallsofIvy said:
Just doing the calculation should show that, but that's not what I get for the calculation.

If
P= \begin{bmatrix}\lambda_1 & \lambda_2 \\ 1 & 0\end{bmatrix}
then
P^{-1}= \begin{bmatrix}0 & 1 \\ \frac{1}{\lambda_2} & -\frac{\lambda_1}{\lambda_2}\end{bmatrix}

Is that what you got?

Nice reply!

HallsofIvy said:
You will also want to use the fact that the characteristic equation for A is x^2- \alpha x- \beta= 0 so \lambda_1^2- \alpha \lambda_1- \beta= 0 and \lambda_2^2- \alpha \lambda_2- \beta= 0.

Perhaps, something like:

\lambda_{1} = \frac{\alpha + \sqrt{\alpha^{2}+4\beta}}{2}

\lambda_{2} = \frac{\alpha - \sqrt{\alpha^{2}+4\beta}}{2}

I'm not sure, though, how that will solve the problem.
 


Couldn't you prove it by showing that the columns of P are the eigenvectors of A?
 


Yes, if you already have that theorem. But the obvious way to do it is to just do the product P^{-1}AP.
 


Thanks for all the replies. I had it wrong when getting P^{-1}: I only went and forgot the factor 1/\det(P)!

So for P^{-1}AP I get
\begin{bmatrix}\lambda_1 & \lambda_1 \\ \alpha-\lambda_1^2/\lambda_2+\alpha/\lambda_1 & \alpha - \lambda_1^2/\lambda_2\end{bmatrix}

I thought \mathrm{diag}(\lambda_1,\lambda_2) =
D= \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}

I still think I am doing something wrong.
Confused!
 


After repeatedly trying and not getting anywhere, I realized I had the question wrong; P should read
\begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 1 \end{pmatrix}

So I am off to try this new version.

But if I wanted to prove it like you said, random variable, how would I go about that?
Take
\lambda_1^2 - \alpha \lambda_1 - \beta = 0 and \lambda_2^2 - \alpha \lambda_2 - \beta = 0
and let them give the columns of P?
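One way to see the eigenvector claim, using only the characteristic identities \lambda_i^2 = \alpha\lambda_i + \beta quoted above: for each eigenvalue,

A \begin{pmatrix}\lambda_i \\ 1\end{pmatrix} = \begin{pmatrix}\alpha\lambda_i + \beta \\ \lambda_i\end{pmatrix} = \begin{pmatrix}\lambda_i^2 \\ \lambda_i\end{pmatrix} = \lambda_i \begin{pmatrix}\lambda_i \\ 1\end{pmatrix},

so the columns of the corrected P (with bottom row 1, 1) are eigenvectors of A, and AP = PD follows column by column, giving P^{-1}AP = D.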
 


Right, so now I got
\frac{1}{\lambda_1-\lambda_2} \begin{bmatrix}\lambda_1(\alpha - \lambda_2) + \beta & \lambda_2(\alpha - \lambda_2) + \beta \\ \lambda_1(-\alpha + \lambda_1) - \beta & \lambda_2(-\alpha + \lambda_1) - \beta\end{bmatrix}

not sure where to go from here...
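A sketch of the finishing step, substituting the characteristic identities \lambda_i^2 = \alpha\lambda_i + \beta into each entry of the matrix above:

\lambda_1(\alpha - \lambda_2) + \beta = \lambda_1^2 - \lambda_1\lambda_2 = \lambda_1(\lambda_1 - \lambda_2),

\lambda_2(\alpha - \lambda_2) + \beta = \alpha\lambda_2 + \beta - \lambda_2^2 = 0,

\lambda_1(-\alpha + \lambda_1) - \beta = \lambda_1^2 - \alpha\lambda_1 - \beta = 0,

\lambda_2(-\alpha + \lambda_1) - \beta = \lambda_1\lambda_2 - \lambda_2^2 = \lambda_2(\lambda_1 - \lambda_2).

Dividing by \lambda_1 - \lambda_2 (nonzero, since the eigenvalues are distinct) leaves exactly \mathrm{diag}(\lambda_1, \lambda_2).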
 
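As a numerical sanity check of the corrected setup, here is a short plain-Python script; the sample values \alpha = 1, \beta = 6 (which give eigenvalues 3 and -2) and the helper names are my own choices, not from the thread.

```python
import math

# Sample coefficients (my own choice): A = [[alpha, beta], [1, 0]]
alpha, beta = 1.0, 6.0

# Eigenvalues from the characteristic equation x^2 - alpha*x - beta = 0
disc = math.sqrt(alpha**2 + 4 * beta)
lam1 = (alpha + disc) / 2   # 3.0
lam2 = (alpha - disc) / 2   # -2.0

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Inverse of a 2x2 matrix: swap diagonal, negate off-diagonal, divide by det."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[alpha, beta], [1.0, 0.0]]
P = [[lam1, lam2], [1.0, 1.0]]   # columns are the eigenvectors (lambda_i, 1)

# P^{-1} A P should come out (numerically) diagonal with lam1, lam2 on the diagonal
D = matmul(inv2(P), matmul(A, P))
print(D)  # approximately [[3, 0], [0, -2]]
```

The same loop of `matmul` calls also confirms the deduced identity A^m = P D^m P^{-1} for small m, since D^m is just the diagonal of m-th powers.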