MHB Effie's question via email about Eigenvalues, Eigenvectors and Diagonalisation

AI Thread Summary
Effie has correctly determined the eigenvalues of the matrix A: λ1 = -3 and λ2 = 2. The corresponding eigenvectors are found by solving A * x = λ * x for each eigenvalue, giving representative eigenvectors [1, -3] and [-2, 1]. A modal matrix M is built from these eigenvectors as columns, M = [1, -2; -3, 1], and the diagonal (spectral) matrix D has the eigenvalues on its main diagonal. The diagonalisation is confirmed by the relation D = M^(-1) * A * M. The discussion also emphasizes the role of the characteristic equation in deriving the eigenvalues.
Prove It
Effie has correctly found that the eigenvalues of $\displaystyle \begin{align*} A = \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \end{align*}$ are $\displaystyle \begin{align*} \lambda_1 = -3 \end{align*}$ and $\displaystyle \begin{align*} \lambda_2 = 2 \end{align*}$. To find the eigenvectors we solve $\displaystyle \begin{align*} A \,\mathbf{x} = \lambda \, \mathbf{x} \end{align*}$ for each $\displaystyle \begin{align*} \lambda \end{align*}$. For $\displaystyle \begin{align*} \lambda_1 \end{align*}$ we have

$\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= -3\,\left[ \begin{matrix} x \\ y \end{matrix} \right] \\ \left[ \begin{matrix} \phantom{-}6 & \phantom{-}2 \\ -3 & -1 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \\ \left[ \begin{matrix} 6 & 2 \\ 0 & 0 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \textrm{ after adding half of row 1 to row 2...} \end{align*}$

So we can see that $\displaystyle \begin{align*} 6\,x + 2\,y = 0 \implies y = -3\,x \end{align*}$. By letting $\displaystyle \begin{align*} x = t \end{align*}$ where $\displaystyle \begin{align*} t \in \mathbf{R} \end{align*}$ and $\displaystyle \begin{align*} t \neq 0 \end{align*}$, we find that the eigenvectors are of the family $\displaystyle \begin{align*} t\,\left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] \end{align*}$. We only need one of these eigenvectors to diagonalise the matrix, so $\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] \end{align*}$ will do.
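As a quick numerical sanity check (not part of Effie's working, and assuming NumPy is available), a minimal sketch confirming that this vector satisfies $\displaystyle \begin{align*} A\,\mathbf{x} = -3\,\mathbf{x} \end{align*}$:

```python
import numpy as np

A = np.array([[ 3,  2],
              [-3, -4]])
v1 = np.array([1, -3])   # chosen eigenvector for lambda_1 = -3

# Both products should give the same vector if v1 really is an eigenvector
print(A @ v1)    # [-3  9]
print(-3 * v1)   # [-3  9]
```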

For $\displaystyle \begin{align*} \lambda_2 \end{align*}$ we have

$\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= 2\,\left[ \begin{matrix} x \\ y \end{matrix} \right] \\ \left[ \begin{matrix} \phantom{-}1 & \phantom{-}2 \\ -3 & -6 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \\ \left[ \begin{matrix} 1 & 2 \\ 0 & 0 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \textrm{ after adding three lots of row 1 to row 2...} \end{align*}$

We can see that $\displaystyle \begin{align*} x + 2\,y = 0 \implies x = -2\,y \end{align*}$. If we let $\displaystyle \begin{align*} y = s \end{align*}$ where $\displaystyle \begin{align*} s \in \mathbf{R} \end{align*}$ and $\displaystyle \begin{align*} s \neq 0 \end{align*}$, we find that the eigenvectors are of the family $\displaystyle \begin{align*} s\,\left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix} \right] \end{align*}$. We only need one of these eigenvectors to diagonalise the matrix, so $\displaystyle \begin{align*} \left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix}\right] \end{align*}$ will do.
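Again as an optional check (assuming NumPy), the second eigenvector satisfies $\displaystyle \begin{align*} A\,\mathbf{x} = 2\,\mathbf{x} \end{align*}$:

```python
import numpy as np

A = np.array([[ 3,  2],
              [-3, -4]])
v2 = np.array([-2, 1])   # chosen eigenvector for lambda_2 = 2

print(A @ v2)   # [-4  2]
print(2 * v2)   # [-4  2]
```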

So a modal matrix, whose columns are made up of the eigenvectors, is $\displaystyle \begin{align*} M = \left[ \begin{matrix} \phantom{-}1 & -2 \\ -3 & \phantom{-}1 \end{matrix} \right] \end{align*}$. The spectral (diagonal) matrix has the corresponding eigenvalues on the main diagonal and 0 everywhere else, so $\displaystyle \begin{align*} D = \left[ \begin{matrix} -3 & 0 \\ \phantom{-}0 & 2 \end{matrix} \right] \end{align*}$. We can show that $\displaystyle \begin{align*} D = M^{-1} \, A \, M \end{align*}$...

$\displaystyle \begin{align*} M^{-1} &= \frac{1}{1 \cdot 1 - \left( -2 \right) \cdot \left( -3 \right) } \, \left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \\ &= -\frac{1}{5}\,\left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \\ \\ M^{-1} \, A \, M &= -\frac{1}{5}\,\left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} \phantom{-}1 & -2 \\ -3 & \phantom{-}1 \end{matrix} \right] \\ &= -\frac{1}{5}\,\left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \left[ \begin{matrix} -3 & -4 \\ \phantom{-}9 & \phantom{-}2 \end{matrix} \right] \\ &= -\frac{1}{5} \, \left[ \begin{matrix} 15 & \phantom{-}0 \\ 0 & -10 \end{matrix} \right] \\ &= \left[ \begin{matrix} -3 & 0 \\ \phantom{-}0 & 2 \end{matrix} \right] \\ &= D \end{align*}$
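For completeness, here is a short NumPy sketch (an optional check, not part of the original question) that reproduces $\displaystyle \begin{align*} D = M^{-1} \, A \, M \end{align*}$ numerically:

```python
import numpy as np

A = np.array([[ 3,  2],
              [-3, -4]])
M = np.array([[ 1, -2],
              [-3,  1]])   # modal matrix: eigenvectors as columns
D = np.diag([-3, 2])       # spectral matrix: eigenvalues on the diagonal

result = np.linalg.inv(M) @ A @ M
print(result)                   # [[-3.  0.] [ 0.  2.]] up to floating-point rounding
print(np.allclose(result, D))   # True
```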
 
Correct! To find the eigenvalues, the learner first has to set up what I call the characteristic equation, ##(3-λ)(-4-λ)+6=0##...
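A quick SymPy sketch of that expansion (assuming SymPy is available; just an optional verification of the characteristic equation and its roots):

```python
import sympy as sp

lam = sp.symbols('lambda')

# Characteristic equation det(A - lambda*I) = (3 - lambda)(-4 - lambda) + 6 = 0
char_poly = sp.expand((3 - lam) * (-4 - lam) + 6)
print(char_poly)                        # lambda**2 + lambda - 6
print(sp.solve(sp.Eq(char_poly, 0)))    # [-3, 2]
```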
 