Why Are Diagonalized Matrix Columns the Eigenvectors?

AI Thread Summary
When a matrix is diagonalized, the eigenvectors appear as the columns of the change-of-basis matrix, because each column of a matrix records where the corresponding basis vector is sent by the linear transformation. The eigenvalues are the diagonal entries of the diagonal matrix, as follows directly from the characteristic equation. Diagonalizability means the eigenvectors form a basis for the vector space, giving a particularly simple representation of the transformation. The change-of-basis matrix, whose columns are these eigenvectors, converts between the original basis and the eigenvector basis. This relationship reflects the fundamental connection between linear transformations and their matrix representations.
toffee
Why are the eigenvectors of a square matrix that has been diagonalised (i.e. only the M_jj elements are nonzero) just the columns of the matrix? And why are the eigenvalues the numbers that appear in those columns?

I can understand that if it's diagonal, the matrix consists of linearly independent, therefore orthogonal, vectors. Are these necessarily the eigenvectors of the matrix?

Many thanks
 
Which of the many matrices you're alluding to but not mentioning are you talking about?

Let M be the original matrix, let D be its diagonalized form, and let P and Q = P^{-1} be the matrices satisfying

M = PDQ, or equivalently D = QMP.

When you refer to rows/columns, you ought to be talking about those of Q/P, and then it should be clear what's going on if you're comfortable with the notion of a "change of basis".
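As a minimal numerical sketch of these relations (the 2x2 matrix below is an arbitrary example chosen for illustration, not anything from the thread), NumPy's eig returns a matrix P whose columns are the eigenvectors:

```python
import numpy as np

# Arbitrary example matrix (assumed for illustration); any diagonalizable matrix works.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigvals, P = np.linalg.eig(M)
D = np.diag(eigvals)      # diagonalized form of M
Q = np.linalg.inv(P)      # Q = P^{-1}

# The relations from the post: M = P D Q and D = Q M P.
print(np.allclose(M, P @ D @ Q))   # True
print(np.allclose(D, Q @ M @ P))   # True
```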
 
It sounds like toffee is talking about the eigenvectors and eigenvalues of a diagonal matrix, D.

(That the eigenvalues are the diagonal elements themselves comes directly from the characteristic equation: \det(D - \lambda I) = 0 \Rightarrow \prod_{i=1}^{N} (D_{ii} - \lambda) = 0.)
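As a quick check of that product formula (again with an arbitrary example matrix), the eigenvalues NumPy reports for a diagonal matrix are exactly its diagonal entries:

```python
import numpy as np

# Diagonal matrix with arbitrarily chosen diagonal entries.
D = np.diag([3.0, -1.0, 5.0])

# det(D - lambda*I) = prod_i (D_ii - lambda), so its roots are the D_ii themselves.
print(np.sort(np.linalg.eigvals(D)))  # [-1.  3.  5.]
print(np.sort(np.diag(D)))            # [-1.  3.  5.]
```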
 
We write any linear transformation as a matrix by choosing a specific basis and seeing what the transformation does to the basis vectors. That is, if v1, v2, . . ., vn is a basis, we can write any vector v = a1v1 + a2v2 + . . . + anvn and represent v as the "n-tuple" (a1, a2, . . ., an). Each basis vector is then represented by (1, 0, . . ., 0), (0, 1, . . ., 0), etc.
Applying the linear transformation to v1 is the same as multiplying the matrix by (1, 0, . . ., 0), which picks out the first column of the matrix; in other words, each column holds the coefficients of the image of the corresponding basis vector.

Suppose T is a linear transformation on a vector space V, with eigenvalues λ1, λ2, . . ., λn and corresponding eigenvectors v1, v2, . . ., vn which form a basis for V. Applying T to v1 (represented as (1, 0, . . ., 0)) gives λ1v1, which would be written (λ1, 0, . . ., 0), so the first column is simply (λ1, 0, . . ., 0). That's why the matrix of T in this basis is a diagonal matrix with the eigenvalues on the diagonal.
The "matrix with eigenvectors as columns" you are talking about is, I think, the "change of basis" matrix. If you have a linear transformation written as a matrix in a given basis and want to change to another basis, you need to multiply by a "change of basis" matrix, which works just like applying a linear transformation. Its columns are, again, just the results of applying it to the basis vectors. In particular, the "change of basis" matrix from the basis of eigenvectors to your original basis takes (1, 0, . . ., 0) to the first eigenvector (written in the original basis), and so its first column is exactly that eigenvector. A numerical sketch of both facts follows below.
 