Eigenvalues/vector question

In summary: for a diagonal matrix, the eigenvalues are the diagonal entries and the eigenvectors are the standard basis vectors; the matrix whose columns are the eigenvectors is the change-of-basis matrix used in diagonalization. This follows from the characteristic equation and the idea of change of basis.
  • #1
toffee
Why are the eigenvectors of a square matrix which is diagonalised (only has numbers in the M_jj elements) just the columns of the matrix? And why are the eigenvalues the actual numbers in each column?

I can understand that if it's diagonal, the matrix consists of linearly independent, therefore orthogonal, vectors. Are these necessarily the eigenvectors of the matrix?

Many thanks
 
  • #2
Which of the many matrices you're alluding to but not mentioning are you talking about?

Let M be the original matrix, let D be its diagonalized form, and let P and Q = P^{-1} be the matrices satisfying

M = PDQ, or equivalently D = QMP.

When you refer to the rows/columns, you ought to be talking about those of Q/P; then it should be clear what's going on if you're OK with the notion of "change of basis".
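For concreteness, here is a small NumPy sketch (not from the thread; the matrix M below is a made-up example) checking the relation numerically:

[code]
import numpy as np

# A made-up diagonalizable matrix M (symmetric, so a full basis of eigenvectors exists).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigenvalues, P = np.linalg.eig(M)
D = np.diag(eigenvalues)   # the diagonalized form of M
Q = np.linalg.inv(P)       # Q = P^{-1}

print(np.allclose(M, P @ D @ Q))  # True: M = PDQ
print(np.allclose(D, Q @ M @ P))  # True: D = QMP
[/code]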
 
  • #3
It sounds like toffee is talking about the eigenvectors and eigenvalues of a diagonal matrix, D.

(that the eigenvalues are the diagonal elements themselves comes directly from the characteristic equation: [itex]\det(D - \lambda I) = 0 \Rightarrow \prod_{i=1}^N (D_{ii} - \lambda) = 0[/itex])
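A quick numerical check of this (the diagonal matrix D below is a made-up example):

[code]
import numpy as np

# A made-up diagonal matrix: by the product above, its eigenvalues
# must be exactly the diagonal entries D_ii.
D = np.diag([3.0, -1.0, 5.0])

eigenvalues, eigenvectors = np.linalg.eig(D)
print(eigenvalues)   # [ 3. -1.  5.] -- the diagonal entries
print(eigenvectors)  # the 3x3 identity: the standard basis vectors e_i
[/code]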
 
  • #4
We write any linear transformation as a matrix by choosing a specific basis and seeing what the transformation does to the basis vectors. That is, if v1, v2, . . ., vn is a basis, we can write any vector v = a1v1 + a2v2 + . . . + anvn and represent v as the "n-tuple" (a1, a2, . . ., an). Each basis vector is represented by (1, 0, . . ., 0), (0, 1, . . ., 0), etc.
Applying the linear transformation to v1 is the same as multiplying the matrix by (1, 0, . . ., 0), which gives exactly the first column of the matrix: the coefficients of the transformed vector in that basis.

Suppose T is a linear transformation on a vector space V, with eigenvalues λ1, λ2, . . ., λn having corresponding eigenvectors v1, v2, . . ., vn which form a basis for V. Applying T to v1 (represented as (1, 0, . . ., 0)) gives λ1v1, which is written (λ1, 0, . . ., 0); so the first column of the matrix of T in this basis is exactly (λ1, 0, . . ., 0). That's why the matrix is a diagonal matrix with the eigenvalues on the diagonal.
The "matrix with eigenvectors as columns" you are talking about is, I think, the "change of basis" matrix. If you have a linear transformation written as a matrix in a given basis and want to change to another basis, then you need to multiply by a "change of basis" matrix which is just the same as applying a linear transformation. It's columns are, again, just the result of applying the transformation to the basis vectors. In particular, the "change of basis" matrix from the basis of eigenvectors to your original take (1, 0,..., 0) to the first eigenvector (written in the orginal basis) and so has first column exactly what that eigenvector is.
 

Related to Eigenvalues/vector question

1. What is the difference between eigenvalues and eigenvectors?

Eigenvalues are scalar values that represent the amount by which an eigenvector is scaled when multiplied by a transformation matrix. Eigenvectors are the corresponding non-zero vectors that remain in the same direction after being transformed by the matrix.

2. How do eigenvalues and eigenvectors relate to each other?

Eigenvalues and eigenvectors are closely related because the eigenvalue represents the scaling factor of an eigenvector. In other words, an eigenvector gives a direction that the transformation preserves, while the eigenvalue is the amount of stretching or shrinking that occurs along that direction.
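A minimal numerical illustration of the defining relation Av = λv (the matrix A, vector v, and eigenvalue below are made-up examples):

[code]
import numpy as np

# Checking the defining relation A v = lambda v for a made-up 2x2 matrix.
A = np.array([[3.0, 0.0],
              [0.0, 0.5]])
v = np.array([1.0, 0.0])   # an eigenvector of A
lam = 3.0                  # its eigenvalue: A stretches v by a factor of 3

print(np.allclose(A @ v, lam * v))  # True: direction unchanged, length scaled by 3
[/code]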

3. Why are eigenvalues and eigenvectors important in linear algebra?

Eigenvalues and eigenvectors are important in linear algebra because they provide a way to analyze and understand transformations performed on a vector space. They are also used in a variety of applications, such as solving systems of differential equations, image processing, and data compression.

4. How are eigenvalues and eigenvectors calculated?

The process for calculating eigenvalues and eigenvectors involves finding the characteristic polynomial of the matrix and solving for its roots, which are the eigenvalues. The corresponding eigenvectors can then be found by solving a system of linear equations.
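A sketch of that procedure in NumPy (the matrix A below is a made-up example):

[code]
import numpy as np

# A made-up 2x2 matrix to run the textbook procedure on.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Step 1: the characteristic polynomial det(A - lambda*I).
coeffs = np.poly(A)             # [1., -4., 3.]  ->  lambda^2 - 4*lambda + 3
eigenvalues = np.roots(coeffs)  # its roots: [3., 1.]

# Step 2: for each eigenvalue, solve (A - lambda*I) v = 0 for a nonzero v.
# Here the null-space vector is read off from the SVD.
for lam in eigenvalues:
    _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
    v = Vt[-1]   # right singular vector for the (near-)zero singular value
    print(lam, v)
[/code]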

5. What is the significance of the eigenvalue decomposition?

The eigenvalue decomposition is a factorization of a matrix into its eigenvalues and eigenvectors. This can be useful for performing matrix operations, such as matrix inversion and diagonalization, as well as for understanding the behavior of a system represented by the matrix.
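One small sketch of such a use (the matrix A and the power k below are made-up examples): diagonalization turns matrix powers into powers of scalars.

[code]
import numpy as np

# Once A = P D P^{-1} is known, A^k = P D^k P^{-1}, and D^k is just
# elementwise powers of the eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eig(A)

k = 5
Ak = P @ np.diag(w**k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
[/code]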
