Linear Algebra: Diagonalization, Transpose, and Distinct Eigenvectors.

In summary, if an nxn matrix A has n linearly independent eigenvectors, then so does A^T. This can be proven by showing that A^T is diagonalizable, and thus has n linearly independent eigenvectors.
  • #1
JTemple

Homework Statement



Show that if an nxn matrix A has n linearly independent eigenvectors, then so does A^T

The Attempt at a Solution



Well, I understand the following:

(1) A is diagonalizable.

(2) A = PDP^-1, where P has columns of the independent eigenvectors

(3) P is invertible, meaning it has linearly independent columns. (Note A itself need not be invertible: 0 may be an eigenvalue.)

(4) A^T has the same eigenvalues as A, since det(A^T - lambda*I) = det((A - lambda*I)^T) = det(A - lambda*I).


So, based mainly on (4), I can prove the "easy" case, in which A has n distinct eigenvalues.

I.e., if A has n distinct eigenvalues, then A^T has those same distinct eigenvalues. Since lambda_1 through lambda_n are distinct, they correspond to linearly independent eigenvectors v_1 through v_n for A and w_1 through w_n for A^T.

In this case, the eigenvectors could be the same (in the case that A=A^T), but don't have to be.
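Fact (4) above is easy to check numerically. The sketch below, on an arbitrary (hypothetical) 3x3 matrix, confirms that A and A^T have the same eigenvalues:

```python
import numpy as np

# A and A^T have the same characteristic polynomial, hence the same
# eigenvalues: det(A^T - lam*I) = det((A - lam*I)^T) = det(A - lam*I).
# Hypothetical 3x3 example:
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

eig_A = np.sort(np.linalg.eigvals(A))
eig_AT = np.sort(np.linalg.eigvals(A.T))
print(np.allclose(eig_A, eig_AT))  # True
```

Note that this says nothing yet about the eigenvectors of A^T, which is the hard part of the problem.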


My problem!

What if the eigenvalues are not distinct, i.e. there is some lambda_i with multiplicity k?

I understand that in the case of A, the matrix (A - lambda_i*I_n) must have k free variables (i.e. nullity k) in order to give the stated n linearly independent eigenvectors.

So, how can I prove that a repeated eigenvalue in the A^T case ALSO makes (A^T - lambda_i*I_n) have k free variables, and thus gives n linearly independent eigenvectors?
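One route to exactly this question (an alternative to the diagonalization argument that settles the thread below) is the fact that rank(B) = rank(B^T) for any matrix B: applying it to B = A - lambda_i*I gives equal nullities, hence equal numbers of free variables, for A and A^T. A quick numerical sketch on a hypothetical matrix with a repeated eigenvalue:

```python
import numpy as np

# rank(B) = rank(B^T) for any B, so taking B = A - lam*I gives
# nullity(A - lam*I) = nullity(A^T - lam*I): each eigenvalue has the
# same geometric multiplicity (number of free variables) for A and A^T.
# Hypothetical non-symmetric matrix with eigenvalue lam = 2 repeated twice:
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 5.0]])
lam = 2.0
n = A.shape[0]

nullity_A = n - np.linalg.matrix_rank(A - lam * np.eye(n))
nullity_AT = n - np.linalg.matrix_rank(A.T - lam * np.eye(n))
print(nullity_A, nullity_AT)  # 2 2
```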
 
  • #2
Using just the equations you've written, and the elementary algebraic properties of the transpose, I bet you can diagonalize A^T, and thus get an algebraic formula for its eigenvectors.
 
  • #3
I still can't piece the last bit together. I realize that A^T has linearly independent columns, and rows that aren't scalar multiples of each other. I also see that the transpose has the same diagonal entries as the original. However, the systems I'm setting up with the transpose to find eigenvectors aren't the same at all. In the cases I try with numbers they still work, which is pretty cool, but I can't seem to guarantee k free variables in (A^T - lambda*I).
 
  • #4
I think I got it.

A = PDP^(-1)
A^T = (P^-1)^T * D^T * P^T
D^T = D
(P^-1)^T = (P^T)^-1

A^T = (P^T)^-1 * D P^T

Since P is invertible, so is P^T, and so is (P^T)^-1; each has linearly independent columns.

So let M = (P^T)^-1 = (P^-1)^T

A^T = MDM^-1

Therefore A^T is diagonalizable.

Therefore A^T has n linearly independent eigenvectors (the columns of M, i.e. the rows of P^-1 written as columns).
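This argument can be checked numerically. The sketch below, on a hypothetical 2x2 matrix, verifies that if A = P D P^-1, then M = (P^T)^-1 diagonalizes A^T and its columns are eigenvectors of A^T:

```python
import numpy as np

# If A = P D P^{-1}, then A^T = (P^T)^{-1} D P^T = M D M^{-1}
# with M = (P^T)^{-1} = (P^{-1})^T, so the columns of M are
# n linearly independent eigenvectors of A^T.
# Hypothetical diagonalizable matrix (eigenvalues 5 and 2):
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

evals, P = np.linalg.eig(A)   # columns of P: eigenvectors of A
D = np.diag(evals)
M = np.linalg.inv(P.T)        # = (P^{-1})^T

# M diagonalizes A^T ...
print(np.allclose(A.T, M @ D @ np.linalg.inv(M)))  # True
# ... and each column of M is an eigenvector of A^T:
for k in range(2):
    print(np.allclose(A.T @ M[:, k], evals[k] * M[:, k]))  # True
```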
 

1. What is diagonalization in linear algebra?

Diagonalization in linear algebra is the process of finding a diagonal matrix that is similar to a given square matrix. This involves finding a basis of linearly independent eigenvectors of the given matrix: the eigenvectors become the columns of P and the eigenvalues the diagonal entries of D in A = PDP^-1. Diagonalization is useful for simplifying calculations, such as computing matrix powers, and for solving systems of linear differential equations.
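As a minimal sketch of this with NumPy (a hypothetical 2x2 example): `np.linalg.eig` returns the eigenvalues and a matrix P of eigenvectors, and when P is invertible, A = P D P^-1.

```python
import numpy as np

# Hypothetical 2x2 matrix with eigenvalues 6 and 1:
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])

evals, P = np.linalg.eig(A)
D = np.diag(evals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True

# One payoff: powers become cheap, since A^k = P D^k P^{-1}.
A10 = P @ np.diag(evals**10) @ np.linalg.inv(P)
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True
```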

2. How do you transpose a matrix in linear algebra?

In linear algebra, transposition involves switching the rows and columns of a matrix: the entries are reflected over the main diagonal, so (A^T)_ij = A_ji. The transpose of a matrix A is denoted by A^T. Transposition matters here because of its algebraic identities, such as (AB)^T = B^T A^T and (A^-1)^T = (A^T)^-1, which are exactly what the proof above uses.
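The two transpose identities used in this thread can be verified numerically on arbitrary (hypothetical) matrices:

```python
import numpy as np

# (A B)^T = B^T A^T   and   (P^{-1})^T = (P^T)^{-1}
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])

print(np.allclose((A @ B).T, B.T @ A.T))  # True

P = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible (det = 1)
print(np.allclose(np.linalg.inv(P).T, np.linalg.inv(P.T)))  # True
```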

3. What are distinct eigenvectors in linear algebra?

In linear algebra, distinct eigenvectors are eigenvectors that correspond to different eigenvalues; such eigenvectors are automatically linearly independent. Eigenvectors are special vectors that are only scaled, not rotated, when multiplied by the matrix. They are important because they allow for the diagonalization of matrices and the solving of systems of linear equations.

4. How is diagonalization used in real-world applications?

Diagonalization is used in various real-world applications such as computer graphics, data compression, and quantum mechanics. In computer graphics, diagonalization is used to rotate and scale images. In data compression, diagonalization helps to reduce the size of large data sets. In quantum mechanics, diagonalization is used to find the energy levels of particles.

5. Can a non-square matrix be diagonalized in linear algebra?

No, a non-square matrix cannot be diagonalized in linear algebra. Diagonalization is only possible for square matrices, meaning they have the same number of rows and columns, since a matrix similar to A must have the same dimensions as A. However, every matrix, square or not, admits a singular value decomposition A = UΣV^T with Σ (rectangular) diagonal, which plays an analogous role for non-square matrices.
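A short sketch of the singular value decomposition on a hypothetical 2x3 matrix:

```python
import numpy as np

# Every matrix, square or not, factors as A = U S V^T with S
# (rectangular) diagonal; this replaces eigendecomposition
# for non-square matrices.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
S = np.zeros_like(A)          # 2x3, zeros off the diagonal
S[:2, :2] = np.diag(s)
print(np.allclose(A, U @ S @ Vt))  # True
```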
