Can a matrix A be decomposed if it doesn't have n distinct eigenvalues?

Summary
An n×n matrix A can be diagonalized as ##A = PDP^{-1}## exactly when it has n linearly independent eigenvectors. Having n distinct eigenvalues guarantees this, but it is not necessary: a matrix with repeated eigenvalues can still be diagonalizable (the identity matrix is an example). When n independent eigenvectors do not exist, P is not invertible and A is not diagonalizable; alternative decompositions such as the Jordan form or the singular value decomposition can then be used. Non-diagonalizability means the action of A is not a pure scaling along any basis; it also involves shearing (a linear map cannot translate).
JanClaesen
An n×n matrix A has an eigenvalue decomposition if it has n distinct eigenvalues. Is it correct that an n×n matrix that doesn't have n distinct eigenvalues can't be decomposed? Are there more situations in which it can't be decomposed? Why can't I just put the same eigenvalue more than once in the diagonal matrix D? Is it that P is then no longer invertible?

##A = PDP^{-1}##

##Ax = PDP^{-1}x##
The columns of P are the eigenvectors of A, so ##P^{-1}x## gives the coordinates of the vector x with respect to the eigenvector basis (x itself holds its coordinates with respect to the standard basis).
D then scales every component of this transformed vector by the corresponding eigenvalue, and finally P expresses the scaled vector in the standard basis again.
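
To make this reading of ##A = PDP^{-1}## concrete, here is a minimal NumPy sketch (the matrix A and the vector x are made-up values, chosen so the eigenvalues are distinct):

```python
import numpy as np

# Made-up diagonalizable matrix with distinct eigenvalues 5 and 2
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.diag(eigvals)

x = np.array([1.0, 2.0])

coords = np.linalg.inv(P) @ x   # coordinates of x in the eigenvector basis
scaled = D @ coords             # each coordinate scaled by its eigenvalue
print(np.allclose(A @ x, P @ scaled))  # True: P maps back to the standard basis
```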
If a matrix A doesn't have an eigenvalue decomposition, does this imply that the vector x isn't just scaled, but is also rotated or sheared? (A linear map can't translate, so translation is out.)
 
Since no-one has replied in over a week I'll take a swing at your questions.

First, your post has so many questions that I will not attempt to answer them all.

By "eigenvalue decomposition" I'm assuming you mean that the nxn matrix is similar to a diagonal matrix, so that A = P D P^{-1}, where D is diagonal. In order to do this, you form P out of the eigenvectors of A. Hence, you need n linearly independent eigenvectors in order for P^{-1} to exist. If A has n distinct eigenvalues, then you are guaranteed to have n linearly independent eigenvectors, so this case is easy. Another easy case is when A is Hermitian (equal to its conjugate transpose), in which case you are also guaranteed to be able to diagonalize, and furthermore P is unitary (I am assuming you are in a complex vector space, so eigenvalues are allowed to be complex, etc.). Otherwise, in general you cannot expect a matrix to be diagonalizable.

If it isn't, then you have a couple of options. First, you can find a P that makes A as close to diagonal as possible; this is called the Jordan form, and it can be useful at times. Another option is to abandon similarity transformations altogether and use the singular value decomposition, which exists for every matrix and is very useful in practice.
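
As a quick sketch of both alternatives on the same defective matrix as above (SymPy's jordan_form for the Jordan form, NumPy's svd for the singular value decomposition):

```python
import numpy as np
import sympy as sp

# Jordan form: "as diagonal as possible", in exact arithmetic via SymPy
A = sp.Matrix([[1, 1],
               [0, 1]])
P, J = A.jordan_form()   # A == P * J * P**-1
print(J)                 # Matrix([[1, 1], [0, 1]]): a single 2x2 Jordan block

# The SVD exists for every matrix, diagonalizable or not
U, S, Vh = np.linalg.svd(np.array([[1.0, 1.0],
                                   [0.0, 1.0]]))
print(S)                 # singular values: always real and nonnegative
```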

I hope this helps.

jason
 