Eigenvalues of AX - XA: Finding Eigenvectors for a Real 2x2 Symmetric Matrix

Summary
The discussion focuses on finding the eigenvalues and eigenvectors of the linear mapping L(X) = AX - XA, where A is a 2x2 symmetric matrix. It is established that 0 is one eigenvalue, with its eigenspace consisting of matrices that commute with A, although its algebraic multiplicity is uncertain. The approach of transforming the problem into a 4x4 system is noted as potentially cumbersome. A more efficient method involves diagonalizing A, leading to the conclusion that the eigenvalues of L are 0, 0, k1 - k2, and k2 - k1, where k1 and k2 are the eigenvalues of A. This diagonalization simplifies the computation significantly.
alchemik
Hi all,

Here is a problem that I have been working on for some time now: find the eigenvalues and corresponding eigenvectors of the following linear mapping on the vector space of real 2 by 2 matrices:

L(X) = AX - XA, where A is a 2 by 2 symmetric matrix that is not a scalar multiple of the identity.

It is clear that 0 is one eigenvalue of the above, with an eigenspace consisting of all matrices X that commute with A. I do not know, however, what its algebraic multiplicity would be, except that it has to be less than 4. To find the other eigenvalues I looked at

AX - XA = \lambda X

a column at a time to obtain a 4 by 4 system.

Ax_{i} - \sum^{2}_{j = 1}a_{ji}x_{j} = \lambda x_{i}

for i = 1, 2, where x_j denotes the j-th column of X and a_{ji} is the (j, i) entry of A.

This approach requires me to find eigenvalues and eigenvectors of a 4 by 4 matrix, which may get messy if you have to do it by hand. Do you guys know any other, more efficient way of approaching this problem? I have attempted to use the fact that A can be orthogonally diagonalized in hopes of simplifying the above but with no success.
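For what it's worth, the 4 by 4 system can be written down and checked numerically: stacking the columns of X into a vector turns L into the 4x4 matrix I ⊗ A − Aᵀ ⊗ I. A minimal sketch in Python with NumPy, using a hypothetical symmetric A (any symmetric A that is not a multiple of the identity would do):

```python
import numpy as np

# Hypothetical symmetric A, not a scalar multiple of the identity.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# With column-stacking vec(), vec(AX) = (I kron A) vec(X) and
# vec(XA) = (A^T kron I) vec(X), so L corresponds to this 4x4 matrix:
L = np.kron(np.eye(2), A) - np.kron(A.T, np.eye(2))

eigs = np.sort(np.linalg.eigvals(L).real)
k = np.sort(np.linalg.eigvals(A).real)
print(eigs)  # two zeros plus +/-(k2 - k1)
```

For this particular A the eigenvalues are (5 ± √5)/2, so L has eigenvalues 0, 0, and ±√5, consistent with the answer below in terms of the eigenvalues of A.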

Thanks.
 
Diagonalize A: let A = P^{-1}DP, where D = diag(k1, k2). Given any matrix X, write
X = P^{-1} \begin{pmatrix}a&b\\c&d\end{pmatrix}P.
You can compute that
L(X) = P^{-1} \begin{pmatrix}0&(k_1-k_2)b\\(k_2-k_1)c&0\end{pmatrix}P.
You can now read off the eigenvalues of L directly: 0, 0, k1 - k2, and k2 - k1. The eigenvectors are just as immediate: X = P^{-1}E_{ij}P, where E_{ij} is the matrix with a 1 in position (i, j) and zeros elsewhere.

The same process applies to arbitrary diagonalizable matrices. If A is an n × n diagonalizable matrix with eigenvalues k1, ..., kn and L(X) = AX - XA, then the eigenvalues of L are k_i - k_j for each pair i, j from 1, ..., n.
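This general claim is easy to check numerically. A quick sketch in Python with NumPy, using a random symmetric matrix as a hypothetical stand-in for an arbitrary diagonalizable A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical test matrix: symmetric, hence diagonalizable with real eigenvalues.
B = rng.standard_normal((n, n))
A = B + B.T

k = np.linalg.eigvalsh(A)
# All n^2 pairwise differences k_i - k_j (zero appears n times, from i = j).
expected = np.sort((k[:, None] - k[None, :]).ravel())

# L(X) = AX - XA as an n^2 x n^2 matrix via column-stacking vec().
L = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
actual = np.sort(np.linalg.eigvals(L).real)

print(np.allclose(actual, expected))  # True
```

The eigenvectors in the general case are the same as in the 2 by 2 case: P^{-1}E_{ij}P, with L(P^{-1}E_{ij}P) = (k_i - k_j) P^{-1}E_{ij}P.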
 
This is indeed a lot more tractable, thanks a lot, adriank!
 
