Diagonalizing a Matrix A: The Definition and Process Explained

A matrix A can be expressed in the form A = BDB^{-1} if it is diagonalizable, which requires a complete set of independent eigenvectors. A matrix is diagonalizable if it has n independent eigenvectors, where n is the size of the matrix. Distinct eigenvalues guarantee independent eigenvectors, but repeated eigenvalues can also lead to diagonalizability if enough independent eigenvectors exist. For example, the matrix with eigenvalues 2 and 3 can be diagonalized, while the matrix with a double eigenvalue of 1 cannot due to insufficient independent eigenvectors. Understanding these conditions is crucial for determining the diagonalizability of a matrix.
Jhenrique
Given a matrix A, is it possible to rewrite A as:

##A = B D B^{-1}##

##
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
\end{bmatrix}
=
\begin{bmatrix}
?_{11} & ?_{12} \\
?_{21} & ?_{22} \\
\end{bmatrix}

\begin{bmatrix}
\lambda_{1} & 0 \\
0 & \lambda_{2} \\
\end{bmatrix}

\begin{bmatrix}
?_{11} & ?_{12} \\
?_{21} & ?_{22} \\
\end{bmatrix}^{-1}
##

(if A is diagonalizable)

where ##\lambda_i## is the i-th root of the characteristic polynomial of A.

But what is the definition of the matrix B in terms of A?
 
I believe you are asking whether every matrix is "diagonalizable". The answer is no, not every matrix! A matrix is diagonalizable if and only if it has a "complete set" of eigenvectors: an n by n matrix is diagonalizable if and only if it has a set of n independent eigenvectors. Since eigenvectors corresponding to distinct eigenvalues are always independent, a matrix whose eigenvalues are all distinct is diagonalizable. But a matrix with repeated eigenvalues may still be diagonalizable, as long as it still has n independent eigenvectors.

If A has n independent eigenvectors, then we can construct the matrix B having those eigenvectors as its columns. Since the columns are independent, B is invertible, and with D the diagonal matrix whose entries are the corresponding eigenvalues (in the same order as the columns of B), we have ##A = BDB^{-1}##.

The matrix ##\begin{bmatrix}8 & -3 \\ 10 & -3\end{bmatrix}## has eigenvalues 2 and 3. Eigenvectors corresponding to eigenvalue 2 are multiples of ##\begin{bmatrix}1 \\ 2\end{bmatrix}## and eigenvectors corresponding to eigenvalue 3 are multiples of ##\begin{bmatrix}3 \\ 5 \end{bmatrix}##. So if we let ##B = \begin{bmatrix}1 & 3 \\ 2 & 5\end{bmatrix}##, we have ##B^{-1} = \begin{bmatrix}-5 & 3 \\ 2 & -1\end{bmatrix}##.

And then ##BDB^{-1} = \begin{bmatrix}1 & 3 \\ 2 & 5\end{bmatrix}\begin{bmatrix}2 & 0 \\ 0 & 3\end{bmatrix}\begin{bmatrix}-5 & 3 \\ 2 & -1\end{bmatrix} = \begin{bmatrix}8 & -3 \\ 10 & -3\end{bmatrix}##, which recovers A as expected.
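
To double-check this numerically, here is a minimal Python/NumPy sketch (not part of the original thread); the eigenvector matrix returned by np.linalg.eig plays the role of B above, with each column only determined up to a nonzero scale:

import numpy as np

A = np.array([[8.0, -3.0],
              [10.0, -3.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors of A,
# i.e. a valid choice of B (the columns need not match the hand-picked ones above).
eigenvalues, B = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Reassemble A from its eigendecomposition A = B D B^{-1}.
A_rebuilt = B @ D @ np.linalg.inv(B)
print(np.allclose(A, A_rebuilt))  # True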

But, again, not every matrix is diagonalizable. The matrix ##\begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix}## has 1 as a double eigenvalue, but its only eigenvectors are the multiples of ##\begin{bmatrix}1 \\ 0 \end{bmatrix}##. With only one independent eigenvector instead of two, there is no invertible B, so this matrix cannot be written as ##BDB^{-1}##.
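
As another sketch outside the original thread, NumPy makes this defective case visible: the eigenvalue 1 is repeated, and the eigenvector matrix that eig returns is (numerically) singular, so it cannot serve as an invertible B:

import numpy as np

J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, V = np.linalg.eig(J)
print(eigenvalues)        # [1. 1.] -- 1 is a repeated eigenvalue
print(V)                  # both columns are (numerically) multiples of [1, 0]
print(np.linalg.det(V))   # ~0, so V is not invertible and J = V D V^{-1} fails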
 