Find a matrix ##C## such that ##C^{-1} A C## is a diagonal matrix

Hall
Homework Statement
Let A be a square matrix.
Relevant Equations
##C^{-1} AC##
I’m really unable to solve those questions which ask to find a nonsingular ##C## such that
$$
C^{-1} A C$$
is a diagonal matrix. Some people solve it by finding the eigenvalues and then using them to form a diagonal matrix, setting it equal to $$C^{-1} A C$$. Can you please tell me from scratch the logic behind solving those problems?
 
There must be lots of resources online that show why constructing ##C## from the eigenvectors of ##A## works.
 
PeroK said:
There must be lots of resources online that show why constructing ##C## from the eigenvectors of ##A## works.
As well as your textbook.
 
Let's say you constructed a matrix ##C## from a basis of eigenvectors. What is the result of applying ##A## to one of the columns of ##C##? What, then, happens when you apply ##C^{-1}## to the result?

That being said, I read this somewhere and have no idea how this was derived. But I do know why it works.
 
Three things to remember here are:
- a diagonal matrix is one that maps ##e_i## to a multiple of ##e_i##.

- If ##v_i## is the ##i##th column of ##C##, then ##Ce_i=v_i##.

- Also by definition of the inverse, ##C^{-1} v_i = e_i##.

Now try applying all the matrices to ##e_i## and see what you get at the end.
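The three facts above can be sketched numerically. This is a minimal illustration assuming NumPy; the matrix ##A## is my own illustrative choice (eigenvalues 5 and 2), not from the thread.

```python
import numpy as np

# Illustrative 2x2 matrix with eigenvalues 5 and 2 (an assumption for the demo)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, C = np.linalg.eig(A)   # columns of C are eigenvectors v_i of A
C_inv = np.linalg.inv(C)

e1 = np.array([1.0, 0.0])
v1 = C @ e1                     # fact 2: C e_1 = v_1, the first column of C
Av1 = A @ v1                    # v_1 is an eigenvector: A v_1 = lambda_1 v_1
result = C_inv @ Av1            # fact 3: C^{-1}(lambda_1 v_1) = lambda_1 e_1
```

Chasing ##e_1## through ##C##, then ##A##, then ##C^{-1}## lands on ##\lambda_1 e_1##, which is exactly what a diagonal matrix does to ##e_1##.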
 
@Hall: Notice that such a matrix ##C## is not guaranteed to exist for every matrix ##A##. Some assumptions must be made on ##A##. Do you know what they are?
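As a numerical sketch of this caveat (NumPy here is my own choice), a Jordan block is a standard example where no invertible ##C## of eigenvectors can be assembled:

```python
import numpy as np

# A Jordan block: repeated eigenvalue 1, but a one-dimensional eigenspace
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
# The two computed eigenvector columns are (numerically) parallel,
# so the matrix built from them is singular: no invertible C exists.
det = np.linalg.det(eigvecs)
```

Here `det` comes out numerically zero, so the eigenvectors cannot serve as the columns of a nonsingular ##C##.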
 
@Office_Shredder @WWGD Thanks for coming in. A few honorable men developed an impression that I was asking for spoon-feeding, so it has become imperative to lay out what I already know and what I wanted to know:

Let ##A## be a ##2 \times 2## matrix that represents the linear transformation ##T : V \to V## w.r.t. the basis ##(e_1, e_2)##.

Suppose ##T## has two eigenvalues ##\lambda_1## and ##\lambda_2##, with corresponding eigenvectors ##u_1## and ##u_2##. Then, w.r.t. the basis ##(u_1, u_2)##, the diagonal matrix
$$
\Lambda =
\begin{bmatrix}
\lambda_1 & 0\\
0 & \lambda_2\\
\end{bmatrix}$$
will also represent ##T##. Thus ##\Lambda## and ##A## are similar, and therefore there exists a nonsingular ##C## such that
$$
\Lambda = C^{-1} A C$$

But in Question 2 of Exercise 4.10 of Apostol's Calculus Vol. II, the author just asks us to convert ##A## into a diagonal matrix by forming
$$
C^{-1} AC$$
And my doubt is: why do we equate it to ##\Lambda## and not to some other diagonal matrix, say
$$
\begin{bmatrix}
\alpha &0 \\
0 & \beta\\
\end{bmatrix}$$
 
##A## is diagonalisable if it is similar to a diagonal matrix. It is known that ##A## is diagonalisable if and only if ##A=PDP^{-1}##, where ##D## is a diagonal matrix whose diagonal elements are the eigenvalues of ##A##. For every eigenvalue, pick an eigenvector corresponding to that value; these eigenvectors are used to make ##P##. It suffices to find bases for the individual eigenspaces, for instance.
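The factorisation ##A = PDP^{-1}## can be checked directly. A minimal sketch assuming NumPy, where the symmetric matrix ##A## (eigenvalues 3 and 1) is my own illustrative choice:

```python
import numpy as np

# Illustrative symmetric matrix with eigenvalues 3 and 1 (an assumption)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # one eigenvector per eigenvalue, as columns of P
D = np.diag(eigvals)            # diagonal matrix of the eigenvalues
A_rebuilt = P @ D @ np.linalg.inv(P)
```

Here `A_rebuilt` recovers ##A##, and equivalently ##P^{-1} A P = D##.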
 
Eclair_de_XII said:
Let's say you constructed a matrix ##C## from a basis of eigenvectors. What is the result of applying ##A## to one of the columns of ##C##? What, then, happens when you apply ##C^{-1}## to the result?

That being said, I read this somewhere and have no idea how this was derived. But I do know why it works.
Oh! Thanks. I think I can demonstrate that. Consider ##A## to be a ##2 \times 2## matrix with eigenvalues ##\lambda_1## and ##\lambda_2##. Let an eigenvector for ##\lambda_1## be ##(x_1, x_2)## and an eigenvector for ##\lambda_2## be ##(y_1, y_2)##. Then,
$$
C =
\begin{bmatrix}
x_1 & y_1\\
x_2 & y_2\\
\end{bmatrix}$$
The first column of ##AC## (of course ##A## times ##C##, not air conditioner) is
$$
A \times
\begin{bmatrix}
x_1 \\
x_2\\
\end{bmatrix}
=
\begin{bmatrix}
\lambda_1 x_1\\
\lambda _1 x_2\\
\end{bmatrix}$$

And the second column is
$$
A \times
\begin{bmatrix}
y_1\\
y_2\\
\end{bmatrix}
=
\begin{bmatrix}
\lambda_2 y_1\\
\lambda_2 y_2\\
\end{bmatrix}
$$

Thus, we have
$$
A ~C =
\begin{bmatrix}
\lambda_1 x_1 & \lambda_2 y_1\\
\lambda_1 x_2 & \lambda_2 y_2\\
\end{bmatrix}
$$

Now, multiplying that with ##C^{-1}##, we have the first column as
$$
C^{-1} \times \lambda_1
\begin{bmatrix}
x_1\\
x_2\\
\end{bmatrix}
=
\lambda _1
\begin{bmatrix}
1\\
0\\
\end{bmatrix}
$$
And the second column would be
$$
C^{-1} \times \lambda_2
\begin{bmatrix}
y_1\\
y_2\\
\end{bmatrix}
=
\lambda_2
\begin{bmatrix}
0\\
1\\
\end{bmatrix}
$$

Thus,
$$
C^{-1} A C =
\begin{bmatrix}
\lambda_1 &0\\
0 & \lambda_2\\
\end{bmatrix}
$$

Hence, ##A## is diagonalized.

Thanks for giving this idea.
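The column-by-column computation above can also be checked numerically. A sketch assuming NumPy, with an illustrative matrix of my own (eigenvalues 3 and -1):

```python
import numpy as np

# Illustrative 2x2 matrix with eigenvalues 3 and -1 (an assumption for the demo)
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

lam, C = np.linalg.eig(A)        # columns of C play the roles of (x1, x2), (y1, y2)
C_inv = np.linalg.inv(C)

col1 = C_inv @ (A @ C[:, 0])     # first column of C^{-1} A C: lambda_1 * (1, 0)
col2 = C_inv @ (A @ C[:, 1])     # second column: lambda_2 * (0, 1)
Lambda = C_inv @ A @ C           # the full product, diag(lambda_1, lambda_2)
```

The full product comes out as the diagonal matrix of eigenvalues, matching the hand computation.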
 