MHB Finding the Matrix of a Linear Transformation

Sudharaka
Hi everyone, :)

Here's another question I encountered recently. I am writing out the question and my full solution. Many thanks if you can go through it and point out any mistakes, confirm whether it's correct, or contribute any other useful comments. :)

Problem:

Find the matrix of a linear transformation \(f:M_{2}(\mathbb{R})\rightarrow M_{2}(\mathbb{R})\) given by \(f(X)=X^t\), with respect to a basis of matrix units \(E_{ij}\). Is \(f\) diagonalizable?

My Solution:

So first we shall represent the basis of matrix units by column vectors as follows.

\[\begin{pmatrix}1&0\\0&0\end{pmatrix}\rightarrow \begin{pmatrix}1\\0\\0\\0\end{pmatrix}\]

\[\begin{pmatrix}0&1\\0&0\end{pmatrix}\rightarrow \begin{pmatrix}0\\1\\0\\0\end{pmatrix}\]

\[\begin{pmatrix}0&0\\1&0\end{pmatrix}\rightarrow \begin{pmatrix}0\\0\\1\\0\end{pmatrix}\]

\[\begin{pmatrix}0&0\\0&1\end{pmatrix}\rightarrow \begin{pmatrix}0\\0\\0\\1\end{pmatrix}\]

Now we have,

\[f\begin{pmatrix}1\\0\\0\\0\end{pmatrix}= \begin{pmatrix}1\\0\\0\\0\end{pmatrix}\]

\[f\begin{pmatrix}0\\1\\0\\0\end{pmatrix}= \begin{pmatrix}0\\0\\1\\0\end{pmatrix}\]

\[f\begin{pmatrix}0\\0\\1\\0\end{pmatrix}= \begin{pmatrix}0\\1\\0\\0\end{pmatrix}\]

\[f\begin{pmatrix}0\\0\\0\\1\end{pmatrix}= \begin{pmatrix}0\\0\\0\\1\end{pmatrix}\]

Hence the matrix of the linear transformation is,

\[M=\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix}\]

I have basically used the method outlined in the Wikipedia article.
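As a quick sanity check (not part of the original method), here is a minimal numpy sketch that builds \(M\) exactly this way, assuming the row-major vectorization \(E_{11}, E_{12}, E_{21}, E_{22}\) shown above:

```python
import numpy as np

# Matrix units in the order used above: E11, E12, E21, E22.
basis = [np.array([[1, 0], [0, 0]]),
         np.array([[0, 1], [0, 0]]),
         np.array([[0, 0], [1, 0]]),
         np.array([[0, 0], [0, 1]])]

# Column j of M is the row-major vectorization of f(E_j) = E_j^t.
M = np.column_stack([E.T.flatten() for E in basis])
print(M)
# [[1 0 0 0]
#  [0 0 1 0]
#  [0 1 0 0]
#  [0 0 0 1]]

# M should vectorize the transpose of an arbitrary matrix.
X = np.random.rand(2, 2)
assert np.allclose(M @ X.flatten(), X.T.flatten())
```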

Now, to determine whether \(M\) is diagonalizable, we check whether it has four linearly independent eigenvectors. Unfortunately it has only one linearly independent eigenvector (the eigenspace has dimension one). Therefore \(M\) is not diagonalizable. :)
 
Sudharaka said:
Here's another question I encountered recently. I am writing out the question and my full solution. Many thanks if you can go through it and point out any mistakes, confirm whether it's correct, or contribute any other useful comments.

Looks pretty good. :)
\[M=\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix}\]

I have basically used the method outlined in the Wikipedia article.

Now, to determine whether \(M\) is diagonalizable, we check whether it has four linearly independent eigenvectors. Unfortunately it has only one linearly independent eigenvector (the eigenspace has dimension one). Therefore \(M\) is not diagonalizable.

Hmm, which single linearly independent eigenvector is that?
What are the eigenvalues for that matter?
 
I like Serena said:
Looks pretty good. :)

Hmm, which single linearly independent eigenvector is that?
What are the eigenvalues for that matter?

Thanks for the confirmation. Sorry, I got the wrong answer for the last part of the question; I just scribbled it down because I was in a hurry. There are actually two eigenvalues, \(\lambda_1=-1\) and \(\lambda_2=1\), and four linearly independent eigenvectors.

\[v_1=(0,\,1,\,-1,\,0)\]

\[v_2=(1,\,0,\,0,\,0)\]

\[v_3=(0,\,0,\,0,\,1)\]

\[v_4=(0,\,1,\,1,\,0)\]

So \(M\) in fact is diagonalizable. :)
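For the record, a quick numerical check (a minimal numpy sketch of my own) agrees: since \(M\) is real symmetric it is automatically diagonalizable, and its eigenvalues come out as \(-1, 1, 1, 1\).

```python
import numpy as np

M = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])

# M is real symmetric, so eigh applies (and diagonalizability is guaranteed).
eigenvalues, eigenvectors = np.linalg.eigh(M)
print(eigenvalues)  # [-1.  1.  1.  1.]

# Four linearly independent eigenvector columns => an eigenbasis exists.
assert np.linalg.matrix_rank(eigenvectors) == 4
```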
 
Sudharaka said:
Thanks for the confirmation. Sorry, I got the wrong answer for the last part of the question; I just scribbled it down because I was in a hurry. There are actually two eigenvalues, \(\lambda_1=-1\) and \(\lambda_2=1\), and four linearly independent eigenvectors.

\[v_1=(0,\,1,\,-1,\,0)\]

\[v_2=(1,\,0,\,0,\,0)\]

\[v_3=(0,\,0,\,0,\,1)\]

\[v_4=(0,\,1,\,1,\,0)\]

So \(M\) in fact is diagonalizable.

Yep. That looks much better. :cool:
 
I like Serena said:
Yep. That looks much better. :cool:

Thanks for helping me out. I truly appreciate it. :)
 
As I see it, the trick is not to be too hasty in calculating the characteristic polynomial.

Expanding by minors along the first column, we get:

$\det(xI - M) = (x - 1)\begin{vmatrix}x&-1&0\\-1&x&0\\0&0&x-1 \end{vmatrix}$

$= (x - 1)[x^2(x - 1) + 0 + 0 - 0 - 0 - (-1)(-1)(x - 1)] = (x - 1)[x^2(x - 1) - (x - 1)]$

$= (x - 1)(x^2 - 1)(x - 1) = (x - 1)^3(x + 1)$
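For anyone who wants to double-check the algebra, a short sympy sketch (my own verification, not part of the hand computation) reproduces the factorization:

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1]])

# charpoly computes det(x*I - M) as a polynomial in x.
p = M.charpoly(x).as_expr()
print(sp.factor(p))  # (x - 1)**3*(x + 1)
```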

If you stop and think about it, all the transpose map does is switch the (1,2) and (2,1) entries. Therefore, we clearly have:

$E_{11}, E_{22}$ as linearly independent eigenvectors belonging to the eigenvalue 1.

In fact, the transpose map preserves all SYMMETRIC matrices, and thus preserves the third (usual) basis vector of the subspace of symmetric matrices:

$E_{12} + E_{21}$.

So even if you had concluded you had just the eigenvalue 1, you still should have come up with an eigenbasis with 3 elements.

The transpose also sends any skew-symmetric matrix to its negative, so the subspace of skew-symmetric matrices is an invariant subspace of the transpose map. For $2 \times 2$ matrices, this subspace has just one (usual) basis vector:

$E_{12} - E_{21}$ (the diagonal entries must be 0, and the lower off-diagonal entry is completely determined by the upper one).

Explicitly, these are 4 linearly independent elements of $M_2(\Bbb R)$; the first 3 clearly belong to the eigenvalue 1, and the last belongs to the eigenvalue -1 (I state it this way to show that you need not pass to a "vectorized" representation of $M_2(\Bbb R)$).

Succinctly, one can express the above as:

$M_2(\Bbb R) = \text{Sym}_2(\Bbb R) \oplus \text{Skew}_2(\Bbb R)$, and I leave it to you to consider how you might generalize this result to $n \times n$ matrices.

If we turn our reasoning "upside down" (provided the underlying field has characteristic not equal to 2; otherwise symmetric = skew-symmetric), we can conclude that every matrix has a UNIQUE representation as the sum of a symmetric and a skew-symmetric matrix, namely:

$M = \frac{1}{2}(M + M^T) + \frac{1}{2}(M - M^T)$

which relies on an algebraic trick you may well encounter in other settings (such as even and odd functions).
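Here is that trick in executable form (a minimal numpy sketch of my own, for an arbitrary square matrix):

```python
import numpy as np

A = np.random.rand(4, 4)      # any n x n real matrix

sym  = (A + A.T) / 2          # symmetric part: fixed by the transpose map
skew = (A - A.T) / 2          # skew part: negated by the transpose map

assert np.allclose(A, sym + skew)    # the decomposition recovers A
assert np.allclose(sym, sym.T)       # symmetric
assert np.allclose(skew, -skew.T)    # skew-symmetric
```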
 