MHB Finding the Matrix of a Linear Transformation

Sudharaka
Hi everyone, :)

Here's another question I encountered recently. I am writing out the question and my full solution. Many thanks if you can go through it and point out any mistakes, confirm whether it's correct, or contribute any other useful comments. :)

Problem:

Find the matrix of a linear transformation \(f:M_{2}(\mathbb{R})\rightarrow M_{2}(\mathbb{R})\) given by \(f(X)=X^t\), with respect to a basis of matrix units \(E_{ij}\). Is \(f\) diagonalizable?

My Solution:

So first we shall represent the basis of matrix units, taken in the order \(E_{11}, E_{12}, E_{21}, E_{22}\), by column vectors as follows.

\[\begin{pmatrix}1&0\\0&0\end{pmatrix}\rightarrow \begin{pmatrix}1\\0\\0\\0\end{pmatrix}\]

\[\begin{pmatrix}0&1\\0&0\end{pmatrix}\rightarrow \begin{pmatrix}0\\1\\0\\0\end{pmatrix}\]

\[\begin{pmatrix}0&0\\1&0\end{pmatrix}\rightarrow \begin{pmatrix}0\\0\\1\\0\end{pmatrix}\]

\[\begin{pmatrix}0&0\\0&1\end{pmatrix}\rightarrow \begin{pmatrix}0\\0\\0\\1\end{pmatrix}\]

Now we have,

\[f\begin{pmatrix}1\\0\\0\\0\end{pmatrix}= \begin{pmatrix}1\\0\\0\\0\end{pmatrix}\]

\[f\begin{pmatrix}0\\1\\0\\0\end{pmatrix}= \begin{pmatrix}0\\0\\1\\0\end{pmatrix}\]

\[f\begin{pmatrix}0\\0\\1\\0\end{pmatrix}= \begin{pmatrix}0\\1\\0\\0\end{pmatrix}\]

\[f\begin{pmatrix}0\\0\\0\\1\end{pmatrix}= \begin{pmatrix}0\\0\\0\\1\end{pmatrix}\]

Hence the matrix of the linear transformation is,

\[M=\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix}\]

I have basically used the method outlined in the Wikipedia article.
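As a quick mechanical check, here is a minimal numpy sketch of the same construction (entirely my own illustration; the helper names `vec`/`unvec` are hypothetical, and it assumes the row-major basis order \(E_{11}, E_{12}, E_{21}, E_{22}\) used above):

```python
import numpy as np

def f(X):
    # The transformation in question: f(X) = X^t.
    return X.T

def vec(X):
    # Row-major flattening: [[a, b], [c, d]] -> (a, b, c, d),
    # matching the basis order E11, E12, E21, E22 above.
    return X.reshape(4)

def unvec(v):
    return v.reshape(2, 2)

# Column j of M is vec(f(E_j)), where E_j runs over the matrix units.
M = np.column_stack([vec(f(unvec(e))) for e in np.eye(4)])
print(M)
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]
```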

Now to find whether \(M\) is diagonalizable we shall check whether it has four linearly independent eigenvectors. Unfortunately it has only one linearly independent eigenvector (the dimension of the eigenspace is one). Therefore \(M\) is not diagonalizable. :)
 
Sudharaka said:
Here's another question I encountered recently. I am writing out the question and my full solution. Many thanks if you can go through it and point out any mistakes, confirm whether it's correct, or contribute any other useful comments.

Looks pretty good. :)
\[M=\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix}\]

I have basically used the method outlined in the Wikipedia article.

Now to find whether \(M\) is diagonalizable we shall check whether it has four linearly independent eigenvectors. Unfortunately it has only one linearly independent eigenvector (the dimension of the eigenspace is one). Therefore \(M\) is not diagonalizable.

Hmm, which single linearly independent eigenvector is that?
What are the eigenvalues for that matter?
 
I like Serena said:
Looks pretty good. :)

Hmm, which single linearly independent eigenvector is that?
What are the eigenvalues for that matter?

Thanks for the confirmation. Sorry I got the wrong answer for the last part of the question; I just scribbled it down in a hurry to get the answer. There are two eigenvalues, \(\lambda_1=-1\) and \(\lambda_2=1\), and four linearly independent eigenvectors.

\[v_1=(0,\,1,\,-1,\,0)\]

\[v_2=(1,\,0,\,0,\,0)\]

\[v_3=(0,\,0,\,0,\,1)\]

\[v_4=(0,\,1,\,1,\,0)\]

So \(M\) in fact is diagonalizable. :)
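For what it's worth, this can also be verified numerically; here is a minimal numpy sketch (my own, not part of the working above; it uses `eigh`, which applies because \(M\) happens to be symmetric):

```python
import numpy as np

M = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

# M is symmetric, so eigh returns real eigenvalues (in ascending order)
# together with an orthonormal set of eigenvectors.
eigenvalues, P = np.linalg.eigh(M)
print(eigenvalues)                    # [-1.  1.  1.  1.]

# Four orthonormal eigenvectors => M = P D P^T, so M is diagonalizable.
D = np.diag(eigenvalues)
print(np.allclose(M, P @ D @ P.T))    # True
```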
 
Sudharaka said:
Thanks for the confirmation. Sorry I got the wrong answer for the last part of the question; I just scribbled it down in a hurry to get the answer. There are two eigenvalues, \(\lambda_1=-1\) and \(\lambda_2=1\), and four linearly independent eigenvectors.

\[v_1=(0,\,1,\,-1,\,0)\]

\[v_2=(1,\,0,\,0,\,0)\]

\[v_3=(0,\,0,\,0,\,1)\]

\[v_4=(0,\,1,\,1,\,0)\]

So \(M\) in fact is diagonalizable.

Yep. That looks much better. :cool:
 
I like Serena said:
Yep. That looks much better. :cool:

Thanks for helping me out. I truly appreciate it. :)
 
As I see it, the trick is not to be too hasty in calculating the characteristic polynomial.

Expanding by minors along the first column, we get:

$\det(xI - M) = (x - 1)\begin{vmatrix}x&-1&0\\-1&x&0\\0&0&x-1 \end{vmatrix}$

$= (x - 1)\left[x^2(x - 1) + 0 + 0 - 0 - 0 - (-1)(-1)(x - 1)\right] = (x - 1)\left[x^2(x - 1) - (x - 1)\right]$

$= (x - 1)(x^2 - 1)(x - 1) = (x - 1)^3(x + 1)$
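(If you want to double-check that factorization symbolically, here is a minimal sympy sketch, purely illustrative:)

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1]])

# charpoly computes det(x*I - M) as a polynomial in x.
p = M.charpoly(x).as_expr()
print(sp.factor(p))   # (x - 1)**3*(x + 1)
```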

If you stop and think about it, all the transpose map does is switch the 1,2 and the 2,1 entries. Therefore, we clearly have:

$E_{11}, E_{22}$ as linearly independent eigenvectors belonging to 1.

In fact, the transpose map preserves all SYMMETRIC matrices, and thus preserves the third (usual) basis vector of the subspace of symmetric matrices:

$E_{12} + E_{21}$.

So even if you had concluded you had just the eigenvalue 1, you still should have come up with 3 linearly independent eigenvectors for it.

The transpose also sends any skew-symmetric matrix to its negative, so the subspace of skew-symmetric matrices is an invariant subspace of the transpose map. For \(2\times 2\) matrices, this subspace has just one (usual) basis vector:

$E_{12} - E_{21}$ (the diagonal entries must be 0, and the lower off-diagonal entry is completely determined by the upper one).

Explicitly, these are 4 linearly independent elements of $M_2(\Bbb R)$; the first 3 clearly belong to the eigenvalue 1, and the last belongs to the eigenvalue -1 (I am stating it like this to show that you need not pass to a "vectorized" representation of $M_2(\Bbb R)$).

Succinctly, one can express the above as:

$M_2(\Bbb R) = \text{Sym}_2(\Bbb R) \oplus \text{Skew}_2(\Bbb R)$, and I leave it to you to consider how you might generalize this result to $n\times n$ matrices.

If we turn our reasoning "upside down" (provided we have an underlying field with characteristic not equal to 2, otherwise symmetric = skew-symmetric), we can conclude that every matrix has a UNIQUE representation as the sum of a symmetric and skew-symmetric one, namely:

$M = \frac{1}{2}(M + M^T) + \frac{1}{2}(M - M^T)$

which relies on an algebraic trick you may well encounter in other settings (such as even and odd functions).
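To make the decomposition concrete, here is a minimal numpy sketch (the matrix \(A\) is an arbitrary example of my own choosing):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

sym  = (A + A.T) / 2   # symmetric part: fixed by the transpose map (eigenvalue 1)
skew = (A - A.T) / 2   # skew-symmetric part: negated by it (eigenvalue -1)

print(sym)                           # [[1.  2.5] [2.5 4. ]]
print(skew)                          # [[ 0.  -0.5] [ 0.5  0. ]]
print(np.allclose(A, sym + skew))    # True
```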
 