Give a basis so that the matrix M has a specific form

In summary, we can find a diagonal $M_B(\phi)$ by calculating the eigenvalues and corresponding eigenvectors for a linear map. If the map is diagonalizable, the eigenvectors form a basis that satisfies the condition. In this case, we have also found an upper triangular matrix since a diagonal matrix is also upper triangular.
  • #1
mathmari
Hey! :giggle:

We have the following linear maps \begin{align*}\phi_1:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}x+y\\ x-y\end{pmatrix} \\ \phi_2:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}-y\\ x\end{pmatrix} \\ \phi_3:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}y\\ 0\end{pmatrix} \end{align*}

1. Give (if possible) for each $i\in \{1,2,3\}$ a basis $B_i$ of $\mathbb{R}^2$ such that $M_{B_i}(\phi_i)$ is an upper triangular matrix.
2. Give (if possible) for each $i\in \{1,2,3\}$ a basis $B_i$ of $\mathbb{R}^2$ such that $M_{B_i}(\phi_i)$ is a diagonal matrix.

I have done the following:

Let $\mathcal{B}_i=\{b_1, b_2\}$, with $b_1=\begin{pmatrix}x_1\\ y_1 \end{pmatrix}$ and $b_2=\begin{pmatrix}x_2\\ y_2 \end{pmatrix}$.

For question 1 :

- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\left (\phi_1(b_1)\mid \phi_1(b_2)\right )=\left (\phi_1\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_1\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix}\end{equation*}
For this to be upper triangular we need $x_1-y_1=0$, i.e. $x_1=y_1$.
For example we can take the basis $\mathcal{B}_1=\{b_1, b_2\}$ with $b_1=\begin{pmatrix}1\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 0 \end{pmatrix}$.
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\begin{pmatrix}2 & 1 \\ 0 & 1\end{pmatrix}\end{equation*} is an upper triangular matrix.

- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\left (\phi_2(b_1)\mid \phi_2(b_2)\right )=\left (\phi_2\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_2\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}-y_1 & -y_2 \\ x_1 & x_2\end{pmatrix}\end{equation*}
For this to be upper triangular we need $x_1=0$. For example we can take the basis $\mathcal{B}_2=\{b_1, b_2\}$ with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 1 \end{pmatrix}$.
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\begin{pmatrix}-1 & -1 \\ 0 & 1\end{pmatrix}\end{equation*} is an upper triangular matrix.

- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\left (\phi_3(b_1)\mid \phi_3(b_2)\right )=\left (\phi_3\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_3\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}y_1 & y_2 \\ 0 & 0\end{pmatrix}\end{equation*}
This is already an upper triangular matrix for any choice of basis, so we can take an arbitrary basis, e.g. $\mathcal{B}_3=\{b_1, b_2\}$ with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 1 \end{pmatrix}$.
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}1 & 1 \\ 0 & 0\end{pmatrix}\end{equation*} is an upper triangular matrix.
For question 2 :

- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\left (\phi_1(b_1)\mid \phi_1(b_2)\right )=\left (\phi_1\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_1\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix}\end{equation*}
For this to be diagonal we need $x_1-y_1=x_2+y_2=0$, i.e. $x_1=y_1$ and $x_2=-y_2$. For example we can take the basis $\mathcal{B}_1=\{b_1, b_2\}$ with $b_1=\begin{pmatrix}1\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ -1 \end{pmatrix}$.
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)=\begin{pmatrix}2 & 0 \\ 0 & 2\end{pmatrix}\end{equation*} is a diagonal matrix.

- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\left (\phi_2(b_1)\mid \phi_2(b_2)\right )=\left (\phi_2\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_2\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}-y_1 & -y_2 \\ x_1 & x_2\end{pmatrix}\end{equation*}
For this to be diagonal we need $x_1=y_2=0$. For example we can take the basis $\mathcal{B}_2=\{b_1, b_2\}$ with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 0 \end{pmatrix}$.
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_2}(\phi_2)=\begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}\end{equation*} is a diagonal matrix.

- It holds that \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\left (\phi_3(b_1)\mid \phi_3(b_2)\right )=\left (\phi_3\begin{pmatrix}x_1\\ y_1 \end{pmatrix}\mid \phi_3\begin{pmatrix}x_2\\ y_2 \end{pmatrix}\right )=\begin{pmatrix}y_1 & y_2 \\ 0 & 0\end{pmatrix}\end{equation*}
For this to be diagonal we need $y_2=0$. For example we can take the basis $\mathcal{B}_3=\{b_1, b_2\}$ with $b_1=\begin{pmatrix}0\\ 1 \end{pmatrix}$ and $b_2=\begin{pmatrix}1\\ 0 \end{pmatrix}$.
These vectors are linearly independent and the matrix \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}\end{equation*} is a diagonal matrix.

Is everything correct? :unsure:
 
  • #2
Hi mathmari!

What is $M_B(\phi)$? 🤔

I would expect it to be the matrix of the transformation $\phi$ with respect to the basis $B$.
But if so, then we would have $M_B(\phi) = (b_1\mid b_2)^{-1} (\phi(b_1)\mid \phi(b_2))$. :oops:

Consider for instance the identity transformation $\text{id}$.
With respect to a basis $B$ it should be $M_B(\text{id})=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ shouldn't it? And not $(b_1\mid b_2)$? :unsure:
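A quick numerical sketch of that formula (assuming Python with numpy; the basis below is just an arbitrary example), including the sanity check with the identity map:

```python
import numpy as np

def phi1(v):
    # phi_1 from the opening post: (x, y) -> (x + y, x - y)
    return np.array([v[0] + v[1], v[0] - v[1]])

# An arbitrary example basis of R^2, stored as columns b1, b2.
B = np.column_stack(([1, 1], [1, 0]))     # b1 = (1, 1), b2 = (1, 0)

# M_B(phi) = (b1 | b2)^{-1} (phi(b1) | phi(b2))
M_B = np.linalg.inv(B) @ np.column_stack((phi1(B[:, 0]), phi1(B[:, 1])))
print(M_B)                                # differs from (phi(b1) | phi(b2)) alone

# Sanity check from this post: the identity map gives the identity matrix.
M_B_id = np.linalg.inv(B) @ B
print(np.allclose(M_B_id, np.eye(2)))     # True
```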
 
  • #3
Klaas van Aarsen said:
What is $M_B(\phi)$? 🤔

I would expect it to be the matrix of the transformation $\phi$ with respect to the basis $B$.
But if so, then we would have $M_B(\phi) = (b_1\mid b_2)^{-1} (\phi(b_1)\mid \phi(b_2))$. :oops:

Consider for instance the identity transformation $\text{id}$.
With respect to a basis $B$ it should be $M_B(\text{id})=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ shouldn't it? And not $(b_1\mid b_2)$? :unsure:

Ahh ok!

Yes, it is the matrix of the transformation.

So, what do we have to do? :unsure:
 
  • #4
We can find a diagonal $M_B(\phi)$ by calculating the eigenvalues and corresponding eigenvectors.
If $\phi$ is diagonalizable, then the eigenvectors form a basis that satisfies the condition.
In that case we have also found an upper triangular matrix, since a diagonal matrix is upper triangular. 🤔
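A minimal numerical sketch of this (assuming Python with numpy), applied to the standard matrix of $\phi_1$ from the opening post:

```python
import numpy as np

A = np.array([[1, 1], [1, -1]])     # standard matrix of phi_1: (x, y) -> (x + y, x - y)

# Eigenvalues and eigenvectors; the eigenvectors are the columns of `eigvecs`.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                      # two distinct real eigenvalues, so A is diagonalizable

# With the eigenvector basis B, the matrix of phi_1 becomes diagonal: B^{-1} A B.
B = eigvecs
M_B = np.linalg.inv(B) @ A @ B
print(np.round(M_B, 10))            # diagonal, with the eigenvalues on the diagonal
```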
 
  • #5
In general, how are $M_B^B(\phi_a)$ for a matrix $a$, $M_B^E(\text{id})$, and $M_E^B(\text{id})$ defined? For example let $$b_1=\begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}, b_2=\begin{pmatrix}1 \\ 0\\ -1\end{pmatrix}, b_3=\begin{pmatrix}-1 \\ 1\\ 0\end{pmatrix}$$

Then is the following correct?
\begin{equation*}\mathcal{M}_{\mathcal{E}}^{\mathcal{B}}(\text{id})=\left (\gamma_{\mathcal{E}}(b_1)\mid \gamma_{\mathcal{E}}(b_2)\mid \gamma_{\mathcal{E}}(b_3)\right )=\left (b_1\mid b_2\mid b_3\right )=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}\end{equation*}

\begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})=\left (\gamma_{\mathcal{B}}(e_1)\mid \gamma_{\mathcal{B}}(e_2)\mid \gamma_{\mathcal{B}}(e_3)\right )\end{equation*}
For each $e_i$ we apply Gauss algorithm:

\begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ 0 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ -1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ -1 \\ -1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ -1 \\ 1 \end{matrix}\end{pmatrix} \end{align*}
So we get \begin{equation*}\gamma_B(e_1)=\begin{pmatrix}-\frac{1}{3}\\ \frac{5}{3} \\ \frac{1}{3}\end{pmatrix}\end{equation*} \begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ -2 \end{matrix}\end{pmatrix} \end{align*}
So we get \begin{equation*}\gamma_B(e_2)=\begin{pmatrix}\frac{4}{3}\\ \frac{1}{3} \\ \frac{2}{3}\end{pmatrix}\end{equation*} \begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \end{align*}
So we get \begin{equation*}\gamma_B(e_3)=\begin{pmatrix}-1\\ \frac{2}{3} \\ -\frac{1}{3}\end{pmatrix}\end{equation*} That means that \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\phi_a)=\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\end{equation*}
And : \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} :unsure:
 
  • #6
mathmari said:
In general, how are $M_B^B(\phi_a)$ for a matrix $a$, $M_B^E(\text{id})$, and $M_E^B(\text{id})$ defined?

I usually get confused with the upper and lower indices, and I think some texts swap their meaning. o_O
Anyway, let me give one definition, which matches what you write afterwards.

$M_E^B(\phi)$ is the matrix such that when we multiply it with a vector with respect to the basis $B$, the result is the vector with respect to the basis $E$ mapped according to the transformation $\phi$.

So suppose that $\gamma_E(b_1)$ is the vector with respect to $E$ of the first basis vector $b_1$ of $B$.
Then $M_E^B(\phi)\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\phi(b_1))$. 🤔

If $\phi$ is the identity $\text{id}$, we get $M_E^B(\text{id})\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\text{id}(b_1))=\gamma_E(b_1)$.
mathmari said:
And : \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*}
I think something went wrong, because the result should be the identity matrix. :oops:

Also, it seems that $\phi_a$ and $\text{id}$ have been mixed up in a number of places. o_O
 
  • #7
Klaas van Aarsen said:
I usually get confused with the upper and lower indices, and I think some texts swap their meaning. o_O
Anyway, let me give one definition, which matches what you write afterwards.

$M_E^B(\phi)$ is the matrix such that when we multiply it with a vector with respect to the basis $B$, the result is the vector with respect to the basis $E$ mapped according to the transformation $\phi$.

So suppose that $\gamma_E(b_1)$ is the vector with respect to $E$ of the first basis vector $b_1$ of $B$.
Then $M_E^B(\phi)\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\phi(b_1))$. 🤔

If $\phi$ is the identity $\text{id}$, we get $M_E^B(\text{id})\begin{pmatrix}1\\0\end{pmatrix}=\gamma_E(\text{id}(b_1))=\gamma_E(b_1)$.

So, what I have done in my previous post is not correct, is it? :unsure:
Klaas van Aarsen said:
I think something went wrong, because the result should be the identity matrix. :oops:

Also, it seems that $\phi_a$ and $\text{id}$ have been mixed up in a number of places. o_O

Ahh... The matrix $a$ is $
\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ and I thought that the result is equal to the matrix $a$, since $\phi_a(b_i)$ is the $i$-th column of $a$? Or isn't $M_B^B(\text{id})$ defined like that? :unsure:
 
  • #8
mathmari said:
So, what I have done in my previous post is not correct, is it?

Ahh... The matrix $a$ is $
\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ and I thought that the result is equal to the matrix $a$, since $\phi_a(b_i)$ is the $i$-th column of $a$? Or isn't $M_B^B(\text{id})$ defined like that?
Now that you mention what $a$ is, it makes a bit more sense.

Either way, $M_B^B(\text{id})$ makes no reference to $a$ nor $\phi_a$ does it? So it can not be equal to anything that does refer to $\phi_a$. :oops:
mathmari said:
For each $e_i$ we apply Gauss algorithm:

\begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ 0 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ -1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ -1 \\ -1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}1 \\ -1 \\ 1 \end{matrix}\end{pmatrix} \end{align*}
So we get \begin{equation*}\gamma_B(e_1)=\begin{pmatrix}-\frac{1}{3}\\ \frac{5}{3} \\ \frac{1}{3}\end{pmatrix}\end{equation*}\begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ 0 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 1 \\ -2 \end{matrix}\end{pmatrix} \end{align*}
So we get \begin{equation*}\gamma_B(e_2)=\begin{pmatrix}\frac{4}{3}\\ \frac{1}{3} \\ \frac{2}{3}\end{pmatrix}\end{equation*}\begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}0 \\ 0 \\ 1 \end{matrix}\end{pmatrix} \end{align*}
So we get \begin{equation*}\gamma_B(e_3)=\begin{pmatrix}-1\\ \frac{2}{3} \\ -\frac{1}{3}\end{pmatrix}\end{equation*}
For the record, we can do the Gaussian elimination in one go.
That is, we can apply Gauss to $\begin{pmatrix}B\mid I\end{pmatrix}$ instead of $\begin{pmatrix}B\mid e_i\end{pmatrix}$ for each $i$ separately. 🤔
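A small sketch of that one-go elimination (assuming Python with sympy, so the fractions stay exact):

```python
from sympy import Matrix, eye

B = Matrix([[1, 1, -1],
            [1, 0,  1],
            [1, -1, 0]])              # columns are b1, b2, b3

# Row-reduce the augmented matrix (B | I) in one go.
rref_form, pivots = B.row_join(eye(3)).rref()

# The right half is B^{-1}; its columns are gamma_B(e1), gamma_B(e2), gamma_B(e3).
B_inv = rref_form[:, 3:]
print(B_inv)
print(B * B_inv == eye(3))            # True
```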

mathmari said:
That means that \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\phi_a)=\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\end{equation*}

Shouldn't it be $\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})$ instead? :unsure:
mathmari said:
And : \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*}

This looks wrong. (Shake)
I believe we have $\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\text{id})\ne\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )$ and $\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )\ne a$.
 
  • #9
Klaas van Aarsen said:
Now that you mention what $a$ is, it makes a bit more sense.

Either way, $M_B^B(\text{id})$ makes no reference to $a$ nor $\phi_a$ does it? So it can not be equal to anything that does refer to $\phi_a$. :oops:

Oh there is a typo... There it should be $\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)$ (Tmi)
 
  • #10
So to clarify :

We have $B=\{b_1, b_2, b_3\}$ with $$b_1=\begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}, b_2=\begin{pmatrix}1 \\ 0\\ -1\end{pmatrix}, b_3=\begin{pmatrix}-1 \\ 1\\ 0\end{pmatrix}$$ which is a basis of $\mathbb{R}^3$.

Then to calculate $M_E^B(\text{id})$ and $M_B^E(\text{id})$ we do the following?
\begin{equation*}\mathcal{M}_{\mathcal{E}}^{\mathcal{B}}(\text{id})=\left (\gamma_{\mathcal{E}}(b_1)\mid \gamma_{\mathcal{E}}(b_2)\mid \gamma_{\mathcal{E}}(b_3)\right )=\left (b_1\mid b_2\mid b_3\right )=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}\end{equation*}
\begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})=\left (\gamma_{\mathcal{B}}(e_1)\mid \gamma_{\mathcal{B}}(e_2)\mid \gamma_{\mathcal{B}}(e_3)\right )\end{equation*} Then suppose $a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ then to calculate $M_B^B(\phi_a)$ do we do the following?
\begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*} :unsure:
 
  • #11
mathmari said:
So to clarify :

We have $B=\{b_1, b_2, b_3\}$ with $$b_1=\begin{pmatrix}1 \\ 1\\ 1\end{pmatrix}, b_2=\begin{pmatrix}1 \\ 0\\ -1\end{pmatrix}, b_3=\begin{pmatrix}-1 \\ 1\\ 0\end{pmatrix}$$ which is a basis of $\mathbb{R}^3$.

Then to calculate $M_E^B(\text{id})$ and $M_B^E(\text{id})$ we do the following?
\begin{equation*}\mathcal{M}_{\mathcal{E}}^{\mathcal{B}}(\text{id})=\left (\gamma_{\mathcal{E}}(b_1)\mid \gamma_{\mathcal{E}}(b_2)\mid \gamma_{\mathcal{E}}(b_3)\right )=\left (b_1\mid b_2\mid b_3\right )=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}\end{equation*}
\begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{E}}(\text{id})=\left (\gamma_{\mathcal{B}}(e_1)\mid \gamma_{\mathcal{B}}(e_2)\mid \gamma_{\mathcal{B}}(e_3)\right )\end{equation*}

What are $\gamma_{\mathcal{E}}$ and $\gamma_{\mathcal{B}}$? 🤔

mathmari said:
Then suppose $a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}$ then to calculate $M_B^B(\phi_a)$ do we do the following?
\begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)=\left (\phi_a(b_1)\mid \phi_a(b_2)\mid \phi_a(b_3)\right )=a=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\end{equation*}

Can it be that we have $\mathcal{M}_{\mathcal{E}}^{\mathcal{E}}(\phi_a)=a$ instead? 🤔
And $\mathcal{M}_{\mathcal{E}}^{\mathcal{E}}(\phi_a)=\left (\gamma_{\mathcal{E}}(\phi_a(e_1))\mid \gamma_{\mathcal{E}}(\phi_a(e_2))\mid \gamma_{\mathcal{E}}(\phi_a(e_3))\right )$? 🤔

Perhaps we can come back to this after we have clarified what $\gamma_{\mathcal{E}}$ and $\gamma_{\mathcal{B}}$ are. o_O
 
  • #12
$\gamma_B(v)$ is the vector of coefficients when we write the vector $v$ as a linear combination of the elements of the basis $B$.

For example $\gamma_B(e_i)=\begin{pmatrix}c_1\\ c_2 \\ c_3\end{pmatrix}$ with \begin{equation*}e_i=c_1b_1+c_2b_2+c_3b_3=c_1\begin{pmatrix}1 \\ 1 \\ 1\end{pmatrix}+c_2\begin{pmatrix}1 \\ 0 \\ -1\end{pmatrix}+c_3\begin{pmatrix}-1 \\ 1 \\ 0\end{pmatrix}\end{equation*}
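In other words (a small sketch assuming Python with numpy), $\gamma_B(v)$ is the solution of the linear system $Bc=v$, where the basis vectors are the columns of $B$:

```python
import numpy as np

# The basis of this thread: b1, b2, b3 as columns.
B = np.column_stack(([1, 1, 1], [1, 0, -1], [-1, 1, 0]))

def gamma_B(v):
    # Coefficients c with v = c1*b1 + c2*b2 + c3*b3, i.e. the solution of B c = v.
    return np.linalg.solve(B, v)

v = np.array([1.0, 0.0, 0.0])         # e1, as an example
c = gamma_B(v)
print(np.allclose(B @ c, v))          # True: the coefficients reproduce v
```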
 
  • #13
mathmari said:
$\gamma_B(v)$ is the vector of coefficients when we write the vector $v$ as a linear combination of the elements of the basis $B$.

For example $\gamma_B(e_i)=\begin{pmatrix}c_1\\ c_2 \\ c_3\end{pmatrix}$ with \begin{equation*}e_i=c_1b_1+c_2b_2+c_3b_3=c_1\begin{pmatrix}1 \\ 1 \\ 1\end{pmatrix}+c_2\begin{pmatrix}1 \\ 0 \\ -1\end{pmatrix}+c_3\begin{pmatrix}-1 \\ 1 \\ 0\end{pmatrix}\end{equation*}
Okay! :geek:

Let $V$ be our abstract vector space.
Let $\mathbb{R}^3_E$ be the space of column vectors with respect to basis $E$.
Let $\mathbb{R}^3_B$ be the space of column vectors with respect to basis $B$.
And let's assume that $\phi_a$ is the map $V\to V$ such that it corresponds to matrix multiplication with $a$ with respect to the standard basis $E$.

Then here's a diagram that shows the relevant relationships.

[Diagram: a commutative diagram relating $V$, $\mathbb{R}^3_E$, and $\mathbb{R}^3_B$ via $\phi_a$, the matrix $a$, and the change-of-basis maps $M^B_E(\text{id})$ and $M^E_B(\text{id})$.]


In particular we can deduce from it that:
$$M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$$
🤔
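A numerical sketch of this relation with the concrete $B$ and $a$ from the earlier posts (assuming numpy):

```python
import numpy as np

# (b1 | b2 | b3): converts B-coordinates into E-coordinates.
B = np.column_stack(([1, 1, 1], [1, 0, -1], [-1, 1, 0]))
a = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

# In the formula above, M^E_B(id) converts E-coordinates to B-coordinates, i.e. it is
# the inverse of (b1|b2|b3), so the conjugation works out to B^{-1} a B.
M_BB = np.linalg.inv(B) @ a @ B

# Cross-check: column i of M_B^B(phi_a) should be gamma_B(phi_a(b_i)).
cols = [np.linalg.solve(B, a @ B[:, i]) for i in range(3)]
print(np.allclose(M_BB, np.column_stack(cols)))   # True
```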
 
  • #14
Klaas van Aarsen said:
Okay! :geek:

Let $V$ be our abstract vector space.
Let $\mathbb{R}^3_E$ be the space of column vectors with respect to basis $E$.
Let $\mathbb{R}^3_B$ be the space of column vectors with respect to basis $B$.
And let's assume that $\phi_a$ is the map $V\to V$ such that it corresponds to matrix multiplication with $a$ with respect to the standard basis $E$.

Then here's a diagram that shows the relevant relationships.

[Diagram from the previous post.]

In particular we can deduce from it that:
$$M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$$
🤔

Ahh ok! And are the matrices $M^E_B(\text{id}) $ and $ M^E_B(\text{id})^{-1}$ that I calculated above correct? :unsure:
 
  • #15
mathmari said:
Ahh ok! And are the matrices $M^E_B(\text{id}) $ and $ M^E_B(\text{id})^{-1}$ that I calculated above correct?
They look correct to me. (Nod)
 
  • #16
Klaas van Aarsen said:
They look correct to me. (Nod)

Great!

As for my initial post... what is then $M_{B_i}$ ? Is it like $M_{B_i}^{B_i}$ ? :unsure:
 
  • #17
mathmari said:
As for my initial post... what is then $M_{B_i}$ ? Is it like $M_{B_i}^{B_i}$ ?
That is what I'd expect yes. 🤔
 
  • #18
Klaas van Aarsen said:
That is what I'd expect yes. 🤔

Ok.. So with $M_B(\phi) = (b_1\mid b_2)^{-1} (\phi(b_1)\mid \phi(b_2))$ we have the following :

Let $b_1=\begin{pmatrix}x_1\\ y_1 \end{pmatrix}$ and $b_2=\begin{pmatrix}x_2\\ y_2 \end{pmatrix}$.

With $\phi_1$ :
\begin{align*}M_B(\phi_1)&=\begin{pmatrix}x_1 & x_2 \\ y_1 & y_2\end{pmatrix}^{-1}\cdot \begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix}=\frac{1}{x_1y_2-x_2y_1}\begin{pmatrix}y_2 & -x_2 \\ -y_1 & x_1\end{pmatrix}\cdot \begin{pmatrix}x_1+y_1 & x_2+y_2 \\ x_1-y_1 & x_2-y_2\end{pmatrix} \\ & =\frac{1}{x_1y_2-x_2y_1}\begin{pmatrix}x_1y_2+y_1y_2 -x_1x_2+x_2y_1&x_2y_2+y_2^2-x_2^2+x_2y_2 \\ -x_1y_1-y_1^2+x_1^2-x_1y_1 & -x_2y_1-y_1y_2+x_1x_2-x_1y_2\end{pmatrix} \\ & =\frac{1}{x_1y_2-x_2y_1}\begin{pmatrix}x_1y_2+y_1y_2 -x_1x_2+x_2y_1& 2x_2y_2+y_2^2-x_2^2 \\ -2x_1y_1-y_1^2+x_1^2 & -x_2y_1-y_1y_2+x_1x_2-x_1y_2\end{pmatrix}\end{align*}
Now we have to solve a system such that this matrix is upper triangular.

Is that way correct? Or is there another approach? :unsure:
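A symbolic cross-check of that product (a sketch assuming Python with sympy):

```python
from sympy import symbols, Matrix, simplify

x1, y1, x2, y2 = symbols('x1 y1 x2 y2')

basis = Matrix([[x1, x2], [y1, y2]])                        # (b1 | b2)
images = Matrix([[x1 + y1, x2 + y2], [x1 - y1, x2 - y2]])   # (phi_1(b1) | phi_1(b2))

M = (basis.inv() * images).applyfunc(simplify)
print(M[1, 0])    # the lower-left entry, which must vanish for an upper triangular form
```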
 
  • #19
Yes, that looks correct. (Nod)
However, we do not have to solve the entire system. It suffices if the bottom left matrix entry is $0$.
That is, we only need the bottom-left entry $-2x_1 y_1 -y_1^2+x_1^2$, and we need to find $x_1$ and $y_1$ such that it is $0$. 🤔

Another approach.
Let $U=(u_{ij})=M_B(\phi)$ be the desired upper triangular matrix.
Let $b_1$ be the first vector in the desired basis.
Then $U\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}u_{11}\\0\end{pmatrix} = u_{11}\begin{pmatrix}1\\0\end{pmatrix}$.
In other words, $u_{11}$ must be an eigenvalue of $\phi$ and $b_1$ must be the corresponding eigenvector. 🤔
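A small numerical illustration of that idea for $\phi_1$ (assuming numpy): take a real eigenvector as $b_1$, extend it with any independent $b_2$, and the lower-left entry of $B^{-1}AB$ comes out $0$.

```python
import numpy as np

A = np.array([[1, 1], [1, -1]])              # standard matrix of phi_1

eigvals, eigvecs = np.linalg.eig(A)
b1 = eigvecs[:, 0]                           # eigenvector for the (real) eigenvalue eigvals[0]
b2 = np.array([1.0, 0.0])                    # any vector not parallel to b1

B = np.column_stack((b1, b2))
U = np.linalg.inv(B) @ A @ B
print(np.isclose(U[1, 0], 0.0))              # True: the first column is (u11, 0)
print(np.isclose(U[0, 0], eigvals[0]))       # True: u11 is the eigenvalue
```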
 
  • #20
Klaas van Aarsen said:
Another approach.
Let $U=(u_{ij})=M_B(\phi)$ be the desired upper triangular matrix.
Let $b_1$ be the first vector in the desired basis.
Then $U\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}u_{11}\\0\end{pmatrix} = u_{11}\begin{pmatrix}1\\0\end{pmatrix}$.
In other words, $u_{11}$ must be an eigenvalue of $\phi$ and $b_1$ must be the corresponding eigenvector. 🤔

So you take as $b_1$ the vector $\begin{pmatrix}1\\0\end{pmatrix} $ ?
 
  • #21
mathmari said:
So you take as $b_1$ the vector $\begin{pmatrix}1\\0\end{pmatrix} $ ?
Not exactly. (Shake)

We consider $b_1$ an as yet unknown vector.
The representation of that vector with respect to the basis $B$ is $\gamma_B(b_1)=\begin{pmatrix}1\\0\end{pmatrix}$.
That is, $1\cdot b_1 + 0\cdot b_2$. (Sweating)
 
  • #22
Klaas van Aarsen said:
They look correct to me. (Nod)

Using the equality $M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$ we get
\begin{align*}M^B_B(\phi_a)&=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1} \\ & =\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}^{-1}\\ & =\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & -\frac{2}{3} \\ -\frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix} \\ & = \begin{pmatrix}\frac{2}{9} & -\frac{1}{9} & \frac{11}{9} \\- \frac{2}{9} & \frac{13}{9} & -\frac{8}{9} \\ 0 & \frac{1}{3} & \frac{1}{3}\end{pmatrix} \end{align*}

This must be equal to \begin{equation*}\mathcal{M}_{\mathcal{B}}^{\mathcal{B}}(\phi_a)=\left (\gamma_{\mathcal{B}}\left (\phi_a(b_1)\right )\mid \gamma_{\mathcal{B}}\left (\phi_a(b_2)\right )\mid \gamma_{\mathcal{B}}\left (\phi_a(b_3)\right )\right )\end{equation*} or not? I got another result:

We have that \begin{align*}&\phi_a(b_1)=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}=\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \\ & \phi_a(b_2)=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}=\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \\ & \phi_a(b_3)=\begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}=\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \end{align*}

\begin{align*}\begin{pmatrix} \left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 1 & 1 & -1 \\ 1 & 0 & 1\end{matrix}\end{pmatrix} \ & \overset{Z_2:Z_2-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 1 & -1 & 0\end{matrix}
\end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 0 & 2 & -1 \\ 1 & 0 & 1\end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-Z_1}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & -2 & 1\end{matrix}
\end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 0 & 2 & -1 \\ 0 & 1 & 1\end{matrix}\end{pmatrix} \ \\ & \overset{Z_3:Z_3-2\cdot Z_2}{\longrightarrow } \ \begin{pmatrix}\left.\begin{matrix}
\begin{matrix}1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -3\end{matrix}
\end{matrix}\right|\begin{matrix}1 & -1 & 0 \\ 0 & 2 & -1 \\ 0 & -3 & 3\end{matrix}\end{pmatrix} \end{align*}

For $\gamma_B(\phi_a(b_1))$ we get the equations \begin{align*}c_1+c_2-c_3&= 1 \\ -c_2+2c_3&=0 \\ -3c_3&= 0\end{align*} So we get $\gamma_B(\phi_a(b_1))=\begin{pmatrix}1\\ 0 \\ 0\end{pmatrix}$. For $\gamma_B(\phi_a(b_2))$ we get the equations \begin{align*}c_1+c_2-c_3&=-1 \\ -c_2+2c_3&=\ 2 \\ -3c_3&=-3\end{align*} So we get $\gamma_B(\phi_a(b_2))=\begin{pmatrix}0\\ 1 \\ 1\end{pmatrix}$. For $\gamma_B(\phi_a(b_3))$ we get the equations \begin{align*}c_1+c_2-c_3&= \ 0 \\ -c_2+2c_3&=-1 \\ -3c_3&= \ 3\end{align*} So we get $\gamma_B(\phi_a(b_3))=\begin{pmatrix}4\\ -4 \\ -1\end{pmatrix}$. Have I done something wrong or have I understood that wrong? :unsure:
 
  • #23
Klaas van Aarsen said:
Not exactly. (Shake)

We consider $b_1$ an as yet unknown vector.
The representation of that vector with respect to the basis $B$ is $\gamma_B(b_1)=\begin{pmatrix}1\\0\end{pmatrix}$.
That is, $1\cdot b_1 + 0\cdot b_2$. (Sweating)

So $\mathcal{M}_{\mathcal{B}_1}(\phi_1)$ is a diagonal matrix and so also an upper triangular matrix if it is of the form $\begin{pmatrix}u_{11} & 0 \\ 0 & u_{22}\end{pmatrix}$.

Then we get \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)\gamma_{\mathcal{B}_1}(b_1)=\begin{pmatrix}u_{11} \\ 0\end{pmatrix}=u_{11}\begin{pmatrix}1 \\ 0\end{pmatrix}=u_{11}\gamma_{\mathcal{B}_1}(b_1)\end{equation*}
So $u_{11}$ is an eigenvalue of $\phi$ and $b_1$ the corresponding eigenvector.

We have that \begin{equation*}\phi \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*}
So $u_{11}=\sqrt{2}$ and $b_1=\begin{pmatrix}1+\sqrt{2} \\ 1\end{pmatrix}$.

We also have that \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_1)\gamma_{\mathcal{B}_1}(b_2)=\begin{pmatrix}0 \\ u_{22}\end{pmatrix}=u_{22}\begin{pmatrix}0 \\ 1\end{pmatrix}=u_{22}\gamma_{\mathcal{B}_1}(b_2)\end{equation*}
So $u_{22}$ is an eigenvalue of $\phi$ and $b_2$ the corresponding eigenvector.
Then $u_{22}=-\sqrt{2}$ and $b_2=\begin{pmatrix}1-\sqrt{2} \\ 1\end{pmatrix}$. Is that correct? Or can we not consider both cases (upper tridiagonal and diagonal) together? :unsure:
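As a quick check of these eigenpairs:
\begin{equation*}\begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}\begin{pmatrix}1+\sqrt{2} \\ 1\end{pmatrix}=\begin{pmatrix}2+\sqrt{2} \\ \sqrt{2}\end{pmatrix}=\sqrt{2}\begin{pmatrix}1+\sqrt{2} \\ 1\end{pmatrix}, \qquad \begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}\begin{pmatrix}1-\sqrt{2} \\ 1\end{pmatrix}=\begin{pmatrix}2-\sqrt{2} \\ -\sqrt{2}\end{pmatrix}=-\sqrt{2}\begin{pmatrix}1-\sqrt{2} \\ 1\end{pmatrix}.\end{equation*}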
 
  • #24
For the map $\phi_2$ we cannot do it like that:
\begin{equation*}\phi_2 \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*}

There are only complex eigenvalues :unsure:
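Indeed, the characteristic polynomial of this matrix has no real roots:
\begin{equation*}\det\begin{pmatrix}-\lambda & -1 \\ 1 & -\lambda\end{pmatrix}=\lambda^2+1=0 \quad \Longrightarrow \quad \lambda=\pm i.\end{equation*}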
 
  • #25
For the last map we have:
\begin{equation*}\phi_3 \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*}
So there is only one eigenvalue $u_{11}=0$ and $b_1=\begin{pmatrix}1 \\ 0\end{pmatrix}$.

:unsure:
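Here the characteristic polynomial is
\begin{equation*}\det\begin{pmatrix}-\lambda & 1 \\ 0 & -\lambda\end{pmatrix}=\lambda^2=0,\end{equation*}
so $\lambda=0$ is the only eigenvalue, and its eigenspace is spanned by $\begin{pmatrix}1 \\ 0\end{pmatrix}$.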
 
  • #26
mathmari said:
Using the equality $M^B_B(\phi_a)=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1}$ we get
\begin{align*}M^B_B(\phi_a)&=M^E_B(\text{id}) \cdot a \cdot M^E_B(\text{id})^{-1} \\ & =\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}^{-1}\\ & =\begin{pmatrix}-\frac{1}{3} & \frac{4}{3} & -1 \\ \frac{5}{3} & \frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & -\frac{2}{3} \\ -\frac{1}{3} & \frac{2}{3} & -\frac{1}{3}\end{pmatrix} \\ & = \begin{pmatrix}\frac{2}{9} & -\frac{1}{9} & \frac{11}{9} \\- \frac{2}{9} & \frac{13}{9} & -\frac{8}{9} \\ 0 & \frac{1}{3} & \frac{1}{3}\end{pmatrix} \end{align*}

It should be:
$$M^B_B(\phi_a) = M^E_B(\text{id}) \cdot a\cdot M^E_B(\text{id})^{-1} = M^B_E(\text{id})^{-1} \cdot a\cdot M^B_E(\text{id})
=\begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}^{-1}\cdot \begin{pmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\cdot \begin{pmatrix}1 & 1 & -1 \\ 1 & 0 & 1 \\ 1 & -1 & 0\end{pmatrix}
$$
🤔

For the record, we generally have for a vector $v$:
$$\gamma_B(\phi_a(v)) =M^B_B(\phi_a)\cdot \gamma_B(v) = M^E_B(\text{id}) \cdot a\cdot M^B_E(\text{id})\cdot \gamma_B(v)$$
That is, we start with the representation of $v$ with respect to $B$, which is $\gamma_B(v)$.
We convert it into a representation with respect to $E$ using the matrix $M^B_E(\text{id})$.
Now we can apply the matrix $a$.
The result is a vector with respect to $E$.
So we convert it back into a representation with respect to $B$ using the matrix $M^E_B(\text{id})$. 🧐
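A step-by-step sketch of exactly this composition (assuming numpy; the vector is an arbitrary example given in B-coordinates):

```python
import numpy as np

B = np.column_stack(([1, 1, 1], [1, 0, -1], [-1, 1, 0]))   # b1, b2, b3 as columns
a = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

gamma_B_v = np.array([2.0, -1.0, 3.0])      # an arbitrary example vector, in B-coordinates

v_E = B @ gamma_B_v                         # convert to E-coordinates
w_E = a @ v_E                               # apply the matrix a in the standard basis
w_B = np.linalg.solve(B, w_E)               # convert the result back to B-coordinates

# The same thing in a single matrix: M_B^B(phi_a) = B^{-1} a B.
M_BB = np.linalg.inv(B) @ a @ B
print(np.allclose(w_B, M_BB @ gamma_B_v))   # True
```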

mathmari said:
So $\mathcal{M}_{\mathcal{B}_1}(\phi_1)$ is a diagonal matrix and so also an upper triangular matrix if it is of the form $\begin{pmatrix}u_{11} & 0 \\ 0 & u_{22}\end{pmatrix}$.

Is that correct? Or can we not consider both cases (upper tridiagonal and diagonal) together?
Correct. (Nod)
And yes, we can consider both cases together.

mathmari said:
For the map $\phi_2$ we cannot do it like that:
\begin{equation*}\phi_2 \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*}

There are only complex eigenvalues
Indeed.
So what can we conclude? 🤔

mathmari said:
For the last map we have:
\begin{equation*}\phi_3 \begin{pmatrix}x \\ y\end{pmatrix}=\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}\end{equation*}
So there is only one eigenvalue $u_{11}=0$ and $b_1=\begin{pmatrix}1 \\ 0\end{pmatrix}$.
What does that mean for the question in the OP? 🤔
 
  • #27
Klaas van Aarsen said:
Correct. (Nod)
And yes, we can consider both cases together.

So we have a basis for both cases, for an upper tridiagonal matrix and a diagonal matrix.
Klaas van Aarsen said:
Indeed.
So what can we conclude? 🤔

That means that there is no basis so that we get a diagonal matrix, right? But what about a tridiagonal matrix? :unsure:

Klaas van Aarsen said:
What does that mean for the question in the OP? 🤔

That means that there is no basis so that we get a diagonal matrix, right? But what about a tridiagonal matrix? :unsure:
 
  • #28
mathmari said:
That means that there is no basis so that we get a diagonal matrix, right? But what about a tridiagonal matrix?

Did you mean upper triangular instead of tridiagonal? 🤔

We needed a real eigenvalue for an upper triangular matrix, didn't we?
Or alternatively, we can find a solution for $-2x_1 y_1 -y_1^2+x_1^2=0$.
We can solve it as a quadratic equation with respect to $x_1$ can't we? 🤔
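Solving that quadratic in $x_1$ explicitly:
\begin{equation*}x_1^2-2y_1x_1-y_1^2=0 \quad \Longrightarrow \quad x_1=\frac{2y_1\pm\sqrt{4y_1^2+4y_1^2}}{2}=\left(1\pm\sqrt{2}\right)y_1,\end{equation*}
so with $y_1=1$ we get $x_1=1\pm\sqrt{2}$, matching the eigenvectors $\begin{pmatrix}1\pm\sqrt{2} \\ 1\end{pmatrix}$ found earlier.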
mathmari said:
That means that there is no basis so that we get a diagonal matrix, right? But what about a tridiagonal matrix?
Upper triangular instead of tridiagonal?

We found that we needed a real eigenvalue and its eigenvector to ensure the lower left matrix entry becomes $0$.
What else do we need? 🤔
 
  • #29
Klaas van Aarsen said:
Did you mean upper triangular instead of tridiagonal? 🤔

We needed a real eigenvalue for an upper triangular matrix, didn't we?
Or alternatively, we can find a solution for $-2x_1 y_1 -y_1^2+x_1^2=0$.
We can solve it as a quadratic equation with respect to $x_1$ can't we? 🤔

Oh yes, I mean upper triangular matrix not tridiagonal.

Since there are only complex roots, there is no basis such that the matrix $M$ is a diagonal or an upper triangular matrix, right?
Klaas van Aarsen said:
We found that we needed a real eigenvalue and its eigenvector to ensure the lower left matrix entry becomes $0$.
What else do we need? 🤔

So can we consider any other vector as $b_2$ that is linearly independent of $b_1$, so that the matrix $M$ has the desired form? :unsure:
 
  • #30
mathmari said:
Since there are only complex roots, there is no basis such that the matrix $M$ is a diagonal or an upper triangular matrix, right?

So can we consider any other vector as $b_2$ that is linearly independent of $b_1$, so that the matrix $M$ has the desired form?
Yes and yes. (Nod)

And we should verify that the matrix actually does get the desired form. (Sweating)
 
  • #31
For the last map :

Let $b_2=\begin{pmatrix}0 \\ 1\end{pmatrix}$, then $\gamma_{\mathcal{B}_1}(\phi_3(b_2))=\gamma_{\mathcal{B}_1}\begin{pmatrix}1 \\ 0\end{pmatrix}=\begin{pmatrix}1 \\ 0\end{pmatrix}$.
So we get \begin{equation*}\mathcal{M}_{\mathcal{B}_1}(\phi_3)=\begin{pmatrix}1 & 1 \\ 0 & 0\end{pmatrix}\end{equation*} which is an upper triangular matrix.

There is no basis such that $\mathcal{M}_{\mathcal{B}_1}(\phi_3)$ is a diagonal matrix, because the element at the second column and first row has to be non zero to get linearly independent vectors, right?


:unsure:
 
  • #32
mathmari said:
\begin{align*}\phi_3:\mathbb{R}^2\rightarrow \mathbb{R}^2, \ \begin{pmatrix}x\\ y\end{pmatrix} \mapsto \begin{pmatrix}y\\ 0\end{pmatrix} \end{align*}
1. Give (if possible) for each $i\in \{1,2,3\}$ a basis $B_i$ of $\mathbb{R}^2$ such that $M_{B_i}(\phi_i)$ is an upper triangular matrix.
2. Give (if possible) for each $i\in \{1,2,3\}$ a basis $B_i$ of $\mathbb{R}^2$ such that $M_{B_i}(\phi_i)$ is a diagonal matrix.
mathmari said:
Let $b_2=\begin{pmatrix}0 \\ 1\end{pmatrix}$, then $\gamma_{\mathcal{B}_3}(\phi_3(b_2))=\gamma_{\mathcal{B}_3}\begin{pmatrix}1 \\ 0\end{pmatrix}=\begin{pmatrix}1 \\ 0\end{pmatrix}$.
So we get \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}1 & 1 \\ 0 & 0\end{pmatrix}\end{equation*} which is an upper triangular matrix.
I've taken the liberty to change $\mathcal{B}_1$ into $\mathcal{B}_3$ in the above quote.
That was what was intended wasn't it? :unsure:

The eigenvalue was $0$ wasn't it?
Shouldn't we have $u_{11}=0$ then?
I don't think we have the correct $\mathcal{M}_{\mathcal{B}_3}(\phi_3)$. (Shake)

mathmari said:
There is no basis such that $\mathcal{M}_{\mathcal{B}_3}(\phi_3)$ is a diagonal matrix, because the element at the second column and first row has to be non zero to get linearly independent vectors, right?
Correct. (Nod)
 
  • #33
Klaas van Aarsen said:
The eigenvalue was $0$ wasn't it?
Shouldn't we have $u_{11}=0$ then?
I don't think we have the correct $\mathcal{M}_{\mathcal{B}_3}(\phi_3)$. (Shake)

Ahh yes, we have \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}\end{equation*} :unsure:
 
  • #34
mathmari said:
Ahh yes, we have \begin{equation*}\mathcal{M}_{\mathcal{B}_3}(\phi_3)=\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}\end{equation*}
Right. (Nod)
 
  • #35
Klaas van Aarsen said:
Right. (Nod)

Thank you ! 👌
 

1. What is a matrix?

A matrix is a rectangular array of numbers or symbols arranged in rows and columns. It is often used to represent data or perform mathematical operations.

2. What is a basis?

A basis is a set of linearly independent vectors such that every vector in a given vector space can be written as a linear combination of them. The number of vectors in a basis is the dimension of the vector space.

3. How do you find a basis from a matrix?

To find a basis for the row space of a matrix, you can row-reduce the matrix to its reduced row echelon form; the non-zero rows form a basis of the row space. The pivot columns of the original matrix form a basis of the column space.

4. Why is it important to have a basis?

A basis is important because it allows us to represent every vector in the space as a linear combination of the basis vectors, i.e. by its coordinates. This makes it easier to perform calculations, such as writing down the matrix of a linear map with respect to that basis.

5. Can a vector space have more than one basis?

Yes, a vector space generally has many different bases. However, every basis of a given vector space contains the same number of vectors, known as the dimension of the vector space.
