# Matrix representation

## Main Question or Discussion Point

I'm given the linear transformation $L: \mathbb{R}^4 \rightarrow \mathbb{R}^3$, where

$$L(\mathbf x) = A\mathbf x$$

and

$$A = \left[ \begin{array}{cccc} 1&1&-4&-1 \\ -2&3&1&0 \\ 0&1&-2&2 \end{array} \right]$$

The first part of the problem asks me, keeping the standard basis $E$ for $\mathbb{R}^4$, to determine a new basis $F = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3]$ for $\mathbb{R}^3$ such that the matrix representing $L$ with respect to $E$ and $F$ is the reduced row echelon form of the matrix $A$, which I determined to be

$$U = \left[ \begin{array}{cccc} 1&0&0&-11 \\ 0&1&0&-6 \\ 0&0&1&-4 \end{array} \right]$$

I determined the basis by using the fact that if $B = (\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3)$, then

$$U = B^{-1}\left( L(\mathbf{e}_1), L(\mathbf{e}_2), L(\mathbf{e}_3), L(\mathbf{e}_4) \right) = B^{-1}A$$

So $B^{-1}$ must be the product of the elementary matrices corresponding to the row operations performed on $A$, which can easily be found via

$$\left(A|I\right) \rightarrow \left(U|B^{-1}\right)$$

since $B^{-1}\left(A|I\right) = \left(B^{-1}A|B^{-1}\right) = \left(U|B^{-1}\right)$. From this I found $B$ to be

$$B =\left[ \begin{array}{ccc} 1&1&-4 \\ -2&3&1 \\ 0&1&-2 \end{array} \right]$$
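The steps above can be checked with a short SymPy sketch (a sanity check, not part of the problem): row-reduce the augmented matrix $(A|I)$, read off $U$ and $B^{-1}$ from the two blocks, and invert to recover $B$.

```python
from sympy import Matrix, eye

A = Matrix([[1, 1, -4, -1],
            [-2, 3, 1, 0],
            [0, 1, -2, 2]])

# Row-reduce the augmented matrix (A | I)
R, _ = A.row_join(eye(3)).rref()

U = R[:, :4]      # reduced row echelon form of A
B_inv = R[:, 4:]  # the product of the elementary row operations
B = B_inv.inv()   # the new basis vectors b1, b2, b3, as columns
print(U)
print(B)
```

Since the first three columns of $U$ form the identity, $A = BU$ forces the columns of $B$ to be exactly the first three (pivot) columns of $A$, which is what the code confirms.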

The next question asks me to explain why, for an arbitrary $m\times n$ matrix $A$, it is possible to determine a new basis $F$ for $\mathbb{R}^m$ (where $L: \mathbb{R}^n \rightarrow \mathbb{R}^m$) while still keeping the standard basis for $\mathbb{R}^n$, so that the linear transformation corresponding to $A$ in the new basis is represented by the reduced row echelon form of $A$, just as in the first part.

I'm sorry if the question is poorly expressed, but that's how it's stated in the problem.
The way I see it is that if we have the linear transformation $L$ given by $L(\mathbf{x}) = A\mathbf{x}$, then by using the reduced row echelon form of $A$, call it $U$, the transformation can be expressed in the new basis as

$$[L(\mathbf{x})]_F = U[\mathbf{x}]_E$$

where $E$ is still the standard basis.

I don't know exactly how to explain this. It doesn't look like simply repeating the steps from the first question would be a sufficient explanation. A hint for how to explain it would really help.


## Answers and Replies
Regarding your question about an "explanation", you might think of this problem in terms of equivalence relations on matrices. Here's some starting material.

1) Left associate (your case): matrices $A$ and $B$ (both $m\times n$) are left associates iff there exists a nonsingular $m\times m$ matrix $Q$ such that $B = Q^{-1}A$. Multiplication by $Q^{-1}$ corresponds to performing a sequence of elementary row operations. If $A$ represents a linear transformation $T: U \rightarrow V$ relative to a basis $\mathcal{A}$ in $U$ and a basis $\mathcal{B}$ in $V$, then the matrix $B$ represents $T$ relative to $\mathcal{A}$ and a new basis in $V$. There is an accompanying theorem; it's really all you need in the way of explanation.
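As a quick illustration of 1) (a sketch; the invertible matrix $Q$ below is an arbitrary example, not from the problem): left associates are exactly the row-equivalent matrices, so any two of them share the same reduced row echelon form.

```python
from sympy import Matrix

A = Matrix([[1, 1, -4, -1],
            [-2, 3, 1, 0],
            [0, 1, -2, 2]])

# Any invertible Q gives a left associate B = Q^{-1} A
Q = Matrix([[1, 0, 0],
            [2, 1, 0],
            [0, 0, 3]])
B = Q.inv() * A

# Left associates are row equivalent, so they share the same RREF
print(A.rref()[0] == B.rref()[0])  # True
```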

Here are some other equivalence relations you might explore.

2) Right Associate: You can imagine how this one is defined.

3) Equivalent: To be more consistent with the above naming convention, some call it associate (e.g., E. Nering). Note that "associate" is definitely not a standard term for this relation, and I suspect "left" and "right" in 1) and 2) aren't either.

4) Similar: Perhaps the most interesting of the four.