# Change of basis

1. Sep 22, 2015

### squenshl

1. The problem statement, all variables and given/known data
Let $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a linear function. Suppose that with the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$ the function $f$ is represented by the matrix $A$. Let $b_1, b_2, \ldots, b_n$ be a new set of basis vectors for $\mathbb{R}^n$ and $c_1, c_2, \ldots, c_m$ be a new set of
basis vectors for $\mathbb{R}^m$. What is the matrix that represents $f$ when the linear spaces are described in terms of the new basis vectors?

2. Relevant equations

3. The attempt at a solution
Suppose $f : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is represented by the matrix $A$ when we describe $\mathbb{R}^n$ in terms of the standard basis vectors $e_1, e_2, \ldots, e_n$ and that we have a new set of basis vectors $b_1, b_2, \ldots, b_n$. Then when $\mathbb{R}^n$ is described in terms of these new basis vectors the linear function $f$ will be represented by the matrix $B^{-1}AB$.

2. Sep 22, 2015

### MrAnchovy

What about the set of basis vectors $c_1, c_2, \ldots, c_m$ for $\mathbb{R}^m$?

3. Sep 22, 2015

### squenshl

Here's an example with $n=m=2$. Suppose that with the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$ the function $f$ is represented by the matrix $\begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}$. Let $\begin{bmatrix} 3 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ be a new set of vectors for $\mathbb{R}^2$. The matrix that represents $f$ when $\mathbb{R}^2$ is described in terms of the new basis vectors is $$B^{-1}AB = \begin{bmatrix} 1 & -1 \\ -2 & 3 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 4 & 1 \\ -1 & 1 \end{bmatrix}.$$
My question is: how exactly do I incorporate the matrix $C$ into this?
It looks quite simple but I just can't see it!
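The arithmetic above can be checked numerically. Here is a minimal plain-Python sketch (no external libraries, 2x2 helpers written out by hand) that redoes the $B^{-1}AB$ computation for this example:

```python
# Check of the 2x2 example above: A is the matrix of f in the standard
# basis, and the columns of B are the new basis vectors (3,2) and (1,1).

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Invert a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[3, 1], [1, 2]]
B = [[3, 1], [2, 1]]

print(matmul(inv2(B), matmul(A, B)))  # [[4.0, 1.0], [-1.0, 1.0]]
```

This agrees with the hand computation, so the single-basis case checks out.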

4. Sep 22, 2015

### squenshl

Maybe $C^{-1}B^{-1}ABC$.

5. Sep 23, 2015

### Fredrik

Staff Emeritus
Suppose that $T:X\to Y$ is linear, that $U=(u_1,\dots,u_n)$ is an ordered basis for $X$, and that $V=(v_1,\dots,v_m)$ is an ordered basis for $Y$. Your book must have given you a formula that associates a matrix with the linear transformation T (and the two ordered bases U and V). Something like this:
$$\left([T]_{V,U}\right)_{ij}=(Tu_j)_i.$$ The right-hand side denotes the $i$th component of $Tu_j$ with respect to $V$. If $X=Y$, you can use this formula to determine a relationship between $[T]_{V,V}$ and $[T]_{U,U}$. Edit: That last sentence is irrelevant to this problem, because it's asking for something else. See my next post below.
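This definition can be turned directly into a short computation. Below is a plain-Python sketch for $m=n=2$ (the helper names are my own, purely illustrative): the $j$th column of $[T]_{V,U}$ is the coordinate vector of $T(u_j)$ with respect to $V$, which you get by writing $V$'s vectors as the columns of a matrix and applying its inverse.

```python
def inv2(X):
    """Invert a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(X, v):
    """Apply a 2x2 matrix to a length-2 vector."""
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

def matrix_of(T, U, V):
    """Matrix [T]_{V,U} of a linear map T for ordered bases U (domain), V (codomain)."""
    Vmat = [[V[j][i] for j in range(2)] for i in range(2)]  # basis vectors of V as columns
    Vinv = inv2(Vmat)
    cols = [matvec(Vinv, T(u)) for u in U]                  # coordinates of T(u_j) w.r.t. V
    return [[cols[j][i] for j in range(2)] for i in range(2)]

# With T(x) = Ax for A from post #3 and U = V = ((3,2), (1,1)),
# this recovers the B^{-1}AB result computed there.
A = [[3, 1], [1, 2]]
basis = [[3, 2], [1, 1]]
T = lambda x: matvec(A, x)
print(matrix_of(T, basis, basis))  # [[4.0, 1.0], [-1.0, 1.0]]
```

Note that nothing in `matrix_of` requires the domain and codomain bases to be the same, which is exactly why the definition handles the two-basis problem in this thread.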

Last edited: Sep 28, 2015
6. Sep 27, 2015

### squenshl

The only formula given is $B^{-1}AB$.
I'm completely lost on this.

7. Sep 28, 2015

### nuuskur

I would like to point out that $f$ is not itself a matrix: $f$ is a function. Rather, $f$ is *represented* by a matrix: there exists a matrix $A$ such that $f(x) = Ax$.

The transformation matrix can be written in terms of column vectors: the first column is the image of the first initial basis vector, the second column is the image of the second initial basis vector, and so on, with each image expressed in the target basis. This is by definition.

It turns out that the matrices representing the map with respect to the initial basis and the new basis are similar; that is the meaning of $B^{-1}AB$, where $B$ is regular, i.e. invertible.

Last edited: Sep 28, 2015
8. Sep 28, 2015

### Fredrik

Staff Emeritus
Given where? There's no B in the problem statement. How do you define your B?

In post #1, you're letting us know that you know that if two matrices represent the same linear operator in different bases, then they are similar (i.e. if one of them is A, then there's a matrix B such that the other one is $B^{-1}AB$). But if the book hasn't proved a theorem that gives you a formula for the matrix B, then you will have to work directly with the definitions in post #5, and the definition of matrix multiplication: $(AB)_{ij} =\sum_k A_{ik}B_{kj}$. Edit: Also, since the problem involves two changes of ordered bases, you should expect two matrices to show up in the answer.

This isn't an easy problem, because it requires you to understand the definitions well enough to use them correctly, and because it involves two changes of ordered basis instead of just one. But it's fairly straightforward in the sense that if you choose a good notation and apply the definitions correctly in every step, you will get the result you want.

The notations I used in post #5 are good enough, but we also need something to distinguish between the ith component of a vector x with respect to U, and the ith component of the same vector with respect to V. You could e.g. denote the former by $x_i$ and the latter by $x_i'$. In your case, that would be $x_i$ for the ith component with respect to the standard ordered basis, and $x_i'$ for the ith component with respect to the other one.

In general, when you ask for assistance here, you should post your attempt up to the point where you're stuck. If you're stuck right at the start, then you should at least explain what definitions and/or theorems you think you should be using, and what it is about them that confuses you or makes you think they won't solve the problem.

Edit: These are some suggestions to get you started. I will denote the standard ordered bases for $\mathbb R^n$ and $\mathbb R^m$ by E. (If that feels weird, you can use notations like $E_n$ and $E_m$). I will denote the other two ordered bases by B and C. In my notation, what's given in the problem is that $[f]_{E,E}=A$. The problem is asking you to use that to find $[f]_{C,B}$. Post #5 gives you a formula for its ij component. It's $([f]_{C,B})_{ij}=(fb_j)_i'$. How do you proceed from this? A good start would be to multiply by $c_i$ and sum over i. Then you can use the definitions (and what I'm saying in the last paragraph below) to simplify the right-hand side. When you've taken that as far as you can, you should look for another way to rewrite the left-hand side.

Some other useful notations: Let M be the linear operator such that $b_i=Me_i$ for all i. Let N be the linear operator such that $c_i=Ne_i$ for all i. Note that $b_i=Me_i=\sum_j (Me_i)_j e_j =\sum_j ([M]_{E,E})_{ji} e_j$. It's convenient and relatively harmless to simplify the notation from $[M]_{E,E}$ to just M. So we can write $b_i=\sum_j M_{ji} e_j$.
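Worked through numerically for $n=m=2$, these hints lead to the following plain-Python sketch: with M holding the $b_i$ as columns and N holding the $c_i$ as columns (both in standard coordinates), the $j$th column of $[f]_{C,B}$ is the coordinate vector of $f(b_j)$ with respect to C, i.e. $N^{-1}Ab_j$, so the whole matrix is $N^{-1}AM$. The basis C below is my own example choice; the thread doesn't fix one.

```python
# Numerical sketch for n = m = 2: [f]_{C,B} = N^{-1} A M, where the j-th
# column is the coordinate vector of f(b_j) with respect to C.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Invert a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[3, 1], [1, 2]]   # matrix of f in the standard bases (from post #3)
M = [[3, 1], [2, 1]]   # columns: b_1 = (3,2), b_2 = (1,1) (from post #3)
N = [[1, 1], [0, 1]]   # columns: c_1 = (1,0), c_2 = (1,1) -- an example choice

print(matmul(inv2(N), matmul(A, M)))  # [[4.0, 1.0], [7.0, 3.0]]
```

As a sanity check, $f(b_1) = (11,7) = 4c_1 + 7c_2$ and $f(b_2) = (4,3) = 1c_1 + 3c_2$, matching the columns of the result.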

Last edited: Sep 28, 2015