Transition Matrix and Ordered Bases

Let B and C be ordered bases for ##\mathbb{R}^n##. Let P be the matrix whose columns are the vectors in B and let Q be the matrix whose columns are the vectors in C. Prove that the transition matrix from B to C equals ##Q^{-1}P##.


I am stuck. Here is what I have.

I know that if B is the standard basis in ##\mathbb{R}^n##, then the transition matrix from B to C is given by ##[\text{1st vector in C}\ \ \text{2nd vector in C}\ \cdots\ \text{nth vector in C}]^{-1}##, that is, ##Q^{-1}##.

Also, if C is the standard basis in ##\mathbb{R}^n##, then the transition matrix from B to C is given by ##[\text{1st vector in B}\ \ \text{2nd vector in B}\ \cdots\ \text{nth vector in B}]##, that is, ##P##.

Since I know what the transition matrix from B to C is when one of the two is the standard basis, I am having a difficult time relating this to the columns of each.
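
Both special cases are consistent with the target formula ##Q^{-1}P##: taking P = I (B standard) leaves ##Q^{-1}##, and taking Q = I (C standard) leaves ##P##. A quick NumPy check of the first case (a sketch only; the random basis is illustrative, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q = rng.standard_normal((n, n))   # columns: the vectors of basis C
v = rng.standard_normal(n)        # an arbitrary vector (standard coordinates)

# With B the standard basis, the C-coordinates y of v must satisfy Qy = v,
# so the transition matrix is Q^{-1}.  Reconstructing v from y confirms it:
y = np.linalg.inv(Q) @ v
assert np.allclose(Q @ y, v)
```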
 
You're going to need a notation for the basis vectors. I suggest
\begin{align} B&=\{e_1,\dots,e_n\}\\
C &=\{f_1,\dots,f_n\}
\end{align} How do you define the transition matrix from B to C? Is it the M defined by
$$f_i=\sum_j M_{ij} e_j$$ or the M defined by
$$f_i=Me_i=\sum_j (Me_i)_j e_j=\sum_j M_{ji} e_j?$$ (The latter M is the transpose of the former). You want to prove that (with one of these choices of M), we have ##M=Q^{-1}P##. This is equivalent to ##QM=P##, which is equivalent to ##P_{ij}=(QM)_{ij}=##what? Use the definition of matrix multiplication to rewrite ##(QM)_{ij}##. Then you can start thinking about rows and columns.
 
I'm confused by the matrix notation. How do you rewrite ##(QM)_{ij}## in terms of rows and columns?
 
I'm not sure if you're asking about what I did or about the problem.

One notation that can be useful is to denote the number on row i, column j of a matrix A by ##A^i_j## instead of ##A_{ij}##. Then you can denote the ith row by ##A^i## and the jth column by ##A_j##.

So for example, we have ##P_i=e_i## for all i: the ith column of P is the ith basis vector of B.
 
If we have a basis ##\{u_1, u_2, \dots, u_n\}## for vector space U, then we can represent a vector ##u = a_1u_1 + a_2u_2 + \dots + a_nu_n## as the array ##\left<a_1, a_2, \dots, a_n\right>##.
In particular, the basis vectors themselves are very easy:
##u_1 = \left<1, 0, \dots, 0\right>##
##u_2 = \left<0, 1, \dots, 0\right>##
... ##u_n = \left<0, 0, \dots, 1\right>##
Now, look at what happens when you multiply each of those, written as a column, by a matrix: multiplying ##u_1## gives just the first column, multiplying ##u_2## gives the second column, etc.
Example:
$$\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix}= \begin{bmatrix}a_{12} \\ a_{22} \\ a_{32}\end{bmatrix}$$
which then gives the coefficients of the expansion of Au in whatever basis we are using for the range space. That is, to represent a linear transformation A from U to V, using a given ordered basis for each, apply A to each basis vector of U in turn, writing the result as a linear combination of the basis vectors of V. The coefficients of that linear combination form the columns of the matrix representation.
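
The same column-extraction fact, checked numerically (a small NumPy sketch mirroring the 3×3 example above; the matrix entries are arbitrary):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Multiplying A by the ith standard coordinate column picks out the
# ith column of A, exactly as in the 3x3 example.
e2 = np.array([0., 1., 0.])
assert np.allclose(A @ e2, A[:, 1])
print(A @ e2)   # [2. 5. 8.], the second column of A
```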
 