Understanding [T]_gamma and Its Purpose in Linear Transformations

DhineshKumar
Let's consider a linear transformation T: R^2 -> R^3, with alpha = {an ordered basis of R^2}, beta = {an ordered basis of R^3}, and gamma = {v1, v2}, where v1 = (1, -1) and v2 = (2, -5). Now I need to find [T]_gamma (the associated matrix). When I read about it, I understood it as follows: first find the transformation of each of the vectors from gamma, [T(v1), T(v2)], then write T(v1), T(v2) as linear combinations of the gamma vectors. The coefficients, written in columns, would give me [T]_gamma.

I want to know whether what I have understood is right or wrong. Moreover, I want to know why we need the different forms [T]_alpha, [T]_beta, [T]_gamma?
 
The matrix for the transformation, with respect to the ##\alpha## and ##\beta## bases, will have two columns, each with three elements. The elements of the first column will be the coordinates of ##Tv_1## in the ##\beta## basis, and the elements of the second column will be the coordinates of ##Tv_2## in the ##\beta## basis, where ##v_1## and ##v_2## are the two vectors in the ##\alpha## basis.
I don't know what the ##\gamma## basis is that you're referring to, though. Only two bases are needed.
 
Yes, to find the matrix representing a linear transformation from one vector space to another, apply the linear transformation to each basis vector in some ordered basis of the domain, then write the result as a linear combination of the basis vectors of the other space. The coefficients give a column of the matrix representation. As for "why we need different forms": linear transformations act on vectors, while matrices act on columns of numbers (coordinate vectors). You need to connect one with the other, and bases allow you to do that.
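If it helps, here is a minimal NumPy sketch of that recipe (not from the original post; the names `T` and `matrix_of` are just illustrative): apply T to each domain basis vector, solve for its coordinates in the codomain basis, and stack those coordinate vectors as columns.

```python
import numpy as np

# Sketch only: the names T and matrix_of are illustrative, not from the thread.

def T(v):
    """The example map from the post: (x, y) -> (x - y, x + y, y)."""
    x, y = v
    return np.array([x - y, x + y, y])

def matrix_of(T, domain_basis, codomain_basis):
    """Matrix of T relative to the given ordered bases.

    Each column comes from applying T to a domain basis vector u and
    solving B c = T(u), where B has the codomain basis vectors as its
    columns, so c is the coordinate vector of T(u) in that basis.
    """
    B = np.column_stack(codomain_basis)
    columns = [np.linalg.solve(B, T(u)) for u in domain_basis]
    return np.column_stack(columns)
```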

For example, suppose our linear transformation maps (x, y) in R^2 to (x - y, x + y, y) in R^3. The "usual basis" for R^2 is {u1, u2} = {(1, 0), (0, 1)} and the "usual basis" for R^3 is {v1, v2, v3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Applying the linear transformation to u1 = (1, 0) gives (1 - 0, 1 + 0, 0) = (1, 1, 0) = 1(1, 0, 0) + 1(0, 1, 0) + 0(0, 0, 1) = 1v1 + 1v2 + 0v3, so the first column of the matrix is ##\begin{bmatrix}1 \\ 1 \\ 0 \end{bmatrix}##. Applying it to u2 = (0, 1) gives (0 - 1, 0 + 1, 1) = (-1, 1, 1) = -1(1, 0, 0) + 1(0, 1, 0) + 1(0, 0, 1) = -1v1 + 1v2 + 1v3, so the second column is ##\begin{bmatrix} -1 \\ 1 \\ 1\end{bmatrix}##. The matrix corresponding to the linear transformation is ##\begin{bmatrix} 1 & -1 \\ 1 & 1 \\ 0 & 1\end{bmatrix}##.
You can check that $$\begin{bmatrix}1 & -1 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix} = \begin{bmatrix}x - y \\ x + y \\ y \end{bmatrix}.$$
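For a quick numerical sanity check of that identity, here is a small NumPy computation (purely illustrative, with an arbitrary test point):

```python
import numpy as np

M = np.array([[1, -1],
              [1,  1],
              [0,  1]])
x, y = 3.0, 2.0                       # arbitrary test point
print(M @ np.array([x, y]))           # [1. 5. 2.]
print(np.array([x - y, x + y, y]))    # [1. 5. 2.]  -- matches (x - y, x + y, y)
```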

Another basis for R^3 would be {(1, 1, 0), (1, -1, 0), (0, 0, 1)}. Applying the given linear transformation to (1, 0) and (0, 1) as before, we again get (1, 1, 0) and (-1, 1, 1). But now we want to find a, b, and c so that (1, 1, 0) = a(1, 1, 0) + b(1, -1, 0) + c(0, 0, 1). That gives the three equations a + b = 1, a - b = 1, c = 0. Adding the first two equations gives 2a = 2, so a = 1 and then b = 0. Thus (1, 1, 0) = 1(1, 1, 0) + 0(1, -1, 0) + 0(0, 0, 1), and the first column is ##\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix}##.

Applying the given linear transformation to (0, 1) we get, as before, (-1, 1, 1), and we want to find a, b, and c such that (-1, 1, 1) = a(1, 1, 0) + b(1, -1, 0) + c(0, 0, 1). That gives the three equations a + b = -1, a - b = 1, c = 1. Adding the first two equations gives 2a = 0, so a = 0, b = -1, and c = 1. The second column is ##\begin{bmatrix}0 \\ -1 \\ 1 \end{bmatrix}##.

The matrix is ##\begin{bmatrix}1 & 0 \\ 0 & -1 \\ 0 & 1\end{bmatrix}##.
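Using the earlier sketch (the hypothetical `matrix_of` and `T` helpers), both matrices come out the same way; the only thing that changes is the codomain basis passed in:

```python
std2 = [np.array([1, 0]), np.array([0, 1])]                        # usual basis of R^2
std3 = [np.array(e) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]    # usual basis of R^3
alt3 = [np.array(e) for e in ([1, 1, 0], [1, -1, 0], [0, 0, 1])]   # the alternative basis

print(matrix_of(T, std2, std3))
# [[ 1. -1.]
#  [ 1.  1.]
#  [ 0.  1.]]
print(matrix_of(T, std2, alt3))
# [[ 1.  0.]
#  [ 0. -1.]
#  [ 0.  1.]]
```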
 
Thank you so much @HallsofIvy for your explanation.
 
@andrewkirk The gamma basis is a separate basis that I defined, different from the standard canonical basis, and it consists of the vectors I mentioned in the question.
 
