Matrix rep. of Linear Transformation

eckiller
Hello all,

I am trying to understand the matrix representation of a linear transformation.

So here is my thought process.

Let B = (b1, b2, ..., bn) be a basis for V, and let Y = (y1, y2, ..., ym) be a basis for W.

T: V --> W

Pick any v in V and express it as a linear combination of the basis vectors:

##v = \sum_{i=1}^{n} a_i b_i##

##T(v) = \sum_{i=1}^{n} a_i T(b_i)##

i.e., the transformed vector T(v) is determined by a linear combination of the transformed basis vectors.

Now coordinatize everything relative to Y, which we can always do since the coordinate map ##w \mapsto [w]_Y## is an isomorphism.

##[T(v)]_Y = \sum_{i=1}^{n} a_i [T(b_i)]_Y##

Then we can write this linear combination as a matrix multiplication; that is, the coordinate vectors [T(bi)]_Y form the columns of the matrix representation.

Anyway, it took me a while to get this and I still doubt myself. Is my reasoning correct?
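The construction described above can be checked numerically. Here is a minimal sketch in Python/NumPy (the transformation T and the bases B and Y below are illustrative choices, not from the thread): build the matrix A column by column from ##[T(b_i)]_Y## and verify that ##[T(v)]_Y = A\,[v]_B##.

```python
import numpy as np

# An illustrative linear map T: R^3 -> R^2, given in standard coordinates
def T(v):
    return np.array([v[0] + 2*v[1], 3*v[2] - v[1]])

# Basis B for V = R^3 and basis Y for W = R^2, stored as columns
B = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])
Y = np.array([[1., 1.],
              [0., 1.]])

# Coordinatize w relative to Y: solve Y c = w for c, i.e. c = [w]_Y
def coords_Y(w):
    return np.linalg.solve(Y, w)

# Columns of the matrix representation are [T(b_i)]_Y
A = np.column_stack([coords_Y(T(B[:, i])) for i in range(3)])

# Check: for arbitrary B-coordinates a = [v]_B, A a equals [T(v)]_Y
a = np.array([2., -1., 3.])   # [v]_B
v = B @ a                     # v in standard coordinates
assert np.allclose(A @ a, coords_Y(T(v)))
print(A)
```

The key line is `np.column_stack(...)`: it is a literal transcription of "the i-th column of A is the Y-coordinate vector of T(b_i)".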
 
It is simpler to look at the individual basis vectors. If you take the bi in a fixed order, then b1 itself is represented by the n-tuple (1, 0, 0, ..., 0). Writing that as a column vector and multiplying it by the matrix representing T, you see that every entry in the first column is multiplied by 1 and all other entries by 0. That is, the first column is precisely the coefficients of T(b1) with respect to Y.

Now look at b2, etc., to get the other columns.
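The point about b1 can be seen directly: multiplying any matrix by the coordinate vector (1, 0, ..., 0) returns exactly its first column. A quick sketch (the matrix A here is an arbitrary illustrative example):

```python
import numpy as np

# An arbitrary 2x3 matrix standing in for the matrix of T
A = np.array([[1., 3., 1.],
              [0., -1., 2.]])

# In B-coordinates, b1 is represented by the n-tuple (1, 0, 0)
e1 = np.array([1., 0., 0.])

# Multiplying by e1 picks out the first column of A,
# i.e. the Y-coordinates of T(b1)
assert np.allclose(A @ e1, A[:, 0])
print(A @ e1)
```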
 



Hi there,

Your thought process is correct. By choosing a basis for each of the vector spaces V and W, we can express any vector in V as a linear combination of the basis vectors of V, so T(v) is determined entirely by the transformed basis vectors T(bi). Coordinatizing relative to Y turns that linear combination into a matrix multiplication: the columns of the matrix are the coordinate vectors [T(bi)]_Y.

In short, the matrix representation A of T relative to the bases B and Y is the matrix whose i-th column is [T(bi)]_Y, and it satisfies [T(v)]_Y = A [v]_B for every v in V. This lets you carry out the transformation entirely in coordinates.

I hope this helps clarify your understanding. Keep up the good work!
 