Linear Transformations and Bases

Math Amateur
I need some help or at least some assurance that my thinking on linear transformations and their matrix representations is correct.

I assume that when we specify a linear transformation, e.g. F(x, y, z) = (3x + y, y + z, 2x - 3z), it is specified by its action on the variables and is not given with respect to any basis.

However, when we specify the matrix of a linear transformation T: V --> W, this is with respect to a basis of V and a basis of W.

Of course, if we have a linear transformation S: V --> V, it could be that the two bases are the same.

If no basis is mentioned regarding the matrix of a linear transformation, then I take it that the standard bases are assumed.

Can someone either confirm I am correct in my thinking or point out the errors in my thinking?

Peter
 
If no basis is mentioned regarding the matrix of a linear transformation, then I take it that the standard bases are assumed.

What is the standard basis of a generic vector space V?
 
Yes, good point ... I guess I should have specified Euclidean space for that assumption to make sense ...

Are my other assumptions/interpretations OK?
 
Everything is correct, but I think you have a bit of a hang-up on how linear transformations on vector spaces are described.

I assume that when we specify a linear transformation, e.g. F(x, y, z) = (3x + y, y + z, 2x - 3z), it is specified by its action on the variables and is not given with respect to any basis.

However, when we specify the matrix of a linear transformation T: V --> W, this is with respect to a basis of V and a basis of W.

The key here is that in your first example you specified a linear transformation on ##\mathbb{R}^3## without defining it as a matrix multiplication, so no basis is required and no matrix is ever constructed. If you wanted to define F as a matrix multiplication, you would need to specify a basis of ##\mathbb{R}^3##; on Euclidean space this step is often omitted because everyone assumes your basis is the standard basis (1,0,0), (0,1,0), (0,0,1) (and specifying F as a matrix multiplication with respect to a different basis is just extra work).
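To make that concrete, here is what this looks like for the F above, assuming the standard basis on both the domain and the codomain; reading the coefficients off the defining formula gives
$$F\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 1 & 1 \\ 2 & 0 & -3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3x + y \\ y + z \\ 2x - 3z \end{pmatrix}.$$
The columns of the matrix are exactly the images of the standard basis vectors: F(1,0,0) = (3,0,2), F(0,1,0) = (1,1,0), F(0,0,1) = (0,1,-3).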

If we want to specify a linear transformation V --> W as a matrix multiplication, we need to pick bases of V and W to identify them with Euclidean space. But we can have linear transformations that are not represented as matrix multiplications. For example, let V be the set of all polynomials of degree <= 3 (this is a 4-dimensional vector space over ##\mathbb{R}##) and let W be ##\mathbb{R}##. Then consider
$$I : V \to \mathbb{R}, \qquad I(p(x)) = \int_0^1 p(x)\, dx.$$
##I## is a linear transformation, and I never had to specify a basis of V to tell you what the function is, because I didn't describe ##I## as a matrix multiplication.
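(You could of course represent ##I## by a matrix once you commit to bases. With one possible choice, the basis ##\{1, x, x^2, x^3\}## of V and ##\{1\}## of ##\mathbb{R}##, we have ##\int_0^1 x^k \, dx = \frac{1}{k+1}##, so the matrix of ##I## would be the ##1 \times 4## row
$$\begin{pmatrix} 1 & \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} \end{pmatrix},$$
giving ##I(a + bx + cx^2 + dx^3) = a + \tfrac{b}{2} + \tfrac{c}{3} + \tfrac{d}{4}##. The matrix only exists after that choice of bases; the transformation existed before it.)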
 
Thanks so much for that post - most helpful

I suppose the essential thing needed to derive a matrix of a linear transformation is that the vector spaces involved are over a field F, where F = R or C.

Is that correct?

I missed your example due to some LaTeX error or other - a pity - I would have liked to view your example.

Thanks again!

Peter
 