When are linear transformations not invariant?

evilpostingmong
I am studying invariance, and I came across this dilemma.
Suppose we have a subspace (let's say ##U_2##) with basis ##\langle v_1, v_2\rangle##,
and we map ##v = c_1v_1 + c_2v_2## with ##c_2 = 0##.
Now ##c_1T(v_1) + c_2T(v_2) = k_1c_1v_1 + 0\cdot T(v_2) = k_1c_1v_1##.
I am doing a proof and need to know what the question means by "the intersection
of a collection of subspaces," and I believe that this is what it refers to,
since we can map ##c_1v_1## of ##\langle v_1\rangle## (the basis of ##U_1##) to ##U_1##
and arrive at the same answer.
 
First, it is not the linear transformation that is or is not invariant; it is a set of vectors that is or is not invariant under a transformation.

So ##T(c_1v_1 + c_2v_2) = T(c_1v_1)## (because ##c_2 = 0##), and ##T(c_1v_1) = c_1T(v_1)##. Now where do you get that ##c_1T(v_1) = k_1c_1v_1##? That is equivalent, of course, to saying that ##T(v_1) = k_1v_1##, so that ##v_1## is an eigenvector of ##T## with eigenvalue ##k_1##. The "intersection" of a collection of subspaces is just the intersection of the sets: those vectors that are in all of the subspaces. In linear algebra it is comparatively easy to show that the intersection of a collection of subspaces is itself a subspace.
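For reference, here is a sketch of that standard argument (using the usual subspace axioms; the notation ##U_i## for the collection is mine):

```latex
Let $W = \bigcap_i U_i$, where each $U_i$ is a subspace of $V$.
% The zero vector lies in every subspace, hence in the intersection:
Since $0 \in U_i$ for every $i$, we have $0 \in W$, so $W$ is nonempty.
% Closure under addition and scalar multiplication, checked in one step:
If $u, w \in W$ and $c$ is a scalar, then $u, w \in U_i$ for every $i$,
so $u + cw \in U_i$ for every $i$ (each $U_i$ is a subspace),
hence $u + cw \in W$. Therefore $W$ is itself a subspace of $V$.
```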

I cannot see that "intersection of a collection of subspaces" has anything to do with T(c1v1). What, exactly, are you trying to prove?
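To make the eigenvector point concrete, here is a small numerical check (the matrix ##T##, the eigenvalue 3, and the vector ##v_1## are my own illustrative choices, not anything from the thread): if ##v_1## is an eigenvector of ##T##, then ##T## maps every multiple of ##v_1## back into the span of ##v_1##, which is exactly what makes that one-dimensional subspace invariant.

```python
import numpy as np

# Hypothetical example: T is upper triangular with eigenvector v1 = (1, 0)
# and eigenvalue k = 3, so U1 = span{v1} is T-invariant.
T = np.array([[3.0, 1.0],
              [0.0, 2.0]])
v1 = np.array([1.0, 0.0])

# T(c1*v1) = c1*T(v1) = c1*k*v1, which stays inside span{v1} for any scalar c1.
c1 = 5.0
image = T @ (c1 * v1)
print(image)  # [15.  0.] -- a multiple of v1

# The second component is zero, so the image lies in span{v1}:
assert image[1] == 0.0
```

By contrast, applying ##T## to a vector outside the span of an eigenvector (say ##(0, 1)## here) generally lands outside its own span, which is why not every subspace is invariant.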
 
Oh sorry, I kind of got caught up in the definition of invariance.
I do know how to prove this; it was the definition that
got me stuck, but you have made it clearer. Thank you!
 