Not clear about the change of basis in new space

Sumanta
Actually, after I wrote down the query on the invertible matrix which I posted a few days ago, I went back to Hoffman and Kunze and found that this is a standard theorem about transforming a linear operator from one basis to another.

Then I realized that the unclear point is this: if Tv is a vector expressed in basis B, how can I write P(Tv) with respect to B', where P is the change-of-basis matrix from B to B'? When you do this, you are premultiplying a vector which already lives in the space W. But Theorem 8 on p. 53 of the book says:

Suppose P is an n x n invertible matrix over F. Let V be an n-dimensional vector space over F, and let B be an ordered basis of V. Then there exists a unique ordered basis B' of V such that ##[\alpha]_B = P[\alpha]_{B'}## for every ##\alpha## in V.

So how is it that here the vector space in which the vector resides and the basis are completely different? Am I missing something very obvious? My guess is that even if the space W has smaller dimension than V, it is extended by padding with zeros to match V, and then the technique above is applied. I am still highly confused about applying an n x n matrix, which maps V -> V, to something in W.

Sorry, I do not know how to use subscripts here for clarity, but hopefully I have managed to get my doubt across.
 
Sumanta said:
So how is it that here the vector space in which the vector resides and the basis are completely different?

It's possible for a vector space to have infinitely many bases. For example, take the vector space ##\mathbb{R}^2##. Then ##B_1 = \{ (0,1), (1,0) \}## is a basis for the vector space. Notice that ##B_2 = \{ (1,2), (2,1) \}## is also a basis for the same vector space, and so on.
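A quick numerical check of this (my own sketch, assuming NumPy): a set of n vectors in ##\mathbb{R}^n## is a basis exactly when the matrix with those vectors as columns is invertible, i.e. has nonzero determinant.

```python
import numpy as np

# Columns of each matrix are candidate basis vectors for R^2.
B1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
B2 = np.array([[1.0, 2.0],
               [2.0, 1.0]])

# A nonzero determinant means the columns are linearly independent,
# hence form a basis of R^2.
print(np.linalg.det(B1))  # -1.0
print(np.linalg.det(B2))  # -3.0
```

Both determinants are nonzero, so both sets are bases, even though the vectors themselves are different.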

Then I realized that the unclear point is this: if Tv is a vector expressed in basis B, how can I write P(Tv) with respect to B', where P is the change-of-basis matrix from B to B'?

When we represent a linear transform by a matrix, remember that it's only meaningful relative to some ordered basis. So, you can represent the same linear transform with respect to a different ordered basis.
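To illustrate this with a hypothetical example (the matrices below are my own, not from the thread): take one linear map on ##\mathbb{R}^2## and compute its matrix with respect to a second basis. If P has the new basis vectors (written in the old basis) as its columns, the two matrices are related by ##[T]_{B'} = P^{-1}[T]_B P##, and invariants like trace and determinant agree because both matrices represent the same map.

```python
import numpy as np

# Matrix of a linear map T in the standard basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Columns of P are the new basis vectors (1,2) and (2,1)
# written in the old (standard) basis.
P = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Matrix of the SAME map T relative to the new basis.
A_new = np.linalg.inv(P) @ A @ P

# Similar matrices share trace and determinant.
print(np.isclose(np.trace(A), np.trace(A_new)))          # True
print(np.isclose(np.linalg.det(A), np.linalg.det(A_new)))  # True
```

The point is that A and A_new are different arrays of numbers but one and the same linear transform, seen through two different ordered bases.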

Did that help in clarifying your doubt?
 
siddharth said:
It's possible for a vector space to have infinitely many bases. For example, take the vector space ##\mathbb{R}^2##. Then ##B_1 = \{ (0,1), (1,0) \}## is a basis for the vector space. Notice that ##B_2 = \{ (1,2), (2,1) \}## is also a basis for the same vector space, and so on.



When we represent a linear transform by a matrix, remember that it's only meaningful relative to some ordered basis. So, you can represent the same linear transform with respect to a different ordered basis.

Did that help in clarifying your doubt?

The points you mention are clear, but the following is not. When you do this, you are premultiplying a vector which is already in the space W. So if W is an m-dimensional space, P is n x n, and m < n, then when you multiply P with Tv, do you assume that the vector in W is extended to dimension n by appending n - m zeros at the end?
 
Sumanta said:
Then I realized that the point which was not clear was that if Tv is a vector in basis B then how could with respect to B' I could write P(Tv) where P is the matrix of transformation from B to B'.

##x_{(e')} = P_{(e)}^{-1} \, x_{(e)}##, where ##x_{(e)}## and ##x_{(e')}## are the representations of a vector x in the bases (e) and (e'), respectively, and ##P_{(e)}^{-1}## is the transformation matrix from (e) to (e').
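As a numerical sanity check of this formula (my own sketch, assuming NumPy, with the basis ##B_2## from earlier in the thread as (e')): if the columns of P are the (e)-coordinates of the new basis vectors, then the coordinates of a fixed vector transform by ##x_{(e')} = P^{-1} x_{(e)}##, and multiplying by P recovers the original coordinates.

```python
import numpy as np

# Columns of P: the new basis vectors (1,2) and (2,1),
# written in the standard basis (e).
P = np.array([[1.0, 2.0],
              [2.0, 1.0]])

x_e = np.array([3.0, 3.0])          # coordinates of x in the basis (e)
x_e_prime = np.linalg.inv(P) @ x_e  # coordinates of the SAME x in (e')

print(x_e_prime)      # [1. 1.]  since (3,3) = 1*(1,2) + 1*(2,1)
print(P @ x_e_prime)  # [3. 3.]  reconstructing x recovers the original
```

Note that the vector x never changes here; only its coordinate representation does, which is the resolution of the confusion above.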
 