Understanding Change of Basis in Vector Spaces

princy
Hi, can anyone explain the concept behind change of basis? Why do we convert a vector from one basis to another?
 
In general, each component of a vector in one basis is a linear combination of its components in the other basis.
 
One reason we change bases is that we are looking for another representation of some vector. Writing a vector in terms of a basis gives us a concrete interpretation of that vector, and if we are given an "easy" basis, we get an "easy" representation of the vector.
For example, suppose you had a basis ##\{v_1, \dots, v_n\}## such that ##T(v_i) = c_i v_i##, where each ##c_i## is a constant. Then any vector ##w## in your space can be written as ##w = a_1 v_1 + a_2 v_2 + \dots + a_n v_n##, and ##Tw = a_1 Tv_1 + \dots + a_n Tv_n = a_1 c_1 v_1 + \dots + a_n c_n v_n##.
So all ##T## does to a vector ##w## in ##V## is scale each coefficient by the corresponding factor ##c_i##.
The matrix interpretation is that if you have an ##n \times n## diagonal matrix and an ##n \times 1## column vector, then to multiply the two you only have to multiply each entry of the vector by the corresponding diagonal entry of the matrix.
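The idea above can be sketched numerically: a minimal numpy example (with an arbitrary symmetric matrix chosen for illustration) showing that in the basis of eigenvectors, the transformation becomes a diagonal matrix.

```python
import numpy as np

# An arbitrary symmetric matrix T, expressed in the standard basis.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Its eigenvectors form a basis in which T acts by pure scaling.
eigenvalues, P = np.linalg.eigh(T)  # columns of P are the basis vectors v_i

# Change of basis: in the eigenbasis, T is represented by a diagonal matrix D
# with the eigenvalues c_i on the diagonal.
D = np.linalg.inv(P) @ T @ P
print(np.round(D, 10))
```

Multiplying a coordinate vector by ##D## just scales each coefficient ##a_i## by ##c_i##, exactly as described above.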
 
Yes, some bases make calculations easier: a matrix might take a "nicer" form, such as upper triangular or diagonal, or one might want an orthonormal basis to simplify calculating inner products.

Another reason might be that you have some data given in terms of certain linearly independent functions, but you want to express it in terms of "standard" functions. Perhaps "cost" is determined by one polynomial and "productivity" by another, and you want to express the results in terms of ##1, x, x^2, x^3##, etc.
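As a concrete sketch of that polynomial case (the basis and coefficients here are made up for illustration): suppose a "cost" polynomial is given in the shifted basis ##\{1, (x-1), (x-1)^2\}##, and we want its coefficients in the standard basis ##\{1, x, x^2\}##. The conversion is a single change-of-basis matrix whose columns are the shifted basis polynomials expanded in the standard basis.

```python
import numpy as np

# Hypothetical data: cost(x) = 2 + 3(x-1) + 1(x-1)^2 in the shifted basis.
coeffs_shifted = np.array([2.0, 3.0, 1.0])

# Columns: each shifted basis polynomial written in the standard basis 1, x, x^2.
#   1       -> [ 1, 0, 0]
#   (x-1)   -> [-1, 1, 0]
#   (x-1)^2 -> [ 1,-2, 1]
B = np.array([[1.0, -1.0,  1.0],
              [0.0,  1.0, -2.0],
              [0.0,  0.0,  1.0]])

# Standard-basis coefficients [a0, a1, a2]:
coeffs_standard = B @ coeffs_shifted
print(coeffs_standard)  # cost(x) = 0 + 1*x + 1*x^2
```

Expanding by hand confirms it: ##2 + 3(x-1) + (x-1)^2 = x^2 + x##.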

Sometimes one basis makes the geometry more transparent and the spatial relationships more obvious. You might transform a "slanted" coordinate system into one with perpendicular axes to get a better feel for what things "look like".
 
Beware that if you want to express a linear transformation as a matrix with respect to some "weird" basis, you have to include the change-of-basis matrix, which represents the identity transformation on vectors between coordinates in the weird basis and standard Euclidean coordinates.

For example, ##S = (s_1, s_2, s_3)##, the matrix whose columns are the basis vectors, is the matrix of the identity transformation from S-space coordinates to Euclidean 3D coordinates.

But what is the matrix from Euclidean 3D coordinates back to S-space? It is ##S^{-1}##, the inverse of ##S##.
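This can be checked numerically; here is a minimal sketch with an arbitrary invertible basis chosen for illustration.

```python
import numpy as np

# A "weird" basis for R^3: the columns are s1, s2, s3 (illustrative values).
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Coordinates of a vector with respect to the basis S.
coords_S = np.array([1.0, 2.0, 3.0])

# S maps S-space coordinates to standard Euclidean coordinates...
v_euclidean = S @ coords_S

# ...and S inverse maps Euclidean coordinates back to S-space.
coords_back = np.linalg.inv(S) @ v_euclidean
print(v_euclidean, coords_back)
```

Round-tripping through ##S## and then ##S^{-1}## recovers the original coordinates, which is exactly the statement that the two matrices represent the same identity transformation in opposite directions.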
 