Can a Change of Basis Simplify Linear Problems?

computerages
Hello!

I was wondering if someone can tell me about any application of change of basis. The application can be of any sort.

Thanks!
 
Here's one from Lang's Linear Algebra that's pretty nice.

Let ##A## be an ##n \times n## symmetric matrix, and let ##X(t)## be given in terms of coordinates which are functions of ##t##: ##(x_1(t), x_2(t), \ldots, x_n(t))##.

We want to find all the solutions to ##dX(t)/dt = AX(t)##.

We change our basis to an orthonormal basis of eigenvectors of ##A## (which exists because ##A## is symmetric, by the spectral theorem). Let the new coordinates be ##y_1, y_2, \ldots, y_n##. The linear map represented by ##A## in the first basis is now represented by a diagonal matrix whose entries are the eigenvalues of ##A##. Thus, with respect to the new coordinates, the system decouples into ##dy_i/dt = \lambda_i y_i##, with solutions ##y_i(t) = c_i e^{\lambda_i t}##.

In short, changing the basis made solving the differential equation trivial.
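
Here is a quick sketch of that computation in NumPy (my own illustration, not from Lang); the symmetric matrix ##A## and the initial condition below are made up for the example, and `np.linalg.eigh` supplies the orthonormal eigenbasis:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # hypothetical symmetric matrix
x0 = np.array([1.0, 0.0])    # hypothetical initial condition X(0)

# Orthonormal eigenbasis: A = Q diag(lam) Q^T (eigh is for symmetric matrices)
lam, Q = np.linalg.eigh(A)

def X(t):
    # Change to eigen-coordinates: y(0) = Q^T x0, each y_i(t) = y_i(0) e^{lam_i t},
    # then map back to the original coordinates: X(t) = Q y(t).
    y0 = Q.T @ x0
    return Q @ (y0 * np.exp(lam * t))

# Sanity check that dX/dt = A X, via a central finite difference:
t, h = 0.7, 1e-6
approx_deriv = (X(t + h) - X(t - h)) / (2 * h)
print(np.allclose(approx_deriv, A @ X(t), atol=1e-4))  # True
```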
 
A linear map can usually be represented by infinitely many different matrices; those matrices form an "orbit" under the action of the group of all invertible matrices, by conjugation. Now in that huge array of matrices, it is quite likely there are some which are simpler than others, and thus which reveal more clearly the behavior of the map, and its suitability for the problem you have at hand.

In the example above the problem was to solve a differential equation, but it could be to solve any other linear problem.

The first skill often learned in computational linear algebra courses, Gaussian elimination, is a change-of-basis operation designed to produce, from an arbitrary matrix of equations, a matrix whose solutions are readily visible (see the sketch below).

One version of the implicit function theorem in calculus says that after a change of variables, essentially a nonlinear change of basis, every smooth function with surjective derivative becomes a linear projection.
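
To make the Gaussian elimination point concrete, here is a minimal sketch (my own, with a made-up ##2 \times 2## system) of reducing the augmented matrix to triangular form, after which the solutions are read off row by row:

```python
import numpy as np

def solve_by_elimination(A, b):
    """Reduce [A | b] to upper-triangular form, then back-substitute."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot into row k.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    # Back-substitution: the solutions are now "readily visible" row by row.
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # hypothetical coefficients
b = np.array([3.0, 5.0])
print(solve_by_elimination(A, b))         # matches np.linalg.solve(A, b)
```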

Interesting theorems like the Cayley-Hamilton theorem, which are true for all linear maps, are more easily proved for special matrix representations like diagonal matrices. The density of diagonalizable matrices (over the complex numbers) then implies the result for all matrices.

The point is that anything you want to prove is probably easier for a diagonal matrix than an arbitrary one. Then you can use density, or else try to see if your result is also clear for Jordan matrices, or rational canonical ones.
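
As a quick numerical illustration of the Cayley-Hamilton statement (again my own sketch, not from the post), one can evaluate a matrix's characteristic polynomial at the matrix itself and check that the result is the zero matrix; `np.poly` returns the characteristic polynomial's coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # an arbitrary random matrix

coeffs = np.poly(A)               # characteristic polynomial coefficients, highest degree first
P = np.zeros_like(A)
for c in coeffs:                  # Horner's rule: P = p_A(A)
    P = P @ A + c * np.eye(4)
print(np.allclose(P, 0))          # True, up to round-off
```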
 