Understanding Diagonalization and Eigenvalues in Matrix Transformations

  • Thread starter: mathsciguy
  • Tags: Diagonalization
mathsciguy
Let's say I have a matrix M such that, for vectors R and r in the xy-coordinate system:
##R = Mr##
Suppose we diagonalize it, so that there is another matrix D such that, for the vectors R' (the same vector as R) and r' (the same as r) expressed in the x'y'-coordinate system:
##R' = Dr'##

D is a matrix whose elements are all zero except on its main diagonal, and those diagonal elements are the eigenvalues of M. According to my textbook, the order of the eigenvalues along the diagonal is arbitrary.

Let's look at the case of 2D vectors in which the eigenvectors are perpendicular to each other (so the x'y' axes are obtained by rotating the original xy axes through some angle ##\theta##). Say the matrix D operates on r' so that the components of r' transform into those of R' as:
##X' = x'## and ##Y' = 6y'##

Hence M acts on vectors by 'stretching' them along the direction of y'.
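
To make this concrete, here is the kind of setup I have in mind (the specific form of the rotation matrix is just an assumption for illustration):
$$D = \begin{pmatrix} 1 & 0 \\ 0 & 6 \end{pmatrix}, \qquad S = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \qquad M = S^{-1} D S,$$
with ##r' = Sr## and ##R' = SR##, so that ##R' = Dr'## is the same statement as ##R = Mr##.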

My question is this: I am told that the order of the eigenvalues in a diagonalized matrix is arbitrary, and thus the choice of which eigenvector corresponds to the x' axis and which to the y' axis is also arbitrary. Are we supposed to just examine the behaviour of the vectors in the xy-coordinate system so that we know which eigenvector is parallel to x' and which to y'? For this example, is the component along the direction at angle ##\theta## from the x-axis the one multiplied by 6, or is it the component along the direction at ##\theta + \frac{\pi}{2}## from the x-axis?
 
Are the vector spaces real? Have you learned about change of basis? Keep in mind that not all matrices can be diagonalized. The idea is that we can choose our basis, and a diagonal basis (when available) is convenient. The information about how the coordinates are related is contained in the change of basis matrix. If S is the change of basis matrix (in this case made of eigenvectors), then ##x' = Sx## (some books use the transpose matrix). So S tells us what you ask, and we often find S at the same time we find D. If we forget S, find D without finding S, or are given D without S, we can either find S or find out what we want to know about it. As you alluded to, we can apply M or D to a basis to see what the corresponding basis is.
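
If it helps, here is a minimal NumPy sketch of that pairing between eigenvalues and eigenvector columns (the 30-degree angle and the eigenvalues 1 and 6 are just assumptions for illustration):

```python
import numpy as np

# Build an example M from a chosen rotation angle and chosen eigenvalues.
theta = np.deg2rad(30.0)                          # arbitrary angle, for illustration only
S = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # change-of-basis matrix: x' = S x
D = np.diag([1.0, 6.0])                           # eigenvalues on the diagonal
M = S.T @ D @ S                                   # M = S^{-1} D S (S is orthogonal, so S^{-1} = S^T)

# Now pretend we only know M and recover the eigenvalues and eigenvectors.
eigvals, eigvecs = np.linalg.eig(M)
# Column i of eigvecs is the eigenvector paired with eigvals[i]; the order is not fixed in advance.
for lam, v in zip(eigvals, eigvecs.T):
    angle = np.degrees(np.arctan2(v[1], v[0]))
    print(f"eigenvalue {lam:.1f} stretches the direction at {angle:.1f} deg from the x-axis")
```

Whichever order np.linalg.eig happens to return, eigvals[i] is always paired with column i of eigvecs, so reordering the diagonal of D just reorders the columns of S; the direction that actually gets stretched by 6 does not change.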
 
Hm, yes, if I understood the question correctly, then yes, I am working with matrices and vectors restricted to the reals. About change of basis, I'm not quite sure; I have learned how to make orthonormal basis vectors out of linearly independent vectors (the Gram-Schmidt method), but that's the only remotely related thing I can think of. I am also aware that not all matrices can be diagonalized.

I am not that familiar with linear algebra; I'm more of a newbie. As much as I hate it, I am only learning from a math methods textbook, so I can't enjoy the proofs and some of the abstractions. So I guess the explanation was a bit hand-wavy to me.
 