How can matrices be used as a basis for linear mappings?

nickthegreek
Hi. Define a linear mapping F: M2 → M2 by F(X) = AX − XA for a fixed matrix A, and find a basis for the nullspace and for the vectorspace (not sure if this is the right term in English). Then I want to show that dim N(F) = dim V(F) = 2 for all A such that A ≠ λI for every real λ. We have F(A) = F(E) = 0, so A and E belong to the nullspace. Then I define a basis for M2 as the 2x2 matrices B = (B11, B12, B21, B22), where Bij has a 1 in position (i,j) and 0's elsewhere. Well, this is how I am supposed to do it, but it confuses me.

How should I view the basis matrices, for example with respect to linear independence? Let's say we define A to be the 2x2 matrix with entries (a, b, c, d) and apply F to each basis matrix. We get four matrices F(Bij), and I want to sort out which ones are linearly independent, under the condition A ≠ λI. How do I show linear independence for matrices?
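Just to make the linear-independence question concrete for myself: a set of matrices is linearly independent exactly when their flattened entry-vectors in R^4 are, so one way to check is to stack those vectors and look at the rank. A quick NumPy sketch (the particular A below is only an example of a non-scalar matrix):

```python
import numpy as np

# Example non-scalar A; any A that is not a multiple of I would do.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Matrices are linearly independent iff their flattened entry-vectors are.
# Stack vec(I) and vec(A) as rows and check the rank of the stack.
V = np.vstack([np.eye(2).reshape(-1), A.reshape(-1)])
print(np.linalg.matrix_rank(V))      # 2 -> I and A are linearly independent

# A scalar matrix, by contrast, is dependent on I:
V_dep = np.vstack([np.eye(2).reshape(-1), (3.0 * np.eye(2)).reshape(-1)])
print(np.linalg.matrix_rank(V_dep))  # 1 -> 3I and I are linearly dependent
```

The same flattening trick would work for checking which of the four matrices F(Bij) are linearly independent.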
 
nickthegreek said:
Hi. Define a linear mapping F: M2 → M2 by F(X) = AX − XA for a fixed matrix A, and find a basis for the nullspace and for the vectorspace (not sure if this is the right term in English).
"Nullspace" is perfectly good English. I'm not sure which "vectorspace" you mean but I suspect you mean the "range"- the sub-space that all matrices in M2 are mapped to by A. The dimension of the nullspace is the "nullity" of the linear mapping and the dimension of the range is its "rank". The "rank nullity theorem" for a linear mapping from U to V says that the rank and nullity add to the dimension of U.

Of course, X is in the null space if and only if AX = XA; in other words, the null space is the space of all matrices that commute with A. I'm not sure what it looks like in general, but you could "experiment" by writing ##A= \begin{bmatrix}a & b \\ c & d \end{bmatrix}## and ##X= \begin{bmatrix}s & t \\ u & v\end{bmatrix}##, and then you want
$$AX= \begin{bmatrix}as+ bu & at+ bv \\ cs+ du & ct+ dv\end{bmatrix}= \begin{bmatrix}as+ ct & bs+ dt \\ au+ cv & bu+ dv\end{bmatrix}= XA,$$
so we must have as+ bu = as+ ct, i.e. bu = ct; at+ bv = bs+ dt, i.e. (a− d)t = b(s− v); cs+ du = au+ cv, i.e. c(s− v) = (a− d)u; and ct+ dv = bu+ dv, i.e. ct = bu again. Taking t = u = 0, the remaining equations reduce (for b or c nonzero) to s = v, which gives ##\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}## as one basis vector. Taking instead t = b, v = 0 (assuming b ≠ 0) gives u = c and s = a− d, i.e. ##\begin{bmatrix}a-d & b \\ c & 0\end{bmatrix}= A- dI## as a second, independent basis vector; together with the identity it spans the same two-dimensional space as {I, A}.
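As a sanity check on this algebra, here is a small numerical experiment (a sketch only; the particular entries of A are arbitrary, and NumPy's SVD is used to read off the nullity): the four entry equations above form a 4×4 linear system in (s, t, u, v), its nullspace turns out to be two-dimensional, and both I and A itself satisfy it.

```python
import numpy as np

# Example non-scalar A = [[a, b], [c, d]]; the entries are arbitrary.
a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])

# Coefficient matrix of the four entry equations of AX = XA in the
# unknowns (s, t, u, v):
#   bu - ct = 0,  (a-d)t - b(s-v) = 0,  c(s-v) - (a-d)u = 0,  ct - bu = 0
M = np.array([[0.0,  -c,      b,      0.0],
              [-b,    a - d,  0.0,    b  ],
              [ c,    0.0,  -(a - d), -c ],
              [0.0,   c,     -b,      0.0]])

# The nullity of M is the dimension of {X : AX = XA}.
singular_values = np.linalg.svd(M, compute_uv=False)
print(np.sum(singular_values < 1e-10))        # 2

# Both I and A commute with A, so their entry-vectors lie in that nullspace.
for X in (np.eye(2), A):
    print(np.allclose(M @ X.reshape(-1), 0))  # True, True
```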

 
I meant the column space, i.e. the range, yes! And I mixed up E and I: we use E for the identity matrix.

So, what we've done here (could you check whether my line of thought is correct?) is: we wrote out the transformation for an arbitrary matrix A and an arbitrary X, examined the nullspace by setting AX = XA, and found two independent matrices spanning it, so the nullity of the transformation is 2. Then, by the rank–nullity theorem, the rank is 4 − 2 = 2, so dim N(F) = dim C(F) = 2.

I keep mixing things up. A is the matrix of the transformation, right? The one whose nullspace basis we've just found? I've never seen matrices used as basis vectors before. When we have ordinary basis vectors, we just put them as columns of a transformation matrix; what do we do with basis matrices for a transformation matrix? And how does this relate to the condition A ≠ λI?
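Just to spell out the "columns" idea with basis matrices (a sketch with NumPy; the A used is an arbitrary non-scalar example): the coordinates of F(Bij) with respect to B11, B12, B21, B22 are simply the entries of F(Bij), and putting them in as columns gives the 4×4 matrix of F, from which the rank and nullity can be read off.

```python
import numpy as np

# Arbitrary non-scalar example A (A != lambda*I).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def F(X):
    """The mapping F(X) = AX - XA."""
    return A @ X - X @ A

# Standard basis B11, B12, B21, B22 of M2: a single 1, zeros elsewhere.
basis = []
for i in range(2):
    for j in range(2):
        B = np.zeros((2, 2))
        B[i, j] = 1.0
        basis.append(B)

# Matrix of F in this basis: column k holds the coordinates of F(B_k),
# which (in the standard basis) are just the entries of F(B_k).
M_F = np.column_stack([F(B).reshape(-1) for B in basis])

rank = np.linalg.matrix_rank(M_F)
print(rank)        # 2 -> dimension of the range of F
print(4 - rank)    # 2 -> dim N(F), by rank-nullity
```

With A = λI every F(Bij) is the zero matrix, so the rank drops to 0 and the nullity becomes 4; that is where the condition A ≠ λI comes in.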

Thanks for answering, HallsofIvy!
 