How can matrices be used as a basis for linear mappings?

nickthegreek
Hi. Define a linear mapping F: M2 --> M2 by F(X) = AX - XA for a fixed matrix A, and find a basis for the nullspace and the vectorspace (not sure if this is the term in English). Then I want to show that dim N(F) = dim V(F) = 2 for all A that are not of the form λI for any real λ. F(A) = F(E) = 0, so A and E belong to the nullspace. Then I define a basis for M2 as the 2x2 matrices B = (B11, B12, B21, B22), where Bij has a 1 in position (i,j) and 0's elsewhere. Well, this is how I am supposed to do it, but it confuses me.

How should I view the basis matrices? For example with respect to linear independence. Let's say we define A to be the 2x2 matrix with elements (a, b, c, d) and map the basis matrices with F. We get 4 matrices F(Bij), and I want to sort out which ones are linearly independent, under the condition A ≠ λI. How do I show linear independence for matrices?
 
nickthegreek said:
Hi. Define a linear mapping F: M2 --> M2 by F(X) = AX - XA for a fixed matrix A, and find a basis for the nullspace and the vectorspace (not sure if this is the term in English).
"Nullspace" is perfectly good English. I'm not sure which "vectorspace" you mean but I suspect you mean the "range"- the sub-space that all matrices in M2 are mapped to by A. The dimension of the nullspace is the "nullity" of the linear mapping and the dimension of the range is its "rank". The "rank nullity theorem" for a linear mapping from U to V says that the rank and nullity add to the dimension of U.

Of course, X is in the null space if and only if AX = XA; in other words, the null space consists of all matrices that commute with A. I'm not sure what it would look like, but you could "experiment" by looking at A= \begin{bmatrix}a & b \\ c & d \end{bmatrix} and X= \begin{bmatrix}s & t \\ u & v\end{bmatrix} and then you want to have AX= \begin{bmatrix}as+ bu & at+ bv \\ cs+ du & ct+ dv\end{bmatrix}= \begin{bmatrix}as+ ct & bs+ dt \\ au+ cv & bu+ dv\end{bmatrix}= XA
so that we must have as+bu = as+ct, i.e. bu = ct; at+bv = bs+dt; cs+du = au+cv; and ct+dv = bu+dv, which again gives ct = bu. Suppose b and c are nonzero; then bu = ct means that if one of t or u is 0, the other is also. If, in fact, both t and u are 0, the remaining equations become bv = bs and cs = cv, so that v = s. That gives \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix} as one basis vector; and since A obviously commutes with itself, A (which by assumption is not a multiple of the identity) supplies a second, linearly independent element of the null space.
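That null-space conclusion is easy to sanity-check numerically (Python/NumPy sketch, not part of the original derivation; the sample A is an arbitrary non-scalar choice): both I and A commute with A, so F(I) = F(A) = 0, and for A ≠ λI the pair {I, A} is linearly independent.

```python
import numpy as np

# Check that I and A both lie in the null space of F(X) = AX - XA,
# and that they are linearly independent when A is not a multiple of I.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # sample non-scalar A (assumption)
I = np.eye(2)
F = lambda X: A @ X - X @ A

assert np.allclose(F(I), 0) and np.allclose(F(A), 0)   # both commute with A

# {I, A} independent: stack their flattenings and check the rank is 2
basis = np.stack([I.ravel(), A.ravel()])
print(np.linalg.matrix_rank(basis))    # 2
```

Since the nullity was found to be 2, {I, A} is then a basis for the whole null space.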

Then I want to show that dim N(F) = dim V(F) = 2 for all A that are not of the form λI for any real λ. F(A) = F(E) = 0, so A and E belong to the nullspace. Then I define a basis for M2 as the 2x2 matrices B = (B11, B12, B21, B22), where Bij has a 1 in position (i,j) and 0's elsewhere. Well, this is how I am supposed to do it, but it confuses me.

How should I view the basis matrices? For example with respect to linear independence. Let's say we define A to be the 2x2 matrix with elements (a, b, c, d) and map the basis matrices with F. We get 4 matrices F(Bij), and I want to sort out which ones are linearly independent, under the condition A ≠ λI. How do I show linear independence for matrices?
 
I meant the column vector space, i.e. the range, yes! And I mixed up E and I; we use E for the identity matrix.

So, what we've done here (if you could check whether my line of thought is correct) is: applied the transformation with an arbitrary matrix A to an arbitrary X, examined the nullspace by setting AX = XA, and found 2 basis vectors for it, so the nullity of the transformation is 2. By the rank-nullity theorem the rank is then 4 - 2 = 2, so dim N(F) = dim C(F) = 2.

I keep mixing things up: A is the matrix of the transformation, right? The one whose basis vectors we've just found? I've never seen a matrix being a basis element before. When we have basis vectors, we just put them as columns of a transformation matrix; what do we do with basis matrices for a transformation matrix? And how does this relate to the condition A ≠ λI?
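One way to see how "basis matrices" play the role of basis vectors (a Python/NumPy sketch, not from the thread itself; the sample A is an arbitrary non-scalar choice): flatten each 2x2 matrix into a length-4 vector. Then the images F(B11), F(B12), F(B21), F(B22) become the four columns of the 4x4 matrix representing F, exactly as with ordinary column vectors, and its rank is the dimension of the range.

```python
import numpy as np

# Represent F(X) = AX - XA in the basis {B11, B12, B21, B22} of M2:
# flatten each image F(Bij) and use it as a column, just as one would
# with ordinary basis vectors.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # sample non-scalar A (assumption)
F = lambda X: A @ X - X @ A

cols = []
for i in range(2):
    for j in range(2):
        B = np.zeros((2, 2))
        B[i, j] = 1.0                  # the basis matrix B_ij
        cols.append(F(B).ravel())      # flatten F(B_ij) to a length-4 vector
M = np.column_stack(cols)              # 4x4 matrix of F w.r.t. {B_ij}

# rank of M = dim of the range; 4 - rank = dim of the null space
print(np.linalg.matrix_rank(M))        # 2
```

With A = λI the commutator map is identically zero, so the rank drops to 0; that is exactly why the condition A ≠ λI is needed for the rank (and nullity) to be 2.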

Thanks for answering, HallsofIvy!
 