How can matrices be used as a basis for linear mappings?

  • Thread starter: nickthegreek
  • Tags: Basis, Matrices
SUMMARY

This discussion centers on defining a linear mapping F: M2 → M2 by F(X) = AX − XA for a fixed matrix A, and determining bases for the nullspace and the range (column space) of this mapping. The participants conclude that the dimensions of the nullspace and the range are both equal to 2, provided that A is not a scalar multiple of the identity matrix (A ≠ λI). They establish that the nullspace consists of exactly the matrices that commute with A, identify two basis vectors for this space, and confirm that the rank-nullity theorem applies in this context.

PREREQUISITES
  • Understanding of linear mappings and matrix operations
  • Familiarity with nullspace and column space concepts
  • Knowledge of the rank-nullity theorem
  • Basic matrix theory, particularly regarding 2x2 matrices
NEXT STEPS
  • Study the properties of linear mappings in matrix algebra
  • Explore the rank-nullity theorem in greater detail
  • Investigate the conditions under which matrices commute
  • Learn about basis vectors and their role in transformations
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in the application of matrices in linear mappings and transformations.

nickthegreek
Hi. Define a linear mapping F: M2 → M2 by F(X) = AX − XA for a matrix A, and find a basis for the nullspace and the vector space (not sure if this is the right term in English). Then I want to show that dim N(F) = dim V(F) = 2 for all A with A ≠ λI for any real λ. F(A) = F(E) = 0, so A and E belong to the nullspace. Then I define a basis for M2 as the 2x2 matrices B11, B12, B21, B22, where Bij has a 1 at position (i, j) and 0s elsewhere. Well, this is how I am supposed to do it, but it confuses me.

How should I view the basis matrices, for example with respect to linear independence? Let's say we define A to be the 2x2 matrix with elements (a, b, c, d) and map the basis matrices with F. We get 4 matrices F(Bij), and I want to sort out which ones are linearly independent, under the condition A ≠ λI. How do I show linear independence for matrices?
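To make the question concrete, here is a small sketch (with a hypothetical sample A, not taken from the thread) that computes the four images F(Bij) and checks one dependence that holds for every A: since F(E) = F(B11 + B22) = 0 by linearity, F(B22) is always −F(B11).

```python
def matmul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def F(A, X):
    # the mapping from the thread: F(X) = AX - XA
    AX, XA = matmul(A, X), matmul(X, A)
    return [[AX[i][j] - XA[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]   # hypothetical sample A with A != lambda*I
B11 = [[1, 0], [0, 0]]
B12 = [[0, 1], [0, 0]]
B21 = [[0, 0], [1, 0]]
B22 = [[0, 0], [0, 1]]

for name, B in [("B11", B11), ("B12", B12), ("B21", B21), ("B22", B22)]:
    print(name, "->", F(A, B))

# Since F(E) = F(B11 + B22) = 0 by linearity, F(B22) must equal -F(B11):
neg = [[-x for x in row] for row in F(A, B11)]
print(F(A, B22) == neg)   # True
```

Similarly, F(A) = 0 gives a second relation a·F(B11) + b·F(B12) + c·F(B21) + d·F(B22) = 0, so at most two of the four images can be linearly independent.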
 
nickthegreek said:
Hi. Define a linear mapping F: M2 → M2 by F(X) = AX − XA for a matrix A, and find a basis for the nullspace and the vector space (not sure if this is the right term in English).
"Nullspace" is perfectly good English. I'm not sure which "vector space" you mean, but I suspect you mean the "range": the subspace that all matrices in M2 are mapped into by F. The dimension of the nullspace is the "nullity" of the linear mapping, and the dimension of the range is its "rank". The "rank-nullity theorem" for a linear mapping from U to V says that the rank and nullity add up to the dimension of U.

Of course, X is in the nullspace if and only if AX = XA; in other words, the nullspace is the space of all matrices that commute with A. You can "experiment" by taking A= \begin{bmatrix}a & b \\ c & d \end{bmatrix} and X= \begin{bmatrix}s & t \\ u & v\end{bmatrix}, and then you want AX= \begin{bmatrix}as+ bu & at+ bv \\ cs+ du & ct+ dv\end{bmatrix}= \begin{bmatrix}as+ ct & bs+ dt \\ au+ cv & bu+ dv\end{bmatrix}= XA,
so we must have as+ bu= as+ ct, or just bu= ct; at+ bv= bs+ dt; cs+ du= au+ cv; and ct+ dv= bu+ dv, which gives ct= bu again. If t= u= 0, the remaining equations become b(s- v)= 0 and c(s- v)= 0, so as long as b or c is nonzero we get s= v, and X is a multiple of the identity. That gives \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix} as one basis vector. For a second one, note that A itself trivially commutes with A, so A is in the nullspace; and since A ≠ λI, the matrices I and A are linearly independent, so they form a basis of the nullspace.
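The conclusion above can be checked numerically. A minimal sketch, assuming a sample A that is not a multiple of the identity (the matrix chosen here is hypothetical), verifying that both E and A are sent to zero by F:

```python
def matmul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def F(A, X):
    # F(X) = AX - XA
    AX, XA = matmul(A, X), matmul(X, A)
    return [[AX[i][j] - XA[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]   # sample A, not a multiple of the identity
E = [[1, 0], [0, 1]]   # identity matrix (E in the thread's notation)
Z = [[0, 0], [0, 0]]

print(F(A, E) == Z)   # True: E commutes with every A
print(F(A, A) == Z)   # True: A commutes with itself
```

Since A has nonzero off-diagonal entries here, A is clearly not λE, so E and A are linearly independent and span a two-dimensional subspace of the nullspace.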

Then I want to show that dim N(F) = dim V(F) = 2 for all A with A ≠ λI for any real λ. F(A) = F(E) = 0, so A and E belong to the nullspace. Then I define a basis for M2 as the 2x2 matrices B11, B12, B21, B22, where Bij has a 1 at position (i, j) and 0s elsewhere. Well, this is how I am supposed to do it, but it confuses me.

How should I view the basis matrices, for example with respect to linear independence? Let's say we define A to be the 2x2 matrix with elements (a, b, c, d) and map the basis matrices with F. We get 4 matrices F(Bij), and I want to sort out which ones are linearly independent, under the condition A ≠ λI. How do I show linear independence for matrices?
 
I meant the column vector space, the range, yes! And I mixed up E and I; we use E for the identity matrix.

So, what we've done here (could you check if my line of thought is correct?) is: applied the transformation to arbitrary matrices A and X, examined the nullspace by setting AX = XA, and shown that the nullity of the transformation is 2. Furthermore, we've found 2 basis vectors for the nullspace, and the rank of the transformation is also 2, so dim N(F) = dim C(F) = 2.

I keep mixing things up: A is the matrix of the transformation, right? Which has the basis vectors we've just shown? I've never seen a matrix being a basis vector before. When we have basis vectors, we just put them as columns of a transformation matrix; what do we do with basis matrices for a transformation matrix? How does this relate to the condition A ≠ λI?

Thx for answering HallsofIvy!
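As for what to do with basis matrices: one flattens each F(Bij) into its coordinate vector with respect to B11, B12, B21, B22 and puts those vectors in as the columns of a 4x4 matrix, exactly as with ordinary basis vectors. A sketch (the sample A is hypothetical, and exact rational arithmetic is used for the row reduction) that builds this matrix and confirms rank = nullity = 2:

```python
from fractions import Fraction

def matmul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def F(A, X):
    # F(X) = AX - XA
    AX, XA = matmul(A, X), matmul(X, A)
    return [[AX[i][j] - XA[i][j] for j in range(2)] for i in range(2)]

def vec(M):
    # coordinates of a 2x2 matrix w.r.t. the basis B11, B12, B21, B22
    return [M[0][0], M[0][1], M[1][0], M[1][1]]

def rank(rows):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(4):
        piv = next((i for i in range(r, 4) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(4):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [3, 4]]   # sample A, not a scalar multiple of the identity
basis = [[[1, 0], [0, 0]], [[0, 1], [0, 0]],
         [[0, 0], [1, 0]], [[0, 0], [0, 1]]]

# columns of the 4x4 matrix of F are vec(F(Bij))
cols = [vec(F(A, B)) for B in basis]
matF = [[cols[j][i] for j in range(4)] for i in range(4)]  # columns -> matrix

r = rank(matF)
print("rank =", r, " nullity =", 4 - r)   # rank-nullity: rank + nullity = 4
```

So A is not itself the matrix of the transformation; the matrix of F is this 4x4 matrix built from A. When A = λI, F is the zero map and this matrix has rank 0, which is why the condition A ≠ λI is needed for rank 2.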
 
