How can matrices be used as a basis for linear mappings?

In summary: the null space of the mapping F(X) = AX - XA is the set of matrices X that commute with A, and when A is not a scalar multiple of the identity, I and A themselves give a basis for it.
  • #1
nickthegreek
Hi. Define a linear mapping F: M2 --> M2 by F(X) = AX - XA for a fixed matrix A, and find a basis for the null space and the vector space (not sure if this is the term in English). Then I want to show that dim N(F) = dim V(F) = 2 for every A that is not of the form λI for some real λ. F(A) = F(E) = 0, so A and E belong to the null space. Then I define a basis for M2 as the 2x2 matrices B = (B11, B12, B21, B22), where Bij has a 1 in position (i,j) and 0's elsewhere. Well, this is how I am supposed to do it, but it confuses me.
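Spelling out the step F(A) = F(E) = 0, with E the identity: [itex]F(A)= AA- AA= 0[/itex] and [itex]F(E)= AE- EA= A- A= 0[/itex], so both A and E lie in the null space; and since A ≠ λE for every real λ, the matrices A and E are linearly independent, which already gives dim N(F) ≥ 2.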

How should I think about these basis matrices, for example with respect to linear independence? Say we take A to be the 2x2 matrix with entries (a, b, c, d) and apply F to the basis. We get 4 matrices F(Bij), and I want to sort out which of them are linearly independent, under the condition A ≠ λI. How do I show linear independence for matrices?
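One concrete way to test linear independence of matrices is to flatten each one into a length-4 coordinate vector (with respect to B11, B12, B21, B22) and look at the rank of the 4x4 matrix whose columns are those vectors. A minimal sketch of that idea, assuming Python with sympy (the symbolic A is just for illustration):

[code]
# Which of the images F(Bij) = A*Bij - Bij*A are linearly independent?
# Flatten each 2x2 matrix to a length-4 vector and compute the rank of the
# 4x4 matrix whose columns are those vectors.
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])          # generic A

def F(X):
    return A * X - X * A

# Standard basis B11, B12, B21, B22 of M2: a 1 in position (i, j), 0 elsewhere.
basis = [sp.Matrix(2, 2, lambda r, s: 1 if (r, s) == (i, j) else 0)
         for i in range(2) for j in range(2)]

# Columns of M are the flattened images F(Bij).
M = sp.Matrix.hstack(*[F(B).reshape(4, 1) for B in basis])
print(M.rank())   # expect 2 whenever A is not a scalar multiple of the identity
[/code]

For instance F(B22) = -F(B11) always holds, so at most two of the four images can be independent; rank 2 is exactly what rank-nullity predicts once the nullity is 2.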
 
  • #2
nickthegreek said:
Hi. Define a linear mapping F: M2 --> M2 by F(X) = AX - XA for a fixed matrix A, and find a basis for the null space and the vector space (not sure if this is the term in English).
"Nullspace" is perfectly good English. I'm not sure which "vectorspace" you mean but I suspect you mean the "range"- the sub-space that all matrices in M2 are mapped to by A. The dimension of the nullspace is the "nullity" of the linear mapping and the dimension of the range is its "rank". The "rank nullity theorem" for a linear mapping from U to V says that the rank and nullity add to the dimension of U.

Of course, X is in the null space if and only if AX= XA; in other words, the null space is the space of all matrices that commute with A. I'm not sure what it will look like in general, but you can "experiment" by looking at [itex]A= \begin{bmatrix}a & b \\ c & d \end{bmatrix}[/itex] and [itex]X= \begin{bmatrix}s & t \\ u & v\end{bmatrix}[/itex]; then you want to have [itex]AX= \begin{bmatrix}as+ bu & at+ bv \\ cs+ du & ct+ dv\end{bmatrix}= \begin{bmatrix}as+ ct & bs+ dt \\ au+ cv & bu+ dv\end{bmatrix}= XA[/itex]
so that we must have as+ bu= as+ ct, or just bu= ct; at+ bv= bs+ dt; cs+ du= au+ cv; and ct+ dv= bu+ dv, which again gives ct= bu. Suppose b ≠ 0 (the cases c ≠ 0 and b = c = 0 lead to the same conclusion). The first equation gives u= ct/b, the second gives b(v- s)= (d- a)t, i.e. v= s+ (d- a)t/b, and the third is then automatic. Taking t= 0 forces u= 0 and v= s, which gives [itex]\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}[/itex] as one basis vector; taking s= 0, t= 1 gives [itex]\begin{bmatrix}0 & 1 \\ c/b & (d-a)/b\end{bmatrix}[/itex] as another. Notice that the second one is just (A- aI)/b, so the null space is spanned by I and A, consistent with your observation that F(A)= F(E)= 0.
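A quick symbolic check of that basis (a minimal sketch, assuming Python with sympy; the division by b encodes the b ≠ 0 assumption):

[code]
# Verify that I and (A - a*I)/b commute with a generic A = [[a, b], [c, d]].
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
I = sp.eye(2)
X = (A - a * I) / b                     # equals Matrix([[0, 1], [c/b, (d - a)/b]])

for M in (I, X):
    # A*M - M*A simplifies to the zero matrix in both cases
    print((A * M - M * A).applyfunc(sp.simplify))
[/code]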

 
  • #3
I meant the column space, the range, yes! And I mixed up E and I; we use E for the identity matrix.

So, what we've done here (please check whether my line of thought is correct) is: written out the transformation for arbitrary matrices A and X, examined the null space by setting AX = XA, and found 2 basis matrices for it, so the nullity of the transformation is 2. By rank-nullity the rank is then 4 - 2 = 2 as well, so dim N(F) = dim C(F) = 2.
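To make that concrete, here is a minimal sketch (assuming Python with sympy; the particular A below is just an arbitrary non-scalar example) that writes F in the basis (B11, B12, B21, B22) as a 4x4 coordinate matrix and reads off the rank and nullity:

[code]
# Represent F(X) = A*X - X*A as a 4x4 matrix in the basis (B11, B12, B21, B22)
# and check that rank = nullity = 2 for a concrete A that is not a multiple of I.
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])          # arbitrary illustrative choice
basis = [sp.Matrix(2, 2, lambda r, s: 1 if (r, s) == (i, j) else 0)
         for i in range(2) for j in range(2)]

# Column k holds the coordinates of F(B_k) in that same basis.
M = sp.Matrix.hstack(*[(A * B - B * A).reshape(4, 1) for B in basis])

print("rank    =", M.rank())             # 2
print("nullity =", 4 - M.rank())         # 2
print(M.nullspace())                     # spans the same plane as the coordinates of I and A
[/code]

Each column of M is just the flattened image of one basis matrix, so checking linear independence of matrices reduces to an ordinary rank computation.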

I keep mixing things up. A is the matrix for the transformation, right? The one whose basis matrices we've just found? I've never seen a matrix used as a basis element before. When we have basis vectors, we just put them as columns of a transformation matrix; what do we do with basis matrices for a transformation matrix? And how does this relate to the condition A ≠ λI?

Thanks for answering, HallsofIvy!
 

1. What is a matrix?

A matrix is a rectangular array of numbers, symbols, or expressions arranged in rows and columns. It is commonly used in mathematics and science to represent data and perform mathematical operations.

2. How are matrices used as a basis?

A set of matrices can form a basis for a vector space of matrices: for example, the 2x2 matrices Bij with a 1 in position (i,j) and 0's elsewhere form the standard basis of M2. Such bases are used when solving systems of linear equations, describing linear transformations, and representing data in higher dimensions; they also appear in computer graphics and machine learning algorithms.

3. What are the benefits of using matrices as a basis?

Using matrices as a basis allows for efficient and organized manipulation of data and equations. Once a basis is fixed, a linear mapping on a space of matrices can be written as an ordinary coordinate matrix, which makes computations such as finding its rank and nullity more manageable and helps with visualizing and solving complex problems.

4. Can matrices be used for non-linear problems?

While matrices are primarily used for linear problems, they can also be extended to handle non-linear problems through techniques such as matrix calculus and optimization methods.

5. How can I learn more about using matrices as a basis?

There are many resources available online, such as tutorials, video lectures, and practice problems, that can help you learn more about using matrices as a basis. You can also consult a mathematician or take a course in linear algebra for a more in-depth understanding.
