# Matrix Transform

Hi,
Suppose I have an n x n matrix A. (If needed, it can be assumed invertible.) I can perform a transform on the matrix in the following way:
D = C*A*C^-1, where C can be chosen to be any n x n invertible matrix.
Does this transform have a meaning that can be easily understood or visualized?
What space is covered by the possible values of D for a given A?
What is the minimal set of matrices A, parametrized by as few parameters as possible, which covers all possible matrices D?
The 2x2 case is of particular interest, but a general answer would certainly be useful.

Thanks

marcusl
Gold Member
Sounds like a homework problem. Please show your attempt at a solution.

> Sounds like a homework problem. Please show your attempt at a solution.

This is not a homework problem.
I noticed that I formulated it the way homework/exam questions are often formulated, with multiple parts, but that's just because I am trying to fully understand what is going on here.

> Hi,
> Suppose I have an n x n matrix A. (If needed, it can be assumed invertible.) I can perform a transform on the matrix in the following way:
> D = C*A*C^-1, where C can be chosen to be any n x n invertible matrix.
> Does this transform have a meaning that can be easily understood or visualized?
This is the usual similarity transformation: you can see both A and D as representations of the same linear map with respect to different bases. Say A is the representation with respect to a basis B, and D is the representation with respect to a basis E. Then the columns of C are the vectors of the basis B expressed in the basis E.
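As a quick numerical check (a sketch in pure Python, with a made-up 2x2 matrix A and a made-up invertible C), conjugation D = C*A*C^-1 leaves basis-independent quantities such as the trace and the determinant unchanged:

```python
from fractions import Fraction

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Invert a 2x2 integer matrix exactly, using Fractions."""
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[Fraction(X[1][1], d), Fraction(-X[0][1], d)],
            [Fraction(-X[1][0], d), Fraction(X[0][0], d)]]

# Arbitrary example matrices (both made up for this demo).
A = [[2, 1], [0, 3]]
C = [[1, 2], [1, 3]]

D = matmul(matmul(C, A), inv2(C))  # D = C * A * C^-1

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]
print(trace(A), trace(D))  # both 5
print(det(A), det(D))      # both 6
```

Trace and determinant are the coefficients of the characteristic polynomial in the 2x2 case, so their invariance is exactly the statement that A and D describe the same linear map in different bases.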
> What space is covered by the possible values of D for a given A?
This "space" is not a vector space. It is the set of all matrices that have the same Jordan normal form as A (up to the ordering of the Jordan blocks).
> What is the minimal set of matrices A, parametrized by as few parameters as possible, which covers all possible matrices D?
This set consists of one representative matrix for each possible Jordan normal form of an n x n matrix.
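A minimal sketch of this in the diagonalizable 2x2 case (with made-up matrices): the class representative can be taken diagonal, with the eigenvalues on the diagonal, and any conjugate of A has the same eigenvalues.

```python
import math

def eig2(X):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial
    t^2 - tr(X)*t + det(X) (assumes real eigenvalues, for simplicity)."""
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

A = [[2, 1], [0, 3]]    # a made-up matrix with distinct eigenvalues 2 and 3
D = [[-1, 3], [-4, 6]]  # C*A*C^-1 for C = [[1, 2], [1, 3]], worked out by hand

# Both belong to the similarity class whose representative is diag(2, 3).
print(eig2(A))  # [2.0, 3.0]
print(eig2(D))  # [2.0, 3.0]
```

So, for diagonalizable 2x2 matrices, the class is parametrized by just the two eigenvalues; the non-diagonalizable classes are covered by the 2x2 Jordan block with a single repeated eigenvalue.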

Last edited:
Thanks.
If A is invertible, I should be able to find a C that transforms it into the identity matrix, right?
C*A*C^-1 = I
Multiplying this by C^-1 from the left and by C from the right gives:
A = I
What went wrong?

Mark44
Mentor
> Thanks.
> If A is invertible, I should be able to find a C that transforms it into the identity matrix, right?
> C*A*C^-1 = I
No, you don't necessarily get the identity matrix. What you get under certain conditions is a diagonal matrix, one whose entries off the main diagonal are zero.
> Multiplying this by C^-1 from the left and by C from the right gives:
> A = I
> What went wrong?

> It is the set composed of all matrices that have the same rank as A.

> No, you don't necessarily get the identity matrix. What you get under certain conditions is a diagonal matrix, one whose entries off the main diagonal are zero.

Don't these two claims contradict each other? If I can transform A into any matrix of the same rank, then, if A has full rank, shouldn't I be able to transform it into the identity matrix?

You are right; I was wrong. I edited my post, and now it should be correct. Are you familiar with the Jordan normal form of a matrix?
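A minimal sketch of the point settled here (with made-up 2x2 matrices): rank alone cannot pin down a similarity class, because conjugation also preserves the trace. Both matrices below are invertible (rank 2), yet no invertible C can satisfy C*A*C^-1 = B.

```python
A = [[1, 0], [0, 1]]   # the identity: its only conjugate is itself
B = [[2, 0], [0, 1]]   # also rank 2, but with a different trace

trace = lambda X: X[0][0] + X[1][1]
print(trace(A), trace(B))  # 2 3 -- different, so A and B are not similar
```

This is why the similarity classes are indexed by the Jordan normal form rather than by the rank: the Jordan form encodes all the conjugation invariants at once.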

HallsofIvy