# B Is identity matrix basis dependent?

1. Oct 14, 2016

### zonde

To me this seems a basic question, or even an obvious one, but as I am not a mathematician I would rather check.
Is it true that these two matrices are both identity matrices: $\begin{pmatrix}1&0\\0&1\end{pmatrix}$ and $\begin{pmatrix}\frac{1}{\sqrt2}&-\frac{1}{\sqrt2}\\\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{pmatrix}$? It's just that one of them would become diagonal with the right change of basis and the diagonal one would not be diagonal any more with that change of basis, right?

2. Oct 14, 2016

### Orodruin

Staff Emeritus
No. Only the first is an identity matrix. The second is a rotation matrix (a rotation by 45°).

3. Oct 14, 2016

### Staff: Mentor

As soon as you write down a matrix and interpret it as a linear function, you have already fixed a basis! Two, to be exact: one for the domain and one for the codomain.

4. Oct 14, 2016

### PeroK

Your question is based on a fundamental misunderstanding. The set of 2x2 matrices is a well-defined ring. It has a multiplicative identity element, which is:

$\begin{pmatrix}1&0\\0&1\end{pmatrix}$

Now, the set of linear transformations of $\mathbb{R}^2$ is also a ring, and it is isomorphic to the ring of 2x2 matrices. Given any basis for $\mathbb{R}^2$, every linear transformation maps to a specific 2x2 matrix. The mapping depends on the basis, but (as with all ring isomorphisms) the identity linear transformation must map to the identity matrix. In other words, the identity linear transformation is represented by the identity matrix in every basis.
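A quick numerical sketch of this point (the change-of-basis matrix B below is an arbitrary invertible choice of mine, not from the thread): conjugating the identity by any invertible B gives the identity back, while the rotation matrix from the original question changes its representation.

```python
import numpy as np

I = np.eye(2)
R = np.array([[1/np.sqrt(2), -1/np.sqrt(2)],
              [1/np.sqrt(2),  1/np.sqrt(2)]])  # the second matrix in the question

# An arbitrary invertible change-of-basis matrix (illustrative choice).
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B_inv = np.linalg.inv(B)

print(np.allclose(B_inv @ I @ B, I))  # True: the identity is basis independent
print(np.allclose(B_inv @ R @ B, R))  # False: the rotation's matrix depends on the basis
```

The first check succeeds for every invertible B, since $B^{-1} I B = B^{-1} B = I$.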

5. Oct 14, 2016

### zonde

I suppose my misunderstanding comes from thinking about matrices as a kind of set of vectors. For now I will try to absorb what you said.

6. Nov 4, 2016

### Skins

In order for an arbitrary matrix A to be an identity matrix we must have the condition that for all vectors x

Ax = xA = x.

The first is clearly a 2x2 identity matrix. The second isn't, since $Ax \neq x$.
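To see the $Ax \neq x$ claim concretely, here is a check with one test vector (my own illustrative choice):

```python
import numpy as np

# The second matrix from the question.
A = np.array([[1/np.sqrt(2), -1/np.sqrt(2)],
              [1/np.sqrt(2),  1/np.sqrt(2)]])
x = np.array([1.0, 0.0])

print(A @ x)                  # ≈ [0.707, 0.707], not equal to x
print(np.allclose(A @ x, x))  # False: A moves x, so A is not the identity
```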

7. Nov 4, 2016

### Orodruin

Staff Emeritus
Your equations do not make any sense whatsoever if x is not a square matrix of the same dimension as A.

8. Nov 4, 2016

### Skins

True, I should have specified that x must be of the correct dimension for multiplication by A to be possible; for the commutative law above to apply, x would have to be a square 2x2 matrix. I.e. if A is an identity matrix, then

Ax = xA = x iff the dimensions of A equal the dimensions of x.

Thanks for adding the clarification regarding the dimensions of x.

9. Nov 15, 2016

### zinq

"... the diagonal one would not be diagonal any more with that change of basis, right?"

Interesting question raised here.

As you may know: say that some vector space V with dim(V) = n has a specified basis A = {a1, ..., an}, and suppose we have a linear transformation

L: V → V​

that is represented by a square n x n matrix M acting on column vectors placed to its right, so that the kth column of M, which is the transpose of $(m_{k1}, \dots, m_{kn})$, represents the vector

$L(a_k) = m_{k1} a_1 + \cdots + m_{kn} a_n$

(Note that it is necessary to specify these things before we can make sense of questions about matrices, linear transformations, and change-of-basis matrices.)

Now suppose B = {b1, ..., bn} is another basis for V, and that we would now like to re-express the same linear transformation L in terms of this new basis.

Question 1: How do we do this? Answer: First we need to write each basis vector of the new basis B as a linear combination of the basis vectors of the old basis A. There is always some way to do this. That will give n linear coefficients of the A vectors for each one of the n B vectors, and these need to be arranged in a matrix: call it $C_{AB}$. (Sub-question: Exactly how should these $n^2$ numbers be arranged?) Now let $C_{AB}^{-1}$ denote the inverse of the matrix $C_{AB}$.

Then with respect to the new basis B, the linear transformation L can be expressed as the product of the three matrices:

$C_{AB}^{-1} M C_{AB}$

in that order. Now, if the matrix $C_{AB}$ is arranged correctly, we can apply this product to a column vector denoting an element v of V expressed with respect to the new basis B, and the result will be the column vector of L(v), also expressed in terms of the new basis B.
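The recipe above can be sketched numerically. The bases and the map below are my own illustrative choices; the point is the arrangement of $C_{AB}$ (new basis vectors as columns, for this column-vector convention) and the consistency of $C_{AB}^{-1} M C_{AB}$.

```python
import numpy as np

# Old basis A = standard basis; new basis B given by its vectors in A-coordinates.
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])

# C_AB has the new basis vectors as its columns (one way to answer the
# sub-question about arranging the n^2 coefficients).
C = np.column_stack([b1, b2])
C_inv = np.linalg.inv(C)

# Some linear map L in the old basis, e.g. a shear (illustrative).
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])

M_new = C_inv @ M @ C  # the matrix of L with respect to basis B

# Consistency check: apply L to b1 via the new representation, then convert back.
v_new = np.array([1.0, 0.0])   # b1 expressed in basis B
Lv_new = M_new @ v_new         # L(b1) in basis B
print(np.allclose(C @ Lv_new, M @ b1))  # True: same vector in old coordinates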

Question 2: Suppose D is any diagonal n x n matrix such that there exists some n x n invertible matrix C with the property that

$C^{-1} D C$

is not diagonal. What can be said about the matrix D ?

(Perhaps it's easier to think of the complementary question: Suppose D has the complementary property that for every invertible n x n matrix C, we have that

$C^{-1} D C$

is again a diagonal matrix. For which diagonal matrices D is this true?)
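An illustrative numerical instance of Question 2 (these particular D and C are my own choices, and the example only hints at the general answer rather than proving it):

```python
import numpy as np

# A diagonal D with distinct entries need not stay diagonal under conjugation.
D = np.diag([1.0, 2.0])
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C_inv = np.linalg.inv(C)
print(C_inv @ D @ C)  # an off-diagonal entry appears

# By contrast, a scalar multiple of the identity commutes with every C,
# so it is diagonal in every basis.
S = 3 * np.eye(2)
print(np.allclose(C_inv @ S @ C, S))  # True
```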

10. Nov 16, 2016

### lavinia

If you think of a matrix as two row vectors written out in terms of a basis, then rewriting the rows of the second matrix in terms of those same two row vectors would give you the identity back again. The first matrix, though, would no longer be the identity matrix.

If you think of a matrix as describing a linear transformation, then the identity transformation will be represented by the identity matrix in every basis: this is because the identity is the identity on every vector.

11. Nov 16, 2016

### zonde

Thanks for your answer. I got the idea that, taking these matrices as transformation matrices, there is only one identity matrix. I have a feeling that in the context where I came across them, they were not exactly transformation matrices (they are density matrices in quantum mechanics). Rather, they describe averages of squares of eigenvectors over a set of complex vectors. Well, something like that.

12. Nov 16, 2016

### WWGD

Consider a change of basis of the identity I (in whatever basis or representation you are using, or at least assuming the identity matrix you use is the one written above) into a matrix I', through the use of an invertible matrix B. Then:

$I'=BIB^{-1} =IBB^{-1}=I$

I hope I did not misunderstand your question.