MHB Matrix Rings - Exercise 1.1.4 (ii) - Berrick and Keating (B&K) - page 12

Summary
The discussion revolves around Exercise 1.1.4 (ii) from Berrick and Keating's book on matrix rings, specifically focusing on proving that a matrix A belongs to the center of the matrix ring M_n(R) if and only if A can be expressed as A = aI for some scalar a in the center of R. Participants explore the implications of A being in the center, leading to the conclusion that matrices of the form aI commute with all matrices in M_n(R). The proof is elaborated through several cases, demonstrating that only scalar multiples of the identity matrix by central elements of R are central. The conversation emphasizes the significance of understanding the structure of the center of a matrix ring in relation to the center of the underlying ring R.
Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I need help with Exercise 1.1.4 (ii) (Chapter 1: Basics, page 13) concerning matrix rings ... ...

Exercise 1.1.4 (ii) (page 13) reads as follows:
[Attachment 2984: statement of Exercise 1.1.4 (ii)]
I need help with Exercise (ii) above ... indeed I have not been able to progress beyond defining the terms of the problem as follows:

Now, the centre of a matrix ring $$M_n (R)$$ over a ring $$R$$ is as follows:

$$ Z(M_n (R)) = \{ A \in M_n (R) \ | \ AM = MA \ \ \forall \ M \in M_n (R) \}$$

We need to show that

$$A \in Z(M_n (R)) \Longleftrightarrow A = aI \text{ for some } a \in Z(R)$$

So, to begin, assume $$A \in Z(M_n (R))$$ ...

Then

$$A \in Z(M_n (R)) \Longrightarrow AM = MA \ \ \forall \ M \in M_n (R)$$

Now we know $$A$$ can be written

$$A = \sum_{h,i} a_{hi} e_{hi}$$ ...

BUT ... where to from here ...

Can someone please help?

***EDIT***

I have been reflecting on this exercise and the proof "the other way" seems easier - that is to show that:

$$ A = aI \text{ with } a \in Z(R) \Longrightarrow A \in Z(M_n (R)) $$

Assume $$A = aI$$ with $$a \in Z(R)$$

Then

$$A = aI \Longrightarrow AM = aIM = aM$$ for any $$M \in M_n (R)$$

Now $$aM = \begin{pmatrix} a m_{11} & a m_{12} & \cdots & a m_{1n} \\ a m_{21} & a m_{22} & \cdots & a m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a m_{n1} & a m_{n2} & \cdots & a m_{nn} \end{pmatrix}$$

But $$am_{ij} = m_{ij}a$$ since $$a \in Z(R)$$

Thus $$aM = \begin{pmatrix} m_{11}a & m_{12}a & \cdots & m_{1n}a \\ m_{21}a & m_{22}a & \cdots & m_{2n}a \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1}a & m_{n2}a & \cdots & m_{nn}a \end{pmatrix} = Ma$$

Thus we have $$AM = aIM = aM = Ma = M(aI) = MA$$ for every $$M \in M_n (R)$$, so $$A \in Z(M_n (R))$$.

Can someone please critique this proof of the fact that:

$$ A = aI \text{ with } a \in Z(R) \Longrightarrow A \in Z(M_n (R)) $$

Hope someone can help ... ...

Peter
 
It should be clear that if $A = aI$, then:

$AM = (aI)M = a(IM) = a(MI) = (aM)I$

now since $a \in Z(R)$, we have:

$\displaystyle aM = a\left(\sum_{h,i} m_{hi}e_{hi}\right) = \sum_{h,i} a(m_{hi}e_{hi})$

$\displaystyle = \sum_{h,i} (m_{hi}e_{hi})a = \left(\sum_{h,i} m_{hi}e_{hi}\right)a = Ma$,

writing $M = \sum_{h,i} m_{hi}e_{hi}$ and using $am_{hi} = m_{hi}a$ for each entry.

Hence: $(aM)I = (Ma)I = M(aI) = MA$.

So the center of the matrix ring clearly contains all such matrices.

Of course, this is the "easy part".

To do the other way, I find it helpful to consider two cases.

Case 1: $A$ is diagonal, but not equal to $aI$ for any $a \in Z(R)$.

Case 1a: $A = bI$ for some $b \not \in Z(R)$. Then there is some $c \in R$ with $bc \neq cb$.

Then $A(cI) = (bI)(cI) = (bc)I \neq (cb)I = (cI)(bI) = (cI)A$, so $A$ does not commute with $cI$.
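(For a concrete instance of this case, one could take $R$ to be the ring of real quaternions with $b = i$ and $c = j$: then $bc = k$ while $cb = -k$.)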

Case 1b: $A = \text{diag}(a_1,a_2,\dots,a_n)$ with $a_i \neq a_j$ for some $i \neq j$.

In this case, we have (I am going to use capital letters for the $e$ matrices to underscore that these are matrices, not ring-elements):

$A(E_{ij}) = a_iE_{ij}$ whereas:

$(E_{ij})A = a_jE_{ij}$, and these are unequal (since $a_i \neq a_j$), so $A$ does not commute with $E_{ij}$.
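For concreteness, here is how this works out for $n = 2$ with $A = \text{diag}(a_1, a_2)$:

$$A E_{12} = \begin{pmatrix} a_1 & 0 \\ 0 & a_2 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & a_1 \\ 0 & 0 \end{pmatrix}, \qquad E_{12} A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} a_1 & 0 \\ 0 & a_2 \end{pmatrix} = \begin{pmatrix} 0 & a_2 \\ 0 & 0 \end{pmatrix},$$

and the two products differ in the $(1,2)$-entry whenever $a_1 \neq a_2$.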

Case 2: $A$ is not diagonal, so there are indices $i \neq j$ with $a_{ij} \neq 0$.

If we denote $A(E_{jk})$ by $B$, we have:

$\displaystyle b_{ik} = \sum_n a_{in}e_{nk} = a_{ij} \neq 0$, since $e_{nk}$, the $(n,k)$-entry of $E_{jk}$, is $1$ for $n = j$ and $0$ otherwise.

Similarly, if we denote $(E_{jk})A$ by $C$, we have:

$\displaystyle c_{ik} = \sum_n e_{in}a_{nk} = 0$, since $e_{in}$, the $(i,n)$-entry of $E_{jk}$, is $0$ for every $n$ because $i \neq j$.

Since $B$ and $C$ differ in the $i,k$-entry, they cannot be equal. Hence $A$ does not commute with $E_{jk}$.
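Again for concreteness, a small $n = 2$ instance of this argument (taking $i = 1$, $j = 2$, $k = 1$, so $a_{12} \neq 0$):

$$A E_{21} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} a_{12} & 0 \\ a_{22} & 0 \end{pmatrix}, \qquad E_{21} A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ a_{11} & a_{12} \end{pmatrix},$$

and the two products differ in the $(1,1)$-entry because $a_{12} \neq 0$.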

So ONLY those matrices of the form $aI$ with $a \in Z(R)$ are in the center (for every other matrix, we have exhibited at least one matrix it doesn't commute with).

Since the mapping $Z(R) \to Z(M_n(R))$ given by $a \mapsto aI$ is a ring-monomorphism, it is common practice to identify these two objects, and say $Z(R)$ IS the center $Z(M_n(R))$. This essentially allows us to obtain $M_n(R)$ as an extension ring of $Z(R)$, something that becomes important when $R$ is a field.
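As a quick sanity check of these computations, here is a minimal SymPy sketch (purely illustrative; the symbol names and the choice $n = 2$ are just for the example). It treats the entries of $M$ as noncommutative symbols standing for elements of $R$, and $a$ as a commutative symbol standing in for an element of $Z(R)$:

```python
# Minimal SymPy sketch of the n = 2 case: entries of M are noncommutative
# symbols (standing for elements of R), while a is an ordinary commutative
# symbol (standing for an element of Z(R)).
import sympy as sp

a = sp.Symbol('a')                                    # central scalar
m11, m12, m21, m22 = sp.symbols('m11 m12 m21 m22', commutative=False)
a1, a2 = sp.symbols('a1 a2', commutative=False)       # distinct diagonal entries

M = sp.Matrix([[m11, m12], [m21, m22]])               # arbitrary matrix over R
A = a * sp.eye(2)                                     # the scalar matrix aI
E12 = sp.Matrix([[0, 1], [0, 0]])                     # matrix unit E_{12}
D = sp.diag(a1, a2)                                   # diagonal but not scalar

# Forward direction: aI commutes with every M (a commutes with each entry).
print((A * M - M * A).applyfunc(sp.expand) == sp.zeros(2, 2))   # True

# Case 1b: diag(a1, a2) fails to commute with E_{12}:
# D*E12 carries a1 in the (1,2) slot, E12*D carries a2 there.
print(D * E12)   # Matrix([[0, a1], [0, 0]])
print(E12 * D)   # Matrix([[0, a2], [0, 0]])
```

Here the commutativity of the symbol $a$ with the noncommutative entry symbols is exactly what plays the role of $a \in Z(R)$ in the argument above.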
 
Deveno said: ...

Thanks Deveno ... Appreciate your help ...

Will now be working through the details of your post ...

Thanks again,

Peter
 