Regular representations of finite-dimensional algebras

Math Amateur
I am reading "Introduction to Ring Theory" by P. M. Cohn (Springer Undergraduate Mathematics Series)

In Chapter 2: Linear Algebras and Artinian Rings we read the following on page 57:
https://www.physicsforums.com/attachments/3149

I am trying to gain an understanding of representations. I would welcome a simple example of representations of algebras, as this would help a great deal …

Further, Exercise 2.1 (4) reads as follows:

"Verify that the regular representation (2.10 - see above text) is a homomorphism of $$A$$ as a right $$A$$-module.

How is the matrix $$( \rho_{ij} (a) )$$ affected by a change of basis?"

Can someone please help me get started on this problem?

Peter
 
Peter said:
I am trying to gain an understanding of representations. I would welcome a simple example of representations of algebras as this would help a great deal …
One very simple example would be the complex numbers, considered as a two-dimensional algebra over the field of real numbers.

If we take $\{1,i\}$ as a basis for $\mathbb{C}$ then every element of $\mathbb{C}$ can be expressed in the form $x+iy$ with $x,y \in\mathbb{R}$. Given an element $a+ib \in\mathbb{C}$, the right multiplication $\rho_{a+ib}$ is the map $x+iy \mapsto (x+iy)(a+ib) = (xa-yb) + i(xb+ya)$.

Now use the basis $\{1,i\}$ to represent the complex number $x+iy$ by the vector $[x,y]$. Then the equation $\rho_{a+ib}(x+iy) = (xa-yb) + i(xb+ya)$ corresponds to the matrix equation $$[xa-yb,\,xb+ya] = [x,\,y]\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$$ Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$

To verify that the regular representation is a multiplicative mapping in this example, notice that the product of complex numbers $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ corresponds to the matrix equation $\begin{bmatrix}a&b\\ -b&a \end{bmatrix} \begin{bmatrix}c&d\\ -d&c \end{bmatrix} = \begin{bmatrix}ac -bd&ad+bc\\ -(ad+bc)&ac-bd \end{bmatrix} .$
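The multiplicativity check above can also be run numerically. This is a minimal sketch in Python (numpy), using the matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}$ and the row-vector convention from the example; the function name `rho` is my own:

```python
import numpy as np

def rho(z):
    """Right regular representation of z = a + ib with respect to the
    basis {1, i}: the 2x2 real matrix [[a, b], [-b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, b],
                     [-b, a]])

z, w = 3 + 2j, 1 - 5j

# rho is multiplicative: rho(z*w) equals rho(z) @ rho(w)
assert np.allclose(rho(z * w), rho(z) @ rho(w))

# The row vector [x, y] for x + iy, multiplied on the right by rho(z),
# gives the coordinates of the product (x + iy)*z.
x = 4 - 1j
row = np.array([x.real, x.imag])
assert np.allclose(row @ rho(z), [(x * z).real, (x * z).imag])
```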
 
Opalg said:
One very simple example would be the complex numbers, considered as a two-dimensional algebra over the field of real numbers.

If we take $\{1,i\}$ as a basis for $\mathbb{C}$ then every element of $\mathbb{C}$ can be expressed in the form $x+iy$ with $x,y \in\mathbb{R}$. Given an element $a+ib \in\mathbb{C}$, the right multiplication $\rho_{a+ib}$ is the map $x+iy \mapsto (x+iy)(a+ib) = (xa-yb) + i(xb+ya)$.

Now use the basis $\{1,i\}$ to represent the complex number $x+iy$ by the vector $[x,y]$. Then the equation $\rho_{a+ib}(x+iy) = (xa-yb) + i(xb+ya)$ corresponds to the matrix equation $$[xa-yb,\,xb+ya] = [x,\,y]\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$$ Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$

To verify that the regular representation is a multiplicative mapping in this example, notice that the product of complex numbers $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ corresponds to the matrix equation $\begin{bmatrix}a&b\\ -b&a \end{bmatrix} \begin{bmatrix}c&d\\ -d&c \end{bmatrix} = \begin{bmatrix}ac -bd&ad+bc\\ -(ad+bc)&ac-bd \end{bmatrix} .$

Thanks so much for the helpful example, Opalg ...

Just working through the detail now …

Thanks again … examples are so helpful!

Peter
 
A further question I have regarding representations is as follows. In the text above we find the following:

"Given a finite-dimensional algebra $$A$$ over a field $$k$$, of dimension $$n$$, say, consider the right multiplication by an element $$a \in A$$:

$$\rho_a \ : \ x \mapsto xa $$ where $$x \in A$$ … … … (2.10)

This is a linear transformation of $$A$$ and so can be represented by an $$n \times n$$ matrix over $$k$$. Thus we have a mapping $$\rho \ : \ A \to k_n$$, and this is easily seen to be a homomorphism. … … "
I cannot see how (2.10) leads to a mapping of the form $$\rho \ : \ A \to k_n$$. Can someone please explain how this follows? (I am not sure I really understand the notation!)

Peter
 
Peter said:
A further question I have regarding representations is as follows. In the text above we find the following:

"Given a finite-dimensional algebra $$A$$ over a field $$k$$, of dimension $$n$$, say, consider the right multiplication by an element $$a \in A$$:

$$\rho_a \ : \ x \mapsto xa $$ where $$x \in A$$ … … … (2.10)

This is a linear transformation of $$A$$ and so can be represented by an $$n \times n$$ matrix over $$k$$. Thus we have a mapping $$\rho \ : \ A \to k_n$$, and this is easily seen to be a homomorphism. … … "
I cannot see how (2.10) leads to a mapping of the form $$\rho \ : \ A \to k_n$$. Can someone please explain how this follows? (I am not sure I really understand the notation!)

Peter
I assume that $k_n$ is Cohn's notation for the set of $n\times n$ matrices over $k$. The procedure for representing the map $\rho_a$ by a matrix is just the standard method for associating a matrix with a linear transformation of a finite-dimensional vector space.

Given a linear transformation $T: V\to V$ from an $n$-dimensional vector space $V$ to itself, you first choose a basis $\{e_1,e_2,\ldots,e_n\}$ for $V$. Then you express each of the vectors $Te_1,Te_2,\ldots,Te_n$ as a linear combination of $e_1,\ldots,e_n$, and you use the coefficients in those combinations to form the rows (or is it the columns? – see Note below) of an $n\times n$ matrix. That is the procedure that Cohn is using to construct the regular representation of $A$.

In that extract from Cohn's book, the need to use a basis for $A$ in order to construct the matrix representation $\rho$ is explained in the paragraph leading up to equation (2.12).

[Note. In my previous comment, I initially had the rows and columns of the matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}$ the wrong way round (I have changed it now). My excuse is that as an analyst I am used to using the left regular representation, which in this example leads to the transpose matrix. Algebraists traditionally prefer the right regular representation (as Cohn does), which I find confusing.]
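The procedure described above (express each $e_i a$ in the basis and use the coefficients as rows) can be sketched in code. This is a minimal illustration, assuming we are handed the algebra's multiplication and a way to read off coordinates; all function names here are my own, not Cohn's:

```python
import numpy as np

def right_mult_matrix(a, basis, mult, coords):
    """Matrix of rho_a : x -> x*a with respect to `basis`.
    `mult(x, y)` multiplies two algebra elements; `coords(x)` returns
    the coordinate row of x in `basis`.  Row i holds the coefficients
    of basis[i] * a, following the procedure described above."""
    return np.array([coords(mult(e, a)) for e in basis])

# The complex numbers as a 2-dimensional real algebra, basis {1, i}:
basis = [1, 1j]
mult = lambda x, y: x * y
coords = lambda z: [z.real, z.imag]

M = right_mult_matrix(3 + 2j, basis, mult, coords)
# Reproduces the matrix [[a, b], [-b, a]] with a = 3, b = 2:
assert np.allclose(M, [[3, 2], [-2, 3]])
```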
 
Opalg said:
One very simple example would be the complex numbers, considered as a two-dimensional algebra over the field of real numbers.

If we take $\{1,i\}$ as a basis for $\mathbb{C}$ then every element of $\mathbb{C}$ can be expressed in the form $x+iy$ with $x,y \in\mathbb{R}$. Given an element $a+ib \in\mathbb{C}$, the right multiplication $\rho_{a+ib}$ is the map $x+iy \mapsto (x+iy)(a+ib) = (xa-yb) + i(xb+ya)$.

Now use the basis $\{1,i\}$ to represent the complex number $x+iy$ by the vector $[x,y]$. Then the equation $\rho_{a+ib}(x+iy) = (xa-yb) + i(xb+ya)$ corresponds to the matrix equation $$[xa-yb,\,xb+ya] = [x,\,y]\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$$ Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$

To verify that the regular representation is a multiplicative mapping in this example, notice that the product of complex numbers $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ corresponds to the matrix equation $\begin{bmatrix}a&b\\ -b&a \end{bmatrix} \begin{bmatrix}c&d\\ -d&c \end{bmatrix} = \begin{bmatrix}ac -bd&ad+bc\\ -(ad+bc)&ac-bd \end{bmatrix} .$

Hi Opalg,

Just a clarification … …

In your post above you write:

" … … Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$ … … … "

However, Cohn writes that the right multiplication $$\rho_a$$ is called a regular representation, and $$\rho_a$$ maps $$x$$ to $$xa$$, not $$a$$ to a matrix … can you clarify?

Peter
 
Opalg said:
I assume that $k_n$ is Cohn's notation for the set of $n\times n$ matrices over $k$. The procedure for representing the map $\rho_a$ by a matrix is just the standard method for associating a matrix with a linear transformation of a finite-dimensional vector space.

Given a linear transformation $T: V\to V$ from an $n$-dimensional vector space $V$ to itself, you first choose a basis $\{e_1,e_2,\ldots,e_n\}$ for $V$. Then you express each of the vectors $Te_1,Te_2,\ldots,Te_n$ as a linear combination of $e_1,\ldots,e_n$, and you use the coefficients in those combinations to form the rows (or is it the columns? – see Note below) of an $n\times n$ matrix. That is the procedure that Cohn is using to construct the regular representation of $A$.

In that extract from Cohn's book, the need to use a basis for $A$ in order to construct the matrix representation $\rho$ is explained in the paragraph leading up to equation (2.12).

[Note. In my previous comment, I initially had the rows and columns of the matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}$ the wrong way round (I have changed it now). My excuse is that as an analyst I am used to using the left regular representation, which in this example leads to the transpose matrix. Algebraists traditionally prefer the right regular representation (as Cohn does), which I find confusing.]
Hi Opalg … thanks for the help … …

You write:

"I assume that $k_n$ is Cohn's notation for the set of $n\times n$ matrices over $k$. The procedure for representing the map $\rho_a$ by a matrix is just the standard method for associating a matrix with a linear transformation of a finite-dimensional vector space."

Indeed … I checked carefully in Cohn's text and you are correct … …

Just reflecting on the rest of your post, now … …

Peter
 
Peter said:
Hi Opalg,

Just a clarification … …

In your post above you write:

" … … Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$ … … … "

However, Cohn writes that the right multiplication $$\rho_a$$ is called a regular representation, and $$\rho_a$$ maps $$x$$ to $$xa$$, not $$a$$ to a matrix … can you clarify?

Peter
It's true that the terminology gets a bit slippery here. The map $a\mapsto \rho_a$ takes $a\in A$ to $\rho_a$, which is a linear transformation of $A$. However, a linear transformation (of a finite-dimensional space) is often described by a matrix, and Cohn goes on to say "Such a homomorphism into a full matrix ring is called a matrix representation or simply a representation of $A$". So at that stage Cohn is identifying the linear transformation $\rho_a$ with its associated matrix. More precisely, he is defining a map $\rho:A\to k_n$ by saying that $\rho(a)$ is the matrix associated with the linear transformation $\rho_a$.
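On the change-of-basis part of the original exercise: if the new basis vectors are $e_i' = \sum_j p_{ij} e_j$ with $P = (p_{ij})$, then (with the row conventions used in this thread) the matrix of $\rho_a$ in the new basis is the similar matrix $P \rho(a) P^{-1}$. Here is a minimal numerical sketch of that claim for $\mathbb{C}$ over $\mathbb{R}$, using $\{1+i,\,1-i\}$ as an illustrative second basis; the function `rho` is my own name:

```python
import numpy as np

def rho(z, basis):
    """Matrix of right multiplication by z in a given real basis of C.
    Row i holds the coordinates of basis[i]*z, expressed in that basis."""
    B = np.array([[e.real, e.imag] for e in basis])   # basis in {1, i} coords
    rows = np.array([[(e * z).real, (e * z).imag] for e in basis])
    return rows @ np.linalg.inv(B)                    # convert back to `basis`

std = [1, 1j]            # the standard basis {1, i}
new = [1 + 1j, 1 - 1j]   # another basis of C over R

P = np.array([[1, 1],    # coordinates of the new basis vectors
              [1, -1]])  # with respect to {1, i}
z = 3 + 2j

# The matrix in the new basis is the conjugate P * rho(z) * P^{-1}:
assert np.allclose(rho(z, new), P @ rho(z, std) @ np.linalg.inv(P))
```

So the answer to the exercise is that a change of basis replaces $(\rho_{ij}(a))$ by a similar matrix, conjugated by the change-of-basis matrix.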
 
