What Is the Structure of Left Ideals in Simple Rings?

  • Thread starter: gerben
  • Tags: Rings
gerben
I read the following on the Wikipedia page about simple rings (http://en.wikipedia.org/wiki/Simple_ring):
Let D be a division ring and M(n,D) be the ring of matrices with entries in D. It is not hard to show that every left ideal in M(n,D) takes the following form:

{M ∈ M(n,D) | the n₁,…,nₖ-th columns of M have zero entries},

for some fixed {n₁,…,nₖ} ⊂ {1,…,n}.

I do not see why this is the case. Take the ring of 3 by 3 matrices over the real numbers and the left ideal, J, generated by:

\begin{pmatrix}
0 & 1 & 1\\
0 & 0 & 0\\
0 & 0 & 0
\end{pmatrix}

then J is not equal to S = {M ∈ M(3,ℝ) | the 1st column of M has zero entries},
since for example the following matrix is in S but not in J:

\begin{pmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{pmatrix}

Can anybody help me understand what the wikipedia page is trying to say, or where I am seeing things wrong?
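(For concreteness, here is a quick computation of J; a sketch, using only the definition of a left ideal: the set {BA | B ∈ M(3,ℝ)} is already closed under addition, so it is the whole left ideal generated by A. For any B,

BA = B\begin{pmatrix}
0 & 1 & 1\\
0 & 0 & 0\\
0 & 0 & 0
\end{pmatrix}
= \begin{pmatrix}
0 & b_{11} & b_{11}\\
0 & b_{21} & b_{21}\\
0 & b_{31} & b_{31}
\end{pmatrix},

so J consists of exactly those matrices whose first column is zero and whose second and third columns are equal; in particular J is strictly smaller than S.)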
 
This is a bit earlier on the same Wikipedia page:
the full matrix ring over a field does not have any nontrivial ideals (since any ideal of M(n,R) is of the form M(n,I) with I an ideal of R), but has nontrivial left ideals (namely, the sets of matrices which have some fixed zero columns).
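(A quick check that these sets really are left ideals, using only the definitions: if the j-th column of M is zero, then the j-th column of BM is B times the j-th column of M, hence zero, and sums of such matrices keep the same zero columns. So {M | the n₁,…,nₖ-th columns of M are zero} is closed under addition and under left multiplication by arbitrary matrices.)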

Can somebody shed some light on what is so important about these zero columns? I would think that the number of independent columns is important. I just do not see why zero columns are necessary at all.

Isn't the ideal generated by the following matrix, without any zero columns, also a nontrivial left ideal:

\begin{pmatrix}
1 & 1\\
0 & 0
\end{pmatrix}
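(Indeed, computing the left multiples directly, with A denoting this 2×2 matrix:

BA = B\begin{pmatrix}
1 & 1\\
0 & 0
\end{pmatrix}
= \begin{pmatrix}
b_{11} & b_{11}\\
b_{21} & b_{21}
\end{pmatrix},

so the left ideal it generates is {M ∈ M(2,ℝ) | M(e₁ − e₂) = 0}, the matrices with two equal columns: nontrivial, yet with no zero column in this basis.)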
 
arkajad
I think you can show that for any left ideal I there is a unique vector subspace L of \mathbf{R}^n such that I=\{A\in M(n,\mathbf{R}):\,AL=0\}. Then, for a k-dimensional subspace L of \mathbf{R}^n you can choose a basis of \mathbf{R}^n such that the first k vectors are in L. This will put your ideal into the canonical form that you are looking for.
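(A sketch of the nontrivial inclusion, assuming the standard matrix-unit argument: let W be the span of all rows of all elements of I, so that L = W^⊥. Left-multiplying A ∈ I by the matrix unit E_{ij} produces the matrix whose i-th row is the j-th row of A and whose other rows are zero, and sums of such products stay in I. Hence I contains every matrix all of whose rows lie in W, and that set is exactly \{A : AL=0\}.)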

For your matrix

A=\begin{pmatrix}
1 & 1\\
0 & 0
\end{pmatrix}

we have

SAS^{-1}=\begin{pmatrix}
1 & 0\\
1 & 0
\end{pmatrix}

where

S=\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}
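(Checking the arithmetic: S^2 = 2I, so S^{-1}=\frac{1}{2}\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix}, and SA=\begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix}, giving SAS^{-1}=\begin{pmatrix} 1 & 0\\ 1 & 0 \end{pmatrix}. Note also that S(e_1-e_2)=2e_2, so conjugation by S turns the ideal \{M : M(e_1-e_2)=0\} into \{M' : M'e_2=0\}, the matrices with zero second column.)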

But I am not an expert.
 
arkajad said:
I think you can show that for any left ideal I there is a unique vector subspace L of \mathbf{R}^n such that I=\{A\in M(n,\mathbf{R}):\,AL=0\}.

Yes, I see that there is a subspace of \mathbf{R}^n that is contained in the kernel of every M ∈ I. This subspace is the orthogonal complement of the row space of a matrix A ∈ I that has the maximum number of independent rows.

arkajad said:
Then, for a k-dimensional subspace L of \mathbf{R}^n you can choose a basis of \mathbf{R}^n such that the first k vectors are in L. This will put your ideal into the canonical form that you are looking for.

Ah yes, I guess that was the idea: in an appropriate basis there will be zero columns.
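(To tie this back to the 3×3 example, my own check: there every M ∈ J satisfies Me₁ = 0 and M(e₂ − e₃) = 0, so L = span{e₁, e₂ − e₃}. In the basis f₁ = e₁, f₂ = e₂ − e₃, f₃ = e₃, every M ∈ J is represented by a matrix of the form

\begin{pmatrix}
0 & 0 & *\\
0 & 0 & *\\
0 & 0 & *
\end{pmatrix}

so in this basis J is exactly the "first two columns zero" ideal from the Wikipedia statement.)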

Thank you very much.
 