MHB Quick Question on Modules and Orthogonal Idempotents

Thread starter: Math Amateur (Peter) · Tags: Modules, Orthogonal
I am reading Berrick and Keating's book on Rings and Modules.

Section 2.1.9 on Idempotents reads as follows:

https://www.physicsforums.com/attachments/3097
https://www.physicsforums.com/attachments/3098

So, on page 43 we read (see above) ...

" ... ... Note that, conversely, a full set of orthogonal idempotents of End($$M$$) gives rise to a full set of inclusions and projections for $$M$$: for each $$i$$, take $$L_i = e_iM$$, $$\pi_i$$ to be $$\pi_i \ : \ \mapsto e_im$$ and $$\sigma_i$$ to be the evident inclusion map.

I am hoping to fully understand how a full set of orthogonal idempotents of End($$M$$) gives rise to a full set of inclusions and projections for $$M = L_1 \oplus L_2$$ ... but I am unsure what the "evident" inclusion map is. Can someone please help?

Peter
 
Peter said:
... I am hoping to fully understand how a full set of orthogonal idempotents of End($$M$$) gives rise to a full set of inclusions and projections for $$M = L_1 \oplus L_2$$ ... but I am unsure what the "evident" inclusion map is. Can someone please help?

How do the authors define a full set of idempotents? I see condition (Idp2) but not (Idp1). Also, in your second paragraph you have already written down the projections. The inclusions are the maps $\sigma_i : L_i \to L_1 \oplus L_2 \oplus \cdots$ such that

$\displaystyle \sigma_i(l_i) = (0,\ldots, 0, l_i, 0,\ldots, 0)$.
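In code, this inclusion/projection pair can be sketched as follows (a plain-Python model of the external direct sum using tuples; the helper names `sigma` and `pi` are mine, not from the text):

```python
# Model the i-th inclusion and projection maps of a finite direct sum,
# with components indexed 0, 1, ..., n-1 and elements modelled as tuples.

def sigma(i, n):
    """Return the i-th inclusion map into an n-fold direct sum:
    l_i |-> (0, ..., 0, l_i, 0, ..., 0) with l_i in slot i."""
    def include(l_i):
        return tuple(l_i if j == i else 0 for j in range(n))
    return include

def pi(i):
    """Return the i-th projection map: (l_1, ..., l_n) |-> l_i."""
    def project(tup):
        return tup[i]
    return project

# pi_i composed with sigma_i is the identity on L_i:
s1 = sigma(1, 3)
assert s1(7) == (0, 7, 0)
assert pi(1)(s1(7)) == 7
```

This models the *external* direct sum; the "evident inclusion" of the quoted passage is discussed further below in the thread.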
 
Peter said:
... I am hoping to fully understand how a full set of orthogonal idempotents of End($$M$$) gives rise to a full set of inclusions and projections for $$M = L_1 \oplus L_2$$ ... but I am unsure what the "evident" inclusion map is. Can someone please help?

We start with $M$ and two orthogonal idempotents:

$e_1:M \to M$

$e_2:M \to M$ with:

$e_1e_2 = e_2e_1 = 0$ (the 0-map), and such that $1_M = e_1 + e_2$.

We define $L_j = e_j(M)$, the image-set of each idempotent, for $j = 1,2$.

Since $1_M = e_1 + e_2$, we have for any $m \in M$:

$m = 1_M(m) = (e_1 + e_2)(m) = e_1(m) + e_2(m)$, and $e_1(m) \in L_1, e_2(m) \in L_2$

Evidently, $M = L_1 + L_2$.

Now if $m \in L_1 \cap L_2$, we have:

$m = e_1(m_1) = e_2(m_2)$ for some $m_1, m_2 \in M$.

Since $m = 1_M(m) = e_1(m) + e_2(m)$, we have:

$m = e_1(e_2(m_2)) + e_2(e_1(m_1)) = 0 + 0 = 0$. Hence $L_1 \cap L_2 = \{0_M\}$, and this sum is direct.

It is then clear that $e_1:M \to e_1(M) = L_1$ is the requisite projection onto $L_1$ and similarly for $e_2$, that is we take our projections to be:

$\pi_j = e_j$ for $j = 1,2$. This same construction works for any finite index set.

For the inclusions, we already have $L_j \subseteq M$ (we have an internal direct sum, here, we started with the "daddy module"), so we just restrict the identity map to $L_j$, for each $j$.

Let's look at the Euclidean plane, for example, which we can regard as the $\Bbb R$-module $\Bbb R^2$.

Consider the two matrices:

$P_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix}; P_2 = \begin{bmatrix}0&0\\0&1\end{bmatrix}$.

It is not hard to verify that $P_1^2 = P_1$, $P_2^2 = P_2$, $P_1P_2 = P_2P_1 = 0$, and $P_1 + P_2 = I$.

These represent (in the standard basis) the linear transformations:

$p_1(x,y) = (x,0)$
$p_2(x,y) = (0,y)$

It is clear we can uniquely decompose $(x,y)$ in the following manner:

$(x,y) = (x,0) + (0,y) = p_1(x,y) + p_2(x,y) \in p_1(\Bbb R^2) + p_2(\Bbb R^2)$

The inclusion maps are:

$\sigma_1((x,0)) = (x,0)$
$\sigma_2((0,y)) = (0,y)$ (they are "almost invisible" as maps).

Thus we have a direct sum decomposition:

$\Bbb R^2 = (\Bbb R \times \{0\}) \oplus (\{0\} \times \Bbb R)$

In this example the geometric meaning of "orthogonal" is clear: the images of the projections are at right angles to each other:

$(x,0) \cdot (0,y) = x(0) + 0(y) = 0 + 0 = 0$, so that (for $x, y \neq 0$) the angle between them is:

$\arccos\left(\dfrac{(x,0)\cdot(0,y)}{|x||y|}\right) = \arccos(0) = \dfrac{\pi}{2}$
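The claimed identities for $P_1$ and $P_2$ are easy to check by hand; as a sanity check, here is a small plain-Python sketch (the helpers `matmul`, `matadd`, `apply` are mine, not from the text):

```python
# Verify the claimed identities for P1 and P2 with plain 2x2 matrices.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

P1 = [[1, 0], [0, 0]]
P2 = [[0, 0], [0, 1]]
I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]

assert matmul(P1, P1) == P1      # P1^2 = P1  (idempotent)
assert matmul(P2, P2) == P2      # P2^2 = P2  (idempotent)
assert matmul(P1, P2) == Z       # P1 P2 = 0  (orthogonal)
assert matmul(P2, P1) == Z       # P2 P1 = 0  (orthogonal)
assert matadd(P1, P2) == I       # P1 + P2 = I  (full set)

def apply(A, v):
    """Apply a 2x2 matrix to a vector (x, y)."""
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

# The induced decomposition of an arbitrary (x, y):
x, y = 3, 5
assert apply(P1, (x, y)) == (x, 0)   # p1(x, y) = (x, 0)
assert apply(P2, (x, y)) == (0, y)   # p2(x, y) = (0, y)
```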
 
Euge said:
How do the authors define a full set of idempotents? I see condition (Idp2) but not (Idp1). Also, in your second paragraph, you already wrote down the projections. The $\sigma_i : L_i \to L_1 \oplus L_2 \oplus \cdots$ such that

$\displaystyle \sigma_i(l_i) = (0,\ldots, 0, l_i, 0,\ldots, 0)$.
Thanks for the help, Euge ...

Idp1 should be showing in the first image ...

(Idp1) $$1 = e_1 + e_2 + \cdots + e_k$$, where $$1 = \text{id}_M$$ is the identity element of the ring End($$M$$).

Peter


Deveno said:
We start with $M$ and two orthogonal idempotents $e_1, e_2 : M \to M$ ... [post quoted in full above] ...

Thanks so much for this post Deveno ... most helpful indeed ... I wish the text had taken the space to show this ... further, thanks for the example ... such examples are extremely helpful!

Just going to work through the detail of your post ... ...

Thanks again ...

Peter
 
Deveno said:
We start with $M$ and two orthogonal idempotents $e_1, e_2 : M \to M$ ... [post quoted in full above] ...
Hi Deveno,

Just going through your proof that a set of orthogonal idempotents gives rise to a full set of standard inclusions and projections ... and I can see where the orthogonality comes into the proof ... but where is the idempotency assumption used in the proof?

I was expecting to find some part of the proof using the fact that $$e_i(e_i(m)) = e_i(m)$$ [that is $$e^2 = e$$] for $$i = 1,2 $$... but could not find where this was used/applied ...

Another point on which I need some clarification is where you write:

"For the inclusions, we already have $L_j \subseteq M$ (we have an internal direct sum, here, we started with the "daddy module"), so we just restrict the identity map to $L_j$, for each $j$."

Can you explain how restricting the identity map to $L_j$, for each $j$ leads to maps $$\sigma_1$$ and $$\sigma_2$$ where:

$$\sigma_1(l_1) = (l_1, 0)$$ and

$$\sigma_2(l_2) = (0, l_2)$$

Can you help?

Peter
 
Peter said:
... I can see where the orthogonality comes into the proof ... but where is the idempotency assumption used in the proof? ... Can you explain how restricting the identity map to $L_j$, for each $j$, leads to maps $$\sigma_1$$ and $$\sigma_2$$ where $$\sigma_1(l_1) = (l_1, 0)$$ and $$\sigma_2(l_2) = (0, l_2)$$?

We want to show that (Idp1) + (Idp2) implies (SIP1) and (SIP2).

To show that $\pi_i\sigma_i = 1_{L_i}$ (which is (SIP1)),

we thus need to show, since $\pi_i = e_i$ and $\sigma_i$ is the identity map restricted to $L_i$ in our construction "from idempotents", that:

$e_i(m_i) = m_i$ for all $m_i \in L_i$ ($\ast$)

Since $L_i = e_i(M)$, we may write $m_i = e_i(m)$ for some $m \in M$ so that ($\ast$) becomes:

$e_i(e_i(m)) = e_i(m)$

This is true because $e_i$ is idempotent.
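Returning to the $\Bbb R^2$ example, this step can be sketched numerically (the function names `e1`, `e2` are mine):

```python
# In the R^2 example, (SIP1) -- pi_i sigma_i = 1_{L_i} -- reduces exactly
# to idempotency e_i(e_i(m)) = e_i(m).

def e1(v):
    """Projection onto the x-axis: (x, y) |-> (x, 0)."""
    x, y = v
    return (x, 0)

def e2(v):
    """Projection onto the y-axis: (x, y) |-> (0, y)."""
    x, y = v
    return (0, y)

m = (3, 5)

# An element of L_1 = e1(M) has the form m1 = e1(m); applying pi_1 after
# sigma_1 is just e1 applied to m1, so (SIP1) is exactly idempotency:
m1 = e1(m)
assert e1(m1) == m1     # e1(e1(m)) = e1(m)
m2 = e2(m)
assert e2(m2) == m2     # e2(e2(m)) = e2(m)
```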

Your second question arises solely because this text blurs the distinction between an "internal" and "external" direct sum.

In the external direct sum, we "put the two parts side by side":

From $l_1 \in L_1$ and $l_2 \in L_2$ we create the elements $(l_1,l_2)$ in $L_1\oplus L_2$.

In the internal direct sum $l_1$ and $l_2$ are already elements of $M = L_1 \oplus L_2$, and we can just write $l_1+l_2$.

To be perfectly rigorous, we probably should draw a notational distinction between the two different $L_1 \oplus L_2$ we get, but they are isomorphic, and the distinction is more or less "purely notational".

If we have a set of idempotents, the image modules $e_i(M)$ are automatically submodules of $M$, since each $e_i$, $i \in I$, is an element of $\text{End}(M)$. So we have an INTERNAL direct sum.

If we have a set of projections and inclusions, we might have an internal or external direct sum, with a slight difference in the inclusion morphisms:

External: the inclusion morphisms will be monomorphisms; the factors need not be submodules, just isomorphic to submodules of $M$.

Internal: the inclusion morphisms will be actual inclusion functions (restricted identity maps).

In my previous example, I gave an internal direct sum decomposition of the Euclidean plane as:

$\Bbb R^2 = (\Bbb R\times \{0\}) \oplus (\{0\} \times \Bbb R)$

We also have an "external" version:

$\Bbb R^2 = \Bbb R \oplus \Bbb R$ where our two "inclusion maps" are:

$i_1: x \mapsto (x,0)$
$i_2: y \mapsto (0,y)$ <---these are monomorphisms, not "restricted" identity functions.

The isomorphism between the internal and external versions is:

$(x,0) + (0,y) \mapsto (x,y)$ (note the plus sign on the left, the comma on the right).

It is common practice to blur this distinction: for example when speaking of the complex plane:

$\Bbb C = \{(x,y) = xe_1 + ye_2 = x(1) + y(i) = x + yi: x,y \in \Bbb R\}$

One often speaks of "the real number $a$" treating it as a complex number, instead of the more formal "complex number $a + 0i$".

In a similar vein, the rational number $\dfrac{2}{1}$ isn't really "the same thing" as the integer $2$: the former is an equivalence class of a relation on $\Bbb Z \times \Bbb Z^{\ast}$, whereas the latter is an element of $\Bbb Z$.

Yet no one seems troubled by:

$\dfrac{4}{2} = 2$, even though these are clearly "different objects". Why? Because "algebraically" they are the same, they exhibit the same STRUCTURAL properties.

In general, we don't care whether we are talking about a "quotient ring" $R/I$ or a "homomorphic image" $\phi(R)$: they are isomorphic (iso- meaning "equal" and -morphic meaning "form" or "appearance"), and what we do in one has an exact tit-for-tat parallel in the other.

*******

If we are talking about "the real number line", we have just $x$. If we are talking about "the real number line embedded in the Euclidean plane" we have $(x,0)$. The 0 in the second coordinate doesn't change how we "add" in either scenario.

Now if we are decomposing an $R$-module, or an abelian group ($\Bbb Z$-module), we cannot use "real space" to ACCURATELY model how "the parts fit together". It's not really clear how to create a "physical representation" of $\Bbb Z_3 \times \Bbb Z_8$ (perhaps with two gears, but I digress). But what we CAN do is abstract the "algebraic" properties of some of these things, just play with FORM, and see where the rules of form lead us.

This allows us to bring lots of "disparate things" under one "umbrella", dispatching what can appear to be unrelated problems with unified methods. Short exact sequences allow us to count holes in topological objects. Multi-linear algebra gives us a way to evaluate fluid flows.

Direct sum decompositions allow us to figure out the "whole" in terms of "parts". If we want to discover facts about 24, we can instead study 3 and 8, and "sew what we learn together". What allows us to do this, in this instance, is that 3 and 8 are co-prime, so their interaction is "independent". This is an "abstraction" of the independence of $x$ and $y$ in planar space. Geometrically, I think it would be confusing to say:

3 is perpendicular to 8.

But saying that $[3]_{24}$ and $[8]_{24}$ give rise to a pair of orthogonal idempotents of $\Bbb Z_{24}$ (via Bezout coefficients $\alpha$ and $\beta$ with $3\alpha + 8\beta = 1$) makes perfect sense.
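A quick sketch of that last claim, under the assumption that the idempotents come from the Bezout identity $1 = 3\cdot 3 + 8\cdot(-1)$, which gives $\alpha \equiv 9$ and $\beta \equiv -8 \equiv 16 \pmod{24}$:

```python
# From the Bezout identity 1 = 3*3 + 8*(-1), the elements alpha = 9 and
# beta = 16 are a full set of orthogonal idempotents in Z_24.
N = 24
alpha = (3 * 3) % N        # 9
beta = (8 * -1) % N        # 16

assert alpha * alpha % N == alpha     # 9^2  = 81  = 9  (mod 24): idempotent
assert beta * beta % N == beta        # 16^2 = 256 = 16 (mod 24): idempotent
assert alpha * beta % N == 0          # 9 * 16 = 144 = 0 (mod 24): orthogonal
assert (alpha + beta) % N == 1        # 9 + 16 = 25 = 1  (mod 24): full set

# The corresponding direct summands: 9*Z_24 is the multiples of 3 (~ Z_8),
# and 16*Z_24 is the multiples of 8 (~ Z_3), so Z_24 ~ Z_3 (+) Z_8.
assert sorted({alpha * m % N for m in range(N)}) == list(range(0, 24, 3))
assert sorted({beta * m % N for m in range(N)}) == list(range(0, 24, 8))
```

This is exactly the "3 and 8 are co-prime" independence: multiplying by each idempotent projects $\Bbb Z_{24}$ onto one of the two summands.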
 