Quick Question on Modules and Orthogonal Idempotents

In summary, the conversation discusses the concept of idempotents in the context of rings and modules. It is stated that a full set of orthogonal idempotents of End(M) gives rise to a full set of inclusions and projections for M, and the "evident" inclusion map is explained. The definition and properties of idempotents are also discussed, along with an example involving the Euclidean plane.
Math Amateur
I am reading Berrick and Keating's book on Rings and Modules.

Section 2.1.9 on Idempotents reads as follows:

https://www.physicsforums.com/attachments/3097
https://www.physicsforums.com/attachments/3098

So, on page 43 we read (see above) ...

" ... ... Note that, conversely, a full set of orthogonal idempotents of End($$\displaystyle M$$) gives rise to a full set of inclusions and projections for $$\displaystyle M$$: for each $$\displaystyle i$$, take $$\displaystyle L_i = e_iM$$, $$\displaystyle \pi_i$$ to be $$\displaystyle \pi_i \ : \ \mapsto e_im$$ and $$\displaystyle \sigma_i$$ to be the evident inclusion map.

I am hoping to fully understand how a full set of orthogonal idempotents of End($$\displaystyle M$$) gives rise to a full set of inclusions and projections for $$\displaystyle M = L_1 \oplus L_2$$ ... but I am unsure what the "evident" inclusion map is. Can someone please help?

Peter

Peter said:
I am hoping to fully understand how a full set of orthogonal idempotents of End($$\displaystyle M$$) gives rise to a full set of inclusions and projections for $$\displaystyle M = L_1 \oplus L_2$$ ... but I am unsure what the "evident" inclusion map is. Can someone please help?

How do the authors define a full set of idempotents? I see condition (Idp2) but not (Idp1). Also, in your second paragraph, you already wrote down the projections. The inclusions are the maps $\sigma_i : L_i \to L_1 \oplus L_2 \oplus \cdots$ such that

$\displaystyle \sigma_i(l_i) = (0,\ldots, 0, l_i, 0,\ldots, 0)$.
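For concreteness, here is a hypothetical Python sketch (my own, not from the thread), modelling elements of the external direct sum $L_1 \oplus \cdots \oplus L_k$ as $k$-tuples; the names `sigma` and `pi` are illustrative:

```python
# Hypothetical sketch: elements of L_1 ⊕ ... ⊕ L_k modelled as k-tuples.

def sigma(i, k):
    """The inclusion sigma_i : L_i -> L_1 ⊕ ... ⊕ L_k (0-indexed slot i)."""
    def incl(l):
        return tuple(l if j == i else 0 for j in range(k))
    return incl

def pi(i):
    """The projection pi_i : L_1 ⊕ ... ⊕ L_k -> L_i."""
    return lambda t: t[i]

s1 = sigma(1, 3)
print(s1(5))         # (0, 5, 0)
print(pi(1)(s1(5)))  # 5 -- pi_i ∘ sigma_i acts as the identity on L_i
```

Composing the other way, $\pi_j \circ \sigma_i$ for $j \neq i$ is the zero map, mirroring the orthogonality of the idempotents.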

Peter said:
I am hoping to fully understand how a full set of orthogonal idempotents of End($$\displaystyle M$$) gives rise to a full set of inclusions and projections for $$\displaystyle M = L_1 \oplus L_2$$ ... but I am unsure what the "evident" inclusion map is. Can someone please help?

We start with $M$ and two orthogonal idempotents:

$e_1:M \to M$

$e_2:M \to M$ with:

$e_1e_2 = e_2e_1 = 0$ (the 0-map), and such that $1_M = e_1 + e_2$.

We define $L_j = e_j(M)$, the image-set of each idempotent, for $j = 1,2$.

Since $1_M = e_1 + e_2$, we have for any $m \in M$:

$m = 1_M(m) = (e_1 + e_2)(m) = e_1(m) + e_2(m)$, and $e_1(m) \in L_1, e_2(m) \in L_2$

Evidently, $M = L_1 + L_2$.

Now if $m \in L_1 \cap L_2$, we have:

$m = e_1(m_1) = e_2(m_2)$ for some $m_1, m_2 \in M$.

Since $m = 1_M(m) = e_1(m) + e_2(m)$, we have:

$m = e_1(e_2(m_2)) + e_2(e_1(m_1)) = 0 + 0 = 0$. Hence $L_1 \cap L_2 = \{0_M\}$, and this sum is direct.

It is then clear that $e_1:M \to e_1(M) = L_1$ is the requisite projection onto $L_1$ and similarly for $e_2$, that is we take our projections to be:

$\pi_j = e_j$ for $j = 1,2$. This same construction works for any finite index set.

For the inclusions, we already have $L_j \subseteq M$ (we have an internal direct sum here; we started with the "daddy module"), so we just restrict the identity map to $L_j$, for each $j$.

Let's look at the Euclidean plane, for example, which we can regard as the $\Bbb R$-module $\Bbb R^2$.

Consider the two matrices:

$P_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix}; P_2 = \begin{bmatrix}0&0\\0&1\end{bmatrix}$.

It is not hard to verify that $P_1^2 = P_1$ and $P_2^2 = P_2$, that $P_1P_2 = P_2P_1 = 0$, and that $P_1 + P_2 = I$.

These represent (in the standard basis) the linear transformations:

$p_1(x,y) = (x,0)$
$p_2(x,y) = (0,y)$

It is clear we can uniquely decompose $(x,y)$ in the following manner:

$(x,y) = (x,0) + (0,y) = p_1(x,y) + p_2(x,y) \in p_1(\Bbb R^2) + p_2(\Bbb R^2)$

The inclusion maps are:

$\sigma_1((x,0)) = (x,0)$
$\sigma_2((0,y)) = (0,y)$ (they are "almost invisible" as maps).

Thus we have a direct sum decomposition:

$\Bbb R^2 = (\Bbb R \times \{0\}) \oplus (\{0\} \times \Bbb R)$

In this example, the geometric meaning of "orthogonal" is clear: the images of the projections are at right angles to each other:

$(x,0) \cdot (0,y) = x(0) + 0(y) = 0 + 0 = 0$, so that the angle between them is:

$\arccos\left(\dfrac{(x,0)\cdot(0,y)}{|x||y|}\right) = \arccos(0) = \dfrac{\pi}{2}$
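This example can also be checked mechanically; here is a minimal sketch using numpy (my own addition, not part of the thread):

```python
import numpy as np

# The two projection matrices on R^2 from the example above.
P1 = np.array([[1, 0], [0, 0]])
P2 = np.array([[0, 0], [0, 1]])
I = np.eye(2, dtype=int)

assert (P1 @ P1 == P1).all() and (P2 @ P2 == P2).all()  # idempotent: P^2 = P
assert (P1 @ P2 == 0).all() and (P2 @ P1 == 0).all()    # orthogonal: P1 P2 = P2 P1 = 0
assert (P1 + P2 == I).all()                             # full: P1 + P2 = I

# Unique decomposition of an arbitrary vector into its two "parts":
v = np.array([3, 7])
print(P1 @ v, P2 @ v)  # [3 0] [0 7], i.e. (3,7) = (3,0) + (0,7)
```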

Euge said:
How do the authors define a full set of idempotents? I see condition (Idp2) but not (Idp1). Also, in your second paragraph, you already wrote down the projections. The inclusions are the maps $\sigma_i : L_i \to L_1 \oplus L_2 \oplus \cdots$ such that

$\displaystyle \sigma_i(l_i) = (0,\ldots, 0, l_i, 0,\ldots, 0)$.
Thanks for the help, Euge ...

Idp1 should be showing in the first image ...

(Idp1) $$\displaystyle 1 = e_1 + e_2 + \cdots + e_k$$, where $$\displaystyle 1 = {id}_M$$ is the identity element of the ring End($$\displaystyle M$$).

Peter


Deveno said:
We start with $M$ and two orthogonal idempotents $e_1, e_2 : M \to M$ with $e_1e_2 = e_2e_1 = 0$ and $1_M = e_1 + e_2$ ...

Thanks so much for this post Deveno ... most helpful indeed ... I wish the text had taken the space to show this ... further, thanks for the example ... such examples are extremely helpful!

Just going to work through the detail of your post ... ...

Thanks again ...

Peter

Deveno said:
We start with $M$ and two orthogonal idempotents $e_1, e_2 : M \to M$ with $e_1e_2 = e_2e_1 = 0$ and $1_M = e_1 + e_2$ ...
Hi Deveno,

Just going through your proof that a set of orthogonal idempotents gives rise to a full set of standard inclusions and projections ... and I can see where the orthogonality comes into the proof ... but where is the idempotency assumption used in the proof?

I was expecting to find some part of the proof using the fact that $$\displaystyle e_i(e_i(m)) = e_i(m)$$ [that is $$\displaystyle e^2 = e$$] for $$\displaystyle i = 1,2$$... but could not find where this was used/applied ...

Another point on which I need some clarification is where you write:

"For the inclusions, we already have $L_j \subseteq M$ (we have an internal direct sum, here, we started with the "daddy module"), so we just restrict the identity map to $L_j$, for each $j$."

Can you explain how restricting the identity map to $L_j$, for each $j$ leads to maps $$\displaystyle \sigma_1$$ and $$\displaystyle \sigma_2$$ where:

$$\displaystyle \sigma_1(l_1) = (l_1, 0)$$ and

$$\displaystyle \sigma_2(l_2) = (0, l_2)$$

Can you help?

Peter

Peter said:
Hi Deveno,

... where is the idempotency assumption used in the proof? ... Can you explain how restricting the identity map to $L_j$, for each $j$, leads to maps $$\displaystyle \sigma_1$$ and $$\displaystyle \sigma_2$$ where $$\displaystyle \sigma_1(l_1) = (l_1, 0)$$ and $$\displaystyle \sigma_2(l_2) = (0, l_2)$$?

We want to show that (Idp1) + (Idp2) implies (SIP1) and (SIP2).

To show that $\pi_i\sigma_i = 1_{L_i}$ (which is (SIP1)),

we thus need to show, since $\pi_i = e_i$ and $\sigma_i$ is the inclusion of $L_i$ into $M$ (the restricted identity map) in our construction "from idempotents", that

$e_i(m_i) = m_i$ for all $m_i \in L_i$ ($\ast$)

Since $L_i = e_i(M)$, we may write $m_i = e_i(m)$ for some $m \in M$ so that ($\ast$) becomes:

$e_i(e_i(m)) = e_i(m)$

This is true because $e_i$ is idempotent.
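Continuing the earlier $\Bbb R^2$ example (again a numpy sketch of my own, not from the text), the identity ($\ast$) is easy to check numerically:

```python
import numpy as np

# e_1 from the earlier plane example; L_1 = e_1(M) is the x-axis.
e1 = np.array([[1, 0], [0, 0]])

m = np.array([3, 7])
m1 = e1 @ m                   # m1 = e_1(m) is an element of L_1
assert (e1 @ m1 == m1).all()  # (*): e_1(e_1(m)) = e_1(m), by idempotency
print(m1)  # [3 0]
```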

Your second question arises solely because this text blurs the distinction between an "internal" and "external" direct sum.

In the external direct sum, we "put the two parts side by side":

From $l_1 \in L_1$ and $l_2 \in L_2$ we create the elements $(l_1,l_2)$ in $L_1\oplus L_2$.

In the internal direct sum $l_1$ and $l_2$ are already elements of $M = L_1 \oplus L_2$, and we can just write $l_1+l_2$.

To be perfectly rigorous, we probably should draw a notational distinction between the two different $L_1 \oplus L_2$ we get, but they are isomorphic, and the distinction is more or less "purely notational".

If we have a set of idempotents, the image modules are automatically submodules of $M$, since each $e_i$, $i \in I$, is an element of $\text{End}(M)$. So we have an INTERNAL direct sum.

If we have a set of projections and inclusions, we might have an internal or external direct sum, with a slight difference in the inclusion morphisms:

External: the inclusion morphisms will be monomorphisms; the factors need not be submodules, just isomorphic to submodules of $M$.

Internal: the inclusion morphisms will be actual inclusion functions (restricted identity maps).

In my previous example, I gave an internal direct sum decomposition of the Euclidean plane as:

$\Bbb R^2 = (\Bbb R\times \{0\}) \oplus (\{0\} \times \Bbb R)$

We also have an "external" version:

$\Bbb R^2 = \Bbb R \oplus \Bbb R$, where our two "inclusion maps" are:

$i_1: x \mapsto (x,0)$
$i_2: y \mapsto (0,y)$ <---these are monomorphisms, not "restricted" identity functions.

The isomorphism between the internal and external versions is:

$(x,0) + (0,y) \mapsto (x,y)$ (note the plus sign on the left, the comma on the right).

It is common practice to blur this distinction: for example when speaking of the complex plane:

$\Bbb C = \{(x,y) = xe_1 + ye_2 = x(1) + y(i) = x + yi: x,y \in \Bbb R\}$

One often speaks of "the real number $a$" treating it as a complex number, instead of the more formal "complex number $a + 0i$".

In a similar vein, the rational number $\dfrac{2}{1}$ isn't really "the same thing" as the integer $2$, the former is an equivalence class of a relation on $\Bbb Z \times \Bbb Z^{\ast}$, whereas the second is an element of $\Bbb Z$.

Yet no one seems troubled by:

$\dfrac{4}{2} = 2$, even though these are clearly "different objects". Why? Because "algebraically" they are the same: they exhibit the same STRUCTURAL properties.

In general, we don't care if we are talking about "a quotient ring" $R/I$ or a "homomorphic image" $\phi(R)$; they are isomorphic (iso- meaning "equal" and -morphic meaning "form" or "appearance"), and what we do in one has an exact tit-for-tat parallel in the other.

*******

If we are talking about "the real number line", we have just $x$. If we are talking about "the real number line embedded in the Euclidean plane" we have $(x,0)$. The 0 in the second coordinate doesn't change how we "add" in either scenario.

Now if we are decomposing an $R$-module, or an abelian group ($\Bbb Z$-module), we cannot use "real space" to ACCURATELY model how "the parts fit together". It's not really clear how to create a "physical representation" of $\Bbb Z_3 \times \Bbb Z_8$ (perhaps with two gears, but I digress). But what we CAN do is abstract the "algebraic" properties of some of these things, and just play with FORM, and see where the rules of form lead us.

This allows us to bring lots of "disparate things" under one "umbrella", dispatching what can appear to be unrelated problems with unified methods. Short exact sequences allow us to count holes in topological objects. Multi-linear algebra gives us a way to evaluate fluid flows.

Direct sum decompositions allow us to figure out the "whole" in terms of "parts". If we want to discover facts about 24, we can instead study 3 and 8, and "sew what we learn together". What allows us to do this, in this instance, is that 3 and 8 are co-prime, so their interaction is "independent". This is an "abstraction" of the independence of $x$ and $y$ in planar space. Geometrically, I think it would be confusing to say:

3 is perpendicular to 8.

But saying $[3]_{24}$ and $[8]_{24}$ generate a set of orthogonal idempotents (via $\alpha$ and $\beta$) of $\Bbb Z_{24}$ makes perfect sense.
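To make that last remark concrete, here is a small Python check (my own sketch): the nontrivial idempotents of $\Bbb Z_{24}$ are $9$ and $16$, lying in the ideals generated by $[3]$ and $[8]$ respectively, and they form a full orthogonal pair.

```python
n = 24

# All idempotents of Z_24: elements e with e*e ≡ e (mod n).
idempotents = [e for e in range(n) if (e * e) % n == e]
print(idempotents)  # [0, 1, 9, 16]

e1, e2 = 9, 16              # the nontrivial pair: 9 = 3*3, 16 = 8*2
assert (e1 * e2) % n == 0   # orthogonal: e1 e2 = 0 in Z_24
assert (e1 + e2) % n == 1   # full: e1 + e2 = 1 in Z_24
```

This mirrors the Chinese-remainder decomposition $\Bbb Z_{24} \cong \Bbb Z_3 \times \Bbb Z_8$: multiplication by $e_1$ and $e_2$ projects onto the two factors.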

1. What are modules in mathematics?

Modules are mathematical structures that generalize the notion of a vector space by allowing the scalars to come from an arbitrary ring rather than a field. A module consists of an abelian group together with a scalar multiplication by ring elements, satisfying axioms such as associativity and distributivity.

2. What are orthogonal idempotents?

Orthogonal idempotents are elements of a ring or algebra that are idempotent (meaning each equals its own square, $e^2 = e$) and orthogonal to one another (meaning the product of any two distinct elements of the set is zero).

3. What is the significance of orthogonal idempotents in modules?

In modules, orthogonal idempotents play a crucial role in decomposing a module into smaller, simpler submodules. This decomposition is known as a direct sum decomposition and is analogous to the block diagonalization of matrices in linear algebra.

4. How are orthogonal idempotents related to the concept of submodules?

A submodule is a subset of a module that is itself a module under the same operations. Orthogonal idempotents can be used to construct complementary submodules, meaning submodules whose direct sum is the whole module. This gives an efficient way to study and understand modules.

5. Can orthogonal idempotents be used in applications outside of mathematics?

Yes, orthogonal idempotents have applications in various fields such as physics, signal processing, and computer science. In physics, they are used to study symmetry and conservation laws, while in signal processing, they are used for data compression and error correction. In computer science, they play a role in coding theory and cryptography.
