MHB Why is Sharp's Lemma 1.10 crucial for understanding R-algebras?

  • Thread starter: Math Amateur

Summary:
Sharp's Lemma 1.10 is essential for understanding R-algebras because it establishes the unique ring homomorphism from the integers to any ring, demonstrating that every ring is a Z-algebra. This uniqueness arises from the requirement that homomorphisms preserve unity, leading to the conclusion that the mapping is entirely determined by the image of 1. The discussion also highlights the connection between Z-algebras and polynomial rings, emphasizing that the polynomial ring R[x] serves as the free R-algebra generated by a single element. Additionally, the relationship between Z-modules and Z-algebras is clarified, noting that while every Z-algebra is a Z-module, not all Z-modules possess a bilinear product. Understanding these concepts is crucial for grasping the broader implications in ring theory and algebra.
Math Amateur
After defining an R-algebra at the bottom of page 5 (see attachment from Sharp), on the top of page 6 we find the following:

----------------------------------------------------------------------------------

"We should point out at once that the concept of an R-algebra introduced in 1.9 above occurs very frequently in ring theory, simply because every ring is a Z-algebra. We explain in 1,10 why this is the case."

-----------------------------------------------------------------------------------

Sharp then proceeds as follows:

-----------------------------------------------------------------------------------

"Let $$ R $$ be a ring. Then the mapping $$ F \ : \ \mathbb{Z} \to R $$ defined by $$ f(n) = n(1_R) $$ for all $$ n \in \mathbb{Z} $$ is a ring homomorphism and, in fact, is the only ring homomorphism from $$ \mathbb{Z} $$ to $$ R $$.

Here

$$ n(1_R) = 1_R + 1_R + \ ... \ + 1_R $$ ($n$ terms) ... for $n > 0$

$$ n(1_R) = 0_R $$ for $n = 0$

and

$$ n(1_R) = (-1_R) + (-1_R) + \ ... \ + (-1_R) $$ ($-n$ terms) ... for $n < 0$

It should be clear from 1.5 (The subring criterion - see attachment) that the intersection of any non-empty family of subrings of a ring R is again a subring of R. This observation leads to the following lemma (Lemma 1.10 - see attachment, page 6). ... ... ... "

------------------------------------------------------------------------------------

My first problem with the above is this:

Sharp writes: "Let $$ R $$ be a ring. Then the mapping $$ f \ : \ \mathbb{Z} \to R $$ defined by $$ f(n) = n(1_R) $$ for all $$ n \in \mathbb{Z} $$ is a ring homomorphism and, in fact, is the only ring homomorphism from $$ \mathbb{Z} $$ to $$ R $$."

But why is this the only ring homomorphism from $$ \mathbb{Z} $$ to $$ R $$?

My second problem is as follows: Sharp connects establishing that every ring is a Z-algebra (1.10, page 6) with establishing the structure of a polynomial ring - but what is the relationship and the big picture here?

My third problem is the following:

Dummit and Foote on page 339 define Z-modules (see attachment), but seem to structure them slightly differently, using an element a from an abelian group (and not the identity of the group), whereas Sharp uses the multiplicative identity of a ring in establishing a Z-algebra. I presume this is because the abelian groups are being treated as additive and one cannot use the additive identity - can someone please confirm this, or clarify and explain the links between Z-algebras and Z-modules?

I would really appreciate some help and clarification of these issues.

Peter
 
First off, let's just say we have any old homomorphism:

$f:\Bbb Z \to R$.

Since for integers $k,m$ we have:

$f(km) = f(k)f(m)$

we have:

$f(m) = f(1\ast m) = f(1)f(m)$ and:

$f(k) = f(k \ast 1) = f(k)f(1)$

so we see that $f(1)$ is a multiplicative identity for $f(\Bbb Z)$ (even if $R$ is not commutative).

Moreover, since $f(k)f(m) = f(km) = f(mk) = f(m)f(k)$, we see that the image of $f$ is a commutative sub-ring of $R$ with unity.

If $R$ is a ring with unity, the unity of $R$ is unique, and if we insist that a ring homomorphism preserve the ring unity, then we have no choice but to set:

$f(1) = 1_R$

But then the homomorphism property:

$f(k+1) = f(k) + f(1) = f(k) + 1_R$

yields (via induction on $n$):

$f(n) = nf(1) = n(1_R)$, for all $n \in \Bbb Z^+$.

Also:

$f(0) = f(0+0) = f(0) + f(0)$ shows that $f(0) = 0_R$.

Finally, for $n \in \Bbb Z^+$:

$0_R = f(0) = f(n + (-n)) = f(n) + f(-n)$, which means that:

$f(-n) = -f(n) = n(-1_R)$.

This means that $f$ is completely determined by $f(1)$, and we only have one choice in the matter, so only one homomorphism is possible (this is NOT true if we do not require homomorphisms to preserve unity:

Consider $f:\Bbb Z \to \Bbb Z \times \Bbb Z$ given by $f(k) = (k,0)$ and

$g:\Bbb Z \to \Bbb Z \times \Bbb Z$ given by $g(k) = (k,k)$).
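To make the role of unity-preservation explicit, here is a quick numerical check (my own illustration, not part of the original argument): both maps above respect addition and multiplication in $\Bbb Z \times \Bbb Z$ (with componentwise operations), but only $g$ sends $1$ to the unity $(1,1)$, so dropping the unity requirement really does allow more than one homomorphism.

```python
# Illustrative check of the two maps above into Z x Z,
# with componentwise addition and multiplication.

def f(k):          # f(k) = (k, 0): respects + and *, but f(1) = (1, 0)
    return (k, 0)

def g(k):          # g(k) = (k, k): respects + and *, and g(1) = (1, 1)
    return (k, k)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    return (u[0] * v[0], u[1] * v[1])

for h in (f, g):
    for k in range(-5, 6):
        for m in range(-5, 6):
            assert h(k + m) == add(h(k), h(m))
            assert h(k * m) == mul(h(k), h(m))

assert g(1) == (1, 1)   # g preserves the unity of Z x Z ...
assert f(1) != (1, 1)   # ... f does not, yet f still respects + and *
```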

What this means, in the language of category theory, is that the integers are an INITIAL OBJECT in the category of commutative rings with unity (with morphisms consisting of unity-preserving ring homomorphisms). Naively, the integers are your basic "starter ring"; they have a special role to play in the theory of commutative rings.
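As a concrete sanity check on the uniqueness argument (a sketch of my own, not from Sharp), the following snippet takes the hypothetical choice $R = \Bbb Z/6\Bbb Z$ and builds $f(n) = n(1_R)$ by nothing more than repeated addition of $1_R$, then verifies that the result respects $+$ and $\ast$ and sends $1$ to $1_R$:

```python
# A minimal sketch: R = Z/6Z, represented by the integers 0..5 with
# addition and multiplication mod 6; f(n) = n(1_R) built by repeated addition.

MOD = 6          # hypothetical choice of ring R = Z/6Z, purely for illustration

def add(a, b):   # addition in R
    return (a + b) % MOD

def mul(a, b):   # multiplication in R
    return (a * b) % MOD

def neg(a):      # additive inverse in R
    return (-a) % MOD

ZERO_R, ONE_R = 0, 1

def f(n):
    """f(n) = 1_R + ... + 1_R (n terms) for n > 0, 0_R for n = 0,
    and (-1_R) + ... + (-1_R) (-n terms) for n < 0."""
    result = ZERO_R
    step = ONE_R if n >= 0 else neg(ONE_R)
    for _ in range(abs(n)):
        result = add(result, step)
    return result

# f respects + and * and sends 1 to 1_R -- exactly the homomorphism above:
for k in range(-12, 13):
    for m in range(-12, 13):
        assert f(k + m) == add(f(k), f(m))
        assert f(k * m) == mul(f(k), f(m))
assert f(1) == ONE_R
print([f(n) for n in range(13)])   # the values cycle with period 6
```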

To get a better understanding of what is going on here, let's look at an alternate definition of $R$-algebra, and show our two definitions are equivalent.

Alternate definition of $R$-algebra:

A (left) $R$-module $M$ together with a bilinear ($R$-linear in both arguments), associative, and commutative operation:

$[\cdot,\cdot]:M \times M \to M$

that possesses an identity for this operation, $1_M$.

Suppose now that we have an $R$-algebra in the sense of Sharp, so we have a commutative ring with unity $R$, a commutative ring with unity $S$, and a ring homomorphism (which preserves unity) $f:R \to S$.

For our $R$-module, we will take $M = S$. We define the "scalar multiplication" as:

$r(s) = f(r)s$, where the right-hand side is the ring multiplication in $S$. It should be (hopefully) clear that this indeed does produce an $R$-module:

$(S,+)$ is an abelian group
$(r + r')(s) = f(r + r')s = [f(r) + f(r')]s = f(r)s + f(r')s = r(s) + r'(s)$
$r(s+s') = f(r)(s + s') = f(r)s + f(r)s' = r(s) + r(s')$
$r(r'(s)) = r(f(r')s) = f(r)(f(r')s) = (f(r)f(r'))s = f(rr')s = (rr')(s)$
$1_R(s) = f(1_R)s = 1_S\,s = s$

For our binary operation $[\cdot,\cdot]$, we will use the multiplication in $S$. So all we have to verify is bilinearity:

$[s,t+t'] = s(t+t') = st + st' = [s,t] + [s,t']$
$[s+s',t] = (s+s')t = st + s't = [s,t] + [s',t]$
$r[s,t] = r(st) = f(r)(st) = (f(r)s)t = [f(r)s,t] = [r(s),t]$
$r[s,t] = r(st) = r(ts) = f(r)(ts) = (f(r)t)s = s(f(r)t) = [s,f(r)t] = [s,r(t)]$

and identity: clearly we may take $1_M = 1_S$.
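For concreteness, here is a small Python sketch of the construction just carried out, assuming the simplest possible data (my own choices, not Sharp's): $R = \Bbb Z$, $S = \Bbb Z/4\Bbb Z$, and $f(r) = r \bmod 4$. It spot-checks the module axioms and the bilinearity identities verified by hand above:

```python
# Illustrative sketch: given a unity-preserving ring homomorphism f : R -> S,
# the recipe above makes S into an R-module with a bilinear product.
# Here R = Z and S = Z/4Z with f(r) = r mod 4 (chosen only for concreteness).

MOD = 4

def f(r):                 # the structural homomorphism R -> S
    return r % MOD

def scalar(r, s):         # "scalar multiplication"  r(s) := f(r) * s  in S
    return (f(r) * s) % MOD

def bracket(s, t):        # the bilinear product  [s, t] := s * t  in S
    return (s * t) % MOD

S = range(MOD)
R_sample = range(-5, 6)
for r in R_sample:
    for rp in R_sample:
        for s in S:
            for t in S:
                assert scalar(r + rp, s) == (scalar(r, s) + scalar(rp, s)) % MOD
                assert scalar(r, (s + t) % MOD) == (scalar(r, s) + scalar(r, t)) % MOD
                assert scalar(r, scalar(rp, s)) == scalar(r * rp, s)
                assert bracket(scalar(r, s), t) == scalar(r, bracket(s, t))
                assert bracket(s, scalar(r, t)) == scalar(r, bracket(s, t))
assert all(scalar(1, s) == s for s in S)
assert all(bracket(1, s) == s for s in S)   # 1_S is the identity for the bracket
```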

Now, conversely, suppose we have the alternate definition, and we wish to prove this is an $R$-algebra in the sense of Sharp. So we need to exhibit a (unity-preserving) ring homomorphism $f:R \to M$. Define:

$f: R \to M$ by $f(r) = r(1_M)$.

The proof that this is such a homomorphism is left to you.

(Note: the requirement that $S$ is commutative can actually be relaxed to the condition that $f(R) \subseteq Z(S)$, the center of $S$, and the above proofs remain the same. Can you spot where this is required?)

********

By dint of the unique homomorphism from $\Bbb Z$ to $R$, we see immediately that any ring is a $\Bbb Z$-algebra; the $\Bbb Z$-module structure comes from the fact that the additive group of a ring is an abelian group. Some care must be taken, however, when writing $\Bbb Z$-linear combinations of elements of $R$, because $R$ may be of finite characteristic.
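That caution can be made concrete with a tiny check (again my own illustration, using the arbitrary choice $R = \Bbb Z/6\Bbb Z$): distinct integers can act as the same scalar, so $\Bbb Z$-linear combinations collapse.

```python
# Illustrative only: the Z-scalar action n.a = n(1_R) * a in R = Z/6Z.
MOD = 6

def z_scalar(n, a):
    return (n * a) % MOD

a, b = 5, 4
assert z_scalar(6, a) == 0                    # 6.a = 0_R although 6 != 0 in Z
assert z_scalar(7, a) == z_scalar(1, a)       # 7 and 1 act identically on R
assert (z_scalar(2, a) + z_scalar(9, b)) % MOD == (z_scalar(2, a) + z_scalar(3, b)) % MOD
```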

Every $\Bbb Z$-algebra *is* a $\Bbb Z$-module, but not every $\Bbb Z$-module has a bilinear product associated with it (in much the same way as some vector spaces have a multiplication defined on them, but not every vector space does).

The polynomial ring $R[x]$ is, in fact, the free $R$-algebra generated by the set $\{x\}$. This is what is expressed by the theorem:

Let $R$ be any commutative unital ring with $a \in R$. Then there is a unique $R$-algebra homomorphism (that is, a ring homomorphism which is the identity on the constant polynomials):

$\phi_a:R[x] \to R$

such that if $f:\{x\} \to \{a\}$ is defined the only way possible, and $\iota: \{x\} \to R[x]$ is the inclusion map, then:

$\phi_a \circ \iota = f$

Such a theorem is called a "universal property", because it says $R[x]$ represents the most efficient way of creating an $R$-algebra out of the single element set $\{x\}$.

The homomorphism $\phi_a$ is often called "the evaluation map at $a$": for a given polynomial $p(x) \in R[x]$, it returns the element $p(a)$ of $R$, obtained by substituting $a$ for $x$. It may be an interesting exercise for you to show that $\phi_a$ is completely determined by its value at $x$.
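To make the evaluation map tangible, here is a short sketch (my own, with the arbitrary illustrative choice $R = \Bbb Z/7\Bbb Z$) in which a polynomial is a list of coefficients and $\phi_a$ is evaluation at $a$; the checks confirm that $\phi_a$ respects addition and multiplication, sends $x$ to $a$, and fixes the constants:

```python
# Illustrative sketch of the evaluation map phi_a : R[x] -> R for R = Z/7Z.
# A polynomial is stored as its coefficient list [c0, c1, c2, ...],
# meaning c0 + c1*x + c2*x^2 + ...

MOD = 7   # hypothetical choice of R, purely for illustration

def phi(coeffs, a):
    """Evaluate the polynomial at a (Horner's rule), i.e. form p(a) in R."""
    result = 0
    for c in reversed(coeffs):
        result = (result * a + c) % MOD
    return result

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [(x + y) % MOD for x, y in zip(p, q)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] = (out[i + j] + x * y) % MOD
    return out

a = 3
p = [2, 0, 5]   # 2 + 5x^2
q = [1, 4]      # 1 + 4x

# phi_a respects + and *, sends x to a, and fixes the constant polynomials:
assert phi(poly_add(p, q), a) == (phi(p, a) + phi(q, a)) % MOD
assert phi(poly_mul(p, q), a) == (phi(p, a) * phi(q, a)) % MOD
assert phi([0, 1], a) == a
assert all(phi([c], a) == c for c in range(MOD))
```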
 