MHB Rings of the form R[X] - Ring Adjunction

Math Amateur
I am reading R. Y. Sharp's book: "Steps in Commutative Algebra".

On page 6 in 1.11 Lemma, we have the following: [see attachment]

"Let S be a subring of the ring R, and let \Gamma be a subset of R.

Then S[ \Gamma ] is defined as the intersection of all subrings of R which contain S and \Gamma.

Thus, S[ \Gamma ] is a subring of R which contains both S and \Gamma, and it is the smallest such subring of R in the sense that it is contained in every other subring of R that contains S and \Gamma.

In the special case in which \Gamma is a finite set \{ \alpha_1, \alpha_2, ... ... , \alpha_n \} we write S[ \Gamma ] as S [ \alpha_1, \alpha_2, ... ... , \alpha_n ].

In the special case in which S is commutative, and \alpha \in R is such that \alpha s = s \alpha for all s \in S we have

S[ \alpha ] = \{ \ {\sum}_{i = 0}^{t} s_i \alpha^i : t \in {\mathbb{N}}_0 \ s_0, s_1, ... ... , s_t \in S \} ......... (1)------------------------------------------------------------------------------------------------------------------------------------

Then on page 7 Sharp writes:

Note that when R is a commutative ring and X is an indeterminate, then it follows from 1.11 Lemma that our earlier use of R[X] to denote the polynomial ring is consistent with this new use of R[X] to denote 'ring adjunction'.

-------------------------------------------------------------------------------------------------------------------------------------

Now in the polynomial ring R[X] we take ring elements a_0, a_1, ... ... , a_n \in R and use an indeterminate x (whatever that is?) to form sums like the following:

a_n x^n + a_{n-1} x^{n-1} + ... ... + a_1 x + a_0 ......... (2)

My problems are as follows:

(a) It looks like (1) and (2) have the same structure BUT \alpha is a member of the ring R, whereas x is not a member of R but is an "indeterminate" [maybe I am overthinking this and it does not matter??] Can someone please clarify this matter?

(b) Again, (1) and (2) seem to have the same structure BUT a_0, a_1, ... ... , a_n are just elements of R - whereas s_0, s_1, ... ... , s_t are elements of a subring S. Does this matter? Can someone please clarify?

(c) Sharp specifies that S has to be commutative - but why? I cannot see how this is needed in his proof at the bottom of page 6. Can someone help?

I would be grateful if someone can clarify the above.

Peter

[Note: This has also been posted on MHF]
 
We always have the (canonical) surjective ring-homomorphism:

[math]\phi: S[x] \to S[\alpha] \subseteq R[/math] given by:

[math]\phi(f(x)) = f(\alpha)[/math].

If [math]\alpha[/math] is transcendental over [math]S[/math] (that is, there is NO non-zero polynomial [math]p(x)[/math] in [math]S[x][/math] for which [math]p(\alpha) = 0[/math]), then this is an isomorphism of [math]S[x][/math] with [math]S[\alpha][/math].
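The homomorphism property of \phi can be checked concretely (my own sketch, with hypothetical names `poly_mul` and `evaluate`): represent a polynomial over \Bbb Z as its coefficient list [s_0, s_1, ..., s_t], and verify that evaluating a product equals the product of the evaluations, using exact rational arithmetic to avoid floating-point doubt:

```python
from fractions import Fraction

# Polynomials over Z as coefficient lists [s0, s1, ..., st], meaning sum s_i * x^i.

def poly_mul(f, g):
    # convolution of coefficient lists = polynomial multiplication
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def evaluate(f, alpha):
    # phi(f) = f(alpha), computed by Horner's rule
    acc = Fraction(0)
    for c in reversed(f):
        acc = acc * alpha + c
    return acc

f = [1, 2]       # 1 + 2x
g = [3, 0, 1]    # 3 + x^2
alpha = Fraction(1, 2)

# phi is multiplicative: phi(f*g) = phi(f) * phi(g)
assert evaluate(poly_mul(f, g), alpha) == evaluate(f, alpha) * evaluate(g, alpha)
```

(The additive property \phi(f+g) = \phi(f) + \phi(g) is checked the same way; together these are exactly what makes \phi a ring-homomorphism.)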

Perhaps an example might illustrate this phenomenon more clearly:

let [math]R = \Bbb R[/math], the ring (actually a field) of real numbers, let [math]S = \Bbb Z[/math], the sub-ring of integers, and let [math]\alpha = \pi[/math] (the ratio of a circle's circumference to its diameter). We then obtain a ring isomorphism between integral polynomials in a single variable (indeterminate) and "integral polynomials in [math]\pi[/math]" (which form a sub-ring of the real numbers).

In fact, as we let [math]\alpha[/math] range over the real numbers, we obtain different sub-rings of [math]\Bbb R[/math]...because the quotient ring [math]\Bbb Z[x]/(\text{ker}(\phi))[/math] will have different properties depending on how big the kernel is.

We desire [math]S[/math] to commute with [math]\alpha[/math] because then the usual "high-school" rules of polynomial multiplication hold (otherwise we would have to keep track of "coefficients on both sides" of the "indeterminates"). While this is certainly *possible* to imagine, it greatly increases the difficulty of working with such expressions. One simple way to ensure this desirable state of affairs happens is to require that [math]S[/math] lies within the center of the larger ring [math]R[/math].
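Here is a concrete instance of the "coefficients on both sides" problem (my own illustration, with a hypothetical `mat_mul` helper): in the noncommutative ring of 2x2 real matrices, a coefficient s outside the center does not commute with \alpha, so the terms s\alpha and \alpha s are genuinely different, while a scalar (central) matrix commutes with everything:

```python
# 2x2 real matrices as tuples of rows; M_2(R) is a noncommutative ring.

def mat_mul(x, y):
    return tuple(
        tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

s     = ((1, 1), (0, 1))   # a "coefficient" NOT in the center of M_2(R)
alpha = ((0, 1), (1, 0))   # the element we would adjoin

# s*alpha != alpha*s, so "polynomial" expressions in alpha with
# coefficient s would need coefficients tracked on both sides:
left  = mat_mul(s, alpha)   # ((1, 1), (1, 0))
right = mat_mul(alpha, s)   # ((0, 1), (1, 1))

# Scalar matrices lie in the center, and DO commute with alpha:
c = ((2, 0), (0, 2))
assert mat_mul(c, alpha) == mat_mul(alpha, c)
```

This is exactly why requiring S to lie in the center (or at least to commute with \alpha) restores the "high-school" rules.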

The subtle difference between [math]S[x][/math] and [math]S[\alpha][/math] is the kind of difference between a polynomial [math]p(x)[/math] as a formal expression and its value [math]p(\alpha)[/math] at a particular point [math]\alpha[/math]. In keeping with the spirit of this, the ring-homomorphism [math]\phi[/math] is often called the "evaluation morphism at [math]\alpha[/math]".

Another key example: the field of complex numbers can be considered "real polynomials in [math]i = \sqrt{-1}[/math]", all of which have degree less than two since:

[math]i^2 + 1 = 0[/math]

effectively means all the higher powers of [math]i[/math] can be reduced to either real numbers or real multiples of [math]i[/math], that is, complex numbers are ALL of the form:

[math]a + bi: a,b \in \Bbb R[/math]

that is:

[math]\Bbb C \cong \Bbb R[i][/math], which is in turn ring-isomorphic to the ring:

[math]\Bbb R[x]/(x^2 + 1)[/math].
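Multiplication in [math]\Bbb R[x]/(x^2+1)[/math] can be sketched directly (my own illustration, with a hypothetical `quot_mul` name): represent a coset by its degree-at-most-one representative a + bx, multiply, and apply the reduction rule x^2 = -1. The result agrees with ordinary complex multiplication:

```python
# Cosets in R[x]/(x^2 + 1) represented as pairs (a, b), meaning a + b*x,
# with the reduction rule x^2 = -1 built into multiplication.

def quot_mul(p, q):
    a, b = p
    c, d = q
    # (a + b*x)(c + d*x) = ac + (ad + bc)x + bd*x^2, then substitute x^2 = -1
    return (a * c - b * d, a * d + b * c)

p, q = (1.0, 2.0), (3.0, -1.0)
got = quot_mul(p, q)                  # (5.0, 5.0)
expected = complex(*p) * complex(*q)  # multiply in C for comparison: (5+5j)
```

That the two multiplications always agree is precisely the isomorphism [math]\Bbb C \cong \Bbb R[x]/(x^2+1)[/math], with x mapping to i.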

Perhaps more importantly (in the general scheme of things), if a ring [math]R[/math] contains a field within its center (say [math]F[/math]), the ring itself can be considered a vector space over [math]F[/math], using the ring-multiplication as the definition of "scalar multiple", which allows us to use tools of linear algebra to investigate the ring-structure of [math]R[/math]. Even if we just have a sub-ring of the center, we can still use many of the tools of module theory (which is "almost" linear algebra). One of the things mathematicians often seek to do is take some complicated structure, and find "simple building blocks" for it, with the hope that investigations into the simple building blocks will reveal heretofore hidden features of the overall structure.

So long story short: restricting our attention to commutative rings allows us to say a great deal more easily than we would be able to otherwise. If our guiding intuition for rings in general is the integers, abandoning commutativity leads to a loss of "too many things we feel ought to be true". Rest assured, there ARE those who venture into this brave world, but the special "commutative" case is *important*, and not just for historical reasons, but also because of its many applications.
 
Deveno said:
We always have the (canonical) surjective ring-homomorphism [math]\phi: S[x] \to S[\alpha] \subseteq R[/math] given by [math]\phi(f(x)) = f(\alpha)[/math] ... [full post quoted above]


Thanks Deveno, most helpful ...

I need to spend some time working through this and reflecting on what you have said ...

Thanks again,

Peter
 