MHB Ideals in Polynomial Rings - Knapp - page 146

  • Thread starter: Math Amateur
  • Tags: Polynomial Rings

Math Amateur
I am reading Anthony W. Knapp's book, Basic Algebra.

On page 146, in the section of Part IV (which is mainly on groups and group actions) that digresses onto rings and fields, we find a discussion of the nature of ideals in the polynomial rings $$\mathbb{Q}[X], \mathbb{R}[X], \mathbb{C}[X]$$.

In that discussion we find the text:

"... ... ... The equality $$ C(X) = A(X) - f(X)B(X) $$ shows that $$C(X)$$ is in $$I$$, and the minimality of deg f implies that $$C(X) = 0$$. ... ... ... "

Can someone please help me to understand why the minimality of deg f implies that $$C(X) = 0$$?

Peter
 
Peter said:
Can someone please help me to understand why the minimality of deg f implies that $$C(X) = 0$$?

Hi Peter,

By the division algorithm, there were two possibilities: $C(X) = 0$ or $\text{deg}\, C < \text{deg}\, f$. Since $C(X)$ is in $I$, if $C(X)$ were nonzero, then $\text{deg}\, C \ge \text{deg}\, f$ (since $f$ is an element of $I$ of smallest degree), contradicting the inequality $\text{deg}\, C < \text{deg}\, f$. Therefore $C(X) = 0$.
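To see the dichotomy concretely, here is a minimal SymPy sketch. The ideal $I = (X^2 - 2)$ in $\mathbb{Q}[X]$ and the polynomials below are hypothetical choices for illustration, not Knapp's example:

```python
from sympy import symbols, div

x = symbols('x')

# Suppose f has minimal degree among the nonzero elements of I = (x^2 - 2).
f = x**2 - 2

# A lies in I, since x^4 - 4 = (x^2 + 2)(x^2 - 2).
A = x**4 - 4

# Division algorithm: A = B*f + C, with C = 0 or deg C < deg f.
B, C = div(A, f, x)
print(B, C)        # x**2 + 2, 0  -- the remainder must vanish, as Knapp claims

# For a polynomial NOT in I, the remainder is nonzero but of degree < deg f:
q, r = div(x**3, f, x)
print(q, r)        # x, 2*x
```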
 
The text is just observing that, for a field $F$, $F[x]$ is Euclidean (with the "$d$" function being the degree of a polynomial), and any Euclidean domain is a principal ideal domain.

Indeed, in any Euclidean domain $D$, given an ideal $I$ of $D$, we have $I = (x)$ for some $x \in D$. The argument is just the same:

Suppose $I \neq (0)$, and choose $x \in I$ such that $d(x)$ is minimal (if $I = (0)$, we may simply take $x = 0$). Let $y$ be any other element of $I$. I claim $x|y$. Since $D$ is Euclidean, we may write:

$y = qx + r$ where $d(r) < d(x)$, or $r = 0$.

Now $r = y - qx \in I + DI = I + I = I$. Hence, by the minimality of $d(x)$, we must conclude $r = 0$, and $y = qx$.

So every element of $I$ is $ax$, for some $a \in D$, that is: $I = (x)$.
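As a concrete sanity check, in $\Bbb Q[x]$ an ideal given by several generators is generated by their gcd, which is exactly the minimal-degree element the proof selects. A SymPy sketch (the polynomials are hypothetical, chosen only for illustration):

```python
from sympy import symbols, gcd, rem

x = symbols('x')

# The ideal (x^3 - x, x^2 - 1) in Q[x]:
A = x**3 - x
B = x**2 - 1

g = gcd(A, B)
print(g)    # x**2 - 1 -- a single generator

# Both generators leave remainder 0 on division by g, so (A, B) is contained
# in (g); the Bezout identity (item 6 below) gives the reverse inclusion.
print(rem(A, g, x), rem(B, g, x))    # 0 0
```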

Euclidean domains are very "nice rings". Algebraically, they share many of the desirable features of the integers:

1) Unique factorization
2) No zero divisors
3) Simple ideal structure
4) Primes = irreducibles
5) A division algorithm, allowing for easy computation in quotient rings
6) The Bézout identity for GCDs (see the sketch after this list)
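For item 6, SymPy's extended Euclidean algorithm exhibits the Bézout coefficients directly. A sketch with hypothetical polynomials:

```python
from sympy import symbols, gcdex, expand

x = symbols('x')

f = x**2 + 1
g = x + 1

# Extended Euclidean algorithm in Q[x]: s*f + t*g = h = gcd(f, g)
s, t, h = gcdex(f, g, x)
print(s, t, h)                  # 1/2, 1/2 - x/2, 1
print(expand(s*f + t*g - h))    # 0, confirming the Bezout identity
```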

One of the remarkable features about POLYNOMIAL rings over a field is that they provide "additional structure" to the field, which turns out to be just enough "extra stuff" to create larger fields out of smaller fields.
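A minimal sketch of this "extra stuff", assuming the (hypothetical) modulus $X^2 - 2$: computing in $\Bbb Q[X]$ modulo $X^2 - 2$ reproduces arithmetic in the larger field $\Bbb Q(\sqrt{2})$.

```python
from sympy import symbols, rem, expand

x = symbols('x')
m = x**2 - 2    # irreducible over Q, so Q[x]/(m) is a field

# Multiply the cosets of 1 + x and 3 + 2x, then reduce mod m:
p = rem(expand((1 + x)*(3 + 2*x)), m, x)
print(p)    # 5*x + 7, mirroring (1 + sqrt(2))*(3 + 2*sqrt(2)) = 7 + 5*sqrt(2)
```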

In essence, what is happening here is the abstraction of the way "algebraic numbers" were added to the rational field. This process was set in motion by the observation of the Pythagorean identity for a right triangle:

$a^2 + b^2 = c^2$.

If $a = b = 1$ (as is the case with half a "unit square"), then $c$ is a square root of 2:

$1 + 1 = 2 = c^2$, that is:

$c^2 - 2 = 2 - 2 = 0$.

Algebraic expressions involving $\sqrt{2}$ were originally written as "formal sums": the expression

$a+b\sqrt{2}$ was held to be "non-simplifiable". It was taken as "obvious" that such things referred to an actual "magnitude", that magnitudes could be algebraically manipulated, and that they were "numbers" (and formed a field, although they were not called that). Nowadays the modern term is "adjunction", and it presupposes we can create an even larger structure (in this case, the algebraic CLOSURE of the rational numbers) in which all such expressions live.
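One way to see that such formal sums really do form a field: since $X^2 - 2$ is irreducible over $\Bbb Q$, the Bézout identity produces an inverse for every nonzero coset in $\Bbb Q[X]/(X^2 - 2)$. A sketch, continuing the hypothetical $\sqrt{2}$ example from above:

```python
from sympy import symbols, gcdex, rem, expand

x = symbols('x')
m = x**2 - 2

# Invert the coset of 1 + x (i.e., the number 1 + sqrt(2)):
s, t, h = gcdex(1 + x, m, x)
print(s, h)    # x - 1, 1

# Check: (1 + x)(x - 1) = x^2 - 1 = (x^2 - 2) + 1, which is 1 mod m.
print(rem(expand(s*(1 + x)), m, x))    # 1, so 1/(1 + sqrt(2)) = sqrt(2) - 1
```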

Almost everything I have written here was certainly known to someone like Euler (and most of it to Euclid), but they would not have had the same VOCABULARY. It would not be until Évariste Galois investigated solutions of polynomials IN GENERAL that "shuffling of roots" would come to be seen as the most salient feature involved with SOLVING polynomials. (A vestige of this, or perhaps a hint of things to come, is found when high-school students learn to "rationalize" the denominators of things like

$\dfrac{x}{\sqrt{2} + \sqrt{3}}$

by multiplying by "the conjugate", which is another root of the SAME polynomial that $\sqrt{2} + \sqrt{3}$ satisfies.)
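Worked out, the computation is

$\dfrac{x}{\sqrt{2} + \sqrt{3}} = \dfrac{x(\sqrt{3} - \sqrt{2})}{(\sqrt{3} + \sqrt{2})(\sqrt{3} - \sqrt{2})} = \dfrac{x(\sqrt{3} - \sqrt{2})}{3 - 2} = x(\sqrt{3} - \sqrt{2})$

and indeed $\sqrt{3} - \sqrt{2}$ is another root of $X^4 - 10X^2 + 1$, the minimal polynomial of $\sqrt{2} + \sqrt{3}$ over $\Bbb Q$.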

It would be some time after that before it was realized that such "shufflings" captured something ESSENTIAL about "reversible algebraic operations" (solving equations), including our two favorite such operations: addition and multiplication.

Addition can be seen as the "abstraction" of COUNTING, and multiplication as the "abstraction" of TRANSFORMING, or more precisely:

Abelian groups are $\Bbb Z$-modules, and every monoid can be realized as a monoid of transformations of some set (with groups corresponding to the invertible transformations). In this sense, one can honestly say:

"Integers are the single most important algebraic structure there is, to understand anything else, you must know them".

When one learns long division, one is doing "deep ring theory", although that is hardly apparent at the time. It is little wonder it proves so difficult the first time around.
 
Euge said:
By the division algorithm, there were two possibilities: $C(X) = 0$ or $\text{deg}\, C < \text{deg}\, f$. ... Therefore $C(X) = 0$.

Thanks Euge ... yes, a simple implication of the Euclidean division algorithm ... hmm ... I should have seen it ...

Thanks again for the help ...

Peter


Deveno said:
The text is just observing that, for a field $F$, $F[x]$ is Euclidean (with the "$d$" function being the degree of a polynomial), and any Euclidean domain is a principal ideal domain. ...

Thanks so much for the extensive help Deveno ... your help is much appreciated in my goal to understand ring and module theory ...

Working through this now and trying to ensure that I fully understand both what you say and the implications of what you say ...

Thanks again,

Peter
 