Why Is \(a\) a Unit in \(F[x]\) in Lemma 2 of Nicholson's Book?


Discussion Overview

The discussion centers on the reasoning behind why a nonzero element \(a\) of a field \(F\) is a unit in the polynomial ring \(F[x]\), as referenced in Lemma 2 of W. Keith Nicholson's book on Abstract Algebra. Participants explore the implications of \(a\) being a unit in \(F\) and its relationship to \(F[x]\), including definitions and properties of fields and polynomial rings.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Peter questions why \(a\) is a unit in \(F[x]\) if it is a unit in \(F\), seeking a formal explanation.
  • Some participants argue that since \(a \in F\) and \(F \subset F[x]\), \(a\) must also be in \(F[x]\), and being non-zero in \(F\) implies it is a unit in \(F\), and hence in \(F[x]\).
  • It is noted that the identity element in \(F[x]\) is the constant polynomial \(1\), and if \(a\) is a unit in \(F\), then there exists a constant polynomial \(\frac{1}{a}\) in \(F[x]\) such that \(a \cdot \frac{1}{a} = 1\).
  • Peter suggests that the argument may depend on a homomorphism between \(F\) and \(F[x]\), questioning how this relates to the unit status of \(a\).
  • Another participant emphasizes that by definition of a field, every non-zero element is invertible, thus a unit.
  • Discussion includes the concept of isomorphism between \(F\) and the constant polynomials in \(F[x]\), indicating that units in \(F\) map to units in \(F[x]\).
  • One participant elaborates on the structure of fields and rings, discussing the properties of units and the implications for polynomial rings.

Areas of Agreement / Disagreement

Participants agree that \(a\) being a unit in \(F\) implies it is also a unit in \(F[x]\); the remaining discussion concerns how to justify this formally, in particular via the inclusion of \(F\) into \(F[x]\) as constant polynomials.

Contextual Notes

The discussion works through definitions and properties of fields and polynomial rings. The justification ultimately rests on the fact that \(F\) sits inside \(F[x]\) as the constant polynomials, so that units in \(F\) correspond to units in \(F[x]\); the thread also touches on the relevant homomorphisms and isomorphisms and on why no polynomial of positive degree can be a unit.

Math Amateur
I am reading W. Keith Nicholson's book: Introduction to Abstract Algebra (Third Edition) ...

I am focused on Section 4.3: Factor Rings of Polynomials over a Field.

I need some help with the proof of Lemma 2 on pages 223-224.

The relevant text from Nicholson's book is as follows: https://www.physicsforums.com/attachments/4634

In the above text we read the following:

" ... ... To see that it is one-to-one let $$\theta (a) = \overline{0}.$$ Then $$\overline{a} = \overline{0}$$ so $$a + A = 0 + a$$; that is $$a \in A$$. If $$a \ne 0$$, then $$A = F[x]$$, because $$a$$ is a unit in $$F[x]$$ ... ... "I cannot see why a is a unit in F[x] ... can someone please explain why this is the case ...

... it may be quite simple ... but anyway, I hope someone can help ...

Peter
 
Peter said:
I cannot see why a is a unit in F[x] ... can someone please explain why this is the case ...

Since $a \in F$ and $F\subset F[x]$, then $a\in F[x]$. Further, $a \neq 0$ in the field $F$ implies $a$ is a unit in $F$. So $a$ must be a unit in $F[x]$ (explicitly, $ab = 1$ in $F$ implies $ab = 1$ in $F[x]$).
 
The identity element in $F[x]$ is the constant polynomial $1$.

Of course, if $a \in F$ is a unit (i.e., non-zero), then that means that $\dfrac{1}{a} \in F$ (and thus in $F[x]$ as well, as a constant polynomial), and we have:

$a\cdot \dfrac{1}{a} = 1$ in $F[x]$.
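
To see that nothing changes when we pass from $F$ to $F[x]$, one can spell this out at the level of coefficients (this is just the degree-zero case of the usual product formula for polynomials, nothing specific to Nicholson's text): writing the constant polynomials out in full,

$$\left(a + 0x + 0x^2 + \cdots\right)\left(a^{-1} + 0x + 0x^2 + \cdots\right) = a\,a^{-1} + 0x + 0x^2 + \cdots,$$

which is the constant polynomial $1$, i.e. the identity of $F[x]$.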
 
Deveno said:
The identity element in $F[x]$ is the constant polynomial $1$.

Of course, if $a \in F$ is a unit (i.e., non-zero), then that means that $\dfrac{1}{a} \in F$ (and thus in $F[x]$ as well, as a constant polynomial), and we have:

$a\cdot \dfrac{1}{a} = 1$ in $F[x]$.
My thanks to Deveno and Euge for their help ...

Peter

- - - Updated - - -

Euge said:
Since $a \in F$ and $F\subset F[x]$, then $a\in F[x]$. Further, $a \neq 0$ in the field $F$ implies $a$ is a unit in $F$. So $a$ must be a unit in $F[x]$ (explicitly, $ab = 1$ in $F$ implies $ab = 1$ in $F[x]$).

Euge,

Maybe it is obvious somehow ... BUT ... formally and rigorously ...

... how would you show that if $$a$$ is a unit in $$F$$ then $$a$$ is a unit in $$F[x]$$ ...

... ... why, exactly is this the case ...

Can you help?

Peter

*** EDIT/NOTE ***

Maybe the argument depends on there being a homomorphism $$\phi$$ between $$F$$ and $$F[x]$$ ... so that if we have

If $$ab = 1$$ in $$F$$

then

$$\phi (ab) = \phi(a) \phi(b) = \phi(1)$$ in $$F[x]$$

Is that correct?

Peter
 
Peter, by definition of a field (in fact, an axiom for a field), every nonzero element of a field is invertible, i.e. a unit.
 
Euge said:
Peter, by definition of a field (in fact, an axiom for a field), every nonzero element of a field is invertible, i.e. a unit.
Hi Euge,

Yes, I understand that because $$a$$ is a nonzero element of the field $$F$$, it is a unit in $$F$$ ... but I was puzzled as to why it is then necessarily also a unit in $$F[x]$$ ... I suspect that it is a unit in $$F[x]$$ because there is a homomorphism $$\phi: F \rightarrow F[x]$$ ...

See my post above ...

Thanks again for your help ... I really appreciate your support ...

Peter
 
I wrote in my last post

"explicitly, $ab = 1$ in $F$ implies $ab = 1$ in $F[x]$"

the simple reason being $F$ is a subset of $F[x]$.
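
In the language of the homomorphism Peter mentions, the point is this (a sketch, using only the standard inclusion of $F$ into $F[x]$ as constant polynomials): the map $i : F \to F[x]$ with $i(a) = a$ is a ring homomorphism satisfying $i(1_F) = 1_{F[x]}$, and any such map carries units to units, since

$$ab = 1_F \;\Longrightarrow\; i(a)\,i(b) = i(ab) = i(1_F) = 1_{F[x]},$$

so $i(b)$ is a multiplicative inverse of $i(a)$ in $F[x]$.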
 
A field is "two (abelian) groups in one".

Explicitly, $(F,+)$ is an abelian group, and $(F - \{0\},\cdot)$ is an abelian group.

We further require that the map $L_a: F \to F$ ("left multiplication by $a$") given by $L_a(b) = a\cdot b$ is an $(F,+)$-homomorphism. This is the familiar distributive law:

$L_a(b+c) = L_a(b) + L_a(c)$, or: $a\cdot (b+c) = a\cdot b + a\cdot c$.

This is a "compatibility" requirement, ensuring our two operations play nice together.

A ring is similar: we still require an additive abelian group. But now, we do not require that $(R - \{0\},\cdot)$ be an abelian group, rather only that $(R,\cdot)$ be a simpler structure, known as a semigroup.

We still require $L_a$ be additive (an additive group homomorphism). However, since semigroups are not, typically, commutative (abelian), we also have to make this requirement of $R_a: R \to R$, where $R_a(b) = b\cdot a$.

Life is easier for us, of course, if $(R,\cdot)$ is commutative, and easier still if $(R,\cdot)$ forms a "commutative semigroup with identity" (the proper term is "commutative monoid").

As in groups, the identity element of a monoid is unique. If the multiplicative semigroup of a ring $R$ forms a commutative monoid, we call $R$ a commutative ring with unity.

Any monoid $M$ (whether in a ring or not) has a group associated with it - its "group of units" (usually written $U(M)$). These are the "invertible" elements of $M$, that is, those elements $u$ of $M$ for which there exists $v \in M$ such that:

$uv = vu = 1_M$.

It is easy to see that the group of units actually forms a sub-monoid of $M$ (which is why we can make a group from them - we have closure: if $u,v \in U(M)$, then $uv$ has the inverse $v^{-1}u^{-1}$ and so is likewise in $U(M)$).
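
Explicitly, the closure claim is just this associativity computation:

$$(uv)(v^{-1}u^{-1}) = u(vv^{-1})u^{-1} = u\,1_M\,u^{-1} = uu^{-1} = 1_M,$$

and similarly $(v^{-1}u^{-1})(uv) = 1_M$, so $uv \in U(M)$.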

In rings, all the "action" is in the multiplicative semigroup. Since rings without unity (where we have *just* a semigroup) can behave somewhat perversely, we usually insist rings have a multiplicative monoid (but this is a point of some contention, even amongst mathematicians).

In rings, one of our "goals" is to "divide as much as we can". We can't *always* divide, like we can in fields, but at least the ability to establish $a|b$ (that is: $b = ac$) lets us "break down" things into hopefully "simpler things" (N.B., this doesn't always work very well). In polynomial rings, this process is called "factoring".

Now the multiplicative identity of a field, $1_F$, is *still* the multiplicative identity of the ring $F[x]$, that is:

$f(x)\cdot 1 = 1\cdot f(x) = f(x)$, for all $f(x) \in F[x]$.

If $a \neq 0 \in F$, by the very definition of a field, $a$ is a unit in $F$: we have $U(F) = F -\{0\}$.

Note that the map $i: F \to F[x]$ given by $i(a) = a$ is an injective ring-homomorphism. This means, essentially, that "constant polynomials" are a COPY of the field $F$ inside the polynomial ring $F[x]$.

Since $i(F)$ is thus *isomorphic* to $F$, it maps units to units. This isomorphism $i$ is "so simple" it's practically transparent, like the emperor's new clothes. In other words, if I instruct you to add $2$ to $x + 4$, and you say, "wait, the NUMBER $2$, or the (constant) polynomial $2$?", I might just smack you.

A final example: in $\Bbb Q[x]$, we have $g(x) = 4$ is a unit. Why? Because the constant polynomial $f(x) = \dfrac{1}{4}$ is also in $\Bbb Q[x]$ and:

$f(x)g(x) = \dfrac{1}{4}\cdot 4 = 1$ (although our product is the "constant polynomial" $1$, remember we IDENTIFY $1$ and $i(1)$ because $i$ is an isomorphism onto its image).

Here is another way to look at it:

Consider the evaluation map $\phi_0: F[x] \to F$ given by $\phi_0(f(x)) = f(0)$. This is a ring homomorphism, so:

$\phi_0(f(x)g(x)) = \phi_0(f(x))\phi_0(g(x))$, that is:

$(fg)(0) = f(0)g(0)$.

If $fg = 1$, then $f(0)g(0) = 1$, so the constant terms of $f$ and $g$ are units in $F$ (non-zero). We multiply the constant terms of $f$ and $g$ together to get the constant term of $fg$.

Note that $\phi_0 \circ i = \text{id}_F$, the identity map on $F$, and that $i \circ \phi_0$ restricts to the identity map on $i(F)$.

Now, if $f(x)$ "has $x$'s in it", it is NOT invertible (not a unit of $F[x]$). Mostly this is because $\dfrac{1}{x} \not\in F[x]$. But the "rigorous" way to argue is:

If $\text{deg}(f) > 0$ and $fg = 1$, then $\text{deg}(fg) > 0$. But $\text{deg}(1) = 0$, contradiction.
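
The step $\text{deg}(fg) > 0$ uses the degree formula over a field (note $g \neq 0$, since $fg = 1$; the leading coefficients of $f$ and $g$ are nonzero, and $F$ has no zero divisors, so their product is nonzero):

$$\text{deg}(fg) = \text{deg}(f) + \text{deg}(g) \geq \text{deg}(f) > 0.$$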

Since $0$ (the $0$-polynomial) is never a unit (in ANY ring), the only possible units in $F[x]$ are the 0-degree polynomials, that is, the elements of $F^{\ast}$. In short:

$U(F[x]) = U(F)$.
 
Deveno said:
A field is "two (abelian) groups in one".

Explicitly, $F(,+)$ is an abelian group, and $(F - \{0\},\cdot)$ is an abelian group.

We further require that the map $L_a: F \to F$ ("left multiplication by $a$") given by $L_a(b) = a\cdot b$ is an $(F,+)$-homomorphism. This is the familiar distributive law:

$L_a(b+c) = L_a(b) + L_a(c)$, or: $a\cdot (b+c) = a\cdot b + a\cdot c$.

This is a "compatibility" requirement, ensuring our two operations play nice together.

A ring is similar, we still require an additive abelian group. But now, we do not require that $(R - \{0\},\cdot)$ be an abelian group, rather only that $(R,\cdot)$ be a simpler structure, known as a semigroup.

We still require $L_a$ be additive (an additive group homomorphism). However, since semigroups are not, typically, commutative (abelian), we also have to make this requirement of $R_a: R \to R$, where $R_a(b) = b\cdot a$.

Life is easier for us, of course, if $(R,\cdot)$ is commutative, and easier still if $(R,\cdot)$ forms a "commutative semigroup with identity" (the proper term is "commutative monoid").

As in groups, the identity element of a monoid is unique. If the multiplicative semigroup of a ring $R$ forms a commutative monoid, we call $R$ a commutative ring with unity.

Any monoid $M$(whether in a ring, or not) has a group associated with it-its "group of units" (uusally written $U(M)$). These are the "invertible" elements of $M$, that is, those elements $u$ of $M$ for which there exists $v \in M$ such that:

$uv = vu = 1_M$.

It is easy to see that the group of units actually form a sub-monoid of $M$ (which is why we can make a group from them-we have closure: if $u,v \in U(M)$, then $uv$ has the inverse $v^{1}u^{-1}$ and so is likewise in $U(M)$).

In rings, all the "action" is in the multiplicative semigroup. Since rings without unity (where we have *just* a semi-group) can behave somewhat perversely, we usually insist rings have a multiplicative monoid (but this is a point of some contention, even amognst mathematicians).

In rings, one of our "goals" is to "divide as much as we can". We can't *always* divide, like we can in fields, but at least the ability to establish $a|b$ (that is: $b = ac$) let's us "break down" things into hopefully "simpler things" (N.B., this doesn't always work very well). In polynomial rings, this process is called "factoring".

Now the multiplicative identity of a field, $1_F$ is *still" the multiplicative identity of the ring $F[x]$, that is:

$f(x)\cdot 1 = 1\cdot f(x) = f(x)$, for all $f(x) \in F[x]$.

If $a \neq 0 \in F$, by the very definition of a field, $a$ is a unit in $F$: we have $U(F) = F -\{0\}$.

Note that the map $i: F \to F[x]$ given by $i(a) = a$ is an injective ring-homomorphism. This means, essentially, that "constant polynomials" are a COPY of the field $F$ inside the polynomial ring $F[x]$.

Since $i(F)$ is thus *isomorphic* to $F$, it maps units to units. This isomorphism $i$ is "so simple" it's practically transparent, like the emperor's new clothes. In other words, if I instruct you to add $2$ to $x + 4$, and you say, "wait, the NUMBER $2$, or the (constant) polynomial $2$?", I might just smack you.

A final example: in $\Bbb Q[x]$, we have $g(x) = 4$ is a unit. Why? Because the constant polynomial $f(x) = \dfrac{1}{4}$ is also in $\Bbb Q[x]$ and:

$f(x)g(x) = \dfrac{1}{4}\cdot 4 = 1$ (although our product is the "constant polynomial' $1$, remember we IDENTIFY $1$ and $i(1)$ because $i$ is an isomorphism).

Here is another way to look at it:

consider the evaluation map $\phi_0: F[x] \to F$, This is a ring homomorphism, so:

$\phi_0(f(x)g(x)) = \phi_0(f(x))\phi_0(g(x))$, that is:

$(fg)(0) = f(0)g(0)$.

If $fg = 1$, then $f(0)g(0) = 1$, so the constant terms of $f$ and $g$ are units in $F$ (non-zero). We multiply the constant terms of $f$ and $g$ together to get the constant term of $fg$.

Note that $\phi_0 \circ i = \text{id}_F$, the identity map on $F$, and that $i \circ \phi_0$ is the identity map on $i(F)$.

Now, if $f(x)$ "has $x$'s in it", it is NOT invertible (not a unit of $F[x]$). Mostly this is because $\dfrac{1}{x} \not\in F[x]$). But the "rigorous" way to argue is:

If $\text{deg}(f) > 0$ and $fg = 1$, then $\text{deg}(fg) > 0$. But $\text{deg}(1) = 0$, contradiction.

Since $0$ (the $0$-polynomial) is never a unit (in ANY ring), the only possible units in $F[x]$ are the 0-degree polynomials, that is, the elements of $F^{\ast}$. In short:

$U(F[x]) = U(F)$.

This is going off topic, but after reading this, it made me think it's long overdue for us to write an algebra guide together in the commentary section! (Rofl)
 
Euge said:
This is going off topic, but after reading this, it made me think it's long overdue for us to write an algebra guide together in the commentary section! (Rofl)
That would be awesome!
 
Deveno said:
A field is "two (abelian) groups in one".

Explicitly, $F(,+)$ is an abelian group, and $(F - \{0\},\cdot)$ is an abelian group.

We further require that the map $L_a: F \to F$ ("left multiplication by $a$") given by $L_a(b) = a\cdot b$ is an $(F,+)$-homomorphism. This is the familiar distributive law:

$L_a(b+c) = L_a(b) + L_a(c)$, or: $a\cdot (b+c) = a\cdot b + a\cdot c$.

This is a "compatibility" requirement, ensuring our two operations play nice together.

A ring is similar, we still require an additive abelian group. But now, we do not require that $(R - \{0\},\cdot)$ be an abelian group, rather only that $(R,\cdot)$ be a simpler structure, known as a semigroup.

We still require $L_a$ be additive (an additive group homomorphism). However, since semigroups are not, typically, commutative (abelian), we also have to make this requirement of $R_a: R \to R$, where $R_a(b) = b\cdot a$.

Life is easier for us, of course, if $(R,\cdot)$ is commutative, and easier still if $(R,\cdot)$ forms a "commutative semigroup with identity" (the proper term is "commutative monoid").

As in groups, the identity element of a monoid is unique. If the multiplicative semigroup of a ring $R$ forms a commutative monoid, we call $R$ a commutative ring with unity.

Any monoid $M$(whether in a ring, or not) has a group associated with it-its "group of units" (uusally written $U(M)$). These are the "invertible" elements of $M$, that is, those elements $u$ of $M$ for which there exists $v \in M$ such that:

$uv = vu = 1_M$.

It is easy to see that the group of units actually form a sub-monoid of $M$ (which is why we can make a group from them-we have closure: if $u,v \in U(M)$, then $uv$ has the inverse $v^{1}u^{-1}$ and so is likewise in $U(M)$).

In rings, all the "action" is in the multiplicative semigroup. Since rings without unity (where we have *just* a semi-group) can behave somewhat perversely, we usually insist rings have a multiplicative monoid (but this is a point of some contention, even amognst mathematicians).

In rings, one of our "goals" is to "divide as much as we can". We can't *always* divide, like we can in fields, but at least the ability to establish $a|b$ (that is: $b = ac$) let's us "break down" things into hopefully "simpler things" (N.B., this doesn't always work very well). In polynomial rings, this process is called "factoring".

Now the multiplicative identity of a field, $1_F$ is *still" the multiplicative identity of the ring $F[x]$, that is:

$f(x)\cdot 1 = 1\cdot f(x) = f(x)$, for all $f(x) \in F[x]$.

If $a \neq 0 \in F$, by the very definition of a field, $a$ is a unit in $F$: we have $U(F) = F -\{0\}$.

Note that the map $i: F \to F[x]$ given by $i(a) = a$ is an injective ring-homomorphism. This means, essentially, that "constant polynomials" are a COPY of the field $F$ inside the polynomial ring $F[x]$.

Since $i(F)$ is thus *isomorphic* to $F$, it maps units to units. This isomorphism $i$ is "so simple" it's practically transparent, like the emperor's new clothes. In other words, if I instruct you to add $2$ to $x + 4$, and you say, "wait, the NUMBER $2$, or the (constant) polynomial $2$?", I might just smack you.

A final example: in $\Bbb Q[x]$, we have $g(x) = 4$ is a unit. Why? Because the constant polynomial $f(x) = \dfrac{1}{4}$ is also in $\Bbb Q[x]$ and:

$f(x)g(x) = \dfrac{1}{4}\cdot 4 = 1$ (although our product is the "constant polynomial' $1$, remember we IDENTIFY $1$ and $i(1)$ because $i$ is an isomorphism).

Here is another way to look at it:

consider the evaluation map $\phi_0: F[x] \to F$, This is a ring homomorphism, so:

$\phi_0(f(x)g(x)) = \phi_0(f(x))\phi_0(g(x))$, that is:

$(fg)(0) = f(0)g(0)$.

If $fg = 1$, then $f(0)g(0) = 1$, so the constant terms of $f$ and $g$ are units in $F$ (non-zero). We multiply the constant terms of $f$ and $g$ together to get the constant term of $fg$.

Note that $\phi_0 \circ i = \text{id}_F$, the identity map on $F$, and that $i \circ \phi_0$ is the identity map on $i(F)$.

Now, if $f(x)$ "has $x$'s in it", it is NOT invertible (not a unit of $F[x]$). Mostly this is because $\dfrac{1}{x} \not\in F[x]$). But the "rigorous" way to argue is:

If $\text{deg}(f) > 0$ and $fg = 1$, then $\text{deg}(fg) > 0$. But $\text{deg}(1) = 0$, contradiction.

Since $0$ (the $0$-polynomial) is never a unit (in ANY ring), the only possible units in $F[x]$ are the 0-degree polynomials, that is, the elements of $F^{\ast}$. In short:

$U(F[x]) = U(F)$.
Thanks so much Deveno ... I have worked through your post several times ... it is extremely instructive and helpful ...

I think I have said this before ... but I'll say it again ... if you ever write a textbook on abstract algebra ... I will be lining up to buy it!

Thanks again for such a helpful post ...

Clearly, you should be a teacher ...

Peter
 
