How to show ##p(x)=g(x)x\pm 1\in\Bbb{Q}[x]## is irreducible in ##\Bbb{Q}_{\Bbb{Z}}[x]##?

  • Thread starter: elias001
  • Tags: Abstract algebra
  • #31
@fresh_42 When the question asks to show irreducibility of an element in a polynomial ring, one of the criteria is that such an element has to be a non-unit. Since a non-unit is a product of either two non-units or of a unit and a non-unit, and for irreducibles it is the latter case, I focused so much on non-unit elements. I thought that would be a more direct way of solving the problem. If we can show that a polynomial ##p(x)\in\Bbb{Q}[x]## with ##p(0)=\pm 1## is a non-unit element, then in effect it is irreducible. We can't simply assume that the polynomials ##p(x)\in\Bbb{Q}_{\Bbb{Z}}[x]## with ##p(0)=\pm 1## are non-units in ##\Bbb{Q}[x]## and are the only ones; that would require proof. Sorry, I should have explained more about my train of thought, what specifically I was trying to do, and what difficulties I am running into. Also, I just want to make sure I am not missing any minor details that I don't understand, or small details I thought I understood completely but really don't because I overlooked something in a definition.

By the way, I created three posts, are they too long or too confusing?



 
  • #32
elias001 said:
@fresh_42 When the question asks to show irreducibility of an element in a polynomial ring, one of the criteria is that such an element has to be a non-unit.
Yes, but that should not be your focus. The focus is on irreducibility. If ##r=a\cdot b## and ##a## (or ##b##) is a unit, then ##r=a\cdot b## isn't a "proper" factorization because ##r\sim a^{-1}r=b## and ##a^{-1}r## is more or less indistinguishable from ##r## in terms of factors. Units don't change the situation.

Units are necessary to come up with a contradiction: Assume ##r## is not irreducible. Then ##r=a\cdot b.## Then follow some calculations, and we arrive at an equation ##r=a'\cdot b'## in another ring where we know that ##r## is irreducible. This means that ##r=a'\cdot b'## isn't a proper factorization, i.e., one of the factors ##a'## or ##b'## must be a unit. This is either what we wanted to show, that one of the factors is a unit, or we have to rule this out too and search for a contradiction. Here, and nowhere before, do we need to know what units look like in order to derive that contradiction. The contradiction then tells us that ##r=a'\cdot b'## cannot exist, but we derived it from ##r=a\cdot b.## So ##r=a\cdot b## was already impossible. But if ##r=a\cdot b## was impossible, ##r## must have been irreducible in the first place.

That's roughly the line of argument. Units play only a minor role. The crucial part is the equation ##r=a\cdot b.## All thoughts in this proof are guided by the question "what if not".
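For ##\mathbb{Q}[x]## this can at least be sanity-checked by machine: SymPy factors polynomials with rational coefficients over ##\mathbb{Q}## by default, so more than one non-unit factor (or a repeated one) flags reducibility. A sketch, assuming SymPy is available; `is_irreducible_over_Q` is just an illustrative helper, not part of the argument:

```python
from sympy import symbols, factor_list, Poly

x = symbols('x')

def is_irreducible_over_Q(p):
    """Irreducible over Q <=> nonconstant and the factorization over Q
    is trivial: exactly one non-unit factor with multiplicity 1."""
    if Poly(p, x).degree() < 1:
        return False  # constants are units (or zero), hence not irreducible
    _, factors = factor_list(p, x)  # rational constants count as units
    return len(factors) == 1 and factors[0][1] == 1

print(is_irreducible_over_Q(x**2 + 1))  # no proper factorization over Q
print(is_irreducible_over_Q(x**2 - 1))  # (x - 1)(x + 1): reducible
```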

elias001 said:
By the way, I created three posts, are they too long or too confusing?
They are a bit long. That makes it a project even to read them, and harder to answer, since several statements have to be addressed. People on the internet like short statements, everywhere. However, there isn't a rule. It is just that your chances of getting an answer increase as the length of your posts decreases. You mustn't forget that others aren't at home in your world of thought; with every question, they first have to build up and access that world, and the longer the post, the more concentration that requires.
 
  • #33
@fresh_42 Thank you so much for clarifying all my plausible misunderstandings. The chapter section of Hungerford's text where this question is located is the one I spent the most time on. There were many exercises involving concepts where I had to pay close attention to the subtle wording. Also, for many of the exercises I tried to give direct proofs as much as possible, which is different from what is stated in Hungerford's solutions. And in some of his written solutions there are a lot of skipped steps, where I filled in the blanks myself.

If possible, can I ask you to take a look at my other three posts, please? The one with a lot of screenshots is about understanding statements and notations concerning cokernel, coimage, kernel, and image. Of the other two, one is about whether I filled in a row entry correctly in a table, and the other is about understanding which rule of inference was used in a proof involving a quantifier statement. Regarding the post with the cokernel- and coimage-related notations: I am trying not to involve category theory. I learned category theory and I am not impressed with all the hype. I can prove all those lemmas involving exact sequences without seeing any computational examples. As I go deeper into any area of algebra, this seems to be a continuing trend. I tried asking a simple computational question on MSE about localization at a prime ideal, involving only a single-variable polynomial as an example, and no one answered me. The thing is, they keep complaining about people not knowing the basics, but when I asked something that should be basic to all those experts, none of them answered. Thank you in advance.
 
  • #34
elias001 said:
If possible, can I ask you to take a look at my other three posts, please? The one with a lot of screenshots is about understanding statements and notations concerning cokernel, coimage, kernel, and image.
Cokernels and coimages are only important in homological algebra or category theory. Are you sure you need this? They are rarely used in linear algebra.
elias001 said:
The other two, one is about whether I filled in a row entry correctly in a table ...
I'm not really an expert in automata theory, and I had difficulties understanding that post.
elias001 said:
... and the other is understanding which rule of inference was used in a proof involving quantifier statement. Thank you in advance.
Yes, I tried, but I haven't seen what ##X## is in this post, so I stopped reading it.

Give me some time.
 
  • #35
For the one on rules of inference, are you able to see the screenshots? I can upload PDF pages, or type out the question and the exercise solution when I get home.


For the post on automata theory, I think the calculations are correct.

For the post on cokernel and coimage, I just have two questions:

1. The two phrases "The image of ##f## is the kernel of the cokernel of ##f##" and "The coimage of ##f## is the cokernel of the kernel of ##f##", and how they relate to im ##f## = ker(coker ##f##), coim ##f## = coker(ker ##f##), coker(ker(coker ##f##)) = coker ##f##, and ker(coker(ker ##f##)) = ker ##f##.

2. There is a proposition in the second-to-last screenshot which says any morphism ##f## can be uniquely factored through ker(coker ##f##) and coker(ker ##f##), but its proof uses notation like ##(\text{coker }f)f=0## and ##f=me=m(\text{ker }r)e'=m'e',\text{ where }m'=m\,\text{ker }r##.

I plan to go into algebraic number theory. On MSE, the user Martin Brandenburg, the one I told you about in my message, in response to one of my posts kept criticizing me, saying that I have to look at things through the lens of universal mapping properties and that I won't get far in advanced algebra texts if I keep looking at everything in terms of elements of sets. The thing is, I have never formally taken any abstract algebra course, and I am not sure if I will be introduced to something called the universal mapping property. I know there are undergraduate algebra texts that do that, but since we will be using Dummit and Foote's abstract algebra text, I am not sure when universal properties of anything get talked about. I know there are two exercises on inverse and direct limits in the introduction-to-rings chapter.

I learned category theory through Arbib and Manes' text Arrows, Structures and Functors. I took a break from learning group theory and meandered my way through that text. It was a really painful experience, because I hardly knew my algebra, and the examples it used from topology were quotient topologies when discussing equalizers, coequalizers, and initial and final topologies. I matured a lot in the process. I plan to go into algebraic number theory and applied math. In algebraic number theory, I just have to be able to understand category-theoretic language if I ever need to read anything from algebraic geometry; I don't plan on specializing in the subject. I stopped at the adjoint functor and Yoneda lemma sections. I think I should know more math before I tackle either. The thing that really frustrated or impressed me was that whenever I asked, or saw someone ask, about a categorical concept on MSE, someone would come and answer with "oh, such an object is not possible in the category of mathematical object X you are asking about", and then just whip out some esoteric example.
 
  • #37
@fresh_42 I felt I did not completely answer your question about whether I need to know about cokernels and coimages. I have come across both concepts in linear algebra texts, even in ones that are considered elementary. I think they are considered part of the four fundamental subspaces, and the cokernel of a matrix can actually be computed, similarly for the coimage. I first encountered them in my own reading on category theory and while doing exercises on exact sequences from Thomas Blyth's Module Theory text. I learned to prove those three-by-three and four-by-four lemmas and the snake lemma. Apparently, in the context of category theory/homological algebra, one can talk about the cokernel and coimage in terms of morphisms/maps instead of just quotient sets in the context of exact sequences. I should probably make another post to clarify what I am confused about. It would be nice to know how to clearly state which context I am referring to: quotient objects/exact sequences, or morphisms in the context of an additive category. I have never seen such a messed-up notational situation. Also, since I already have references for how to create the matrix representation of a linear transformation involving quotient vector spaces, as in we know what the matrix should look like, it would be good to have that handy for the coimage and also for the cokernel. If you want book references, I can reply with them; just let me know.
 
  • #38
I'm not sure whether homological algebra/category theory is a good starting point for getting used to the concepts. E.g., I looked it up to avoid mistakes and found:

Let ##f : X \longrightarrow Y## be a morphism in an additive category. The image of ##f##, denoted by ##\operatorname{Im}(f)##, is defined as the kernel of the cokernel. Namely, if ##(C, \pi : Y \longrightarrow C)## is the cokernel of ##f##, then ##\operatorname{Im}(f)## is defined to be the kernel of ##\pi##. Similarly, coimage of ##f##, denoted by ##\operatorname{CoIm}(f)##, is the cokernel of the kernel.

What? Again, please! What?

I mean, it takes a scalpel to investigate this. At least, it shows that your questions were basically the definitions of it. I am glad they provided a lemma to shed light on this.

Let ##f : X \longrightarrow Y## be a morphism in an additive category. Assuming that the relevant kernels and cokernels exist, we have a commutative diagram:
[Image: commutative diagram of the factorization of ##f## through ##\operatorname{CoIm}(f)## and ##\operatorname{Im}(f)##]
The morphism ##j## is injective and ##p## is surjective.

This explains it better than the awkward categorical definition.
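One can even check the categorical definition concretely for matrices over ##\mathbb{Q}##: a matrix whose rows span the left null space of ##A## represents the cokernel projection, and its kernel is exactly the column space, i.e. ##\operatorname{Im}(f)=\ker(\operatorname{coker} f)##. A sketch with SymPy (the matrix ##A## is just an arbitrary rank-1 example):

```python
from sympy import Matrix

# f : Q^3 -> Q^2 given by a (deliberately rank-1) matrix
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

# A matrix for the cokernel projection Y -> Y/Im(f): its rows span
# the left null space of A (all v with v^T A = 0)
C = Matrix([list(v) for v in A.T.nullspace()])

# kernel of that projection vs. the image (column space) of A
ker_of_coker = C.nullspace()
im = A.columnspace()

def span_dim(vecs):
    # dimension of the span of a list of column vectors
    return Matrix.hstack(*vecs).rank() if vecs else 0

# they span the same subspace of Q^2: Im(f) = ker(coker f)
combined = Matrix.hstack(*(ker_of_coker + im))
print(span_dim(ker_of_coker), span_dim(im), combined.rank())  # 1 1 1
```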

Let's take the category of vector spaces and linear functions to keep things simple. First, we need a linear function
$$
f\, : \,X\longrightarrow Y.
$$
With it, we get the image and the kernel of ##f,## $$
\operatorname{Im}(f)=\{y\in Y\,|\,\exists \,x\in X\, : \,f(x)=y\}\, \text{ and } \,\operatorname{ker}(f)=\{x\in X\,|\,f(x)=0\}.
$$
These are the standard subspaces we usually deal with. We have ##\operatorname{Im}(f)\subseteq Y## and ##\operatorname{ker}(f)\subseteq X## per definition.

However, neither of them exhausts these spaces, i.e., they are in general proper subspaces. This means we can investigate the elements that are not "in" them. To do so, we "turn" the elements that are in them "into" zero and look at what's not zero after "passing to the quotient". In formulas, we consider
$$
Y/\operatorname{Im}(f)=\operatorname{CoKer}(f)\, \text{ and } \,X/\operatorname{ker}(f)=\operatorname{CoIm}(f)
$$
and call these quotient spaces cokernel and coimage. The function ##p\, : \,X\longrightarrow X/\operatorname{ker}(f)## is the (surjective) projection ##p(x)=x+\operatorname{ker}(f),## and the function ##j\, : \,\operatorname{Im}(f)\longrightarrow Y## is the (injective) embedding ##j(y)=j(f(x))=y.## Furthermore, we have for vector spaces
$$
\operatorname{CoIm}(f)=X/\operatorname{ker}(f) \cong \operatorname{Im}(f)\, \text{ and } \,\operatorname{CoKer}(f)=Y/\operatorname{Im}(f)\cong \operatorname{ker}(f)
$$
which hints toward the reason for the naming. (A caveat: the first isomorphism always holds by the first isomorphism theorem, while the second requires ##\dim X=\dim Y<\infty ,## since ##\dim\operatorname{CoKer}(f)=\dim Y-\operatorname{rank} f## but ##\dim \operatorname{ker}(f)=\dim X-\operatorname{rank} f.##) Hints, because the real reason has to mention duality, which I will not discuss here.

One last remark. In case we have ##X=Y## (finite-dimensional), then ##X\cong \operatorname{ker}(f)\oplus \operatorname{Im}(f)\cong \operatorname{CoKer}(f)\oplus \operatorname{CoIm}(f)## as vector spaces by rank-nullity, although the sum ##\operatorname{ker}(f)+\operatorname{Im}(f)## need not be direct inside ##X##: for ##f(x,y)=(y,0)## we have ##\operatorname{ker}(f)=\operatorname{Im}(f).##
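In coordinates, all four dimensions can be read off from the rank. A minimal sketch, assuming SymPy is available (the matrix is an arbitrary example):

```python
from sympy import Matrix

# f : Q^3 -> Q^2 as a matrix of rank 1
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

rank = A.rank()
dim_X, dim_Y = A.cols, A.rows

dim_im    = rank              # Im(f)   subset of Y
dim_ker   = dim_X - rank      # ker(f)  subset of X   (rank-nullity)
dim_coim  = dim_X - dim_ker   # CoIm(f) = X/ker(f)
dim_coker = dim_Y - rank      # CoKer(f) = Y/Im(f)

print(dim_coim == dim_im)     # True: CoIm(f) iso Im(f) always
print(dim_coker == dim_ker)   # False here: CoKer(f) iso ker(f) needs dim X = dim Y
```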

If you really want to understand the entire concept, you will first have to understand the general concept of duality in its categorical dimension. E.g., the terms kernel, cokernel, image, and coimage always refer to the pair of a space and a function, not just a space, which makes it more difficult outside given examples.
 
  • #39
@fresh_42 I understand that pullbacks and pushouts are used respectively for the kernel and cokernel constructions in the category of vector spaces, and that the two are dual to each other. I have also studied other constructions like equalizers and coequalizers.

What I would like to know is how one lets the reader know whether one is talking about a set or a map/morphism. I mean in practice, when reading the literature, be it papers, texts, or textbooks, what are the written signposts that signal to the reader whether the kernel or cokernel they encounter is a set or a map/morphism?
 
  • #40
As someone has mentioned in the other thread - and we really shouldn't cross-talk the same subject! - it depends on the context. In usual suspect categories, sets are easy to define and are thus used. In general category theory, however, you search for statements that hold for a variety of sets simultaneously, which makes the sets difficult to define. Therefore, mappings and commutative diagrams are preferred. I don't think that really helps in understanding the context. I prefer simple examples to "visualize" what's going on over diagram chasing. But this could be a matter of taste, I'm not sure.

For instance, if you look up what a tensor is in a homological algebra book and in a physics textbook, you would bet that these are different things, although they are not. However, you won't get very far with the general definition if you studied physics. Means: language depends on context and purpose.
 
  • #41
elias001 said:
What I would like to know is how one lets the reader know whether one is talking about a set or a map/morphism. I mean in practice, when reading the literature, be it papers, texts, or textbooks, what are the written signposts that signal to the reader whether the kernel or cokernel they encounter is a set or a map/morphism?
Kernel, cokernel, image, and coimage require a morphism because they aren't defined otherwise.

But when it comes to dealing with them, you need the sets since they define the property of the elements you want to operate with. Kernel is just a name, but ##f(x)=0## is an equation! The most important property that deals with morphisms ##f\, : \,A\longrightarrow B## is the equation
$$
A/\operatorname{ker}f \cong \operatorname{im}f.
$$
It cancels out the ambiguity caused by different points mapping to the same image point by putting them into equivalence classes modulo the kernel. It is again an equation. We always need equations for doing math. The arrows are only a notational shortcut.

The answer to your question depends on a decision you have to make: homological algebra or (abstract) algebra? You can do all the fancy diagram stuff in homological algebra without ever understanding why things are done or what elements actually are in those objects. I like examples because they help me to understand things, but I will not rule out that others prefer pure logic and diagrams.
 
  • #42
@fresh_42 I just looked over your posts 26 and 40, and I think there is a slight misunderstanding that I have caused. In post 26 you stated that an irreducible element cannot be written as a product of two non-unit elements. That is true. But Hungerford, and also authors like Keith Nicholson in his introductory abstract algebra text, have the following definition for irreducible elements:

If ##R## is an integral domain, ##p\in R## is called an irreducible element(and is said to be irreducible in ##R##) if it satisfies the following conditions:
$$(1) p\neq 0 \text{ and }p \text{ is not a unit.}$$
$$(2) \text{If }p=ab \text{ in }R \text{ then }a\text{ or }b \text{ is a unit in } R.$$

An element that is not irreducible is called reducible.

So to show that an element ##x## in ##F[x]## is irreducible, is it sufficient to show that it is not a unit? I think that might be the reason why there was confusion in post 24.
 
  • #43
elias001 said:
@fresh_42 I just looked over your posts 26 and 40, and I think there is a slight misunderstanding that I have caused. In post 26 you stated that an irreducible element cannot be written as a product of two non-unit elements. That is true. But Hungerford, and also authors like Keith Nicholson in his introductory abstract algebra text, have the following definition for irreducible elements:



So to show that an element ##x## in ##F[x]## is irreducible, is it sufficient to show that it is not a unit? I think that might be the reason why there was confusion in post 24.
No. ##6\in \mathbb{Z}_{12}## is not a unit, and reducible.

However, and I assume that ##F## stands for a field, the ring ##F[x]## is a special one. It is Euclidean, a principal ideal domain, and has a unique factorization into primes. Particularly, prime and irreducible coincide.

You claim that all non-units in ##F[x]## are prime/irreducible, but ##x^2## is neither a unit nor prime/irreducible.

What you must show is, that whenever you have a decomposition ##F[x]\ni f(x)=a(x)\cdot b(x)## then necessarily ##a(x)## or ##b(x)## is a unit. We have in my example ##f(x)=x^2=\underbrace{x}_{=a(x)}\cdot \underbrace{x}_{=b(x)}## and neither factor is a unit.

E.g., take ##f(x)=x^2+1\in \mathbb{Q}[x].## Then any decomposition means we have an equation
$$
f(x)=x^2+1=a(x)\cdot b(x).
$$
If ##a(x)## or ##b(x)## is a rational number, then it would be a unit, and there is nothing to show. Let's therefore assume that ##\deg(a)=\deg(b)=1## as the only other possibility left. Then
$$
a(x)=\alpha x+\beta\text{ and }b(x)=\gamma x +\delta.
$$
Now we get
$$
f(x)=x^2+1=a(x)\cdot b(x)=(\alpha x+\beta)\cdot(\gamma x+\delta)=\alpha\gamma x^2+ (\alpha \delta+\beta \gamma)x+\beta \delta
$$
and by comparison of the coefficients that
$$
\alpha\gamma=1\, , \,\alpha \delta+\beta \gamma=0\, , \,\beta \delta=1
$$
Inserting the first and the last one into the middle results in
$$
0=\alpha \delta+\beta \gamma=\alpha\beta^{-1}+\beta\alpha^{-1}=\dfrac{\alpha}{\beta}+\dfrac{\beta}{\alpha}
$$
If we multiply this equation by ##\alpha\beta##, then we get ##0=\alpha^2+\beta^2.## But squares are non-negative in ##F=\mathbb{Q},## so this equation can only hold for ##\alpha=\beta=0,## contradicting our assumption about the degrees. Hence, there is no decomposition possible for ##f(x)=x^2+1\in \mathbb{Q}[x]## unless one of the factors is a rational number, i.e., a unit. Thus ##f(x)## is irreducible in ##\mathbb{Q}[x].##

This is what you have to show in order to prove irreducibility.

Note that the ring is significant here. If I had chosen ##f(x)=x^2+1\in \mathbb{C}[x]##, then my proof would collapse, since there are negative squares in ##\mathbb{C}.## The decomposition is of course ##x^2+1=(x+i)(x-i)##, and ##f(x)## is reducible over the complex numbers.
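Both situations are easy to check with SymPy, which factors over ##\mathbb{Q}## by default; `extension=I` adjoins ##i## to the coefficient field (an illustrative sketch, assuming SymPy is available):

```python
from sympy import symbols, factor, I

x = symbols('x')
p = x**2 + 1

# over Q: factor() returns the polynomial unchanged, so it is irreducible
print(factor(p))               # x**2 + 1

# adjoining i makes it split
print(factor(p, extension=I))  # (x - I)*(x + I)
```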

Keep these examples in mind. Irreducibility and units are not complementary. They describe two different things. And concerning proofs in algebra: The question "What if not" is the second suit of an algebraist, if not even their first.
 
  • #44
@fresh_42 Sorry for the late reply. I think I referred you to the wrong post you previously replied to. In post 26, you said:

An element ##f## is irreducible if it cannot be written as a product of two non-unit factors, and reducible if such a factorization exists. We have to exclude units, since we can always multiply as many units to f as we want without changing its irreducibility or reducibility.

But in post 42, in my reply where I gave a definition for an element ##p## in an integral domain ##R## to be considered an irreducible element, the two criteria are (1) that ##p\neq 0## and ##p## is not a unit, and (2) if ##p=ab##, then ##a## or ##b## is a unit in ##R##.

So to show that an element is an irreducible element, can I simply show that it is not a unit? That was one of the things I was not able to demonstrate: that the sum of ##g(x)x\in\Bbb{Q}[x]## and ##p\in \Bbb{Q}[x]## are both non-units in ##\Bbb{Q}_{\Bbb{Z}}[x].##
 
  • #45
elias001 said:
But in post 42, in my reply where I gave a definition for an element ##p## in an integral domain ##R## to be considered an irreducible element, the two criteria are (1) that ##p\neq 0## and ##p## is not a unit, and (2) if ##p=ab##, then ##a## or ##b## is a unit in ##R##.

So to show that an element is an irreducible element, can I simply show that it is not a unit?
Not being a unit itself is only a minor part of the definition of an irreducible element. The main part is showing that it is not reducible, i.e., any possible factorization must be trivial, which means one of its factors is necessarily a unit.
elias001 said:
That was one of the things I was not able to demonstrate: that the sum of ##g(x)x\in\Bbb{Q}[x]## and ##p\in \Bbb{Q}[x]## are both non-units in ##\Bbb{Q}_{\Bbb{Z}}[x].##
Sums have nothing to do with that. ##1+1=2\in \mathbb{Z}## is a sum of two units that is not a unit. ##3-2=1\in \mathbb{Z}## is the sum of two non-units, which is a unit.
 
  • #46
@fresh_42 Ah, OK. Thank you for clearing that up.
 
  • #47
@fresh_42 One minor logical point I forgot to ask you about: we know that if ##x## is an irreducible element, then it is not a unit. What about the converse: if we know that ##x## is not a unit, can we conclude that it is an irreducible element?
 
  • #48
All those terms depend on the ring you consider. There are rings in which ##x## is a unit, e.g., the ring of rational functions. So whenever you use those terms, please say which ring you are using.

Say we have a polynomial ring ##R[x]## over a commutative integral domain ##R## with ##1,## then ##x## isn't a unit, since any equation ##1=x\cdot p(x)## leads to ##\deg(1)=0=\deg(x)+\deg(p(x))=1+\deg(p(x))>0## which is not possible. If ##R## is not an integral domain, we need a different argument since the degree equation doesn't hold. E.g., ##2x \cdot 3x^2 = 0## in ##\mathbb{Z}_6[x],## hence the degree equation doesn't hold in that case.
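The ##\mathbb{Z}_6[x]## counterexample can be verified by multiplying coefficient lists modulo ##6## (a minimal self-contained sketch; `polymul_mod` is just an ad-hoc helper):

```python
def polymul_mod(a, b, n):
    """Multiply polynomials given as coefficient lists (lowest degree first),
    reducing every coefficient mod n."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % n
    return out

# 2x * 3x^2 in Z_6[x]: coefficients [0, 2] and [0, 0, 3]
print(polymul_mod([0, 2], [0, 0, 3], 6))  # [0, 0, 0, 0] -> the zero polynomial
```

The product of a degree-1 and a degree-2 polynomial is zero, so the degree equation ##\deg(fg)=\deg f+\deg g## indeed fails outside integral domains.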

So, given such an integral domain ##R,## ##x## cannot be a unit. ##x## is also irreducible for the same reason, using the degree equation.

This means we can basically use the same tool to prove both statements. But we have two different proofs.

Back to your question:
elias001 said:
@fresh_42 One minor logical point I forgot to ask you about: we know that if ##x## is an irreducible element, then it is not a unit.
This depends on your definition, i.e., whether irreducible elements must be non-units. One could just as well call units irreducible; it wouldn't matter a lot. Those two terms have little to do with each other. We can always multiply by units without changing the property of irreducibility. Say ##u_1,\ldots,u_n## are all units and ##r \in R## an arbitrary element. Then ##r=u_1\cdot u_2 \cdots u_n \cdot u_1^{-1}\cdot u_2^{-1}\cdots u_n^{-1}\cdot r,## which doesn't tell us a lot about whether we can write ##r=a\cdot b## or not. That's why primality and irreducibility are always "up to units".

So let us assume that ##r\in R## is irreducible and no unit.
elias001 said:
What about the converse: if we know that ##x## is not a unit, can we conclude that it is an irreducible element?
In general, no.

Consider ##4\in \mathbb{Z}_{12}=\mathbb{Z}/12\mathbb{Z}.## Then ##4## is not a unit, but ##4=2\cdot 2## is a proper factorization into two elements that aren't units either. So ##4\in \mathbb{Z}_{12}## is a non-unit which is reducible.
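In ##\mathbb{Z}_{12}## everything can be enumerated: the units are exactly the classes coprime to ##12##, and ##4## admits a factorization into two non-units (a small sketch):

```python
from math import gcd

n = 12
# units of Z_12: classes coprime to 12
units = {a for a in range(n) if gcd(a, n) == 1}
print(sorted(units))  # [1, 5, 7, 11]

# all factorizations a*b = 4 in Z_12 (up to order)
factorizations = [(a, b) for a in range(n) for b in range(a, n)
                  if (a * b) % n == 4]

# 4 is reducible: some factorization avoids units entirely, e.g. 4 = 2*2
print(any(a not in units and b not in units for a, b in factorizations))  # True
```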

If we have ##x\in R[x]## then it is both, but you need an argument for both cases separately.

##x\in R[x]## is no unit because ##x\cdot p(x)=1## is unsolvable.
##x\in R[x]## is irreducible, because ##x=p(x)\cdot q(x)## requires that either ##p(x)## is a unit in ##R## and ##q(x)## a unit multiple of ##x##, or the other way around.

You can see that these are two different statements that need two different arguments. Both arguments can use the same degree equation; nevertheless, they remain two different proofs.

The unit clause in the definition of primes and irreducible elements is only there to avoid trivialities. Units have nothing else to do with primality. E.g., ##1\in \mathbb{Z}## is not a prime, since it is a unit. But what counts is that if a prime divides a product, then it has to divide one of the factors. If we were to allow ##1## to be prime, then this condition would be worthless, since ##1## divides everything. That's why we exclude units, not because it has something to do with primality; we just don't want to consider those trivial cases all the time. The same holds for irreducibility.
 
  • #49
@fresh_42 Oh, thank you, thank you. This subtle little point had been confusing me ever since I saw Nicholson's definition that I quoted.
 
