Splitting ring of polynomials - why is this result unfindable?

In summary, the conversation discusses a theorem stating that for a polynomial ##P## over a commutative ring ##R##, there exists a ring ##\tilde R## extending ##R## in which ##P## splits into linear factors. A proof is given, and it is noted that this result is difficult to find in the literature. An application in commutative algebra is also presented. Concerns are raised about the induction step in the proof and about the use of phrases like "evident" and "surely", and the thread ends with the suggestion of looking for a counterexample over a ring that is neither an integral domain nor a PID.
  • #1
coquelicot
Assume that ##P## is a polynomial over a commutative ring ##R##. Then there exists a ring ##\tilde R## extending ##R## in which ##P## splits into linear factors (not necessarily uniquely). This theorem, whose proof is given below, is difficult to find in the literature (if someone knows a source, it would be extremely welcome).
One may ask: "why is it unfindable?" The first answer that comes to mind is that no one has ever found applications for it. But this argument is rather unfounded. This theorem, used in synergy with the fundamental theorem of symmetric functions, can be used to prove many results of commutative algebra elegantly. I give an application below, which you are invited to comment on as well.

Theorem: If ##P## is a polynomial over a commutative ring ##R##, there exists an extension ##\tilde R## of ##R## in which ##P## splits into linear factors. Furthermore, given a factorization ##P = \prod_i P_i## in ##R[X]##, the extension can be chosen so that each ##P_i## splits as well.

Proof: By an evident induction, it suffices to prove that if ##P## is of degree > 1, then there is a ring ##R'## extending ##R## where ##P## splits into the product of two polynomials of degrees ##< \deg(P) ##. But ##Y## is a root of ##P## in the ring ##R' = R[Y]/(P(Y))## that surely extends ##R##. In ##R'##, Euclidean division leads to ##P(X) = (X - Y)Q(X) + S##, where ##Q\in R'[X]## and ##S\in R'##. Since ##P(Y)=0##, ##S=0## and we are done.
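For concreteness, one illustrative instance of this construction (my own choice of ring and polynomial): take ##R = \mathbb Z/4\mathbb Z##, which is neither a field nor an integral domain, and ##P(X) = X^2 + X + 1##. In ##R' = R[Y]/(Y^2 + Y + 1)## the relation ##Y^2 + Y + 1 = 0## gives ##-Y^2 - Y = 1##, hence ##(X - Y)(X + Y + 1) = X^2 + X - Y^2 - Y = X^2 + X + 1##, so ##P## already splits into linear factors over ##R'##.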

Application: Let ##B## be an associative and commutative algebra over a commutative ring ##A##, and ##\alpha, \beta_1, \ldots, \beta_n\in B##.
If ##\alpha## is a root of a unitary (i.e. monic) polynomial ##P## with coefficients in ##A[\beta_1, \ldots, \beta_n]##, and if ##\beta_1, \ldots, \beta_n## are integral over ##A##, then ##\alpha## is integral over ##A##.
In particular, if ##C## is the integral closure of ##A## in ##B## (the set of elements of ##B## integral over ##A##), then:
1) ##C## is a sub-algebra of ##B##
2) if ##\alpha## is integral over ##C##, then ##\alpha\in C##.

Note: if ##B## is not unital, ##A[\beta_1, \ldots, \beta_n]## means the sub-algebra generated by the ##\beta_i## over ##A## inside the algebra ##A \oplus B## that extends ##B## naturally (see the proof below). This is in fact a ring.

Proof:
By induction, it can be assumed that ##n=1##, and ##\alpha## is a root of a unitary polynomial over ##A[\beta_1]## (hypothesis). In other words, there exists a polynomial ##P(Y, X)\in A[Y, X]##, unitary in ##X##, such that ##P(\beta_1, \alpha) = 0##.
Let ##H(X)\in A[X]## be a unitary polynomial such that ##H(\beta_1)=0## (hypothesis).
Replacing ##B## by ##A\oplus B## if necessary (that is, the algebra with component-wise addition and product defined by ##(a\oplus b)(a'\oplus b') = aa' \oplus (ab' + a'b + bb')##), it can be assumed without loss of generality that ##A## is a sub-algebra of ##B##.
Since ##H(\beta_1) = 0## and ##H## is monic, ##X - \beta_1## divides ##H## in ##A[\beta_1][X]##; and since ##A[\beta_1]## is a ring, we can consider an extension ##E## of ##A[\beta_1]## in which ##H## splits into the product of linear factors ##H = (X-\beta_1)(X-\beta'_1)(X-\beta''_1)\cdots##.
Let ##\Pi(X) = \prod_i P(\beta^{(i)}_1, X)##, the product running over all the roots ##\beta_1, \beta'_1, \beta''_1, \ldots## of ##H## in ##E##. Then ##\Pi## is unitary in ##X##, and since its coefficients are symmetric functions of the roots of ##H## (hence polynomials over ##A## in the coefficients of ##H##), the fundamental theorem of symmetric functions gives ##\Pi \in A[X]##.
Now, ##P(\beta_1, X)## divides ##\Pi## in ##E[X]##: ##\Pi = P(\beta_1, X)\, Q##. Also, the Euclidean division of ##\Pi## by ##P(\beta_1, X)## in ##A[\beta_1][X]## (licit since ##P## is unitary in ##X##) gives ##\Pi = P(\beta_1, X)\, q + r## with ##\deg(r) < \deg P##; hence ##P(\beta_1, X)(Q - q) = r##, which is possible only if ##Q = q## (otherwise the left-hand side would have degree at least ##\deg P##). Hence ##r = 0## and ##\Pi(X) = P(\beta_1, X) Q(X)## with ##Q \in A[\beta_1][X]##. Substituting ##\alpha## for ##X## and using ##P(\beta_1, \alpha) = 0## shows that ##\alpha## is a root of the unitary polynomial ##\Pi(X)## over ##A##, as contended.
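For illustration (this concrete example is mine, not part of the argument above): take ##A = \mathbb Z##, ##\beta_1 = \sqrt 2## with ##H(X) = X^2 - 2##, and let ##\alpha## be a root of ##P(\beta_1, X) = X^2 - \beta_1 X + 1##, e.g. ##\alpha = e^{i\pi/4}##. The other root of ##H## is ##-\beta_1##, and ##\Pi(X) = (X^2 - \beta_1 X + 1)(X^2 + \beta_1 X + 1) = (X^2 + 1)^2 - 2X^2 = X^4 + 1 \in \mathbb Z[X]##, so ##\alpha## is indeed a root of a monic polynomial with integer coefficients.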
 
  • #2
I haven't read it all yet, since two things stopped me from going on. I can't see your basic induction step. Suppose an irreducible ##P(X)## of degree ##n## (with either ##n=2## and ##1+1=0## or ##n=5## to rule out formulas). How do you construct a splitting ring? Also, the Euclidean algorithm needs a principal ideal domain, if I remember correctly. I'm not quite sure whether an integral domain is needed too, but this probably only has consequences for uniqueness. Phrases with "evident", "surely", "clearly" or "obviously" make me notoriously nervous, and for an induction, I'd like to see at least one step carried out.

I'm inclined to look for a counterexample with a ring which is neither an integral domain nor a PID.
 
  • #3
fresh_42 said:
I haven't read it all yet, since two things stopped me from going on. I can't see your basic induction step. Suppose an irreducible ##P(X)## of degree ##n## (with either ##n=2## and ##1+1=0## or ##n=5## to rule out formulas). How do you construct a splitting ring? Also, the Euclidean algorithm needs a principal ideal domain, if I remember correctly. I'm not quite sure whether an integral domain is needed too, but this probably only has consequences for uniqueness. Phrases with "evident", "surely", "clearly" or "obviously" make me notoriously nervous, and for an induction, I'd like to see at least one step carried out.

I'm inclined to look for a counterexample with a ring which is neither an integral domain nor a PID.

I haven't understood what you mean by ##1+1=0##. Do you mean ##R = \mathbb F_2##?

While waiting for your answer, I rewrite the proof here:
First of all, in the theorem I forgot to specify that the polynomial ##P## must be monic, as well as the polynomials ##P_i## (sorry, this was so obvious to me that I missed that point).

Let ##P = P_1P_2\ldots P_k## be a factorization of ##P## in ##R[X]## by monic polynomials ##P_i##, irreducible or not. I claim that there is an extension of ##R## where each ##P_i##, and hence ##P##, splits into linear factors.
Let us first suppose that ##k=1##.
If ##\deg(P_1) = 1##, this is obvious. Assume the result is true for monic polynomials of degree ##n-1## over any commutative ring, and suppose that ##\deg(P_1) = n##.
##Y## is a root of ##P_1## in the ring ##R'=R[Y]/(P_1(Y))##. This ring is an extension of ##R##, because every nonzero element of the ideal ##(P_1(Y))## has degree ##\geq \deg(P_1)## (##P_1## being monic), hence ##(P_1(Y)) \cap R = \{0\}##. Now, Euclidean division is valid in any ring provided the divisor is monic (see e.g. Samuel & Zariski, but this is also evident if the synthetic-division process is considered). So we have ##P_1(X) = (X-Y) Q(X) + r##, with ##r\in R'##. Since ##P_1(Y) = 0##, this shows that ##r=0## and ##P_1 = (X-Y)Q##. Now ##\deg(Q) = n-1##, hence there exists an extension ##R''## of ##R'## where ##Q## splits into linear factors (induction hypothesis). Hence ##P_1## splits into linear factors inside ##R''##, showing the result is true whenever ##k=1##.
Now assume the result is true for ##k-1## factors, and let ##P_1\ldots P_k## be a factorization of ##P## as above. By the induction hypothesis, there is an extension ##R'## of ##R## in which each ##P_i##, ##1\leq i\leq k-1##, splits into linear factors. Construct an extension ##\tilde R## of ##R'## exactly as was done for ##P_1## above, so that ##P_k## splits into linear factors in ##\tilde R##. Then every ##P_i## splits into linear factors in ##\tilde R##, as contended.
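To make the "synthetic division" remark concrete, here is a small Python sketch (entirely my own; the choice ##R = \mathbb Z/4\mathbb Z## and ##P_1 = X^2 + X + 1## is only illustrative, and the function names are made up). It builds ##R' = R[Y]/(P_1(Y))##, divides ##P_1(X)## by the monic polynomial ##X - Y## using only ring operations, and checks that the remainder is ##0##:

```python
# Illustrative sketch: the induction step over the non-domain R = Z/4Z.
# Elements of R' = R[Y]/(Y^2 + Y + 1) are stored as [c0, c1] = c0 + c1*Y,
# reduced with the relation Y^2 = -Y - 1.
MOD = 4  # R = Z/4Z, an illustrative choice (neither a field nor a domain)

def radd(a, b): return [(x + y) % MOD for x, y in zip(a, b)]
def rsub(a, b): return [(x - y) % MOD for x, y in zip(a, b)]
def rmul(a, b):
    # (a0 + a1*Y)(b0 + b1*Y), then replace Y^2 by -Y - 1
    return [(a[0]*b[0] - a[1]*b[1]) % MOD,
            (a[0]*b[1] + a[1]*b[0] - a[1]*b[1]) % MOD]

ZERO, ONE, Y = [0, 0], [1, 0], [0, 1]

def divmod_by_monic(f, g):
    """Divide f by the MONIC polynomial g over R' (lists are lowest degree first).
    Returns (q, r) with f = q*g + r and deg r < deg g; only ring operations
    are used, never inverses -- this is why a monic divisor is enough."""
    f = [c[:] for c in f]
    n = len(g) - 1
    q = [ZERO] * max(len(f) - n, 1)
    for i in range(len(f) - n - 1, -1, -1):
        c = f[i + n]                     # leading coefficient still to cancel
        q[i] = c
        for j in range(len(g)):          # subtract c * X^i * g from f
            f[i + j] = rsub(f[i + j], rmul(c, g[j]))
    return q, f[:n]

P1  = [ONE, ONE, ONE]                    # X^2 + X + 1, coefficients lifted into R'
XmY = [rsub(ZERO, Y), ONE]               # the monic divisor X - Y
q, r = divmod_by_monic(P1, XmY)
print("quotient :", q)                   # [[1, 1], [1, 0]]  i.e.  X + (Y + 1)
print("remainder:", r)                   # [[0, 0]]  i.e. 0, so P1 = (X - Y)(X + Y + 1)
```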
 
  • #4
I think you are far too sloppy, at least for me. I think that if theorems are proven with a minimum of preconditions, then one needs to be very exact.

Yes, ##1+1=0## meant ##\operatorname{char}R =2##. The latter was simply more to type. ##\mathbb{F}_2## is only one example.

I don't have this book, but all definitions I've found require a Euclidean ring to be an integral domain, and yours is not. Euclidean rings are also PIDs, and yours is not. ##\mathbb{Z}[x]## is not a Euclidean ring, but it is commutative with ##1##. So the requirement of a Euclidean ring is pretty strong, and commutative with ##1## is not sufficient. One consequence is that ##R[X]/R[X]\cdot P(X)## doesn't need to match the conditions to be Euclidean. And the ideal can theoretically be generated by different polynomials, e.g. ##P(X) \in (F(X),G(X))##. You have neither a PID nor an integral domain, so ##P=\alpha F + \beta G## with ##\deg F = \deg P## is possible. Even ##P(X) = \alpha(X) \cdot F(X)## with ##\alpha(Y) \neq 0 \neq F(Y)## cannot be ruled out. Monic isn't the point. However, I'd like to know what you mean by an extension, i.e. whether ##R \rightarrowtail R[Y] \twoheadrightarrow R[Y]/(P(Y)) = R'## leads to an embedding ##R \hookrightarrow R' \,.## Maybe it is easy to see, but I'm so used to irreducible polynomials or integral domains that I don't see it immediately, and it depends on what an extension is, since the usual definition analogous to field extensions doesn't work here: no primitive elements, no principal ideal and so on.

And whether you have a ##1## is also debatable. I know, commutative rings without one are sometimes called pseudo-rings. What nonsense. A ring is a ring, and some have a ##1## and others don't, although my suspicion is that yours has one.
 
  • #5
Fresh 42,
First, thank you for reading, once more, my posts.
Yes, for most authors a ring has a ##1## by definition, and this is more and more the adopted convention. A ring without a ##1## can be called a pseudo-ring or an associative ##\mathbb Z##-algebra. There are good arguments for both sides, I think.

Regarding the Euclidean division, who said that ##R## is a Euclidean ring? In ##R[X]## you can divide a polynomial, in the Euclidean sense, by another MONIC polynomial without any problem (check it yourself, it is very simple), but you cannot always divide a polynomial by an arbitrary polynomial. So the ring need not be Euclidean. I quote here the theorem in Samuel and Zariski (Thm 9, chap. 1):

"Let ##R## be a ring with identity and ##R[X]## a polynomial ring over ##R## in ##X##.
Let ##f(X)## and ##g(X)## be two polynomials in ##R[X]## of respective degrees ##m## and ##n##, let ##k = \max(m - n + 1, 0)## and let ##a## be the leading coefficient of ##g(X)##. Then there exist polynomials ##q(X)## and ##r(X)## such that ##a^k f(X) = q(X)g(X) + r(X)##, and ##r(X)## is either of degree less than ##n## or is the zero polynomial. Moreover, if ##a## is regular in ##R##, then ##q(X)## and ##r(X)## are uniquely determined."
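As a sanity check of the quoted statement, here is a small Python sketch (my own, not from the book; `pseudo_divmod` is a made-up name, and plain integer coefficients are used only for illustration). It computes ##q## and ##r## with ##a^k f = qg + r## using nothing but ring operations, so the same steps would go through over any commutative ring:

```python
# Illustrative pseudo-division: a^k * f = q*g + r with deg r < deg g,
# where a = lc(g) and k = max(deg f - deg g + 1, 0).  No inverses are used.
def pseudo_divmod(f, g):
    """f, g: coefficient lists, lowest degree first; g must be nonzero."""
    f = f[:]                            # work on a copy
    a = g[-1]                           # leading coefficient of g
    n = len(g) - 1                      # deg g
    k = max(len(f) - len(g) + 1, 0)
    q = [0] * max(k, 1)
    for _ in range(k):
        m = len(f) - 1                  # current degree of the running remainder
        c = f[-1]
        q = [a * x for x in q]          # multiply the pending quotient by a ...
        q[m - n] += c                   # ... and add the new term c * X^(m-n)
        f = [a * x for x in f]          # multiply the remainder by a ...
        for j in range(len(g)):         # ... and subtract c * X^(m-n) * g
            f[m - n + j] -= c * g[j]
        f.pop()                         # the leading term is now zero
    return q, f

# Example over Z: f = X^2, g = 2X + 1, so a = 2, k = 2, and 4*X^2 = (2X - 1)(2X + 1) + 1
q, r = pseudo_divmod([0, 0, 1], [1, 2])
print(q, r)                             # -> [-1, 2] [1]
```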

For me, an extension of ##R## is simply a ring ##\tilde R## containing ##R## as a subring, up to an identification. In our case, the map ##r\mapsto r+(P(Y))## is obviously a homomorphism of rings, and it is injective since ##r+(P(Y)) = (P(Y))## implies ##r\in (P(Y))##, which is possible only if ##r=0##, since all the nonzero elements of ##(P(Y))## are multiples of ##P(Y)##, hence of degree ##\geq \deg(P)## (recall that ##P## is monic).

Finally, I don't understand how ##1+1=0## is supposed to destroy the theorem. Isn't there a splitting field for any polynomial in, say, ##\mathbb F_2[X]##? So I don't see how characteristic 2 can be of any help in contradicting the theorem.
 
  • #6
fresh_42 said:
I think you are far too sloppy, at least for me. I think that if theorems are proven with a minimum of preconditions, then one needs to be very exact.

Yes, ##1+1=0## meant ##\operatorname{char}R =2##. The latter was simply more to type. ##\mathbb{F}_2## is only one example.

I don't have this book, but all definitions I've found require a Euclidean ring to be an integral domain, and yours is not. Euclidean rings are also PIDs, and yours is not. ##\mathbb{Z}[x]## is not a Euclidean ring, but it is commutative with ##1##. So the requirement of a Euclidean ring is pretty strong, and commutative with ##1## is not sufficient. One consequence is that ##R[X]/R[X]\cdot P(X)## doesn't need to match the conditions to be Euclidean. And the ideal can theoretically be generated by different polynomials, e.g. ##P(X) \in (F(X),G(X))##. You have neither a PID nor an integral domain, so ##P=\alpha F + \beta G## with ##\deg F = \deg P## is possible. Even ##P(X) = \alpha(X) \cdot F(X)## with ##\alpha(Y) \neq 0 \neq F(Y)## cannot be ruled out. Monic isn't the point. However, I'd like to know what you mean by an extension, i.e. whether ##R \rightarrowtail R[Y] \twoheadrightarrow R[Y]/(P(Y)) = R'## leads to an embedding ##R \hookrightarrow R' \,.## Maybe it is easy to see, but I'm so used to irreducible polynomials or integral domains that I don't see it immediately, and it depends on what an extension is, since the usual definition analogous to field extensions doesn't work here: no primitive elements, no principal ideal and so on.

And whether you have a ##1## is also debatable. I know, commutative rings without one are sometimes called pseudo-rings. What nonsense. A ring is a ring, and some have a ##1## and others don't, although my suspicion is that yours has one.

Hi Fresh 42,
I feel you were a bit irritated in the previous posts. I hope I didn't say something that hurt you (you see, English is not my mother tongue and I don't have a precise feel for it), but if I did, let me apologize.
Also, I haven't written all the details of the proof in the question, because I thought it would be cumbersome for a simple question in a forum. But I tried to do my best to answer your questions.
I also think it may help to give a simple example of an extension of ##R## by a non-irreducible polynomial:
Let us suppose ##R = \mathbb Q## and ##P = (X-1)(X-2)##. Then ##R[Y]/(P)## is isomorphic as a ring to ##R[Y]/(Y-1) \times R[Y]/(Y-2)## by the Chinese remainder theorem. But ##R[Y]/(Y-1) = \mathbb Q = R[Y]/(Y-2)## (up to identification, the isomorphisms being given by the substitutions ##Y=1## and ##Y=2## respectively), so ##R[Y]/(P) \cong \mathbb Q\times \mathbb Q##. We see that ##\mathbb Q## embeds into ##R[Y]/(P)## as the constants, i.e. diagonally via ##q \mapsto (q, q)##.
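To make the identification explicit (this computation is my own addition): in ##\mathbb Q[Y]/((Y-1)(Y-2))## the classes of ##e_1 = 2 - Y## and ##e_2 = Y - 1## satisfy ##e_1 + e_2 = 1##, ##e_1 e_2 = -(Y-1)(Y-2) \equiv 0## and ##e_i^2 \equiv e_i##, so they are the orthogonal idempotents realizing the decomposition ##\mathbb Q \times \mathbb Q##, and ##\mathbb Q## sits inside diagonally as the constants.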
 
  • #7
My favorite notation by far for a ring without identity is that of Nathan Jacobson, namely a "rng".
 
  • #8
mathwonk said:
My favorite notation by far for a ring without identity is that of Nathan Jacobson, namely a "rng".
Did he name commutative groups solvAbelian? Or at least those with ##G^{(2)}=1\,##?
 
  • #9
coquelicot said:
Hi Fresh 42,
I feel you were a bit irritated in the previous posts.
No, not at all, just lazy, as I don't find the problem exciting enough to figure it out myself. I already mentioned that I had difficulty seeing what you seemed to have seen immediately. The division, for example. Yes, it should work with two monic polynomials, but that is a deviation from the standard and it shouldn't be left to your reader to figure out such cases. Btw., how did it come that you "automatically assumed" monic in your first post but don't allow me to automatically assume that ##\mathbb{Z}[x]## is not Euclidean and therefore the division isn't allowed? That it is possible in this special case shouldn't be left to the reader to figure out. That's just my personal opinion. However, it left me in a situation in which I had to check all "evident" parts by myself. So sorry for my laziness.

The case of ##\operatorname{char}R=2## was just a shot in the dark, because it allows a small ring with zero divisors, and division isn't "self-evident" anymore.
 
  • #10
Hi Fresh 42,
fresh_42 said:
how did it come that you "automatically assumed" monic in your first post
Actually, I didn't "automatically assume" monic in my first post; I simply forgot to write it, although I had carefully checked my proof using, of course, this assumption.
fresh_42 said:
Yes, it should work with two monic polynomials
Actually, it suffices that the divisor be monic; no such restriction is necessary for the polynomial being divided.
Fresh 42, take it easy and don't feel hurt by that: I promise you that division by monic polynomials is a very well known fact, used again and again in commutative algebra, even at relatively low levels. It is not by chance that this theorem appears in Samuel & Zariski in the first chapter. Mathematics is infinite, even at elementary levels, so we all have such "holes" in our mathematical culture. Even if you may be a bit angry with me, I am happy to have been, indirectly, the cause that you now know this fact.
fresh_42 said:
It left me in a situation in which I had to check all "evident" parts by myself.
I continue to think that the induction, in itself, was relatively evident at this mathematical level and didn't have to be detailed. But yes, you are probably right that I should have detailed the main argument as I did in my second post. There are people who don't like overly detailed proofs, especially for a question in a forum, so I thought I shouldn't be too heavy.
fresh_42 said:
how did it come that you "automatically assumed" monic in your first post but don't allow me to automatically assume that ##\mathbb{Z}[x]## is not Euclidean and therefore the division isn't allowed
I don't fully understand this question, but let me try to answer. You've objected that the ring ##R[X]## would have to be Euclidean, hence a PID (I agree that a Euclidean ring must be a PID), etc. I've simply pointed out that this very first assumption is unfounded, because Euclidean division is only performed by some elements of ##R[X]## (the monic ones), not by an arbitrary element of ##R[X]##. Of course, the theorem also holds (in my opinion) if the ring ##R[X]## is Euclidean, e.g. if ##R## is a field. But since we have not considered irreducible polynomials, we lose almost all the algebraic properties of the splitting ring, like uniqueness (up to isomorphism). This is, in my opinion, the reason why "splitting rings" are not considered in the literature.
In my first post, I tried to point out that even this somewhat "artificial" theorem (from the algebraic point of view) can be useful as a lever for applying the fundamental theorem of symmetric functions and easily proving nice results.
 

1. What is a splitting ring of polynomials?

A splitting ring for a polynomial ##P## over a commutative ring ##R## is an extension ring ##\tilde R## of ##R## in which ##P## can be written as a product of linear factors, in analogy with the splitting field of a polynomial over a field.

2. Why is this result important?

This result gives a substitute for splitting fields when the coefficients only form a ring: combined with the fundamental theorem of symmetric functions, it yields short proofs of results in commutative algebra, such as the fact (proved above) that an element integral over integral elements is itself integral, and hence that the integral closure is a subalgebra. Such arguments are standard tools in commutative algebra and its applications.

3. How is a splitting ring of polynomials different from a regular ring?

Over an arbitrary commutative ring a given polynomial need not factor into linear factors; a splitting ring for that polynomial is an extension ring constructed precisely so that it does. Unlike a splitting field, such a splitting ring is in general not unique up to isomorphism, which is one reason the notion receives little attention.

4. Does every polynomial already split over its ring of coefficients?

No. For example, ##X^2 + 1## cannot be factored into linear factors with integer coefficients, so one has to pass to an extension of ##\mathbb Z## (such as ##\mathbb Z[i]##) to split it.

5. Why is it difficult to find information about splitting rings of polynomials?

Splitting rings are a specialized topic in abstract algebra and, as discussed in the thread above, splitting rings of arbitrary (non-irreducible) polynomials lack the uniqueness and structure of splitting fields, so they are rarely treated explicitly in the literature. The concept may also appear under different names, which makes information about it harder to find. A closely related division lemma does appear in textbooks, e.g. the theorem from Samuel and Zariski quoted in the thread.
