Showing ##k[x_1,\ldots,x_n]/\mathfrak{a}## is finite dimensional

Thread starter: elias001
The following exercise is taken from the book "An Introduction to Gröbner Bases" by Ralf Fröberg.

Exercise: Let ##\mathfrak{a}## be an ideal generated by monomials in ##k[x_1,\ldots,x_n]##. Show that ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k## if and only if for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##.

Questions:


I am having trouble showing the other direction:

If for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##, then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k##.

Basically, if I have ##x_i^{d_i}\in \mathfrak{a}##, then with ##u_i=x_i+\mathfrak{a}##, the elements of the set ##\{1, u_i,u_i^2,\ldots, u_i^{d_i-1}\}## would form a basis of the quotient vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##. But I am not sure what to do from here.

I found a written solution online as follows:

For completeness, I am quoting the solution for both directions:
Assume that for each ##i##, there is some ##d_i>0## for which ##x_i^{d_i}\in\mathfrak{a}##. Then, for all ##m_i\geq d_i##, we have ##x_i^{m_i}=x_i^{m_i-d_i} x_i^{d_i}\in\mathfrak{a}##.

In particular, take a monomial ##x_1^{m_1}\cdots x_n^{m_n}##. If there exists at least one ##i## such that ##m_i\geq d_i##, then ##x_1^{m_1}\cdots x_n^{m_n}\in\mathfrak{a}##.
Now, an element of the quotient has the form ##P=\sum_{m_1,\ldots,m_n\geq 0}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.##

As you notice, for all ##Q\in \mathfrak{a}##, we have ##P+Q+\mathfrak{a}=P+\mathfrak{a}.##
The previous point then shows that ##P=\sum_{m_1\leq d_1-1,\ldots,m_n\leq d_n-1}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.##

Hence, the classes ##x_1^{m_1}\cdots x_n^{m_n}, m_i\leq d_i-1## for all ##i##, span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##. Since this family is finite, it follows that this vector space is finite dimensional (of dimension ##\leq d_1\cdots d_n##).
Note that we didn't use the fact that ##\mathfrak{a}## is generated by monomials here.
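The spanning step above can be checked mechanically. Here is a small Python sketch (the function name and the sample choice ##d=(3,4)## are my own, not from the book) that enumerates the exponent tuples with ##m_i\leq d_i-1##; their number is exactly ##d_1\cdots d_n##, matching the dimension bound.

```python
from itertools import product

def standard_monomials(degrees):
    """Exponent tuples (m_1, ..., m_n) with 0 <= m_i <= d_i - 1.

    These index the monomial classes that span k[x_1,...,x_n]/a
    when x_i^{d_i} lies in the ideal a for every i."""
    return list(product(*(range(d) for d in degrees)))

# Illustrative choice d = (3, 4), i.e. x_1^3 and x_2^4 lie in a:
spanning = standard_monomials([3, 4])
print(len(spanning))  # 12 = 3 * 4
```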

Assume to the contrary that there is an ##i## such that ##x_i^m\notin \mathfrak{a}## for all ##m>0##; say ##i=1##.

We are going to show that the classes ##x_1^m+\mathfrak{a},m\geq 0## are linearly independent, showing our quotient space has infinite dimension.

So assume that ##\sum_m \lambda_m x_1^m+\mathfrak{a}=0+\mathfrak{a}##, meaning that ##\sum_m \lambda_m x_1^m\in\mathfrak{a}## (the ##\lambda_m##'s are all zero except for finitely many of them).

By assumption on ##\mathfrak{a}##, there are some polynomials ##P_1,\ldots,P_r\in k[x_1,\ldots,x_n]## and monomials ##M_1,\ldots,M_r\in\mathfrak{a}## such that
##\sum_m \lambda_m x_1^m=P_1M_1+\cdots+P_rM_r##. By assumption, ##M_i## is not a power of ##x_1##, so it must contain another variable (we use the fact that ##M_i## is a monomial here). In particular, ##M_i(x_1, 0,\ldots,0)=0## for all ##i##.

Evaluating the previous equality at ##(x_1,0,\ldots,0)## shows that ##\sum_m \lambda_m x_1^m=0\in k[x_1]##, hence ##\lambda_m=0## for all ##m##, as required.


I think the author is doing a contrapositive proof. I don't understand how, if there is an ##i## such that ##x_i^m\notin \mathfrak{a}## for all ##m>0##, then the classes ##x_1^m+\mathfrak{a},m\geq 0## are linearly independent, which would imply the quotient space has infinite dimension.

Also, I am having trouble understanding the portion of the argument where it says:
".....monomials ##M_1,\ldots,M_r\in\mathfrak{a}## such that
##\sum_m \lambda_m x_1^m=P_1M_1+\cdots+P_rM_r##. By assumption, ##M_i## is not a power of ##x_1##, so it must contain another variable (we use the fact that ##M_i## is a monomial here). In particular, ##M_i(x_1, 0,\ldots,0)=0## for all ##i##"

Basically, I don't know what the author means by "##M_r\in \mathfrak{a}## must contain another variable"; in the simple case of ##\lambda_m x_i^m=P_iM_i##, what would that look like? Also, I am having trouble understanding the notation: ##M_i(x_1, 0,\ldots,0)=0## for all ##i##.

Lastly, when the author at the end says: "Evaluating the previous equality shows that ##\sum_m \lambda_m x_1^m=0\in k[x_1]##", how is the equality being evaluated?

Thank you in advance.
 
elias001 said:
The following exercise is taken from the book "An Introduction to Gröbner Bases" by Ralf Fröberg.

Exercise: Let ##\mathfrak{a}## be an ideal generated by monomials in ##k[x_1,\ldots,x_n]##. Show that ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k## if and only if for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##.

Questions:


I am having trouble showing the other direction:

If for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##, then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k##.

Basically, if I have ##x_i^{d_i}\in \mathfrak{a}##, then with ##u_i=x_i+\mathfrak{a}##, the elements of the set ##\{1, u_i,u_i^2,\ldots, u_i^{d_i-1}\}## would form a basis of the quotient vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##. But I am not sure what to do from here.
A basis has two properties: its elements are linearly independent, and all vectors (here polynomials) can be written as a linear combination of them. These two properties have to be shown.

elias001 said:
I found a written solution online as follows:

Assume to the contrary that there is an ##i## such that ##x_i^m\notin \mathfrak{a}## for all ##m>0##; say ##i=1##.
...
This is not "assuming the contrary", it is a different statement. We want to show
$$
\underbrace{\forall \;i\in \{1,\ldots,n\}\;\exists \;d_i>0\; : \;x_i^{d_i}\in \mathfrak{a} }_{=A}\Longrightarrow \underbrace{\dim_k \left(k[x_1,\ldots,x_n]/\mathfrak{a}\right) < \infty}_{=B}
$$
This is a statement of the form ##A\Longrightarrow B.## Assuming the contrary is assuming that ##B## is false, while ##A## is still true. This must lead to a contradiction if ##A\Longrightarrow B## holds, because then ##B## is true and the assumption that it is wrong is false.

So, what does the contrary mean in our case? It is "not B" and therefore the assumption that ##\dim_k \left(k[x_1,\ldots,x_n]/\mathfrak{a}\right) = \infty,## i.e. that the quotient ring is of infinite dimension over ##k.## But that is not something we would use here.

Hence, what you found on the internet has nothing to do with what we want to show. The internet direction of the statement is the other way around. It shows ##\text{not } A \Longrightarrow \text{not } B## which is equivalent to ##B\Longrightarrow A.## But you started by asking about ##A\Longrightarrow B.##

I am not quite sure what you want to discuss, your problem or the proof from the internet. Let's keep it simple and forget about that internet proof for now. That is, we want to show that the dimension is finite. We can discuss the proof from the internet afterward, but we shouldn't confuse the two, so let's start with the first statement and direction ##A\Longrightarrow B.##

Before we start, recall that we have a ring homomorphism ##\pi##
\begin{align*}
k[x_1,\ldots x_n] &\longrightarrow k[x_1,\ldots,x_n]/\mathfrak{a}\\
p(x)&\longmapsto p(x)+\mathfrak{a}
\end{align*}
which means that ##\pi(af)=a\pi(f)\, , \,\pi(f+g)=\pi(f)+\pi(g)\, , \,\pi(f\cdot g)=\pi(f)\cdot \pi(g)## for any polynomials ##f,g\in k[x_1,\ldots,x_n]## and ##a\in k.##

Part 1, linear independence.

Vectors ##f_1,\ldots,f_m## are linearly independent if ##a_1f_1+\ldots+a_mf_m=0\;(a_i\in k)## implies that ##a_1=\ldots=a_m=0.## So we have
\begin{align*}
0&=a_0\cdot 1+\sum_{i=1}^n \left(a_{1i}u_i^1+a_{2i}u_i^2+\ldots+ a_{d_i-1,i}u_i^{d_i-1}\right)
\end{align*}
As already mentioned in the other thread, this means
\begin{align*}
0&=a_0\cdot 1+\sum_{i=1}^n \left(a_{1i}u_i^1+a_{2i}u_i^2+\ldots+ a_{d_i-1,i}u_i^{d_i-1}\right)\\
&=a_0\cdot 1+\sum_{i=1}^n \left(a_{1i}\pi(x_i)^1+a_{2i}\pi(x_i)^2+\ldots+ a_{d_i-1,i}\pi(x_i)^{d_i-1}\right)\\
&=\pi \left(a_0\cdot 1+\sum_{i=1}^n \left(a_{1i}x_i^1+a_{2i}x_i^2+\ldots+ a_{d_i-1,i}x_i^{d_i-1}\right)\right)\\
&\Longrightarrow \\
\phantom{0}&\phantom{=}a_0\cdot 1+\sum_{i=1}^n \left(a_{1i}x_i^1+a_{2i}x_i^2+\ldots+ a_{d_i-1,i}x_i^{d_i-1}\right)\in \mathfrak{a}
\end{align*}
Here we must assume that all ##x_i^{e_i}\not\in\mathfrak{a}## whenever ##e_i<d_i.## Otherwise, we are stuck. But if this is the case, then it follows that ##a_0=a_{ij}=0## for all ##i,j## simply because sums and multiples by elements of ##k## cannot produce ##x_i^{d_i}.##

Formally, I would prove it by induction over ##n,## i.e. show it for ##n=1.## This is true because ##x_1,\ldots,x_1^{d_1-1} \not\in \mathfrak{a}## and neither is a linear combination of them, because ##\{x_1,\ldots,x_1^{d_1-1},x_1^{d_1}\}## (no typo!) is linearly independent in ##k[x_1].## We cannot "produce" ##x_1^{d_1},## so we cannot get into ##\mathfrak{a}## by only using powers less than ##d_1.##

The induction step uses ##k[x_1,\ldots,x_{n}]=k[x_1,\ldots,x_{n-1}][x_n]## and the fact that even if we allow our coefficients in a linear combination to be taken from ##k[x_1,\ldots,x_{n-1}],## i.e. we have
$$
a_0\cdot 1+ a_{1n}x_n^1+a_{2n}x_n^2+\ldots+ a_{d_n-1,n}x_n^{d_n-1}\in \mathfrak{a}\, , \,a_0,a_{ij}\in k[x_1,\ldots,x_{n-1}]
$$
there is no way to land in ##\mathfrak{a}## other than by ##a_0=a_{ij}=0,## because all powers ##x_n^{1},\ldots,x_n^{d_n}## (no typo) are ##k[x_1,\ldots,x_{n-1}]##-linearly independent. Since ##x_n^{d_n}\in \mathfrak{a}## and the others are not, we may conclude that the coefficients are all zero, since otherwise we can't get into ##\mathfrak{a}.##

Part 2, generating the vector space.

The ring homomorphism ##\pi## is surjective. Any element ##\overline{f}\in k[x_1,\ldots,x_n]/\mathfrak{a}## (with arbitrary powers) can be written as ##f+\mathfrak{a}## with a polynomial ##f\in k[x_1,\ldots,x_n].## Hence,
\begin{align*}
\overline{f}&=f+\mathfrak{a}=\pi(f)\\
&=\pi\left(\sum_{i_1,\ldots,i_n}a_{i_1,\ldots,i_n}\,x_1^{i_1}\cdot\ldots\cdot x_n^{i_n}\right)\\
&=\sum_{i_1,\ldots,i_n}a_{i_1,\ldots,i_n}\,\pi(x_1)^{i_1}\cdot\ldots\cdot \pi(x_n)^{i_n}\\
&=\sum_{i_1,\ldots,i_n}a_{i_1,\ldots,i_n}\,u_1^{i_1}\cdot\ldots\cdot u_n^{i_n} +\mathfrak{a}
\end{align*}
where all terms with powers ##x_i^{m_i},\ m_i\geq d_i,## are shifted into ##\mathfrak{a},## so only terms with ##\{1, u_i,u_i^2,\ldots, u_i^{d_i-1}\}## remain, i.e. they generate ##k[x_1,\ldots,x_n]/\mathfrak{a},## showing that
$$
k[x_1,\ldots,x_n]/\mathfrak{a}\cong k[u_1,\ldots,u_n].
$$

The other direction (from the internet) is that in case ##k[x_1,\ldots,x_n]/\mathfrak{a}## is of finite dimension, there must be some finite power ##x_i^{d_i}\in \mathfrak{a}## of every indeterminate ##x_i .## It proves the negation of this statement: If we stay out of ##\mathfrak{a}## for some variable ##x_i## (wlog ##i=1##), i.e. ##1,x_1,x_1^2,\ldots \not\in \mathfrak{a}## (infinitely many), then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is infinite dimensional. To show infinite dimension, it is sufficient to show that there are infinitely many linearly independent vectors, in this case ##1+\mathfrak{a},x_1+\mathfrak{a},x_1^2+\mathfrak{a},x_1^3+\mathfrak{a}\ldots## which can be written as ##1,u_1,u_1^2,u_1^3,\ldots## if we want to avoid writing ##+\mathfrak{a}## all the time.
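To build some intuition for this direction, here is a minimal Python sketch with a hypothetical monomial ideal as example (the generators are my own choice): a monomial lies in a monomial ideal exactly when some generator divides it, so if no generator is a pure power of ##x_1##, every power ##x_1^m## stays out of the ideal, giving infinitely many independent classes.

```python
def divides(gen, mono):
    """Componentwise divisibility of monomials given as exponent tuples."""
    return all(g <= m for g, m in zip(gen, mono))

def in_monomial_ideal(mono, gens):
    """A monomial lies in <gens> iff some generator divides it."""
    return any(divides(g, mono) for g in gens)

# Hypothetical ideal in k[x1, x2]: a = <x1^2 * x2, x2^3>, no pure power of x1.
gens = [(2, 1), (0, 3)]

# Every pure power x1^m, i.e. exponent tuple (m, 0), stays out of a:
print(all(not in_monomial_ideal((m, 0), gens) for m in range(100)))  # True
```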
 
@fresh_42 Apologies for the late reply, I had to step out for a bit.

I edited my post adding the other direction's solution. What you wrote is a billion times more clear. I think many people on math stackexchange, whenever you ask them an undergraduate math question, they answer it at the level of graduate school.

I want to clarify that the proof you gave has to do with the direction: If for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##, then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k##.

Actually, when the author stated assume the contrary, by contrary, does he mean contrapositive?

Also, for the surjective homomorphism ##\pi:k[x_1,\ldots, x_n]\to k[x_1,\ldots, x_n]/\mathfrak{a}##

defined by ##p(x)\mapsto p(x)+\mathfrak{a}##: you are referring to ##\pi(x_i)=x_i+\mathfrak{a}=u_i##, I mean we are letting ##u_i=x_i+\mathfrak{a}##?

For the other direction from the internet that I found, is there a way I don't have to prove the contrapositive statement? I don't see why it holds that if ##x_i^{d_i}\notin \mathfrak{a}## for all ##d_i>0##, i.e. ##1,x_1,x_1^2,\ldots \not\in \mathfrak{a}## (infinitely many of them), then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is infinite dimensional.

The vector space ##k[x_1,\ldots,x_n]## is finite dimensional, and ##\mathfrak{a}\subset k[x_1,\ldots,x_n]##; I just don't see, if we have a particular ##x_i^{d_i}\notin \mathfrak{a}##, how that means the quotient ring/vector space is infinite dimensional. What I am trying to say is that I am not getting any intuition about it. Another thing is, I am having trouble understanding the author's terse notation with ##M_r##.
 
elias001 said:
@fresh_42 Apologies for the late reply, I had to step out for a bit.

I edited my post adding the other direction's solution. What you wrote is a billion times more clear. I think many people on math stackexchange, whenever you ask them an undergraduate math question, they answer it at the level of graduate school.

I want to clarify that the proof you gave has to do with the direction: If for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##, then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k##.

Actually, when the author stated assume the contrary, by contrary, does he mean contrapositive?

Also, for the surjective homomorphism ##\pi:k[x_1,\ldots, x_n]\to k[x_1,\ldots, x_n]/\mathfrak{a}##

defined by ##p(x)\mapsto p(x)+\mathfrak{a}##: you are referring to ##\pi(x_i)=x_i+\mathfrak{a}=u_i##, I mean we are letting ##u_i=x_i+\mathfrak{a}##?
So far, so correct.
elias001 said:
For the other direction from the internet that I found, is there a way I don't have to prove the contrapositive statement? I don't see why it holds that if ##x_i^{d_i}\notin \mathfrak{a}## for all ##d_i>0##, i.e. ##1,x_1,x_1^2,\ldots \not\in \mathfrak{a}## (infinitely many of them), then ##k[x_1,\ldots,x_n]/\mathfrak{a}## is infinite dimensional.
It is all about the definition of ##d_i## as the minimal power with ##x_i^{d_i}\in \mathfrak{a}.## That was my example in the other thread with ##k[x,y]/\bigl\langle y \bigr\rangle## where we have ##y^3\in \bigl\langle y \bigr\rangle ## but we cannot conclude that the coefficients in ##a+by+cy^2 \in \bigl\langle y \bigr\rangle## are all zero because ##y,y^2\in \bigl\langle y \bigr\rangle## and ##b## and ##c## can be any value. Such cases must be excluded.

The internet proof is more precise because I was a bit sloppy when I said we "cannot land in" ##\mathfrak{a}.## That was the idea behind it. The technical details of "not land in" are a bit more complicated.

I will only use one variable ##x_1## since this shows the essential arguments. More than one variable only means more typing and dots or sums.

We have ##a_0+a_1x_1+\ldots+a_mx_1^m\in \mathfrak{a}## and need to show that ##a_0=a_1=\ldots=a_m=0.## But the sum is only in ##\mathfrak{a}## and not zero. To make the step "land in ##\mathfrak{a}##" we need to have a closer look at ##\mathfrak{a}.## The ideal ##\mathfrak{a}## is generated by monomials, i.e. by elements of the form ##M_k=x_1^{m_{1,k}}\cdot\ldots\cdot x_n^{m_{n,k}}.## Hence, ##a_0+a_1x_1+\ldots+a_mx_1^m\in \mathfrak{a}## means that we can write it as
$$
a_0+a_1x_1+\ldots+a_mx_1^m=\sum_{k=1}^K \lambda_k M_k=\sum_{k=1}^K \lambda_k x_1^{m_{1,k}}\cdot\ldots\cdot x_n^{m_{n,k}}.
$$
with some (finite) numbering ##k=1,\ldots,K## and some ##K>0.## (##K=0## would imply ##\mathfrak{a}=\{0\}## and there is nothing to show because then ##k[x_1,\ldots,x_n]/\mathfrak{a}=k[x_1,\ldots,x_n]/ \{0\} =k[x_1,\ldots,x_n]## which is infinite dimensional.)

Now we have to compare the two sides. We have powers of ##x_1## on the left and monomials ##M_k## on the right. But those are linearly independent in ##k[x_1,\ldots,x_n]## unless they are equal. But then ##x_1^{r}=M_k## for some ##r## and some ##k,## and ##M_k\in \mathfrak{a}## since they generate ##\mathfrak{a},## and ##r<d_1.## This cannot be, since the ##d_1## was already the smallest power of ##x_1## with that property. The only way out of this contradiction is if ##a_r=0=\lambda_k.## And that goes for every coefficient, i.e. ##a_0=a_1=\ldots=a_m=0=\lambda_1=\ldots=\lambda_K.##

This is what it means that we "cannot land in" ##\mathfrak{a}## other than by zeros, or in other words, that ##x_1,\ldots,x_1^m## are linearly independent not only in ##k[x_1,\ldots,x_n]## but also modulo ##\mathfrak{a}## as long as ##m<d_1## and ##d_1## is minimal with the property ##x_1^{d_1}\in \mathfrak{a},## and ##\mathfrak{a}## is generated by monomials. (See my example above with ##k[x,y]## and ##\mathfrak{a}=\bigl\langle y \bigr\rangle.## Also, if the generators are arbitrary polynomials in ##k[x_1,\ldots,x_n],## i.e. sums of monomials, then our calculation breaks down since we do not have any control over the total sum ##a_0+a_1x_1+\ldots+a_mx_1^m## any longer.)

This argument can be used in both cases: to show that the finitely many ##u_i=x_i+\mathfrak{a}## are linearly independent, or in the case of the contrapositive proof where we assume all powers ##x_1^m\not\in \mathfrak{a},## because we always only consider finite sums like ##a_0+a_1x_1+\ldots+a_mx_1^m.## Infinite sums are not defined because we do not have a concept of convergence. The terminus technicus is: ##\sum_{k=0}^\infty a_kx_1^k## with only finitely many coefficients unequal to zero, or even more usual: with almost all ##a_k=0.## Mathematicians say "almost all" if they mean "all but finitely many."

elias001 said:
The vector space ##k[x_1,\ldots,x_n]## is finite dimensional, and ##\mathfrak{a}\subset k[x_1,\ldots,x_n]##; I just don't see, if we have a particular ##x_i^{d_i}\notin \mathfrak{a}##, how that means the quotient ring/vector space is infinite dimensional. What I am trying to say is that I am not getting any intuition about it. Another thing is, I am having trouble understanding the author's terse notation with ##M_r##.
##k[x_1,\ldots,x_n]## is of infinite dimension over ##k## since all elements ##1,x_1,x_1^2,x_1^3,\ldots## are already infinitely many linearly independent vectors (polynomials). I hope I could clarify the role of the ##M_r.## They are the monomials that generate ##\mathfrak{a}## by the given condition about ##\mathfrak{a}.## And monomials are products of powers of the ##x_i,## like e.g. ##x_1^2x_2x_3^7.## The leading coefficient of these monomials can be assumed to be ##1## since we otherwise can simply divide it away and still have generators.
 
elias001 said:
I think many people on math stackexchange, whenever you ask them an undergraduate math question, they answer it at the level of graduate school.
I recently answered a simple question on MSE at a high school level about division. I tried to explain it simply and clearly. Ten minutes later, the entire thread had been deleted. Seems as if not everybody is welcome there.
 
@fresh_42 Sorry for my late reply again. It took a while to gather my thoughts. Also, I am wondering if the preview feature for posting on here is the same as on math stackexchange. I mean, are there places where I can preview what I wrote before I hit reply or post a thread? This is due to math mode in LaTeX being enabled with ##\#\### instead of ##\$## signs.

You are correct ##k[x_1,\ldots,x_n]## is of infinite dimension. I was thinking of the number of variables, ##x_1, x_2,\ldots,x_n##.

There are a couple of things I want to clear up still, and I am going to make use of the two theorems below.

Let's say I want to show that the quotient vector spaces ##k[x,y,z]/(x^3y^4)## or ##k[x,y,z]/(x^3,y^4)## are finite dimensional.

Basically, for either one of them, I count two of the variables, ##x,z## or ##y,z##, as coefficients due to the following theorem:

>Theorem 1: ##k[x_1,\ldots,x_{n-1},x_n]\cong (k[x_1,\ldots,x_{n-1}])[x_n]##



So say I count ##y, z## as coefficients, then I would have ##(k[y,z])[x]/(x^3y^4)## or ##(k[y,z])[x]/(x^3,y^4)##

Then I use ##(2)## of the following theorem to list out the basis coset elements for ##w=x+(x^3y^4)##, or ##w=x+(x^3,y^4)##

>Theorem 2: Let ##K## be an extension field of ##F## and ##u\in K## an algebraic element over ##F## with minimal polynomial ##p(x)## of degree ##n##. Then

##(1)## ##F(u)\cong F[x]/(p(x))##

##(2)## ##\{1_F,u,u^2,\ldots,u^{n-1}\}## is a basis of the vector space ##F(u)## over ##F##.

##(3)## ##[F(u):F]=n##.

In both cases for the ##x## variable for both ##(k[y,z])[x]/(x^3y^4)## or ##(k[y,z])[x]/(x^3,y^4)##, I would have:

##\{1, x,x^2, zx,zx^2, yx, y^2x, y^3x, yx^2, y^2x^2, y^3x^2\}##

I collect all the ##1## terms as ##1##, in terms of coset basis elements, it is ##1_w=1+(x^3y^4)## and ##1_w=1+(x^3,y^4)##

and all the ##x## terms as

##x, x^2##, for ##x## we have ##x,zx,yx,y^2x, y^3x## and in terms of coset elements it is ##w=(1+z+y+y^2+y^3)x+(x^3y^4)## and ##w=(1+z+y+y^2+y^3)x+(x^3,y^4)##

for the ##x^2## terms, we have ##x^2, zx^2, yx^2,y^2x^2,y^3x^2## and in terms of coset basis elements, we have ##w^2=(1+z+y+y^2+y^3)x^2+(x^3y^4)## and ##w^2=(1+z+y+y^2+y^3)x^2+(x^3,y^4)##

So the basis element for the variable ##x## are ##1_w,w,w^2## for both ##k[x,y,z]/(x^3y^4)## or ##k[x,y,z]/(x^3,y^4)##

Another thing is, in the internet's proof, I don't understand his notation when he stated: ##M_i(x_1, 0,\ldots,0)=0## for all ##i##.

There is something else I want to quickly ask you for the other direction. I understood most of it.
 
elias001 said:
@fresh_42 Sorry for my late reply again. It took a while to gather my thoughts. Also, I am wondering if the preview feature for posting on here is the same as on math stackexchange. I mean, are there places where I can preview what I wrote before I hit reply or post a thread? This is due to math mode in LaTeX being enabled with ##\#\### instead of ##\$## signs.
The preview function here is different from MSE. MSE directly shows you what you write; here, we have to hit the preview button in the upper right corner of the edit section, and often enough, additionally refresh the page to force the interpreter to render the code. However, it is currently not working properly due to a software upgrade. It either shows you nothing or turns the code into ASCII. Neither is desirable.

I either type now, hoping that I make no errors, and usually edit my response several times afterward because I did make mistakes, or I use a TeX editor where I usually write longer calculations and copy it in here when I'm done. It's a bit inconvenient right now. I have plenty of keyboard shortcuts for TeX commands, which makes life a lot easier.

elias001 said:
You are correct ##k[x_1,\ldots,x_n]## is of infinite dimension. I was thinking of the number of variables, ##x_1, x_2,\ldots,x_n##.

There are a couple of things I want to clear up still, and I am going to make use of the two theorems below.

Let's say I want to show that the quotient vector spaces ##k[x,y,z]/(x^3y^4)## or ##k[x,y,z]/(x^3,y^4)## are finite dimensional.

Basically, for either one of them, I count two of the variables, ##x,z## or ##y,z##, as coefficients due to the following theorem:

>Theorem 1: ##k[x_1,\ldots,x_{n-1},x_n]\cong (k[x_1,\ldots,x_{n-1}])[x_n]##



So say I count ##y, z## as coefficients, then I would have ##(k[y,z])[x]/(x^3y^4)## or ##(k[y,z])[x]/(x^3,y^4)##

Then I use ##(2)## of the following theorem to list out the basis coset elements for ##w=x+(x^3y^4)##, or ##w=x+(x^3,y^4)##
Your rings still have infinitely many powers of ##z.## But even if you drop ##z## you still have all expressions ##x^ny,x^ny^2,x^ny^3## when you factor ##\bigl\langle x^3\cdot y^4 \bigr\rangle## as in your first example.

So only ##k[x,y]/\bigl\langle x^3,y^4 \bigr\rangle## would be finite dimensional: no indeterminate ##z## and only powers ##1,x,x^2## and ##1,y,y^2,y^3## which reduces the degree of your polynomials to maximal ##2## for ##x## and maximal ##3## for ##y.## But you have to get rid of ##z.## The comma makes a big difference. I could write a formula, but that would involve a tensor product.

The rule is: In ##k[x,y,z]/\bigl\langle x^3y^4 \bigr\rangle## all terms in polynomials that contain ##x^3\cdot y^4## are identified with zero, which still leaves infinitely many ##k##-linearly independent monomials, and in ##k[x,y,z]/\bigl\langle x^3\, , \,y^4 \bigr\rangle## all terms in polynomials that contain ##x^3## or ##y^4## are identified with zero. With ##z## these are still infinitely many, namely the powers of ##z## (and others like ##xz^n##). Without ##z,## i.e. in ##k[x,y]/\bigl\langle x^3\, , \,y^4 \bigr\rangle,## we have only polynomials $$p(x,y)=a_0+a_1x+a_2x^2+b_1y+b_2y^2+b_3y^3+c_1xy+c_2xy^2+c_3xy^3+ d_1x^2y+d_2x^2y^2+d_3x^2y^3,$$
hence dimension ##3\cdot 4=12## with ##12## independent coefficients ##a_i,b_j,c_k,d_l.##
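The comma-versus-product distinction can also be illustrated numerically. A rough Python sketch (the exponent-tuple encoding and function names are my own, purely illustrative) counts the monomials ##x^iy^j## below a degree bound that avoid the ideal: for ##\bigl\langle x^3, y^4 \bigr\rangle## the count stabilizes at ##12##, while for ##\bigl\langle x^3y^4 \bigr\rangle## it keeps growing with the bound.

```python
from itertools import product

def survives(mono, gens):
    """True if no generator divides the monomial (exponent tuples)."""
    return not any(all(g <= m for g, m in zip(gen, mono)) for gen in gens)

def standard_count(gens, bound):
    """Count monomials x^i y^j with i, j < bound that stay out of <gens>."""
    return sum(1 for mono in product(range(bound), repeat=2) if survives(mono, gens))

print(standard_count([(3, 0), (0, 4)], 10))  # 12 for <x^3, y^4>
print(standard_count([(3, 4)], 10))          # 58 for <x^3 * y^4> ...
print(standard_count([(3, 4)], 20))          # 128 ... and still growing
```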

Timeout.
 
@fresh_42 I thought I would clarify the subtle part using concrete examples: if I have ##x^m y^n##, how do I count all the terms with mixed lower-power terms of the form ##x^{m-1}y^{n-1}##? Is the idea of collecting terms of, say, ##x## as a coset basis element, ##w=(1+y+y^2+y^3)x+(x^3y^4)## and ##w=(1+y+y^2+y^3)x+(x^3,y^4)##, correct for both ##k[x,y]/(x^3y^4)## and ##k[x,y]/(x^3,y^4)##?

When you count maximal 2 for ##x## and maximal 3 for ##y##, you are not including the identity element. So the formula I asked you about in the other post doesn't account for mixed-variable terms?

I have learned about tensor products, but not as a tool for counting arguments.

What does timeout mean? Do you think I should start a new post on the rest of my questions? Or do you want me to post the rest of my questions later?
 
elias001 said:
@fresh_42 I thought I would clarify the subtle part where, if I have ##x^m y^n##, how do I count all the terms with mixed lower-power terms of the form ##x^{m-1}y^{n-1}##.
If you have a product like ##x^my^n## that generates your ideal, then there is no upper bound for mixed degrees, and you get infinitely many ##k##-linearly independent polynomials. There is no way to identify, e.g., any of ##1,x,x^2,\ldots,x^m,x^{m+1},\ldots## with zero. Only monomials ##x^ky^j## with ##k\geq m## AND ##j\geq n## vanish; the rest remains. This means this one is infinite-dimensional.

If you switch the product to a comma, i.e. ##\mathfrak{a}=\bigl\langle x^m\, , \,y^n \bigr\rangle## then all monomials ##x^ky^j## with ##k\geq m## OR ##j\geq n## vanish. This would limit all possible degrees from above to ##m\cdot n##, making it an ##m\cdot n##-dimensional space.

In the case with the comma, we have
$$
k[x,y]/\bigl\langle x^m\, , \,y^n \bigr\rangle \cong k[u,y]/\bigl\langle y^n \bigr\rangle \cong k[u,v]
$$
where ##u=x+\bigl\langle x^m,y^n \bigr\rangle## and ##v=y+\bigl\langle x^m,y^n \bigr\rangle.##
The tensor notation would be
$$
k[x,y]/\bigl\langle x^m\, , \,y^n \bigr\rangle \cong k[x]/\bigl\langle x^m \bigr\rangle \otimes k[y]/\bigl\langle y^n \bigr\rangle
$$
where the components have the dimensions ##m## and ##n,## respectively, and the tensor product has the dimension ##n\cdot m.##

Such a split is not possible if we have a product ##x^my^n.##

elias001 said:
What does timeout mean? Do you think I should start a new post on the rest of my questions? Or do you want me to post the rest of my questions later?
I need a break before I answer the second part. It's already quite late over here.
 
@fresh_42 Ok, I will try to ask you tomorrow in the evening if that is ok. Have a good night.
 
elias001 said:
>Theorem 2: Let ##K## be an extension field of ##F## and ##u\in K## an algebraic element over ##F## with minimal polynomial ##p(x)## of degree ##n##. Then

##(1)## ##F(u)\cong F[x]/(p(x))##

##(2)## ##\{1_F,u,u^2,\ldots,u^{n-1}\}## is a basis of the vector space ##F(u)## over ##F##.

##(3)## ##[F(u):F]=n##.
Correct.
elias001 said:
In both cases for the ##x## variable for both ##(k[y,z])[x]/(x^3y^4)## or ##(k[y,z])[x]/(x^3,y^4)##, I would have:

##\{1, x,x^2, zx,zx^2, yx, y^2x, y^3x, yx^2, y^2x^2, y^3x^2\}##
and many, many others, e.g. ##z,z^2,z^3,\ldots##
elias001 said:
I collect all the ##1## terms as ##1##, in terms of coset basis elements, it is ##1_w=1+(x^3y^4)## and ##1_w=1+(x^3,y^4)##

and all the ##x## terms as

##x, x^2##, for ##x## we have ##x,zx,yx,y^2x, y^3x## and in terms of coset elements it is ##w=(1+z+y+y^2+y^3)x+(x^3y^4)## and ##w=(1+z+y+y^2+y^3)x+(x^3,y^4)##

for the ##x^2## terms, we have ##x^2, zx^2, yx^2,y^2x^2,y^3x^2## and in terms of coset basis elements, we have ##w^2=(1+z+y+y^2+y^3)x^2+(x^3y^4)## and ##w^2=(1+z+y+y^2+y^3)x^2+(x^3,y^4)##

So the basis element for the variable ##x## are ##1_w,w,w^2## for both ##k[x,y,z]/(x^3y^4)## or ##k[x,y,z]/(x^3,y^4)##
Ok. I have explained this in post #7. There are more basis elements. Infinitely many in ##k[x,y,z]/\bigl\langle x^3y^4 \bigr\rangle,## infinitely many in ##k[x,y]/\bigl\langle x^3y^4 \bigr\rangle,## and only a finite basis with ##12## basis vectors in ##k[x,y]/\bigl\langle x^3\, , \,y^4 \bigr\rangle.##
elias001 said:
Another thing is, in the internet's proof, I don't understand his notation when he stated: ##M_i(x_1, 0,\ldots,0)=0## for all ##i##.
I used the minimality of the ##d_i## to conclude that ##x_i^r=M_k## is not possible for ##r<d_i## and that the coefficients at ##x_i^r## and ##M_k## must therefore be zero.

If we do not have the minimality criterion, then we need something else in order to compare the terms in the equation
$$
a_0+a_1x_1+\ldots+a_mx_1^m=\sum_{k=1}^K \lambda_k M_k=\sum_{k=1}^K \lambda_k x_1^{m_{1,k}}\cdot\ldots\cdot x_n^{m_{n,k}}.
$$
I assume that this other criterion is that the monomial generators ##M_k## of the ideal ##\mathfrak{a}## are not built from powers of only one variable. In this case, we do not need the minimality of the ##d_i.## The monomials are then of the form ##M_k=x_1^{i_{1,k}}\cdot\ldots\cdot x_n^{i_{n,k}}## where at least two exponents (powers) are positive. The ##M_k## are polynomials and therefore
$$
M_k=M_k(x_1,\ldots,x_n)
$$
and
$$
a_0+a_1x_1+\ldots+a_mx_1^m=\sum_{k=1}^K \lambda_k M_k=\sum_{k=1}^K \lambda_k M_k(x_1,\ldots,x_n)=\sum_{k=1}^K \lambda_k x_1^{m_{1,k}}\cdot\ldots\cdot x_n^{m_{n,k}}.
$$
If we insert zeros for the ##x_i## then ##M_k(0,\ldots,0)=0##. But if we only set ##x_2=\ldots=x_n=0## and keep ##x_1## arbitrary, then our equation becomes
$$
a_0+a_1x_1+\ldots+a_mx_1^m=\sum_{k=1}^K \lambda_k M_k(x_1,0,\ldots,0)=\sum_{k=1}^K \lambda_k x_1^{m_{1,k}}\cdot 0^{m_{2,k}} \cdot \ldots\cdot 0^{m_{n,k}}=0
$$
This means that ##a_0=a_1=\ldots=a_m=0## because the powers of ##x_1## are ##k##-linearly independent, which is what we wanted to show. This argument doesn't work if the ##M_k## can be powers of only ##x_1.##
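The substitution step can be made concrete with a short Python sketch (the three generators are made up for illustration): encode each ##M_k## as an exponent tuple; as soon as ##M_k## involves a variable other than ##x_1##, evaluating at ##(x_1,0,\ldots,0)## gives ##0##.

```python
def eval_monomial(expts, point):
    """Evaluate x1^e1 * ... * xn^en at a numeric point (p1, ..., pn)."""
    value = 1
    for e, p in zip(expts, point):
        value *= p ** e
    return value

# Hypothetical generators M_k in (x1, x2, x3), each involving x2 or x3:
gens = [(2, 1, 0), (0, 1, 4), (0, 0, 1)]

# Setting x2 = x3 = 0 while x1 stays arbitrary kills every M_k,
# so the right-hand side sum of lambda_k * M_k collapses to 0:
for t in (1, 2, 5):  # sample values for x1
    assert all(eval_monomial(g, (t, 0, 0)) == 0 for g in gens)
print("every M_k vanishes at (x1, 0, ..., 0)")
```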

We need that ##\{1,x_1,\ldots,x_1^m\}## are linearly independent, and all we have is a comparison with the generators of ##\mathfrak{a}.## This can be achieved if ##d_1## in ##x_1^{d_1}\in \mathfrak{a}## is minimal and ##m<d_1,## or if the powers of ##x_1## do not occur alone on the right-hand side within the generators ##M_k##, and we can set the other variables to zero making the entire expression on the right zero without changing the left-hand side.

I guess, the "not alone" condition is given so we wouldn't need minimal ##d_i.## In any case, we need a piece of additional information in order to compare ##a_0+a_1x_1+\ldots+a_mx_1^m## with ##\sum_{k=1}^K \lambda_k M_k.## The goal is to show that ##a_0=\ldots=a_m=0.##



elias001 said:
There is something else I want to quickly ask you for the other direction. I understood most of it.
Which is it?
 
Last edited:
  • #12
@fresh_42 Sorry for my late response. I fell asleep early last night.

Before I ask you about the other direction, I just want to sort out the case of how to explicitly list the basis elements of a quotient ring of the form ##k[x_1,\ldots,x_n]/\mathfrak{a}##, and I will use concrete examples. I want to clear up the case of ##k[x,y]/\langle x^3,y^4\rangle## before I deal with other cases.

Suppose we have the following example: ##k[x,y]/\langle x^3,y^4\rangle##, then the basis elements should be ##\{1, x,y,x^2,y^2,y^3,xy,xy^2,xy^3,x^2y,x^2y^2,x^2y^3\}##. But in terms of coset basis elements (I don't know what else to call them), they are: ##\{1+\langle x^3,y^4\rangle, x+\langle x^3,y^4\rangle, y+\langle x^3,y^4\rangle, x^2+\langle x^3,y^4\rangle, y^2+\langle x^3,y^4\rangle, y^3+\langle x^3,y^4\rangle, xy+\langle x^3,y^4\rangle, xy^2+\langle x^3,y^4\rangle, xy^3+\langle x^3,y^4\rangle, x^2y+\langle x^3,y^4\rangle, x^2y^2+\langle x^3,y^4\rangle, x^2y^3+\langle x^3,y^4\rangle\}\quad (*)##

In ##(*)##, there are four terms that have ##x## in them, ##\{x+\langle x^3,y^4\rangle, xy+\langle x^3,y^4\rangle, xy^2+\langle x^3,y^4\rangle, xy^3+\langle x^3,y^4\rangle\}##. Do I collect them all into one ##x## term and treat the ##y## variable as a coefficient? Meaning if I have ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##, meaning I just have one basis element ##x##? If it can't be done, would it be possible if instead of ##k[x,y]/\langle x^3,y^4\rangle##, we have ##(k[y])[x]/\langle x^3,y^4\rangle##, and then would the coset basis elements be ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##?
 
Last edited:
  • #13
elias001 said:
@fresh_42 Sorry for my late response. I felt asleep last night early.

Before I ask you about the other direction, I just want to sort out the case of how to explicitly list the basis elements of a quotient ring of the form ##k[x_1,\ldots,x_n]/\mathfrak{a}##, and I will use concrete examples. I want to clear up the case of ##k[x,y]/\langle x^3,y^4\rangle## before I deal with other cases.

Suppose we have the following example: ##k[x,y]/\langle x^3,y^4\rangle##, then the basis elements should be ##\{1, x,y,x^2,y^2,y^3,xy,xy^2,xy^3,x^2y,x^2y^2,x^2y^3\}##. But in terms of coset basis elements (I don't know what else to call them), They are: ##\{1+\langle x^3,y^4\rangle, x+\langle x^3,y^4\rangle, y+\langle x^3,y^4\rangle, x^2+\langle x^3,y^4\rangle, y^2+\langle x^3,y^4\rangle, y^3+\langle x^3,y^4\rangle, xy+\langle x^3,y^4\rangle, xy^2+\langle x^3,y^4\rangle, xy^3+\langle x^3,y^4\rangle, x^2y+\langle x^3,y^4\rangle, x^2y^2+\langle x^3,y^4\rangle, x^2y^3+\langle x^3,y^4\rangle\}\quad (*)##
Correct.
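As a quick cross-check of that count, the residues of the monomials ##x^iy^j## with ##i<3,\ j<4## can be enumerated in a few lines of plain Python (the encoding is my own throwaway convention, not from any computer algebra system):

```python
# For a = <x^3, y^4>, a monomial x^i y^j lies in a iff i >= 3 or j >= 4,
# so the surviving cosets are represented by the pairs (i, j) with i < 3, j < 4.
dx, dy = 3, 4
basis = [(i, j) for i in range(dx) for j in range(dy)]

def pretty(i, j):
    if i == 0 and j == 0:
        return "1"
    return "".join(v + (f"^{e}" if e > 1 else "")
                   for v, e in (("x", i), ("y", j)) if e > 0)

print(len(basis))          # 12 = 3 * 4, matching the list (*)
print(sorted(pretty(i, j) for i, j in basis))
```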

The difference is basically whether we are on the left side of
$$
k[u_1,\ldots,u_n]\cong k[x_1,\ldots,x_n]/\mathfrak{a}
$$
or on the right. It doesn't make a difference since they are isomorphic, but they use different languages, and people usually do not distinguish between the two. We have a so-called coordinate ring on the left, where the coordinate (basis) vectors are ##1,u_1,\ldots,u_n,## and we have cosets (or equivalence classes) on the right in the quotient (or factor) ring. The isomorphism is given by ##u_i\longmapsto x_i+\mathfrak{a}.## We speak of coordinates because the ##u_i## are not necessarily indeterminates as the ##x_i## are, which behave like transcendental elements (over ##k##). The ##u_i## can satisfy algebraic equations, like ##u_1^3=0## and ##u_2^4=0## in your example.

Whether the language of the coordinates ##u_i## is used, or the language of the cosets ##x_i+\mathfrak{a}## often depends on the situation and which one is more convenient for calculations or the goal of a proof, rather than being strictly one or the other.

Consider again, let's say, division by ##5.## The ring here is the integers ##\mathbb{Z},## and the ideal is ##\mathfrak{a}=5\mathbb{Z},## all multiples of ##5;## the isomorphism is ##\mathbb{F}_5=\mathbb{Z}_5\cong \mathbb{Z}/5\mathbb{Z}.##

We have five possible remainders, ##0,1,2,3,4.## These are the ##u_i,## and they are what we calculate with. However, ##120,76,-8,343,-26## are also possible representatives, and there is no mathematical reason to prefer one over the other set of representatives of the equivalence classes modulo ##5.## But you certainly agree that ##\{0,1,2,3,4\}## is much more convenient than ##\{120,76,-8,343,-26\}## to operate with. Nevertheless,
$$
120\equiv 0\pmod{5}\ ,\ 76\equiv 1\pmod{5}\ ,\ -8\equiv 2\pmod{5}\ ,\ 343\equiv 3\pmod{5}\ ,\ -26\equiv 4\pmod{5}
$$
The last equations could also be written as
$$
120+5\mathbb{Z}=0+5\mathbb{Z}\ ,\ 76+5\mathbb{Z}=1+5\mathbb{Z}\ ,\ -8+5\mathbb{Z}=2+5\mathbb{Z}\ ,\ 343+5\mathbb{Z}=3+5\mathbb{Z}\ ,\ -26+5\mathbb{Z}=4+5\mathbb{Z}
$$
in terms of ##x_i+\mathfrak{a}.##

Most often, we only calculate with them in an abstract notation like ##a\equiv b\pmod{5}## and do not care by which element ##a,b## are represented. On some occasions, however, like the Euclidean algorithm, we demand that the remainder in ##a=q\cdot 5 +r## is ##0\leq r< 5.##

Nobody calculates with ##0+\mathfrak{a},\ldots,4+\mathfrak{a}.## This is a bit different in your proof with the polynomials. We want to show that the ##u_i## are ##k##-linearly independent, and we only know that the ##x_i## are ##k##-linearly independent, so we have to switch between the two languages in order to use information we have in one language in the realm of the other. We must therefore also consider the difference between them, which is the ideal ##\mathfrak{a}.## Here is where the structure of ##\mathfrak{a}## comes into play.

elias001 said:
In ##(*)##, there are four terms that have ##x## in them, ##\{x+\langle x^3,y^4\rangle, xy+\langle x^3,y^4\rangle, xy^2+\langle x^3,y^4\rangle, xy^3+\langle x^3,y^4\rangle\}##. Do I collect them all into one ##x## term and treating the ##y## variable as coefficient?
No. Well, generally no. Since ##f(x)\longmapsto f(x)+\mathfrak{a}## is a ring homomorphism, we have in your example for instance
$$
xy^3+\bigl\langle x^3,y^4 \bigr\rangle=\left(x+\bigl\langle x^3,y^4 \bigr\rangle \right)\cdot \left(y^3+\bigl\langle x^3,y^4 \bigr\rangle\right)=\left(x+\bigl\langle x^3,y^4 \bigr\rangle\right)\cdot\left(y+\bigl\langle x^3,y^4 \bigr\rangle\right)^3
$$
But that is rarely written that way. At most, you will find something like ##\left(f\cdot g\right)+\mathfrak{a}=(f+\mathfrak{a})(g+\mathfrak{a}).##

elias001 said:
Meaning if I have ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##, meaning I just have one basis element ##x##?
We have a very special situation here. There is a big difference between ##\mathfrak{a}=\bigl\langle x^3\, , \,y^4 \bigr\rangle ## and ##\mathfrak{b}=\bigl\langle x^3\cdot y^4 \bigr\rangle.## Polynomials in ##x,y## modulo ##\mathfrak{a}## are reduced by ##x^3=0## OR ##y^4=0## whereas polynomials modulo ##\mathfrak{b}## are reduced by ##x^3=0## AND ##y^4=0##. The former allows a separation of the variables, the latter does not.

Since you asked about ##\mathfrak{a}## we can make the intermediate step and write
$$
k[x,y]/\bigl\langle x^3,y^4 \bigr\rangle \cong k[ u ][y]/\bigl\langle y^4 \bigr\rangle
$$
where ##u=x+\bigl\langle x^3 \bigr\rangle .## In this case, we reduced the variables to one, namely ## u, ## and extended the coefficients from scalars in ##k## to polynomials in ## u. ## Since ##u^3=0,## we only have polynomials ##a+bu+cu^2## as coefficients. I would avoid writing them with an ##x## since ##x^3\neq 0.## If you do, then it is a bit sloppy, and I cannot rule out that authors do, but then you have to be sure in which ring you currently are, the ring ##k[x]## where ##x^3\neq 0## or the ring ##k[ u ]## where ##u^3=0.##
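To see this reduction done concretely: since the ideal is generated by monomials, passing to the standard coset representative simply deletes every monomial divisible by a generator. A hedged sketch in plain Python (the dict encoding of a polynomial is my own convention):

```python
# A polynomial in k[x, y] as {(i, j): coefficient}.  Modulo <x^dx, y^dy>,
# every monomial with i >= dx or j >= dy lies in the ideal and is dropped.
def normal_form(poly, dx=3, dy=4):
    return {(i, j): c for (i, j), c in poly.items() if i < dx and j < dy}

# x^5 + 2*x^2*y + 7*y^4 reduces to 2*x^2*y modulo <x^3, y^4>
p = {(5, 0): 1, (2, 1): 2, (0, 4): 7}
print(normal_form(p))   # {(2, 1): 2}
```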

In case of ##\mathfrak{b},## things are different. Here we have ##u=x+\bigl\langle x^3\cdot y^4 \bigr\rangle## and ##v=y+\bigl\langle x^3\cdot y^4 \bigr\rangle.## But this time we cannot get rid of the second variable since no power of ##x## or ##y## alone will ever be an element of ##\mathfrak{b},## i.e. ##u^n\neq 0## and ##v^m\neq 0## for all ##n,m\geq 0.## This makes a separation of the variables impossible. We can still consider ##k[ u ][y]/\bigl\langle x^3\cdot y^4 \bigr\rangle,## but this time ##u=x+\bigl\langle x^3\cdot y^4 \bigr\rangle,## and the coefficients are not simply polynomials in ##x.##
 
Last edited:
  • #14
@fresh_42 How do I make something indented and inline like when I use ##\$\$## in LaTeX? Also, on Math StackExchange you can set off another paragraph as a separate block by starting with "> text". How do I do that on here? Another thing: is there a chat function on Physics Forums?

I edited the portion of my post, instead of: ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##, it should be ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##

I would like to sort out the case for ##k[x,y]/\langle x^3, y^4\rangle## first if that is okay. You said about the intermediate step: ##k[x,y]/\bigl\langle x^3y^4 \bigr\rangle \cong k[ u ](y)/\bigl\langle y^4 \bigr\rangle##, and at the end, you spoke about the ring ##k[x]## where ##x^3\neq 0## versus the ring ##k[ u ]## where ##u^3=0##. The latter case refers to ##k[ u ](y)/\bigl\langle y^4 \bigr\rangle## and here we have the elements ##u=x+\langle x^3,y^4\rangle, u^2=x^2+\langle x^3,y^4\rangle, u^3=x^3+\langle x^3,y^4\rangle=0##, but I thought I am treating ##x## as a coefficient and letting ##y## be the variable? So should ##u^3=x^3+\langle x^3,y^4\rangle\neq 0##?

Also, do you still teach or do research in commutative algebra/algebraic geometry at a university? You seem to know this subject very well, if you don't mind me asking. Also, you do a better job explaining things than many of the people on Math StackExchange. I think they forget that ultimately, when one answers questions, one is teaching, and there is a lot of back and forth. I noticed you made a post about topology; do you do research in algebraic topology too?
 
Last edited:
  • #15
Just a copy to improve readability.

@fresh_42 How do I make something indented and inline like when I use ##\$\$## in LaTeX? Also, on Math StackExchange you can set off another paragraph as a separate block by starting with "> text". How do I do that on here? Another thing: is there a chat function on Physics Forums?

I edited the portion of my post, instead of: ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##, it should be ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##

I would like to sort out the case for ##k[x,y]/\langle x^3, y^4\rangle## first if that is okay. You said about the intermediate step: ##k[x,y]/\bigl\langle x^3y^4 \bigr\rangle \cong k[ u ](y)/\bigl\langle y^4 \bigr\rangle##, and at the end, you spoke about the ring ##k[x]## where ##x^3\neq 0## versus the ring ##k[ u ]## where ##u^3=0##. The latter case refers to ##k[ u ](y)/\bigl\langle y^4 \bigr\rangle## and here we have the elements ##u=x+\langle x^3,y^4\rangle, u^2=x^2+\langle x^3,y^4\rangle, u^3=x^3+\langle x^3,y^4\rangle=0##, but I thought I am treating ##x## as a coefficient and letting ##y## be the variable? So should ##u^3=x^3+\langle x^3,y^4\rangle\neq 0##?

Also, do you still teach or do research in commutative algebra/algebraic geometry at a university? You seem to know this subject very well, if you don't mind me asking. Also, you do a better job explaining things than many of the people on Math StackExchange. I think they forget that ultimately, when one answers questions, one is teaching, and there is a lot of back and forth. I noticed you made a post about topology; do you do research in algebraic topology too?
 
  • #16
elias001 said:
@fresh_42 How do I make something indent and inline like when I use ##\$\$## in latex. Also in in math stackexchange, where you can have another paragaraph as a separate block by starting with "> text". How do I do that on here?
It is explained here:
https://www.physicsforums.com/help/latexhelp/

PF uses a MathJax version to interpret LaTeX commands, which is not totally equivalent to LaTeX. Inline tags in the version here are ## \Longrightarrow ##, separated formulas $$ \Longrightarrow $$. I don't know what it is about the ">text" command. Simply insert an empty line.

The notations [u] or [i] are problematic because the engine interprets them as "begin underline / italic" and adds automatically ending tags. [ u ] and [ i ] with spaces are ok.

elias001 said:
Another thing is, are there chat functions on physics forum?
No. Only a private messaging system, which is closer to an email than it is to a chat.
elias001 said:
I edited the portion of my post, instead of: ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##, it should be ##\{x+xy+xy^2+xy^3+\langle x^3,y^4\rangle\}=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle##
I wouldn't use the parentheses. So it should be only ##x+xy+xy^2+xy^3+\langle x^3,y^4\rangle=(1+y+y^2+y^3)x+\langle x^3,y^4\rangle .## That doesn't mean that either factor becomes a coefficient. If you want to treat them as such, you should make clear which ring you're in. See my post #13.

elias001 said:
I would like to sort out the case for ##k[x,y]/\langle x^3, y^4\rangle## first if that is okay. You said about the intermediate step: ##k[x,y]/\bigl\langle x^3y^4 \bigr\rangle \cong k[ u ](y)/\bigl\langle y^4 \bigr\rangle##, at the end, you said about the ring ##k[x]## where ##x^3\neq 0## or the ring ##k[ u ] ## where ##u^3=0##. The later case refers to ##k[ u ](y)/\bigl\langle y^4 \bigr\rangle## and here we have the elements ##u=(x+\langle x^3,y^4\rangle, u^2=x^2+\langle x^3,y^4\rangle, u^3=x^3+\langle x^3,y^4\rangle=0##,...

We have the elements ##1, u=x+\langle x^3,y^4\rangle, u^2=x^2+\langle x^3,y^4\rangle## because ##u^3=x^3+\langle x^3,y^4\rangle=0.## And for a basis, we need ##u^0=1,## too.

elias001 said:
... but I thought I am trating the ##x## as coefficient and letting ##y## be the variable?

I would distinguish between ##x## and ##u## as long as you learn. Sloppiness requires insights. We have in this particular case
$$
k[x,y]/\bigl\langle x^3,y^4 \bigr\rangle\cong k[x]/\bigl\langle x^3 \bigr\rangle \otimes k[y] /\bigl\langle y^4 \bigr\rangle =k[ u ]\otimes k[y] /\bigl\langle y^4 \bigr\rangle \cong (k[ u ])[y]/\bigl\langle y^4 \bigr\rangle .
$$
The rightmost ring is a ring of polynomials in ##y## with coefficients that are polynomials in ##u## of degree at most two. And sure, you can write those coefficients again with an ##x,## where then ##x^3=0,## but this can obviously lead to confusion with the ##x## in ##k[x,y].##

elias001 said:
So should ##u^3=x^3+\langle x^3,y^4\rangle\neq 0##?

This is always wrong. Since ##x^3\in \bigl\langle x^3,y^4 \bigr\rangle## we have ##x^3+\bigl\langle x^3,y^4 \bigr\rangle=\bigl\langle x^3,y^4 \bigr\rangle## and ##u^3=0## since the zero is the coset ##0=0+\bigl\langle x^3,y^4 \bigr\rangle.##

elias001 said:
Also, do you still teach or do research in commutative algebra/algebraic geometry at an university.
No.
elias001 said:
You seem to know this subject very well. If you don't mind me asking.
Thank you. I like the algebraic sector of mathematics. I gave some tutorials for school kids over the years, so maybe that taught me how to explain things.
elias001 said:
Also, you does better job explaining things then many of the people on math stack exchange. I think they forget the fact ultimately when one is answering questions, one is doing teaching and there is a lot of back and forth.
I think the mention of Gröbner bases led their responses, not the question itself. Gröbner bases are not what you usually meet at more basic levels. Back and forth is almost impossible on MSE. I tried.
elias001 said:
I noticed you made a post about topology, do you also do research in algebraic topology too?
Not really. I have some strange (co-)homology sequences I'm interested in, and I have a textbook titled "Algebraic Topology," but I wouldn't call that research.
 
  • #17
@fresh_42 Is there a chat feature/function on physics forum?

When you used tensor product: ##
k[x,y]/\bigl\langle x^3,y^4 \bigr\rangle\cong k[x]/\bigl\langle x^3 \bigr\rangle \otimes k[y] /\bigl\langle y^4 \bigr\rangle =k[ u ]\otimes k[y] /\bigl\langle y^4 \bigr\rangle \cong (k([ u ])[y]/\bigl\langle y^4 \bigr\rangle .##

What is the theorem about tensor products that allows you to justify these two steps: ##k[x]/\bigl\langle x^3 \bigr\rangle \otimes k[y] /\bigl\langle y^4 \bigr\rangle =k[ u ]\otimes k[y] /\bigl\langle y^4 \bigr\rangle##?

Has anyone asked questions on here about homological algebra/cohomology of groups or category theory. Basically stuff at the grad level?

Also, did you specialize in algebraic topology at the PhD level? You said you wrote a book on algebraic topology. Can I know what the title is? There are a few things about quotient topology I am dying to know, but I will ask in a different post another time. When I do, I will let you know.

I will ask you about the other concrete case next.
 
  • #18
elias001 said:
@fresh_42 Is there a chat feature/function on physics forum?
No. Only a private messaging system, which is closer to an email than it is to a chat.
elias001 said:
When you used tensor product: ##
k[x,y]/\bigl\langle x^3,y^4 \bigr\rangle\cong k[x]/\bigl\langle x^3 \bigr\rangle \otimes k[y] /\bigl\langle y^4 \bigr\rangle =k[ u ]\otimes k[y] /\bigl\langle y^4 \bigr\rangle \cong (k([ u ])[y]/\bigl\langle y^4 \bigr\rangle .##

The direct sum ##k[x] \oplus k[y]## would be all sums of polynomials in ##x## with polynomials in ##y##, but we need mixed terms, too, so we need products ##k[x] \cdot k[y],## which alone carry no algebraic structure. The tensor product provides such a structure.

elias001 said:
What is the theorem about tensor product that allow you to justify these two steps of tensor product: ##k[x]/\bigl\langle x^3 \bigr\rangle \otimes k[y] /\bigl\langle y^4 \bigr\rangle =k[ u ]\otimes k[y] /\bigl\langle y^4 \bigr\rangle##?
This specific equation is only the substitution ##k[x]/\bigl\langle x^3 \bigr\rangle \cong k[ u ].## The equation
$$
k[ u ] \otimes k[y]/\bigl\langle y^4 \bigr\rangle \cong k[ u ] [y]/\bigl\langle y^4\bigr\rangle
$$
is the extension of the coefficients in ## k ## to coefficients from ##k[ u ].## The most popular example is the extension of a real vector space ##V## to a complex vector space ##V_{\mathbb{C}}##
$$
V \otimes \mathbb{C} \cong_\mathbb{R} V_{\mathbb{C}} .
$$
The technical details are a bit more complicated, and I would have to look them up if I were to write them down without mistakes using the universal property of the tensor product.

elias001 said:
Has anyone asked questions on here about homological algebra/cohomology of groups or category theory. Basically stuff at the grad level?
Well, it's not ruled out. However, the closest I can remember were occasions when homotopy classes were discussed in the context of topological manifolds. Cohomology belongs to topology and was, therefore, part of my insights article. But I admit that it has been quite some time since I looked into my books about cohomology, except for some basics in homological algebra, like universal properties.

elias001 said:
Also, did you specialize in algebraic topolgoy at your ph.d level. You said you wrote a book on algebraic topology. Can I know what the title is?
I meant "have" as synonymous with to possess.
elias001 said:
There are few things about quotient topology I am dying to know, but I will ask in a different post in another time. When I do, I will let you know.

I will ask you about the other concrete case next.
Ok. The quotients are not very mysterious; it's more about getting used to them. But I assume that the topological quotients are a bit more complicated than the algebraic ones. The entire concept is about sets that all of a sudden are elements, the difference between ##x## and ##x+\mathfrak{a}.## Whenever we have an equivalence relation, a quotient space can be defined. ##16## is equivalent to ##1## by division by ##5## and ##x^5## is equivalent to ##x^2## modulo ##\bigl\langle x^3 -1 \bigr\rangle## and equivalent to ##0## modulo ##\bigl\langle x^3 \bigr\rangle.##
 
  • #19
@fresh_42 about the

"$$V \otimes \mathbb{C} \cong_\mathbb{R} V_{\mathbb{C}}$$

The technical details are a bit more complicated, and I had to look them up if I were to write them without mistakes using the universal property of the tensor product."

Is there a reference about that in terms of the universal property of the tensor product?

The other concrete example is

##k[x,y]/\langle x^3y^4\rangle##

You said we have both ##x^3=0## and ##y^4=0##, so shouldn't we have the following basis/coset basis elements: ##\{1+\langle x^3y^4\rangle, xy+\langle x^3y^4\rangle, xy^2+\langle x^3y^4\rangle, xy^3+\langle x^3y^4\rangle, x^2y+\langle x^3y^4\rangle, x^2y^2+\langle x^3y^4\rangle, x^2y^3+\langle x^3y^4\rangle\}##

But what about the following list of elements: ##\{x+\langle x^3y^4\rangle, y+\langle x^3y^4\rangle, x^2+\langle x^3y^4\rangle, y^2+\langle x^3y^4\rangle, y^3+\langle x^3y^4\rangle\}##?

I don't understand the subtle differences between quotienting out ##\langle x^3y^4\rangle## and ##\langle x^3,y^4\rangle## when it comes to listing out coset basis elements. For the case of ##\langle x^3y^4\rangle##, why can't we have, say, ##x, x^2##?


As questions for quotient topology, the questions I have is more geometric and method of proofs.

Whenever I post a question about algebra or topology on here, can I message you to let you know?
 
Last edited:
  • #20
elias001 said:
@fresh_42 about the

"$$V \otimes \mathbb{C} \cong_\mathbb{R} V_{\mathbb{C}}$$

The technical details are a bit more complicated, and I had to look them up if I were to write them without mistakes using the universal property of the tensor product."

Is there a reference about that in terms of the universal property of the tensor product?

https://kconrad.math.uconn.edu/blurbs/linmultialg/complexification.pdf
(page 5f.)

elias001 said:
The other concrete example is

##k[x,y]/\langle x^3y^4\rangle##

You said we have both ##x^3=0## and ##y^4=0##, so how we should have the following basis/coset basis elements: ##\{1+\langle x^3y^4\rangle, xy+\langle x^3y^4\rangle, xy^2+\langle x^3y^4\rangle, xy^3+\langle x^3y^4\rangle, x^2y+\langle x^3y^4\rangle, x^2y^2+\langle x^3y^4\rangle, x^2y^3+\langle x^3y^4\rangle\}##

But what about the following list of elements: ##\{x+\langle x^3y^4\rangle, y+\langle x^3y^4\rangle, x^2+\langle x^3y^4\rangle, y^2+\langle x^3y^4\rangle, y^3+\langle x^3y^4\rangle\}##?
Yes, and many more. Whereas ##k[x,y]/\bigl\langle x^3\, , \,y^4 \bigr\rangle## is twelve-dimensional, ##k[x,y]/\bigl\langle x^3\cdot y^4 \bigr\rangle## is infinite-dimensional.

Any term ##x^ny^m## will be identified with zero if ##n\geq 3## AND ##m\geq 4.## Hence, if ##n<3,## we still have all possible powers of ##y##, and if ##m<4,## we still have all powers of ##x##.

elias001 said:
I don't understand the subtle differences between quotienting out ##\langle x^3y^4\rangle##, ##\langle x^3,y^4\rangle##, when it comes to listing out coset basis elements. For the case of ##\langle x^3y^4\rangle##, why can't we have ##x, x^2## for say ##x##?
We do have the cosets ##x+\bigl\langle x^3y^4 \bigr\rangle## and ##x^2+\bigl\langle x^3y^4 \bigr\rangle## in ##k[x,y]/\bigl\langle x^3\cdot y^4 \bigr\rangle,## and of course we have these terms in ##k[x,y].## Be sure you know the ring you are talking about.

A possible infinite basis is:
\begin{align*}
&u^0,u^1,u^2,u^3,u^4,u^5,\ldots\\
&v^1,v^2,v^3,v^4,v^5,\ldots\\
&uv,uv^2,uv^3, uv^4,uv^5,\ldots\\
&u^2v,u^2v^2,u^2v^3, u^2v^4,u^2v^5,\ldots\\
&u^3v,u^3v^2,u^3v^3\\
&u^4v,u^4v^2,u^4v^3\\
&\vdots\\
&u^nv,u^nv^2,u^nv^3\; (n\geq 3)\\
&\vdots
\end{align*}
I wrote it in terms of ##u=x+\bigl\langle x^3y^4 \bigr\rangle## and ##v=y+\bigl\langle x^3y^4 \bigr\rangle ## because I didn't want to type that ideal all the time, and I hope I haven't forgotten a monomial.
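To make the infinite-dimensionality tangible, one can count the monomials ##x^ny^m## of total degree at most ##D## that are NOT divisible by ##x^3y^4##. A throwaway Python sketch (the degree cutoff ##D## is an artificial bound I introduce just for counting):

```python
# x^n * y^m lies in <x^3 * y^4> iff n >= 3 AND m >= 4; every other monomial
# represents a nonzero coset.  Count those of total degree <= D:
def survivors(D):
    return [(n, m) for n in range(D + 1) for m in range(D + 1 - n)
            if not (n >= 3 and m >= 4)]

for D in (5, 10, 20):
    print(D, len(survivors(D)))   # the count keeps growing with D, never stabilizing
```

For ##D\geq 7## the count works out to ##7D-14,## which grows without bound, in contrast to the twelve standard monomials of ##\bigl\langle x^3,y^4\bigr\rangle.##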

The basis of ##k[x,y]/\bigl\langle x^3\, , \,y^4 \bigr\rangle## is given by
\begin{align*}
&u^0,u^1,u^2\\
&u^0v,u^1v,u^2v\\
&u^0v^2,u^1v^2,u^2v^2\\
&u^0v^3,u^1v^3,u^2v^3
\end{align*}
because all terms that include ##x^3## as a factor OR ##y^4## as a factor are identified with zero.

The ideal ##\mathfrak{a}## basically says what should be regarded as zero. In ##\mathbb{R}[x]/\bigl\langle x^2+1 \bigr\rangle## we reduce all powers of ##x## by ##x^2+1=0,## i.e. ##x^2=-1,## which gives us a real basis ##\{1,x\}## and the multiplication
$$
(a+bx)\cdot (c+dx)=ac + (bc+ad)x+bdx^2=ac + (bc+ad)x+bd (-1)=(ac-bd) + (bc+ad)x,
$$
the complex numbers.
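That reduction rule is easy to verify numerically. A small sketch (pairs ##(a,b)## stand for ##a+bx##; the comparison with Python's built-in complex type is my own addition):

```python
# Multiply a+bx and c+dx in R[x]/<x^2+1>: expand, then substitute x^2 = -1.
def mult(p, q):
    a, b = p
    c, d = q
    # (a+bx)(c+dx) = ac + (ad+bc)x + bd*x^2  and  x^2 = -1 in the quotient
    return (a * c - b * d, a * d + b * c)

print(mult((1, 2), (3, 4)))    # (-5, 10)
print((1 + 2j) * (3 + 4j))     # (-5+10j), matching complex multiplication
```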

The difference is, that ##\bigl\langle x^3\, , \,y^4 \bigr\rangle## means we set ##x^3=0## and ##y^4=0,## whereas ##\bigl\langle x^3\cdot y^4 \bigr\rangle## only sets ##x^3\cdot y^4=0.##

elias001 said:
As questions for quotient topology, the questions I have is more geometric and method of proofs.
Post them in the topology forum:
https://www.physicsforums.com/forums/topology-and-analysis.228/

elias001 said:
If whenever I post question about algebra related on here, can i message you to let you know?
This shouldn't be necessary. I have all math forums on 'alert', so I should get it whenever someone asks a math question. The same happens automatically with all threads I have posted in, so you do not have to type @fresh_42 all the time, as on MSE. A small red alert badge notifies me.
 
Last edited:
  • #21
@fresh_42 For the messaging/email function on PF, does it render LaTeX math mode? On MSE, the chat function just displays the plain LaTeX code.

Also, on PF, if I ask a question that might contain too-advanced material, would the mods, or whoever answers, be judgemental about me not having the necessary prerequisites? I will show you later what I mean on MSE. I am out and typing on my phone right now, so I can't really check MSE very well.

Ok, for the example we have been discussing, I have two other concrete examples after this, then we can discuss about the other direction.

You stated that ##k[x,y]/\langle x^3y^4\rangle## is infinite-dimensional. I don't understand the examples you have given.

My understanding is that for the ##x## variable, ##x^n=0## for ##n\geq 3##, similarly for the ##y## variable, ##y^m=0## for ##m\geq 4##.

So if we have ##u=x+\langle x^3y^4\rangle, v=y+ \langle x^3y^4\rangle##, then in your example above:

$$\begin{align*}
&u^0,u^1,u^2,u^3,u^4,u^5,\ldots\\
&v^1,v^2,v^3,v^4,v^5,\ldots\\
&uv,uv^2,uv^3, uv^4,uv^5,\ldots\\
&u^2v,u^2v^2,u^2v^3, u^2v^4,u^2v^5,\ldots\\
&u^3v,u^3v^2,u^3v^3\\
&u^4v,u^4v^2,u^4v^3\\
&\vdots\\
&u^nv,u^nv^2,u^nv^3\; (n\geq 3)\\
&\vdots
\end{align*}$$

We should have:

$$0=u^3=u^4=\ldots, 0=v^4=v^5=\ldots, 0=uv^4=uv^5=\ldots, 0=u^2v^4=u^2v^5=\ldots, 0=u^3v=u^3v^2=u^3v^3=\ldots, \ldots, 0=u^4v=u^4v^2=u^4v^3=\ldots, \ldots 0=u^nv^3=u^nv^4=\ldots (n\geq 3)$$.

So after the exponents ##n, m## go beyond ##3, 4## respectively in either a mixed or a single-variable term, the terms should go to ##0##. They should be considered the same, both as basis and as coset basis elements. How can there be infinitely many coset basis elements? I think I am not seeing how to explicitly count these basis elements, because I am not seeing any theorem that treats ##\langle x^3y^4\rangle## and ##\langle x^3,y^4\rangle## differently when counting basis elements in quotient vector spaces.
 
Last edited:
  • #22
elias001 said:
@freah_42 for the messaging/email function on PF, does it display Latex math mode? In MSE, the chat function just display the plain latex code.

I think the editor is basically the same as in the forums. But anything that includes formulas is better asked in the open; that's what this platform is about.

elias001 said:
Also on PF, if i ask a question or ones that might contain too advance materials, do the mods or whoever answers, would they be judgemental about me not having the necessary prequisite.

No. At least they shouldn't. Threads are usually graded by A (advanced), I (intermediate), or B (basic). You set this level (prefix) when you start a thread. It classifies the question less than it sets the expected level of the answers. But everyone who answers is (hopefully) a human being, so some are more polite than others, and some answer on an A-level regardless of the prefix. But PF is very dialogue-oriented, so this shouldn't be a problem, or it can easily be adjusted. And as long as you don't talk about personal theories like having solved the Riemann hypothesis, or fantasy objects like perpetual motion machines, threads will not be deleted.

elias001 said:
I will show you later on what I mean, on MSE. I am out and typing on my phone right now, so I can't really check MSE very well.

Ok, for the example we have been discussing, I have two other concrete examples after this, then we can discuss about the other direction.

You stated that the dimension of ##k[x,y]/\langle x^3y^4\rangle## is infinite dimensional. I don't understand the examples you have given.

My understanding is that for the ##x## variable, ##x^n=0## for ##n\geq 3##, similarly for the ##y## variable, ##y^m=0## for ##m\geq 4##.

This is only the case if the ideal is generated by ##x^3## and ##y^4,## i.e. ##\mathfrak{a}=\bigl\langle x^3\, , \,y^4 \bigr\rangle.## But this time we consider an ideal that is generated by only one monomial, ##x^3\cdot y^4.##

We have in general, that two polynomials ##f,g\in k[x,y]## are considered equivalent modulo the ideal ##\mathfrak{a}## if their difference is an element of ##\mathfrak{a},## i.e.
$$
f(x,y)\sim g(x,y) \Longleftrightarrow f(x,y)-g(x,y)\in \mathfrak{a}
$$
elias001 said:
So if we have ##u=x+\langle x^3y^4\rangle, v=y+ \langle x^3y^4\rangle##, then in your example above:

$$\begin{align*}
&u^0,u^1,u^2,u^3,u^4,u^5,\ldots\\
&v^1,v^2,v^3,v^4,v^5,\ldots\\
&uv,uv^2,uv^3, uv^4,uv^5,\ldots\\
&u^2v,u^2v^2,u^2v^3, u^2v^4,u^2v^5,\ldots\\
&u^3v,u^3v^2,u^3v^3\\
&u^4v,u^4v^2,u^4v^3\\
&\vdots\\
&u^nv,u^nv^2,u^nv^3\; (n\geq 3)\\
&\vdots
\end{align*}$$

We should have:

$$0=u^3=u^4=\ldots, 0=v^4=v^5=\ldots, 0=uv^4=uv^5=\ldots, 0=u^2v^4=u^2v^5=\ldots, 0=u^3v=u^3v^2=u^3v^3=\ldots, \ldots, 0=u^4v=u^4v^2=u^4v^3=\ldots, \ldots 0=u^nv^3=u^nv^4=\ldots (n\geq 3)$$.

Assume ##u^3=0.## Then ##x^3+\bigl\langle x^3y^4 \bigr\rangle =0+\bigl\langle x^3y^4 \bigr\rangle .## This means, that ##x^3-0=x^3\in \bigl\langle x^3y^4 \bigr\rangle.## But this isn't the case. Every element of ##\bigl\langle x^3y^4 \bigr\rangle## is divisible by ##x^3y^4## which is more or less the definition of the ideal. And ##x^3y^4## does not divide ##x^3.##

The notation means
$$
\mathfrak{a}=\bigl\langle M_1,\ldots,M_m \bigr\rangle =\left\{\left.\sum_{i=1}^m p_i(x_1,\ldots,x_n)\cdot M_i\;\right|\;p_i(x_1,\ldots,x_n)\in k[x_1,\ldots,x_n]\right\}
$$
In case ##m=1## we get all polynomials ##p_1(x_1,\ldots,x_n)\cdot M_1## as elements of ##\mathfrak{a}.##
In case ##m=1, n=2## and ##M_1=x^3y^4## we get all polynomials ##p(x,y)\cdot x^3y^4## as elements of ##\mathfrak{a}.##
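The monomial case of this definition admits a one-line test: a monomial lies in ##\bigl\langle M_1,\ldots,M_m \bigr\rangle## iff some generator ##M_i## divides it, i.e. iff its exponent vector dominates that of some ##M_i## componentwise. A plain-Python sketch (the exponent-tuple encoding is my own):

```python
# Monomials as exponent tuples: x^2*y^7 -> (2, 7).  Membership in a monomial
# ideal reduces to a componentwise comparison with some generator.
def in_monomial_ideal(mono, gens):
    return any(all(ge <= me for ge, me in zip(g, mono)) for g in gens)

a = [(3, 4)]           # <x^3 * y^4>
b = [(3, 0), (0, 4)]   # <x^3, y^4>

print(in_monomial_ideal((2, 7), a))   # False: x^3*y^4 does not divide x^2*y^7
print(in_monomial_ideal((2, 7), b))   # True:  y^4 divides x^2*y^7
print(in_monomial_ideal((3, 0), a))   # False: so u^3 != 0 modulo <x^3*y^4>
```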

Hence, for example, ##u^2v^7=x^2y^7+\bigl\langle x^3y^4 \bigr\rangle \neq 0## simply because ##x^3y^4## does not divide ##x^2y^7.## That's where we get the infinitely many equivalence classes from. And, of course, for example, ##x^2y^7## and ##x^2y^8## are ##k##-linearly independent since no multiplication with a scalar from ##k## turns ##x^2y^7## into ##x^2y^8.##

elias001 said:
So if after the exponents ##m,n## go beyond ##3, 4## respectively in either mixed or single variable term, then both variables should go to 0.

The multiplication ##x^3\cdot y^4## couples the variables. Only those monomials are zero in the quotient ring that contain both ##x^3## and ##y^4## as factors.

elias001 said:
They should be considered the same as both basis and coset basis elements. How can there be infinitely many coset basis elements? I think I am not seeing how to explicitly count these basis elements, because I am not seeing any theorem that shows how ##\langle x^3y^4\rangle## and ##\langle x^3,y^4\rangle## differ when counting basis elements in quotient vector spaces.

Look at my definition of the notation ##\bigl\langle M_1,\ldots,M_m \bigr\rangle.## We have
$$
\mathfrak{a}=\bigl\langle x^3y^4 \bigr\rangle = \left\{\left.p(x,y)\cdot x^3\cdot y^4\;\right|\;p(x,y)\in k[x,y]\right\}
$$
and
$$
\mathfrak{b}=\bigl\langle x^3\ ,\ y^4 \bigr\rangle = \left\{\left. p(x,y)\cdot x^3+q(x,y)\cdot y^4\;\right|\;p(x,y),q(x,y)\in k[x,y]\right\}
$$
The second ideal ##\mathfrak{b}## is much bigger than the first ##\mathfrak{a}## since we have two arbitrary polynomials ##p## and ##q.## This makes the quotient ##k[x,y]/\mathfrak{a}## much smaller than the quotient ##k[x,y]/\mathfrak{b}.## So much smaller, that the second quotient ring ##k[x,y]/\mathfrak{b}## is finite-dimensional over ##k## as stated in the theorem, and the first quotient ring ##k[x,y]/\mathfrak{a}## is still infinite-dimensional.

Long story short: ##x^3\not\in\bigl\langle x^3y^4 \bigr\rangle ## hence ##u^3\neq 0.##
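The counting argument above can be checked mechanically. The following is a minimal sketch in plain Python (the helper name `survivors` is mine, not from the thread): it lists the exponent pairs ##(a,b)## whose monomial ##x^ay^b## is divisible by no generator, i.e. whose class is nonzero in the quotient.

```python
# A monomial x^a y^b is divisible by a generator x^p y^q exactly when
# a >= p and b >= q. The classes of the non-divisible monomials are the
# candidate basis elements of the quotient vector space.

def survivors(generators, bound):
    """Exponent pairs (a, b) with a, b < bound whose class is nonzero mod the ideal."""
    return [(a, b)
            for a in range(bound)
            for b in range(bound)
            if not any(a >= p and b >= q for (p, q) in generators)]

# Ideal <x^3 y^4>: one generator with exponent vector (3, 4).
single = survivors([(3, 4)], bound=10)

# Ideal <x^3, y^4>: generators x^3 = (3, 0) and y^4 = (0, 4).
pair = survivors([(3, 0), (0, 4)], bound=10)

print(len(single))  # 58 at this bound, and it keeps growing with the bound
print(len(pair))    # 12 = 3 * 4, independent of the bound
```

With ##\langle x^3,y^4\rangle## the count stays at ##12=3\cdot 4## however large the bound; with ##\langle x^3y^4\rangle## it grows without bound, which is exactly the infinite-dimensionality discussed above.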
 
  • #23
@fresh_42 so if i have a quotient vector space say ##k[x,y]/\mathfrak{m}##, then ##u=x+\mathfrak{m}=0+\mathfrak{m}## if and only if ##x\in \mathfrak{m}##, is that correct? Similarly for ##v=y+ \mathfrak{m}##?

In your two examples, should not the first ideal ##\mathfrak{a}## be much larger than the second ideal ##\mathfrak{b}##? I mean the first is infinite dimensional while the second is finite dimensional.
 
Last edited:
  • #24
elias001 said:
@fresh_42 so if i have a quotient vector space say ##k[x,y]/\mathfrak{m}##, then ##u=x+\mathfrak{m}=0+\mathfrak{m}## if and only if ##x\in \mathfrak{m}##, is that correct? Similarly for ##v=y+ \mathfrak{m}##?
Yes.
elias001 said:
In your two examples, should not the first ideal ##\mathfrak{a}## be much larger than the second ideal ##\mathfrak{b}##?
In ##\mathfrak{a}=\bigl\langle x^3y^4 \bigr\rangle ## we have all polynomial multiples of ##x^3y^4.## So ##f(x,y)\in \mathfrak{a}## if ##f(x,y)## can be written as ##f(x,y)=p(x,y)x^3y^4.##

In ##\mathfrak{b}=\bigl\langle x^3\ ,\ y^4 \bigr\rangle ## we have all sums of polynomial multiples of ##x^3## and ##y^4.## So ##g(x,y)\in \mathfrak{b}## if ##g(x,y)## can be written as ##g(x,y)=p(x,y)x^3+q(x,y)y^4.##

##\mathfrak{b}## has more degrees of freedom (##p,q##) than ##\mathfrak{a}## has (only ##p##), hence ##\mathfrak{b}## is larger and the quotient ring ##k[x,y]/\mathfrak{b}## is smaller. Same as ##1/2>1/8.## Two is smaller than eight, and ##1/2## is larger than ##1/8.##


elias001 said:
I mean the first is infinite dimensional while the second is finite dimensional.
 
  • #25
Here is another way to think of it.

Write down any polynomial in two variables. Then whenever you see ##x^3y^4##, replace it by ##z.## Finally set ##z=0## and see what is left.

Next, take this polynomial again and whenever you see ##x^3## replace it by ##z##, and when you see ##y^4## then replace this by ##z,## too. Again, set ##z=0## and see what is left.
 
  • #26
@fresh_42 is there a theorem that discusses the dimension of a quotient vector space, when one quotients out an ideal of the ambient space, in terms of degrees of freedom? I know degrees of freedom come from classical mechanics. I sleep better at night knowing where and how these little tidbits can be justified.

I understand that with ##\langle x^3y^4 \rangle## one multiplies by one polynomial, whereas for ##\langle x^3,y^4 \rangle## one multiplies by two different polynomials, so the former is smaller than the latter. But hasn't someone made this little idea precise? Abstract algebra is famous for having a term for every little change in a mathematical concept, however minuscule.

For the final example, if I have ##k[x,y]/\langle x^3+y^4\rangle##, what would its basis elements be? Or can I treat it as being equivalent to ##k[x,y]/\langle x^3,y^4\rangle##?

How do I message you? I want to show you what happened on MSE and ask whether I should worry about a similar situation occurring in real life when I go back to school in six months' time.
 
Last edited:
  • #27
elias001 said:
@fresh_42 is there a theorem that discusses the dimension of a quotient vector space, when one quotients out an ideal of the ambient space, in terms of degrees of freedom? I know degrees of freedom come from classical mechanics. I sleep better at night knowing where and how these little tidbits can be justified.

Well, this is more complicated than it looks at first sight. Quotients can be built whenever we have an equivalence relation. This includes topological spaces, vector spaces, rings, algebras, groups, and even function spaces or spaces of sequences. And the answer to your question is a bit different in each case. Groups, for example, have no dimension. The main difference between these categories is the structure of the quotient, which, in our case, is an ideal and not only a vector subspace.

If we only have vector spaces ##V##, then we can factor any subspace ##U\subseteq V.## If they are finite-dimensional, then ##\dim V/U=\dim V -\dim U.##

Here we have an infinite-dimensional vector space ##k[x_1,\ldots,x_n]## and not only a subspace, but an ideal ##\mathfrak{a}.## This complicates things. One theorem you asked about is the one in the first post here. And the words linearly independent or dimension always require saying linearly independent over what, or dimension over what. The dimension of ##k[x,y]/\bigl\langle x,y \bigr\rangle## is one as a ##k##-vector space; the dimension of ##k[x,y]/\bigl\langle x \bigr\rangle = k[y]## is infinite as a ##k##-vector space. In the world of ideals and rings, one doesn't speak of vector spaces and dimensions, but of modules and numbers of generators instead.

elias001 said:
Also, how do I message you? I want to show you what happened on MSE and ask whether I should worry about a similar situation occurring in real life when I go back to school in six months' time.
With the small letter icon in the gallery.

We also have a STEM Academic Advising Forum where you can ask such questions. It might be even better to gather a couple of different views. E.g., such questions often depend on local parameters (country, school system, etc.).
 
  • #28
@fresh_42 For the case of ##k[x,y]/\langle x^3+y^4\rangle##, what would its basis elements be? Is it the same as ##k[x,y]/\langle x^3,y^4\rangle##? Meaning do I treat it as being equivalent to ##k[x,y]/\langle x^3,y^4\rangle##? I am basing my guess on the fact that for sum of ideals ##\langle x^3,y^4\rangle = \langle x^3\rangle + \langle y^4\rangle##

I will post questions about the other direction tomorrow.
 
Last edited:
  • #29
elias001 said:
@fresh_42 For the case of ##k[x,y]/\langle x^3+y^4\rangle##, what would its basis elements be?

The elements of the ideal are all polynomials divisible by ##x^3+y^4,## i.e. those of the form ##p(x,y)\cdot (x^3+y^4).## We set ##x^3+y^4=0,## which means ##x^3=-y^4.## Hence, we can replace every occurrence of ##x^3## by ##-y^4## and obtain all polynomials with ##x##-exponents ##0,1,2## and arbitrary ##y.## That is
$$
k[x,y]/\bigl\langle x^3+y^4 \bigr\rangle \cong k[y] +x\cdot k[y] + x^2 \cdot k[y]
$$
The dimension over ##k## is infinite since ##\dim_k k[y]=\infty .## But ##k[x,y]/\bigl\langle x^3+y^4 \bigr\rangle## is a "three-dimensional" ##k[y]##-module, or better: generated by three elements as a ##k[y]##-module. The language changes, too, if we allow scalar domains that are not fields anymore, and ##k[y]## isn't a field.

elias001 said:
Is it the same as ##k[x,y]/\langle x^3,y^4\rangle##?
No. ##\bigl\langle x^3,y^4 \bigr\rangle## limits all exponents of ##x## to under three, and all exponents of ##y## to under four. In ##\bigl\langle x^3+y^4 \bigr\rangle## we have again a coupling of ##x## and ##y,## and can therefore replace higher powers of ##y## by ##x## or the other way around, but not both.

elias001 said:
Meaning do I treat it as being equivalent to ##k[x,y]/\langle x^3,y^4\rangle##?
No.
elias001 said:
I am basing my guess on the fact that for sum of ideals ##\langle x^3,y^4\rangle = \langle x^3\rangle + \langle y^4\rangle##

But ##\bigl\langle x^3 \bigr\rangle +\bigl\langle y^4 \bigr\rangle \neq \bigl\langle x^3+y^4 \bigr\rangle.## If you say ideal, then it has to be clear or better noted in which ring. The ideals ##\bigl\langle x^3 \bigr\rangle \subseteq k[x]## and ##\bigl\langle x^3 \bigr\rangle \subseteq k[x,y]## are different things.
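The substitution ##x^3\mapsto -y^4## used above for ##k[x,y]/\langle x^3+y^4\rangle## can be sketched as a tiny rewriting procedure. This is plain Python with a hypothetical helper name `reduce_mod`; polynomials are represented as dictionaries mapping exponent pairs ##(a,b)## to coefficients.

```python
# Reduce x-exponents below 3 by repeatedly applying x^3 = -y^4,
# which produces the normal form in k[y] + x*k[y] + x^2*k[y].

def reduce_mod(poly):
    """poly: dict mapping (a, b) -> coefficient of the monomial x^a y^b."""
    out = {}
    work = dict(poly)
    while work:
        (a, b), c = work.popitem()
        if c == 0:
            continue
        if a >= 3:
            key = (a - 3, b + 4)                 # split off one factor x^3
            work[key] = work.get(key, 0) - c     # and replace it by -y^4
        else:
            out[(a, b)] = out.get((a, b), 0) + c
    return {k: c for k, c in out.items() if c != 0}

# x^5 y = x^2 * x^3 * y  ->  -x^2 y^5
print(reduce_mod({(5, 1): 1}))  # {(2, 5): -1}
```

Note that ##x^3+y^4## itself reduces to the empty dictionary, i.e. to ##0## in the quotient, as it should.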
 
  • #30
@fresh_42 oh right, right. I was too quick and hopeful about the sum of ideals. Anyway, I will post the questions about the other direction here tomorrow. I think it is late for you. I am in Eastern Standard Time (UTC−5:00); I think you are further east than me.
 
  • #31
elias001 said:
@fresh_42 oh right, right. I was too quick and hopeful about the sum of ideals. Anyway, I will post the questions about the other direction here tomorrow. I think it is late for you. I am in Eastern Standard Time (UTC−5:00); I think you are further east than me.
Six hours to Detroit.
 
  • #32
@fresh_42 For the other direction, it says:
Assume that for each ##i## there is some ##d_i>0## for which ##x_i^{d_i}\in\mathfrak{a}##. Then, for all ##m_i\geq d_i##, we have ##x_i^{m_i}=x_i^{m_i-d_i} x_i^{d_i}\in\mathfrak{a}##.

In particular, take a monomial ##x_1^{m_1}\cdots x_n^{m_n}##. If there exists at least one ##i## such that ##m_i\geq d_i##, then ##x_1^{m_1}\cdots x_n^{m_n}\in\mathfrak{a}##.

Now, an element of the quotient has the form ##P=\sum_{m_1,\ldots,m_n\geq 0}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.##

As you notice, for all ##Q\in \mathfrak{a}##, we have ##P+Q+\mathfrak{a}=P+\mathfrak{a}.##

The previous point then shows that ##P=\sum_{m_1\leq d_1-1,\ldots,m_n\leq d_n-1}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.##

Hence, the classes ##x_1^{m_1}\cdots x_n^{m_n}##, ##m_i\leq d_i-1## for all ##i##, span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##. Since this family is finite, it follows that this vector space is finite dimensional (of dimension ##\leq d_1\cdots d_n##).

Note that we didn't use the fact that ##\mathfrak{a}## is generated by monomials here.


I don't understand the sentence: "The previous point then shows that ##P=\sum_{m_1\leq d_1-1,\ldots,m_n\leq d_n-1}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.## Hence, the classes ##x_1^{m_1}\cdots x_n^{m_n}##, ##m_i\leq d_i-1## for all ##i##, span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##."

In particular, I am not seeing how the inequalities ##m_1\leq d_1-1,\ldots,m_n\leq d_n-1## are arrived at, since the author lets ##x_i^{m_i}=x_i^{m_i-d_i}x_i^{d_i}##, so ##m_i=m_i-d_i+d_i\geq d_i##.

Also, how do the classes ##x_1^{m_1}\cdots x_n^{m_n}##, ##m_i\leq d_i-1##, span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##?

Finally, when the author chooses a particular monomial and then says "there exists at least one ##i## such that ##m_i\geq d_i##", it makes the proof feel like he is going for a proof by contradiction, since at the beginning the proof assumes "for each ##i##", and now it says "there is at least one ##i##".
 
Last edited:
  • #33
elias001 said:
@fresh_42 For the other direction, it says:
Assume that for each ##i## there is some ##d_i>0## for which ##x_i^{d_i}\in\mathfrak{a}##. Then, for all ##m_i\geq d_i##, we have ##x_i^{m_i}=x_i^{m_i-d_i} x_i^{d_i}\in\mathfrak{a}##.

In particular, take a monomial ##x_1^{m_1}\cdots x_n^{m_n}##. If there exists at least one ##i## such that ##m_i\geq d_i##, then ##x_1^{m_1}\cdots x_n^{m_n}\in\mathfrak{a}##.

Now, an element of the quotient has the form ##P=\sum_{m_1,\ldots,m_n\geq 0}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.##

As you notice, for all ##Q\in \mathfrak{a}##, we have ##P+Q+\mathfrak{a}=P+\mathfrak{a}.##

The previous point then shows that ##P=\sum_{m_1\leq d_1-1,\ldots,m_n\leq d_n-1}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.##

Hence, the classes ##x_1^{m_1}\cdots x_n^{m_n}##, ##m_i\leq d_i-1## for all ##i##, span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##. Since this family is finite, it follows that this vector space is finite dimensional (of dimension ##\leq d_1\cdots d_n##).

Note that we didn't use the fact that ##\mathfrak{a}## is generated by monomials here.


I don't understand the sentence: "The previous point then shows that ##P=\sum_{m_1\leq d_1-1,\ldots,m_n\leq d_n-1}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}+\mathfrak{a}.## Hence, the classes ##x_1^{m_1}\cdots x_n^{m_n}##, ##m_i\leq d_i-1## for all ##i##, span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##."

In particular, I am not seeing how the inequalities ##m_1\leq d_1-1,\ldots,m_n\leq d_n-1## are arrived at, since the author lets ##x_i^{m_i}=x_i^{m_i-d_i}x_i^{d_i}##, so ##m_i=m_i-d_i+d_i\geq d_i##.

These were two different considerations.

Try the trick with ##z##. Imagine that you replaced every occurrence of ##x_i^{d_i}## by ##z##. Then you get a polynomial in ##z## with coefficients of the form ##x_1^{m_1}\cdots x_n^{m_n}, m_i\leq d_i-1.## Since ##z\in \mathfrak{a}## we may set ##z=0.## What is left are only terms without a ##z.## None of them has ##x_i^{m_i}## in it if ##m_i\geq d_i## since those had been replaced by ##z.## We thus have only terms with lower exponents than ##d_i.## These count from ##0,1,2,\ldots,d_i-1 ## and are therefore ##d_i## many for each ## i ##.

The classes ##x_1^{m_1}\cdots x_n^{m_n}## with ##m_i\leq d_i-1## span ##k[x_1,\ldots,x_n]/\mathfrak{a}##, and there are at most ##d_1\cdots d_n## of them; all other terms had a ##z## and vanished into the ideal ##\mathfrak{a}.##
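The finite spanning set just described can be enumerated directly. A minimal sketch in plain Python (the function name `spanning_classes` is mine): it lists all exponent tuples with ##0\leq m_i\leq d_i-1## and confirms their number is ##d_1\cdots d_n##.

```python
from itertools import product
from math import prod

def spanning_classes(degrees):
    """All exponent tuples (m_1, ..., m_n) with 0 <= m_i <= d_i - 1."""
    return list(product(*(range(d) for d in degrees)))

# d_1 = 3, d_2 = 4, as for the ideal <x^3, y^4> in k[x, y]:
classes = spanning_classes([3, 4])
print(len(classes))              # 12 = 3 * 4
assert len(classes) == prod([3, 4])
```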

elias001 said:
Also, how does the

"the classes ##x_1^{m_1}\cdots x_n^{m_n}, m_i\leq d_i-1##"

span the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##?

Because every polynomial ##P## can be written this way after the ##z=0## procedure. You cannot find a polynomial in ##k[x_1,\ldots,x_n]/\mathfrak{a}## that is not composed of these monomials.

elias001 said:
Finally, when the author chooses a particular monomial and then says "there exists at least one ##i## such that ##m_i\geq d_i##", it makes the proof feel like he is going for a proof by contradiction, since at the beginning the proof assumes "for each ##i##", and now it says "there is at least one ##i##".

He just describes that all ##x_i^{m_i}## with ##m_i\geq d_i## vanish into ##\mathfrak{a}.## That is the procedure going from ##k[x_1,\ldots,x_n]## to ##k[x_1,\ldots,x_n]/\mathfrak{a}##, the procedure I described with the auxiliary variable ##z.## It is what the ring homomorphism does, the transition from ##x## to ##u.## We have arbitrary powers of ##x,## but only powers ##u^m## with ##m<d##; the rest have gone into ##\mathfrak{a}.## It is therefore not a proof by contradiction, it is what happens at
$$
\pi\, : \,k[x_1,\ldots,x_n]\longrightarrow k[x_1,\ldots,x_n]/\mathfrak{a}
$$
i.e.
\begin{align*}
\pi(P)&=\pi\left(\sum_{m_1,\ldots,m_n\geq 0}a_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}\right)\\[6pt]
&=\sum_{m_1,\ldots,m_n\geq 0}a_{m_1,\ldots,m_n} \pi(x_1)^{m_1}\cdots \pi(x_n)^{m_n}\\[6pt]
&=\sum_{m_1,\ldots,m_n\geq 0}a_{m_1,\ldots,m_n} u_1^{m_1}\cdots u_n^{m_n}\\[6pt]
&=\sum_{m_1<d_1,\ldots,m_n<d_n}a_{m_1,\ldots,m_n} u_1^{m_1}\cdots u_n^{m_n}
\end{align*}
because all ##\pi(x_i^{m_i})=u_i^{m_i}=0## if ##m_i\geq d_i.##

The decomposition ##x_i^{m_i}=x_i^{m_i-d_i}\cdot x_i^{d_i} \in \mathfrak{a}## if ##m_i-d_i\geq 0## was only another way to say that those powers vanish into ##\mathfrak{a}.##

These were three different ways to describe it: the author's method with ##x_i,## the trick with ##z=0,## and the formal method with the ring homomorphism ##\pi(x_i)=u_i.## But they are all the same. It's not an additional assumption in the proof, it is what happens if we say modulo ##\mathfrak{a}.##
 
Last edited:
  • #34
@fresh_42 I don't know why the person who came up with the answer in both directions did not put more details into the write-up. I get the impression that he is writing for a journal article. I replied to the last post that you made in the first thread I started here a few days ago. I tried out the other direction and wrote it out. I was wondering if you could look it over and give some feedback. The post is here: https://www.physicsforums.com/threa...-x-is-irreducible-in-bbb-q-_-bbb-z-x.1080373/
 
Last edited:
  • #35
I keep thinking this should be easy but don't actually see why. However, assuming the field is algebraically closed, it is a direct corollary of the most basic algebraic geometric result, namely the nullstellensatz. [I see it now. It is deduced in the last paragraph from an abstract algebraic version of the nullsatz, i.e. the fact that the radical of an ideal A equals the intersection of all prime ideals containing A.].
Geometric argument: One looks at the common zeroes V(A) of the monomials (equivalently all elements) in the ideal A, and notices that since all monomials are homogeneous, V(A) must form a cone, and indeed is a union of coordinate spaces of various dimensions. If I(V(A)) is the ideal of all polynomials vanishing on V(A), then the quotient k[X]/I(V(A)) is the ring of polynomial functions on V(A). Since A is contained in I(V(A)), and the quotient k[X]/A is already finite dimensional, so is the quotient k[X]/I(V(A)), so this set V(A) of common zeroes cannot contain any coordinate axis, hence equals only the origin.
By the nullstellensatz, the ideal I(V(A)) of all polynomials vanishing on the set V(A) (i.e. the intersection of all maximal ideals containing A) is just the radical of A, which solves the problem. I.e. since V(A) equals only the origin, the ideal I(V(A)) is the maximal ideal M = (X1,...,Xn), which thus equals the radical of A. Thus all elements Xj are in the radical of A, i.e. some power of each Xj belongs to A.

I keep trying some more algebraic argument using the fact that a finite dimensional domain is a field, hence all prime ideals containing A must be maximal. I don't quite see how to deduce that M is the only prime ideal containing A, which is what we need to deduce that rad(A) = M.....???

OK, how's this? By definition of a prime ideal, and the nature of a monomial, a minimal prime ideal P containing A is generated by some subset of the variables. But again, since k[X]/P is finite dimensional when P contains A, then P must actually contain all the variables, i.e. P=M = (X1,...,Xn). Then use fact that the radical of A ( = the ideal of those elements w such that some positive power of w lies in A), is equal to the intersection of all primes containing A; (this uses Zorn's lemma).

That three-sentence argument is valid for any field.
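The step in the algebraic argument above, that a minimal prime containing a monomial ideal is generated by a subset of the variables, can be made concrete: such primes correspond to the minimal sets of variables that hit the support of every generating monomial. A brute-force sketch in plain Python (the name `minimal_primes` is mine, and generators are given as exponent tuples):

```python
from itertools import combinations

def minimal_primes(generators, n):
    """Minimal sets of variable indices meeting every generating monomial.

    A prime containing a monomial ideal must contain at least one variable
    from each generator; the minimal such primes are generated by exactly
    the minimal hitting sets of the generators' supports."""
    supports = [frozenset(j for j, e in enumerate(g) if e > 0) for g in generators]
    hits = []
    for size in range(n + 1):                      # smallest sets first
        for cand in combinations(range(n), size):
            s = set(cand)
            if all(s & sup for sup in supports):   # hits every generator
                if not any(h <= s for h in hits):  # keep only minimal sets
                    hits.append(s)
    return hits

# <x^3 y^4> in k[x, y]: minimal primes are (x) and (y), so V(A) is both axes.
print(minimal_primes([(3, 4)], 2))          # [{0}, {1}]
# <x^3, y^4>: the only prime containing it is (x, y), so V(A) is the origin.
print(minimal_primes([(3, 0), (0, 4)], 2))  # [{0, 1}]
```

This matches the geometric picture: one set of variables per coordinate subspace in ##V(A)##.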
 
Last edited:
  • #36
Now let me explain this solution in more detail, since it illustrates the power of abstract concepts. The problem asks us to show that if, for the ideal A:

1) A is generated by monomials, and

2) k[X]/A is finite dimensional over k,

then some positive power of every variable Xj belongs to A.

The first observation is that this property means we are asked to compute the “radical” of A. I.e. the radical of an ideal A consists of those elements x such that some positive power of x belongs to A. So we are asked to prove that under the two given hypotheses, that rad(A) equals the maximal ideal (X1,…,Xn). It is elementary to check that the radical of an ideal is also an ideal, and contains the original ideal.

GEOMETRIC ARGUMENT:
In algebraic geometry one thinks of polynomials as functions on n -space, and one studies an ideal A by looking at the set of points V(A) in n-space where all functions in the ideal A vanish simultaneously. I.e. p is in V(A) if and only if f(p) = 0 for all f in A. When the field k is algebraically closed, the first basic theorem, Hilbert’s nullstellensatz, or “zero set theorem”, says that two ideals have the same common vanishing locus if and only if they have the same radical. In particular the radical of A is the largest such ideal, hence equals the ideal I(V(A)) of all functions that vanish identically on V(A). I.e. when k is algebraically closed, then f is in I(V(A)) if and only if f(p) =0 for all p in V(A), if and only if f is in rad(A), i.e. iff some positive power of f belongs to A.

Since the maximal ideal (X1,…,Xn) is exactly the ideal of all functions vanishing on the origin, to show that rad(A) = (X1,…,Xn) it would suffice to show that the only common zero of A is the origin. Since A is generated by monomials, its common vanishing locus V(A) is an intersection of unions of hyperplanes, hence is a union of coordinate subspaces.
Since a function f is in I(V(A)) if and only if it equals zero on V(A), the quotient space k[X]/I(V(A)) is the space of all polynomial functions on V(A). Moreover since I(V(A)) contains A, the natural map k[X]/A —-> k[X]/I(V(A)) is surjective, so finite dimensionality of the first space forces finite dimensionality of the second.
But the space of polynomial functions on V(A) would contain all polynomials in Xj if V(A) contained the Xj axis, so the set V(A) is a union of coordinate subspaces that does not contain any axes, hence consists only of the origin. qed. at least in the case that k is algebraically closed.


ALGEBRAIC ARGUMENT:
However we want to give an argument that does not assume k algebraically closed. So we need another characterization of the radical of the ideal A. A standard result in commutative algebra is that rad(A) equals the intersection of all prime ideals containing A, where an ideal P is “prime” if and only if whenever a product fg belongs to P, then either f or g belongs to P. In particular maximal ideals are prime. I.e. P is prime if and only if k[X]/P is an integral domain, and M maximal implies k[X]/M is a field hence a domain. (Recall a ring is a domain iff fg=0 implies either f or g is 0.)

[Motivation: This “standard result” is an abstract version of the nullsatz. I.e. if we think of points as represented by the maximal ideals of functions vanishing on them, then a function f vanishes on the point corresponding to a maximal ideal M if and only if f belongs to M. (If k is algebraically closed then all maximal ideals do correspond to points.) So, if you sort this all out, the ideal of all functions vanishing on all the points of the set V(A) would be the intersection of maximal ideals containing A. If we expand our notion of “points” to include all maximal ideals, and also “fat points” which are dense in higher dimensional irreducible sets, and which correspond to prime ideals, then we would expect I(V(A)) to equal the intersection of all prime ideals containing A. So the abstract nullsatz says that rad(A) = intersection of all prime ideals containing A.]

Assuming the abstract nullsatz, it suffices to prove that under the two given hypotheses 1), 2), on A from the problem above, that (X1,…,Xn) is the only prime ideal containing A. Now by definition of prime ideal P, if P contains a monomial, i.e. a product of the variables, then P contains at least one of the variables. And if P contains even one of the variables in a monomial, then by definition of an ideal, P contains the monomial. Hence a prime ideal P contains an ideal A generated by monomials, if and only if P contains at least one variable from each generating monomial. Hence the minimal prime ideals P containing A (whose intersection is the same as that of all primes containing A) are ideals generated by some of the variables Xj, e.g. (X1,X4,X7). But now we use the fact that by property 2), we must also have k[X]/P finite dimensional whenever P contains A. Hence the only prime ideal P containing A must be the maximal ideal (X1,…,Xn). Thus rad(A) = (X1,…,Xn) as desired.

[For experts, to prove the abstract nullsatz, first prove that the ideal of nilpotent elements of a ring R consists of the intersection of all prime ideals of R, then note that the radical of A consists of the nilpotent elements of R/A.
It is elementary that all nilpotents of R belong to all primes of R. So if f is not nilpotent, it suffices to produce a prime P not containing f. Define the ring of fractions R_f, whose denominators are non negative powers of f, and consider the natural map R-->R_f, sending g to g/1. Since f is a unit in R_f, no proper ideals of R_f contain f. By Zorn's lemma R_f contains (proper) maximal, hence prime, ideals, and then we pull back one of them to R, getting a prime that does not contain f.]
 
Last edited:
  • #37
Wait a minute, isn't this problem much easier than I have been making it look? I.e. suppose A is an ideal in k[X1,...,Xn] generated by monomials, i.e. products of some of the variables. Suppose no generating monomial contains X1. Then all generators are contained in the ideal (X2,...,Xn), hence k[X]/A ---> k[X]/(X2,...,Xn) ≈ k[X1] is surjective, so k[X]/A cannot be finite dimensional. Hence some monomial generator does contain X1. Now suppose every monomial generator containing X1 also contains some other variable. Then again every monomial generator is contained in the ideal (X2,...,Xn), and again A is contained in this ideal, contradicting finite dimensionality of k[X]/A. Hence some monomial generator contains X1 and only X1, i.e. some power of X1 is a generator, and A contains a power of X1. The same argument shows that for all Xj, some power of Xj lies in A. I apologize for not reading this long thread, and presume this argument appears there somewhere (unless I have made a naive mistake).
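The criterion this argument proves, that the quotient is finite dimensional iff for every variable some generator is a pure power of it, can be phrased as a small check on exponent vectors. A sketch in plain Python (the function name is mine, not from the thread):

```python
def has_pure_power_of_each_variable(generators, n):
    """True iff for every variable index j some generator is x_j^d with d > 0."""
    def is_pure_power(g, j):
        # g is an exponent tuple; a pure power of x_j is nonzero only in slot j
        return g[j] > 0 and all(e == 0 for i, e in enumerate(g) if i != j)
    return all(any(is_pure_power(g, j) for g in generators) for j in range(n))

# <x^3, y^4>: quotient k[x,y]/<x^3, y^4> is finite dimensional.
print(has_pure_power_of_each_variable([(3, 0), (0, 4)], 2))  # True
# <x^3 y^4>: no pure power of x or of y among the generators.
print(has_pure_power_of_each_variable([(3, 4)], 2))          # False
```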
 
Last edited:
  • #38
mathwonk said:
Wait a minute, isn't this problem entirely trivial?
Well, that depends on your level of education. It is nearly trivial for you and me, but not automatically trivial for students with little experience with quotient rings. And a proof can be very teaching, e.g., induction, the insertion homomorphism, and things like that.

By the way, that has puzzled me for years: do you say factor ring and factoring (out) or quotient ring and dividing (by)?
 
  • #39
A one sentence summary: If A is an ideal of k[X] generated by monomials none of which is a power of Xj, then A is contained in the ideal (X1,...,Xj-1,Xj+1,...,Xn) and hence k[X]/A--->k[X]/(X1,...,Xj-1,Xj+1,...,Xn) ≈ k[Xj] is a surjection and k[X]/A could not be finite dimensional.

Thank you, let me rephrase; instead of "isn't this problem trivial", I should have said "isn't this problem much easier than I have been making it look?" My feeling though is that I think using induction also makes it look harder than it is. I agree however that for someone unfamiliar with quotient rings, induction might seem easier.

I call the rings k[X]/A either quotient rings or factor rings, usually the former. Zariski-Samuel and one other book of mine call them residue class rings, two of my books call them factor rings, one calls them difference rings, and 8 of my other books call them quotient rings.
 
Last edited:
  • #40
I am uncertain whether it helps to understand the concept or if it adds confusion, but the whole situation can be translated to numbers. We could choose ##k=\mathbb{Q}## and, instead of indeterminates, transcendental numbers, e.g. ##\mathbb{Q}\left[\pi, e, 2^{\sqrt{2}}\right].## That makes it look a bit more familiar. The downside is that it is not obvious that the three numbers are ##\mathbb{Q}##-linearly independent.
 
  • #41
@mathwonk I just saw your replies. The book this question came from only assumes the reader has basic linear algebra and a semester of abstract algebra; in the latter, the reader is not assumed to know anything about the basics of Galois theory. The high-brow machinery you are using gets introduced later in the text. I am about to email the author and ask some questions relating to the exercise. I will get back to you and @fresh_42 with what he has to say.

Also, I saw somewhere that you worked with Thomas Finney before. His calculus text was the one I used for teaching myself calculus.
 
