Proving inequality about dimension of quotient vector spaces

Thread starter: elias001
Tags: Abstract algebra
TL;DR Summary
solution verification for showing ##d_1+\cdots +d_n-n+1 \leq {\text{dim}}_k k[x_1,\ldots,x_n]/\mathfrak{a}##
The following are from Fröberg's "Introduction to Gröbner Bases" and Hungerford's undergraduate "Abstract Algebra" text; this is also a continuation of this [post](https://math.stackexchange.com/questions/4947061/questions-about-rhs-inequality-of-d-1-cdots-d-n-n1-leq-textdim-k-k?noredirect=1#comment10585303_4947061).

##\textbf{Background}##
##\textbf{Theorem 1:}## ##k[x_1,\ldots,x_{n-1},x_n]\backsimeq (k[x_1,\ldots,x_{n-1}])[x_n]##
##\textbf{Theorem 2:}## Let ##K## be an extension field of ##F## and ##u\in K## an algebraic element over ##F## with minimal polynomial ##p(x)## of degree ##n##. Then ##(1)## ##F(u)\cong F[x]/(p(x))##
##(2)## ##\{1_F,u,u^2,\ldots,u^{n-1}\}## is a basis of the vector space ##F(u)## over ##F##.
##(3)## ##[F(u):F]=n##.

##\textbf{Assumed Exercise:}## Let ##\mathfrak{a}## be an ideal generated by monomials in ##k[x_1,\ldots,x_n]##. Show that ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k## if and only if for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##.

##\textbf{Exercise:}## Let ##\mathfrak{a}## be a monomial ideal such that ##\{x_1^{d_1},\ldots,x_n^{d_n}\}## is part of a minimal system of generators for ##\mathfrak{a}##, where the ##d_i## are positive. Show that
$$(LHS)\quad d_1+\cdots+d_n-n+1 \leq {\text{dim}}_k k[x_1,\ldots,x_n]/\mathfrak{a}\leq d_1d_2\cdots d_n\quad (RHS)$$
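The bounds can be sanity-checked computationally for small monomial ideals. Below is a minimal Python sketch (the example ideal ##\langle x^3, y^2, xy\rangle## and all names are mine, not from the book): since ##x_i^{d_i}\in\mathfrak{a}##, every monomial surviving in the quotient has ##0\le e_i<d_i##, so the dimension is the number of exponent vectors in that box divisible by no generator.

```python
# Brute-force check of d_1+...+d_n-n+1 <= dim <= d_1*...*d_n for a small
# monomial ideal (example chosen for illustration, not from the book).
from itertools import product
from math import prod

def quotient_dimension(gens, d):
    """gens: exponent vectors of the monomial generators; d: (d_1,...,d_n)
    with x_i^{d_i} in the ideal.  dim_k k[x]/a = number of exponent vectors
    in the box [0,d_1) x ... x [0,d_n) divisible by no generator."""
    def divisible_by(e, g):
        return all(ei >= gi for ei, gi in zip(e, g))
    box = product(*(range(di) for di in d))
    return sum(1 for e in box if not any(divisible_by(e, g) for g in gens))

# a = <x^3, y^2, x*y> in k[x,y]: a minimal generating set with d = (3, 2)
d = (3, 2)
gens = [(3, 0), (0, 2), (1, 1)]
dim = quotient_dimension(gens, d)
lower = sum(d) - len(d) + 1   # 3 + 2 - 2 + 1 = 4
upper = prod(d)               # 3 * 2 = 6
print(lower, dim, upper)      # prints: 4 4 6
```

Here the surviving classes are those of ##1, x, x^2, y##, so the lower bound is attained while the upper bound is not.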
##\textbf{Questions}##

I am trying to do the above **Exercise** using the [solution](https://math.stackexchange.com/questions/4946665/showing-that-kx-1-ldots-x-n-mathfraka-is-a-finite-dimensional-vector-spa?noredirect=1#comment10577454_4946665) to **Assumed Exercise** above and basic abstract algebra knowledge.

Below is my attempted solution to the left hand side inequality:

##\textbf{Attempted Solution for (LHS) inequality:}##

From **Assumed Exercise** above, we know that ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional (quotient) vector space, hence we need to count the minimum number of basis elements. We can set the ideal ##\mathfrak{a}\subset k[x_1,\ldots,x_n]## to be ##\mathfrak{a}=\langle x_1,\ldots,x_n \rangle##, and ##x_i^{d_i}\in \langle x_i \rangle## for ##i=1,2,3,\ldots, n##. Also, by **Theorem 1** above, ##k[x_1,\ldots,x_{n-1},x_n]\backsimeq (k[x_1,\ldots,x_{n-1}])[x_n]##, we can reduce to the case from ##n## indeterminates to the case of a single indeterminate for the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##.

So let ##F= (k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]), u=x_i, F(u)=(k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}])(x_i),## like the hypothesis of **Theorem 2** above. Using ##(2)## of **Theorem 2**, each ##x_i\in \mathfrak{a}## is a monomial of maximal (is it maximum or total?) degree ##d_i##. Using ##(2)## of **Theorem 2** one more time, there exist ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}## as a basis of the vector space ##F(u)=(k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}])(x_i)## over ##\mathfrak{a}##. There are ##d_i## number of elements in the basis ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}##.

Since there are ##i=n## indeterminates of the ##x_i## and for each ##x_i##, there are ##d_i## number of basis elements.

Hence there are ##d_1+d_2+\cdots +d_n## total number of basis elements. However each basis contains the identity element ##1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]}##, and they are all equal to each other, since the identity element of a vector space is unique. Hence, the identity element has been double counted ##n-1## times. So the dimension of ##k[x_1,\ldots,x_n]/\mathfrak{a}## is at least ##d_1+d_2+\cdots +d_n-(n-1)=d_1+d_2+\cdots +d_n-n+1##

##\textbf{Can someone comment}## on whether I proved the ##\text{LHS}## of the above **Exercise** correctly please? I am not sure whether the following two counting argument steps are correct. It has to do with my interpretation of the meaning of ##d_i## for arriving at ##d_1+\cdots +d_n##, and also my understanding of the property of the identity element for arriving at subtracting ##n-1##.

>##\textbf{(1)}## Using ##(2)## of **Theorem 2**, each ##x_i\in \mathfrak{a}## is a monomial of maximal (is it maximum or total?) degree ##d_i##. Using ##(2)## of **Theorem 2** one more time, there exist ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}## as a basis of the vector space ##F(u)=(k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}])(x_i)## over ##\mathfrak{a}##. There are ##d_i## number of elements in the basis ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}##.

Since there are ##i=n## indeterminates of the ##x_i## and for each ##x_i##, there are ##d_i## number of basis elements.

>##\textbf{(2)}## However each basis contains the identity element ##1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]}##, and they are all equal to each other, since the identity element of a vector space is unique.

Thank you in advance

Moderator Note: reformatted according to our version of MathJax, see
https://www.physicsforums.com/help/latexhelp/
 
Can we discuss this step by step, because I got lost early in your proof?

elias001 said:
TL;DR Summary: solution verification for showing ##d_1+\cdots +d_n-n+1 \leq {\text{dim}}_k k[x_1,\ldots,x_n]/\mathfrak{a}##

The following are from Fröberg's "Introduction to Gröbner Bases" and Hungerford's undergraduate "Abstract Algebra" text; this is also a continuation of this [post](https://math.stackexchange.com/questions/4947061/questions-about-rhs-inequality-of-d-1-cdots-d-n-n1-leq-textdim-k-k?noredirect=1#comment10585303_4947061).

##\textbf{Background}##
##\textbf{Theorem 1:}## ##k[x_1,\ldots,x_{n-1},x_n]\backsimeq (k[x_1,\ldots,x_{n-1}])[x_n]##
##\textbf{Theorem 2:}## Let ##K## be an extension field of ##F## and ##u\in K## an algebraic element over ##F## with minimal polynomial ##p(x)## of degree ##n##. Then ##(1)## ##F(u)\cong F[x]/(p(x))##
##(2)## ##\{1_F,u,u^2,\ldots,u^{n-1}\}## is a basis of the vector space ##F(u)## over ##F##.
##(3)## ##[F(u):F]=n##.

##\textbf{Assumed Exercise:}## Let ##\mathfrak{a}## be an ideal generated by monomials in ##k[x_1,\ldots,x_n]##. Show that ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional vector space over ##k## if and only if for each ##i## there is a ##d_i>0##, such that ##{x_i^{d_i}}\in \mathfrak{a}##.
If I understood you correctly, then this is more an assumed lemma than an exercise since we consider it true. The proof of it should give us a hint for the exercise you actually want to solve.
elias001 said:
##\textbf{Exercise:}## Let ##\mathfrak{a}## be a monomial ideal such that ##\{x_1^{d_1},\ldots,x_n^{d_n}\}## is part of a minimal system of generators for ##\mathfrak{a}##, where the ##d_i## are positive. Show that
$$(LHS)\quad d_1+\cdots+d_n-n+1 \leq {\text{dim}}_k k[x_1,\ldots,x_n]/\mathfrak{a}\leq d_1d_2\cdots d_n\quad (RHS)$$
##\textbf{Questions}##

I am trying to do the above **Exercise** using the [solution](https://math.stackexchange.com/questions/4946665/showing-that-kx-1-ldots-x-n-mathfraka-is-a-finite-dimensional-vector-spa?noredirect=1#comment10577454_4946665) to **Assumed Exercise** above and basic abstract algebra knowledge.

Below is my attempted solution to the left hand side inequality:

##\textbf{Attempted Solution for (LHS) inequality:}##

From **Assumed Exercise** above, we know that ##k[x_1,\ldots,x_n]/\mathfrak{a}## is a finite dimensional (quotient) vector space, hence we need to count the minimum number of basis elements.
In other words: we need to find ##d_1+\ldots+d_n-n+1## many linearly independent polynomials.
elias001 said:
We can set the ideal ##\mathfrak{a}\subset k[x_1,\ldots,x_n]## to be ##\mathfrak{a}=\langle x_1,\ldots,x_n \rangle##, and ##x_i^{d_i}\in \langle x_i \rangle## for ##i=1,2,3,\ldots, n##.

And here is where I got lost. If ##\mathfrak{a}=\bigl\langle x_1,\ldots,x_n \bigr\rangle## then
$$
k[x_1,\ldots,x_n]/\mathfrak{a}\cong k
$$
and ##\dim_k k[x_1,\ldots,x_n]/\mathfrak{a}=1.## In this case ##d_i=1## and there is nothing to show. This is a very special case and we cannot simply "set" ##\mathfrak{a}=\bigl\langle x_1,\ldots,x_n \bigr\rangle.##

elias001 said:
Also, by **Theorem 1** above, ##k[x_1,\ldots,x_{n-1},x_n]\backsimeq (k[x_1,\ldots,x_{n-1}])[x_n]##, we can reduce to the case from ##n## indeterminates to the case of a single indeterminate for the vector space ##k[x_1,\ldots,x_n]/\mathfrak{a}##.
I do not see how we can do the reduction. ##\mathfrak{a}## can somehow lie across all variables, so we cannot simply concentrate on just one.
elias001 said:
So let ##F= (k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]), u=x_i, F(u)=(k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}])(x_i),## like the hypothesis of **Theorem 2** above. Using ##(2)## of **Theorem 2**, each ##x_i\in \mathfrak{a}## is a monomial of maximal (is it maximum or total?) degree ##d_i##. Using ##(2)## of **Theorem 2** one more time, there exist ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}## as a basis of the vector space ##F(u)=(k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}])(x_i)## over ##\mathfrak{a}##. There are ##d_i## number of elements in the basis ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}##.

Did you forget the comma between ##x_{i-1}## and ##x_{i+1},## or do you really mean the product? I assume it's a forgotten comma.

elias001 said:
Since there are ##i=n## indeterminates of the ##x_i## and for each ##x_i##, there are ##d_i## number of basis elements.

Hence there are ##d_1+d_2+\cdots +d_n## total number of basis elements. However each basis contains the identity element ##1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]}##, and they are all equal to each other, since the identity element of a vector space is unique. Hence, the identity element has been double counted ##n-1## times. So the dimension of ##k[x_1,\ldots,x_n]/\mathfrak{a}## is at least ##d_1+d_2+\cdots +d_n-(n-1)=d_1+d_2+\cdots +d_n-n+1##

This simply says that ##\{1_F,x_1,\ldots,x_1^{d_1-1} ,x_2,\ldots,x_2^{d_2-1},\ldots,x_n,\ldots,x_n^{d_n-1}\}## are ##F##-linear independent and contained in ##\mathfrak{a}.## The linear independence over ##F## is obvious, but how are they in ##\mathfrak{a}##? We only know that ##x_i^{d_i}\in \mathfrak{a}.##

We must switch from ##x## to ##u## in the words of Theorem 2, or more precisely, we have to consider the projection ##\pi\, : \,k[x_1,\ldots,x_n] \twoheadrightarrow k[x_1,\ldots,x_n]/\mathfrak{a}## with ##\pi(x_i)=u_i.## This guarantees us that ##u_i^{d_i}=0.## However, it costs us linear independence of ##\{1_F,u_1,\ldots,u_1^{d_1-1},u_2,\ldots,u_2^{d_2-1},\ldots,u_n,\ldots,u_n^{d_n-1}\}##, which we now have to prove. By Theorem 2, it suffices to show that ##\deg m(u_i) \geq d_i## where ##m(x)\in F[x]## is the minimal polynomial of ##u_i##.

However, the ##d_i## in the problem statement of the exercise are almost arbitrary. Are you sure we don't have ##\deg m(u_i) = d_i## as a given constraint? If so, we can use Theorem 2. Patching it all together with Theorem 1 is then only a technical issue.

I stop here because, in particular, the choice of the ##d_i## is crucial.

elias001 said:
##\textbf{Can someone comment}## on whether I proved the ##\text{LHS}## of the above **Exercise** correctly please? I am not sure whether the following two counting argument steps are correct. It has to do with my interpretation of the meaning of ##d_i## for arriving at ##d_1+\cdots +d_n##, and also my understanding of the property of the identity element for arriving at subtracting ##n-1##.

>##\textbf{(1)}## Using ##(2)## of **Theorem 2**, each ##x_i\in \mathfrak{a}## is a monomial of maximal (is it maximum or total?) degree ##d_i##. Using ##(2)## of **Theorem 2** one more time, there exist ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}## as a basis of the vector space ##F(u)=(k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}])(x_i)## over ##\mathfrak{a}##. There are ##d_i## number of elements in the basis ##\{1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]},x_i,x_i^2,\ldots,x_i^{{d_i}-1}\}##.

Since there are ##i=n## indeterminates of the ##x_i## and for each ##x_i##, there are ##d_i## number of basis elements.

>##\textbf{(2)}## However each basis contains the identity element ##1_{F=k[x_1,x_2,\ldots x_{i-1} x_{i+1}\ldots x_{n}]}##, and they are all equal to each other, since the identity element of a vector space is unique.

Thank you in advance

 
##x_i^{d_i}\in \mathfrak{a}## by the assumed exercise means ##\pi\left(x_i^{d_i}\right)=u_i^{d_i}=0.## We thus need ##d_i## to be minimal with this property, to make sure that ##x_i^{k_i}\not\in \mathfrak{a}## for any ##k_i<d_i.##
 
@fresh_42 Yes, I did have a typo between ##x_{i-1}, x_{i+1}##. Thank you for looking it over. I was trying to do this exercise while going through a book on Gröbner bases. I got interested in the subject after having gone through a chapter on the topic in a different beginning abstract algebra textbook. My first introduction to abstract algebra is Thomas Hungerford's undergraduate text. I am going through it on my own, am almost done with it, and am also going through all the exercises in that text. I will be officially taking my first abstract algebra course, where the text by Dummit & Foote will be used. I am hoping the exercise can be done without using machinery or concepts from commutative algebra at the level of Atiyah and MacDonald's text. Maybe I should come back to this exercise another time, since I might be lacking in maturity at this point. What do you think? Another thing is I like the computational side of math, especially when it comes to algebra.
 
elias001 said:
@fresh_42 Yes, I did have a typo between ##x_{i-1}, x_{i+1}##. Thank you for looking it over. I was trying to do this exercise while going through a book on Gröbner bases. I got interested in the subject after having gone through a chapter on the topic in a different beginning abstract algebra textbook. My first introduction to abstract algebra is Thomas Hungerford's undergraduate text. I am going through it on my own, am almost done with it, and am also going through all the exercises in that text. I will be officially taking my first abstract algebra course, where the text by Dummit & Foote will be used. I am hoping the exercise can be done without using machinery or concepts from commutative algebra at the level of Atiyah and MacDonald's text. Maybe I should come back to this exercise another time, since I might be lacking in maturity at this point. What do you think? Another thing is I like the computational side of math, especially when it comes to algebra.
Maybe it helps to keep simple examples in mind, like ##\mathbb{R}[x]/\bigl\langle x^2+1 \bigr\rangle\cong \mathbb{C}## or ##\mathbb{Q}[x]/\bigl\langle x^2+x+1 \bigr\rangle,## which can also be combined as ##\mathbb{Q}[x,y]/\bigl\langle x^2+1,y^2+y+1 \bigr\rangle,## or, if you like to use monomials, ##\mathbb{Q}[x,y]/\bigl\langle x^2y,xy^4 \bigr\rangle.##

Atiyah-MacDonald is a good book. The exercises, however, are not always easy.
 
@fresh_42 I think Atiyah-MacDonald is much easier than, say, Hartshorne's text. The thing is I am having a hard time visualizing quotient rings. Also, I find that in algebra, concrete computation of specific examples can't really be learned just from proving standard theorems; the case in point is determining which ring a particular quotient ring is isomorphic to. With regard to examples and their computation, there is a huge disconnect between those exercises and the ones dealing with the standard theorems.

Also, with regard to the exercise in the post, I am trying to reduce counting each ##d_i## to the one-variable case. That is why I am trying to use the quoted theorems from Hungerford's text.
 
I'm not sure I can visualize quotient rings very well. I prefer to handle them technically. ##R/\mathfrak{a}## means to identify all elements of ##\mathfrak{a}## with zero. It is already difficult in case ##\mathfrak{a}## isn't a prime ideal, since then you can have ##a\cdot b\in \mathfrak{a}## without ##a## or ##b## being in ##\mathfrak{a}.## And if ##x^2y^3\in \mathbb{Q}[x,y]## is identified with zero, how would you visualize that? We get a basis ##\{1,x,x^2,\ldots,y,y^2,\ldots,xy,x^2y,x^3y,\ldots, xy^2,x^2y^2,x^3y^2,\ldots,xy^3\}## but what does that mean?

And there is no golden rule. Some people are good at visualizing geometrical or even topological objects, while others are better at techniques.
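For what it's worth, the surviving monomials in an example like ##\mathbb{Q}[x,y]/\bigl\langle x^2y^3 \bigr\rangle## can be listed mechanically. A small Python sketch (the cutoff and names are mine): it keeps exactly the exponent pairs ##(a,b)## with ##x^ay^b## not divisible by ##x^2y^3##, showing a finite window of the infinite basis above.

```python
# Monomials x^a*y^b that survive in Q[x,y]/<x^2*y^3>, up to an arbitrary
# exponent cutoff (the quotient is infinite-dimensional, so this is only
# a finite window of its monomial basis).
from itertools import product

CUTOFF = 4
survivors = [(a, b) for a, b in product(range(CUTOFF), repeat=2)
             if not (a >= 2 and b >= 3)]   # (a,b) dies iff x^2*y^3 | x^a*y^b
print(len(survivors))                      # prints: 14 (16 pairs minus (2,3),(3,3))
```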
 
@fresh_42 Can I ask if you are also on Math Stack Exchange? This place is so much more friendly than there. Also, anytime someone asks a question, the people answering give answers that go way beyond the level of the question.

As to the question, don't the ##d_i## count the variables in a monomial? So if you have ##x_1^{d_1}x_2^{d_2}x_3^{d_3}##, then say ##d_1## counts the number of basis elements using ##x_1##. So there would be ##\{1, x_1, x_1^2,\ldots,x_1^{d_1-1}\},## that is, ##d_1## elements. But we have three variables, and we count the identity element ##1_F## ##n=3## times, hence we need to subtract it back twice. That is how I understood the formula ##d_1+d_2+\cdots+d_n-n+1##. Is my understanding correct?
 
elias001 said:
@fresh_42 Can I ask if you are also on Math Stack Exchange? This place is so much more friendly than there.

I am, but this is exactly the reason I do not post much over there. Dialogues are basically not allowed or have to be in private, simple questions are deleted as substandard, and even correct answers are downvoted by people who do not like "their style". I never got a feeling of what is allowed and what is not. Getting off-topic and being deleted is a high risk. Teaching requires dialogues, not final solutions, and not every remark aside from the problem statement should be removed. If it derails the entire thread, then, yes, open a new one. But saying "it's late here, I may make mistakes" shouldn't be impossible to say.

elias001 said:
Also, anytime someone asks a questions, the people answering would give answers that go way beyond the level of the question.
Yes. More than once, I got the impression that it is more about posing and bragging than it is about helping someone.
elias001 said:
As to the question, don't the ##d_i## count the variables in a monomial? So if you have ##x_1^{d_1}x_2^{d_2}x_3^{d_3}##, then say ##d_1## counts the number of basis elements using ##x_1##. So there would be ##\{1, x_1, x_1^2,\ldots,x_1^{d_1-1}\},## that is, ##d_1## elements. But we have three variables, and we count the identity element ##1_F## ##n=3## times, hence we need to subtract it back twice. That is how I understood the formula ##d_1+d_2+\cdots+d_n-n+1##. Is my understanding correct?
Not in my example ##\mathbb{Q}[x,y]/\bigl\langle x^2y^3 \bigr\rangle .## It means that every polynomial ##p(x,y)\in \mathbb{Q}[x,y]## is zero iff ##x^2y^3\,|\,p(x,y).## So no power of ##x## will ever be in the ideal. ##x\cdot y## will become nilpotent in the quotient, since ##(xy)^3=x\cdot(x^2y^3)\in \bigl\langle x^2y^3 \bigr\rangle,## i.e. ##(xy)^3=0## in the quotient.

I looked up the definition of monomial on Wikipedia, and it says any multivariate polynomial with only one term, one summand. I first thought it meant polynomials with leading coefficient ##1,## if not polynomials in only one variable. So better look up what it means in your book. I'm not so sure, and it is essential to the problem statement. nLab uses the same definition as Wikipedia: "The summands in a polynomial are called monomials."
 
  • #10
@fresh_42 The book where I got the question from, the definition for monomial is as follows:

A monomial ideal is an ideal generated by a set of monomials, i.e., elements in ##k[x_1,x_2,...,x_n]## of the form ##x_1^{e_1}\ldots x_n^{e_n}##.

Here is an example of a standard theorem in the topic on monomial ideal:

Theorem: Let ##\mathfrak{a}## be a monomial ideal in ##k[x_1,\ldots,x_n]##. Let ##f=\sum c_im_i,## where ## c_i\in k\setminus \{0\}## and ##m_i## are different monomials. If ##f\in \mathfrak{a}##, then ##m_i\in\mathfrak{a}## for each ##i##.

Proof: We introduce the concept of fine grading or multigrading of the polynomial ring ##k[x_1,\ldots,x_n]##. If ##cm=cx_1^{i_1}\cdots x_n^{i_n},c\in k\setminus \{0\}##, we set ##\text{mdeg}(cm)=(i_1,\ldots,i_n)##. Let ##\mathfrak{a}=(n_1,n_2,\ldots,n_s)## be a monomial ideal, and suppose ##f=\sum c_im_i\in\mathfrak{a}##. Then ##f=g_1n_1+\cdots+g_sn_s## for some ##g_i=\sum c_{ij}m_{ij}\in k[x_1,\ldots,x_n]##. Let ##c_im_i## be a nonzero term in ##f##. Then ##c_im_i## equals the sum of all elements ##c_{ij}m_{ij}n_i## which are of the same multidegree as ##c_im_i##. Hence ##c_im_i## is a linear combination of the ##n_i##'s, so ##c_im_i\in \mathfrak{a}##.
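In concrete terms, the theorem reduces ideal membership for monomial ideals to componentwise divisibility of exponent vectors. A minimal sketch (the function names and the example ideal are mine):

```python
# Membership test for monomial ideals: a monomial is in the ideal iff some
# generator divides it; by the theorem above, a polynomial is in the ideal
# iff every one of its monomials is.
def monomial_in_ideal(m, gens):
    """m and each g in gens are exponent tuples; x^g | x^m iff g <= m componentwise."""
    return any(all(mi >= gi for mi, gi in zip(m, g)) for g in gens)

def poly_in_ideal(monomials, gens):
    """monomials: exponent tuples of the terms with nonzero coefficients."""
    return all(monomial_in_ideal(m, gens) for m in monomials)

gens = [(2, 1), (1, 4)]                       # a = <x^2*y, x*y^4>
print(monomial_in_ideal((3, 2), gens))        # x^3*y^2 is divisible by x^2*y: True
print(poly_in_ideal([(3, 2), (1, 0)], gens))  # the term x is in no generator's cone: False
```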

Given the above theorem, I thought I should let the ideal ##\mathfrak{a}## be ##\mathfrak{a}=(x_1,x_2,\ldots,x_n)##, and use the idea from Theorem 2 (2) (from the Hungerford text quoted in the first post) above, but for the case of multivariable ##x_i^{d_i}##. Now, after speaking with you, I still don't know what the ##d_i## is supposed to count in ##k[x_1,\ldots,x_n]## or in ##k[x_1,\ldots,x_n]/\mathfrak{a}##.
 
  • #11
I am a bit lost here. If you drop the commas after "i.e." then it is the usual definition of monomials as ##x_1^{e_1}\cdot\ldots\cdot x_n^{e_n},## with the commas, it means something different. But the proof uses the usual definition, I think. The argument is simply a rearrangement, so let's see how far we get with the usual definition.

(I have to do this in a separate program so that I can see typos better than here. Hold on...)
 
  • #12
@fresh_42 I edited my post. There should not be any commas with ##x_i^{e_i}## in the definition of monomials. Thank you for noticing.

In the next paragraph after the definition, the book says: For example, if ##(f_1,\ldots,f_r)## is an ideal and ##f_i## divides ##f_j## for some ##i\neq j##, then ##f_j## is not needed as a generator. For monomial ideals this also gives a sufficient condition to get a minimal generating set.
 
  • #13
Ok, I got it, with the usual definition on nLab and Wikipedia.

Let ##\mathfrak{a}## be a monomial ideal generated by the elements ##n_k=x_1^{e_{1,k}}\cdot\ldots\cdot x_n^{e_{n,k}}## for ##k=1,\ldots,s.## Then we have a polynomial ##f=c_1m_1+\ldots+c_tm_t\in \mathfrak{a},## so
\begin{align*}
f&=\sum_{j=1}^tc_jm_j =\sum_{j=1}^tc_jx_1^{d_{1,j}}\cdot\ldots\cdot x_n^{d_{n,j}}=\sum_{k=1}^sc_k'n_k=\sum_{k=1}^sc_k'x_1^{e_{1,k}}\cdot\ldots\cdot x_n^{e_{n,k}}
\end{align*}
Here we compare the individual terms. Any monomial ##c_jm_j=c_jx_1^{d_{1,j}}\cdot\ldots\cdot x_n^{d_{n,j}}\neq 0## has to be a sum of monomials ##c_k'x_1^{e_{1,k}}\cdot\ldots\cdot x_n^{e_{n,k}}## which all have the same multi-degree. This is what makes polynomials equal. Then
$$
c_jm_j=\sum_{k\in T_j\subseteq \{1,\ldots,s\}}c'_kx_1^{e_{1,k}}\cdot\ldots\cdot x_n^{e_{n,k}}
$$
where ##T_j=\{t\,|\,(e_{1,t},\ldots,e_{n,t})=\operatorname{mdeg}(m_j)\}.## But all these monomials are among the ##n_t## and therewith in ##\mathfrak{a},## so ##c_jm_j\in \mathfrak{a}## since ##c_j## is a unit and we can divide by it.

The only thing I do not get is why we cannot compare the two different linear combinations monomial-wise. If the ##m_j## are all different monomials and the ##n_k## are different monomials, too, why don't they already match if they have the same set of powers? There are no linear combinations. Why isn't ##(c_{ij})## already a diagonal matrix?
 
  • #14
elias001 said:
@fresh_42 The book where I got the question from, the definition for monomial is as follows:

A monomial ideal is an ideal generated by a set of monomials, i.e., elements in ##k[x_1,x_2,...,x_n]## of the form ##x_1^{e_1}\ldots x_n^{e_n}##.

Given the above theorem, I thought I should let the ideal ##\mathfrak{a}## be ##\mathfrak{a}=(x_1,x_2,\ldots,x_n)##, and use the idea from Theorem 2 (2) (from Hungerford text quoted in the first post) above, but for the case of multivariable ##x_i^{d_i}##. Now, after speaking with you, I still don't know what the ##d_i## is suppose to count in ##k[x_1,\ldots,x_n]## or in ##k[x_1,\ldots,x_n]/\mathfrak{a}##.
No, that would be too easy. We do not have the naked ##x_j## in the quotient ring; we must consider the ##u_j,## which is basically the coset ##x_j+\mathfrak{a}.## The elements of quotients are equivalence classes, which are sets. Two polynomials ##f,g## represent the same element of ##R/\mathfrak{a}## if and only if ##f=g+a## for some ##a\in \mathfrak{a}.## So the elements of the quotient are the sets ##f+\mathfrak{a}.## To avoid dealing with sets all the time, the letter ##u## as a new variable is introduced in Theorem 2. We have per definition ##u=x+\mathfrak{a}.##

We need that the representatives ##\{1,x_j,x_j^2,\ldots,x_j^{d_j-1}\}## are not in ##\mathfrak{a}##, since otherwise they would be zero in the quotient ring. So ##d_j## must be the smallest power for which ##x_j^{d_j}\in \mathfrak{a}.##

Example: ##\mathbb{C}=\mathbb{R}[ \mathrm{i} ]\cong \mathbb{R}[x]/\bigl\langle x^2+1 \bigr\rangle.## You can either write complex numbers as linear combinations ##a+\mathrm{i}b## where ##\mathrm{i}^2+1=u^2+1 =0,## or as equivalence classes ##(a+bx)+\bigl\langle x^2+1 \bigr\rangle.## The former is by far more convenient than the latter. That's why we use ##u##, or in the case of complex numbers ##\mathrm{i},## the image of the variable ##x## under the projection
$$
\mathbb{R}[x]\longrightarrow \mathbb{R}[x]/\bigl\langle x^2+1 \bigr\rangle .
$$
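To make the ##u##-versus-##x## point concrete: computing in ##\mathbb{R}[x]/\bigl\langle x^2+1 \bigr\rangle## means multiplying polynomials and then reducing with ##u^2=-1##. A tiny sketch (the pair representation is my choice):

```python
# A class a + b*u in R[x]/<x^2+1> is stored as the pair (a, b); multiplication
# expands (a + b*u)(c + d*u) and reduces with the relation u^2 = -1.
def mult(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)  # ac + bd*u^2 = ac - bd, plus (ad+bc)*u

print(mult((0, 1), (0, 1)))  # u * u = -1 + 0*u, i.e. (-1, 0)
```

This is exactly complex multiplication, which is why working with ##u## (alias ##\mathrm{i}##) beats juggling cosets.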
 
  • #15
@fresh_42 I want to say thank you for all your help. I don't want to take up all your time with this question. I think I am not used to handling quotient vector spaces for monomial ideals and multivariable monomials. I will come back to this question later when I learn more about Gröbner bases. If I don't get it by then, I will email the author for help. Again, thank you so very much for looking over my solution and commenting on it. :)
 
  • #16
@fresh_42 I emailed the author about the question and the assumed exercise. Hopefully I will get a helpful answer, and I will post it here. Maybe we can continue then. I thought more about it, and I think the examples you provided are different from what I assumed. I thought I was only counting monomials of the form ##x_1^{d_1}\cdot\ldots\cdot x_n^{d_n}##, and for each ##x_i## I have the ##1_K, x_1, \ldots, x_1^{d_1-1}## choices. I have not managed to account for a case like ##x_1^{d_1}x_2^{d_2}## together. I think it needs some sort of combinatorial counting argument. I know there is an ##n## choose ##r## with repetition formula for the case of finding the number of monomials in ##x_1^{d_1}\cdot\ldots\cdot x_n^{d_n}##, but I am not sure if that formula is sufficient.
 
  • #17
elias001 said:
@fresh_42 I emailed the author about the question and the assumed exercise. Hopefully I will get a helpful answer, and I will post it here. Maybe we can continue then. I thought more about it, and I think the examples you provided are different from what I assumed. I thought I was only counting monomials of the form ##x_1^{d_1}\cdot\ldots\cdot x_n^{d_n}##, and for each ##x_i## I have the ##1_K, x_1, \ldots, x_1^{d_1-1}## choices. I have not managed to account for a case like ##x_1^{d_1}x_2^{d_2}## together. I think it needs some sort of combinatorial counting argument. I know there is an ##n## choose ##r## with repetition formula for the case of finding the number of monomials in ##x_1^{d_1}\cdot\ldots\cdot x_n^{d_n}##, but I am not sure if that formula is sufficient.
I don't think it is so complicated. Let's remain with the situation
\begin{align*}
k[x_1,\ldots x_n] &\longrightarrow k[x_1,\ldots,x_n]/\mathfrak{a}\\
p(x)&\longmapsto p(x)+\mathfrak{a}
\end{align*}
and set ##u_1=x_1+\mathfrak{a},\ldots,u_n=x_n+\mathfrak{a}.##

Your idea was fine, we only need to show the linear independence of
$$
\{1,u_1,u_1^2,\ldots,u_1^{d_1-1},\ldots,u_n,u_n^2,\ldots,u_n^{d_n-1}\}
$$
instead of
$$
\{1,x_1,x_1^2,\ldots,x_1^{d_1-1},\ldots,x_n,x_n^2,\ldots,x_n^{d_n-1}\}.
$$

This means, we have to show
$$
\alpha_1\cdot 1+\sum_{i=1}^n\sum_{j=1}^{d_i-1} \alpha_{ij}u_i^{j}=0\Longrightarrow \alpha_1=\alpha_{ij}=0 \text{ for all }i,j
$$
Now the zeros in the quotient ring are all polynomials in ##\mathfrak{a}.## So this equation translates to
$$
\alpha_1\cdot 1+\sum_{i=1}^n\sum_{j=1}^{d_i-1} \alpha_{ij}u_i^{j}\in \mathfrak{a}\Longrightarrow \alpha_1=\alpha_{ij}=0 \text{ for all }i,j
$$
which is why we need ##x_i^j\not\in \mathfrak{a}## for ##j<d_i,## since such a polynomial in ##\mathfrak{a}## would automatically allow ##\alpha_{ij}## to be arbitrary, in particular different from zero. We therefore need that
$$
d_i=\min\{n\in \mathbb{N}\,|\,x_i^{n}\in \mathfrak{a}\}.
$$
Then, the elements ##\{1,u_i,u_i^2,\ldots,u_i^{d_i-1}\}## are automatically linearly independent.

Proof: Assume ##\alpha_0\cdot 1+\sum_{j=1}^{d_i-1} \alpha_{ij} u_i^j=0.## Then
$$
\alpha_0\cdot 1+\sum_{j=1}^{d_i-1} \alpha_{ij} u_i^j=\alpha_0\cdot 1+\sum_{j=1}^{d_i-1} \alpha_{ij} (x_i+\mathfrak{a})^j=\alpha_0\cdot 1+\sum_{j=1}^{d_i-1} \alpha_{ij} x_i^j+\mathfrak{a}\in \mathfrak{a}
$$
But no sum with elements from ##\{1,x_i,x_i^2,\ldots,x_i^{d_i-1}\}## is in ##\mathfrak{a}## because the first power of ##x_i## lying in ##\mathfrak{a}## is ##x_i^{d_i},## and sums of lower powers cannot generate ##x_i^{d_i}.## Thus all coefficients ##\alpha_0=\alpha_{i1}=\ldots=\alpha_{i,d_i-1}=0## are zero, and
##\{1,u_i,u_i^2,\ldots,u_i^{d_i-1}\}## are linearly independent.

The same argument holds for all ##u_i## if we only use ##1## once, i.e.
$$
\{1,u_1,u_1^2,\ldots,u_1^{d_1-1},\ldots,u_n,u_n^2,\ldots,u_n^{d_n-1}\}
$$
are linearly independent, and you have your lower bound. But the minimum condition is necessary.
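This argument can be cross-checked numerically (the example ideal is mine, not from the thread): with ##\mathfrak{a}=\bigl\langle x^2, y^3, z^2, xyz \bigr\rangle## each ##d_i## is minimal, and counting the monomials surviving in the quotient confirms ##\dim \geq d_1+d_2+d_3-n+1 = 2+3+2-3+1 = 5.##

```python
# Count the monomial basis of k[x,y,z]/<x^2, y^3, z^2, x*y*z> by brute force
# and compare with the lower bound d_1+d_2+d_3-n+1 from the thread.
from itertools import product

gens = [(2, 0, 0), (0, 3, 0), (0, 0, 2), (1, 1, 1)]
d = (2, 3, 2)                 # minimal pure powers x^2, y^3, z^2
standard = [e for e in product(*(range(di) for di in d))
            if not any(all(ei >= gi for ei, gi in zip(e, g)) for g in gens)]
lower = sum(d) - len(d) + 1
print(lower, len(standard))   # prints: 5 10
```

Here the extra generator ##xyz## kills two of the twelve box monomials, so the true dimension (10) sits strictly between the bounds 5 and 12.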
 
  • #18
@fresh_42 The reason we have ##n-1## is because we double counted ##1_K## ##n-1## times. Also, you are right, I am counting coset elements of the form ##x_i+\mathfrak{a}## instead of ##x_i##.
 
  • #19
elias001 said:
@fresh_42 The reason we have ##n-1## is because we double counted ##1_K## ##n-1## times. Also, you are right, I am counting coset elements ##x_i+\mathfrak{a}## instead of ##x_i##.
Here is an easy example of why we need the minimum condition: ##k[x,y]/\bigl\langle y \bigr\rangle \cong k[x].## We certainly have ##y^3\in \bigl\langle y \bigr\rangle## and ##\{1,y,y^2\}## are linearly independent, but ##\{1+\bigl\langle y \bigr\rangle,y+\bigl\langle y \bigr\rangle,y^2+\bigl\langle y \bigr\rangle \}## are not, simply because ## y+\bigl\langle y \bigr\rangle=y^2+\bigl\langle y\bigr\rangle = \bigl\langle y \bigr\rangle = 0_{k[x,y]/\langle y\rangle}.##
 
  • #20
@fresh_42 I thought in the case of your example ##k[x,y]/\langle y\rangle##, any coset element is in the form of a polynomial ##\sum_{ij}a_{ij}x^{d_i}y^{d_j}+\langle y\rangle=\sum_ia_ix^{d_i}+\langle y\rangle##. Meaning any terms that involve ##y^{d_j}## become ##0##, since ##y^{d_j}\in \langle y\rangle##. Or if instead of ##\langle y\rangle## we have ##\langle y^{d_j}\rangle##, then any terms in the set ##x^{d_i}\{1,y,y^2,y^3,\ldots, y^{d_j-1}\}## would not be zero. Am I understanding ##k[x,y]/\langle y\rangle## correctly? Basically we are modding out the terms involving ##y## and any higher power of it, due to modding out the ideal ##\langle y\rangle##. Another thing is that in standard linear algebra, I can't really apply the standard dimension formula for quotient vector spaces, since we don't always know what the dimension of ##\mathfrak{a}## is supposed to be.
 
Last edited:
  • #21
elias001 said:
@fresh_42 I thought in the case of your example ##k[x,y]/\langle y\rangle##, any coset element is of the form ##\sum_{ij}a_{ij}x^{d_i}y^{d_j}+\langle y\rangle=\sum_ia_ix^{d_i}+\langle y\rangle##, meaning any term containing a factor ##y^{d_j}## vanishes, since ##y^{d_j}\in \langle y\rangle##. Or, if instead of ##\langle y\rangle## we have ##\langle y^{d_j}\rangle##, then the terms in the set ##\{x^{d_i}(1,y,y^2,y^3,\ldots, y^{d_j-1})\}## would not be zero. Am I understanding ##k[x,y]/\langle y\rangle## correctly? Basically, we are modding out the terms involving ##y## and any higher power of it due to the presence of ##\langle y\rangle##.
I am not sure what you mean by ##\{x^{d_i}(1,y,y^2,y^3,\ldots, y^{d_j-1})\},## but it sounds right. I read it as
$$
\{x^{d_i}(1,y,y^2,y^3,\ldots, y^{d_j-1})\}=\{x^{d_i},x^{d_i}y,x^{d_i}y^2,x^{d_i}y^3,\ldots, x^{d_i}y^{d_j-1}\}.
$$
Let me make it a bit less trivial. Say we have ##k[x,y,z]## and the ideal ##\mathfrak{a}=\bigl\langle y,z \bigr\rangle.## The notation means that ##\mathfrak{a}## is generated as an ideal by ##y## and ##z##; it consists of all polynomials of the form ##k[x,y,z]\cdot y+k[x,y,z]\cdot z.## So all elements of the form ##x^ny## or ##x^mz## or ##x^ny+x^mz## are in the ideal, for all ##n,m\ge 0.##
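This membership description can be checked mechanically; a small sketch (assuming SymPy, with sample polynomials of my own choosing) reduces a few elements modulo the generators ##y,z##, where a zero remainder means the element lies in the ideal:

```python
from sympy import reduced, symbols

x, y, z = symbols('x y z')

# Remainder on division by the generators y, z of a = <y, z>:
# the remainder is 0 exactly for the elements of the ideal.
samples = [x**3 * y, x**2 * z, x**3 * y + x**2 * z, x**5]
remainders = [reduced(p, [y, z], x, y, z)[1] for p in samples]
print(remainders)  # [0, 0, 0, x**5]: only x**5 lies outside <y, z>
```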

Back to the example: it means that ##\bigl\langle y \bigr\rangle## contains all polynomials ##p(x,y)## of ##k[x,y]## that are divisible by ##y##. So ##xy,x^2y,x^3y,\ldots## are all elements of ##\bigl\langle y \bigr\rangle .## But elements in ##\bigl\langle y \bigr\rangle## are zero in the quotient ring, hence
$$
\{x^{n},x^{n}y,x^{n}y^2,x^{n}y^3,\ldots, x^{n}y^{m},\ldots\}\subseteq k[x,y]
$$
are linearly independent for any ##n,m##, but passing to the quotient ring, we get
$$
\{x^{n},x^{n}y=0,x^{n}y^2=0,x^{n}y^3=0,\ldots, x^{n}y^{m}=0,\ldots\}=\{x^{n}\}\subseteq k[x,y]/\bigl\langle y \bigr\rangle
$$
and only the powers of ##x##, i.e. ##x^n##, remain linearly independent. Every polynomial that is divisible by ##y## vanishes into the ideal and is thus zero in the quotient ring.
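The same vanishing can be seen computationally; a quick sketch (assuming SymPy, with a made-up polynomial) reduces modulo ##\langle y\rangle##, and only the ##y##-free part survives:

```python
from sympy import reduced, symbols

x, y = symbols('x y')

# Reducing modulo <y> sends every term divisible by y to 0:
p = 7*x**4 + 3*x**2*y + x*y**5 - 2*y
_, r = reduced(p, [y], x, y)
print(r)  # 7*x**4: the image of p in k[x,y]/<y>
```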
 
Last edited:
  • #22
@fresh_42 yes ##\{x^{d_i}(1,y,y^2,y^3,\ldots, y^{d_j-1})\}=\{x^{d_i},x^{d_i}y,x^{d_i}y^2,x^{d_i}y^3,\ldots, x^{d_i}y^{d_j-1}\}##

Basically, in your example ##k[x,y,z]/\langle y,z\rangle##, any term of a polynomial ##P(x,y,z)## in ##k[x,y,z]## which contains the variables ##y,z## would be considered as belonging to the ideal ##\langle y,z\rangle##, while any term that only contains the variable ##x## would be in the complement of the ideal ##\langle y,z\rangle##. Is that correct?

So coset elements of the form ##y+\langle y,z\rangle, z+\langle y,z\rangle,xyz+\langle y,z\rangle,xy+\langle y,z\rangle, xz+\langle y,z\rangle,yz+\langle y,z\rangle## would be in the ideal ##\langle y,z\rangle## and all be sent to zero in the quotient ring? Am I interpreting and understanding the quotient ring correctly?

There is something else I want to ask you about the assumed exercise, if that is okay. I thought of using something from linear algebra to solve one of the directions, but I am not sure about a subtle point. Should I make a separate post instead? I can point you to it after I have done so, if you think it should be a separate post.
 
Last edited:
  • #23
elias001 said:
@fresh_42 yes ##\{x^{d_i}(1,y,y^2,y^3,\ldots, y^{d_j-1})\}=\{x^{d_i},x^{d_i}y,x^{d_i}y^2,x^{d_i}y^3,\ldots, x^{d_i}y^{d_j-1}\}##

Basically, in your example ##k[x,y,z]/\langle y,z\rangle##, any term of a polynomial ##P(x,y,z)## in ##k[x,y,z]## which contains the variables ##y,z## would be considered as belonging to the ideal ##\langle y,z\rangle##, while any term that only contains the variable ##x## would be in the complement of the ideal ##\langle y,z\rangle##. Is that correct?
Yes. I wouldn't say complement, though. The word is reserved for sets, and here we have just ##\neq 0_{k[x,y,z]/<y,z>}.##
elias001 said:
So coset elements of the form ##y+\langle y,z\rangle, z+\langle y,z\rangle, xyz+\langle y,z\rangle,xy+\langle y,z\rangle, xz+\langle y,z\rangle,yz+\langle y,z\rangle## would be in the ideal ##\langle y,z\rangle## and all be sent to zero in the quotient ring? Am I interpreting and understanding the quotient ring correctly?
Yes. But we do not distinguish them anymore; they are all ##0=\bigl\langle y,z \bigr\rangle.##

That's why the author used the variable ##u.## We write ##k[x]/\mathfrak{a}=k[ u ]##, where ##u## is our new variable. It represents the coset ##x+\mathfrak{a}.## But it is not really an indeterminate anymore, because it can satisfy equations. ##1,x,x^2,\ldots## are all different and linearly independent. But we cannot say the same for ##u##. E.g., ##\mathbb{R}[x]/\bigl\langle x^2 +1 \bigr\rangle = \mathbb{R}[ u ]## and ##u^2+1=0,## so only ##1,u## are ##\mathbb{R}##-linearly independent, and ##\{1,u,u^2\}=\{1,u,-1\}## is not.
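The ##\mathbb{R}[x]/\bigl\langle x^2+1 \bigr\rangle## example can be replayed with polynomial remainders (a sketch assuming SymPy; `rem` computes the remainder on division, which is exactly reduction modulo the principal ideal):

```python
from sympy import rem, symbols

x = symbols('x')
m = x**2 + 1  # R[x]/<x^2 + 1> = R[u] with u^2 + 1 = 0

# Powers of u = x + <x^2 + 1> are the remainders of powers of x:
print(rem(x**2, m, x))  # -1: u^2 = -1, so {1, u, u^2} = {1, u, -1}
print(rem(x**3, m, x))  # -x: u^3 = -u
```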

The trick in the proof is to set up an assumed linear dependence in the quotient ring, then use the cosets to translate it back to an equation in the original ring modulo ##\mathfrak{a}##, which means an equation up to polynomials of ##\mathfrak{a}.## Then, using the linear independence of the ##x_i##, we see that the coefficients are all zero.

The standard example of a quotient ring is the ring of remainders of integers by a given divisor ##n.##

We can write every integer ##a## as ##a=q\cdot n +r## with a remainder ##0\leq r\leq n-1.## Then the cosets in ##\mathbb{Z}/n\mathbb{Z} ## are ##0+n\mathbb{Z},1+n\mathbb{Z},\ldots,(n-1)+n\mathbb{Z}.## Calculations are done on the representatives ##\{0,1,\ldots,n-1\}## because nobody wants to add ##+n\mathbb{Z}## all the time. In the case of integers, we write ##a\equiv b\pmod{n}## if ##a,b## have the same remainder by division by ##n.## It's the same thing with polynomials and ideals instead of the ideal ##n\cdot\mathbb{Z}\subseteq \mathbb{Z}.##

If ##n=2## then you get a light switch or a computer, if ##n=12## you get the hours on a clock.
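In code, the representative arithmetic is just the `%` operator (a minimal sketch, using the clock example ##n=12##):

```python
n = 12  # Z/12Z: the hours on a clock

a, b = 9, 7
print((a + b) % n)  # 4: nine o'clock plus seven hours is four o'clock

# The operations are well defined on the representatives {0, ..., n-1}:
assert (a * b) % n == ((a % n) * (b % n)) % n
```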

elias001 said:
There is something else I want to ask you about the assumed exercise if that is okay? I thought of using something in linear algebra to solving it for one of the direction, but I am not sure about a subtle point. Should I make a separate post instead. I can point you to it if you think i should make it a separate post and after I have done so.
Maybe a separate thread is better. We now have so many variable names and powers that the risk of confusion is increasing. Starting from scratch seems to be better.
 
Last edited:
  • #24
@fresh_42 I looked back over your replies. When you say we let the variable ##u_i## denote the coset ##u_i=x_i+\mathfrak{a}##: if we raise it to a power, say ##u_i^n##, then ##u_i^n=x_i^n+\mathfrak{a}##.

By the way, in my two linear algebra courses, we did not do much with quotient vector spaces. The text we used was Friedberg, Insel and Spence's Linear Algebra. The concepts of quotient vector spaces were all relegated to the exercise section.

I created a new post about the other exercise; it is here:

 
  • #25
elias001 said:
@fresh_42 I looked back over your replies. When you say we let the variable ##u_i## denote the coset ##u_i=x_i+\mathfrak{a}##: if we raise it to a power, say ##u_i^n##, then ##u_i^n=x_i^n+\mathfrak{a}##.

The complete calculation is
$$
u_i^n=(x_i+\mathfrak{a})^n=x_i^n+\underbrace{\binom{n}{1}x_i\mathfrak{a}^{n-1}+\binom{n}{2}x_i^2\mathfrak{a}^{n-2}+\binom{n}{3}x_i^3\mathfrak{a}^{n-3}+\ldots+\binom{n}{n-1}x_i^{n-1}\mathfrak{a}+\mathfrak{a}^n}_{\in \mathfrak{a}}=x_i^n+\mathfrak{a}
$$
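The same computation can be checked with remainders in a one-variable toy case (a sketch assuming SymPy; in ##k[x]/\langle x^3\rangle## with ##u=x+\langle x^3\rangle## we get ##u^n=x^n## for ##n<3## and ##u^n=0## for ##n\ge 3##):

```python
from sympy import rem, symbols

x = symbols('x')
d = 3  # k[x]/<x^d>, with u = x + <x^d>

# u^n = x^n + <x^d>, so the remainder of x^n modulo x^d represents u^n:
for n in range(1, 6):
    print(n, rem(x**n, x**d, x))  # x, x**2, then 0 for n >= 3
```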
 
  • #26
@fresh_42 Actually, I figured out that it has to be true, since all the terms in the expansion of ##u_i^n## except ##x_i^n## have to be in ##\mathfrak{a}##. But thank you for writing out the calculation, I really appreciate it.
 
Last edited: