# Math Challenge - May 2020

mathwonk
Homework Helper
2020 Award
Here is an attempt at some more insight on this question. Suppose we think of power series over a field, of form a + bx + cx^2 +.....+ex^n +......
The ones that vanish at the origin form the maximal ideal M with zero constant coefficient, so they start at the linear term, i.e. look like bx + cx^2 +........ And the ones that not only vanish, but have derivative zero also at the origin, start at the quadratic term, so look like cx^2 + dx^3 +........., and thus form the ideal M^2.

Similarly those that start at the term ex^n+......., form the ideal M^n, of functions that vanish along with the first n-1 derivatives, at the origin. Since a function can have as many vanishing derivatives as desired, and still not be identically zero, the infinite descending sequence of ideals M, M^2, M^3,..... never terminates, so the ring of power series at the origin is not artinian, and in particular the submodule M is not artinian.

But if we take the quotient, say M/M^n, we are looking only at series that are truncated by throwing away all terms from the x^n term on, hence there isn't much left, and this quotient is an (n-1)-dimensional vector space over the field of coefficients, hence it is artinian.
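This truncation can be made concrete in code. A minimal sketch (my own, not from the post): model the quotient k[[x]]/(x^N) by length-N coefficient lists, with products truncated at x^N, so that anything in M^N becomes zero.

```python
# Model k[[x]]/(x^N) by length-N coefficient lists; products are truncated
# at x^N, so the class of any element of M^N is zero. (Sketch, my notation.)

N = 5  # work modulo x^5

def mul(f, g, n=N):
    """Multiply two truncated power series, discarding x^n and beyond."""
    h = [0] * n
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < n:
                h[i + j] += a * b
    return h

f = [0, 1, 2, 0, 0]    # f = x + 2x^2, an element of M (zero constant term)

g = f
for _ in range(N - 1):
    g = mul(g, f)      # climb the chain M ⊃ M^2 ⊃ ... ⊃ M^N
print(g)               # f^N lies in M^N, i.e. is 0 in the quotient: [0, 0, 0, 0, 0]
```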

It's like looking at an infinite decimal, which goes on forever, and even if it only starts at the nth decimal place, still goes on forever. But if we only truncate it at the nth decimal place, like a hand held calculator does, then we get something finite. It's the same idea as this problem: start with something infinite, throw away some smaller infinite part at the tail end, and be left only with something finite.

Well forgive my rambling, but these abstract ideas are really inspired by very concrete objects from the world of calculus and infinite series, and that's where the intuition comes from for them.

In fact this is probably the example that might be most familiar to people with mainly calculus experience. I.e. let R = k[[x1,...,xn]] be the ring of power series in n variables at the origin, let M be the ideal of those power series with zero constant term, and consider the quotient M/M^2. These are power series starting at the linear term but having no terms after that, hence they are exactly the homogeneous linear polynomials. This is exactly the cotangent space of n-space, an n dimensional vector space, hence it is artinian as a module over R, where multiplication by a power series is defined just by multiplying by its constant term.
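A tiny sketch of that module structure (my own notation, not from the post): represent a class in M/M^2 by its vector of linear coefficients; a power series acts through its constant term alone, since everything else lands in M^2.

```python
# A class in M/M^2 of k[[x1,...,xn]] is just its linear coefficient vector;
# a power series acts by its constant term, the rest being absorbed into M^2.

def act(constant_term, linear_part):
    """Multiply a class in M/M^2 by a power series with given constant term."""
    return [constant_term * c for c in linear_part]

v = [1, -2, 5]        # the class of x1 - 2x2 + 5x3
print(act(3, v))      # the series 3 + x1 + x2^2 + ... acts as scaling by 3: [3, -6, 15]
print(act(0, v))      # a series in M kills the class, since M·M ⊆ M^2: [0, 0, 0]
```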

................
One way to look at a ring being artinian is from the point of view of dimension. A polynomial ring like k[x1,...,xn], over a field, has dimension n, corresponding to the fact that it represents functions on the space k^n which has dimension n. The dimension of the space is reflected in the existence of chains of subspaces like the origin, the x1 axis, the (x1,x2) plane, the (x1,x2,x3) space,..... that have length n+1, but no longer such chains exist.

Correspondingly the ring k[x1,...,xn] has chains of prime ideals corresponding to the ideals of functions that vanish identically on these subspaces, i.e. (x1,...,xn), (x2,...,xn), (x3,...,xn),....(0), which has length n+1 also, but no longer chains of prime ideals exist.

Now these chains of prime ideals are finite, but by taking powers of them you get infinite chains of non prime ideals, at least when there are some variables, i.e. when the polynomial ring has positive dimension, and these show that the ring is not artinian.

If the ring is artinian, you don't have these infinite chains of powers of prime ideals, so the ring must have had dimension zero. If you have a prime ideal P in the ring k[x1,...,xn] and we call the "height" of P the length of a longest chain of smaller prime ideals, then the height of P plus the dimension of the quotient ring k[x1,...,xn]/P equals n, the dimension of k[x1,...,xn]. So if we want to get a zero dimensional quotient, we just need to mod out by a maximal height prime ideal, i.e. by a maximal ideal. In this case that will give us a field, which is artinian.

More generally, a commutative artinian ring, without nilpotent elements, is just a product of fields, so these examples of (commutative) artinian rings being either fields, or of form R/m^r, for some maximal ideal m, are pretty much all there is.
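A concrete instance (my example, not from the post): ##\mathbb{Z}/6## is finite, hence artinian, commutative, and has no nilpotent elements, and the Chinese Remainder Theorem exhibits it as the product of fields ##\mathbb{F}_2 \times \mathbb{F}_3##.

```python
# Z/6 is a finite (hence artinian) commutative ring with no nilpotents;
# the Chinese Remainder Theorem splits it as the product of fields F_2 x F_3.

def split(a):
    """The CRT map Z/6 -> F_2 x F_3."""
    return (a % 2, a % 3)

# The map is a bijection...
images = {split(a) for a in range(6)}
print(len(images))  # 6 distinct pairs

# ...and respects addition and multiplication componentwise.
for a in range(6):
    for b in range(6):
        pa, pb = split(a), split(b)
        assert split((a + b) % 6) == ((pa[0] + pb[0]) % 2, (pa[1] + pb[1]) % 3)
        assert split((a * b) % 6) == ((pa[0] * pb[0]) % 2, (pa[1] * pb[1]) % 3)
print("ring isomorphism checks pass")
```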

To try to avoid confusion, none of the concrete examples I have given follows the general model I gave first, of constructing examples via taking a product AxN, forcing the quotient AxN/N ≈ A, to be of predetermined form. Of course using that method one can construct many more examples, but I think less interesting ones.

benorin
Homework Helper
Ok @fresh_42 , is this solution to #3) different enough? It's complex but slightly different:

Reference Text: Complex Analysis with Applications to Engineering and Science by Saff and Snider, pg. 228, exercise #14

The Poisson Integral Formula for the Half-Plane states: If ##f=\phi + i\psi## is analytic in a domain containing the x-axis and the upper half-plane and ##| f(z) |\leq K## in this domain, then the values of the harmonic function ##\phi## in the upper half-plane are given in terms of its values on the x-axis by

$$\phi (x,y) =\tfrac{y}{\pi}\int_{-\infty}^{\infty}\dfrac{\phi (\xi , 0)}{(\xi -x)^2 +y^2}\, d\xi \quad ( y > 0 ) .$$

Comparing this to the given integral

$$I(\alpha ) := \int_{-\infty}^{+\infty}\dfrac{\cos(\alpha x)}{1+x^2}\,dx \quad (\alpha \geq 0)$$

we deduce that ##x=0## and ##y=1## and ##\phi ( \xi ,0) = \cos (\alpha \xi )\Rightarrow f(z) =e^{i \alpha z}## so that we have ##| f(z) |= e^{- \alpha y}\leq 1, \, \, \forall \Im (z) \geq 0## hence ##\phi (x,y) := \Re \left[ f(x+ i y)\right] =\Re \left( e^{i\alpha x - \alpha y}\right) = e^{-\alpha y}\cos (\alpha x )## so from the formula we have,

$$\int_{-\infty}^{+\infty}\dfrac{\cos(\alpha x)}{1+x^2}\,dx = \pi \phi (0,1) = \pi e^{-\alpha }, \quad (\alpha \geq 0)$$
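The claimed value ##\pi e^{-\alpha}## can be checked numerically. The block below is my own sanity check, not part of the solution; the truncation bound ##X## and the step count are arbitrary choices.

```python
# Approximate I(alpha) = ∫_{-∞}^{∞} cos(alpha·x)/(1+x²) dx by truncating to
# [-X, X] and applying composite Simpson's rule; compare with pi·e^{-alpha}.
import math

def simpson(f, a, b, n=50000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def I(alpha, X=500.0):
    return simpson(lambda x: math.cos(alpha * x) / (1 + x * x), -X, X)

for alpha in (0.0, 0.5, 1.0, 2.0):
    print(f"alpha={alpha}: numeric={I(alpha):.4f}  pi*exp(-alpha)={math.pi * math.exp(-alpha):.4f}")
```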

Mentor
Yes, and no. I was thinking of a completely real solution.


13. Given two different, coprime, positive natural numbers ##a, b \in \mathbb{N}##. Then there are two natural numbers ##x, y \in \mathbb{N}## such that ##ax - by = 1##.
This is really a problem of proving that a non-negative integral solution of this linear Diophantine equation exists, and Higher Algebra by Hall and Knight has a full chapter devoted to it, named "Indeterminate equation of first degree", Chapter XXVI.

So, we have ## ax - by = 1##. Without loss of generality, ##a \gt b##. Convert ##\frac{a}{b}## into continued fraction, and let ##\frac{p}{q}## be the convergent just before ##\frac{a}{b}##, so we have by a theorem (which I can prove, actually it is given in the book)
$$aq - bp = \pm 1$$
Let, ## aq - bp = 1##, so we have
$$ax - by = aq -bp$$
$$a(x-q) = b(y-p)$$
Since ##b## divides the left-hand side ##a(x-q)## and ##b## is co-prime to ##a##, ##b## must divide ##x-q##. Writing ##\frac{x-q}{b} = t##, where ##t## is an integer, the equation gives ##y - p = at##, so ##\frac{x-q}{b} = t = \frac{y-p}{a}##.

$$x = bt +q$$
$$y =at +p$$
So, for any positive integer ##t## we get positive values of ##x## and ##y## which satisfy the original equation.

If ## aq - bp = -1##, then we have
$$ax - by = bp -aq$$
And again,
$$a(x+q) = b(y+p)$$
By similar reasoning we have:
$$x = bt -q$$
$$y = at -p$$

But for a non-negative solution we have to be careful:
$$bt - q \gt 0 \implies t \gt \frac{q}{b}$$
$$at - p \gt 0 \implies t \gt \frac{p}{a}$$
So, we must have ## t \gt \max\left(\frac{q}{b}, \frac{p}{a}\right)##.
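The recipe above can be sketched in code (all identifiers are mine; this is an illustration of the convergent method, not text from the book): expand ##a/b## as a continued fraction, take the convergent ##p/q## just before ##a/b##, and use ##aq - bp = \pm 1## to build a non-negative solution of ##ax - by = 1##.

```python
# Solve a*x - b*y = 1 in non-negative integers via continued-fraction
# convergents, following the method described above (my own sketch).
from math import gcd

def convergent_before(a, b):
    """Return (p, q) with p/q the penultimate convergent of a/b."""
    # Continued-fraction coefficients of a/b via the Euclidean algorithm.
    coeffs = []
    hi, lo = a, b
    while lo:
        coeffs.append(hi // lo)
        hi, lo = lo, hi % lo
    # Build convergents p_k/q_k with the standard recurrence.
    p_prev, p = 1, coeffs[0]
    q_prev, q = 0, 1
    for c in coeffs[1:]:
        p_prev, p = p, c * p + p_prev
        q_prev, q = q, c * q + q_prev
    return p_prev, q_prev  # the convergent just before a/b itself

def solve(a, b):
    """Smallest non-negative (x, y) with a*x - b*y = 1, for coprime a > b."""
    assert gcd(a, b) == 1
    p, q = convergent_before(a, b)
    if a * q - b * p == 1:
        return q, p            # the t = 0 solution x = q, y = p already works
    # a*q - b*p == -1: take t = 1 in x = bt - q, y = at - p
    return b - q, a - p

x, y = solve(7, 5)
print(x, y, 7 * x - 5 * y)     # prints "3 4 1": 7·3 - 5·4 = 1
```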

benorin
Homework Helper
Sorry it took so long.

3. (solved by @cbarker1, @benorin , alternative solution possible) Calculate (FR) $$\int_{-\infty}^{+\infty}\dfrac{\cos(\alpha x)}{1+x^2}\,dx \quad (\alpha \geq 0).$$

Let
$$I(\alpha ):=2\int_0^\infty \tfrac{\cos (\alpha x)}{1+x^2}\, dx$$
Make the substitution ##y=\alpha x## for ##\alpha \neq 0## to get what I will call
$$J(\alpha ):=2\int_0^\infty \tfrac{\alpha}{\alpha ^2+y^2}\cdot\cos (y)\, dy\quad (\alpha >0)$$
Note that, whilst not violating the sanctity of our wholesome strictly real-valued integral, it is simpler to trek through the complex. Specifically, write ##z=\alpha +i y## and, for every ##\epsilon >0##, consider the domain ##D_{\epsilon } :=\left\{ z\in\mathbb{C} \,\middle|\, \Im\left[ z\right] \geq \epsilon \wedge \Re\left[ z\right] \geq \epsilon\right\}##. On ##D_{\epsilon }## we have ##\phi (\alpha ,y):= \tfrac{\alpha}{\alpha ^2+y^2}=\Re\left[ \tfrac{1}{\alpha + i y}\right]##, hence ##\phi## is harmonic there and satisfies Laplace's equation, ##\tfrac{\partial ^2 \phi}{\partial \alpha ^2}=-\tfrac{\partial ^2 \phi}{\partial y ^2}##. Given this, two applications of differentiation under the integral sign give

$$\tfrac{d^2 J}{d\alpha ^2}=2\int_0^\infty\cos (y)\cdot\tfrac{\partial ^2}{\partial \alpha ^2}\left(\tfrac{\alpha}{\alpha ^2+y^2}\right) \, dy =-2\int_0^\infty\cos (y)\cdot\tfrac{\partial ^2}{\partial y ^2}\left(\tfrac{\alpha}{\alpha ^2+y^2}\right) \, dy$$

Proceed to integrate by parts twice, for the first round we'll put ##u_1=\cos (y) \Rightarrow du_1=-\sin (y)\, dy## and ##dv_1=\tfrac{\partial ^2}{\partial y ^2}\left(\tfrac{\alpha}{\alpha ^2+y^2}\right) \, dy\Rightarrow v_1=\tfrac{\partial }{\partial y }\left(\tfrac{\alpha}{\alpha ^2+y^2}\right) ## and the integral becomes

$$\tfrac{d^2 J}{d\alpha ^2}=-2\left[ \cos (y)\cdot\tfrac{\partial }{\partial y }\left(\tfrac{\alpha}{\alpha ^2+y^2}\right)\right| _{y=0}^{\infty}-2\int_0^\infty\sin (y)\cdot\tfrac{\partial }{\partial y }\left(\tfrac{\alpha}{\alpha ^2+y^2}\right) \, dy$$

now for the second round we put ##u_2=\sin (y)\Rightarrow du_2=\cos (y)\, dy## and ##dv_2=\tfrac{\partial }{\partial y }\left(\tfrac{\alpha}{\alpha ^2+y^2}\right) \, dy\Rightarrow v_2=\tfrac{\alpha}{\alpha ^2+y^2}## and we have

$$\tfrac{d^2 J}{d\alpha ^2}=\underbrace{2\left[ \cos (y)\tfrac{ 2\alpha y}{ (\alpha ^2+y^2 )^2}\right| _{y=0}^{\infty}}_{=0}-\underbrace{2\left[ \sin (y)\tfrac{\alpha }{\alpha ^2+y^2}\right| _{y=0}^{\infty}}_{=0} +\underbrace{2\int_0^\infty\cos (y)\tfrac{\alpha}{\alpha ^2+y^2} \, dy}_{=J(\alpha )}=J(\alpha )$$

So now we have a DE for the integral in question, namely ##J^{\prime\prime}-J=0## which has characteristic equation ##\lambda ^2 -1=0\Rightarrow \lambda =\pm 1## and thus the general solution to our DE is given by ##J(\alpha )=c_1 e^{\alpha }+c_2 e^{-\alpha}## and we can make this an initial value problem by first defining ##J(0):=I(0)## and then quickly computing

$$I(0)=2\int_0^\infty \tfrac{1}{1+x^2}\, dx=2\left[\tan ^{-1} (x)\right| _{x=0}^{\infty}=\pi$$
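As an aside, the derived ODE ##J''=J## can be checked numerically. The block below is my own sanity check (truncation bound and step sizes are arbitrary choices), comparing a central second difference of ##J## against ##J## itself.

```python
# Evaluate J(alpha) = 2∫_0^∞ alpha·cos(y)/(alpha²+y²) dy by truncated
# Simpson's rule, then verify J'' ≈ J with a central second difference.
import math

def simpson(f, a, b, n=40000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def J(alpha, Y=400.0):
    return 2 * simpson(lambda y: alpha * math.cos(y) / (alpha * alpha + y * y), 0.0, Y)

h, alpha = 0.1, 1.0
second_diff = (J(alpha + h) - 2 * J(alpha) + J(alpha - h)) / (h * h)
print(second_diff, J(alpha))  # the two values nearly agree
```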

Now I'm stuck needing another initial condition to solve for the second constant in the particular solution. I'm gonna post this now and hopefully finish it up later. A hint as to which other special value of either of the defined integrals, or of their derivatives, would help much @fresh_42 ^.~

Note: I had an "oh duh, I can just quote a post with a spoiler in it to learn the code for them" moment during the creation of this post; sorry, that was long overdue. I've enjoyed the spoilers in that I scrolled past them oblivious to their content, seeking the editor and thus my own solution at the bottom of the page. That was cool. I'll do this from now on.

Mentor
Sorry, this is a bit too confusing for me. You use so many auxiliary quantities that I lost track of the quantity you actually want to calculate.

The solution is quite short if you know the trick, which is to write ##\dfrac{x}{1+x^2}## under the integral as another integral of a damped sine function.

benorin
Homework Helper
I derived and solved a second order DE; the particular solution needs me to solve for two constants ##c_1## and ##c_2##. I can compute the value of the integral at ##\alpha =0## to determine one of the constants; what I need is a value ##\alpha\neq 0## at which I can compute the integral, to determine the other constant.

benorin
Homework Helper
This is the only way I can see to use differentiation under the integral sign on this particular integral; I wasted many hours trying the same technique on the original integral. (It would have been easier to follow if I hadn't short-cut some derivative calculations by use of harmonic functions, but I had just reviewed the concept and it seemed preferable to a messy quotient rule with trig/partial fractions. I see now that the short-cut added length to the journey. That's a bad short-cut!)

Mentor
But the result depends on ##\alpha##. How can a certain value yield a result for all ##\alpha## after you fixed it?

Mentor
Do you want to see my solution?

benorin
Homework Helper
For a second order DE (which was necessarily of that order to get cosine to be cosine again) there will be two solutions (one for each root of the characteristic equation) with two constants (of integration if you will) that need to be solved for by initial conditions, usually either a pair of function values or one of these and the value of the derivative at a given point. I have the value of the function at zero, but to fully determine the solution I need another such value.

I’m always down to learn a new method of calculating problematic integrals: teach me something!

Mentor
For ##\alpha=0## we have $$\int_{-\infty}^{+\infty} \frac{dx}{1+x^2}=[\arctan (x)]_{-\infty}^{+\infty} = \dfrac{\pi}{2}-\left(-\dfrac{\pi}{2}\right)=\pi$$
so we may assume ##\alpha> 0## from now on. First observe, using integration by parts twice,
\begin{align*}
\int_0^\infty e^{-t}&\sin(\beta t)\,dt = \left. -e^{-t}\sin(\beta t)\right|_0^\infty + \beta \int_0^\infty e^{-t}\cos(\beta t)\,dt\\
&= 0 + \beta\left(\left[ -e^{-t}\cos(\beta t) \right]_0^\infty -\beta \int_0^\infty (-e^{-t})(-\sin(\beta t)) \,dt \right)\\
&= \beta -\beta^2 \int_0^\infty e^{-t}\sin(\beta t)\,dt
\end{align*}
and thus
$$\int_0^\infty e^{-t}\sin(\beta t)\,dt = \dfrac{\beta}{1+\beta^2}$$
This means
\begin{align*}
\int_{-\infty}^{+\infty}\dfrac{\cos(\alpha x)}{1+x^2}\,dx &= \int_{-\infty}^{+\infty}\dfrac{\cos(\alpha x)}{x} \cdot \dfrac{x}{1+x^2} \,dx\\
&=2\int_0^\infty \left( \dfrac{\cos(\alpha x)}{x} \int_0^\infty e^{-t}\sin(x t)\,dt \right)\,dx \\
&= \int_0^\infty \int_0^\infty e^{-t} \cdot \dfrac{2\sin(xt)\cos(\alpha x)}{x}\,dt\,dx \\
&= \int_0^\infty \int_0^\infty e^{-t}\cdot \dfrac{\sin(x(t+\alpha))+\sin(x(t-\alpha))}{x}\,dx\,dt \\
&=\int_0^\infty e^{-t} \cdot \left( \dfrac{\pi}{2}\operatorname{sgn}(t+\alpha) + \dfrac{\pi}{2}\operatorname{sgn}(t-\alpha) \right)\,dt\\
&=\pi \int_\alpha^\infty e^{-t}\,dt\\
&= \pi \cdot e^{-\alpha}
\end{align*}
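Two numerical sanity checks for the identities this computation leans on (my addition, not part of the solution): the integral ##\int_0^\infty e^{-t}\sin(\beta t)\,dt = \frac{\beta}{1+\beta^2}##, and the Dirichlet integral ##\int_0^\infty \frac{\sin(cx)}{x}\,dx = \frac{\pi}{2}\operatorname{sgn}(c)## behind the ##\operatorname{sgn}(t\pm\alpha)## step.

```python
# Check the two lemmas numerically with a truncated composite Simpson rule.
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def laplace_sin(beta):
    # e^{-t} is negligible beyond t = 50, so truncating there is harmless
    return simpson(lambda t: math.exp(-t) * math.sin(beta * t), 0.0, 50.0, 10000)

def dirichlet(c):
    # sin(cx)/x extends continuously to x = 0 with value c
    return simpson(lambda x: math.sin(c * x) / x if x else c, 0.0, 1000.0, 100000)

print(laplace_sin(1.0), 1.0 / 2)       # ≈ β/(1+β²) at β = 1
print(dirichlet(1.0), math.pi / 2)     # ≈ π/2
print(dirichlet(-2.0), -math.pi / 2)   # ≈ -π/2
```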
