# Math Challenge - July 2021

Mentor
Do you want me to derive the equation of the director circle?
Yes, but I thought of a geometric proof, which would have been shorter (hence the picture).

Let ##y=mx+c## be a tangent to the ellipse ##\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1##, so on solving these two we get,
\begin{align*}
b^2x^2+a^2(mx+c)^2&=a^2b^2\\
(b^2+a^2m^2)x^2+(2a^2mc)x+a^2(c^2-b^2)&=0
\end{align*}
For ##y=mx+c## to be a tangent to the ellipse ##\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1##, discriminant of the above quadratic equation must be zero (to ensure that the line cuts the ellipse at only one point),
\begin{align*}
4a^4m^2c^2-4a^2(b^2+a^2m^2)(c^2-b^2)&=0\\
4a^2(a^2m^2c^2-b^2c^2+b^4-a^2m^2c^2+a^2b^2m^2)&=0\\
4a^2b^2(-c^2+b^2+a^2m^2)&=0\\
c^2=a^2m^2+b^2\\
c=\pm \sqrt{a^2m^2+b^2}
\end{align*}
So, the equation of tangent to the ellipse is $$y=mx \pm \sqrt{a^2m^2+b^2}$$
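As a quick numerical sanity check of this tangency condition (my own sketch in Python, not part of the derivation; the helper name is mine): with ##c=\sqrt{a^2m^2+b^2}##, the discriminant of the quadratic above should vanish.

```python
import math

def tangency_discriminant(a, b, m):
    """Discriminant of (b^2 + a^2 m^2) x^2 + 2 a^2 m c x + a^2 (c^2 - b^2) = 0,
    the quadratic obtained by substituting y = m x + c into x^2/a^2 + y^2/b^2 = 1,
    with c chosen as sqrt(a^2 m^2 + b^2)."""
    c = math.sqrt(a**2 * m**2 + b**2)
    A = b**2 + a**2 * m**2
    B = 2 * a**2 * m * c
    C = a**2 * (c**2 - b**2)
    return B**2 - 4 * A * C
```

For e.g. ##a=3,\,b=2,\,m=0.5## this is zero up to rounding, so the line meets the ellipse in exactly one point.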
Let ##P(h,k)## be a point on the director circle, i.e. on the locus of points from which the two tangents to the ellipse are perpendicular.

Now, as the tangent passes through ##P(h,k)##, we can write,
\begin{align*}
&y=mx \pm \sqrt{a^2m^2+b^2}\\
&k=mh \pm \sqrt{a^2m^2+b^2}\\
&k-mh= \pm \sqrt{a^2m^2+b^2}\\
&k^2+m^2h^2-2khm=a^2m^2+b^2\\
&(a^2-h^2)m^2+(2hk)m+(b^2-k^2)=0
\end{align*}
This is a quadratic equation in ##m##, so its roots represent the slopes of the tangents passing through ##P(h,k)##, and by the definition of the director circle they are perpendicular,
\begin{align*}
m_1m_2&=-1\\
\dfrac{b^2-k^2}{a^2-h^2}&=-1\\
h^2+k^2&=a^2+b^2\\
x^2+y^2&=a^2+b^2
\end{align*}
which is the locus of ##P(h,k)##, the director circle.
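One can sanity-check this numerically (my own sketch; the function name is mine): for any point ##(h,k)## on ##x^2+y^2=a^2+b^2## with ##h\neq\pm a##, the product of the roots of the quadratic in ##m## should be ##-1##.

```python
import math

def slope_product(a, b, h, k):
    """Product of the roots of (a^2 - h^2) m^2 + 2 h k m + (b^2 - k^2) = 0,
    i.e. the product of the slopes of the two tangents drawn from P(h, k)."""
    return (b**2 - k**2) / (a**2 - h**2)

# Sample points on the director circle x^2 + y^2 = a^2 + b^2 for a = 3, b = 2,
# chosen so that h != +/- a (where the quadratic degenerates).
r = math.sqrt(3**2 + 2**2)
samples = [(r * math.cos(t), r * math.sin(t)) for t in (0.3, 1.0, 2.2, 4.0)]
```

Each sample gives a slope product of ##-1##, i.e. perpendicular tangents, as the derivation predicts.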

And yes, we don't get the entire circle. Since the ellipse stays in the first quadrant, we only get an arc of the circle ##y=\sqrt{5-x^2}## from ##x=1## to ##x=2##: when the major axis of the ellipse is horizontal, the ##x## coordinate of the center is ##x=2## (the maximum, since rotating the ellipse decreases the ##x## coordinate of the center), and when the major axis is vertical, the ##x## coordinate of the center is ##x=1## (the other extreme, because the ellipse cannot cross the coordinate axes, so the distance from the center of the ellipse to the ##y##-axis equals the length of its semi-minor axis).

Mentor
so ##\mathbf 1## is the ones vector and ##\mathbf {11}^T## is the ones matrix -- I use this notation all the time on this site ...
I apologize that I am not familiar with your nomenclature. I guess I wouldn't have asked if it had been ##\mathbf{1}\cdot \mathbf{1}^\tau##. The ambiguous use of ##T## is of course totally ok.

... and ##\mathbf v \mathbf y^T## is completely standard matrix vector notation. My approach to the lemma fairly closely follows the Representation Theory chapter of Artin's Algebra -- though after re-examining the text, not quite as closely as I thought. With some notational changes I am using ##\mathbf B## to mimic the regular representation development in that chapter. In particular ##\mathbf B## is ordered whereas a set is not. So pick an arbitrary labelling and if ##X## has cardinality ##n##, then ##\mathbf B =\bigg[\begin{array}{c|c|c|c} x_1 & \cdots & x_{n-1} & x_{n}\end{array}\bigg]\implies g\mathbf B =\bigg[\begin{array}{c|c|c|c} gx_1 & \cdots & gx_{n-1} & gx_{n}\end{array}\bigg] = \mathbf B P_g## where ##P_g## is a permutation matrix.

You’re asking what ##P_g## is but I explicitly addressed this in my post when I wrote

##g\mathbf B = \mathbf B P_g##
And a note that this was the defining equation and not a conclusion was obviously not necessary. Sorry for not guessing it.
this defines a permutation (matrix) representation i.e. ##P_g = \phi(g)## for some surjective homomorphism ##\phi : G\longrightarrow G' \in GL_{\vert X\vert}(\mathbb C)##
I would ask what ##G'## means instead of guessing ##G':=\operatorname{im} \phi ,## but you are obviously annoyed by my lack of fantasy. And sure, it is not necessary to mention that ##G'## isn't ##[G,G]## in a proof in group theory. Or is it, and I missed the point again?

That said I definitely could say this differently by getting rid of ##\mathbf B## and just asserting ##X## to be an ordered set. But in any case the point is ##P_g## is the standard permutation matrix representation for ##g \in G## implied by the way ##g## acts on the elements of our set ##X##.

- - - -
There's a miscellaneous problem in either edition of that book's rep theory chapter that says "Let G be a finite subgroup of ##GL_n(\mathbb C)##. Prove that if ##\sum_g \text{trace}\, g = 0## then ##\sum_g g = \mathbf 0##." There are basically two proofs of this: one (after dividing by ##\vert G\vert##) uses idempotence, the other is to be clever and back into Schur's Lemma. What I did in the lemma is implied by that problem and I went the idempotence route.

##T^2=\Big(\frac{1}{\vert H\vert}\sum_{P\in H}P\Big)\Big(\frac{1}{\vert H\vert}\sum_{P'\in H}P'\Big)=\frac{1}{\vert H\vert^2}\sum_{P\in H}\Big(P\sum_{P'\in H}P'\Big)=\frac{1}{\vert H\vert^2}\sum_{P\in H}\vert H\vert \cdot T=\frac{1}{\vert H\vert^2}\cdot \vert H\vert^2 \cdot T=T##

missing detail:
##P\sum_{P'\in H}P'=\sum_{P'\in H}PP'=\sum_{P''\in H}P''=\sum_{P\in H}P=\vert H\vert \cdot T##
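Here is a small numeric illustration of the idempotence (my own Python sketch with exact rational arithmetic; the cyclic group ##C_3## stands in for an arbitrary ##H##):

```python
from fractions import Fraction

def perm_matrix(p):
    """Permutation matrix P with P[i][j] = 1 exactly when p[j] = i."""
    n = len(p)
    return [[Fraction(1 if p[j] == i else 0) for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# H = cyclic group C3 acting on {0, 1, 2}; T = average over H.
H = [perm_matrix(p) for p in ((0, 1, 2), (1, 2, 0), (2, 0, 1))]
T = [[sum(M[i][j] for M in H) / Fraction(len(H)) for j in range(3)] for i in range(3)]
```

Here every entry of ##T## equals ##1/3##, ##T^2=T## holds exactly, and (since this action is transitive) ##\operatorname{trace}T=1##.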

The "does ##T## even exist" question is strange. Of course it exists: you can always add square matrices of the same size (and we sum over all permutation matrices ##P## in ##H##), and since this is over a field of characteristic zero (##\mathbb C##) you can always divide by a positive integer (the cardinality of ##H##).
I thought of an example like ##T=\dfrac{1}{3}\cdot \begin{bmatrix}
1 & 1 & 0&\ldots\\1&1&0&\ldots\\0&0&1&\ldots\\ &&&\ddots \\ 0&0&\ldots&1
\end{bmatrix}## where I could make the matrix arbitrarily large. This example doesn't fulfill ##\operatorname{trace}T =1##. Hence this condition restricts the number of possible groups ##H## significantly, and the question whether some groups remain is not trivial. You always have the identity in ##H##, which can be made arbitrarily large, no matter how large ##|H|## is. Sorry for asking.

I do remember how it can be quite exhausting to review answers to these challenges. I had guessed the sort of things in my post would be automatic for you though it seems not to be the case. Probably a good idea to put a pin in this.

We may assume them to be positive since they all only occur squared in the statement. Your proof was basically ok, just a bit too sloppy (cp. my questions in post #12).
But I applied ##A.M \geq G.M## on $$(z-x+y)(z+x-y)(x+y-z)(x+y+z)$$
##(z-x+y),(z+x-y),(x+y-z)## and ##(x+y+z)## aren't necessarily positive right?

Also to answer your questions in post #12, I have no idea about the geometric meaning of ##A.M \geq G.M##
All I knew was that if we had ##n## positive real numbers, ##a_1,a_2, ... ,a_n## we can use ##A.M \geq G.M## and the equality holds if ##a_1=a_2= ... =a_n##

I think P can win. A winning first move is to choose ##a=0##. If Q then chooses a value for ##b##, then P considers the cubic ##x^3+bx## and selects any value ##c## such that the graph of ##x^3+bx## is not tangent to the horizontal line ##y=-c##.

I read that as a hint since I want a rigorous proof. Just a remark: What if Q chooses b=0?
I might be wrong, but I thought that when ##a=0##, if the value of ##b## is chosen to be positive, then the equation ##f(x)=x^3+ax^2+bx+c=0## should have only one real solution, because we can see that
##f'(x)=3x^2+2ax+b##
Now to have two different values of ##x## where ##f'(x)=0## we should have (using discriminant >0)
##a^2>3b##
So, for any positive value of ##b## and ##a=0##, we have ##f'(x)>0##, so ##f(x)## will be a strictly increasing function.
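This can be cross-checked with the discriminant of the depressed cubic (my own sketch, not from the thread): for ##x^3+bx+c## the discriminant is ##-4b^3-27c^2##, which is negative exactly when there is a single real root.

```python
def depressed_cubic_discriminant(b, c):
    """Discriminant of x^3 + b x + c (the case a = 0);
    negative <=> exactly one real root."""
    return -4 * b**3 - 27 * c**2
```

For every ##b>0## this is negative regardless of ##c##, matching the monotonicity argument above.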

If P chooses any value of ##a## in the first move, Q can accordingly choose ##b## and P will never win, as the function will always be increasing and have only one real root!

In fact, I thought that the winning strategy for P would be choosing the value of ##b## as a negative number in the first move. If Q chooses a value of ##a## in the next move, P can simply shift the x-axis up or down by choosing the value of ##c##, so as to intersect the curve at three points. And if Q instead chose the value of ##c## in the second move, making the curve lie above or below the axis so that it doesn't cut it at three distinct points, then P can choose the value of ##a## accordingly to shift the curve either up or down, because ##a## is the coefficient of ##x^2##, so it should have a greater say in the position of the curve in the plane, and it is also the last move, so Q cannot do anything after this.

Infrared
Gold Member
I read that as a hint since I want a rigorous proof. Just a remark: What if ##Q## chooses ##b=0\,\rm ?##

Can you clarify what is not rigorous? I don't see why ##Q## choosing ##b=0## is not covered by what I wrote. I wrote that ##P## chooses any value of ##c## such that the graph ##x^3+bx## is not tangent to ##y=-c.## Hence if ##b=0## any nonzero value of ##c## works, as the line ##y=0## is the only horizontal that's tangent to the graph of ##x^3.## This procedure works because ##p(x)## having a multiple root at ##x=a## is equivalent to the graph of ##p## being tangent to the ##x##-axis at ##x=a.##

No typo. ##Z## is required to be Hausdorff, not ##Y##.
OK, my apologies. I had an idea of how I thought you wanted the solution to look and shouldn't have changed the problem. Anyway, put a relation on ##X## by ##x\sim x'## if ##h(x)=h(x')## and consider the induced quotient map ##h':(X/\sim)\to Z.## This is a continuous bijection, and since the map ##h## is closed as you mentioned (because a closed subset of ##X## is compact, and its image in ##Z## is compact in a Hausdorff space; thus closed), the map ##h'## is too, making it a homeomorphism. Then if ##g'## is the induced map ##(X/\sim)\to Y##, the map ##f=(h')^{-1}\circ g'## is continuous and clearly satisfies ##f\circ h=g.##

Can you clarify what is not rigorous? I don't see why ##Q## choosing ##b=0## is not covered by what I wrote. I wrote that ##P## chooses any value of ##c## such that the graph ##x^3+bx## is not tangent to ##y=-c.## Hence if ##b=0## any nonzero value of ##c## works, as the line ##y=0## is the only horizontal that's tangent to the graph of ##x^3.## This procedure works because ##p(x)## having a multiple root at ##x=a## is equivalent to the graph of ##p## being tangent to the ##x##-axis at ##x=a.##
I don't understand what you're saying?

How can the equation ##y=x^3+c## have three different real roots for any value of ##c##?

fresh_42
Mentor
But I applied ##A.M \geq G.M## on $$(z-x+y)(z+x-y)(x+y-z)(x+y+z)$$
##(z-x+y),(z+x-y),(x+y-z)## and ##(x+y+z)## aren't necessarily positive right?

Also to answer your questions in post #12, I have no idea about the geometric meaning of ##A.M \geq G.M##
All I knew was that if we had ##n## positive numbers, ##a_1,a_2, ... ,a_n## we can use ##A.M \geq G.M## and the equality holds if ##a_1=a_2= ... =a_n##
Sure. That gives you an upper bound. But why is it the maximum, and not way greater than the maximum?
But I applied ##A.M \geq G.M## on $$(z-x+y)(z+x-y)(x+y-z)(x+y+z)$$
##(z-x+y),(z+x-y),(x+y-z)## and ##(x+y+z)## aren't necessarily positive right?

Also to answer your questions in post #12, I have no idea about the geometric meaning of ##A.M \geq G.M##
All I knew was that if we had ##n## positive real numbers, ##a_1,a_2, ... ,a_n## we can use ##A.M \geq G.M## and the equality holds if ##a_1=a_2= ... =a_n##
I added the condition ##x,y,z > 0## to save us a zoo of cases. That leaves us with fewer cases. Sorry for this inconvenience.

I meant the geometrical meaning of ##f(x,y,z)## not the meaning of mean values.

Sure. That gives you an upper bound. But why is it the maximum, and not way greater than the maximum?

I added the condition ##x,y,z > 0## to save us a zoo of cases. That leaves us with fewer cases. Sorry for this inconvenience.
I don't know why that is the maximum; if equality holds then it should be the maximum. I think I don't understand what you're trying to say?

Also, even if ##x,y,z \gt 0##, ##x+y-z## can still be less than zero, and we cannot use the A.M.-G.M. inequality the way I did; if ##x,y,z## were sides of a triangle then it's different though.

I meant the geometrical meaning of ##f(x,y,z)##, not the meaning of mean values.
I just noticed that $$c(c-2x)(c-2y)(c-2z)$$
Looks similar to the formula for area of a triangle!
Is that what you meant? Is the geometrical meaning of ##f(x,y,z)## that it is 16 times the square of the area of a triangle with sides ##x,y,z##? Because the area of a triangle with sides ##x,y,z## will be
$$\sqrt{\dfrac{c}{2}\left(\dfrac{c}{2}-x\right)\left(\dfrac{c}{2}-y\right)\left(\dfrac{c}{2}-z\right)}$$

fresh_42
Mentor
Can you clarify what is not rigorous? I don't see why ##Q## choosing ##b=0## is not covered by what I wrote. I wrote that ##P## chooses any value of ##c## such that the graph ##x^3+bx## is not tangent to ##y=-c.## Hence if ##b=0## any nonzero value of ##c## works, as the line ##y=0## is the only horizontal that's tangent to the graph of ##x^3.## This procedure works because ##p(x)## having a multiple root at ##x=a## is equivalent to the graph of ##p## being tangent to the ##x##-axis at ##x=a.##
You started with ##P## chooses ##a=0##. And if ##Q## answers ##b=0,## then ##P## is left with ##x^3+c##. So ##a=0## is no winning strategy for ##P##.

Mentor
In fact, I thought that the winning strategy for P would be choosing the value of ##b## as a negative number in the first move. If Q chooses a value of ##a## in the next move, P can simply shift the x-axis up or down by choosing the value of ##c##, so as to intersect the curve at three points. And if Q instead chose the value of ##c## in the second move, making the curve lie above or below the axis so that it doesn't cut it at three distinct points, then P can choose the value of ##a## accordingly to shift the curve either up or down, because ##a## is the coefficient of ##x^2##, so it should have a greater say in the position of the curve in the plane, and it is also the last move, so Q cannot do anything after this.
Could you write this in formulas? It leaves all the work for me to do, only to decode this string.

Mentor
I don't know why that is the maximum; if equality holds then it should be the maximum. I think I don't understand what you're trying to say?
Yes, that's the idea! You have proven ##f(x,y,z) \leq C## for some specific value ##C##. But if the maximum were ##f(x,y,z)=:M<C##, then ##C## doesn't answer the question. You must show that ##C## is the least possible upper bound, a maximum, not only an upper bound.
Also even if ##x,y,z \gt 0##, ##x+y-z## can still be less than zero and we cannot use A.M G.M inequality the way I did, if ##x,y,z## were sides of a triangle then its different though.
But it leaves us with fewer cases. The original theorem was about triangles, but the problem has an answer in any case. Many of them, I admit, so I reduced them significantly now by demanding ##x,y,z > 0##.

Could you write this in formulas? It leaves all the work for me to do, only to decode this string.
I think I have just a strategy, but I am not sure. Let's say that P plays according to my strategy and chooses a negative value of ##b##, say ##b=-1##, but then after Q chose a value of ##c##, I don't know what value of ##a## P should choose. I know (more like I feel) that there are some values of ##a## that he can choose, but I don't have any relation like $$a \geq f(b,c)$$
I can maybe think about that problem later but first let me focus on #13

Yes, that's the idea! You have proven ##f(x,y,z)\leq C## for some specific value ##C##. But if the maximum were ##f(x,y,z)=:M<C##, then ##C## doesn't answer the question. You must show that ##C## is the least possible upper bound, a maximum, not only an upper bound.
Are you saying that I must prove why the equality holds? or do you want me to show when the equality holds?
As I said that for ##a_1,a_2, ... ,a_n##, ##A.M=G.M## when ##a_1=a_2= ... =a_n##

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality will only hold true if ##x=y=z=0##. Is that the answer? Is the maximum value ##f(x,y,z)=0##?

Are you saying that I must prove why the equality holds? or do you want me to show when the equality holds?
As I said that for ##a_1,a_2, ... ,a_n##, ##A.M=G.M## when ##a_1=a_2= ... =a_n##

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality will only hold true if ##x=y=z=0##. Is that the answer? Is the maximum value ##f(x,y,z)=0##?
But if ##f(x,y,z)## is the area of a triangle, then ##f(x,y,z)=0## is the least possible value, not the maximum value; surely this is wrong!

Are you saying that I must prove why the equality holds? or do you want me to show when the equality holds?
As I said that for ##a_1,a_2, ... ,a_n##, ##A.M=G.M## when ##a_1=a_2= ... =a_n##

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality will only hold true if ##x=y=z=0##. Is that the answer? Is the maximum value ##f(x,y,z)=0##?
Also I forgot that ##x,y,z \gt 0## so ##x=y=z=0## cannot be correct anyway.

I will try again this problem later.

Mentor
Are you saying that I must prove why the equality holds? or do you want me to show when the equality holds?
As I said that for ##a_1,a_2, ... ,a_n##, ##A.M=G.M## when ##a_1=a_2= ... =a_n##

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality will only hold true if ##x=y=z=0##. Is that the answer? Is the maximum value ##f(x,y,z)=0##?
I am saying that any upper bound is not sufficient. We are looking for the maximum. We get from your calculation and the fact that ##A.M.=G.M.## is possible, that ##f(x,y,z)## is maximal for ##c-2x=c-2y=c-2z,## i.e. ##x=y=z##. Hence we obtain the maximum at ##x=y=z=c/3## with a maximum function value ##c^4/27.##

You simply should have mentioned the conclusion. It is necessary that ##A.M.=G.M.## can be obtained since otherwise we only have an arbitrary upper bound.

In case ##x,y,z## are the sides of a triangle, it is the theorem of Heron which says that ##f(x,y,z)=16\,F^2## where ##F## is the area of the triangle with side lengths ##x,y,z,## i.e. the triangle with the maximal area at constant perimeter is an equilateral triangle.
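A brute-force check of the claimed maximum (my own sketch, not part of the proof): fixing ##x+y+z=c## and scanning a grid, the largest value of ##f## should be ##c^4/27##, attained at ##x=y=z=c/3##.

```python
def f(x, y, z):
    """f(x, y, z) = (x+y+z)(x+y-z)(x-y+z)(y+z-x)."""
    return (x + y + z) * (x + y - z) * (x - y + z) * (y + z - x)

def grid_max(c, step):
    """Brute-force maximum of f over x + y + z = c with x, y, z > 0."""
    best = float("-inf")
    n = int(round(c / step))
    for i in range(1, n):
        for j in range(1, n - i):
            x, y = i * step, j * step
            best = max(best, f(x, y, c - x - y))
    return best
```

With ##c=3## the grid maximum is ##3=c^4/27## (up to grid resolution), hit at ##x=y=z=1##.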

kshitij
Mentor
Probably a good idea to put a pin in this.
I love the idea and want to understand it. I just thought you could help me a bit to follow you.

StoneTemplePython
Gold Member
I apologize that I am not familiar with your nomenclature. I guess I wouldn't have asked if it had been ##\mathbf{1}\cdot \mathbf{1}^\tau##. The ambiguous use of ##T## is of course totally ok.

And a note that this was the defining equation and not a conclusion was obviously not necessary. Sorry for not guessing it.

I would ask what ##G'## means instead of guessing ##G':=\operatorname{im} \phi ,## but you are obviously annoyed by my lack of fantasy. And sure, it is not necessary to mention that ##G'## isn't ##[G,G]## in a proof in group theory. Or is it, and I missed the point again?

I thought of an example like ##T=\dfrac{1}{3}\cdot \begin{bmatrix}
1 & 1 & 0&\ldots\\1&1&0&\ldots\\0&0&1&\ldots\\ &&&\ddots \\ 0&0&\ldots&1
\end{bmatrix}## where I could make the matrix arbitrarily large. This example doesn't fulfill ##\operatorname{trace}T =1##. Hence this condition restricts the number of possible groups ##H## significantly, and the question whether some groups remain is not trivial. You always have the identity in ##H##, which can be made arbitrarily large, no matter how large ##|H|## is. Sorry for asking.
Look, I am not annoyed at you-- I can remember what it is like to get a response to these challenge questions that is just baffling. So I put myself in your place and I get it. But I am trying hard (and thus far failing) to not get bogged down in lots of details. FWIW I almost wrote ##\mathbf {11}^*## as conjugate transpose seems reasonable over ##\mathbb C##-- probably better notation. And yes your definition of ##G'## is a lot cleaner.

The point of that Artin exercise btw is that for any complex finite matrix group, when you sum over the group, the instances of the trivial representation survive (obvious) and all other irreducible representations are killed by the summation (surprising to me). So Burnside's Formula is just counting the number of trivial representations and each orbit has exactly one trivial rep associated with it. (If we take this view point, we can bypass character theory altogether though I'm not sure whether that is desirable).

I love the idea and want to understand it. I just thought you could help me a bit to follow you.
Let me see. After some sleep: I think @Office_Shredder has come up with a way for me to radically streamline it and cut out everything in the "General Case". I've already shown (convincingly?) that in the Base Case, if you average over ##G'##, then ##\text{trace}\big(T\big)=1\implies \text{ G is transitive}##. Now I need to show that ##\text{ G is transitive} \implies \text{trace}\big(T\big)=1## (should take at most 3 lines to prove). I think that will give "each orbit has exactly one trivial rep associated with it". The finish then should be something like: the action of ##G## on ##X## may be decomposed into a partition of orbits, so if the Burnside Formula gives you ##m##, there must be ##m## orbits.
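The "one trivial rep per orbit" count can be illustrated numerically (my own sketch; it assumes, as in the thread, that ##T## is the average of the permutation matrices of a group, so ##\operatorname{trace}T## is the average number of fixed points, which by Burnside's counting formula equals the number of orbits):

```python
from fractions import Fraction

def trace_of_average(perms):
    """trace of the average of the permutation matrices of `perms`
    (assumed to form a group): the average number of fixed points,
    i.e. the number of orbits by Burnside's counting formula."""
    n = len(perms[0])
    fixed = sum(sum(1 for i in range(n) if p[i] == i) for p in perms)
    return Fraction(fixed, len(perms))
```

A transitive group gives trace 1; a group with two orbits gives trace 2.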

julian
Gold Member
Problem #4:

First say ##n = p## where ##p## is prime. Then the divisors of ##n## are ##\{ 1, p \}## and we have:

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = \sigma (1^{k-1}) \varphi (1) + \sigma (p^{k-1}) \varphi (p)
\nonumber \\ & = 1 + (1 + p + p^2 + \cdots + p^{k-1}) (p-1)
\nonumber \\ & = 1 + (p^k - 1) =p^k .
\end{align*}

This proves the result when ##n## is a single prime. We will prove the general result by induction. Let ##n = p_1 p_2 \cdots p_\ell## (where no two primes are the same). Assume

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = n^k
\end{align*}

Now let ##n' = n p_{\ell+1}## (where ##p_{\ell+1} \not= p_j## for ##1 \leq j \leq \ell##); then the divisors of ##n'## are the divisors of ##n## together with all the divisors of ##n## multiplied by ##p_{\ell+1}##, so that:

\begin{align*}
\sum_{d|n'} \sigma (d^{k-1}) \varphi (d) & = \sum_{d|n} \sigma (d^{k-1}) \varphi (d) + \sum_{d|n} \sigma (d^{k-1} p_{\ell + 1}^{k-1}) \varphi (d p_{\ell + 1})
\nonumber \\
& = n^k + \sum_{d|n} \sigma (d^{k-1}) \varphi (d) \sigma(p_{\ell + 1}^{k-1}) \varphi (p_{\ell + 1})
\nonumber \\
& = n^k + n^k [(1 + p_{\ell+1} + p_{\ell+1}^2 + \cdots + p_{\ell+1}^{k-1}) (p_{\ell + 1} - 1)]
\nonumber \\
& = n^k + n^k [p_{\ell+1}^k - 1]
\nonumber \\
& = (n p_{\ell+1})^k = {n'}^k
\end{align*}

where we have used the inductive assumption, and that both ##\sigma## and ##\varphi## are multiplicative.
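A quick numerical check of the identity for small squarefree ##n## (my own Python sketch; the naive ##\sigma,\varphi## implementations are mine):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n):
    """Sum of the divisors of n."""
    return sum(divisors(n))

def phi(n):
    """Euler's totient, counted naively."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def divisor_sum(n, k):
    """Left-hand side: sum over d | n of sigma(d^(k-1)) * phi(d)."""
    return sum(sigma(d ** (k - 1)) * phi(d) for d in divisors(n))
```

For squarefree ##n## this agrees with ##n^k##; it fails for non-squarefree ##n## (e.g. ##n=4,\,k=2## gives 18, not 16), which matches the induction above running over products of distinct primes.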

fresh_42
Mentor
Problem #4:

First say ##n = p## where ##p## is prime. Then the divisors of ##n## are ##\{ 1, p \}## and we have:

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = \sigma (1^{k-1}) \varphi (1) + \sigma (p^{k-1}) \varphi (p)
\nonumber \\ & = 1 + (1 + p + p^2 + \cdots + p^{k-1}) (p-1)
\nonumber \\ & = 1 + (p^k - 1) =p^k .
\end{align*}

This proves the result when ##n## is a single prime. We will prove the general result by induction. Let ##n = p_1 p_2 \cdots p_\ell## (where no two primes are the same). Assume

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = n^k
\end{align*}

Now let ##n' = n p_{\ell+1}## (where ##p_{\ell+1} \not= p_j## for ##1 \leq j \leq \ell##); then the divisors of ##n'## are the divisors of ##n## together with all the divisors of ##n## multiplied by ##p_{\ell+1}##, so that:

\begin{align*}
\sum_{d|n'} \sigma (d^{k-1}) \varphi (d) & = \sum_{d|n} \sigma (d^{k-1}) \varphi (d) + \sum_{d|n} \sigma (d^{k-1} p_{\ell + 1}^{k-1}) \varphi (d p_{\ell + 1})
\nonumber \\
& = n^k + \sum_{d|n} \sigma (d^{k-1}) \varphi (d) \sigma(p_{\ell + 1}^{k-1}) \varphi (p_{\ell + 1})
\nonumber \\
& = n^k + n^k [(1 + p_{\ell+1} + p_{\ell+1}^2 + \cdots + p_{\ell+1}^{k-1}) (p_{\ell + 1} - 1)]
\nonumber \\
& = n^k + n^k [p_{\ell+1}^k - 1]
\nonumber \\
& = (n p_{\ell+1})^k = {n'}^k
\end{align*}

where we have used the inductive assumption, and that both ##\sigma## and ##\varphi## are multiplicative.

Alternative to an induction, you can prove that the function ##F(n):=\sum_{d|n}f(d)=\sum_{d|n}\sigma(d^{k-1})\varphi(d)## is also multiplicative: for coprime ##a,b## every divisor of ##ab## factors uniquely as ##de## with ##d\,|\,a## and ##e\,|\,b##, so
\begin{align*}
F(ab)&=\sum_{d|ab}f(d)= \sum_{d|a}\sum_{e|b} f(de) = \sum_{d|a}\sum_{e|b} f(d)f(e) =\Big(\sum_{d|a}f(d)\Big)\Big(\sum_{e|b} f(e)\Big) =F(a)F(b)
\end{align*}
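The multiplicativity itself is easy to confirm numerically (my own sketch; ##F## below is the ##F(n)## above for a fixed ##k##, and it is multiplicative for all coprime ##a,b##, not only squarefree ones):

```python
from math import gcd

def F(n, k):
    """F(n) = sum over d | n of sigma(d^(k-1)) * phi(d), computed naively."""
    def sigma(m):
        return sum(d for d in range(1, m + 1) if m % d == 0)
    def phi(m):
        return sum(1 for j in range(1, m + 1) if gcd(j, m) == 1)
    return sum(sigma(d ** (k - 1)) * phi(d)
               for d in range(1, n + 1) if n % d == 0)
```

Checking a few coprime pairs confirms ##F(ab)=F(a)F(b)##.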

Office_Shredder
Staff Emeritus
Gold Member
Edit:
For 3, did you really mean a Lie algebra isomorphism? I thought the general linear Lie algebra was not semisimple. Did you just mean an embedding?

Sorry I think this got lost in the shuffle, I intended to direct this question to @fresh_42

Mentor
Sorry I think this got lost in the shuffle, I intended to direct this question to @fresh_42
This is due to the ambiguous use of the English language in this case. I had to learn here on PF that monomorphisms are called isomorphisms into. I personally hate this, and it is well distinguished in German: either it is a monomorphism or an isomorphism, not such a stupid thing as an isomorphism into. But who am I to argue with native speakers? I tried to clarify this stupid use of terms by writing an embedding arrow.

The dimensions alone forbid an isomorphism, even for non-semisimple Lie algebras.

Ado's theorem says it can always be done over a field of characteristic zero, but it is especially easy to see for semisimple ones.

Mentor
Look, I am not annoyed at you-- I can remember what it is like to get a response to these challenge questions that is just baffling.
Yes, sorry if I was harsh. It is also a matter of time. I read your solution late at night (or was it early in the morning?), so it was more difficult to decode than after sleep. But anyway, I learnt group characters long ago, so I first have to refresh my memory.

Infrared
Gold Member
I don't understand what you're saying?

How can the equation ##y=x^3+c## have three different real roots for any value of ##c##?

Oops, I somehow missed the adjective "real" in the question! I guess I should've realized it was a little too easy.

I still think ##P## can win, so I'll try again. Let ##f(x)=x^3+ax^2+bx+c.## First ##P## chooses ##b=-1##. Note that this ensures that the discriminant of ##f'(x)=3x^2+2ax+b##, which is ##4a^2-12b##, must be strictly positive, so ##f## has two distinct critical points, the smaller of which must be a local maximum, and the larger of which a local minimum (no matter the value of the other coefficients). If ##Q## selects a value for ##a##, then ##P## wins by selecting any value of ##c## such that ##-c## is between the local maximum and local minimum values of ##x^3+ax^2+bx.##

Now consider the case that ##Q## selects a value for ##c##. Let ##x_-## and ##x_+## respectively represent the smaller and larger critical points for ##f(x).## Let ##g(x)=x^3+ax^2+bx## (so we forget the constant term in ##f##; this is not yet fully defined since we need to pick ##a##) For ##P## to win, we need a strategy to choose ##a## in such a way that ##-c## lies between ##g(x_+)## and ##g(x_-).##

Applying the quadratic formula to ##g'(x)## we find that ##x_-=-2a/3+O(1/a)## in the limit ##a\to\infty## and similarly ##x_+=O(1/a).## Hence ##\lim_{a\to\infty}g(x_-)=\infty## and ##\lim_{a\to\infty}g(x_+)=0.## Thus if ##-c## is positive, then ##P## can win by choosing a sufficiently large value of ##a##. Analogously, if ##-c## is negative, ##P## can win by choosing a sufficiently negative value of ##a.## Finally if ##c=0##, then ##P## may choose ##a=0## since ##x^3-x## has three distinct roots.
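One can check this strategy numerically via the cubic discriminant (my own sketch, not part of the argument; for monic ##x^3+ax^2+bx+c## the discriminant ##18abc-4a^3c+a^2b^2-4b^3-27c^2## is positive exactly when there are three distinct real roots):

```python
def cubic_discriminant(a, b, c):
    """Discriminant of x^3 + a x^2 + b x + c;
    positive <=> three distinct real roots."""
    return 18 * a * b * c - 4 * a**3 * c + a**2 * b**2 - 4 * b**3 - 27 * c**2

def winning_a(c, b=-1):
    """Search for a value of `a` making the discriminant positive,
    after P fixed b = -1 and Q answered with c. Returns None on failure."""
    for mag in range(0, 200):
        for a in (mag, -mag):
            if cubic_discriminant(a, b, c) > 0:
                return a
    return None
```

For a range of ##c## values a suitable ##a## always turns up (with sign opposite to ##c##, as argued above), while ##x^3+c## with ##c\neq 0## has a negative discriminant, matching the earlier objection to the first attempt.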

Let me know if this looks better @fresh_42.
