Math Challenge - July 2021

In summary, the conversation covers topics such as group theory, Lie algebras, number theory, manifolds, calculus, topology, differential equations, and geometry. Various problems and proofs are discussed, including proving that a finite group is a product of groups of prime order, determining the number of orbits in a group, and finding a Lie algebra isomorphism. There are also questions involving finding the composition factors of a group, proving a formula involving Euler's phi-function and the sum of divisors, and showing that a set is a manifold. High school-level questions involving Cartesian coordinates, rotating ellipses, and optimization problems are also discussed. Finally, there is a question about a game in which two players alternately choose real coefficients of a cubic polynomial, with one player trying to force three distinct real roots.
  • #36
kshitij said:
In fact, I thought that the winning strategy for P would be to choose a negative value of b in the first move. If Q then chooses a value of a, P can shift the curve up or down with his choice of c so that it intersects the x-axis at three points. If instead Q chooses the value of c in the second move, so that the curve lies above or below the axis and doesn't cut it at three distinct points, then P can choose the value of a accordingly to shift the curve up or down: a is the coefficient of ##x^2##, so it should have a greater say in the position of the curve in the plane, and it is also the last move, so Q cannot do anything after it.
Could you write this in formulas? As it stands, it leaves all the work of decoding this string to me.
 
  • #37
kshitij said:
I don't know why that is the maximum; if equality holds, then it should be the maximum. I don't think I understand what you're trying to say?
Yes, that's the idea! You have proven ##f(x,y,z) \leq C## for some specific value ##C##. But if the maximum would be ##f(x,y,z)=:M<C## then ##C## doesn't answer the question. You must show that ##C## is the least possible solution, a maximum, not only an upper bound.
kshitij said:
Also, even if ##x,y,z \gt 0##, ##x+y-z## can still be less than zero, and then we cannot use the A.M.-G.M. inequality the way I did; if ##x,y,z## were sides of a triangle, then it's different though.
But it leaves us with fewer cases. The original theorem was about triangles, but the problem has an answer in any case. Many of them, I admit, so I reduced them significantly now by demanding ##x,y,z > 0##.
 
  • #38
fresh_42 said:
Could you write this in formulas? As it stands, it leaves all the work of decoding this string to me.
I think I have just a strategy but I am not sure because let's say that P plays according to my strategy and chooses a negative value of ##b## say ##b=-1## but then after Q chose a value of ##c## then I don't know what value of ##a## P should choose, I know (more like I feel) that there are some values of ##a## that he can choose but I don't have any relation like $$a \geq f(b,c)$$
I can maybe think about that problem later but first let me focus on #13
 
  • #39
fresh_42 said:
Yes, that's the idea! You have proven ##f(x,y,z) \leq C## for some specific value ##C##. But if the maximum would be ##f(x,y,z)=:M<C## then ##C## doesn't answer the question. You must show that ##C## is the least possible solution, a maximum, not only an upper bound.
Are you saying that I must prove why the equality holds, or do you want me to show when the equality holds?
As I said, for ##a_1,a_2, \ldots ,a_n## we have ##A.M.=G.M.## when ##a_1=a_2= \cdots =a_n##.

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality only holds if ##x=y=z=0##. Is that the answer? Is the maximum value of ##f(x,y,z)## equal to ##0##?
 
  • #40
kshitij said:
Are you saying that I must prove why the equality holds, or do you want me to show when the equality holds?
As I said, for ##a_1,a_2, \ldots ,a_n## we have ##A.M.=G.M.## when ##a_1=a_2= \cdots =a_n##.

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality only holds if ##x=y=z=0##. Is that the answer? Is the maximum value of ##f(x,y,z)## equal to ##0##?
But if ##f(x,y,z)## is the area of a triangle, then ##f(x,y,z)=0## is the least possible value, not the maximum value; surely this is wrong!
 
  • #41
kshitij said:
Are you saying that I must prove why the equality holds, or do you want me to show when the equality holds?
As I said, for ##a_1,a_2, \ldots ,a_n## we have ##A.M.=G.M.## when ##a_1=a_2= \cdots =a_n##.

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality only holds if ##x=y=z=0##. Is that the answer? Is the maximum value of ##f(x,y,z)## equal to ##0##?
Also, I forgot that ##x,y,z \gt 0##, so ##x=y=z=0## cannot be correct anyway.

I will try this problem again later.
 
  • #42
kshitij said:
Are you saying that I must prove why the equality holds, or do you want me to show when the equality holds?
As I said, for ##a_1,a_2, \ldots ,a_n## we have ##A.M.=G.M.## when ##a_1=a_2= \cdots =a_n##.

So in our case the equality should hold if $$x+y+z=x+y-z=x-y+z=y+z-x$$
But this equality only holds if ##x=y=z=0##. Is that the answer? Is the maximum value of ##f(x,y,z)## equal to ##0##?
I am saying that any upper bound is not sufficient. We are looking for the maximum. We get from your calculation and the fact that ##A.M.=G.M.## is possible, that ##f(x,y,z)## is maximal for ##c-2x=c-2y=c-2z,## i.e. ##x=y=z##. Hence we obtain the maximum at ##x=y=z=c/3## with a maximum function value ##c^4/27.##

You simply should have mentioned the conclusion. It is necessary that ##A.M.=G.M.## can be obtained since otherwise we only have an arbitrary upper bound.

In case ##x,y,z## are the sides of a triangle, this is Heron's formula, which says that ##f(x,y,z)=16\,F^2## where ##F## is the area of the triangle with side lengths ##x,y,z##; i.e. the triangle with maximal area for a given perimeter is the equilateral triangle.
 
  • #43
StoneTemplePython said:
Probably a good idea to put a pin in this.
I love the idea and want to understand it. I just thought you could help me a bit to follow you.
 
  • #44
fresh_42 said:
I apologize that I am not familiar with your nomenclature. I guess I wouldn't have asked if it had been ##\mathbf{1}\cdot \mathbf{1}^\tau##. The ambiguous use of ##T## is of course totally ok. And a note that this was the defining equation and not a conclusion was obviously not necessary. Sorry for not guessing it.

I would ask what ##G'## means instead of guessing ##G':=\operatorname{im} \phi ,## but you are obviously annoyed by my lack of imagination. And sure, it is not necessary to mention that ##G'## isn't ##[G,G]## in a proof in group theory. Or is it, and I missed the point again?

I thought of an example like ##T=\dfrac{1}{3}\cdot \begin{bmatrix}
1 & 1 & 0&\ldots\\1&1&0&\ldots\\0&0&1&\ldots\\ &&&\ddots \\ 0&0&\ldots&1
\end{bmatrix}## where I could make the matrix arbitrarily large. This example doesn't fulfill ##\operatorname{trace}T =1##. Hence this condition restricts the number of possible groups ##H## significantly, and the question whether some groups remain is not trivial. You always have the identity in ##H##, which can be made arbitrarily large, regardless of how large ##|H|## is. Sorry for asking.
Look, I am not annoyed at you-- I can remember what it is like to get a response to these challenge questions that is just baffling. So I put myself in your place and I get it. But I am trying hard (and thus far failing) to not get bogged down in lots of details. FWIW I almost wrote ##\mathbf {11}^*## as conjugate transpose seems reasonable over ##\mathbb C##-- probably better notation. And yes your definition of ##G'## is a lot cleaner.

The point of that Artin exercise btw is that for any complex finite matrix group, when you sum over the group, the instances of the trivial representation survive (obvious) and all other irreducible representations are killed by the summation (surprising to me). So Burnside's Formula is just counting the number of trivial representations and each orbit has exactly one trivial rep associated with it. (If we take this view point, we can bypass character theory altogether though I'm not sure whether that is desirable).

fresh_42 said:
I love the idea and want to understand it. I just thought you could help me a bit to follow you.
Let me see. After some sleep: I think @Office_Shredder has come up with a way for me to radically streamline it and cut out everything in the "General Case". I've already shown (convincingly?) that in the Base Case: if you average over ##G'##, then ##\text{trace}\big(T\big)=1\implies \text{ G is transitive}##. Now I need to show that ##\text{ G is transitive} \implies \text{trace}\big(T\big)=1 ##. (Should take at most 3 lines to prove.) I think that will give "each orbit has exactly one trivial rep associated with it". The finish then should be something like: the action of ##G## on ##X## may be decomposed into a partition of orbits, so if the Burnside Formula gives you ##m##, there must be ##m## orbits.
 
  • #45
Problem #4:

First say ##n = p## where ##p## is prime. Then the divisors of ##n## are ##\{ 1, p \}## and we have:

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = \sigma (1^{k-1}) \varphi (1) + \sigma (p^{k-1}) \varphi (p)
\nonumber \\ & = 1 + (1 + p + p^2 + \cdots + p^{k-1}) (p-1)
\nonumber \\ & = 1 + (p^k - 1) =p^k .
\end{align*}

This proves the result when ##n## is a single prime. We will prove the general (squarefree) result by induction. Let ##n = p_1 p_2 \cdots p_\ell## (where no two of the primes are equal). Assume

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = n^k
\end{align*}

Now let ##n' = n p_{\ell+1}## (where ##p_{\ell+1} \not= p_j## for ##1 \leq j \leq \ell##), then the divisors of ##n'## are the divisors of ##n## together with all the divisors of ##n## multiplied by ##p_{\ell+1}##. So that:

\begin{align*}
\sum_{d|n'} \sigma (d^{k-1}) \varphi (d) & = \sum_{d|n} \sigma (d^{k-1}) \varphi (d) + \sum_{d|n} \sigma (d^{k-1} p_{\ell + 1}^{k-1}) \varphi (d p_{\ell + 1})
\nonumber \\
& = n^k + \sum_{d|n} \sigma (d^{k-1}) \varphi (d) \sigma(p_{\ell + 1}^{k-1}) \varphi (p_{\ell + 1})
\nonumber \\
& = n^k + n^k [(1 + p_{\ell+1} + p_{\ell+1}^2 + \cdots + p_{\ell+1}^{k-1}) (p_{\ell + 1} - 1)]
\nonumber \\
& = n^k + n^k [p_{\ell+1}^k - 1]
\nonumber \\
& = (n p_{\ell+1})^k = {n'}^k
\end{align*}

where we have used the inductive assumption, and that both ##\sigma## and ##\varphi## are multiplicative.
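The identity can be spot-checked by brute force for small squarefree ##n##; a minimal sketch (the helper names `divisors`, `sigma_`, `phi`, and `lhs` are mine, not from the problem statement):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma_(n):
    # sum-of-divisors function sigma(n)
    return sum(divisors(n))

def phi(n):
    # Euler's totient function
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def lhs(n, k):
    # left-hand side: sum over d | n of sigma(d^(k-1)) * phi(d)
    return sum(sigma_(d ** (k - 1)) * phi(d) for d in divisors(n))

# check the identity lhs(n, k) == n^k for squarefree n
for n in [2, 3, 5, 6, 10, 15, 30]:
    for k in [1, 2, 3]:
        assert lhs(n, k) == n ** k
```

Note that ##k=1## reduces to the classical ##\sum_{d|n}\varphi(d)=n##.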

 
  • #46
julian said:
Problem #4:

First say ##n = p## where ##p## is prime. Then the divisors of ##n## are ##\{ 1, p \}## and we have:

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = \sigma (1^{k-1}) \varphi (1) + \sigma (p^{k-1}) \varphi (p)
\nonumber \\ & = 1 + (1 + p + p^2 + \cdots + p^{k-1}) (p-1)
\nonumber \\ & = 1 + (p^k - 1) =p^k .
\end{align*}

This proves the result when ##n## is a single prime. We will prove the general (squarefree) result by induction. Let ##n = p_1 p_2 \cdots p_\ell## (where no two of the primes are equal). Assume

\begin{align*}
\sum_{d|n} \sigma (d^{k-1}) \varphi (d) & = n^k
\end{align*}

Now let ##n' = n p_{\ell+1}## (where ##p_{\ell+1} \not= p_j## for ##1 \leq j \leq \ell##), then the divisors of ##n'## are the divisors of ##n## together with all the divisors of ##n## multiplied by ##p_{\ell+1}##. So that:

\begin{align*}
\sum_{d|n'} \sigma (d^{k-1}) \varphi (d) & = \sum_{d|n} \sigma (d^{k-1}) \varphi (d) + \sum_{d|n} \sigma (d^{k-1} p_{\ell + 1}^{k-1}) \varphi (d p_{\ell + 1})
\nonumber \\
& = n^k + \sum_{d|n} \sigma (d^{k-1}) \varphi (d) \sigma(p_{\ell + 1}^{k-1}) \varphi (p_{\ell + 1})
\nonumber \\
& = n^k + n^k [(1 + p_{\ell+1} + p_{\ell+1}^2 + \cdots + p_{\ell+1}^{k-1}) (p_{\ell + 1} - 1)]
\nonumber \\
& = n^k + n^k [p_{\ell+1}^k - 1]
\nonumber \\
& = (n p_{\ell+1})^k = {n'}^k
\end{align*}

where we have used the inductive assumption, and that both ##\sigma## and ##\varphi## are multiplicative.

Alternatively to an induction, you can prove that the function ##F(n):=\sum_{d|n}f(d)=\sum_{d|n}\sigma(d^{k-1})\varphi(d)## is also multiplicative: for coprime ##a,b##, every divisor of ##ab## factors uniquely as ##de## with ##d\,|\,a## and ##e\,|\,b##, so
\begin{align*}
F(ab)&=\sum_{d|a}\sum_{e|b} f(de)= \sum_{d|a}\sum_{e|b} f(d)f(e) =\sum_{d|a}f(d) \sum_{e|b} f(e) =F(a)F(b).
\end{align*}
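The multiplicativity of ##F## is also easy to confirm numerically; a small sketch with ##k=2## (function names are my own, not from the thread):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def f(d, k=2):
    # f(d) = sigma(d^(k-1)) * phi(d)
    sigma = sum(divisors(d ** (k - 1)))
    phi = sum(1 for m in range(1, d + 1) if gcd(m, d) == 1)
    return sigma * phi

def F(n):
    # divisor sum of the multiplicative function f
    return sum(f(d) for d in divisors(n))

# F is multiplicative: F(ab) = F(a) F(b) whenever gcd(a, b) = 1,
# even for non-squarefree arguments
for a in [2, 3, 4, 5, 9]:
    for b in [5, 7, 11]:
        if gcd(a, b) == 1:
            assert F(a * b) == F(a) * F(b)
```

(The identity ##F(n)=n^k## itself only needs to be checked on prime powers once multiplicativity is known.)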
 
  • #47
Office_Shredder said:
Edit:
For 3, did you really mean a Lie algebra isomorphism? I thought the general linear Lie algebra was not semisimple. Did you just mean an embedding?

Sorry I think this got lost in the shuffle, I intended to direct this question to @fresh_42
 
  • #48
Office_Shredder said:
Sorry I think this got lost in the shuffle, I intended to direct this question to @fresh_42
This is due to the ambiguous use of the English language in this case. I had to learn here on PF that monomorphisms are called isomorphisms into. I personally hate this, and it is well distinguished in German: either it is a monomorphism or an isomorphism, not such a stupid thing as an isomorphism into. But who am I to argue with native speakers? I tried to clarify this stupid use of terms by writing an embedding arrow.

The dimensions alone forbid an isomorphism, even for non-semisimple Lie algebras.

Ado's theorem says it can always be done over a field of characteristic zero, but it is especially easy to see for semisimple ones.
 
Last edited:
  • #49
StoneTemplePython said:
Look, I am not annoyed at you-- I can remember what it is like to get a response to these challenge questions that is just baffling.
Yes, sorry if I was harsh. It is also a matter of time. I read your solution late at night (or was it early in the morning?), so it was more difficult to decode than it would have been after some sleep. But anyway, I learned group characters long ago, so I first have to refresh my memory.
 
  • #50
kshitij said:
I don't understand what you're saying?

How can the equation ##y=x^3+c## have three different real roots for any value of ##c##?

Oops, I somehow missed the adjective "real" in the question! I guess I should've realized it was a little too easy.

I still think ##P## can win, so I'll try again. Let ##f(x)=x^3+ax^2+bx+c.## First ##P## chooses ##b=-1##. Note that this ensures that the discriminant of ##f'(x)=3x^2+2ax+b##, which is ##4a^2-12b##, must be strictly positive, so ##f## has two distinct critical points, the smaller of which must be a local maximum, and the larger of which a local minimum (no matter the value of the other coefficients). If ##Q## selects a value for ##a##, then ##P## wins by selecting any value of ##c## such that ##-c## is between the local maximum and local minimum values of ##x^3+ax^2+bx.##

Now consider the case that ##Q## selects a value for ##c##. Let ##x_-## and ##x_+## respectively denote the smaller and larger critical points of ##f(x).## Let ##g(x)=x^3+ax^2+bx## (so we forget the constant term in ##f##; this is not yet fully determined since we still need to pick ##a##). For ##P## to win, we need a strategy to choose ##a## in such a way that ##-c## lies between ##g(x_+)## and ##g(x_-).##

Applying the quadratic formula to ##g'(x)## we find that ##x_-=-2a/3+O(1/a)## in the limit ##a\to\infty## and similarly ##x_+=O(1/a).## Hence ##\lim_{a\to\infty}g(x_-)=\infty## and ##\lim_{a\to\infty}g(x_+)=0.## Thus if ##-c## is positive, then ##P## can win by choosing a sufficiently large value of ##a##. Analogously, if ##-c## is negative, ##P## can win by choosing a sufficiently negative value of ##a.## Finally if ##c=0##, then ##P## may choose ##a=0## since ##x^3-x## has three distinct roots.

Let me know if this looks better @fresh_42.
 
Last edited:
  • #51
Infrared said:
Oops, I somehow missed the adjective "real" in the question! I guess I should've realized it was a little too easy.

I still think ##P## can win, so I'll try again. Let ##f(x)=x^3+ax^2+bx+c.## First ##P## chooses ##b=-1##. Note that this ensures that the discriminant of ##f'(x)=3x^2+2ax+b##, which is ##4a^2-12b##, must be strictly positive,
\begin{align*}
f'(x)=3x^2+2ax-1=0&\Longrightarrow x^2+\dfrac{2a}{3}x-\dfrac{1}{3}=0\\&\Longrightarrow x_{1,2}=-\dfrac{a}{3}\pm \sqrt{\dfrac{a^2}{9}+\dfrac{1}{3}}=-\dfrac{a}{3}\pm \dfrac{1}{3}\sqrt{a^2+3}
\end{align*}
so the discriminant is always positive.
Infrared said:
so ##f## has two distinct critical points, the smaller of which must be a local maximum, and the larger of which a local minimum (no matter the value of the other coefficients). If ##Q## selects a value for ##b##,

How? ##P## already set ##b=-1##.

Hint: It is surprisingly easy if you do not want to enforce a win by the first move!
 
  • #52
fresh_42 said:
which is always a positive discriminant.
Yep, this is the point!

fresh_42 said:
How? ##P## already set ##b=-1##.
Apologies this should say ##a## instead of ##b.##
 
  • #53
Infrared said:
Oops, I somehow missed the adjective "real" in the question! I guess I should've realized it was a little too easy.

I still think ##P## can win, so I'll try again. Let ##f(x)=x^3+ax^2+bx+c.## First ##P## chooses ##b=-1##. Note that this ensures that the discriminant of ##f'(x)=3x^2+2ax+b##, which is ##4a^2-12b##, must be strictly positive, so ##f## has two distinct critical points, the smaller of which must be a local maximum, and the larger of which a local minimum (no matter the value of the other coefficients). If ##Q## selects a value for ##b##,
Let me assume you meant that ##Q## sets a value for ##a##. Then ##P## can choose ##c## in a way that ##f## intersects the ##x-##axis three times.

Infrared said:
then ##P## wins by selecting any value of ##c## such that ##-c## is between the local maximum and local minimum values of ##x^3+ax^2+bx.##

Now consider the case that ##Q## selects a value for ##c##. Let ##x_-## and ##x_+## respectively represent the smaller and larger critical points for ##f(x).##
Which is ##3x_-=-a-\sqrt{a^2+3}## and ##3x_+=-a+\sqrt{a^2+3}.##

Infrared said:
Let ##g(x)=x^3+ax^2+bx## (so we forget the constant term in ##f##; this is not yet fully defined since we need to pick ##a##)
Which is ##g(x)=f(x)-c=x^3+ax^2-x.##
Infrared said:
For ##P## to win, we need a strategy to choose ##a## in such a way that ##-c## lies between ##g(x_+)## and ##g(x_-).##
Why? Do you mean ##x=-c##, ##f(x)=-c##, or ##g(x)=-c##?

Infrared said:
Applying the quadratic formula to ##g'(x)## we find that ##x_-=-2a/3+O(1/a)## in the limit ##a\to\infty## and similarly ##x_+=O(1/a).##

Why don't we have ##x_{\pm} = -\dfrac{a}{3}\pm \sqrt{\dfrac{a^2}{9}+\dfrac{1}{3}}= O(a)## because ##g'(x)=f'(x)=3x^2+2ax-1\,\rm ?##

How do you get ##a## into the denominator?

Infrared said:
Hence ##\lim_{a\to\infty}g(x_-)=\infty## and ##\lim_{a\to\infty}g(x_+)=0.## Thus if ##-c## is positive, then ##P## can win by choosing a sufficiently large value of ##a##. Analogously, if ##-c## is negative, ##P## can win by choosing a sufficiently negative value of ##a.## Finally if ##c=0##, then ##P## may choose ##a=0## since ##x^3-x## has three distinct roots.

Let me know if this looks better @fresh_42.
 
Last edited:
  • #54
fresh_42 said:
Why? Do you mean ##x=-c##, ##f(x)=-c##, or ##g(x)=-c##?

I mean that ##P## wins if they can choose ##a## such that ##g(x_+)<-c<g(x_-).## If this holds, then the equation ##g(x)=-c## (which is ##f(x)=0##) has three solutions: one between ##x_-## and ##x_+##, one larger than ##x_+##, and one smaller than ##x_-.##

fresh_42 said:
Why don't we have ##x_{\pm} = -\dfrac{a}{3}\pm \sqrt{\dfrac{a^2}{9}-\dfrac{1}{3}}= O(a)## because ##g'(x)=f'(x)=3x^2+2ax-1\,\rm ?##

How do you get ##a## into the denominator?
We have $$x_-=-a/3-\sqrt{a^2/9+1/3}=-a/3-a/3\sqrt{1+3/a^2}=-a/3-a/3(1+O(1/a^2))=-2a/3+O(1/a).$$ The other term is similar. You're right (up to signs) that this term is also ##O(a)##, but that isn't precise enough to conclude that ##g(x_-)## becomes very positive in the limit ##a\to\infty.##
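This expansion can be sanity-checked numerically (with ##b=-1## as in the strategy; the helper names `x_minus` and `g` are mine):

```python
import math

def x_minus(a, b=-1.0):
    # smaller critical point of f(x) = x^3 + a x^2 + b x + c,
    # i.e. the smaller root of f'(x) = 3x^2 + 2ax + b
    return (-a - math.sqrt(a * a - 3.0 * b)) / 3.0

def g(x, a):
    # f without the constant term, with b = -1
    return x ** 3 + a * x ** 2 - x

# x_- = -2a/3 + O(1/a): the error shrinks like 1/a
for a in [10.0, 100.0, 1000.0]:
    assert abs(x_minus(a) - (-2.0 * a / 3.0)) < 1.0 / a

# hence g(x_-) stays positive and grows as a -> infinity
vals = [g(x_minus(a), a) for a in [10.0, 100.0, 1000.0]]
assert vals[0] > 0 and vals[0] < vals[1] < vals[2]
```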
 
  • #55
Ok, I finally got it. It took so long and several posts that I'll post my solution here for reference:

##P## has the following winning strategy:

##P## chooses ##c=1## in his first move. In case ##Q## sets a value for ##a##, then ##P## finally sets ##b < -a-2\,;## whereas in case ##Q## sets a value for ##b##, ##P## finally sets ##a<-b-2\,.##

We now have to show that the equation has three distinct real roots. Let ##f(x)=x^3+ax^2+bx+1\,.## Since ##\lim_{x \to \infty}f(x)=+\infty## and ##\lim_{x \to -\infty}f(x)=-\infty## there is a real number ##k>1## such that
$$
f(k)> 0\, , \,f(0)=1\, , \,f(-k)<0\, , \,f(1)=a+b+2 < 0
$$
By the intermediate value theorem, there have to be roots ##f(\xi_j)=0## with
$$
-k <\xi_1 <0 < \xi_2 < 1< \xi_3< k
$$
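The strategy is easy to check numerically, e.g. with numpy's root finder; a sketch picking ##b=-a-3## as one admissible choice (my choice, any ##b<-a-2## works):

```python
import numpy as np

def real_roots(a, b, c=1.0):
    # roots of x^3 + a x^2 + b x + c, keeping only (numerically) real ones
    rts = np.roots([1.0, a, b, c])
    return sorted(r.real for r in rts if abs(r.imag) < 1e-7)

# P plays c = 1; after Q picks a, P answers with any b < -a - 2
for a in [-5.0, 0.0, 3.0, 10.0]:
    b = -a - 3.0                      # satisfies b < -a - 2
    rts = real_roots(a, b)
    assert len(rts) == 3              # three real roots
    # and they interlace as in the proof: xi_1 < 0 < xi_2 < 1 < xi_3,
    # so in particular they are distinct
    assert rts[0] < 0 < rts[1] < 1 < rts[2]
```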
 
Last edited:
  • #56
fresh_42 said:
I am saying that any upper bound is not sufficient. We are looking for the maximum. We get from your calculation and the fact that ##A.M.=G.M.## is possible, that ##f(x,y,z)## is maximal for ##c-2x=c-2y=c-2z,## i.e. ##x=y=z##. Hence we obtain the maximum at ##x=y=z=c/3## with a maximum function value ##c^4/27.##

You simply should have mentioned the conclusion. It is necessary that ##A.M.=G.M.## can be obtained since otherwise we only have an arbitrary upper bound.

In case ##x,y,z## are the sides of a triangle, it is the theorem of Heron which says that ##f(x,y,z)=16\,F^2## where ##F## is the area of the triangle with side lengths ##x,y,z, ## i.e. the triangle with the maximal area by constant circumference is an equilateral triangle.
So, basically as I understand it,

we had $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
And ##c## is a constant, so we use ##A.M \geq G.M## on $$g(x,y,z)=(c-2x)(c-2y)(c-2z)$$
So, we get,
\begin{align*}
\dfrac{(c-2x)+(c-2y)+(c-2z)}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{3c-2c}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{c^3}{27} &\geq (c-2x)(c-2y)(c-2z)\\
\end{align*}
And the equality holds when,
$$c-2x=c-2y=c-2z$$
Which gives us ##x=y=z## which is condition for an equilateral triangle, and
$$f(x,y,z)=\dfrac{c^4}{27}$$
And we know that 16 times the square of the area of an equilateral triangle with side length ##a## and perimeter ##c=3a## is,
$$16\left(\dfrac{\sqrt {3}a^2}{4}\right)^2=3a^4=\frac{c^4}{27}$$
Which is the same result that we got!
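This maximization can also be spot-checked by random search (a sketch with ##c=3##; variable names are mine):

```python
import random

c = 3.0  # fixed perimeter, x + y + z = c

def f(x, y, z):
    return c * (c - 2 * x) * (c - 2 * y) * (c - 2 * z)

random.seed(0)
best = 0.0
for _ in range(100_000):
    # random side lengths with perimeter c; the triangle inequality
    # is exactly the condition that all three factors are positive
    x = random.uniform(0.0, c / 2.0)
    y = random.uniform(0.0, c / 2.0)
    z = c - x - y
    if 0.0 < z < c / 2.0:
        best = max(best, f(x, y, z))

# the equilateral case x = y = z = c/3 attains c^4/27, and no sample beats it
assert f(c / 3.0, c / 3.0, c / 3.0) == c ** 4 / 27.0
assert best <= c ** 4 / 27.0 + 1e-9
```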
 
  • #57
kshitij said:
So, basically as I understand it,

we had $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
And ##c## is a constant, so we use ##A.M \geq G.M## on $$g(x,y,z)=(c-2x)(c-2y)(c-2z)$$
So, we get,
\begin{align*}
\dfrac{(c-2x)+(c-2y)+(c-2z)}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{3c-2c}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{c^3}{27} &\geq \left((c-2x)(c-2y)(c-2z)\right)\\
\end{align*}
And the equality holds when,
$$c-2x=c-2y=c-2z$$
Which gives us ##x=y=z## which is condition for an equilateral triangle, and
$$f(x,y,z)=\dfrac{c^4}{27}$$
And we know that 16 times the square of the area of an equilateral triangle with side length ##a## and perimeter ##c=3a## is,
$$16\left(\dfrac{\sqrt {3}a^2}{4}\right)^2=3a^4=\frac{c^4}{27}$$
Which is the same result that we got!
When I first used ##A.M. \geq G.M.## on $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
the equality would never hold, because if $$c=c-2x=c-2y=c-2z$$
then we would get $$x=y=z=0,$$ which was not possible.
 
  • #58
fresh_42 said:
Ok, I finally got it. It took so long and several posts that I post my solution as reference:

##P## has the following winning strategy:

##P## chooses ##c=1## in his first move. In case ##Q## sets a value for ##a##, then ##P## finally sets ##b < -a-2\,;## whereas in case ##Q## sets a value for ##b##, ##P## finally sets ##a<-b-2\,.##

We now have to show that the equation has three distinct real roots. Let ##f(x)=x^3+ax^2+bx+1\,.## Since ##\lim_{x \to \infty}f(x)=+\infty## and ##\lim_{x \to -\infty}f(x)=-\infty## there is a real number ##k>1## such that
$$
f(k)> 0\, , \,f(0)=1\, , \,f(-k)<0\, , \,f(1)=a+b+2 < 0
$$
By the intermediate value theorem, there have to be roots ##f(\xi_j)=0## with
$$
-k <\xi_1 <0 < \xi_2 < 1< \xi_3< k
$$
I didn't know there were multiple winning strategies for ##P##!

Also, couldn't we generalise that further: when ##P## chooses any value of ##c## in the first move, the condition $$f(1)<0$$ would change to $$a+b+c+1<0$$
So if ##Q## chooses a value of ##a## then ##P## would choose ##b## such that $$b<-a-c-1$$ And if ##Q## chooses ##b## then ##P## should make sure that $$a<-b-c-1$$

Edit: I realized that this wouldn't work because we would have to show that $$f(c)<0$$
 
Last edited:
  • #59
kshitij said:
When I first used ##A.M. \geq G.M.## on $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
the equality would never hold, because if $$c=c-2x=c-2y=c-2z$$
then we would get $$x=y=z=0,$$ which was not possible.
You have to apply the inequality on ##c-2x,c-2y,c-2z## and leave out ##c##.
 
  • #60
kshitij said:
I didn't knew there were multiple winning strategies for ##P##!

Also, couldn't we generalise that further for when ##P## chooses any value of ##c## in the first move, then the condition for $$f(1)<0$$ would change to $$a+b+c+1<0$$
So if ##Q## chooses a value of ##a## then ##P## would choose ##b## such that $$b<-a-c-1$$ And if ##Q## chooses ##b## then ##P## should make sure that $$a<-b-c-1$$
Probably. But ##c=1## makes the argument easier to read.
 
  • #61
Thanks for posting these, @fresh_42!

Use ##x+y+z = 0## to suggest a new coordinate system ##(X,Y,Z)## over ##\mathbb{R}^3## where ##Z = x+y+z##. I choose to define ##X = x - y## and ##Y = y - z##. These coordinates are linearly independent, since ##\nabla X \times \nabla Y = \nabla Z##.

Since this is a linear transformation, we can write it in matrix form:
$$\left( \begin{array}{cc} X \\ Y \\ Z \end{array} \right) = \left( \begin{array}{cc} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 1 & 1 & 1 \end{array} \right) \left( \begin{array}{cc} x \\ y \\ z \end{array} \right)$$
and we can invert that
$$\left( \begin{array}{cc} x \\ y \\ z \end{array} \right) = \frac{1}{3} \left( \begin{array}{cc} 2 & 1 & 1 \\ -1 & 1 & 1 \\ -1 & -2 & 1 \end{array} \right) \left( \begin{array}{cc} X \\ Y \\ Z \end{array} \right)$$

Now we can write the second constraint in matrix form $$ 9 = \left( \begin{array}{cc} x & y & z \\ \end{array} \right) \left( \begin{array}{cc} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \\ \end{array} \right) \left(\begin{array}{cc} x \\ y \\ z \\ \end{array} \right)$$
Then we can transform that to the ##(X,Y,Z)## coordinate system:
$$\frac{1}{9} \left( \begin{array}{cc} 2 & -1 & -1 \\ 1 & 1 & -2 \\ 1 & 1 & 1 \end{array} \right) \left( \begin{array}{cc} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \\ \end{array} \right) \left( \begin{array}{cc} 2 & 1 & 1 \\ -1 & 1 & 1 \\ -1 & -2 & 1 \end{array} \right) = \left( \begin{array}{cc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right)$$
So, finally, we have $$X^2 + Y^2 = 9$$ so the space is a submersion (or is it immersion? I can never remember the difference) of ##\mathbb{S}^1## and thus a manifold.

...I'll get back to the tangent and normal spaces at point p later. My partner is giving me the evil eye for spending Saturday afternoon doing physics. I think I'm in trouble o:)
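The matrix computation above is easy to verify with numpy; a quick sketch using the matrices as defined in the post:

```python
import numpy as np

# forward transform (x, y, z) -> (X, Y, Z) = (x - y, y - z, x + y + z)
M = np.array([[1, -1,  0],
              [0,  1, -1],
              [1,  1,  1]], dtype=float)

# matrix of the quadratic form in the second constraint
Q = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

Minv = np.linalg.inv(M)
# matches the stated inverse (1/3) * [[2,1,1],[-1,1,1],[-1,-2,1]]
assert np.allclose(Minv, np.array([[2, 1, 1], [-1, 1, 1], [-1, -2, 1]]) / 3.0)

# in the new coordinates the form becomes X^2 + Y^2
assert np.allclose(Minv.T @ Q @ Minv, np.diag([1.0, 1.0, 0.0]))

# sanity check at the point p = (2, -1, -1): x^T Q x = 9 = X^2 + Y^2
p = np.array([2.0, -1.0, -1.0])
P = M @ p
assert np.isclose(p @ Q @ p, 9.0) and np.isclose(P[0]**2 + P[1]**2, 9.0)
```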
 
  • #62
Twigg said:
... so the space is a submersion (or is it immersion? I can never remember the difference) of ##\mathbb{S}^1## and thus a manifold.
Both are primarily mappings, submersions have an everywhere surjective derivative, immersions an everywhere injective derivative.
 
  • #63
second time's the charm?

Suppose ##X## has ##m## orbits and let ##X## be an ordered set such that the first ##r_1## elements belong to orbit ##X_1## (where ##\vert X_1\vert =r_1##), the next ##r_2## elements belong to orbit ##X_2## and so on.

Then ##\phi_i## is the homomorphism for the standard permutation matrix representation (with matrices in ##\mathbb C^{r_i\times r_i}##) of ##G##'s action on ##X_i##.

lemma:
##T_i:= \frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)##
then ##\text{trace}\big(T_i\big)=1##

*proof:*
(i) ##T_i## is idempotent
##T_i^2=\Big(\frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)\Big)\Big(\frac{1}{\vert G\vert}\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\Big(\phi_i(g)\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\vert G\vert \cdot T_i=\frac{1}{\vert G\vert^2}\cdot \vert G\vert^2 \cdot T_i=T_i##

missing detail:
##\phi_i(g)\sum_{g'\in G}\phi_i(g')=\sum_{g'\in G}\phi_i(g)\phi_i(g')=\sum_{g'\in G}\phi_i(gg')=\sum_{g''\in G}\phi_i(g'')= \sum_{g\in G}\phi_i(g)=\vert G\vert \cdot T_i##

(ii) ##T_i## is a positive matrix
##T_i## is a sum (re-scaled by a positive number) of real non-negative matrices and hence is real non-negative. If component ##(k,j)## of ##T_i## is zero, that implies there is no ##g\in G## that maps the ##k##th element of ##X_i## to the ##j##th element -- but ##G## acts transitively on ##X_i##, so this cannot be true; hence ##T_i## is a positive matrix.

(iii) ##T_i## has one eigenvalue of 1 and all else are zero, hence it has trace 1
##T_i## is a convex combination of (doubly) stochastic matrices, hence it is (doubly) stochastic. Since ##T_i## is positive, we may apply Perron theory for stochastic matrices, which tells us that ##\mathbf 1## is an eigenvector with eigenvalue 1, which is *simple*. And since ##T_i## is idempotent, all other (##r_i -1##) eigenvalues must be zero. Thus ##\text{trace}\big(T_i\big)=1##.

(Since ##T_i## is doubly stochastic this actually tells us that it is ##\propto \mathbf{11}^*##-- which implies, in terms of irreducibles, that ##\phi_i(g)## includes exactly one trivial representation-- though as is often the case with representations, we only need the trace.)

the main case
now that we've looked at individual orbits, consider ##G##'s action on ##X## as a whole. This gives the standard permutation representation: for ##g\in G##

##\phi(g) = \left[\begin{matrix}\phi_1(g) & \mathbf 0 & \cdots & \mathbf 0\\ \mathbf 0 & \phi_2(g) & \cdots & \mathbf 0\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf 0 & \mathbf 0 & \cdots & \phi_{m}(g)\end{matrix}\right]##
(i.e. the orbits allow us to decompose ##G##'s action on ##X## into a direct sum of its actions on the individual orbits)
so we have

##\text{# of orbits}=m = \sum_{i=1}^m \text{trace}\big(T_i\big)= \frac{1}{\vert G\vert}\sum_{i=1}^m \sum_{g\in G}\text{trace}\big(\phi_i(g) \big)##
##=\frac{1}{\vert G\vert}\sum_{g\in G}\sum_{i=1}^m\text{trace}\big( \phi_i(g) \big)=\frac{1}{\vert G\vert} \sum_{g\in G}\text{trace}\big(\phi(g) \big)=\frac{1}{\vert G\vert}\sum_{g\in G}|X^g|##
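Both the lemma and the final count are easy to verify numerically on a small example. A sketch (the particular group ##G=\langle(0\,1)(2\,3\,4)\rangle## acting on five points, and all helper names, are my own choice, not from the post):

```python
import numpy as np

def perm_matrix(p):
    # standard permutation matrix: column j has a 1 in row p[j]
    n = len(p)
    M = np.zeros((n, n))
    for j in range(n):
        M[p[j], j] = 1.0
    return M

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

ident = tuple(range(5))
g = (1, 0, 3, 4, 2)           # the permutation (0 1)(2 3 4)

# enumerate the cyclic group generated by g
elems = [ident]
p = g
while p != ident:
    elems.append(p)
    p = compose(g, p)
assert len(elems) == 6        # |G| = lcm(2, 3) = 6

# T = average of the permutation matrices; trace(T) = number of orbits
T = sum(perm_matrix(p) for p in elems) / len(elems)
assert abs(np.trace(T) - 2.0) < 1e-12   # two orbits: {0,1} and {2,3,4}

# agrees with the classical Burnside count (average number of fixed points)
avg_fixed = sum(sum(p[i] == i for i in range(5)) for p in elems) / len(elems)
assert abs(avg_fixed - 2.0) < 1e-12
```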
 
  • #64
Annnd back to finish Problem 5

##p = (2,-1,-1)## implies ##X_p = 3##, ##Y_p = 0##, ##Z_p = 0##. The tangent plane ##T_p M## points along the Y-axis since ##p## is on the X-axis, so $$T_p M = \{ t \left( \begin{array}{cc} 0 \\ 1 \\ -1 \\ \end{array} \right) : \forall t \in \mathbb{R} \}$$ since ##(0,1,-1)## points along the Y-axis.

For the normal plane, it's the same deal except that the normal vector space spans the X-axis, since the Y-axis is the tangent plane. That means $$N_p M = \{ t \left( \begin{array}{cc} 1 \\ -1 \\ 0 \\ \end{array} \right) : \forall t \in \mathbb{R} \}$$

As far as showing that the space is a manifold, I feel like just showing it's a circle in the transformed coordinates is sufficient. But if you feel like you need more formal proof, I could write you an atlas with stereographic projection onto the X-axis.
 
  • #65
Twigg said:
Annnd back to finish Problem 5

##p = (2,-1,-1)## implies ##X_p = 3##, ##Y_p = 0##, ##Z_p = 0##. The tangent plane ##T_p M## points along the Y-axis since ##p## is on the X-axis, so $$T_p M = \{ t \left( \begin{array}{cc} 0 \\ 1 \\ -1 \\ \end{array} \right) : \forall t \in \mathbb{R} \}$$ since ##(0,1,-1)## points along the Y-axis.

For the normal plane, it's the same deal except that the normal vector space spans the X-axis, since the Y-axis is the tangent plane. That means $$N_p M = \{ t \left( \begin{array}{cc} 1 \\ -1 \\ 0 \\ \end{array} \right) : \forall t \in \mathbb{R} \}$$

As far as showing that the space is a manifold, I feel like just showing it's a circle in the transformed coordinates is sufficient. But if you feel like you need more formal proof, I could write you an atlas with stereographic projection onto the X-axis.
I have a different tangent vector, but it's impossible to tell where you went wrong. And the normal space to a one-dimensional subspace in a three-dimensional Euclidean space is two-dimensional.
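A candidate tangent vector is easy to test numerically: it must be orthogonal to the gradients of both defining functions at ##p##. A minimal sketch in Python (the gradient of the quadratic constraint is written out by hand here; it is my own computation, not taken from the thread):

```python
import numpy as np

# Defining functions of M from the problem statement:
#   f1(x) = x1 + x2 + x3
#   f2(x) = x1^2 + 2*x2^2 + x3^2 - 2*x2*(x1 + x3) - 9
def f1(x):
    return x[0] + x[1] + x[2]

def f2(x):
    return x[0]**2 + 2*x[1]**2 + x[2]**2 - 2*x[1]*(x[0] + x[2]) - 9

def grad_f2(x):
    # gradient of f2, computed by hand
    return np.array([2*x[0] - 2*x[1],
                     4*x[1] - 2*(x[0] + x[2]),
                     2*x[2] - 2*x[1]])

p = np.array([2.0, -1.0, -1.0])
grad_f1 = np.array([1.0, 1.0, 1.0])

print(f1(p), f2(p))  # both 0, so p lies on M

# v is tangent at p iff it is orthogonal to both gradients
v = np.array([0.0, 1.0, -1.0])  # the candidate from post #64
print(grad_f1 @ v, grad_f2(p) @ v)  # the second product is nonzero
```

Since the second inner product does not vanish, ##(0,1,-1)## fails the linearized constraints and cannot lie in ##T_pM##.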
 
  • #66
StoneTemplePython said:
second time's the charm?

Suppose ##X## has ##m## orbits and let ##X## be an ordered set such that the first ##r_1## elements belong to orbit ##X_1## (where ##\vert X_1\vert =r_1##), the next ##r_2## elements belong to orbit ##X_2## and so on.

Then ##\phi_i## is the homomorphism for the standard permutation matrix representation (with matrices in ##\mathbb C^{r_i\times r_i}##) of ##G##'s action on ##X_i##.

lemma:
##T_i:= \frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)##
then ##\text{trace}\big(T_i\big)=1##

*proof:*
(i) ##T_i## is idempotent
##T_i^2=\Big(\frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)\Big)\Big(\frac{1}{\vert G\vert}\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\Big(\phi_i(g)\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\vert G\vert \cdot T_i=\frac{1}{\vert G\vert^2}\cdot \vert G\vert^2 \cdot T_i=T_i##

missing detail:
##\phi_i(g)\sum_{g'\in G}\phi_i(g')=\sum_{g'\in G}\phi_i(g)\phi_i(g')=\sum_{g'\in G}\phi_i(gg')=\sum_{g''\in G}\phi_i(g'')= \sum_{g\in G}\phi_i(g)=\vert G\vert \cdot T_i##

(ii) ##T_i## is a positive matrix
##T_i## is a sum (re-scaled by a positive number) of real non-negative matrices and hence is real non-negative. If component ##(k,j)## of ##T_i## were zero, that would imply there is no ##g\in G## that maps the ##k##th element of ##X_i## to the ##j##th element -- but ##G## acts transitively on ##X_i##, so this cannot happen; hence ##T_i## is a positive matrix.

(iii) ##T_i## has one eigenvalue of 1 and all else are zero, hence it has trace 1
##T_i## is a convex combination of (doubly) stochastic matrices, hence it is (doubly) stochastic. Since ##T_i## is positive, we may apply Perron theory for stochastic matrices, which tells us that ##\mathbf 1## is an eigenvector with eigenvalue 1, and that this eigenvalue is *simple*. And since ##T_i## is idempotent, all other ##r_i -1## eigenvalues must be zero. Thus ##\text{trace}\big(T_i\big)=1##.

(Since ##T_i## is doubly stochastic this actually tells us that it is ##\propto \mathbf{11}^*##-- which implies, in terms of irreducibles, that ##\phi_i(g)## includes exactly one trivial representation-- though as is often the case with representations, we only need the trace.)

the main case
now that we've looked at individual orbits, consider ##G##'s action on ##X## as a whole. This gives the standard permutation representation: for ##g\in G##,

##\phi(g) = \left[\begin{matrix}\phi_1(g) & \mathbf 0 & \cdots & \mathbf 0\\ \mathbf 0 & \phi_2(g) & \cdots & \mathbf 0\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf 0 & \mathbf 0 & \cdots & \phi_{m}(g)\end{matrix}\right]##
(i.e. the orbits allow us to decompose ##G##'s action on ##X## into a direct sum of its actions on the individual orbits)
so we have

##\text{# of orbits}=m = \sum_{i=1}^m \text{trace}\big(T_i\big)= \frac{1}{\vert G\vert}\sum_{i=1}^m \sum_{g\in G}\text{trace}\big(\phi_i(g) \big)##
##=\frac{1}{\vert G\vert}\sum_{g\in G}\sum_{i=1}^m\text{trace}\big( \phi_i(g) \big)=\frac{1}{\vert G\vert} \sum_{g\in G}\text{trace}\big(\phi(g) \big)=\frac{1}{\vert G\vert}\sum_{g\in G}|X^g|##
A beautiful solution.

As @Office_Shredder already mentioned, there is still an elementary proof possible.
(Note to all who want to try with the orbit-stabilizer formula.)

The statement is commonly known as Burnside's Lemma. Burnside wrote this formula down around 1900. Historians of mathematics, however, have found the formula already in Cauchy (1845) and Frobenius (1887). The formula is therefore sometimes referred to as "the lemma that is not Burnside's".
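The lemma is easy to check computationally. A minimal sketch (my own example, not from the thread): the cyclic group ##C_4## acting by rotation on 2-colorings of four beads, comparing the Burnside count with a direct orbit count:

```python
from itertools import product

# Burnside's lemma: # of orbits = (1/|G|) * sum over g of |X^g|.
# Here G = C_4 acts on X = {0,1}^4 by cyclic rotation.
n = 4
X = list(product([0, 1], repeat=n))

def rotate(x, k):
    return x[k:] + x[:k]

# |X^g| for each rotation g = rotate by k
fixed = [sum(1 for x in X if rotate(x, k) == x) for k in range(n)]
burnside = sum(fixed) // n

# direct count: represent each orbit by its lexicographically least element
orbits = {min(rotate(x, k) for k in range(n)) for x in X}

print(burnside, len(orbits))  # both counts agree: 6 orbits
```

Here ##\sum_g |X^g| = 16 + 2 + 4 + 2 = 24##, and ##24/4 = 6## matches the direct enumeration of orbits.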
 
  • #67
Problem #2:

The set ##\{ g \cdot x : g \in G \}## is the orbit of ##x## under the action of ##G##, which we denote ##\text{Orb}(x)##. We can define an equivalence relation on ##X##: ##x \sim y## iff ##y = g \cdot x## for some ##g \in G##. We check: ##x \sim x## as ##e \cdot x = x##. If ##y = g \cdot x## then ##x = g^{-1} \cdot y##, so that ##x \sim y## implies ##y \sim x##. If ##y = g \cdot x## and ##z = h \cdot y## then ##z = h g \cdot x##, so that ##x \sim y## and ##y \sim z## imply ##x \sim z##. As this is an equivalence relation on ##X##, the set ##X## is partitioned into disjoint equivalence classes, with each ##x## appearing in exactly one of these equivalence classes.

We have

\begin{align*}
\sum_{g \in G} |X^g| = \sum_{g \in G} \sum_{x \in X} \delta_{g\cdot x, x} = \sum_{x \in X} \sum_{g \in G} \delta_{g\cdot x, x} = \sum_{x \in X} |G_x|
\end{align*}

The elements of ##G_x## are easily verified to form a group, called the stabilizer subgroup of ##G## with respect to ##x##.

We define a map

\begin{align*}
\phi : G \rightarrow \text{Orb} (x)
\end{align*}

by

\begin{align*}
\phi (g) = g \cdot x
\end{align*}

It is clear that ##\phi## is surjective, because the definition of the orbit of ##x## is ##G \cdot x##. Now

\begin{align*}
\phi (g) = \phi (h) \Longleftrightarrow g \cdot x = h \cdot x \Longleftrightarrow g^{-1} h \cdot x = x \Longleftrightarrow g^{-1} h \in G_x .
\end{align*}

That is, ##\phi (g) = \phi (h)## if and only if ##g## and ##h## are in the same coset of the stabilizer subgroup ##G_x##. Thus there is a well-defined bijection:

\begin{align*}
G / G_x \rightarrow \text{Orb} (x)
\end{align*}

So ##\text{Orb}(x)## has the same number of elements as ##G / G_x##. This together with Lagrange's theorem says that

\begin{align*}
|\text{Orb}(x)| = |G| / |G_x|
\end{align*}

Recall that ##X## is the disjoint union of all its orbits. Let ##x_i## be a representative point of the ##i##th orbit; then:

\begin{align*}
\frac{1}{|G|} \sum_{g \in G} |X^g| & = \frac{1}{|G|} \sum_{x \in X} |G_x|
\nonumber \\
&= \sum_{x \in X} \frac{1}{|\text{Orb}(x)|}
\nonumber \\
& = \sum_{i = 1}^{|X / G|} \sum_{x \in \text{Orb}(x_i)} \frac{1}{|\text{Orb}(x_i)|}
\nonumber \\
& = \sum_{i = 1}^{|X / G|} 1
\nonumber \\
& = |X / G| .
\end{align*}
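The orbit-stabilizer relation ##|\text{Orb}(x)| = |G| / |G_x|## used above can be verified directly on a small example (a sketch of my own, assuming ##G = S_3## acting on ##\{0,1,2\}## in the natural way):

```python
from itertools import permutations

# Orbit-stabilizer check: |Orb(x)| * |G_x| = |G| for every x.
# G = S_3, each g a tuple acting by x -> g[x].
G = list(permutations(range(3)))

for x in range(3):
    orbit = {g[x] for g in G}                 # Orb(x)
    stabilizer = [g for g in G if g[x] == x]  # G_x
    assert len(orbit) * len(stabilizer) == len(G)
    print(x, len(orbit), len(stabilizer))  # 3 * 2 = 6 = |S_3|
```

The action is transitive, so every orbit has size 3 and every stabilizer has order ##6/3 = 2##, as the formula predicts.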
 
  • #68
fresh_42 said:
5. Show that
$$
M:=\{\,x\in \mathbb{R}^3\,|\,x_1+x_2+x_3=0\, , \,x_1^2+2x_2^2+x_3^2-2x_2(x_1+x_3)=9\,\} \subseteq \mathbb{R}^3
$$
is a manifold, and determine the tangent space ##T_pM## and the normal space ##N_pM## at ##p=(2,-1,-1)\in M\,.##
The second equation can be written as ##(x_1-x_2)^2+(x_2-x_3)^2=9##. Make the change of variables
$$
x=x_1-x_2, \; y=x_2-x_3, \; z=x_1+x_2+x_3
$$
It is invertible, and in the new variables the set ##M## is given as the solution to the equations
$$
x^2+y^2=9, \; z=0
$$
Obviously a manifold. The point ##p## in the new coordinates is ##(3,0,0)##, so the tangent space is ##x=3, z=0##, or in the original coordinates ##x_1-x_2=0, x_1+x_2+x_3=0##.
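The algebraic identity behind this rewriting is quick to confirm numerically (a sketch; random sampling is my own check, not part of the post):

```python
import random

# Check that the quadratic defining equation of M,
#   x1^2 + 2*x2^2 + x3^2 - 2*x2*(x1 + x3),
# is identically (x1 - x2)^2 + (x2 - x3)^2.
random.seed(0)
max_err = 0.0
for _ in range(1000):
    x1, x2, x3 = (random.uniform(-10, 10) for _ in range(3))
    lhs = x1**2 + 2*x2**2 + x3**2 - 2*x2*(x1 + x3)
    rhs = (x1 - x2)**2 + (x2 - x3)**2
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # zero up to floating-point rounding
```

Expanding ##(x_1-x_2)^2+(x_2-x_3)^2## by hand gives ##x_1^2+2x_2^2+x_3^2-2x_2(x_1+x_3)## exactly, so ##M## is the circle ##x^2+y^2=9## in the plane ##z=0## of the new coordinates.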
 
  • #69
martinbn said:
The second equation can be written as ##(x_1-x_2)^2+(x_2-x_3)^2=9##. Make the change of variables
$$
x=x_1-x_2, \; y=x_2-x_3, \; z=x_1+x_2+x_3
$$
It is invertible, and in the new variables the set ##M## is given as the solution to the equations
$$
x^2+y^2=9, \; z=0
$$
Obviously a manifold. The point ##p## in the new coordinates is ##(3,0,0)##, so the tangent space is ##x=3, z=0##, or in the original coordinates ##x_1-x_2=0, x_1+x_2+x_3=0##.
... which leads to ##T_pM=\ldots ## and ##N_pM= \ldots ##
 
  • #70
Hi @fresh_42. So when you hit 15,000 posts do you get a set of steak knives or something? :smile:
 
