# Math Challenge - July 2021

Mentor
Oops, I somehow missed the adjective "real" in the question! I guess I should've realized it was a little too easy.

I still think ##P## can win, so I'll try again. Let ##f(x)=x^3+ax^2+bx+c.## First ##P## chooses ##b=-1##. Note that this ensures that the discriminant of ##f'(x)=3x^2+2ax+b##, which is ##4a^2-12b##, is strictly positive,
\begin{align*}
0=f'(x)&=3x^2+2ax-1 \\&\Longrightarrow 0=x^2+\dfrac{2a}{3}x-\dfrac{1}{3}\\&\Longrightarrow x_{1,2}=-\dfrac{a}{3}\pm \sqrt{\dfrac{a^2}{9}+\dfrac{1}{3}}=-\dfrac{a}{3}\pm \dfrac{1}{3}\sqrt{a^2+3}
\end{align*}
which is always a positive discriminant.
so ##f## has two distinct critical points, the smaller of which must be a local maximum, and the larger of which a local minimum (no matter the value of the other coefficients). If ##Q## selects a value for ##b##,

Hint: It is surprisingly easy if you do not want to enforce a win by the first move!

Infrared
Gold Member
which is always a positive discriminant.
Yep, this is the point!

Apologies, this should say ##a## instead of ##b.##

Mentor
Oops, I somehow missed the adjective "real" in the question! I guess I should've realized it was a little too easy.

I still think ##P## can win, so I'll try again. Let ##f(x)=x^3+ax^2+bx+c.## First ##P## chooses ##b=-1##. Note that this ensures that the discriminant of ##f'(x)=3x^2+2ax+b##, which is ##4a^2-12b##, is strictly positive, so ##f## has two distinct critical points, the smaller of which must be a local maximum, and the larger of which a local minimum (no matter the value of the other coefficients). If ##Q## selects a value for ##b##,
Let me assume you meant that ##Q## sets a value for ##a##. Then ##P## can choose ##c## in a way that ##f## intersects the ##x##-axis three times.

then ##P## wins by selecting any value of ##c## such that ##-c## is between the local maximum and local minimum values of ##x^3+ax^2+bx.##

Now consider the case that ##Q## selects a value for ##c##. Let ##x_-## and ##x_+## respectively represent the smaller and larger critical points for ##f(x).##
Which is ##3x_-=-a-\sqrt{a^2+3}## and ##3x_+=-a+\sqrt{a^2+3}.##

Let ##g(x)=x^3+ax^2+bx## (so we forget the constant term in ##f##; this is not yet fully defined since we need to pick ##a##)
Which is ##g(x)=f(x)-c=x^3+ax^2-x.##
For ##P## to win, we need a strategy to choose ##a## in such a way that ##-c## lies between ##g(x_+)## and ##g(x_-).##
Why? Do you mean ##x=-c##, ##f(x)=-c##, or ##g(x)=-c##?

Applying the quadratic formula to ##g'(x)## we find that ##x_-=-2a/3+O(1/a)## in the limit ##a\to\infty##, and similarly ##x_+=O(1/a).##

Why don't we have ##x_{\pm} = -\dfrac{a}{3}\pm \sqrt{\dfrac{a^2}{9}+\dfrac{1}{3}}= O(a)## because ##g'(x)=f'(x)=3x^2+2ax-1\,?##

How do you get ##a## into the denominator?

Hence ##\lim_{a\to\infty}g(x_-)=\infty## and ##\lim_{a\to\infty}g(x_+)=0.## Thus if ##-c## is positive, then ##P## can win by choosing a sufficiently large value of ##a##. Analogously, if ##-c## is negative, ##P## can win by choosing a sufficiently negative value of ##a.## Finally if ##c=0##, then ##P## may choose ##a=0## since ##x^3-x## has three distinct roots.
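This limiting argument can be sanity-checked numerically. A minimal sketch (my addition; the particular values of ##a## and ##c## are arbitrary illustrations, not from the post):

```python
import numpy as np

# Quick numerical check of the strategy: b = -1 is fixed, Q picks c, and P
# answers with a large |a| of the appropriate sign.
def real_roots(a, b, c):
    """Count distinct real roots of x^3 + a x^2 + b x + c."""
    roots = np.roots([1.0, a, b, c])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return len(np.unique(np.round(real, 6)))

# Q chose c = -5 (so -c > 0): a large positive a wins for P.
assert real_roots(a=50.0, b=-1.0, c=-5.0) == 3
# Q chose c = 5 (so -c < 0): a sufficiently negative a wins.
assert real_roots(a=-50.0, b=-1.0, c=5.0) == 3
# c = 0: a = 0 works since x^3 - x has roots -1, 0, 1.
assert real_roots(a=0.0, b=-1.0, c=0.0) == 3
print("three distinct real roots in all three cases")
```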

Let me know if this looks better @fresh_42.

Infrared
Gold Member
Why? Do you mean ##x=-c##, ##f(x)=-c##, or ##g(x)=-c##?

I mean that ##P## wins if they can choose ##a## such that ##g(x_+)<-c<g(x_-).## If this holds, then the equation ##g(x)=-c## (which is ##f(x)=0##) has three solutions: one between ##x_-## and ##x_+##, one larger than ##x_+##, and one smaller than ##x_-.##

Why don't we have ##x_{\pm} = -\dfrac{a}{3}\pm \sqrt{\dfrac{a^2}{9}+\dfrac{1}{3}}= O(a)## because ##g'(x)=f'(x)=3x^2+2ax-1\,?##

How do you get ##a## into the denominator?
We have $$x_-=-a/3-\sqrt{a^2/9+1/3}=-a/3-a/3\sqrt{1+3/a^2}=-a/3-a/3(1+O(1/a^2))=-2a/3+O(1/a).$$ The other term is similar. You're right (up to signs) that this term is also ##O(a)##, but that isn't precise enough to conclude that ##g(x_-)## becomes very positive in the limit ##a\to\infty.##
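The expansion can be confirmed numerically. A short check (my addition, not part of the thread) that the error term really shrinks like ##1/(2a)##:

```python
import math

# Numerical check of the expansion above: for the critical points of
# g(x) = x^3 + a x^2 - x, i.e. the roots of 3x^2 + 2ax - 1,
# x_- = -2a/3 + O(1/a); in fact a * (x_- + 2a/3) -> -1/2.
def x_minus(a):
    return -a / 3 - math.sqrt(a * a / 9 + 1 / 3)

for a in (10.0, 100.0, 1000.0):
    err = x_minus(a) + 2 * a / 3
    assert abs(a * err + 0.5) < 0.1   # a * err approaches -1/2
print("x_- + 2a/3 shrinks like -1/(2a)")
```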

Mentor
Ok, I finally got it. It took so long and several posts that I'll post my solution here for reference:

##P## has the following winning strategy:

##P## chooses ##c=1## in his first move. In case ##Q## sets a value for ##a##, then ##P## finally sets ##b < -a-2\,;## whereas in case ##Q## sets a value for ##b##, ##P## finally sets ##a<-b-2\,.##

We now have to show that the equation has three distinct real roots. Let ##f(x)=x^3+ax^2+bx+1\,.## Since ##\lim_{x \to \infty}f(x)=+\infty## and ##\lim_{x \to -\infty}f(x)=-\infty## there is a real number ##k>1## such that
$$f(k)> 0\, , \,f(0)=1\, , \,f(-k)<0\, , \,f(1)=a+b+2 < 0$$
By the intermediate value theorem, there have to be roots ##f(\xi_j)=0## with
$$-k <\xi_1 <0 < \xi_2 < 1< \xi_3< k$$
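This strategy is easy to verify numerically. A minimal sketch (my addition; the sample values of ##a## are arbitrary): ##P## fixes ##c=1##, and if ##Q## sets ##a##, ##P## answers with any ##b<-a-2.##

```python
import numpy as np

# Check the winning strategy: c = 1, and b = -a - 3 satisfies b < -a - 2.
for a in (-4.0, 0.0, 7.0):
    b = -a - 3.0
    roots = np.roots([1.0, a, b, 1.0])
    assert np.all(np.abs(roots.imag) < 1e-8)   # all three roots are real
    x1, x2, x3 = np.sort(roots.real)
    assert x1 < 0 < x2 < 1 < x3                # the interleaving from the proof
print("three distinct real roots in each case")
```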

kshitij
I am saying that any upper bound is not sufficient. We are looking for the maximum. From your calculation and the fact that ##A.M.=G.M.## is attainable, we get that ##f(x,y,z)## is maximal for ##c-2x=c-2y=c-2z,## i.e. ##x=y=z##. Hence we obtain the maximum at ##x=y=z=c/3## with maximum function value ##c^4/27.##

You simply should have mentioned the conclusion. It is necessary that ##A.M.=G.M.## can be obtained since otherwise we only have an arbitrary upper bound.

In case ##x,y,z## are the sides of a triangle, Heron's formula says that ##f(x,y,z)=16\,F^2## where ##F## is the area of the triangle with side lengths ##x,y,z,## i.e. the triangle of maximal area with constant perimeter is the equilateral triangle.
So, basically as I understand it,

we had $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
And ##c## is a constant, so we use ##A.M \geq G.M## on $$g(x,y,z)=(c-2x)(c-2y)(c-2z)$$
So, we get,
\begin{align*}
\dfrac{(c-2x)+(c-2y)+(c-2z)}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{3c-2(x+y+z)}{3}=\dfrac{3c-2c}{3}=\dfrac{c}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{c^3}{27} &\geq (c-2x)(c-2y)(c-2z)\\
\end{align*}
And the equality holds when,
$$c-2x=c-2y=c-2z$$
Which gives us ##x=y=z##, which is the condition for an equilateral triangle, and
$$f(x,y,z)=\dfrac{c^4}{27}$$
And we know that 16 times the square of the area of an equilateral triangle with side length ##a## and perimeter ##c=3a## is,
$$16\left(\dfrac{\sqrt {3}a^2}{4}\right)^2=3a^4=\frac{c^4}{27}$$
Which is the same result that we got!
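The bound and the equality case can be spot-checked numerically. A small sketch (my addition; perimeter ##c=3## is an arbitrary choice):

```python
import random

# With perimeter x + y + z = c fixed, f(x, y, z) = c(c-2x)(c-2y)(c-2z)
# never exceeds c^4/27, with equality at the equilateral point x = y = z = c/3.
def f(x, y, z, c):
    return c * (c - 2 * x) * (c - 2 * y) * (c - 2 * z)

c = 3.0
bound = c ** 4 / 27                      # = 3.0 here
assert abs(f(c / 3, c / 3, c / 3, c) - bound) < 1e-12

random.seed(0)
for _ in range(1000):
    # random triangles with perimeter c (triangle inequality: each side < c/2)
    x = random.uniform(0, c / 2)
    y = random.uniform(0, c / 2)
    z = c - x - y
    if 0 < z < c / 2:
        assert f(x, y, z, c) <= bound + 1e-12
print("c^4/27 bound holds; equality at x = y = z")
```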

fresh_42
So, basically as I understand it,

we had $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
And ##c## is a constant, so we use ##A.M \geq G.M## on $$g(x,y,z)=(c-2x)(c-2y)(c-2z)$$
So, we get,
\begin{align*}
\dfrac{(c-2x)+(c-2y)+(c-2z)}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{3c-2(x+y+z)}{3}=\dfrac{3c-2c}{3}=\dfrac{c}{3} &\geq \left((c-2x)(c-2y)(c-2z)\right)^\frac1 3\\
\dfrac{c^3}{27} &\geq \left((c-2x)(c-2y)(c-2z)\right)\\
\end{align*}
And the equality holds when,
$$c-2x=c-2y=c-2z$$
Which gives us ##x=y=z##, which is the condition for an equilateral triangle, and
$$f(x,y,z)=\dfrac{c^4}{27}$$
And we know that 16 times the square of the area of an equilateral triangle with side length ##a## and perimeter ##c=3a## is,
$$16\left(\dfrac{\sqrt {3}a^2}{4}\right)^2=3a^4=\frac{c^4}{27}$$
Which is the same result that we got!
When I first used ##A.M. \geq G.M.## on $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
the equality would never hold because if $$c=c-2x=c-2y=c-2z$$
then we would get $$x=y=z=0$$ which was not possible

Ok, I finally got it. It took so long and several posts that I'll post my solution here for reference:

##P## has the following winning strategy:

##P## chooses ##c=1## in his first move. In case ##Q## sets a value for ##a##, then ##P## finally sets ##b < -a-2\,;## whereas in case ##Q## sets a value for ##b##, ##P## finally sets ##a<-b-2\,.##

We now have to show that the equation has three distinct real roots. Let ##f(x)=x^3+ax^2+bx+1\,.## Since ##\lim_{x \to \infty}f(x)=+\infty## and ##\lim_{x \to -\infty}f(x)=-\infty## there is a real number ##k>1## such that
$$f(k)> 0\, , \,f(0)=1\, , \,f(-k)<0\, , \,f(1)=a+b+2 < 0$$
By the intermediate value theorem, there have to be roots ##f(\xi_j)=0## with
$$-k <\xi_1 <0 < \xi_2 < 1< \xi_3< k$$
I didn't know there were multiple winning strategies for ##P##!

Also, couldn't we generalise that further? When ##P## chooses any value of ##c## in the first move, the condition for $$f(1)<0$$ would change to $$a+b+c+1<0$$
So if ##Q## chooses a value of ##a## then ##P## would choose ##b## such that $$b<-a-c-1$$ And if ##Q## chooses ##b## then ##P## should make sure that $$a<-b-c-1$$

Edit: I realised that this wouldn't work because we would have to show that $$f(c)<0$$

Mentor
When I first used ##A.M. \geq G.M.## on $$f(x,y,z)=c(c-2x)(c-2y)(c-2z)$$
the equality would never hold because if $$c=c-2x=c-2y=c-2z$$
then we would get $$x=y=z=0$$ which was not possible
You have to apply the inequality to ##c-2x,\,c-2y,\,c-2z## and leave out ##c##.

Mentor
I didn't know there were multiple winning strategies for ##P##!

Also, couldn't we generalise that further? When ##P## chooses any value of ##c## in the first move, the condition for $$f(1)<0$$ would change to $$a+b+c+1<0$$
So if ##Q## chooses a value of ##a## then ##P## would choose ##b## such that $$b<-a-c-1$$ And if ##Q## chooses ##b## then ##P## should make sure that $$a<-b-c-1$$
Probably. But ##c=1## makes the argument easier to read.

Twigg
Gold Member
Thanks for posting these, @fresh_42!

Use ##x+y+z = 0## to suggest a new coordinate system ##(X,Y,Z)## over ##\mathbb{R}^3## where ##Z = x+y+z##. I choose to define ##X = x - y## and ##Y = y - z##. These coordinates are linearly independent, since ##\nabla X \times \nabla Y = \nabla Z \neq 0##.

Since this is a linear transformation, we can write it in matrix form:
$$\left( \begin{array}{c} X \\ Y \\ Z \end{array} \right) = \left( \begin{array}{ccc} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 1 & 1 & 1 \end{array} \right) \left( \begin{array}{c} x \\ y \\ z \end{array} \right)$$
and we can invert that
$$\left( \begin{array}{c} x \\ y \\ z \end{array} \right) = \frac{1}{3} \left( \begin{array}{ccc} 2 & 1 & 1 \\ -1 & 1 & 1 \\ -1 & -2 & 1 \end{array} \right) \left( \begin{array}{c} X \\ Y \\ Z \end{array} \right)$$

Now we can write the second constraint in matrix form $$9 = \left( \begin{array}{ccc} x & y & z \\ \end{array} \right) \left( \begin{array}{ccc} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \\ \end{array} \right) \left(\begin{array}{c} x \\ y \\ z \\ \end{array} \right)$$
Then we can transform that to the ##(X,Y,Z)## coordinate system:
$$\frac{1}{9} \left( \begin{array}{ccc} 2 & -1 & -1 \\ 1 & 1 & -2 \\ 1 & 1 & 1 \end{array} \right) \left( \begin{array}{ccc} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \\ \end{array} \right) \left( \begin{array}{ccc} 2 & 1 & 1 \\ -1 & 1 & 1 \\ -1 & -2 & 1 \end{array} \right) = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right)$$
So, finally, we have $$X^2 + Y^2 = 9$$ so the space is a submersion (or is it immersion? I can never remember the difference) of ##\mathbb{S}^1## and thus a manifold.
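The matrix identity above is easy to verify mechanically. A short numerical check (my addition, not part of the post):

```python
import numpy as np

# Transforming the quadratic form of the constraint to the (X, Y, Z)
# coordinates yields diag(1, 1, 0), i.e. the constraint X^2 + Y^2 = 9.
A = np.array([[1, -1, 0], [0, 1, -1], [1, 1, 1]], dtype=float)    # (x,y,z) -> (X,Y,Z)
Q = np.array([[1, -1, 0], [-1, 2, -1], [0, -1, 1]], dtype=float)  # constraint form
Ainv = np.linalg.inv(A)
assert np.allclose(3 * Ainv, [[2, 1, 1], [-1, 1, 1], [-1, -2, 1]])
transformed = Ainv.T @ Q @ Ainv
assert np.allclose(transformed, np.diag([1.0, 1.0, 0.0]))
print("constraint becomes X^2 + Y^2 = 9 in the new coordinates")
```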

...I'll get back to the tangent and normal spaces at point p later. My partner is giving me the evil eye for spending Saturday afternoon doing physics. I think I'm in trouble.

Mentor
... so the space is a submersion (or is it immersion? I can never remember the difference) of ##\mathbb{S}^1## and thus a manifold.
Both are primarily mappings: submersions have an everywhere surjective derivative, immersions an everywhere injective derivative.

Twigg
StoneTemplePython
Gold Member
second time's the charm?

Suppose ##X## has ##m## orbits and let ##X## be an ordered set such that the first ##r_1## elements belong to orbit ##X_1## (where ##\vert X_1\vert =r_1##), the next ##r_2## elements belong to orbit ##X_2## and so on.

Then ##\phi_i## is the homomorphism for the standard permutation matrix representation (with matrices in ##\mathbb C^{r_i\times r_i}##) of ##G##'s action on ##X_i##.

lemma:
##T_i:= \frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)##
then ##\text{trace}\big(T_i\big)=1##

*proof:*
(i) ##T_i## is idempotent
##T_i^2=\Big(\frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)\Big)\Big(\frac{1}{\vert G\vert}\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\Big(\phi_i(g)\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\vert G\vert \cdot T_i=\frac{1}{\vert G\vert^2}\cdot \vert G\vert^2 \cdot T_i=T_i##

missing detail:
##\phi_i(g)\sum_{g'\in G}\phi_i(g')=\sum_{g'\in G}\phi_i(g)\phi_i(g')=\sum_{g'\in G}\phi_i(gg')=\sum_{g''\in G}\phi_i(g'')= \sum_{g\in G}\phi_i(g)=\vert G\vert \cdot T_i##

(ii) ##T_i## is a positive matrix
##T_i## is a sum (re-scaled by a positive number) of real non-negative matrices and hence is real non-negative. If component ##(k,j)## of ##T_i## were zero, that would imply there is no ##g\in G## that maps the ##k##th element of ##X_i## to the ##j##th element -- but ##G## acts transitively on ##X_i##, so this cannot be; hence ##T_i## is a positive matrix.

(iii) ##T_i## has one eigenvalue of 1 and all else are zero, hence it has trace 1
##T_i## is a convex combination of (doubly) stochastic matrices, hence it is (doubly) stochastic. Since ##T_i## is positive, we may apply Perron theory for stochastic matrices, which tells us that ##\mathbf 1## is an eigenvector with eigenvalue 1, which is *simple*. And since ##T_i## is idempotent, all other (##r_i -1##) eigenvalues must be zero. Thus ##\text{trace}\big(T_i\big)=1.##

(Since ##T_i## is doubly stochastic this actually tells us that it is ##\propto \mathbf{11}^*##-- which implies, in terms of irreducibles, that ##\phi_i(g)## includes exactly one trivial representation-- though as is often the case with representations, we only need the trace.)

the main case
now that we've looked at individual orbits, consider ##G##'s action on ##X## as a whole. This gives the standard permutation representation where, for ##g\in G##,

##\phi(g) = \left[\begin{matrix}\phi_1(g) & \mathbf 0 & \cdots & \mathbf 0\\ \mathbf 0 & \phi_2(g) & \cdots & \mathbf 0\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf 0 & \mathbf 0 & \cdots & \phi_{m}(g)\end{matrix}\right]##
(i.e. the orbits allow us to decompose ##G##'s action on ##X## into a direct sum of its actions on the individual orbits)
so we have

##\text{\# of orbits}=m = \sum_{i=1}^m \text{trace}\big(T_i\big)= \frac{1}{\vert G\vert}\sum_{i=1}^m \sum_{g\in G}\text{trace}\big(\phi_i(g) \big)##
##=\frac{1}{\vert G\vert}\sum_{g\in G}\sum_{i=1}^m\text{trace}\big( \phi_i(g) \big)=\frac{1}{\vert G\vert} \sum_{g\in G}\text{trace}\big(\phi(g) \big)=\frac{1}{\vert G\vert}\sum_{g\in G}|X^g|##
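The counting formula itself can be checked independently on a tiny example. A sketch (my addition): ##C_4## acting on the ##2^4## two-colorings of a 4-bead necklace by rotation.

```python
from itertools import product

# Average number of fixed colorings over the group equals the number of orbits.
X = list(product([0, 1], repeat=4))
rotations = [lambda s, k=k: s[k:] + s[:k] for k in range(4)]

total_fixed = sum(sum(1 for s in X if g(s) == s) for g in rotations)
orbit_count = total_fixed // len(rotations)   # (16 + 2 + 4 + 2) / 4 = 6

# Count orbits directly via a canonical (minimal) representative.
orbits = {min(g(s) for g in rotations) for s in X}
assert orbit_count == len(orbits) == 6
print("Burnside count matches direct orbit count:", orbit_count)
```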

fresh_42
Twigg
Gold Member
Annnd back to finish Problem 5

##p = (2,-1,-1)## implies ##X_p = 3##, ##Y_p = 0##, ##Z_p = 0##. The tangent plane ##T_p M## points along the Y-axis since ##p## is on the X-axis, so $$T_p M = \{ t \left( \begin{array}{c} 0 \\ 1 \\ -1 \\ \end{array} \right) : t \in \mathbb{R} \}$$ since ##(0,1,-1)## points along the Y-axis.

For the normal plane, it's the same deal except that the normal vector space spans the X-axis, since the Y-axis is the tangent plane. That means $$N_p M = \{ t \left( \begin{array}{c} 1 \\ -1 \\ 0 \\ \end{array} \right) : t \in \mathbb{R} \}$$

As far as showing that the space is a manifold, I feel like just showing it's a circle in the transformed coordinates is sufficient. But if you feel like you need more formal proof, I could write you an atlas with stereographic projection onto the X-axis.

Mentor
Annnd back to finish Problem 5

##p = (2,-1,-1)## implies ##X_p = 3##, ##Y_p = 0##, ##Z_p = 0##. The tangent plane ##T_p M## points along the Y-axis since ##p## is on the X-axis, so $$T_p M = \{ t \left( \begin{array}{c} 0 \\ 1 \\ -1 \\ \end{array} \right) : t \in \mathbb{R} \}$$ since ##(0,1,-1)## points along the Y-axis.

For the normal plane, it's the same deal except that the normal vector space spans the X-axis, since the Y-axis is the tangent plane. That means $$N_p M = \{ t \left( \begin{array}{c} 1 \\ -1 \\ 0 \\ \end{array} \right) : t \in \mathbb{R} \}$$

As far as showing that the space is a manifold, I feel like just showing it's a circle in the transformed coordinates is sufficient. But if you feel like you need more formal proof, I could write you an atlas with stereographic projection onto the X-axis.
I have a different tangent vector, but it's impossible to tell where you went wrong. And the normal space to a one-dimensional subspace in a three-dimensional Euclidean space is two-dimensional.

Mentor
second time's the charm?

Suppose ##X## has ##m## orbits and let ##X## be an ordered set such that the first ##r_1## elements belong to orbit ##X_1## (where ##\vert X_1\vert =r_1##), the next ##r_2## elements belong to orbit ##X_2## and so on.

Then ##\phi_i## is the homomorphism for the standard permutation matrix representation (with matrices in ##\mathbb C^{r_i\times r_i}##) of ##G##'s action on ##X_i##.

lemma:
##T_i:= \frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)##
then ##\text{trace}\big(T_i\big)=1##

*proof:*
(i) ##T_i## is idempotent
##T_i^2=\Big(\frac{1}{\vert G\vert}\sum_{g\in G}\phi_i(g)\Big)\Big(\frac{1}{\vert G\vert}\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\Big(\phi_i(g)\sum_{g'\in G}\phi_i(g')\Big)=\frac{1}{\vert G\vert^2}\sum_{g\in G}\vert G\vert \cdot T_i=\frac{1}{\vert G\vert^2}\cdot \vert G\vert^2 \cdot T_i=T_i##

missing detail:
##\phi_i(g)\sum_{g'\in G}\phi_i(g')=\sum_{g'\in G}\phi_i(g)\phi_i(g')=\sum_{g'\in G}\phi_i(gg')=\sum_{g''\in G}\phi_i(g'')= \sum_{g\in G}\phi_i(g)=\vert G\vert \cdot T_i##

(ii) ##T_i## is a positive matrix
##T_i## is a sum (re-scaled by a positive number) of real non-negative matrices and hence is real non-negative. If component ##(k,j)## of ##T_i## were zero, that would imply there is no ##g\in G## that maps the ##k##th element of ##X_i## to the ##j##th element -- but ##G## acts transitively on ##X_i##, so this cannot be; hence ##T_i## is a positive matrix.

(iii) ##T_i## has one eigenvalue of 1 and all else are zero, hence it has trace 1
##T_i## is a convex combination of (doubly) stochastic matrices, hence it is (doubly) stochastic. Since ##T_i## is positive, we may apply Perron theory for stochastic matrices, which tells us that ##\mathbf 1## is an eigenvector with eigenvalue 1, which is *simple*. And since ##T_i## is idempotent, all other (##r_i -1##) eigenvalues must be zero. Thus ##\text{trace}\big(T_i\big)=1.##

(Since ##T_i## is doubly stochastic this actually tells us that it is ##\propto \mathbf{11}^*##-- which implies, in terms of irreducibles, that ##\phi_i(g)## includes exactly one trivial representation-- though as is often the case with representations, we only need the trace.)

the main case
now that we've looked at individual orbits, consider ##G##'s action on ##X## as a whole. This gives the standard permutation representation where, for ##g\in G##,

##\phi(g) = \left[\begin{matrix}\phi_1(g) & \mathbf 0 & \cdots & \mathbf 0\\ \mathbf 0 & \phi_2(g) & \cdots & \mathbf 0\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf 0 & \mathbf 0 & \cdots & \phi_{m}(g)\end{matrix}\right]##
(i.e. the orbits allow us to decompose ##G##'s action on ##X## into a direct sum of its actions on the individual orbits)
so we have

##\text{\# of orbits}=m = \sum_{i=1}^m \text{trace}\big(T_i\big)= \frac{1}{\vert G\vert}\sum_{i=1}^m \sum_{g\in G}\text{trace}\big(\phi_i(g) \big)##
##=\frac{1}{\vert G\vert}\sum_{g\in G}\sum_{i=1}^m\text{trace}\big( \phi_i(g) \big)=\frac{1}{\vert G\vert} \sum_{g\in G}\text{trace}\big(\phi(g) \big)=\frac{1}{\vert G\vert}\sum_{g\in G}|X^g|##
A beautiful solution.

As @Office_Shredder already mentioned, there is still an elementary proof possible.
(Note to all who want to try with the orbit-stabilizer formula.)

The statement is commonly known as Burnside's Lemma. Burnside wrote this formula down around 1900. Historians of mathematics, however, have found the formula already in Cauchy (1845) and Frobenius (1887). Therefore the formula is sometimes referred to as "the lemma that is not Burnside's."

StoneTemplePython
julian
Gold Member
Problem #2:

The set ##\{ g \cdot x : g \in G \}## is the orbit of ##x## under the action of ##G##, which we denote ##\text{Orb}(x)##. We can define an equivalence relation on ##X##: ##x \sim y## iff ##y = g \cdot x## for some ##g \in G##. We check: ##x \sim x## as ##e \cdot x = x##. If ##y = g \cdot x## then ##x = g^{-1} \cdot y##, so ##x \sim y## implies ##y \sim x##. If ##y = g \cdot x## and ##z = h \cdot y## then ##z = h g \cdot x##, so ##x \sim y## and ##y \sim z## imply ##x \sim z##. As this is an equivalence relation on ##X##, the set ##X## is partitioned into disjoint equivalence classes, with each ##x## appearing in exactly one of these classes.

We have

\begin{align*}
\sum_{g \in G} |X^g| = \sum_{g \in G} \sum_{x \in X} \delta_{g\cdot x, x} = \sum_{x \in X} \sum_{g \in G} \delta_{g\cdot x, x} = \sum_{x \in X} |G_x|
\end{align*}

The elements of ##G_x## are easily verified to form a group called the stabilizer subgroup of ##G## with respect to ##x##.

We define a map

\begin{align*}
\phi : G \rightarrow \text{Orb} (x)
\end{align*}

by

\begin{align*}
\phi (g) = g \cdot x
\end{align*}

It is clear that ##\phi## is surjective, because the definition of the orbit of ##x## is ##G \cdot x##. Now

\begin{align*}
\phi (g) = \phi (h) \Longleftrightarrow g \cdot x = h \cdot x \Longleftrightarrow g^{-1} h \cdot x = x \Longleftrightarrow g^{-1} h \in G_x .
\end{align*}

That is, ##\phi (g) = \phi (h)## if and only if ##g## and ##h## are in the same coset of the stabilizer subgroup ##G_x##. Thus there is a well-defined bijection:

\begin{align*}
G / G_x \rightarrow \text{Orb} (x)
\end{align*}

So ##\text{Orb}(x)## has the same number of elements as ##G / G_x##. This together with Lagrange's theorem says that

\begin{align*}
|\text{Orb}(x)| = |G| / |G_x|
\end{align*}

Recall we have that ##X## is the disjoint union of all its orbits. Let ##x_i## be a representative point of the ##i##th orbit, then:

\begin{align*}
\frac{1}{|G|} \sum_{g \in G} |X^g| & = \frac{1}{|G|} \sum_{x \in X} |G_x|
\nonumber \\
&= \sum_{x \in X} \frac{1}{|\text{Orb}(x)|}
\nonumber \\
& = \sum_{i = 1}^{|X / G|} \sum_{x \in \text{Orb}(x_i)} \frac{1}{|\text{Orb}(x_i)|}
\nonumber \\
& = \sum_{i = 1}^{|X / G|} 1
\nonumber \\
& = |X / G| .
\end{align*}
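The orbit-stabilizer relation ##|\text{Orb}(x)| = |G|/|G_x|## at the heart of this computation is easy to check on a small example. A sketch (my addition): ##S_3## acting naturally on ##\{0,1,2\}##.

```python
from itertools import permutations

# S_3 acting on {0, 1, 2}: each permutation g sends i to g[i].
G = list(permutations(range(3)))          # all 6 permutations
for x in range(3):
    orbit = {g[x] for g in G}
    stabilizer = [g for g in G if g[x] == x]
    assert len(orbit) == len(G) // len(stabilizer) == 3
    assert len(stabilizer) == 2
print("|Orb(x)| * |G_x| = |G| for every x")
```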

martinbn
5. Show that
$$M:=\{\,x\in \mathbb{R}^3\,|\,x_1+x_2+x_3=0\, , \,x_1^2+2x_2^2+x_3^2-2x_2(x_1+x_3)=9\,\} \subseteq \mathbb{R}^3$$
is a manifold, and determine the tangent space ##T_pM## and the normal space ##N_pM## at ##p=(2,-1,-1)\in M\,.##
The second equation can be written as ##(x_1-x_2)^2+(x_2-x_3)^2=9##. Make the change of variables
$$x=x_1-x_2, \; y=x_2-x_3, \; z=x_1+x_2+x_3$$
It is invertible, and in the new variables the set ##M## is given as the solution to the equations
$$x^2+y^2=9, \; z=0$$
Obviously a manifold. The point ##p## in the new coordinates is ##(3,0,0)##, so the tangent space is ##x=3, z=0## or in the original coordinates ##x_1-x_2=0, x_1+x_2+x_3=0##.

Mentor
The second equation can be written as ##(x_1-x_2)^2+(x_2-x_3)^2=9##. Make the change of variables
$$x=x_1-x_2, \; y=x_2-x_3, \; z=x_1+x_2+x_3$$
It is invertible, and in the new variables the set ##M## is given as the solution to the equations
$$x^2+y^2=9, \; z=0$$
Obviously a manifold. The point ##p## in the new coordinates is ##(3,0,0)##, so the tangent space is ##x=3, z=0## or in the original coordinates ##x_1-x_2=0, x_1+x_2+x_3=0##.
... which leads to ##T_pM=\ldots ## and ##N_pM= \ldots ##

julian
Gold Member
Hi @fresh_42. So when you hit 15,000 posts do you get a set of steak knives or something?

martinbn
... which leads to ##T_pM=\ldots ## and ##N_pM= \ldots ##
Right, forgot the normal one. It is the plane ## x_2=x_3##.

Mentor
Hi @fresh_42. So when you hit 15,000 posts do you get a set of steak knives or something?
... maybe in the back ...

julian
Gold Member
You deserve a special award from Greg.

Twigg and fresh_42
Mentor
The second equation can be written as ##(x_1-x_2)^2+(x_2-x_3)^2=9##. Make the change of variables
$$x=x_1-x_2, \; y=x_2-x_3, \; z=x_1+x_2+x_3$$
It is invertible, and in the new variables the set ##M## is given as the solution to the equations
$$x^2+y^2=9, \; z=0$$
Obviously a manifold. The point ##p## in the new coordinates is ##(3,0,0)##, so the tangent space is ##x=3, z=0## or in the original coordinates ##x_1-x_2=0, x_1+x_2+x_3=0##.
Right, forgot the normal one. It is the plane ## x_2=x_3##.

The easier the problem, the lazier the solver ...
For all viewers, here is the long version:

We consider the function ##f\, : \,\mathbb{R}^3\longrightarrow \mathbb{R}^2## defined by
$$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} \stackrel{f}{\longrightarrow} \begin{bmatrix}x_1+x_2+x_3 \\ x_1^2+2x_2^2+x_3^2-2x_2(x_1+x_3)-9\end{bmatrix}$$
such that ##M=f^{-1}(\{(0,0)\}).## Its Jacobi matrix is
$$J_xf = \begin{bmatrix}1&1&1\\2(x_1-x_2)&2(2x_2-x_1-x_3)&2(x_3-x_2)\end{bmatrix}$$
##\operatorname{rk}J_xf=1## if ##x_1-x_2=2x_2-x_1-x_3=x_3-x_2,## i.e. ##x_1=x_2=x_3\,.## Since ##f(t,t,t)=(3t,-9)\neq (0,0)##, ##(0,0)## is a regular value of ##f## and ##M## a submanifold of dimension ##3-1-1=1\,.##

For ##p=(2,-1,-1)## we have ##J_pf=\begin{bmatrix}1&1&1\\ 6&-6&0\end{bmatrix}##, hence ##T_pM=\ker D_pf=\mathbb{R}\cdot \begin{bmatrix} 1\\1\\-2 \end{bmatrix}## and ##N_pM=(T_pM)^\perp=\mathbb{R}\cdot \begin{bmatrix}1\\1\\1\end{bmatrix}+ \mathbb{R}\cdot \begin{bmatrix}1\\-1\\0\end{bmatrix}##, which is the row space of ##J_pf\,.##
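These tangent and normal spaces can be double-checked numerically. A short sketch (my addition):

```python
import numpy as np

# Check the computation at p = (2, -1, -1).
J_p = np.array([[1, 1, 1], [6, -6, 0]], dtype=float)   # Jacobi matrix J_pf
t = np.array([1, 1, -2], dtype=float)                  # claimed tangent direction
assert np.allclose(J_p @ t, 0)                         # t spans ker J_pf
for n in ([1, 1, 1], [1, -1, 0]):                      # spanning set of N_pM
    assert np.isclose(np.dot(t, np.array(n, dtype=float)), 0)
print("T_pM = ker J_pf is orthogonal to the row space N_pM")
```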

martinbn