Math Challenge - June 2021

• Challenge
• fresh_42
• Featured
kshitij said:
I shouldn't have written ##c^2=(k-2)(k+2)##; I should have said directly that from ##c^2+2^2=k^2## the only solution we get is ##c=0## and ##k=2## (as there are no Pythagorean triples containing 2). I was thinking of something else when I wrote ##c^2=(k-2)(k+2)##.

I don't like that solution anyway; maybe I can think of a better one later.
You only claimed to know that ##c## and ##k## are rational, and ##(5/2)^2=(3/2)^2+2^2##.

I think problem 15 has been pieced together correctly, but it's important to understand why the first attempt failed on the last line. If you still aren't sure, please post here again.

@BWV I think A and B are independent. I haven't checked carefully what you wrote but consider the outcome of the sum being odd after you fix both die rolls (which completely determines B) and also two of the coins just for fun. What is the probability of event A? Any event that is determined just by the die rolls must be independent of A because of this.

That leaves the fun part of checking the triple.

Last edited:
You are right: dividing the 7/72 by P(B) gives 1/2, so they are independent.

OK, one last shot (maybe the previous one can be deleted?)
An ideal coin is tossed three times in a row and then an ideal die is thrown twice in a row. Each time you toss a coin you get one point if the coin shows "tails" and two points if the coin shows "heads". If you add the total of the two dice rolls to this number of points, you get the total number of points.

Let A be the event "the total number of points achieved is odd", B be the event "the total of the two dice rolls is divisible by 5", and C the event "the number of points achieved in the three coin tosses is at least 5". Investigate whether A, B, C are pairwise stochastically independent. Also investigate whether A, B, C are stochastically independent.

T = total points, ranging from 5 to 18

Probabilities:

P(A) = P(total odd) = 1/2 (even number of integer outcomes, symmetric probabilities)

P(B) = P(dice total divisible by 5) = P(total 5) + P(total 10) = 1/9 + 1/12 = 7/36

P(C) = P(coin points ≥ 5) = P(5 points) + P(6 points) = 3/8 + 1/8 = 1/2

Two events are independent if P(A∩B) = P(A)P(B) or, alternatively, P(A|B) = P(A) and P(B|A) = P(B)

A and B independent as

P(A)=1/2, P(A|B)=1/2

Dice total: 5 (P = 1/9), 10 (P = 1/12)
Coin points: 3 (P = 1/8), 4 (P = 3/8), 5 (P = 3/8), 6 (P = 1/8)

Sum  P     Odd?  Contribution
8    1/72  no    0
9    1/24  yes   1/24
10   1/24  no    0
11   1/72  yes   1/72
13   1/96  yes   1/96
14   1/32  no    0
15   1/32  yes   1/32
16   1/96  no    0

P(A∩B) = 1/24 + 1/72 + 1/96 + 1/32 = 7/72
P(A|B) = (7/72) / (7/36) = 1/2

A and C independent as

P(A)P(C)=1/4 = P(A∩C)=P(A|C)P(C)=1/4

B and C independent as

P(B)P(C)=7/72 = P(B∩C)=P(B|C)P(C)=7/72

A, B, C not jointly independent as
P(A)P(B)P(C) = (1/2)(1/2)(7/36) = 7/144, whereas P(A∩B∩C) (only the outcomes 11 and 15 above lie in all of A, B and C) = 1/72 + 1/32 = 13/288 ≠ 7/144
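These numbers can be double-checked by exhaustively enumerating the whole sample space (8 coin sequences × 36 dice rolls); a quick Python sketch, not part of the original solution:

```python
# Enumerate all outcomes and verify the pairwise-independence claims
# and the failure of joint independence, with exact rational arithmetic.
from fractions import Fraction
from itertools import product

outcomes = []  # (probability, coin points, dice total)
for coins in product((1, 2), repeat=3):        # tails -> 1 point, heads -> 2
    for dice in product(range(1, 7), repeat=2):
        p = Fraction(1, 8) * Fraction(1, 36)
        outcomes.append((p, sum(coins), sum(dice)))

def prob(event):
    return sum(p for p, c, d in outcomes if event(c, d))

A = lambda c, d: (c + d) % 2 == 1    # total number of points is odd
B = lambda c, d: d % 5 == 0          # dice total divisible by 5
C = lambda c, d: c >= 5              # coin points at least 5

pA, pB, pC = prob(A), prob(B), prob(C)
print(pA, pB, pC)   # 1/2 7/36 1/2
print(prob(lambda c, d: A(c, d) and B(c, d)) == pA * pB)   # True
print(prob(lambda c, d: A(c, d) and C(c, d)) == pA * pC)   # True
print(prob(lambda c, d: B(c, d) and C(c, d)) == pB * pC)   # True
pABC = prob(lambda c, d: A(c, d) and B(c, d) and C(c, d))
print(pABC, pA * pB * pC)   # 13/288 7/144 -- not jointly independent
```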

Hölder continuity implies uniform continuity, so we'll show that the converse fails, i.e. the two conditions are not equivalent.

The set $[0,c]$ is compact and $-\frac{1}{ \ln x}\to 0 = f(0)$ as $x\to 0+$. Hence $f$ is uniformly continuous.

Assume for a contradiction $f$ is Hölder continuous. That is, there exist $0< \alpha \leqslant 1$ and $M>0$ such that
$$\forall x,x'\in [0,c],\quad \left| f(x) - f(x')\right| \leqslant M |x-x'|^\alpha.$$
In particular (taking $x'=0$) we have for every $0\leqslant x\leqslant c$ that
$$\left | \frac{1}{\ln x} \right| \leqslant M |x|^\alpha \Leftrightarrow 1\leqslant M|x^\alpha\ln x|.$$
But $|x^\alpha \ln x| \to 0$ as $x\to 0+$ (e.g. apply L'Hôpital's rule to $\frac{\ln x}{x^{-\alpha}}$ as $x\to 0+$). This is a contradiction, so $f$ cannot be Hölder continuous.
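The limit ##|x^\alpha \ln x| \to 0## is easy to illustrate numerically (a sketch with the arbitrary choice ##\alpha = 1/2##):

```python
# Numerically illustrate that x**alpha * |ln x| -> 0 as x -> 0+ (alpha = 0.5).
from math import log

values = [x**0.5 * abs(log(x)) for x in (1e-2, 1e-4, 1e-8, 1e-16)]
print(values)  # strictly decreasing towards 0
```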

Last edited:
I have two ways for problem 3, but can't think of a third right now.
i) by direct substitution:\begin{align*}
p \dfrac{\partial V}{\partial T} \bigg{)}_{p} - C = 0 \quad &\implies \quad \dfrac{\partial V}{\partial T} \bigg{)}_{p} = \frac{C}{p} \\

V + \frac{CB}{2\sqrt{p}} - C \dfrac{\partial T}{\partial p} \bigg{)}_{V} = 0 \quad &\implies \quad \dfrac{\partial T}{\partial p} \bigg{)}_{V} = \frac{V + \frac{CB}{2\sqrt{p}}}{C} \\

p + V \dfrac{\partial p}{\partial V} \bigg{)}_{T} + \frac{CB}{2\sqrt{p}} \dfrac{\partial p}{\partial V} \bigg{)}_{T} = 0 \quad &\implies \quad \dfrac{\partial p}{\partial V} \bigg{)}_{T} = \frac{-p}{V + \frac{CB}{2\sqrt{p}}}
\end{align*}then ##\dfrac{\partial V}{\partial T} \bigg{)}_{p} \dfrac{\partial T}{\partial p} \bigg{)}_{V} \dfrac{\partial p}{\partial V} \bigg{)}_{T} = \dfrac{C}{p} \cdot \dfrac{V + \frac{CB}{2\sqrt{p}}}{C} \cdot \dfrac{-p}{V + \frac{CB}{2\sqrt{p}}} = -1##.

ii) Given ##f(p,V,T) = 0## for some ##f## then if ##f_p \neq 0## at some point then by the implicit function theorem there's some neighbourhood where ##p = p(V,T)## and so ##f(p(V,T),V,T) = 0##. Then ##f_V + f_p \dfrac{\partial p}{\partial V} \bigg{)}_T = 0 \implies \dfrac{\partial p}{\partial V} \bigg{)}_T = -f_V / f_p##. Similarly writing ##V = V(p,T)## gives ##\dfrac{\partial V}{\partial p} \bigg{)}_T = -f_p/f_V##, which proves the reciprocity relation ##\dfrac{\partial p}{\partial V} \bigg{)}_T \dfrac{\partial V}{\partial p} \bigg{)}_T = 1## [and same for the other two combinations].

Now given ##dV = \dfrac{\partial V}{\partial p} \bigg{)}_{T} dp + \dfrac{\partial V}{\partial T} \bigg{)}_{p} dT## write\begin{align*}
dp &= \dfrac{\partial p}{\partial V} \bigg{)}_{T} dV + \dfrac{\partial p}{\partial T} \bigg{)}_{V} dT \\

dp &= \dfrac{\partial p}{\partial V} \bigg{)}_{T} \left\{ \dfrac{\partial V}{\partial p} \bigg{)}_{T} dp + \dfrac{\partial V}{\partial T} \bigg{)}_{p} dT \right\} + \dfrac{\partial p}{\partial T} \bigg{)}_{V} dT \\

dp &= dp + \dfrac{\partial p}{\partial V} \bigg{)}_{T} \dfrac{\partial V}{\partial T} \bigg{)}_{p} dT + \dfrac{\partial p}{\partial T} \bigg{)}_{V} dT
\end{align*}hence ##\dfrac{\partial p}{\partial V} \bigg{)}_{T} \dfrac{\partial V}{\partial T} \bigg{)}_{p} = - \dfrac{\partial p}{\partial T} \bigg{)}_{V} \implies \dfrac{\partial p}{\partial V} \bigg{)}_{T} \dfrac{\partial V}{\partial T} \bigg{)}_{p} \dfrac{\partial T}{\partial p} \bigg{)}_{V} = -1##.
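For a sanity check, note that the three differentials above all come from the single relation ##f(p,V,T) = pV + CB\sqrt{p} - CT = 0## (my reading of the equation of state in problem 3). A quick numerical sketch, with arbitrary test values for ##C##, ##B## and the state point:

```python
# Check the three explicit partial derivatives and their cyclic product -1
# at an arbitrary point on the surface p*V + C*B*sqrt(p) - C*T = 0.
from math import sqrt, isclose

C, B = 2.0, 3.0
p0, V0 = 4.0, 5.0
T0 = (p0 * V0 + C * B * sqrt(p0)) / C        # chosen so f(p0, V0, T0) = 0

dV_dT = C / p0                               # (dV/dT) at constant p
dT_dp = (V0 + C * B / (2 * sqrt(p0))) / C    # (dT/dp) at constant V
dp_dV = -p0 / (V0 + C * B / (2 * sqrt(p0)))  # (dp/dV) at constant T
print(isclose(dV_dT * dT_dp * dp_dV, -1.0))  # True

# cross-check (dV/dT)_p by finite differences, using V(p, T) solved from f = 0
V = lambda p, T: (C * T - C * B * sqrt(p)) / p
h = 1e-6
print(isclose((V(p0, T0 + h) - V(p0, T0 - h)) / (2 * h), dV_dT))  # True
```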

@BWV that looks good to me now.

I think this is just a complicated version of the following: you flip 2 coins which give value 1 if tails and 2 if heads, and let A = "coin 1 is even", B = "coin 2 is even", C = "coin 1 + coin 2 is even". If you want a simpler example in your toolbox :)
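For what it's worth, the toy example can be enumerated in a few lines (a sketch; coins take value 1 for tails and 2 for heads as above):

```python
# Two coins: pairwise independent events that are not jointly independent.
from itertools import product
from fractions import Fraction

space = list(product((1, 2), repeat=2))   # each coin: 1 (tails) or 2 (heads)
P = lambda ev: Fraction(sum(1 for w in space if ev(w)), len(space))

A = lambda w: w[0] % 2 == 0               # coin 1 even
B = lambda w: w[1] % 2 == 0               # coin 2 even
C = lambda w: (w[0] + w[1]) % 2 == 0      # sum even

both = lambda e, f: (lambda w: e(w) and f(w))
print(P(both(A, B)) == P(A) * P(B))   # True
print(P(both(A, C)) == P(A) * P(C))   # True
print(P(both(B, C)) == P(B) * P(C))   # True
print(P(lambda w: A(w) and B(w) and C(w)) == P(A) * P(B) * P(C))  # False
```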

nuuskur said:
Hölder continuity implies uniform continuity, so we'll show that the converse fails, i.e. the two conditions are not equivalent.

The set $[0,c]$ is compact and $-\frac{1}{ \ln x}\to 0 = f(0)$ as $x\to 0+$. Hence $f$ is uniformly continuous.

Assume for a contradiction $f$ is Hölder continuous. That is, there exist $0< \alpha \leqslant 1$ and $M>0$ such that
$$\forall x,x'\in [0,c],\quad \left| f(x) - f(x')\right| \leqslant M |x-x'|^\alpha.$$
In particular (taking $x'=0$) we have for every $0\leqslant x\leqslant c$ that
$$\left | \frac{1}{\ln x} \right| \leqslant M |x|^\alpha \Leftrightarrow 1\leqslant M|x^\alpha\ln x|.$$
But $|x^\alpha \ln x| \to 0$ as $x\to 0+$ (e.g. apply L'Hôpital's rule to $\frac{\ln x}{x^{-\alpha}}$ as $x\to 0+$). This is a contradiction, so $f$ cannot be Hölder continuous.
This looks good. An exercise for the reader I guess is to show that any continuous function on a compact interval is uniformly continuous.

ergospherical said:
I have two ways for problem 3, but can't think of a third right now.
i) by direct substitution:\begin{align*}
p \dfrac{\partial V}{\partial T} \bigg{)}_{p} - C = 0 \quad &\implies \quad \dfrac{\partial V}{\partial T} \bigg{)}_{p} = \frac{C}{p} \\

V + \frac{CB}{2\sqrt{p}} - C \dfrac{\partial T}{\partial p} \bigg{)}_{V} = 0 \quad &\implies \quad \dfrac{\partial T}{\partial p} \bigg{)}_{V} = \frac{V + \frac{CB}{2\sqrt{p}}}{C} \\

p + V \dfrac{\partial p}{\partial V} \bigg{)}_{T} + \frac{CB}{2\sqrt{p}} \dfrac{\partial p}{\partial V} \bigg{)}_{T} = 0 \quad &\implies \quad \dfrac{\partial p}{\partial V} \bigg{)}_{T} = \frac{-p}{V + \frac{CB}{2\sqrt{p}}}
\end{align*}then ##\dfrac{\partial V}{\partial T} \bigg{)}_{p} \dfrac{\partial T}{\partial p} \bigg{)}_{V} \dfrac{\partial p}{\partial V} \bigg{)}_{T} = \dfrac{C}{p} \cdot \dfrac{V + \frac{CB}{2\sqrt{p}}}{C} \cdot \dfrac{-p}{V + \frac{CB}{2\sqrt{p}}} = -1##.

ii) Given ##f(p,V,T) = 0## for some ##f## then if ##f_p \neq 0## at some point then by the implicit function theorem there's some neighbourhood where ##p = p(V,T)## and so ##f(p(V,T),V,T) = 0##. Then ##f_V + f_p \dfrac{\partial p}{\partial V} \bigg{)}_T = 0 \implies \dfrac{\partial p}{\partial V} \bigg{)}_T = -f_V / f_p##. Similarly writing ##V = V(p,T)## gives ##\dfrac{\partial V}{\partial p} \bigg{)}_T = -f_p/f_V##, which proves the reciprocity relation ##\dfrac{\partial p}{\partial V} \bigg{)}_T \dfrac{\partial V}{\partial p} \bigg{)}_T = 1## [and same for the other two combinations].

Now given ##dV = \dfrac{\partial V}{\partial p} \bigg{)}_{T} dp + \dfrac{\partial V}{\partial T} \bigg{)}_{p} dT## write\begin{align*}
dp &= \dfrac{\partial p}{\partial V} \bigg{)}_{T} dV + \dfrac{\partial p}{\partial T} \bigg{)}_{V} dT \\

dp &= \dfrac{\partial p}{\partial V} \bigg{)}_{T} \left\{ \dfrac{\partial V}{\partial p} \bigg{)}_{T} dp + \dfrac{\partial V}{\partial T} \bigg{)}_{p} dT \right\} + \dfrac{\partial p}{\partial T} \bigg{)}_{V} dT \\

dp &= dp + \dfrac{\partial p}{\partial V} \bigg{)}_{T} \dfrac{\partial V}{\partial T} \bigg{)}_{p} dT + \dfrac{\partial p}{\partial T} \bigg{)}_{V} dT
\end{align*}hence ##\dfrac{\partial p}{\partial V} \bigg{)}_{T} \dfrac{\partial V}{\partial T} \bigg{)}_{p} = - \dfrac{\partial p}{\partial T} \bigg{)}_{V} \implies \dfrac{\partial p}{\partial V} \bigg{)}_{T} \dfrac{\partial V}{\partial T} \bigg{)}_{p} \dfrac{\partial T}{\partial p} \bigg{)}_{V} = -1##.

This looks good to me. I think you might have made your second method too complicated. Once you have ##\dfrac{\partial p}{\partial V} \bigg{)}_T = -f_V/f_p##, then you also have by just copying the symbols in the same pattern
$$\dfrac{\partial p}{\partial V} \bigg{)}_T \dfrac{\partial V}{\partial T} \bigg{)}_p \dfrac{\partial T}{\partial p} \bigg{)}_V = \frac{-f_V}{f_p} \frac{-f_T}{f_V} \frac{-f_p}{f_T} = -1$$.

Fresh provided three methods: the implicit function theorem, implicit differentiation to solve for the derivatives, and a third that I feel is a little non-obvious, so I'll just say it's computing the actual functions ##V(T)##, ##p(V)## and ##T(p)## directly before taking any derivatives.

Problem 12

Let ##\sqrt{n}+\sqrt{n+4}=p## such that ##p\in\mathbb{Q}##

So we get,
\begin{align} \sqrt{n}+\sqrt{n+4}&=p\nonumber\\ n+n+4+2\sqrt{n}\cdot\sqrt{n+4}&=p^2\nonumber\\ \sqrt{n}\cdot\sqrt{n+4}=\frac{p^2-2n-4}{2}&=q\space\text{(say)}\nonumber \end{align}
We can see that ##q\in\mathbb{Q}## as ##n\in\mathbb{N}##

By construction, ##\sqrt{n}## and ##\sqrt{n+4}## are the roots of the quadratic equation
$$x^2-px+q=0$$
Putting the value of ##x## in the above equation as ##\sqrt{n}##, we get
\begin{align} n-p\sqrt{n}+q&=0\nonumber\\ \sqrt{n}=\frac{n+q}{p}&=a\space\text{(say)}\nonumber \end{align}
Similarly, putting the value of ##x## as ##\sqrt{n+4}##, we get
\begin{align} n+4-p\sqrt{n+4}+q&=0\nonumber\\ \sqrt{n+4}=\frac{n+q+4}{p}&=b\space\text{(say)}\nonumber \end{align}
We can see that ##a## and ##b## are also rational numbers; moreover, since ##a^2 = n## and ##b^2 = n+4## are natural numbers and a rational square root of a natural number must itself be a natural number, ##a## and ##b## are natural numbers

So, we get,
\begin{align*}
\sqrt{n}=a\\
n=a^2
\end{align*}
And
$$n+4=b^2$$
Putting that value of ##n##,
\begin{align*}
a^2+4=b^2\\
4=b^2-a^2\\
2^2=(b-a)(b+a)
\end{align*}
Now, ##2^2 = 4\cdot 1##, ##2^2 = 2\cdot 2## and ##2^2 = 1\cdot 4## are the only possible ways of writing ##2^2## as a product of two natural numbers (keeping track of order),

So, the only candidates are
##b-a=b+a=2##, which gives ##a=0##; this is not possible
##b-a=1## and ##b+a=4##, which give ##b=\frac5 2## and ##a=\frac3 2##; again not possible
##b-a=4## and ##b+a=1##, which give ##b=\frac5 2## and ##a=\frac{-3} 2##; also not possible

So, ##\sqrt{n}+\sqrt{n+4}## cannot be a rational number if ##n## is a natural number.
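As a computational illustration (not part of the proof): squaring shows that ##\sqrt{n}+\sqrt{n+4}## is rational iff ##n(n+4)## is a perfect square, and ##n(n+4) = (n+2)^2 - 4## is never a square for natural ##n##. A quick brute-force check:

```python
# sqrt(n) + sqrt(n+4) rational  <=>  n(n+4) is a perfect square;
# verify this never happens for n up to 10^6.
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

print(any(is_square(n * (n + 4)) for n in range(1, 10**6)))  # False
```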

julian said:
Hi @kshitij . Sorry, for the delay. I've been seriously preoccupied.

OK, so you were assuming that:

\begin{align*}
\frac{\binom{2n}{n}}{2^{2n}} < \frac{1}{\sqrt{2n+1}} \qquad (*)
\end{align*}

holds and you wanted to show that it follows that

\begin{align*}
\frac{\binom{2n+2}{n+1}}{2^{2n+2}} < \frac{1}{\sqrt{2n+3}} \qquad (**)
\end{align*}

This condition, ##(**)##, is the final statement. You start with the final statement and keep making changes until something comes up which, together with the use of ##(*)##, is true. But the changes that you make along the way must result in equivalent conditions, so that finally you can write them down in the backward direction and end up with ##(**)##.

You correctly deduced that condition ##(**)## is equivalent to:

\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}} \qquad (***)
\end{align*}

However, you didn't give a valid justification as to why the condition:

\begin{align*}
\frac{\binom{2n}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+1}} \qquad (****)
\end{align*}

is equivalent to ##(***)##, and so you can't work backwards from ##(****)## to ##(***)##. That's the problem.
Hey @julian, Thanks for your reply, but I still have the same question,

you said that,
\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}} \qquad (*)
\end{align*}
is correct

But as we have,
\begin{align*}
\binom{2n}{n} &< \binom{2n+1}{n}\\
\dfrac{\binom {2n} {n}}{2^{2n+1}}&<\dfrac{\binom {2n+1} {n}}{2^{2n+1}} \qquad(**)
\end{align*}
Similarly,$$\dfrac{1}{\sqrt{2n+3}}< \dfrac{1}{\sqrt{2n+1}} \qquad(***)$$

Now if we have a relation like,
##b<c##, ##a<b## and ##c<d##
Then we can write ##a<d##
because the order between ##a,b,c,d## is ##a<b<c<d##

So, in our case, we have ##a=\frac{\binom{2n}{n}}{2^{2n+1}} ##, ##b=\dfrac{\binom {2n+1} {n}}{2^{2n+1}}##, ##c=\dfrac{1}{\sqrt{2n+3}}## and ##d=\dfrac{1}{\sqrt{2n+1}}##

From ##(*)## we had ##b<c##, from ##(**)## we got ##a<b## and from ##(***)## we got ##c<d##

So, again the order between ##a,b,c,d## is ##a<b<c<d##

That gives
\begin{align*}
a&<d\\
\dfrac{\binom {2n} {n}}{2^{2n+1}}&<\dfrac{1}{\sqrt{2n+1}} \qquad(****)
\end{align*}
So, from ##(*)## we deduced that ##(****)## is true, then why is this wrong?

julian said:
This condition, (∗∗), is the final statement. You start with the final statement and keep making changes until something comes up which, together with the use of (∗), is true.
I do agree with what you said here, I also agree with your solution which was discussed earlier in this thread, but I think that there is another method of doing this problem which is with the use of orders ##a<b<c<d## which I mentioned above.

julian said:
But the changes that you make along the way must result in equivalent conditions so that finally you can write them down in the backward direction and end up with (∗∗).
But this statement is confusing for me.

If the problem involved only equalities, e.g., from $$b=c \qquad(*)$$ we had to prove that $$a=d \qquad(**)$$
Then in this case we can always work backwards from ##(**)## to ##(*)##
We just have to show that ##a=d=k##, ##b=k## and ##c=k##
this would give us the only possibility, i.e., ##a=d=b=c=k##

But if the question involves inequalities, e.g., from $$b<c \qquad(*)$$ we had to prove that $$a<d \qquad(**)$$
Then in this case how can we always work backwards from ##(**)## to ##(*)##

unlike the first case, even if we could have shown that ##a<d<k##, ##b<k## and ##c<k##
then that doesn't mean that ##b<c## if ##a<d##
because there are multiple possible orders between ##a,b,c,d,k## like
##a<d<b<c<k##, ##a<c<d<b<k##, ##a<c<b<d<k##, etc
and all of them satisfy ##a<d<k##, ##b<k## and ##c<k##

In the case of equality there is only one possibility, i.e., ##a=b=c=d##, but in the case of inequalities there are multiple numbers between them, so there are multiple possible orders between them

In other words, using only the relation ##a<d## we cannot end up with a fixed order between ##a,b,c,d##

So, how can we work backwards in this case if there are multiple possibilities?

Even in your solution, I cannot think how to work backwards from,
\begin{align*}

\frac{1}{2} \cdot \frac{2n+1}{n+1} < \sqrt{\frac{2n+1}{2n+3}}

\end{align*}
to get,
\begin{align*}

\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}

\end{align*}

Last edited:
kshitij said:
I do agree with what you said here, I also agree with your solution which was discussed earlier in this thread, but I think that there is another method of doing this problem which is with the use of orders ##a<b<c<d## which I mentioned above. But this statement is confusing for me.

If the problem involved only equalities, e.g., from $$b=c \qquad(*)$$ we had to prove that $$a=d \qquad(**)$$
Then in this case we can always work backwards from ##(**)## to ##(*)##
We just have to show that ##a=d=k##, ##b=k## and ##c=k##
this would give us the only possibility, i.e., ##a=d=b=c=k##

But if the question involves inequalities, e.g., from $$b<c \qquad(*)$$ we had to prove that $$a<d \qquad(**)$$
Then in this case how can we always work backwards from ##(**)## to ##(*)##

unlike the first case, even if we could have shown that ##a<d<k##, ##b<k## and ##c<k##
then that doesn't mean that ##b<c## if ##a<d##
because there are multiple possible orders between ##a,b,c,d,k## like
##a<d<b<c<k##, ##a<c<d<b<k##, ##a<c<b<d<k##, etc
and all of them satisfy ##a<d<k##, ##b<k## and ##c<k##

As ##a,d## are not equal so there are multiple numbers in between them, similarly for ##b,c## as well

In case of equality there is only one possibility i.e., ##a=b=c=d## but in case of inequalities there are multiple numbers between them so there are multiple possible orders between them

In other words we cannot do anything only using the relation ##a<d## by which we will end up with a fixed possible order between ##a,b,c,d##

So, how can we work backwards in this case if there are multiple possibilities?

Even in your solution, I cannot think how to work backwards from,
\begin{align*}

\frac{1}{2} \cdot \frac{2n+1}{n+1} < \sqrt{\frac{2n+1}{2n+3}}

\end{align*}
to get,
\begin{align*}

\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}

\end{align*}
On rereading, I get the feeling that maybe I'm doing something wrong here, but I'm confused because we are dealing with inequalities; if we had been dealing with equalities, then everything you said would make sense to me.

I think the simplest way to think about the problem 15 attempt is to just flip the order of everything. You wanted to prove (*) and took a bunch of steps to get from (*) to (****). But your actual goal was to start with (****) and get to (*). Often if you can go in one direction you can also go in the other, but it's not always true.

As a dumb example if ##x=y## then ##x^2=y^2## but the other direction is not true. There's nothing special about inequalities here.

If the steps are actually reversible, you should just reverse them. Start with (****) and get to (*) at the end.

I'll check your new 12 attempt later today.

Last edited:
I think I figured out what was wrong

(refer to my first attempt of problem 15)

I wanted to show that $$b<c$$
I knew that $$e<f$$
is true.

I assumed that ##b<c## is also true

First I showed that ##a<b## and then I showed ##c<d##
so I concluded that $$a<d$$
Now I did the same process with ##e,f## i.e., I first showed that ##a<e## and then ##f<d## by which I concluded that,
$$a<d$$
which is the same result that I got with ##b,c##

And here I concluded that since I got the same result with both the relations ##e<f## (this is known to be true) and ##b<c## (this I assumed to be true) then they both must be true!

However, when I do the same thing but this time assume that $$c<b$$ then ##a<b## and ##c<d## can still be true.

To elaborate, see the image below,

Lets say that ##b<c## and then after showing that ##a<b## and ##c<d##, I can say that,$$a<d$$

Now see the situation of ##e,f##,

It is known that ##e<f## and I showed again using ##a<e## and ##f<d## that $$a<d$$
But unlike the previous case ##e<f## is known to be true, so ##a<d## must be true as well.

Now what if ##c<b##?

You can clearly see that everything I did assuming ##b<c## can also be done using ##c<b## because still we can see that ##a<b## and ##c<d##

So, even though whatever I did with ##a,b,c,d## might be correct and even the relation $$a<d$$ is also correct, but that clearly didn't give us a relation between ##b## and ##c##!
And that is why the first attempt is incomplete (in fact I achieved nothing new; I just played around with ##a,b,c,d##)

kshitij said:
I think I figured out what was wrong

(refer to my first attempt of problem 15)

I wanted to show that $$b<c$$
I knew that $$e<f$$
is true.

I assumed that ##b<c## is also true

First I showed that ##a<b## and then I showed ##c<d##
so I concluded that $$a<d$$
Now I did the same process with ##e,f## i.e., I first showed that ##a<e## and then ##f<d## by which I concluded that,
$$a<d$$
which is the same result that I got with ##b,c##

And here I concluded that since I got the same result with both the relations ##e<f## (this is known to be true) and ##b<c## (this I assumed to be true) then they both must be true!

However, when I do the same thing but this time assume that $$c<b$$ then ##a<b## and ##c<d## can still be true.

To elaborate, see the image below,
View attachment 285014
Lets say that ##b<c## and then after showing that ##a<b## and ##c<d##, I can say that,$$a<d$$

Now see the situation of ##e,f##,
View attachment 285015
It is known that ##e<f## and I showed again using ##a<e## and ##f<d## that $$a<d$$
But unlike the previous case ##e<f## is known to be true, so ##a<d## must be true as well.

Now what if ##c<b##?
View attachment 285016
You can clearly see that everything I did assuming ##b<c## can also be done using ##c<b## because still we can see that ##a<b## and ##c<d##

So, even though whatever I did with ##a,b,c,d## might be correct and even the relation $$a<d$$ is also correct, but that clearly didn't give us a relation between ##b## and ##c##!
And that is why the first attempt is incomplete (in fact I achieved nothing new; I just played around with ##a,b,c,d##)
Now this looks quite confusing to read for a third person who doesn't know what's going on in my head, but to me it makes perfect sense!

Still if someone wants to read that, then the case ##e<f## was in the original attempt (see post #21 of this thread) ##\dfrac{\binom {2n} n}{2^{2n}}< \dfrac{1}{\sqrt{2n+1}}##
And the case ##b<c## was ##\dfrac{\binom {2n+2} {n+1}}{2^{2n+2}}< \dfrac{1}{\sqrt{2n+3}}##
And finally ##a<d## was ##\dfrac{\binom {2n} n}{2^{2n+1}}< \dfrac{1}{\sqrt{2n+1}}##

And yes ##d## and ##f## are basically the same thing but that doesn't make much difference in the process which I was explaining

Last edited:
Office_Shredder said:
As a dumb example if ##x=y## then ##x^2=y^2## but the other direction is not true. There's nothing special about inequalities here.
How do you consistently come up with an example that I overlook?

I really like that solution to 12! Writing that quadratic equation is pretty cool.

kshitij said:
Hey @julian, Thanks for your reply, but I still have the same question,

you said that,
\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}} \qquad (*)
\end{align*}
is correct
Hi @kshitij. I'm preoccupied at the moment so I'll just address a couple of points.

I didn't say it was correct. I was saying that this condition:

\begin{align*}
\frac{\binom{2n+2}{n+1}}{2^{2n+2}} < \frac{1}{\sqrt{2n+3}} \qquad (*)
\end{align*}

is equivalent to this condition:

\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}
\end{align*}

which follows from the fact that

\begin{align*}
\frac{\binom{2n+2}{n+1}}{2^{2n+2}} = \frac{\binom{2n+1}{n}}{2^{2n+1}}
\end{align*}

Sorry, I'm tired at the moment. You do understand that you are supposed to be proving that ##(*)## is true - yeah?
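The displayed identity boils down to ##\binom{2n+2}{n+1} = 2\binom{2n+1}{n}##, which follows from Pascal's rule together with the symmetry ##\binom{2n+1}{n} = \binom{2n+1}{n+1}##. A quick spot-check:

```python
# Spot-check C(2n+2, n+1) == 2 * C(2n+1, n), i.e. the equality of
# C(2n+2, n+1)/2^(2n+2) and C(2n+1, n)/2^(2n+1), for a range of n.
from math import comb

print(all(comb(2*n + 2, n + 1) == 2 * comb(2*n + 1, n) for n in range(1, 200)))  # True
```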

Last edited:
kshitij said:
Even in your solution, I cannot think how to work backwards from,
\begin{align*}

\frac{1}{2} \cdot \frac{2n+1}{n+1} < \sqrt{\frac{2n+1}{2n+3}}

\end{align*}
to get,
\begin{align*}

\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}

\end{align*}
So you were assuming that

\begin{align*}
\frac{\binom{2n}{n}}{2^{2n}} < \frac{1}{\sqrt{2n+1}} \qquad (*)
\end{align*}

holds and you wanted to show that it follows that

\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}} \qquad (**)
\end{align*}

is true.

We proved that:

\begin{align*}
\frac{1}{2} \cdot \frac{2n+1}{n+1} < \sqrt{\frac{2n+1}{2n+3}}
\end{align*}

We rewrite this as

\begin{align*}
\dfrac{\frac{1}{2} \cdot \frac{2n+1}{n+1} }{ \sqrt{\frac{2n+1}{2n+3}} } < 1
\end{align*}

and use it in our assumption ##(*)## to obtain:

\begin{align*}
\dfrac{\frac{1}{2} \cdot \frac{2n+1}{n+1} }{ \sqrt{\frac{2n+1}{2n+3}} } \cdot \frac{\binom{2n}{n}}{2^{2n}} < \frac{1}{\sqrt{2n+1}}
\end{align*}

We rewrite this as:

\begin{align*}
\left( \frac{1}{2} \cdot \frac{2n+1}{n+1} \right) \cdot\frac{\binom{2n}{n}}{2^{2n}} < \sqrt{\frac{2n+1}{2n+3}} \cdot \frac{1}{\sqrt{2n+1}}
\end{align*}

which is the same as

\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}
\end{align*}

i.e. we have proven that ##(**)## holds.
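As an aside, both inequalities in this chain are easy to spot-check numerically (an illustration only, not a replacement for the induction):

```python
# Spot-check the key inequality and the target inequality for n = 1..200.
from math import comb, sqrt

for n in range(1, 201):
    assert 0.5 * (2*n + 1) / (n + 1) < sqrt((2*n + 1) / (2*n + 3))
    assert comb(2*n + 1, n) / 2**(2*n + 1) < 1 / sqrt(2*n + 3)
print("both inequalities hold for n = 1..200")
```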

julian said:
Hi @kshitij. I'm preoccupied at the moment so I'll just address a couple of points.

I didn't say it was correct. I was saying that this condition:

\begin{align*}
\frac{\binom{2n+2}{n+1}}{2^{2n+2}} < \frac{1}{\sqrt{2n+3}} \qquad (*)
\end{align*}

is equivalent to this condition:

\begin{align*}
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}
\end{align*}

which follows from the fact that

\begin{align*}
\frac{\binom{2n+2}{n+1}}{2^{2n+2}} = \frac{\binom{2n+1}{n}}{2^{2n+1}}
\end{align*}

Sorry, I'm tired at the moment. You do understand that you are supposed to be proving that ##(*)## is true - yeah?
I did realize my mistake, thank you for your help and also for showing how to work backwards from ##\frac{\binom{2n}{n}}{2^{2n}} < \frac{1}{\sqrt{2n+1}} ## to ##
\frac{\binom{2n+1}{n}}{2^{2n+1}} < \frac{1}{\sqrt{2n+3}}##

I want to explicitly say thank you to @Office_Shredder for managing this month's Challenge thread. It was a great relief to have a break while he dealt with the incoming answers. Thanks a lot!

Problem #5:

Question (a): Let ##\sigma \in \text{Aut} (S_n)## be an automorphism of the symmetric group ##S_n## ##(n \geq 4)## that sends transpositions to transpositions. Prove that ##\sigma## is an inner automorphism.

Question (b): Determine the inner automorphism groups of (i) the symmetric groups and (ii) the alternating groups for ##n \geq 4##.

The transpositions ##(1,k)## for ##k = 2,3, \dots , n## form a generating set of ##S_n##. We have that ##\sigma (1,k) = (a_k , b_k)## for ##k = 2,3, \dots , n##, where ##a_k \not= b_k## and ##a_k, b_k \in \{ 1,2, \dots , n \}##. As ##(1,2)## and ##(1,3)## don't commute, their images ##(a_2,b_2)## and ##(a_3,b_3)## don't commute. As such their intersection ##\{ a_2 , b_2 \} \cap \{ a_3 , b_3 \}## is a single number. WLOG we assume that ##a_2 = a_3 = a##.

We prove that ##a \in \{ a_k , b_k \}## for ##k >3##. Assume otherwise, that is, assume that for some ##k > 3## we have ##a_k \not= a \not= b_k##. As ##(1,k)## does not commute with ##(1,2)## we must have ##b_2 \in \{ a_k , b_k \}##. Similarly, ##b_3 \in \{ a_k , b_k \}## as ##(1,k)## and ##(1,3)## don't commute. Therefore, ##(a_k , b_k) = (b_2 , b_3)##. But then we would have

\begin{align*}
\sigma (2,3) = \sigma ( (1,3) (1,2) (1,3)) = (a , b_3) (a , b_2) (a , b_3) = (b_2 , b_3) = \sigma (1,k)
\end{align*}

which is in contradiction with the fact that ##\sigma## is injective.

Thus each of the transpositions ##(a_k , b_k)## moves the element ##a##, and WLOG we can assume ##a_k = a## for all ##k##. The integers ##b_k \not= a## are all distinct. So we have ##\sigma (1 , k) (a) = (a , b_k) (a) = b_k##, ##\sigma (1 , k) (b_k) = (a , b_k) (b_k) = a##, and ##\sigma (1 , k) (b_{k'}) = (a , b_k) (b_{k'}) = b_{k'}## for ##k' \not= k##.

Consider the element ##\tau \in S_n## defined by ##\tau (1) = a## and ##\tau (k) = b_k## for ##k = 2, \dots , n##. We have

\begin{align*}
\tau (1,k) \tau^{-1} (a) & = \tau (1,k) (1)
\nonumber \\
& = \tau (k)
\nonumber \\
& = b_k
\end{align*}

and

\begin{align*}
\tau (1,k) \tau^{-1} (b_k) & = \tau (1,k) (k)
\nonumber \\
& = \tau (1)
\nonumber \\
& = a
\end{align*}

and when ##k' \not= k##:

\begin{align*}
\tau (1,k) \tau^{-1} (b_{k'}) & = \tau (1,k) (k')
\nonumber \\
& = \tau (k')
\nonumber \\
& = b_{k'}
\end{align*}

Thus we have shown that ##\sigma## agrees with the inner automorphism ##t \mapsto \tau t \tau^{-1}## on all the generators ##t = (1,k)## of the group ##S_n##, and so it is an inner automorphism of ##S_n##.

QED

Lemma 1: Let ##G## be a group with centre ##Z (G)##. Then ##\text{Inn} (G) \simeq G / Z (G)##.

Proof: Let ##h \in G##. An inner automorphism ##\phi_h## of ##G## is given by ##\phi_h (g) = h g h^{-1}##

Define a group homomorphism:

\begin{align*}
\varphi : G \rightarrow \text{Inn} (G)
\end{align*}

by

\begin{align*}
\varphi (h) = \phi_h .
\end{align*}

##\varphi## is clearly surjective. We identify the kernel of ##\varphi##. We have that:

\begin{align*}
h \in \text{ker} \varphi & \Longleftrightarrow \phi_h (g) = g \quad \text{for all } g \in G
\nonumber \\
& \Longleftrightarrow h g h^{-1} = g \quad \text{for all } g \in G
\nonumber \\
& \Longleftrightarrow h g = g h \quad \text{for all } g \in G
\nonumber \\
& \Longleftrightarrow h \in Z (G) .
\end{align*}

We then apply the first isomorphism theorem.

QED

The centre of ##S_n## for ##n \geq 4## is trivial: it consists only of the identity permutation ##e##.

Proof: Suppose that ##\rho## is an arbitrary element of ##S_n## which is not the identity. Choose ##i## such that ##j = \rho (i) \not= i##. Because ##n \geq 4## there is ##k \notin \{ i , j \}##; let ##\tau = (j,k)##. Then:

\begin{align*}
\tau \rho \tau^{-1} (i) = \tau \rho (i) = \tau (j) = k \not= j = \rho (i)
\end{align*}

so that ##\rho## does not belong to the centre.

QED

Using this result in lemma 1 we obtain that

\begin{align*}
\text{Inn} (S_n) \simeq S_n , \qquad \text{for } n \geq 4 .
\end{align*}

The centre of ##A_n## for ##n \geq 4## is trivial: it consists only of the identity permutation ##e##.

Proof: Suppose that ##\rho## is an arbitrary element of ##A_n## which is not the identity. Choose ##i## such that ##j = \rho (i) \not= i##. Because ##n \geq 4## we can choose distinct ##k## and ##l## with ##k , l \notin \{ i , j \}##; let ##\tau = (j,k,l) = (j,l) (j,k) \in A_n##. Then:

\begin{align*}
\tau \rho \tau^{-1} (i) = \tau \rho (i) = \tau (j) = k \not= j = \rho (i)
\end{align*}

so that ##\rho## does not belong to the centre.

QED

Using this result in lemma 1 we obtain that

\begin{align*}
\text{Inn} (A_n) \simeq A_n , \qquad \text{for } n \geq 4 .
\end{align*}

Last edited:
Problem #8:

By definition

\begin{align*}
H (p) = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p -1} = \frac{a}{b} .
\end{align*}

If we can show that:

\begin{align*}
(p - 1)! \left( 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1} \right) = k p^2
\end{align*}

where ##k## is an integer then we would have

\begin{align*}
H (p) = \frac{k p^2}{(p - 1)!} .
\end{align*}

Since ##p## is prime and every factor of ##(p - 1)!## is less than ##p##, the denominator is coprime to ##p##, so the factor ##p^2## in the numerator cannot be cancelled. Thus ##p^2## would divide the numerator of ##H (p)## in lowest terms.

In the polynomial

\begin{align*}
g (x) = (x-1)(x-2) \cdots (x - (p - 1)) = x^{p-1} + a_1 x^{p - 2} + \cdots + a_{p -2} x + a_{p - 1} \quad (*)
\end{align*}

the term ##a_{p - 2}## is given by

\begin{align*}
a_{p -2} = - (p -1)! \left( 1 + \frac{1}{2} + \cdots + \frac{1}{p - 1} \right)
\end{align*}

If we can show that ##- a_{p - 2} = k p^2## that will solve the problem.

We have ##a_{p - 1} = (p - 1)!##.

Fermat's little theorem says that ##a^{p - 1} \equiv 1 \; (\text{mod } p)## for any integer ##a## not divisible by ##p##. It follows that, modulo ##p##, the polynomial ##x^{p - 1} - 1## has the ##p - 1## roots ##1, 2, \dots , p-1##. So that:

\begin{align*}
h (x) = x^{p - 1} - 1 \equiv (x-1)(x-2) \cdots (x - (p - 1)) \; (\text{mod } p) .
\end{align*}

Note that modulo ##p##, ##h (x)## and ##g (x)## have the same roots.

Also, Wilson's theorem says that:

\begin{align*}
(p - 1)! \equiv -1 \; (\text{mod } p)
\end{align*}

Using all of the above in the polynomial ##g (x) - h (x)## we obtain

\begin{align*}
f (x) & = g (x) - h (x)
\nonumber \\
& = x^{p-1} + a_1 x^{p - 2} + \cdots + a_{p -2} x + a_{p - 1} - x^{p - 1} + 1
\nonumber \\
& \equiv a_1 x^{p - 2} + a_2 x^{p -3} + \cdots + a_{p -2} x \; (\text{mod } p)
\end{align*}

where Wilson's theorem gives ##a_{p - 1} + 1 = (p - 1)! + 1 \equiv 0 \; (\text{mod } p)##. Moreover ##f (x) \equiv 0 \; (\text{mod } p)## for ##x = 1, 2, \dots , p - 1##, since these are roots of both ##g (x)## and ##h (x)## modulo ##p##. As ##f## is a polynomial of degree ##p - 2## with ##p - 1## roots modulo ##p##, Lagrange's theorem implies that ##p## divides all of its coefficients ##a_1 , a_2 , \dots , a_{p - 2}##.

Putting ##x = p## in ##(*)## gives

\begin{align*}
(p-1)! = p^{p-1} + a_1 p^{p - 2} + a_2 p^{p -3} + \cdots + a_{p -2} p + a_{p - 1}
\end{align*}

Subtracting ##(p - 1)!## from both sides, dividing by ##p##, and then rearranging gives

\begin{align*}
- a_{p - 2} = p^{p-2} + a_1 p^{p - 3} + \cdots + a_{p - 3} p .
\end{align*}

As we are taking ##p \geq 5##, and as each of the coefficients ##a_1, a_2, \dots , a_{p - 3}## is divisible by ##p##, every term on the RHS is divisible by ##p^2##, so the RHS equals ##k p^2## for some integer ##k##.

If ##p## were equal to ##3## we couldn't conclude that the term ##a_1 p^{p - 3} = a_1## is divisible by ##3^2##. Indeed, the result fails for ##H (3)##:

\begin{align*}
H (3) = 1 + \frac{1}{2} + \frac{1}{3} = \frac{11}{6} .
\end{align*}

This is a variation on the first proof I gave for problem #8 in the previous post. It avoids mod arithmetic and knowing theorems:

Again we solve the problem by showing that

\begin{align*}
- a_{p -2} = (p -1)! \left( 1 + \frac{1}{2} + \cdots + \frac{1}{p - 1} \right)
\end{align*}

is equal to ##k p^2## where ##k## is an integer.

First we prove that if ##p## is prime and ##0 < i < p##, then ##p | \binom{p}{i}##. We have

\begin{align*}
\binom{p}{i} = \frac{p}{i!} (p - 1) (p - 2) \cdots (p - i + 1)
\end{align*}

As ##0 < i < p##, none of the factors of ##i!## is divisible by ##p##, so the factor ##p## cannot be cancelled. Hence ##p## divides ##\binom{p}{i}##.
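This divisibility can be spot-checked with the standard library's `math.comb`:

```python
from math import comb

# For prime p and 0 < i < p, p divides the binomial coefficient C(p, i)
for p in [5, 7, 11, 13]:
    assert all(comb(p, i) % p == 0 for i in range(1, p))

print([comb(7, i) for i in range(1, 7)])  # [7, 21, 35, 35, 21, 7]
print(comb(4, 2) % 4)  # 2 -- fails for composite p, so primality is essential
```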

Consider the polynomial

\begin{align*}
f (x) = (x +1) (x + 2) \cdots (x + p - 1) = x^{p - 1} - a_1 x^{p - 2} + a_2 x^{p - 3} - \cdots - a_{p - 2} x + a_{p - 1}
\end{align*}

where the ##a_i##'s are the same as before. We will prove that the coefficients ##a_1, \dots , a_{p - 2}## are all divisible by ##p## when ##p## is an odd prime.

Proof: Note that

\begin{align*}
f (x + 1) = (x +2) (x + 3) \cdots (x + p)
\end{align*}

and so

\begin{align*}
(x + 1) f (x + 1) = f (x) (x + p)
\end{align*}

implying

\begin{align*}
p f (x) = (x + 1) f (x + 1) - x f(x) .
\end{align*}

Hence,

\begin{align*}
& p x^{p - 1} - p a_1x^{p - 2} + p a_2 x^{p - 3} - \cdots - p a_{p - 2} x + p!
\nonumber \\
& = [(x + 1)^p - x^p] - a_1 [(x + 1)^{p - 1} - x^{p - 1}] + a_2 [(x + 1)^{p - 2} - x^{p - 2}] - \cdots -
\nonumber \\
& - a_{p - 2} [(x + 1)^2 - x^2] + [(x + 1) - x](p - 1)!
\nonumber \\
& = x^{p - 1} \binom{p}{1} + x^{p - 2} \binom{p}{2} + x^{p - 3} \binom{p}{3} + \cdots + \binom{p}{p - 1} x + 1
\nonumber \\
& + (-a_1) \left[ x^{p - 2} \binom{p - 1}{1} + x^{p - 3} \binom{p - 1}{2} + \cdots + \binom{p - 1}{p - 2} x + 1 \right]
\nonumber \\
& + a_2 \left[ x^{p - 3} \binom{p - 2}{1} + x^{p - 4} \binom{p - 2}{2} + \cdots + \binom{p - 2}{p - 3} x + 1 \right]
\nonumber \\
& \vdots
\nonumber \\
& +(-a_{p - 2}) \left[ x \binom{2}{1} + 1 \right]
\nonumber \\
& + (p - 1)!
\end{align*}

By equating coefficients of ##x^{p - 2} , x^{p - 3} , \cdots , x## we obtain:

\begin{align*}
p (- a_1) & = \binom{p}{2} + \binom{p - 1}{1} (-a_1)
\nonumber \\
p a_2 & = \binom{p}{3} + \binom{p - 1}{2} (-a_1) + \binom{p - 2}{1} a_2
\nonumber \\
& \vdots
\nonumber \\
p (-a_{p-2}) &= \binom{p}{p - 1} + \binom{p - 1}{p - 2} (-a_1) + \cdots + \binom{2}{1} (-a_{p - 2}) .
\end{align*}

From the first equation, using that ##p | \binom{p}{2}##, we have that ##p | (-a_1)##. Using this in the second equation together with ##p | \binom{p}{3}##, we have that ##p | a_2##. And so on. So we have ##p | (-a_1) , a_2 , \cdots , (-a_{p - 2})##.
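The lemma can also be confirmed numerically by expanding ##(x-1)(x-2) \cdots (x - (p-1))## and testing its interior coefficients; a short Python sketch (the expansion routine is my own, not part of the proof):

```python
def poly_coeffs(p):
    # coefficients of (x - 1)(x - 2)...(x - (p - 1)), highest degree first
    coeffs = [1]
    for r in range(1, p):
        # multiply the current polynomial by (x - r)
        coeffs = [a - r * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

print(poly_coeffs(5))  # [1, -10, 35, -50, 24]

for p in [5, 7, 11, 13]:
    c = poly_coeffs(p)
    # p divides every interior coefficient a_1, ..., a_{p-2}
    assert all(a % p == 0 for a in c[1:-1])
```

For ##p = 5## the interior coefficients ##-10, 35, -50## are all multiples of ##5##, while the constant term is ##4! = 24##, as claimed.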

QED

Putting ##x = p## into

\begin{align*}
(x-1)(x-2) \cdots (x - (p - 1)) = x^{p-1} + a_1 x^{p - 2} + \cdots + a_{p -2} x + (p - 1)!
\end{align*}

then rearranging gives

\begin{align*}
- a_{p - 2} = p^{p-2} + a_1 p^{p - 3} + \cdots + a_{p - 3} p .
\end{align*}

As we are taking ##p \geq 5##, and as each of the coefficients ##a_1, a_2, \dots , a_{p - 3}## is divisible by ##p##, the RHS equals ##k p^2## for some integer ##k##. This solves problem #8.

QED

Problem #6:

By definition

\begin{align*}
d = \min_{(x,y) \in C^2 , x \not= y} d (x,y)
\end{align*}

where ##d(x,y)## denotes the Hamming distance between ##x## and ##y##. The bound is proved by estimating the quantity

\begin{align*}
\sum_{(x,y) \in C^2 , x \not= y} d (x,y)
\end{align*}

in two different ways. On the one hand, there are ##\# C## choices for ##x## and for each such choice, there are ## \# C - 1## choices for ##y##. Since by definition ##d (x,y) \geq d## for all ##x## and ##y## (##x \not= y##), it follows that

\begin{align*}
\sum_{(x,y) \in C^2 , x \not= y} d (x,y) \geq \# C (\# C - 1) d .
\end{align*}

The other inequality is obtained by writing down all the ##\# C## codewords of ##C## into a ##\# C \times n##-matrix. Let ##n_{i , \alpha}## denote the number of times the element ##\alpha## of the alphabet (with ##q## elements) occurs in the ##i##th column. For each choice of ##\alpha## in the ##i##th column there are ##\# C - n_{i , \alpha}## elements not equal to ##\alpha## in the same column. Then

\begin{align*}
\sum_{(x,y) \in C^2 , x \not= y} d (x,y) & = \sum_{i = 1}^n \sum_{\alpha} n_{i , \alpha} (\# C - n_{i , \alpha})
\nonumber \\
& = \sum_{i = 1}^n \left( (\# C)^2 - \sum_\alpha n_{i , \alpha}^2 \right) \qquad ( \sum_{\alpha} n_{i , \alpha} = \# C)
\nonumber \\
& \leq \sum_{i = 1}^n \left( (\# C)^2 - \frac{(\# C)^2}{q} \right) \qquad ( (\# C)^2 = (\sum_{\alpha} n_{i , \alpha} \cdot 1)^2 \leq (\sum_{\alpha} n_{i , \alpha}^2) (\sum_{\alpha} 1^2) )
\nonumber \\
&= n \cdot (\# C)^2 \cdot \left( 1 - \frac{1}{q} \right) .
\end{align*}

Combining the two bounds gives

\begin{align*}
\# C (\# C - 1) d \leq (\# C)^2 \cdot n \cdot \left( \frac{q - 1}{q} \right)
\end{align*}

or

\begin{align*}
d (\# C - 1) \leq \# C \cdot n \cdot \left( \frac{q - 1}{q} \right) \qquad (*)
\end{align*}

If ##d > n \cdot \dfrac{q - 1}{q}##, the bound ##(*)## can be rearranged as

\begin{align*}
\# C \leq \frac{d}{d - n \cdot \dfrac{q - 1}{q}} .
\end{align*}
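The two counting estimates can be checked on a toy example: a hypothetical binary code (##q = 2##, ##n = 4##) with four codewords, chosen only for illustration:

```python
def hamming(x, y):
    # number of positions in which the two words differ
    return sum(a != b for a, b in zip(x, y))

# toy code: q = 2 symbols, length n = 4, M = #C = 4 codewords
C = [(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)]
n, q, M = 4, 2, len(C)

pairs = [(x, y) for x in C for y in C if x != y]  # ordered pairs, x != y
total = sum(hamming(x, y) for x, y in pairs)
d = min(hamming(x, y) for x, y in pairs)

# lower bound M(M-1)d and upper bound n M^2 (1 - 1/q) on the same sum
assert M * (M - 1) * d <= total <= n * M * M * (1 - 1 / q)
print(d, total)  # 2 32
```

Here the pairwise-distance sum is ##32##, squeezed between ##M(M-1)d = 24## and ##n M^2 (1 - 1/q) = 32##.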

Problem #7:

We start with a reminder of the definition of the Cantor dust. Define ##C_1## as:

\begin{align*}
C_1 & = [0,1] - ( \frac{1}{3} , \frac{2}{3})
\nonumber \\
& = [0 , \frac{1}{3}] \cup [\frac{2}{3} , 1]
\end{align*}

Take the two intervals and split them into thirds. Then remove each of their open middle thirds to get

\begin{align*}
C_2 = [0 , \frac{1}{9}] \cup [\frac{2}{9} , \frac{1}{3}] \cup [\frac{2}{3} , \frac{7}{9}] \cup [\frac{8}{9} ,1]
\end{align*}

To get the next set ##C_3##, split each of the four intervals of ##C_2## into three equal parts and remove their open middle thirds. Continue this way for each ##k##. The Cantor dust is the intersection of all these ##C_k##:

\begin{align*}
\text{Cantor dust} = C = \bigcap_{k = 1}^\infty C_k .
\end{align*}

To prove that ##C## is uncountable, we first express any number ##x## in the unit interval ##[0,1]## by its triadic expansion

\begin{align*}
x = \sum_{k = 1}^\infty \frac{n_k}{3^k} .
\end{align*}

where ##n_k## takes the values zero, one or two. We can write the triadic expansion in base three notation, to obtain

\begin{align*}
x = . n_1 n_2 n_3 \dots
\end{align*}

As with decimal expansions, the base three digits ##n_k## are unique once we fix a convention for the ambiguous terminating cases. For the Cantor set it is convenient to prefer the expansion ending in repeated twos; thus of the two expansions

\begin{align*}
.110000 \dots = .102222 \dots
\end{align*}

we use the second (and the case ##.2222 \dots = 1## we leave as it is). With this convention ##1/3 = .0222 \dots## uses only the digits zero and two, consistent with ##1/3 \in C##.

In terms of the base three expansion

\begin{align*}
C = \{ . n_1 n_2 n_3 \dots : \text{where } n_k \text{ is equal to either zero or two} \}
\end{align*}

We prove that the Cantor dust is uncountable by contradiction. We assume there is a bijective map ##\phi : \mathbb{N} \rightarrow C## and then prove that there is a number in ##C## that is not in the image of ##\phi##, contradicting the surjectivity of ##\phi##.

Let us list the images of our assumed bijection ##\phi : \mathbb{N} \rightarrow C##:

\begin{align*}
\phi (1) & = . a_1 a_2 a_3 \dots
\nonumber \\
\phi (2) & = . b_1 b_2 b_3 \dots
\nonumber \\
\phi (3) & = . c_1 c_2 c_3 \dots
\nonumber \\
\phi (4) & = . d_1 d_2 d_3 \dots
\nonumber \\
& \;\; \vdots
\end{align*}

Note that the ##a_i, b_j##, etc are now fixed numbers that are either ##0## or ##2##, given to us by the assumed bijection. We will construct a new number ##.n'_1 n'_2 n'_3 \dots## that cannot appear in the above list, thereby contradicting the assumption that ##\phi## is surjective. Set

\begin{align*}
n'_k = \begin{cases}
0, & \text{if the } k\text{th entry of } \phi (k) \not= 0
\\
2, & \text{if the } k\text{th entry of } \phi (k) = 0
\end{cases}
\end{align*}

Note that ##n'_1## is ##0## if ##a_1 = 2## and is ##2## if ##a_1 = 0##. Thus

\begin{align*}
.n'_1 n'_2 n'_3 \dots \not= . a_1 a_2 a_3 \dots = \phi (1) .
\end{align*}

Likewise, ##n'_2## is ##0## if ##b_2 = 2## and is ##2## if ##b_2 = 0## and so

\begin{align*}
.n'_1 n'_2 n'_3 \dots \not= . b_1 b_2 b_3 \dots = \phi (2) .
\end{align*}

And so on indefinitely. Since the base three expansions are unique, and since each ##n'_k## is defined so that it differs from the ##k##th digit of ##\phi (k)##, the number ##.n'_1 n'_2 n'_3 \dots## is not equal to any ##\phi (k)##, so ##\phi## cannot be surjective.

With the Cantor dust, the portion of the set lying in the first third ##[0, \tfrac{1}{3}]## of the original segment is an exact replica of the whole set at one-third the scale, and two such scaled-down copies are needed to cover the full set. Hence the Hausdorff–Besicovitch dimension is

\begin{align*}
d = \frac{\ln 2}{\ln 3} = 0.630929 \dots
\end{align*}
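The dimension can also be read off numerically from the construction: stage ##C_k## consists of ##2^k## intervals of length ##3^{-k}##, and the box-counting ratio ##\log N(k) / \log(1/\varepsilon_k)## is the same at every stage. A minimal Python illustration:

```python
from math import log

# C_k: N(k) = 2^k intervals, each of length eps_k = 3^(-k)
for k in (1, 5, 10):
    N = 2 ** k
    eps = 3 ** (-k)
    print(k, log(N) / log(1 / eps))  # ~0.6309 at every stage

# the constant ratio is ln 2 / ln 3
print(log(2) / log(3))  # ~0.6309297...
```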

julian said:
Problem #5: …
Right. You could have saved some time in the second part by observing that ##A_n## is simple for ##n>4## and ##S_n\cong A_n \rtimes \mathbb{Z}_2##, neither of which has a nontrivial centre. For ##n=4## the Klein four-group adds to the candidates for a centre, but ##V_4## isn't central in ##A_4##.

julian said:
Problem #8: …
Right. The result is known as Wolstenholme's theorem.

julian said:
Problem #6: …
The bound is called the Plotkin bound.
