# Basic Math Challenge - September 2018

Mentor
Summer is coming and brings a new basic math challenge! Enjoy! For more advanced problems you can check our other intermediate level math challenge thread!

RULES:

a) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored. Solutions will be posted around the 15th of the following month.
b) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
c) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
d) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.
e) Mentors, advisors and homework helpers are kindly requested not to post solutions, not even in spoiler tags, for the challenge problems, until the 16th of each month. This gives the opportunity to other people, including but not limited to students, to feel more comfortable in dealing with / solving the challenge problems. In case of an inadvertent posting of a solution, the post will be deleted.

We still have quite a few old problems which remain open. As we think that most of them aren't too difficult, we want to give them another try; one reason is that we want to find out why they have been left untouched. So in case you need a hint or two in order to tackle them, don't hesitate to ask! We also have some new questions. Our special thanks go to @Infrared, who contributed some new questions.

QUESTIONS:

1. Let ##f## be a differentiable function in ##\mathbb{R}##. If ##f'## is invertible and ##(f')^{-1}## is differentiable in ##\mathbb{R}##, show that ##[I_A (f')^{-1} - f \circ [(f')^{-1} ]]' = (f')^{-1}##, where ##I_A## with ##I_A(x) = x## is the identity function ##I_A : A \to A##.

Hint: Let ##y = f'(x)## then ##(f')^{-1}(y) = x##. By differentiating, we can get to a useful result including the second derivative of ##f(x)##. Next, we can utilize ##[f\circ (f')^{-1}]'## and incorporate the identity function ##I_A##.

2. (solved by @nuuskur ) Given a nonnegative, monotone decreasing sequence ##(a_n)_{n \in \mathbb{N}}\subseteq \mathbb{R}\,.## Prove that ##\sum_{n \in \mathbb{N}}a_n## converges if and only if ##\sum_{n \in \mathbb{N}_0}2^na_{2^n}## converges.

3. Let's consider complex functions in one variable and especially the involutions
$$\mathcal{I}=\{\, z\stackrel{p}{\mapsto} z\; , \; z\stackrel{q}{\mapsto} -z\; , \;z\stackrel{r}{\mapsto} z^{-1}\; , \;z\stackrel{s}{\mapsto}-z^{-1}\,\}$$
We also consider the two functions $$\mathcal{J}=\{\,z\stackrel{u}{\longmapsto}\frac{1}{2}(-1+i \sqrt{3})z\; , \;z\stackrel{v}{\longmapsto}-\frac{1}{2}(1+i \sqrt{3})z\,\}$$
and the set ##\mathcal{F}## of functions which we get, if we combine any of them: ##\mathcal{F}=\langle\mathcal{I},\mathcal{J} \rangle## by consecutive applications. We now define for ##\mathcal{K}\in \{\mathcal{I},\mathcal{J}\}## a relation on ##\mathcal{F}## by
$$f(z) \sim_\mathcal{K} g(z)\, :\Longleftrightarrow \, (\forall \,h_1\in \mathcal{K})\,(\exists\,h_2\in \mathcal{K})\,: f(h_1(z))=g(h_2(z))$$
• Show that ##\sim_\mathcal{K}## defines an equivalence relation.
• Show that ##\mathcal{F}/\sim_\mathcal{I}## admits a group structure on its equivalence classes by consecutive application.
• Show that ##\mathcal{F}/\sim_\mathcal{J}## does not admit a group structure on its equivalence classes by consecutive applications.
Hint: Determine the groups ##F=\langle \mathcal{F}\rangle\, , \,I=\langle \mathcal{I} \rangle\, , \,J=\langle \mathcal{J} \rangle## and what distinguishes ##F/I## from ##F/J\,.##

4. There are ##r## sports 'enthusiasts' in a certain city. They are forming various teams to bet on upcoming events. A pair of people dominated last year, so there are new rules in place this year. The peculiar rules are:
• each team must have an odd number of members
• every two teams must have an even number of members in common.
For the avoidance of doubt, nothing in the rules says a given player can only be on one team.
With these rules in place, is it possible to form more than ##r## teams?

Hint: Model these rules with matrix multiplication and select a suitable field.

5. We define an equivalence relation on the topological two-dimensional unit sphere ##\mathbb{S}^2\subseteq \mathbb{R}^3## by ##x \sim y \Longleftrightarrow x \in \{\,\pm y\,\}## and the projection ##q\, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2/\sim \,.## Furthermore we consider the homeomorphism ##\tau \, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2## defined by ##\tau (x)=-x\,.## Note that for ##A \subseteq \mathbb{S}^2## we have ##q^{-1}(q(A))= A \cup \tau(A)\,.## Show that
• ##q## is open and closed.
• ##\mathbb{S}^2/\sim ## is compact, i.e. Hausdorff and covering compact.
• Let ##U_x=\{\,y\in \mathbb{S}^2\,:\,||y-x||<1\,\}## be an open neighborhood of ##x \in \mathbb{S}^2\,.## Show that ##U_x \cap U_{-x} = \emptyset \; , \;U_{-x}=\tau(U_x)\; , \;q(U_x)=q(U_{-x})## and ##q|_{U_{x}}## is injective. Conclude that ##q## is a covering.
Hint: For (a) consider ##O\cup \tau(O)## for an open set ##O\,.## For (b) use part (a) and that ##\mathbb{S}^2## is Hausdorff. For (c): a covering map is a local homeomorphism, so we need continuity, openness, closedness and bijectivity. Again use the previous parts.

6.
(solved by @PeroK ) Two tennis players have a match that consists of just one set. Larry will serve in the first game, and the first player to get ##12## games wins the match (whether ahead by only ##1## game or ##2+## games - either way wins the match).
Larry and Moe have chances of ##0.75## and ##0.70##, respectively, of winning when they serve. Since Moe doesn't serve in the very first game, he gets to choose: should the servers alternate game by game, or should the winner of a given game serve in the next game? What's the best choice for Moe?

7. How many times a day is it impossible to determine what time it is, if you have a clock with identical-looking hour and minute hands of the same length, supposing that we always know whether it's morning or evening (i.e. we know whether it's a.m. or p.m.)?

8. For the class of ##n \times n## matrices whose entries, if flattened and sorted, would be ##1, 2, 3, \ldots, n^2-1, n^2##, prove that there always exist two neighboring entries (in the same row or same column) that differ by at least ##n##.

9. (solved by @Math_QED ) A fixed point of a function ##f:X\to X## is an element ##x\in X## satisfying ##f(x)=x##. Let ##f:\mathbb{R}\to\mathbb{R}## be a continuous function with no fixed points. Show that ##f\circ f## also has no fixed points.

10. (solved by @nuuskur ) Let ##f:\mathbb{R}\to\mathbb{R}## be differentiable and let ##a,b## be real numbers with ##a<b##. Show that if ##f'(a)<m<f'(b)##, then there exists some ##c\in (a,b)## such that ##f'(c)=m.##


## Answers and Replies

lpetrich
I will solve Problem 7
I combine the hours and minutes to form a time that ranges from 0 to 1 over 12 hours: t = (h + m/60)/12 for h hours and m minutes. I will also set the range of hour-hand and minute-hand positions to be 0 to 1. Thus,
• Hour hand: position a = t
• Minute hand: position a = 12 t
I will generalize the minute hand's position to use an arbitrary positive integer R: a = R t.

If same-sized hour and minute hands give two valid times, then the hour hand for time t1 is the minute hand for time t2, and vice versa. Thus
• ## t_1 = R t_2 \mod 1##
• ## R t_1 = t_2 \mod 1##
Working modulo 1 means we may add an arbitrary integer:
• ## t_1 = R t_2 + n_1##
• ## R t_1 = t_2 + n_2##
This is a system of linear equations in the two times, and it has the solution
• ## t_1 = (R n_2 - n_1)/(R^2 - 1)##
• ## t_2 = (n_2 - R n_1)/(R^2 - 1)##
To summarize, ##t = N/(R^2 - 1)##, where N is an integer at least 0 and less than ##R^2 - 1##. It can thus have ##R^2 - 1## values.

We must exclude times of day when the hands' positions coincide. That happens when ##t = N/(R - 1)##, where N is an integer at least 0 and less than ##R - 1##. There are thus ##R - 1## such values, and the number of ambiguous-interpretation times is thus ##(R^2 - 1) - (R - 1) = R(R - 1)##.

For R = 12, the number of ambiguous-interpretation times is 132, and the number of coincident-hand times is 11.
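The counting above can be checked with a short numerical sketch (variable names are mine, not from the post): enumerate the candidate times ##t = N/(R^2-1)## over one 12-hour cycle and split off the coincident-hand times, which occur exactly when ##N## is a multiple of ##R+1##.

```python
# Candidate times are t = N/(R^2 - 1); the fraction reduces to M/(R - 1),
# i.e. both hands coincide, exactly when N is a multiple of R + 1.
R = 12
ambiguous = coincident = 0
for N in range(R * R - 1):
    if N % (R + 1) == 0:
        coincident += 1   # hands coincide: no ambiguity
    else:
        ambiguous += 1    # two distinct valid readings
print(coincident, ambiguous)  # prints: 11 132
```

This reproduces the counts stated in the post, per 12-hour cycle.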

Gold Member
I will solve Problem 7

In general, the train of thought seems to work, though the stated answer is wrong. There are simpler approaches.

If same-sized hour and minute hands give two valid times, then the hour hand for time t1 is the minute hand for time t2, and vice versa.

Incidentally, I read this statement many times and could not figure out what it means. These are hour and minute hands on a non-broken clock, so I don't understand the purpose of "valid times". I couldn't figure out how to interpret ##t_1## vs. ##t_2## vs. ##t##. Defining ##t_1## and ##t_2## before using them would be quite helpful here.

lpetrich
lpetrich said:
If same-sized hour and minute hands give two valid times, then the hour hand for time t1 is the minute hand for time t2, and vice versa.
Incidentally, I read this statement many times and could not figure out what it means. These are hour and minute hands on a non-broken clock, so I don't understand the purpose of "valid times". I couldn't figure out how to interpret ##t_1## vs. ##t_2## vs. ##t##. Defining ##t_1## and ##t_2## before using them would be quite helpful here.

In this problem, we want to find times for which the two hands' positions can be interpreted as being for two different times. These times I called t1 and t2. Let us call one of the hands h1 and the other one h2. At time t1, h1 is the hour hand and h2 is the minute hand, and at time t2, h1 is the minute hand and h2 is the hour hand.

nuuskur
Suppose ##a_n\geq 0##, ##n\in\mathbb N##, is monotone decreasing. Assume ##\sum _{k=0}^\infty 2^k a_{2^k} <\infty##. Due to monotonicity we have an upper bound
\begin{align*} \sum_{k=1}^\infty a_k &= a_1 + (a_2 + a_3) + \ldots + \left (a_{2^n} + a_{2^n+1} +\ldots + a_{2^{n+1}-1}\right ) + \ldots \\ &\leq a_1 + (a_2 + a_2) + \ldots + 2^n a_{2^n} + \ldots = \sum_{k=0}^\infty 2^k a_{2^k}<\infty . \end{align*}
Thus ##\sum _{k=1}^\infty a_k<\infty##. Now assume ##\sum _{k=1}^\infty a_k <\infty##. By further manipulation, using monotonicity,
\begin{align*} \sum_{k=0}^\infty 2^k a_{2^k} &= a_1 + (a_2 + a_2) + \ldots + \left (\underbrace{a_{2^{n+1}} + \ldots + a_{2^{n+1}}}_{2^{n+1} \text{ summands}}\right )+\ldots\\ &(\text{now shift the brackets one unit to the left}) \\ &= (a_1 + a_2) + (a_2 + a_4 + a_4 + a_4) + \ldots + \left ( \underbrace{a_{2^n} + a_{2^{n+1}} + \ldots + a_{2^{n+1}} }_{2^{n+1} \text{ summands}}\right )+\ldots \\ &\leq (a_1 + a_1) + (a_2 + a_2 + a_3 + a_3) + \ldots + \left ( 2a_{2^n} + 2a_{2^n +1} + \ldots + 2a_{2^{n+1}-1}\right ) + \ldots = 2\sum _{k=1}^\infty a_k <\infty. \end{align*}
Not sure if the sum of the condensed series can be bounded better than with a factor of two.
This is a fun one. I spent the whole lecture on some numerical methods mumbo jumbo trying to manipulate the series. Just as I finished, the lecturer walked past and said "...aah, Cauchy condensation". Sure enough, I then googled it and it gave me exactly the same thing. Could have saved quite a bit of time :(
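As a numerical illustration (not a proof; the example sequence is my assumption): for ##a_n = 1/n^2## the two-sided bound ##\sum a_k \le \sum 2^k a_{2^k} \le 2\sum a_k## can be checked directly, since ##\sum 1/n^2 = \pi^2/6 \approx 1.645## while the condensed series is the geometric series ##\sum (1/2)^k = 2##.

```python
# Condensation bounds for a_n = 1/n^2: the condensed series
# sum 2^k * a_{2^k} = sum 2^k / 4^k = sum (1/2)^k is geometric.
K = 20
direct = sum(1/n**2 for n in range(1, 2**K))           # partial sum of sum a_n
condensed = sum(2**k * (1/2**k)**2 for k in range(K))  # partial sum of sum 2^k a_{2^k}
assert direct <= condensed <= 2 * direct               # the two-sided bound
```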

Mentor
nuuskur said: (proof of problem 2, quoted in full above)
I could have said that it's Cauchy's condensation criterion, but I didn't want to.

Your proof is correct, although I think that working with partial sums instead of infinitely many dots would have been a bit more professional.

nuuskur
@fresh_42
It was easier to visualise with all the summands written out. By partial sums do you mean something like
$$\sum_{k=1}^\infty a_k = \sum _{j = 0}^\infty \sum_{k=2^j}^{2^{j+1} -1} a_{k}\quad ?$$

Mentor
@fresh_42
It was easier to visualise with all the summands written out. By partial sums do you mean something like
$$\sum_{k=1}^\infty a_k = \sum _{j = 0}^\infty \sum_{k=2^j}^{2^{j+1} -1} a_{k}\quad ?$$
No, I mean ##S_{n}=\sum_{k=1}^{n}a_k## and ##S_{2^n}##, so that convergence of the series is convergence of the sequence of partial sums.

nuuskur
nuuskur said: (proof of problem 2, quoted in full above)

If you had googled that term, you would have found a proof without any doubt. It is a classic theorem, and is often used to show convergence/divergence of the hypergeometric series (IMO more elegant than the integral test).

Mentor
If you had googled that term, you would have found a proof without any doubt. It is a classic theorem, and is often used to show convergence/divergence of the hypergeometric series (IMO more elegant than the integral test).
You should have added that geometric series are basically the only application - at least from what I have read.

You should have added that geometric series are basically the only application - at least from what I have read.

It is also useful to deduce convergence/divergence of series with logarithms. For example, it can be used to show convergence/divergence of the Bertrand series.

Mentor
It is also useful to deduce convergence/divergence of series with logarithms. For example, it can be used to show convergence/divergence of the Bertrand series.
Nice idea for a future problem, thanks.

Solution for (9):

Assume ##f## has no fixed point. Then the graph of ##f## must lie either entirely above or entirely below the line ##y=x##; otherwise the intermediate value theorem for continuous functions would imply the existence of a fixed point of ##f##. More formally, we can apply this theorem to ##g: x \mapsto f(x) - x## to deduce the existence of a zero.

Wlog, assume the graph of ##f## lies above the line ##y=x##, i.e. ##g > 0##. Then ##f\circ f(x) = f(f(x)) > f(x) > x## for every ##x##. In particular, ##f \circ f(x) = x## for no ##x##.
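A quick sanity check of the argument with a concrete function (my example, not from the thread): ##f(x) = x + e^{-x}## has ##f(x) > x## everywhere since ##e^{-x} > 0##, so neither ##f## nor ##f\circ f## can have a fixed point.

```python
import math

# f(x) = x + exp(-x): since exp(-x) > 0, the graph lies strictly above y = x,
# so f has no fixed point; then f(f(x)) > f(x) > x, so f o f has none either.
def f(x):
    return x + math.exp(-x)

for x in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    assert f(x) > x
    assert f(f(x)) > x   # in particular f(f(x)) != x
```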

Gold Member
(quoting the solution for (9) above)
Yep, this is correct.

try at problem 10
##f## is differentiable on ##\mathbb{R}## hence ##f'## is continuous on ##\mathbb{R}## particularly on ##[a,b]## and since ##f'(a) < m < f'(b)## then there must exist a ##c \in ]a,b[## such that ##f'(c) = m##

This is wrong: ##f## being differentiable does not imply that ##f'## is continuous.

For a counterexample, see ##f: \mathbb{R} \to \mathbb{R}: x \mapsto \begin{cases} x^2 \sin(\frac{1}{x}) \quad x \neq 0 \\ 0 \quad x = 0 \end{cases}##

archaic
It seems to me that they mean continuously differentiable. Otherwise, this claim need not hold.

Continuously differentiable isn't needed in (10). Differentiability is enough.

nuuskur
Continuously differentiable isn't needed in (10). Differentiability is enough.
Yes, I realised that, my bad

archaic
This is wrong: ##f## being differentiable does not imply that ##f'## is continuous.

For a counterexample, see ##f: \mathbb{R} \to \mathbb{R}: x \mapsto \begin{cases} x^2 \sin(\frac{1}{x}) \quad x \neq 0 \\ 0 \quad x = 0 \end{cases}##
oh thanks, i'll delete my solution

oh thanks, i'll delete my solution

It seems to be a common mistake, so maybe let it be so other users can learn from it :)

archaic
It seems to be a common mistake, so maybe let it be so other users can learn from it :)
too late wops

nuuskur
Number 10 is a cute one.
Define ##g(x) = f(x) - mx##; then ##g## is differentiable (therefore continuous). For ##x=a## we have ##g'(a)<0##. Because of this, ##g## can't attain its minimum at ##x=a##, for otherwise we would have, for every ##x\in (a,b]##,
$$0\leq \frac{g(x)-g(a)}{x-a}\xrightarrow [x\to a+]{} g'(a) \implies g'(a)\geq 0.\quad (\text{impossible})$$
For analogous reasons, ##x=b## can't be a minimum point either. The function ##g## must attain its minimum value (by the Weierstrass extreme value theorem) somewhere in ##(a,b)##; therefore for some ##c\in (a,b)## we have ##g'(c) = f'(c) - m = 0##.

So, we don't need continuous differentiability for the derivative to have the intermediate value property.
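The minimization argument can be sketched numerically (the function, interval and ##m## below are my assumptions, not from the thread): take ##f(x) = x^2\sin(1/x)##, ##f(0)=0##, whose derivative exists everywhere but is discontinuous at ##0##, and locate the interior minimizer of ##g(x) = f(x) - mx## by a crude grid search.

```python
import math

# f(x) = x^2 sin(1/x), f(0) = 0: differentiable everywhere, f' discontinuous at 0.
def fprime(x):
    return 2*x*math.sin(1/x) - math.cos(1/x) if x else 0.0

a, b = 2/(3*math.pi), 1/math.pi   # f'(a) = -4/(3*pi) < 0, f'(b) = 1
m = 0.3                           # f'(a) < m < f'(b)

def g(x):
    return x*x*math.sin(1/x) - m*x

# The derivative signs at the endpoints force the minimizer of g into the
# interior, where g'(c) = f'(c) - m = 0, i.e. f'(c) = m.
n = 200_000
c = min((a + (b - a)*i/n for i in range(n + 1)), key=g)
assert a < c < b and abs(fprime(c) - m) < 1e-3
```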

Number 10 is a cute one.
Define ##g(x) = f(x) - mx##, then ##g## is differentiable (therefore continuous). For ##x=a## we have ##g'(a)<0 ## and for ##x=b## we have ##g'(b) > 0 ##. The function ##g ## must attain its extremum somewhere in ##(a,b) ##, therefore for some ## c\in (a,b)## we have ##g'(c) = f'(c) - m = 0 ##.

So, we don't need continuous differentiability for the derivative to have intermediate value property.

Why can't the extremum be achieved at either ##a## or ##b##? This needs justification.

archaic
another try at 10 though
##f## differentiable on ##\mathbb{R}## ##\Rightarrow## ##f## is continuous on ##\mathbb{R}##, particularly on ##[a,b]##.
##a<b## and ##f'(a) < f'(b) \Rightarrow f(a) < f(b)##
and since ##f## is continuous on ##[a,b]## and ##f'(a) < m < f'(b)## then there exists a ##c \in]a,b[## such that ##a<c<b \Rightarrow f(a) < f(c) < f(b) \Rightarrow f'(a) < f'(c) < f'(b)## with ##f'(c) = m##.

edit : ah i think i made the same mistake

Gold Member
Number 10 is a cute one.
Define ##g(x) = f(x) - mx##, then ##g## is differentiable (therefore continuous). For ##x=a## we have ##g'(a)<0 ## and for ##x=b## we have ##g'(b) > 0 ##. The function ##g ## must attain its extremum somewhere in ##(a,b) ##, therefore for some ## c\in (a,b)## we have ##g'(c) = f'(c) - m = 0 ##.

So, we don't need continuous differentiability for the derivative to have intermediate value property.
Close, but ##g## can actually attain its maximum at the endpoints. The derivative information rules out that ##g## achieves its minimum at the endpoints. I'll give you credit as soon as you say why this is.

Gold Member
another try at 10 though
##\Rightarrow f(a) < f(c) < f(b) \Rightarrow f'(a) < f'(c) < f'(b)## with ##f'(c) = m##.
This implication doesn't hold: ##f(p)>f(q)## doesn't imply ##f'(p)>f'(q)##.

archaic
This implication doesn't hold: ##f(p)>f(q)## doesn't imply ##f'(p)>f'(q)##.
right right right...

nuuskur
Close, but ##g## can actually attain its maximum at the endpoints. The derivative information rules out that ##g## achieves its minimum at the endpoints. I'll give you credit as soon as you say why this is.
Well, indeed, the derivative information rules out that possibility. I made an attempt at explaining it in #21.

Gold Member
Well, indeed, the derivative information rules out that possibility. I made an attempt at explaining it in #21.
Okay, I didn't see your edit. It's right, except that you should say ##x=b## can't be a minimum, not a maximum. So "extremum" should be replaced by just "minimum" (and in fact, if all you could conclude was that one endpoint can't be a maximum and the other not a minimum, then you wouldn't have enough to solve the problem).

nuuskur
Okay, I didn't see your edit. It's right, except that you should say ##x=b## can't be a minimum, not a maximum. So "extremum" should be replaced by just "minimum" (and in fact, if all you could conclude was that one endpoint can't be a maximum and the other not a minimum, then you wouldn't have enough to solve the problem).
Indeed, I fixed those problems. For a similar reason the minimum can't be attained at ##x=b##; therefore by Weierstrass, and so on and so forth.

The hint in problem (1) uses that ##f## must be twice differentiable, so shouldn't this be added to the hypothesis?

Gold Member
The hint in problem (1) uses that ##f## must be twice differentiable, so shouldn't this be added to the hypothesis?

No. The hints are just some steps of a proposed route, in order to give some help to the potential solver(s). In order for the problem to be what is intended, its wording is kept as originally given.

Gold Member
In this problem, we want to find times for which the two hands' positions can be interpreted as being for two different times. These times I called t1 and t2. Let us call one of the hands h1 and the other one h2. At time t1, h1 is the hour hand and h2 is the minute hand, and at time t2, h1 is the minute hand and h2 is the hour hand.

Belatedly, I'll mention that your original post was quite close: the question asked for the result over the 24 hours in a day, but your answer didn't address this. I meant to say this much earlier, but I was on the road at the beginning of the month and it slipped my mind.

Homework Helper
6. Two tennis players have a match that consists of just one set. Larry will serve in the first game, and the first player to get ##12## games wins the match (whether ahead by only ##1## game or ##2+## games - either way wins the match).
Larry and Moe have chances of ##0.75## and ##0.70##, respectively, of winning when they serve. Since Moe doesn't serve in the very first game, he gets to choose: should the servers alternate game by game, or should the winner of a given game serve in the next game? What's the best choice for Moe?

I'll call the match where they serve alternately "normal" rules and the match where the winner of the previous game serves "special" rules.

First, an observation that under the special rules the better server is more likely to serve on a given game and so the overall probability of winning a given game must go up for him. In general, therefore, in terms of points/games won the special rules must favour the better server.

But, interestingly, it makes no difference to the likelihood of being first to a set number of games. So, the answer is that neither choice is better than the other for Moe.

Let's take ##p## as the probability Larry holds serve and ##q## the probability that Moe holds serve. And imagine it's first to ##N## games.

First, I'll calculate the probability of Larry winning under normal rules. To do this, I'll extend the match to ##2N - 1## games, if necessary, so that Larry serves ##N## times and Moe ##N-1## times. The only criterion is that Larry holds serve more often than Moe. If Larry holds serve ##l## times and Moe ##m## times, then we have the probability that Larry wins the match:
$$P_a = \sum_{l=1}^N \binom{N}{l} p^l (1-p)^{N-l} \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
Note that for ##l =N## we have:
$$P_a (l = N) = p^N \sum_{m=0}^{N-1} \binom{N-1}{m} q^m (1-q)^{N-1-m} = p^N (q + 1-q)^{N-1} = p^N$$
Which gives us a slightly revised format for comparison with the second case.

Now, under the special rules (note that here I'll consider the match ending when a player gets to ##N## games) if Larry wins then he wins the last game and, therefore, both players must break serve an equal number of times. I.e. every time Larry loses serve, he must break back at some stage to win the match.

A typical Larry win then looks like ##l, m, N-l##, where ##N-l## is the number of breaks of serve each. First we consider the case where ##m=0##. Here we have ##N-l## pairs of games of the form ##ML## where ##M## is a Moe service break and ##L## is Larry immediately breaking back (as Moe never holds serve when ##m =0##). In addition we have another ##l## service holds. The probability, therefore, of a match of the form ##l, 0, N-l## is:
$$P_b(m=0;l) = \binom{N}{l} p^l (1-p)^{N-l} (1-q)^{N-l}$$
Now, if Larry wins we have the constraint that ##m < l##. And, for ##m \ne 0## we have ##m## Moe service holds that we can distribute along with any of the ##N-l## pairs of service breaks. E.g. if ##N = 5## and ##l = 3##, then we have ##2## pairs of games ##ML## in a typical Larry win with ##m = 0##. For ##m=2## we can place another two ##M##'s along with any of these pairs to give a sequence ##MML## or ##MMML## etc.

The number of ways of distributing ##m## games into ##N-l## "buckets" is ##\binom{N-l-1+m}{m}##.

Note that the case where Larry holds serve throughout is a degenerate case, where Moe never serves; it has probability ##p^N##.

This gives us the total probability of Larry winning under the special rules as:
$$P_b = p^N + \sum_{l=1}^{N-1} \binom{N}{l} p^l (1-p)^{N-l} \sum_{m=0}^{l-1} \binom{N-l-1+m}{m} q^m (1-q)^{N-l}$$
We can compare this with the revised formula above:
$$P_a = p^N + \sum_{l=1}^{N-1} \binom{N}{l} p^l (1-p)^{N-l} \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
To show these two are equal, we need the following binomial identity (for ##l = 1 \dots N-1##):
$$\sum_{m=0}^{l-1} \binom{N-l-1+m}{m} q^m (1-q)^{N-l} = \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
I have a slightly tedious proof by induction of this, which I can post if required.

Otherwise, we can see that Moe's chances of winning the match are the same with each set of rules.
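The conclusion can be cross-checked with a short dynamic program (a sketch; the function and variable names are mine): compute Larry's probability of being first to ##N = 12## games under both serving schemes and compare.

```python
from functools import lru_cache

p, q, N = 0.75, 0.70, 12   # hold probabilities for Larry and Moe; games to win

@lru_cache(maxsize=None)
def win_prob(wl, wm, larry_serves, winner_keeps_serve):
    """P(Larry is first to N games) from score wl-wm with the given server."""
    if wl == N:
        return 1.0
    if wm == N:
        return 0.0
    w = p if larry_serves else 1 - q   # P(Larry takes this game)
    if winner_keeps_serve:             # special rules: winner serves next
        after_larry_win, after_moe_win = True, False
    else:                              # normal rules: servers alternate
        after_larry_win = after_moe_win = not larry_serves
    return (w * win_prob(wl + 1, wm, after_larry_win, winner_keeps_serve)
            + (1 - w) * win_prob(wl, wm + 1, after_moe_win, winner_keeps_serve))

normal = win_prob(0, 0, True, False)
special = win_prob(0, 0, True, True)
assert abs(normal - special) < 1e-9   # Moe's choice makes no difference
```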

Homework Helper
Supplement for question 6:

We need the following binomial identity (for ##l = 1 \dots N-1##):
$$\sum_{m=0}^{l-1} \binom{N-l-1+m}{m} q^m (1-q)^{N-l} = \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
To simplify this slightly, we let ##n = N-1, k = l-1## and take out a common factor of ##(1-q)^{N-l}##, to give the required identity:
$$B(k) = \sum_{m=0}^{k} \binom{n-k+m-1}{m} q^m = \sum_{m=0}^{k} \binom{n}{m} q^m (1-q)^{k-m} = A(k)$$
We can see that ##A(0) = B(0) = 1## and ##A(1) = B(1) = 1 + (n-1)q##

For the inductive step we have:
$$A(k+1) = \sum_{m=0}^{k+1} \binom{n}{m} q^m (1-q)^{k+1-m} = (1-q)A(k) + \binom{n}{k+1} q^{k+1}$$
And:
$$B(k+1) = \sum_{m=0}^{k+1} \binom{n-k-1+m-1}{m} q^m = 1 + \sum_{m=1}^{k} \binom{n-k-1+m-1}{m} q^m + \binom{n-1}{k+1} q^{k+1}$$
$$= 1 + \sum_{m=1}^{k} \binom{n-k+m-1}{m}q^m - \sum_{m=1}^{k} \binom{n-k-1+m-1}{m-1} q^m + \binom{n-1}{k+1} q^{k+1}$$
$$= B(k) - \sum_{m=0}^{k-1} \binom{n-k-1+m}{m} q^{m+1} + \binom{n-1}{k+1} q^{k+1}$$
$$= B(k) - q\sum_{m=0}^{k} \binom{n-k-1+m}{m} q^{m} + \binom{n-1}{k}q^{k+1} + \binom{n-1}{k+1} q^{k+1}$$
$$\therefore B(k+1) = (1-q)B(k) + \binom{n}{k+1}q^{k+1}$$
Hence, assuming ##A(k) = B(k)##, we have ##A(k+1) = B(k+1)##. QED
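The identity ##A(k) = B(k)## can also be confirmed numerically for sample values (a sketch; the ranges and ##q## are my assumptions):

```python
from math import comb

# Check A(k) = B(k) for sample n and q, with k = 0 .. n-1 as in the proof above.
def A(n, k, q):
    return sum(comb(n, m) * q**m * (1 - q)**(k - m) for m in range(k + 1))

def B(n, k, q):
    return sum(comb(n - k + m - 1, m) * q**m for m in range(k + 1))

q = 0.7
for n in range(1, 12):
    for k in range(n):
        assert abs(A(n, k, q) - B(n, k, q)) < 1e-12
```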
