Basic Math Challenge - September 2018

  • #1
fresh_42
Summer is coming and brings a new basic math challenge! Enjoy! For more advanced problems you can check our other intermediate level math challenge thread!

RULES:

a) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored. Solutions will be posted around the 15th of the following month.
b) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
c) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
d) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.
e) Mentors, advisors and homework helpers are kindly requested not to post solutions, not even in spoiler tags, for the challenge problems, until the 16th of each month. This gives other people, including but not limited to students, the opportunity to feel more comfortable in dealing with / solving the challenge problems. In case of an inadvertent posting of a solution the post will be deleted.

We still have quite a few old problems which remain open. As we think most of them aren't too difficult, we want to give them another try; one reason is that we want to find out why they have been left untouched. So in case you need a hint or two to tackle them, don't hesitate to ask! We also have some new questions. Our special thanks go to @Infrared who contributed some new questions.

QUESTIONS:

1. Let ##f## be a differentiable function on ##\mathbb{R}##. If ##f'## is invertible and ##(f')^{-1}## is differentiable on ##\mathbb{R}##, show that ##[I_A (f')^{-1} - f \circ (f')^{-1} ]' = (f')^{-1}##, where ##I_A## with ##I_A(x) = x## is the identity function ##I_A : A \to A##.

Hint: Let ##y = f'(x)## then ##(f')^{-1}(y) = x##. By differentiating, we can get to a useful result including the second derivative of ##f(x)##. Next, we can utilize ##[f\circ (f')^{-1}]'## and incorporate the identity function ##I_A##.

2. (solved by @nuuskur ) Given a nonnegative, monotone decreasing sequence ##(a_n)_{n \in \mathbb{N}}\subseteq \mathbb{R}\,.## Prove that ##\sum_{n \in \mathbb{N}}a_n## converges if and only if ##\sum_{n \in \mathbb{N}_0}2^na_{2^n}## converges.

3. Let's consider complex functions in one variable and especially the involutions
$$
\mathcal{I}=\{\, z\stackrel{p}{\mapsto} z\; , \; z\stackrel{q}{\mapsto} -z\; , \;z\stackrel{r}{\mapsto} z^{-1}\; , \;z\stackrel{s}{\mapsto}-z^{-1}\,\}
$$
We also consider the two functions $$\mathcal{J}=\{\,z\stackrel{u}{\longmapsto}\frac{1}{2}(-1+i \sqrt{3})z\; , \;z\stackrel{v}{\longmapsto}-\frac{1}{2}(1+i \sqrt{3})z\,\}$$
and the set ##\mathcal{F}=\langle\mathcal{I},\mathcal{J} \rangle## of functions which we get by combining any of them through consecutive application. We now define for ##\mathcal{K}\in \{\mathcal{I},\mathcal{J}\}## a relation on ##\mathcal{F}## by
$$
f(z) \sim_\mathcal{K} g(z)\, :\Longleftrightarrow \, (\forall \,h_1\in \mathcal{K})\,(\exists\,h_2\in \mathcal{K})\,: f(h_1(z))=g(h_2(z))
$$
  • Show that ##\sim_\mathcal{K}## defines an equivalence relation.
  • Show that ##\mathcal{F}/\sim_\mathcal{I}## admits a group structure on its equivalence classes by consecutive application.
  • Show that ##\mathcal{F}/\sim_\mathcal{J}## does not admit a group structure on its equivalence classes by consecutive applications.
Hint: Determine the groups ##F=\langle \mathcal{F}\rangle\, , \,I=\langle \mathcal{I} \rangle\, , \,J=\langle \mathcal{J} \rangle## and what distinguishes ##F/I## from ##F/J\,.##

4. There are ##r## sports 'enthusiasts' in a certain city. They are forming various teams to bet on upcoming events. A pair of people dominated last year, so there are new rules in place this year. The peculiar rules are:
  • each team must have an odd number of members
  • every two distinct teams must have an even number of members in common.
For the avoidance of doubt, nothing in the rules says a player can be on only one team.
With these rules in place, is it possible to form more than ##r## teams?

Hint: Model these rules with matrix multiplication and select a suitable field.

5. We define an equivalence relation on the topological two-dimensional unit sphere ##\mathbb{S}^2\subseteq \mathbb{R}^3## by ##x \sim y \Longleftrightarrow x \in \{\,\pm y\,\}## and the projection ##q\, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2/\sim \,.## Furthermore we consider the homeomorphism ##\tau \, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2## defined by ##\tau (x)=-x\,.## Note that for ##A \subseteq \mathbb{S}^2## we have ##q^{-1}(q(A))= A \cup \tau(A)\,.## Show that
  • ##q## is open and closed.
  • ##\mathbb{S}^2/\sim ## is compact, i.e. Hausdorff and every open cover has a finite subcover.
  • Let ##U_x=\{\,y\in \mathbb{S}^2\,:\,||y-x||<1\,\}## be an open neighborhood of ##x \in \mathbb{S}^2\,.## Show that ##U_x \cap U_{-x} = \emptyset \; , \;U_{-x}=\tau(U_x)\; , \;q(U_x)=q(U_{-x})## and ##q|_{U_{x}}## is injective. Conclude that ##q## is a covering.
Hint: For (a) consider ##O\cup \tau(O)## for an open set ##O\,.## For (b) use part (a) and that ##\mathbb{S}^2## is Hausdorff. For (c) a covering map is a local homeomorphism, so we need continuity, openness, closedness and bijectivity. Again use the previous parts.

6.
(solved by @PeroK ) Two tennis players have a match that consists of just one set. Larry will serve in the first game, and the first player to get ##12## games wins the match (whether ahead by only ##1## game or by ##2+## games - either way wins the match).
Larry and Moe have chances of ##0.75## and ##0.70##, respectively, of winning a game when they serve. Since Moe doesn't serve in the very first game, he gets to choose: should the servers alternate game by game, or should the winner of a given game serve the next game? What's the best choice for Moe?

7. How many times a day is it impossible to determine what time it is, if the clock's hour and minute hands have the same length (identical looking), supposing that we always know whether it is morning or evening (i.e. we know whether it is a.m. or p.m.)?

8. For the class of ##n \times n## matrices whose entries, if flattened and sorted, would be ##1, 2, 3, \ldots, n^2-1, n^2##, prove that there always exist two neighboring entries (in the same row or the same column) that differ by at least ##n##.

9. (solved by @Math_QED ) A fixed point of a function ##f:X\to X## is an element ##x\in X## satisfying ##f(x)=x##. Let ##f:\mathbb{R}\to\mathbb{R}## be a continuous function with no fixed points. Show that ##f\circ f## also has no fixed points.

10. (solved by @nuuskur ) Let ##f:\mathbb{R}\to\mathbb{R}## be differentiable and let ##a,b## be real numbers with ##a<b##. Show that if ##f'(a)<m<f'(b)##, then there exists some ##c\in (a,b)## such that ##f'(c)=m.##
 
Last edited:
  • Like
Likes nuuskur, Greg Bernhardt, YoungPhysicist and 1 other person
  • #2
I will solve Problem 7
I combine the hours and minutes to form a time that ranges from 0 to 1 over 12 hours: t = (h + m/60)/12 for h hours and m minutes. I will also set the range of hour-hand and minute-hand positions to be 0 to 1. Thus,
  • Hour hand: position a = t
  • Minute hand: position a = 12 t
I will generalize the minute hand's position to use an arbitrary positive integer R: a = R t.

If same-sized hour and minute hands give two valid times, then the hour hand for time t1 is the minute hand for time t2, and vice versa. Thus
  • ## t_1 = R t_2 \mod 1##
  • ## R t_1 = t_2 \mod 1##
Working modulo 1 means that we may add an arbitrary integer to either side:
  • ## t_1 = R t_2 + n_1##
  • ## R t_1 = t_2 + n_2##
This is a system of linear equations in the two times, and it has the solution
  • ## t_1 = (R n_2 - n_1)/(R^2 - 1)##
  • ## t_2 = (n_2 - R n_1)/(R^2 - 1)##
To summarize, ##t = N/(R^2 - 1)##, where N is an integer at least 0 and less than ##R^2 - 1##. It can thus have ##R^2 - 1## values.

We must exclude times of day when the hands' positions coincide. That happens when ##t = N/(R - 1)##, where N is an integer at least 0 and less than ##R - 1##. There are thus ##R - 1## such values, and the number of ambiguous-interpretation times is thus ##(R^2 - 1) - (R - 1) = R(R - 1)##.

For R = 12, the number of ambiguous-interpretation times is 132, and the number of coincident-hand times is 11.
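A quick brute-force cross-check of this count (a minimal Python sketch, not part of the original argument; it enumerates the candidate times ##t = N/(R^2-1)## and discards the coincident-hand ones):
[code]
from fractions import Fraction

R = 12                       # the minute hand turns R times as fast as the hour hand
denom = R * R - 1            # every candidate ambiguous time is a multiple of 1/(R^2 - 1)

count = 0
for N in range(denom):
    t1 = Fraction(N, denom)              # hour-hand position of the first reading
    t2 = (R * t1) % 1                    # its minute-hand position, read as an hour hand
    if (R * t2) % 1 == t1 and t1 != t2:  # swapping the two hands gives a different valid time
        count += 1

print(count)  # 132 ambiguous positions per 12-hour turn; the hands coincide 11 times
[/code]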
 
  • #3
lpetrich said:
I will solve Problem 7

In general the train of thought seems to work, though the stated answer is wrong. There are simpler approaches.

lpetrich said:
If same-sized hour and minute hands give two valid times, then the hour hand for time t1 is the minute hand for time t2, and vice versa.

Incidentally, I read this statement many times and could not figure out what it means. These are hour and minute hands on a non-broken clock, so I don't understand the purpose of "valid times". I couldn't figure out how to interpret ##t_1## vs ##t_2## vs ##t##. Defining ##t_1## and ##t_2## before using them would be quite helpful here.
 
  • #4
lpetrich said:
If same-sized hour and minute hands give two valid times, then the hour hand for time t1 is the minute hand for time t2, and vice versa.
StoneTemplePython said:
Incidentally, I read this statement many times and could not figure out what it means. These are hour and minute hands on a non-broken clock, so I don't understand the purpose of "valid times". I couldn't figure out how to interpret ##t_1## vs ##t_2## vs ##t##. Defining ##t_1## and ##t_2## before using them would be quite helpful here.

In this problem, we want to find times for which the two hands' positions can be interpreted as being for two different times. These times I called t1 and t2. Let us call one of the hands h1 and the other one h2. At time t1, h1 is the hour hand and h2 is the minute hand, and at time t2, h1 is the minute hand and h2 is the hour hand.
 
  • #5
Suppose [itex]a_n\geq 0, n\in\mathbb N[/itex] is monotone decreasing. Assume [itex]\sum _{k=0}^\infty 2^k a_{2^k} <\infty[/itex]. Due to monotonicity we have an upper bound
[tex]
\begin{align*}
\sum_{k=1}^\infty a_k &= a_1 + (a_2 + a_3) + \ldots + \left (a_{2^n} + a_{2^n+1} +\ldots + a_{2^{n+1}-1}\right ) + \ldots \\
&\leq a_1 + (a_2 + a_2) + \ldots + 2^n a_{2^n} + \ldots = \sum_{k=0}^\infty 2^k a_{2^k}<\infty .
\end{align*}
[/tex]
Thus [itex]\sum _{k=1}^\infty a_k<\infty[/itex]. Now, assume [itex]\sum _{k=1}^\infty a_k <\infty[/itex]. By further manipulating and using monotonicity
[tex]
\begin{align*}
\sum_{k=0}^\infty 2^k a_{2^k} &= a_1 + (a_2 + a_2) + \ldots + \left (\underbrace{a_{2^{n+1}} + \ldots + a_{2^{n+1}}}_{2^{n+1} \text{ summands}}\right )+\ldots\\ &(\text{now shift the brackets one position to the left}) \\
&= (a_1 + a_2) + (a_2 + a_4 + a_4 + a_4) + \ldots + \left ( \underbrace{a_{2^n} + a_{2^{n+1}} + \ldots + a_{2^{n+1}} }_{2^{n+1} \text{ summands}}\right )+\ldots \\
&\leq (a_1 + a_1) + (a_2 + a_2 + a_3 + a_3) + \ldots + \left ( 2a_{2^n} + 2a_{2^n +1} + \ldots + 2a_{2^{n+1}-1}\right ) + \ldots = 2\sum _{k=1}^\infty a_k <\infty.
\end{align*}
[/tex]
Not sure whether the condensed series can be estimated better than with a factor of two.
This is a fun one. I spent a whole lecture on some numerical-methods mumbo jumbo trying to manipulate some series. Just as I finished, the lecturer walked past and said "..aah, Cauchy condensation". Sure enough, then I googled it and it gave me exactly the same thing. :confused: Could have saved quite a bit of time :(
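For what it's worth, a minimal numerical illustration of the two estimates (not part of the proof), using ##a_n = 1/n^2## as a sample sequence:
[code]
def a(n):
    return 1.0 / n**2        # nonnegative, monotone decreasing sample sequence

N = 2**20
partial = sum(a(n) for n in range(1, N + 1))         # ~ pi^2/6 = 1.6449...
condensed = sum(2**k * a(2**k) for k in range(21))   # sum of 2^k a_{2^k}, ~ 2.0

print(partial, condensed, 2 * partial)  # here the middle value lies between the outer two
[/code]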
 
Last edited:
  • #6
nuuskur said:
Suppose ##a_n\geq 0##, ##n\in\mathbb N##, is monotone decreasing. [...] Just as I finished, the lecturer walked past and said "..aah, Cauchy condensation". Sure enough, then I googled it and it gave me exactly the same thing. :confused: Could have saved quite a bit of time :(
I could have said that it's Cauchy's condensation criterion, but I didn't want to. :-p

Your proof is correct, although I think that working with partial sums instead of infinitely many dots would have been a bit more professional.
 
  • #7
@fresh_42
It was easier to visualise with all the summands written out. By partial sums, do you mean something like
[tex]
\sum_{k=1}^\infty a_k = \sum _{j = 0}^\infty \sum_{k=2^j}^{2^{j+1} -1} a_k\quad ?
[/tex]
 
  • #8
nuuskur said:
@fresh_42
It was easier to visualise with all the summands written out. By partial sums, do you mean something like
[tex]
\sum_{k=1}^\infty a_k = \sum _{j = 0}^\infty \sum_{k=2^j}^{2^{j+1} -1} a_k\quad ?
[/tex]
No, I mean ##S_{n}=\sum_{k=1}^{n}a_k## and ##S_{2^n}##, so that convergence of the series is convergence of the sequence of partial sums.
 
  • Like
Likes nuuskur
  • #9
nuuskur said:
Suppose ##a_n\geq 0##, ##n\in\mathbb N##, is monotone decreasing. [...] Sure enough, then I googled it and it gave me exactly the same thing. :confused: Could have saved quite a bit of time :(

If you had googled that term, you would certainly have found a proof. It is a classic theorem, and is often used to show convergence/divergence of the hyperharmonic series, i.e. the ##p##-series (IMO more elegant than the integral test).
 
  • #10
Math_QED said:
If you had googled that term, you would certainly have found a proof. It is a classic theorem, and is often used to show convergence/divergence of the hyperharmonic series, i.e. the ##p##-series (IMO more elegant than the integral test).
You should have added that geometric series are basically the only application - at least from what I have read.
 
  • #11
fresh_42 said:
You should have added that geometric series are basically the only application - at least from what I have read.

It is also useful for deducing convergence/divergence of series with logarithms. For example, it can be used to show convergence/divergence of the Bertrand series.
 
  • #12
Math_QED said:
It is also useful for deducing convergence/divergence of series with logarithms. For example, it can be used to show convergence/divergence of the Bertrand series.
Nice idea for a future problem, thanks. :smile:
 
  • Like
Likes nuuskur and member 587159
  • #13
Solution for (9):

Assume ##f## has no fixed point. Then the graph of ##f## must lie either entirely above or entirely below the line ##y=x##; otherwise the intermediate value theorem for continuous functions would imply the existence of a fixed point of ##f##. More formally, we can apply this theorem to ##g: x \mapsto f(x) - x## to deduce the existence of a zero.

WLOG, assume the graph of ##f## lies above the line ##y=x##, i.e. ##g >0##. Then ##(f\circ f)(x) = f(f(x)) > f(x) > x## for every ##x##. In particular, ##(f \circ f)(x) = x## for no ##x##.
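A minimal numerical illustration of this argument (not part of the solution above), using the sample fixed-point-free function ##f(x) = x + e^{-x}##:
[code]
import math

def f(x):
    return x + math.exp(-x)   # f(x) - x = exp(-x) > 0, so f has no fixed points

# On a sample grid, f(f(x)) - x stays strictly positive, as f(f(x)) > f(x) > x predicts.
grid = [i / 10 for i in range(-100, 101)]
print(min(f(f(x)) - x for x in grid) > 0)   # True
[/code]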
 
  • Like
Likes nuuskur
  • #14
Math_QED said:
Solution for (9):

[...]
Yep, this is correct.
 
  • #15
archaic said:
try at problem 10
##f## is differentiable on ##\mathbb{R}## hence ##f'## is continuous on ##\mathbb{R}## particularly on ##[a,b]## and since ##f'(a) < m < f'(b)## then there must exist a ##c \in ]a,b[## such that ##f'(c) = m##

This is wrong. That ##f## is differentiable does not imply that ##f'## is continuous.

For a counterexample, see ##f: \mathbb{R} \to \mathbb{R}: x \mapsto \begin{cases} x^2 \sin(\frac{1}{x}) \quad x \neq 0 \\ 0 \quad x = 0 \end{cases}##
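For illustration, a minimal numerical sketch (not part of the original post): away from ##0## we have ##f'(x) = 2x\sin(1/x) - \cos(1/x)##, while ##f'(0)=0## from the difference quotient, and sampling the points ##x = 1/(2\pi k)## shows ##f'## staying near ##-1## arbitrarily close to ##0##:
[code]
import math

def fprime(x):
    # derivative of x^2 * sin(1/x) away from 0; f'(0) = 0 comes from the difference quotient
    return 2 * x * math.sin(1 / x) - math.cos(1 / x) if x != 0 else 0.0

for k in range(1, 6):
    x = 1 / (2 * math.pi * k)      # points tending to 0 where cos(1/x) = 1
    print(x, fprime(x))            # prints values of -1.0, far from f'(0) = 0
[/code]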
 
  • Like
Likes archaic
  • #16
nuuskur said:
It seems to me that they mean continuously differentiable. Otherwise, this claim need not hold.

Continuously differentiable isn't needed in (10). Differentiability is enough.
 
  • Like
Likes nuuskur
  • #17
Math_QED said:
Continuously differentiable isn't needed in (10). Differentiability is enough.
Yes, I realized that, my bad :sorry:
 
  • #18
Math_QED said:
This is wrong. That ##f## is differentiable does not imply that ##f'## is continuous.

For a counterexample, see ##f: \mathbb{R} \to \mathbb{R}: x \mapsto \begin{cases} x^2 \sin(\frac{1}{x}) \quad x \neq 0 \\ 0 \quad x = 0 \end{cases}##
oh thanks, i'll delete my solution
 
  • #19
archaic said:
oh thanks, i'll delete my solution

It seems to be a common mistake, so maybe let it be so other users can learn from it :)
 
  • #20
Math_QED said:
It seems to be a common mistake, so maybe let it be so other users can learn from it :)
too late wops
 
  • #21
Number 10 is a cute one.
Define ##g(x) = f(x) - mx##; then ##g## is differentiable (therefore continuous). For ##x=a## we have ##g'(a)<0##. Because of this, ##g## cannot attain its minimum at ##x=a##, for otherwise it would hold that for every ##x\in (a,b]##
[tex]
0\leq \frac{g(x)-g(a)}{x-a}\xrightarrow [x\to a+]{} g'(a) \implies g'(a)\geq 0.\quad (\text{impossible})
[/tex]
For an analogous reason, ##x=b## can't be a minimum point. The function ##g## must attain its minimum value (by the Weierstrass extreme value theorem) somewhere in ##(a,b)##, therefore for some ##c\in (a,b)## we have ##g'(c) = f'(c) - m = 0##.

So, we don't need continuous differentiability for the derivative to have the intermediate value property.
 
Last edited:
  • #22
nuuskur said:
Number 10 is a cute one.
Define ##g(x) = f(x) - mx##, then ##g## is differentiable (therefore continuous). For ##x=a## we have ##g'(a)<0 ## and for ##x=b## we have ##g'(b) > 0 ##. The function ##g ## must attain its extremum somewhere in ##(a,b) ##, therefore for some ## c\in (a,b)## we have ##g'(c) = f'(c) - m = 0 ##.

So, we don't need continuous differentiability for the derivative to have intermediate value property.

Why can't the extremum be achieved at either ##a## or ##b##? This needs justification.
 
  • #23
another try at 10 though
##f## differentiable on ##\mathbb{R}## ##\Rightarrow## ##f## is continuous on ##\mathbb{R}## particularly on ##[a,b]##.
##a<b## and ##f'(a) < f'(b) \Rightarrow f(a) < f(b)##
and since ##f## is continuous on ##[a,b]## and ##f'(a) < m < f'(b)## then there exists a ##c \in]a,b[## such that ##a<c<b \Rightarrow f(a) < f(c) < f(b) \Rightarrow f'(a) < f'(c) < f'(b)## with ##f'(c) = m##.

edit : ah i think i made the same mistake
 
  • #24
nuuskur said:
Number 10 is a cute one.
Define ##g(x) = f(x) - mx##, then ##g## is differentiable (therefore continuous). For ##x=a## we have ##g'(a)<0 ## and for ##x=b## we have ##g'(b) > 0 ##. The function ##g ## must attain its extremum somewhere in ##(a,b) ##, therefore for some ## c\in (a,b)## we have ##g'(c) = f'(c) - m = 0 ##.

So, we don't need continuous differentiability for the derivative to have intermediate value property.
Close, but [itex]g[/itex] can actually attain its maximum on the endpoints. The derivative information rules out that [itex]g[/itex] achieves its minimum on the endpoints. I'll give you credit as soon as you say why this is.
 
  • #25
archaic said:
another try at 10 though
##\ldots \Rightarrow f(a) < f(c) < f(b) \Rightarrow f'(a) < f'(c) < f'(b)## with ##f'(c) = m##.
This implication doesn't hold. [itex]f(p)>f(q)[/itex] doesn't imply [itex]f'(p)>f'(q)[/itex].
 
  • Like
Likes member 587159 and archaic
  • #26
Infrared said:
This implication doesn't hold. [itex]f(p)>f(q)[/itex] doesn't imply [itex]f'(p)>f'(q)[/itex].
right right right...
 
  • #27
Infrared said:
Close, but [itex]g[/itex] can actually attain its maximum on the endpoints. The derivative information rules out that [itex]g[/itex] achieves its minimum on the endpoints. I'll give you credit as soon as you say why this is.
Well, indeed, the derivative information rules out that possibility. I made an attempt at explaining it in #21.
 
  • #28
nuuskur said:
Well, indeed, the derivative information rules out that possibility. I made an attempt at explaining it in #21.
Okay, I didn't see your edit. It's right except that you should say [itex]x=b[/itex] can't be a minimum, not maximum. So, "extremum" should be replaced by just "minimum" (and in fact if all you could conclude was that one endpoint can't be a maximum and the other not a minimum, then you wouldn't have enough to solve the problem).
 
  • #29
Infrared said:
Okay, I didn't see your edit. It's right except that you should say [itex]x=b[/itex] can't be a minimum, not maximum. So, "extremum" should be replaced by just "minimum" (and in fact if all you could conclude was that one endpoint can't be a maximum and the other not a minimum, then you wouldn't have enough to solve the problem).
Indeed, I fixed those problems. For a similar reason minimum can't be attained at ##x=b##, therefore by Weierstrass and so on and so forth.
 
  • #30
The hint in problem (1) uses that ##f## must be twice differentiable, so shouldn't this be added to the hypothesis?
 
  • #31
Math_QED said:
The hint in problem (1) uses that ##f## must be twice differentiable, so shouldn't this be added to the hypothesis?

No. The hints are just some steps of a proposed route, meant to give some help to the potential solver(s). In order for the problem to be what is intended, its wording is kept as originally given.
 
Last edited:
  • #32
lpetrich said:
In this problem, we want to find times for which the two hands' positions can be interpreted as being for two different times. These times I called t1 and t2. Let us call one of the hands h1 and the other one h2. At time t1, h1 is the hour hand and h2 is the minute hand, and at time t2, h1 is the minute hand and h2 is the hour hand.

Belatedly, I'll mention that your original post was quite close -- the question asked for the result over a 24-hour day, but your answer didn't address this... I meant to say this much earlier, but I was on the road for a while at the beginning of the month and it slipped my mind.
 
  • #33
fresh_42 said:
6. Two tennis players have a match that consists of just one set. Larry will serve in the first game, and the first player to get ##12## games wins the match (whether ahead by only ##1## game or by ##2+## games - either way wins the match).
Larry and Moe have chances of ##0.75## and ##0.70##, respectively, of winning a game when they serve. Since Moe doesn't serve in the very first game, he gets to choose: should the servers alternate game by game, or should the winner of a given game serve the next game? What's the best choice for Moe?

I'll call the match where they serve alternately "normal" rules and the match where the winner of the previous game serves "special" rules.

First, an observation: under the special rules the better server is more likely to serve in a given game, and so his overall probability of winning a given game goes up. In general, therefore, in terms of points/games won the special rules must favour the better server.

But, interestingly, it makes no difference to the likelihood of being first to a set number of games. So, the answer is that neither choice is better than the other for Moe.

Let's take ##p## as the probability that Larry holds serve and ##q## as the probability that Moe holds serve. And imagine it's first to ##N## games.

First, I'll calculate the probability of Larry winning under normal rules. To do this, I'll extend the match to ##2N - 1## games, if necessary, so that Larry serves ##N## times and Moe ##N-1## times. The only criterion is that Larry holds serve more often than Moe: if Larry holds serve ##l## times and Moe ##m## times, Larry wins ##l + (N-1-m)## games, which is at least ##N## exactly when ##l > m##. With ##l## holds for Larry and ##m## for Moe, the probability that Larry wins the match is:
$$P_a = \sum_{l=1}^N \binom{N}{l} p^l (1-p)^{N-l} \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
Note that for ##l =N## we have:
$$P_a (l = N) = p^N \sum_{m=0}^{N-1} \binom{N-1}{m} q^m (1-q)^{N-1-m} = p^N (q + 1-q)^{N-1} = p^N$$
This gives us a slightly revised format for comparison with the second case.

Now, under the special rules (note that here I'll consider the match ending when a player gets to ##N## games) if Larry wins then he wins the last game and, therefore, both players must break serve an equal number of times. I.e. every time Larry loses serve, he must break back at some stage to win the match.

A typical Larry win then looks like ##l, m, N-l##, where ##N-l## is the number of breaks of serve each. First we consider the case where ##m=0##. Here we have ##N-l## pairs of games of the form ##ML## where ##M## is a Moe service break and ##L## is Larry immediately breaking back (as Moe never holds serve when ##m =0##). In addition we have another ##l## service holds. The probability, therefore, of a match of the form ##l, 0, N-l## is:
$$P_b(m=0;l) = \binom{N}{l} p^l (1-p)^{N-l} (1-q)^{N-l}$$
Now, if Larry wins we have the constraint that ##m < l##. And, for ##m \ne 0## we have ##m## Moe service holds that we can distribute along with any of the ##N-l## pairs of service breaks. E.g. if ##N = 5## and ##l = 3##, then we have ##2## pairs of games ##ML## in a typical Larry win with ##m = 0##. For ##m=2## we can place another two ##M##'s along with any of these pairs to give a sequence ##MML## or ##MMML## etc.

The number of ways of distributing ##m## games into ##N-l## "buckets" is ##\binom{N-l-1+m}{m}##.

Note that the case where Larry holds serve throughout is a degenerate case, where Moe never serves; it has probability ##p^N##.

This gives us the total probability of Larry winning under the special rules as:
$$P_b = p^N + \sum_{l=1}^{N-1} \binom{N}{l} p^l (1-p)^{N-l} \sum_{m=0}^{l-1} \binom{N-l-1+m}{m} q^m (1-q)^{N-l}$$
We can compare this with the revised formula above:
$$P_a = p^N + \sum_{l=1}^{N-1} \binom{N}{l} p^l (1-p)^{N-l} \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
To show these two are equal, we need the following binomial identity (for ##l = 1 \dots N-1##):
$$\sum_{m=0}^{l-1} \binom{N-l-1+m}{m} q^m (1-q)^{N-l} = \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
I have a slightly tedious proof by induction of this, which I can post if required.

Given this identity, we can see that Moe's chances of winning the match are the same with each set of rules.
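As an independent numerical cross-check of this conclusion (a minimal sketch, not part of the derivation above), Larry's match-winning probability can be computed under both rule sets by direct recursion over the score:
[code]
from functools import lru_cache

p, q, N = 0.75, 0.70, 12            # hold probabilities for Larry and Moe; first to N games

@lru_cache(maxsize=None)
def alternate(wL, wM, larry_serves):
    """P(Larry wins the match) when the serve alternates game by game."""
    if wL == N:
        return 1.0
    if wM == N:
        return 0.0
    win = p if larry_serves else 1 - q                    # P(Larry wins this game)
    return (win * alternate(wL + 1, wM, not larry_serves)
            + (1 - win) * alternate(wL, wM + 1, not larry_serves))

@lru_cache(maxsize=None)
def winner_serves(wL, wM, larry_serves):
    """P(Larry wins the match) when the winner of a game serves the next one."""
    if wL == N:
        return 1.0
    if wM == N:
        return 0.0
    win = p if larry_serves else 1 - q                    # P(Larry wins this game)
    return (win * winner_serves(wL + 1, wM, True)         # game winner serves next
            + (1 - win) * winner_serves(wL, wM + 1, False))

print(alternate(0, 0, True), winner_serves(0, 0, True))   # the two probabilities agree
[/code]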
 
  • Like
Likes StoneTemplePython
  • #34
Supplement for question 6:

We need the following binomial identity (for ##l = 1 \dots N-1##):
$$\sum_{m=0}^{l-1} \binom{N-l-1+m}{m} q^m (1-q)^{N-l} = \sum_{m=0}^{l-1} \binom{N-1}{m} q^m (1-q)^{N-1-m}$$
To simplify this slightly, we let ##n = N-1, k = l-1## and take out a common factor of ##(1-q)^{N-l}##, to give the required identity:
$$B(k) = \sum_{m=0}^{k} \binom{n-k+m-1}{m} q^m = \sum_{m=0}^{k} \binom{n}{m} q^m (1-q)^{k-m} = A(k)$$
We can see that ##A(0) = B(0) = 1## and ##A(1) = B(1) = 1 + (n-1)q##

For the inductive step we have:
$$A(k+1) = \sum_{m=0}^{k+1} \binom{n}{m} q^m (1-q)^{k+1-m} = (1-q)A(k) + \binom{n}{k+1} q^{k+1}$$
And:
$$B(k+1) = \sum_{m=0}^{k+1} \binom{n-k-1+m-1}{m} q^m = 1 + \sum_{m=1}^{k} \binom{n-k-1+m-1}{m} q^m + \binom{n-1}{k+1} q^{k+1}$$
$$ = 1 + \sum_{m=1}^{k} \binom{n-k+m-1}{m}q^m - \sum_{m=1}^{k} \binom{n-k-1+m-1}{m-1} q^m + \binom{n-1}{k+1} q^{k+1}$$
$$ = B(k) - \sum_{m=0}^{k-1} \binom{n-k-1+m}{m} q^{m+1} + \binom{n-1}{k+1} q^{k+1}$$
$$ = B(k) - q\sum_{m=0}^{k} \binom{n-k-1+m}{m} q^{m} + \binom{n-1}{k}q^{k+1} + \binom{n-1}{k+1} q^{k+1}$$
$$\therefore B(k+1) = (1-q)B(k) + \binom{n}{k+1}q^{k+1}$$
Hence, assuming ##A(k) = B(k)##, we have ##A(k+1) = B(k+1)##. QED
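A quick numerical sanity check of the identity ##A(k) = B(k)## over a range of sample values (a minimal sketch, independent of the induction above):
[code]
from math import comb

def A(n, k, q):
    return sum(comb(n, m) * q**m * (1 - q)**(k - m) for m in range(k + 1))

def B(n, k, q):
    return sum(comb(n - k + m - 1, m) * q**m for m in range(k + 1))

for n in range(2, 9):
    for k in range(n):                 # k = l - 1 with 1 <= l <= N - 1 and n = N - 1
        for q in (0.3, 0.5, 0.7):
            assert abs(A(n, k, q) - B(n, k, q)) < 1e-9
print("A(k) = B(k) for all sampled n, k, q")
[/code]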
 
Last edited:
  • #35
PeroK said:
I'll call the match where they serve alternately "normal" rules and the match where the winner of the previous game serves "special" rules.

First, an observation that under the special rules the better server is more likely to serve on a given game and so the overall probability of winning a given game must go up for him. In general, therefore, in terms of points/games won the special rules must favour the better server.

But, interestingly, it makes no difference to the likelihood of being first to a set number of games. So, the answer is that neither choice is better than the other for Moe.

Your conclusion is right, and it's a very subtle and peculiar problem. At a glance, the solution looks complete -- I'll work through the details of your solution tomorrow, I think.

I will tell you that I'm not that good at combinatorial manipulations -- so there is in fact at least one much simpler way of getting at this solution.
 
