# Math Challenge - October 2020

In summary, the conversation covers topics in functional analysis, algebras, measure theory, differential geometry, calculus, optimization, algorithms, integration, and Lie algebras. The conversation also includes a discussion on a sequence of real numbers, the concept of a quotient map, the pseudosphere, and the Schwarz-Pick theorem. The conversation concludes with problems on bounded open intervals, sequences and primes, and a mathematical inequality.

#### fresh_42

Summary:: Functional Analysis, Algebras, Measure Theory, Differential Geometry, Calculus, Optimization, Algorithms, Integration, Lie Algebras.

1. (solved by @julian ) Let ##(a_n)\subseteq\mathbb{R}## be a sequence of real numbers such that ##a_n \leq n^{-3}## for all ##n\in \mathbb{N}.## Given the family ##\mathcal{A}## of functions ##f_n\, : \,[0,1]\longrightarrow \mathbb{R}## defined by ##f_n(x)=\sum_{k=n}^\infty a_k\sin(kx)## for ##n\in \mathbb{N},## show that every sequence ##(g_n)\subseteq\mathcal{A}## contains a subsequence ##(g_{n_k})## which converges uniformly on ##[0,1].##

2. (a) Show that if a ##*##-algebra ##A## admits a complete ##C^*##-norm, then it is the only ##C^*##-norm on ##A##.
(b) Let ##A## be a ##*##-algebra. Show that there is a ##*##-isomorphism ##M_n(\Bbb{C}) \otimes A \cong M_n(A)##.
(c) Deduce that the ##C^*##-algebra ##M_n(\Bbb{C})## is nuclear for all ##n \geq 1##. (MQ)

3. Give an example of a Riemann integrable function that is not Borel measurable. (MQ)

4. Let ##C## be the Cantor set. Show that ##\frac{1}{2}C + \frac{1}{2}C = [0,1]##. Deduce that a sum of sets of measure ##0## need not have measure ##0##. (MQ)

5. Let ##\pi\, : \,\mathbb{R}^n\longrightarrow \mathbb{T}^n## be the canonical projection and ##f:=\pi|_{[0,1]^n}## its restriction to the closed unit cube. Show, with the help of ##f\, : \,[0,1]^n\longrightarrow\mathbb{T}^n##, that a quotient map in general does not have to be open.

6. Let ##D=\{\,z\in \mathbb{C}\, : \,|z|<1\,\}## be the complex open unit disk and let ##0<a<1## be a real number. Suppose ##f\, : \,D\longrightarrow \mathbb{C}## is a holomorphic function such that ##f(a)=1## and ##f(-a)=-1.##
(a) Prove that ##\sup_{z\in D}\{|f(z)|\}\geq \dfrac{1}{a}.##
(b) Prove that if ##f## has no root, then ##\sup_{z\in D}\{|f(z)|\}\geq \exp\left(\dfrac{1-a^2}{4a}\,\pi\right).##

7. (solved by @julian )
(a) Let ##0<p\leq a,b,c,d,e \leq q## and show that
$$(a+b+c+d+e)\left(\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c}+\dfrac{1}{d}+\dfrac{1}{e}\right) \leq 25+6\left(\sqrt{\dfrac{p}{q}}-\sqrt{\dfrac{q}{p}}\right)^2.$$
(b) This is a special case of a general inequality. Which is the general case and how is it proven?

8. Let ##n>1## be an integer. There are ##n## lamps ##L_0,\ldots,L_{n-1}## arranged in a circle. Each lamp is either ON ##(1)## or OFF ##(0)##. A sequence of steps ##S_0,\ldots,S_i,\ldots## is carried out. Step ##S_j## affects the state of ##L_j## only (leaving the states of all other lamps unaltered) as follows:
If ##L_{j-1}## is ON, ##S_j## changes the state of ##L_j## from ON to OFF or from OFF to ON.
If ##L_{j-1}## is OFF, ##S_j## leaves the state of ##L_j## unchanged.
The lamps are labeled modulo ##n##, that is ##L_{-1}=L_{n-1}, L_0=L_n,## etc. Initially all lamps are ON.
Show that
(a) (solved by @Jarvis323 ) there is a positive integer ##M(n)## such that after ##M(n)## steps all the lamps are ON again;
(b) if ##n=2^k,## then all lamps are ON after ##(n^2-1)## steps;
(c) if ##n=2^k+1,## then all lamps are ON after ##(n^2-n+1)## steps.
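The process is easy to simulate. Below is a short Python sketch (the helper names are mine, not part of the problem) for experimenting with the claimed step counts:

```python
def run_lamps(n, steps):
    # step j toggles lamp L_{j mod n} iff its left neighbour L_{(j-1) mod n} is ON
    state = [1] * n  # all lamps start ON
    for j in range(steps):
        if state[(j - 1) % n]:
            state[j % n] ^= 1
    return state

def first_all_on(n, limit=1_000_000):
    # smallest positive number of steps after which every lamp is ON again
    state = [1] * n
    for j in range(limit):
        if state[(j - 1) % n]:
            state[j % n] ^= 1
        if all(state):
            return j + 1
    return None
```

For example, `all(run_lamps(4, 15))` and `all(run_lamps(5, 21))` both come out `True`, matching (b) and (c) for ##k=2##, and `first_all_on(3)` returns ##7 = 3^2-3+1##.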

9. (solved by @Fred Wright ) The pseudosphere is the surface of revolution of the tractrix, parameterized e.g. by
$$f\, : \,\mathbb{R}^2\longrightarrow \mathbb{R}^3\; , \;f(x,y)=\begin{bmatrix} \cos (y)/ \cosh (x)\\ \sin (y)/ \cosh (x)\\x- \tanh (x)\end{bmatrix}.$$
Show that the pseudosphere has a constant negative Gauß curvature.

10. Let ##\mathfrak{g}## be a Lie algebra with trivial center ##\mathfrak{Z(g)}=\{0\}## over a field of characteristic not equal to two and
\begin{align*}
\mathfrak{A(g)}&=\{\varphi:\mathfrak{g}\stackrel{linear}{\longrightarrow}\mathfrak{g}\,|\,[\varphi(X),Y]=[\varphi(Y),X]\text{ for all }X,Y\in \mathfrak{g}\}\\
&=\operatorname{lin}\{\alpha,\beta\neq 0\,|\,[\alpha,\beta]=\alpha\beta-\beta\alpha=\beta\}
\end{align*}
Show that image ##\operatorname{im}\beta## and kernel ##\operatorname{ker}\beta## of ##\beta## are ideals in ##\mathfrak{g}.##
Hint: ##\mathfrak{A(g)}## is a ##\mathfrak{g}##-module by ##X.\varphi =[\operatorname{ad}X,\varphi].##
High Schoolers only

11. (solved by @etotheipi ) Let ##I## and ##J## be bounded open intervals in ##\Bbb{R}## with ##I \cap J \neq \emptyset## such that the length of ##J## is greater than the length of ##I##. Show that ##I \subseteq 3*J##, where ##3*J## is the interval with length ##3## times the length of ##J## and with the same centre as ##J##. (MQ)

12. Let ##n## be a positive integer and assume that ##4^n## and ##5^n## start with the same digit in the decimal system.
Show that this digit has to be ##2## or ##4.##

13. A parcel service charges a price proportional to the sum of height, length, and width of a box.
Can it happen that it is cheaper to put a more expensive package into a cheaper box?

14. Let ##a## be a positive integer and ##(a_n)_{n\in \mathbb{N}_0}## the sequence defined by
$$a_0:=1\; , \;a_n:=a+\prod_{k=0}^{n-1}a_k \quad (n\geq 1)$$
(a) Show that there are infinitely many primes which divide at least one number of the sequence.
(b) Show that there is a prime which does not divide any number of the sequence.
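For intuition (an illustration, not part of any proof; the helper name is mine): for ##a=1## the recursion produces Sylvester's sequence ##1, 2, 3, 7, 43, 1807,\ldots##, and for ##a=2## the terms after ##a_0## are the Fermat numbers ##3, 5, 17, 257, 65537,\ldots##

```python
def sequence(a, terms):
    # a_0 = 1, a_n = a + a_0 * a_1 * ... * a_{n-1}
    seq, prod = [1], 1
    for _ in range(terms):
        seq.append(a + prod)
        prod *= seq[-1]
    return seq
```

Since each earlier term ##a_m## divides the product, ##a_n \equiv a \pmod{a_m}## for ##n > m##; hence two distinct terms can only share divisors of ##a##, a useful observation for part (a).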

15. Let ##a,b,c## be positive real numbers such that ##a+b+c+2=abc.##
Show that ##(a+1)(b+1)(c+1)\geq 27.## Under which condition does equality hold?

Not a high schooler, but very confused by 11

Isn't ##I=(-10,1)## and ##J=(0,2)## a counterexample? It feels like you needed to include something about the radius of ##I## vs. ##J##.

Probl. 1
The described sequence of functions is uniformly continuous since all the derivatives are estimated as follows: ##|f'_k(x)|\le \sum_n\frac{1}{n^2}##.
The Arzelà–Ascoli theorem finishes the proof.

wrobel said:
Probl. 1
The described sequence of functions is uniformly continuous since all the derivatives are estimated as follows: ##|f'_k(x)|\le \sum_n\frac{1}{n^2}##.
The Arzelà–Ascoli theorem finishes the proof.
Too thin.

Office_Shredder said:
Not a high schooler, but very confused by 11

Isn't ##I=(-10,1)## and ##J=(0,2)## a counterexample? It feels like you needed to include something about the radius of ##I## vs. ##J##.

Thanks. There was an assumption missing: that the length of ##I## is smaller than the length of ##J##.

Probl 6 b
It looks like a consequence of the Schwarz–Pick theorem (https://en.wikipedia.org/wiki/Schwarz_lemma). Indeed, take a branch of a function ##w(z)=\log f(z)## such that ##w(-a)=i\pi,\quad w(a)=0##. Applying the Schwarz–Pick theorem to the function $$F(z)=\frac{w(z)}{\sup_{|z|<1}|w(z)|},\quad z_1=a,\quad z_2=-a$$ we get $$\sup_{|z|<1}|w(z)|\ge \frac{(1+a^2)\pi}{2a}.$$ It looks like the required estimate, but it is not the same. I am a little bit confused.

wrobel said:
I am a little bit confused.
$$f(z)=-i\,\exp\left(\dfrac{iz-a^2}{iz+1} \cdot \dfrac{\pi}{2a}\right).$$

Not sure if I still qualify for high-school, but I'll take a shot at Math's question anyway
fresh_42 said:
11. Let ##I## and ##J## be bounded open intervals in ##\Bbb{R}## with ##I \cap J \neq \emptyset## and the length of ##J## is greater than the length of ##I##.
Show that ##I \subseteq 3*J##, where ##3*J## is the interval with length ##3## times the length of ##J## and with the same centre as ##J##. (MQ)

Let ##J = (a-d, a+d)## and ##3*J = (a-3d, a+3d)##. Also, let ##I = (b-e, b+e)##. Since ##I \cap J \neq \emptyset##, we either have ##b-e < a+d## or ##a-d < b+e##, or both. Then, since it's given that ##d > e##,\begin{align*}\text{Case 1:} \quad b-e < a+d &\implies b+e < a + d + 2e < a + 3d \\ \text{Case 2:} \quad b+e > a-d &\implies b-e > a - d - 2e > a - 3d\end{align*}In the first case ##\text{sup}(I) < \text{sup}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{inf}(I) > a - d - 2e > a - 3d = \text{inf}(3*J)##, we have ##\text{inf}(I) > \text{inf}(3*J)##. Then ##I \subseteq 3*J##.

In the second case ##\text{inf}(I) > \text{inf}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{sup}(I) < a + d + 2e < a + 3d = \text{sup}(3*J)##, we have ##\text{sup}(I) < \text{sup}(3*J)##. Then ##I \subseteq 3*J##.
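The case analysis can also be sanity-checked numerically. This randomized sketch (helper names are mine) draws intersecting open intervals with the length of ##I## smaller than the length of ##J## and tests ##I \subseteq 3*J##:

```python
import random

def three_star(J):
    # interval with the same centre as J and three times its length
    a, b = J
    c, r = (a + b) / 2, (b - a) / 2
    return (c - 3 * r, c + 3 * r)

def check(trials=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        jc, jr = rng.uniform(-5, 5), rng.uniform(0.5, 2.0)  # centre and radius of J
        e = rng.uniform(0.01, 0.99) * jr                    # radius of I, strictly smaller
        t = rng.uniform(jc - jr, jc + jr)                   # a point of J
        ic = rng.uniform(t - e, t + e)                      # centre of I, so t lies in I too
        I = (ic - e, ic + e)
        lo, hi = three_star((jc - jr, jc + jr))
        if not (lo <= I[0] and I[1] <= hi):
            return False
    return True
```

By the result above, `check()` should return `True` for any seed.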

yes, I see the mistake:)

wrobel said:
yes, I see the mistake:)
Schwarz was the correct idea, but the function to which it is applied is a bit tricky.

etotheipi said:
Not sure if I still qualify for high-school, but I'll take a shot at Math's question anyway
Let ##J = (a-d, a+d)## and ##3*J = (a-3d, a+3d)##. Also, let ##I = (b-e, b+e)##. Since ##I \cap J \neq \emptyset##, we either have ##b-e < a+d## or ##a-d < b+e##, or both. Then, since it's given that ##d > e##,\begin{align*}\text{Case 1:} \quad b-e < a+d &\implies b+e < a + d + 2e < a + 3d \\ \text{Case 2:} \quad b+e > a-d &\implies b-e > a - d - 2e > a - 3d\end{align*}In the first case ##\text{sup}(I) < \text{sup}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{inf}(I) > a - d - 2e > a - 3d = \text{inf}(3*J)##, we have ##\text{inf}(I) > \text{inf}(3*J)##. Then ##I \subseteq 3*J##.

In the second case ##\text{inf}(I) > \text{inf}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{sup}(I) < a + d + 2e < a + 3d = \text{sup}(3*J)##, we have ##\text{sup}(I) < \text{sup}(3*J)##. Then ##I \subseteq 3*J##.

Looks ok to me! For reference, you can use this exercise to prove the Vitali covering theorem in ##\mathbb{R}##. More general versions of this exercise can also be proven.

Btw, you still count as a high school student until you have had your first course at the university.

For 4) what is the meaning of a constant times a set? Point-wise multiplication and perhaps a shift?

benorin said:
For 4) what is the meaning of a constant times a set? Point-wise multiplication and perhaps a shift?
$$\frac{1}{2}C + \frac{1}{2}C := \left\{\frac{1}{2}c + \frac{1}{2}c': c,c' \in C\right\}$$
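A finite-stage illustration of the identity (a check, not a proof; the function names are mine): the ##k##-th stage ##C_k## of the Cantor construction contains ##C## and has measure ##(2/3)^k##, and exact rational arithmetic confirms that ##\frac{1}{2}C_k + \frac{1}{2}C_k## already covers ##[0,1]## for small ##k##:

```python
from fractions import Fraction

def cantor_intervals(k):
    # the 2^k closed intervals of length 3^(-k) forming stage k of the Cantor construction
    ivs = [(Fraction(0), Fraction(1))]
    for _ in range(k):
        ivs = [piece
               for a, b in ivs
               for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

def half_sum_covers_unit(k):
    # does {(c + c')/2 : c, c' in C_k} cover [0, 1]?  (sweep over sorted interval sums)
    sums = sorted(((a + c) / 2, (b + d) / 2)
                  for a, b in cantor_intervals(k)
                  for c, d in cantor_intervals(k))
    reach = Fraction(0)
    for lo, hi in sums:
        if lo > reach:  # a gap: not covered
            return False
        reach = max(reach, hi)
    return reach >= 1
```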

question about 6b: so does the assumption "no root" mean that f has no zero in D?

mathwonk said:
question about 6b: so does the assumption "no root" mean that f has no zero in D?
Yes, nonzero on ##D##. This is necessary to have a logarithm.

Hint for 6a:

It seems to me that the Schwarz lemma (plus the fact that the automorphism group of the disc acts transitively) implies that any holomorphic map from the disc to itself with more than one fixed point is the identity. Hence if we multiply the function ##f## by ##a## and assume the sup (of ##|af|##) is ##< 1##, we get a contradiction.

about post #6: do you really get to specify two values of your log (your map w)? I.e. isn't a lift of a map, on a connected space, through a covering map (like exp), determined by the choice of just one value?

mathwonk said:
Hint for 6a:

It seems to me that the Schwarz lemma (plus the fact that the automorphism group of the disc acts transitively) implies that any holomorphic map from the disc to itself with more than one fixed point is the identity. Hence if we multiply the function ##f## by ##a## and assume the sup (of ##|af|##) is ##< 1##, we get a contradiction.
My proof is easier. It essentially uses only the triangle inequality.

Nice. Does the easy proof also identify the function as ##f(z) = z/a## when the sup equals ##1/a##?

mathwonk said:
Nice. Does the easy proof also identify the function as ##f(z) = z/a## when the sup equals ##1/a##?
The maximum principle is used as well, same as in Schwarz.

mathwonk said:
about post #6: do you really get to specify two values of your log (your map w)? I.e. isn't a lift of a map, on a connected space, through a covering map (like exp), determined by the choice of just one value?
this is exactly the mistake I have already acknowledged

Problem 9.
We have the parameterized function for the surface,
$$r(x,y)=\begin{pmatrix} \frac{\cos(y)}{\cosh(x)}\\ \frac{\sin(y)}{\cosh(x)}\\ x-\tanh(x) \end{pmatrix}$$
The Gaussian curvature can be expressed as,
$$K=\frac{eg - f^2}{EG-F^2}$$
where ##E##, ##F##, and ##G## are the coefficients of the first fundamental form and ##e##, ##f##, and ##g## are the coefficients of the second fundamental form, such that,
$$E=r_{x}\cdot r_{x}$$
$$G=r_{y}\cdot r_{y}$$
$$F=r_{x}\cdot r_{y}$$
$$e=r_{xx}\cdot\hat n$$
$$g=r_{yy}\cdot\hat n$$
$$f=r_{xy}\cdot\hat n=r_{yx}\cdot\hat n$$
where the subscripts indicate partial differentiation and the normal to the surface is,
$$\hat n = \frac{r_x\times r_y}{|r_x\times r_y|}$$
We have (taking ##x>0## to fix the orientation of ##\hat n##),
$$r_x=\begin{pmatrix} \frac{-\cos(y)\sinh(x)}{\cosh^2(x)}\\ \frac{-\sin(y)\sinh(x)}{\cosh^2(x)}\\ \tanh^2(x) \end{pmatrix}$$
$$r_y=\begin{pmatrix} \frac{-\sin(y)}{\cosh(x)}\\ \frac{\cos(y)}{\cosh(x)}\\ 0 \end{pmatrix}$$
$$\hat n=\begin{pmatrix} -\cos(y)\tanh(x)\\ -\sin(y)\tanh(x)\\ -\frac{1}{\cosh(x)} \end{pmatrix}$$
$$\hat n_x=\begin{pmatrix} \frac{-\cos(y)}{\cosh^2(x)}\\ \frac{-\sin(y)}{\cosh^2(x)}\\ \frac{\sinh(x)}{\cosh^2(x)} \end{pmatrix}$$
$$\hat n_y=\begin{pmatrix} \sin(y)\tanh(x)\\ -\cos(y)\tanh(x)\\ 0 \end{pmatrix}$$
$$r_{xx}=\begin{pmatrix} \frac{\cos(y)(\sinh^2(x)-1)}{\cosh^3(x)}\\ \frac{\sin(y)(\sinh^2(x)-1)}{\cosh^3(x)}\\ \frac{2\sinh(x)}{\cosh^3(x)} \end{pmatrix}$$
$$r_{yy}=\begin{pmatrix} \frac{-\cos(y)}{\cosh(x)}\\ \frac{-\sin(y)}{\cosh(x)}\\ 0 \end{pmatrix}$$
$$r_{yx}=r_{xy}=\begin{pmatrix} \frac{\sin(y)\sinh(x)}{\cosh^2(x)}\\ \frac{-\cos(y)\sinh(x)}{\cosh^2(x)}\\ 0 \end{pmatrix}$$
$$E=r_{x}\cdot r_{x}=\frac{\sinh^2(x)(\sinh^2(x)+1)}{\cosh^4(x)}=\tanh^2(x)$$
$$G=r_{y}\cdot r_{y}=\frac{1}{\cosh^2(x)}$$
$$F=r_{x}\cdot r_{y}=0$$
$$e=r_{xx}\cdot\hat n=-\frac{\sinh(x)(\sinh^2(x)+1)}{\cosh^4(x)}=-\frac{\sinh(x)}{\cosh^2(x)}$$
$$g=r_{yy}\cdot\hat n=\frac{\sinh(x)}{\cosh^2(x)}$$
$$f=r_{xy}\cdot\hat n=0$$
and thus the Gaussian curvature is
$$K=\frac{eg-f^2}{EG-F^2}=\frac{-\sinh^2(x)/\cosh^4(x)}{\tanh^2(x)/\cosh^2(x)}=-1.$$
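The value ##K=-1## can be cross-checked numerically with the standard formula ##K=(eg-f^2)/(EG-F^2)##, approximating the derivatives of ##r## by central finite differences (a sketch; the helper names are mine). Away from the singular equator ##x=0##, the result agrees with ##-1## to within numerical error:

```python
import math

def r(x, y):
    # the given parameterization of the pseudosphere
    return (math.cos(y) / math.cosh(x),
            math.sin(y) / math.cosh(x),
            x - math.tanh(x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def gauss_curvature(x, y, h=1e-4):
    # central finite differences for the partial derivatives of r
    rx = tuple((a - b) / (2 * h) for a, b in zip(r(x + h, y), r(x - h, y)))
    ry = tuple((a - b) / (2 * h) for a, b in zip(r(x, y + h), r(x, y - h)))
    rxx = tuple((a - 2 * b + c) / h ** 2
                for a, b, c in zip(r(x + h, y), r(x, y), r(x - h, y)))
    ryy = tuple((a - 2 * b + c) / h ** 2
                for a, b, c in zip(r(x, y + h), r(x, y), r(x, y - h)))
    rxy = tuple((a - b - c + d) / (4 * h ** 2)
                for a, b, c, d in zip(r(x + h, y + h), r(x + h, y - h),
                                      r(x - h, y + h), r(x - h, y - h)))
    n = cross(rx, ry)
    norm = math.sqrt(dot(n, n))
    n = tuple(c / norm for c in n)
    # first (E, F, G) and second (e, f, g) fundamental form coefficients
    E, F, G = dot(rx, rx), dot(rx, ry), dot(ry, ry)
    e, f, g = dot(rxx, n), dot(rxy, n), dot(ryy, n)
    return (e * g - f ** 2) / (E * G - F ** 2)
```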

I'm confused by problem 1.
What's the difference between ##f_0## and ##f_1##?

pop_ianosd said:
I'm confused by problem 1.
What's the difference between ##f_0## and ##f_1##?
You are right. This is a typo. I corrected it.

There must be a simpler way to prove this, but I think this works.

Problem 8a.
Let a configuration ##C_t## (of the given process) be defined by the ON/OFF states of each lamp before applying the ##t^{th}## step, combined with the current lamp index ##I_t = t \mod n##.

Because there is a finite number of configurations, some configuration must be repeated eventually.

Let ##C_a## be the first repeating configuration and assume for contradiction that ##a \neq 0##. Let ##C_b## be the configuration preceding the second occurrence of a configuration equal to ##C_a##. Then ##C_{a} = C_{b+1}## but ##C_{a-1} \neq C_b## because if ##C_{a-1} = C_{b}##, then ##C_{a-1}## is a repeating configuration that appears before ##C_a##.

Since ##C_{b}## and ##C_{a-1}## both precede a configuration equal to ##C_a##, ##I_{(b \mod n)} = I_{((a-1) \mod n)}##. Thus ##C_b## and ##C_{a-1}## must have different lamp states. Since a single step can only change the state of at most ##1## lamp, and both configurations are ##1## step away from being equal, it follows that they must differ only in ##L_{((a-1) \mod n)} \equiv L_{(b \mod n)}##. Thus ##L_{((a-2) \mod n)} \equiv L_{((b-1) \mod n)}## must be in the same state in both ##C_{b-1}## and ##C_{a-2}##.

Case 1: That state is OFF. Then neither lamp states (neither in ##C_{b}## nor ##C_{a-1}##) would change in their next step. But they both differ, and are both assumed to directly precede a configuration equal to ##C_{a}##, which is a contradiction.

Case 2: That state is ON. Then the next step would change both lamp states (both in ##C_{b}## and ##C_{a-1}##), namely by flipping ##L_{(b \mod n)}## ON/OFF; but since they both differed in ##L_{(b \mod n)}## prior to this step, afterwards they would still differ in ##L_{(b \mod n)}##.

All cases are covered, and each contradicts the assumption that ##a\neq 0##.

It follows that the first repeating configuration must be the initial configuration.

##Q.E.D.##

EDIT: Tried to make the notation and definitions more consistent.

Jarvis323 said:
There must be a simpler way to prove this, but I think this works.

Problem 8a.
Let a configuration ##C_t## (of the given process) be defined by the ON/OFF states of each lamp before applying the ##t^{th}## step, combined with the current lamp index ##I_t = t \mod n##.

Because there is a finite number of configurations, some configuration must be repeated eventually.

Let ##C_a## be the first repeating configuration and assume for contradiction that ##a \neq 0##. Let ##C_b## be the configuration preceding the second occurrence of a configuration equal to ##C_a##. Then ##C_{a} = C_{b+1}## but ##C_{a-1} \neq C_b## because if ##C_{a-1} = C_{b}##, then ##C_{a-1}## is a repeating configuration that appears before ##C_a##.

Since ##C_{b}## and ##C_{a-1}## both precede a configuration equal to ##C_a##, ##I_{(b \mod n)} = I_{((a-1) \mod n)}##. Thus ##C_b## and ##C_{a-1}## must have different lamp states. Since a single step can only change the state of at most ##1## lamp, and both configurations are ##1## step away from being equal, it follows that they must differ only in ##L_{((a-1) \mod n)} \equiv L_{(b \mod n)}##. Thus ##L_{((a-2) \mod n)} \equiv L_{((b-1) \mod n)}## must be in the same state in both ##C_{b-1}## and ##C_{a-2}##.

Case 1: That state is OFF. Then neither lamp states (neither in ##C_{b}## nor ##C_{a-1}##) would change in their next step. But they both differ, and are both assumed to directly precede a configuration equal to ##C_{a}##, which is a contradiction.

Case 2: That state is ON. Then the next step would change both lamp states (both in ##C_{b}## and ##C_{a-1}##), namely by flipping ##L_{(b \mod n)}## ON/OFF; but since they both differed in ##L_{(b \mod n)}## prior to this step, afterwards they would still differ in ##L_{(b \mod n)}##.

All cases are covered, and each contradicts the assumption that ##a\neq 0##.

It follows that the first repeating configuration must be the initial configuration.

##Q.E.D.##

EDIT: Tried to make the notation and definitions more consistent.
If we phrase it positively: you have proven that each step can be reversed, so the process is injective on the finitely many possible configurations; hence the first configuration to repeat must be the initial state.

As you mentioned the notation, here is what I have used:
Lamp ##L_j## is in state ##x_j\in \{0,1\}##
initial state: ##v_0=(1,1,\ldots,1)\text{ since }x_{-n}=x_{-n+1}=\ldots=x_{-1}=1##
configuration before step ##S_j## is applied: ##v_j=(x_{j-n},\ldots,x_{j-1})##

The trick is to find a formula ##x_j=x_j(x_{j-n},\ldots,x_{j-1}).##

7 seems to be a strengthening of Kantorovich's Inequality... interesting.

Problem #7:

We take ##0 < p \leq a_1 \leq a_2 \leq a_3 \leq a_4 \leq a_5 \leq q##. Write

\begin{align*}
M = (a_1 + a_2 + a_3 + a_4 + a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\end{align*}

##M## does not decrease when all the weight is moved onto ##a_1## and ##a_5##, i.e. when:

\begin{align*}
M \leq (m a_1 + (5-m) a_5) \left( \frac{m}{a_1} + \frac{5-m}{a_5} \right)
\end{align*}

where ##1 \leq m \leq 4##. We prove this by induction. Consider the general formula

\begin{align*}
M = \left( \sum_i \lambda_i a_i \right) \left( \sum_i \frac{\lambda_i}{a_i} \right) .
\end{align*}

where the ##\lambda_i##'s are integers, some of which are zero, such that ##\sum_{i=1}^n \lambda_i = n##. Consider the ##a_j## term with next term to the left ##a_{j-l}## and next term to the right ##a_{j+l'}##. Define the function

\begin{align*}
M (x) = \left( \sum_{i \not= j} \lambda_i a_i + \lambda_j x \right) \left( \sum_{i \not= j} \frac{\lambda_i}{a_i} + \frac{\lambda_j}{x} \right) .
\end{align*}

where ##x## varies between ##a_{j-l}## and ##a_{j+l'}##. We find that

\begin{align*}
\frac{d}{dx} M (x) & = \lambda_j \left( \sum_{i \not= j} \frac{\lambda_i}{a_i} + \frac{\lambda_j}{x} \right) - \lambda_j \left( \sum_{i \not= j} \lambda_i a_i + \lambda_j x \right) \frac{1}{x^2}
\\
& = \lambda_j \left( \sum_{i \not= j} \frac{\lambda_i}{a_i} \right) - \lambda_j \left( \sum_{i \not= j} \lambda_i a_i \right) \frac{1}{x^2}
\end{align*}

and

\begin{align*}
\frac{d^2}{dx^2} M (x) = \lambda_j \left( \sum_{i \not= j} \lambda_i a_i \right) \frac{2}{x^3} > 0 .
\end{align*}

So ##M(x)## is convex, which means it has a maximum value at one of the end points. Therefore, ##M## is increased by replacing ##a_j## by either ##a_{j-l}## or ##a_{j+l'}##. This completes the inductive argument.

So that

\begin{align*}
& \; (a_1 + a_2 + a_3 + a_4 +a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\\
& \leq (m a_1 + (5-m) a_5) \left( \frac{m}{a_1} + \frac{5-m}{a_5} \right)
\\
& = m^2 + (5-m)^2 + m (5-m) \left( \frac{a_1}{a_5} + \frac{a_5}{a_1} \right)
\\
& = m^2 + (5-m)^2 + 2 m (5-m) + m (5-m) \left( \frac{a_1}{a_5} + \frac{a_5}{a_1} - 2 \right)
\\
& = 25 + m (5-m) \left( \sqrt{\frac{a_1}{a_5}}- \sqrt{\frac{a_5}{a_1}} \right)^2
\end{align*}

This is maximised by taking ##m=2##:

\begin{align*}
(a_1 + a_2 + a_3 + a_4 +a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\leq 25 + 6 \left( \sqrt{\frac{a_1}{a_5}}- \sqrt{\frac{a_5}{a_1}} \right)^2 \quad (*)
\end{align*}

Define

\begin{align*}
f (x) = \left( \sqrt{\frac{a_1}{x}} - \sqrt{\frac{x}{a_1}} \right)^2
\end{align*}

and take the derivative

\begin{align*}
\frac{d}{dx} f (x) & = \left( \sqrt{\frac{a_1}{x}}- \sqrt{\frac{x}{a_1}} \right) \left( - \frac{1}{x} \sqrt{\frac{a_1}{x}} - \sqrt{\frac{1}{a_1 x}} \right)
\\
& = - \frac{1}{x} \left( \sqrt{\frac{a_1}{x}} - \sqrt{\frac{x}{a_1}} \right) \left( \sqrt{\frac{a_1}{x}} + \sqrt{\frac{x}{a_1}} \right)
\\
& = \frac{1}{x} \left( \frac{x}{a_1} - \frac{a_1}{x} \right)
\end{align*}

This is ##\geq 0## for ##a_1 \leq x < \infty##, which means we can replace ##a_5## in ##(*)## with any number ##q## larger than or equal to ##a_5##.

Define

\begin{align*}
g (x) = \left( \sqrt{\frac{x}{q}} - \sqrt{\frac{q}{x}} \right)^2
\end{align*}

and take the derivative

\begin{align*}
\frac{d}{dx} g (x) = \frac{1}{x} \left( \frac{x}{q} - \frac{q}{x} \right)
\end{align*}

This is ##\leq 0## for ##0 < x \leq q##, which means that, after having replaced ##a_5## with ##q##, we can replace ##a_1## in ##(*)## with any number ##p > 0## smaller than or equal to ##a_1##. So finally we have

\begin{align*}
(a_1 + a_2 + a_3 + a_4 +a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\leq 25 + 6 \left( \sqrt{\frac{p}{q}}- \sqrt{\frac{q}{p}} \right)^2 .
\end{align*}

where ##0 < p \leq a_1, a_2 , a_3 , a_4 , a_5 \leq q##.

It is easy to generalise these arguments. We have

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right)
& \leq (m a_1 + (n-m) a_n) \left( \frac{m}{a_1} + \frac{n-m}{a_n} \right)
\\
& = m^2 + (n-m)^2 + m (n-m) \left( \frac{a_1}{a_n} + \frac{a_n}{a_1} \right)
\\
& = m^2 + (n-m)^2 + 2 m (n-m) + m (n-m) \left( \frac{a_1}{a_n} + \frac{a_n}{a_1} - 2 \right)
\\
& = n^2 + m (n-m) \left( \sqrt{\frac{a_1}{a_n}}- \sqrt{\frac{a_n}{a_1}} \right)^2 .
\end{align*}

For ##n## odd this is maximised by choosing ##m= \frac{n-1}{2}##,

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right) \leq n^2 + \frac{n^2 - 1}{4} \left( \sqrt{\frac{a_1}{a_n}}- \sqrt{\frac{a_n}{a_1}} \right)^2
\end{align*}

For ##n## even this is maximised by choosing ##m= \frac{n}{2}##,

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right) \leq n^2 + \frac{n^2}{4} \left( \sqrt{\frac{a_1}{a_n}}- \sqrt{\frac{a_n}{a_1}} \right)^2
\end{align*}

We further maximise, as before, by introducing ##p## and ##q## such that ##0 < p \leq a_1, a_2 , \dots, a_n \leq q##. So the general formulas are (##n## odd):

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right)
\leq n^2 + \frac{n^2-1}{4} \left( \sqrt{\frac{p}{q}}- \sqrt{\frac{q}{p}} \right)^2
\end{align*}

and (##n## even):

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right)
\leq n^2 + \frac{n^2}{4} \left( \sqrt{\frac{p}{q}}- \sqrt{\frac{q}{p}} \right)^2 .
\end{align*}

Well done, @julian. The last estimation allows a final step
$$\left(\sum_{i=1}^{n}a_i\right)\left(\sum_{i=1}^n\dfrac{1}{a_i}\right)\leq \dfrac{n^2}{4}\cdot \dfrac{(p+q)^2}{pq}$$
It is indeed a special case of Kantorovich's inequality (with weights all one) as @StoneTemplePython rightly noted in the previous post. The only difference is a closer look at the optimization problem, i.e. the shape of the feasible set, a polyhedron.
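The bound is easy to stress-test by random sampling (a sketch; the function names are mine). Note that ##n^2 + \lfloor n^2/4\rfloor\left(\sqrt{p/q}-\sqrt{q/p}\right)^2## covers both parities of ##n##, and for even ##n## it equals ##\frac{n^2}{4}\cdot\frac{(p+q)^2}{pq}##:

```python
import math
import random

def lhs(a):
    # (a_1 + ... + a_n)(1/a_1 + ... + 1/a_n)
    return sum(a) * sum(1.0 / x for x in a)

def kantorovich_bound(n, p, q):
    # n^2 + floor(n^2/4) * (sqrt(p/q) - sqrt(q/p))^2; the floor covers odd and even n
    s = math.sqrt(p / q) - math.sqrt(q / p)
    return n * n + (n * n // 4) * s * s

def stress_test(trials=20_000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(1, 8)
        p = rng.uniform(0.1, 2.0)
        q = p + rng.uniform(0.0, 5.0)
        a = [rng.uniform(p, q) for _ in range(n)]
        if lhs(a) > kantorovich_bound(n, p, q) + 1e-9:
            return False
    return True
```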

Problem #1

First let us consider an individual function ##f_n \in \mathcal{A}## for arbitrary ##n##:

##
f_n(x) = \lim_{m \rightarrow \infty} f_m^{(n)} (x) = \lim_{m \rightarrow \infty} \sum_{k=n}^m \tilde{f}_k (x) .
##

Let ##\{ f_m^{(n)} \}_{m \in \mathbb{N} , m \geq n}## be a sequence of functions on ##I = [0, 1]## such that each ##f_m^{(n)}## has continuous derivative on ##I##. If ##\sum_{k=n}^\infty \tilde{f}_k (x)## converges for some ##x_0 \in I## and the series of derivatives ##\sum_{k=n}^\infty \tilde{f}_k'## converges uniformly on ##I## to a sum function ##s_n##, then ##\sum_{k=n}^\infty \tilde{f}_k## converges uniformly on ##I## to a differentiable sum function ##f_n## on ##I##. Moreover, ##s_n = f_n'##. That is, ##\{ f_m^{(n)'} \}## converges uniformly to ##f_n'## on ##I##.

In our case ##f_m^{(n)}## corresponds to

##
f_m^{(n)} = \sum_{k=n}^m a_k \sin (kx) .
##

First, ##\sum_{k=n}^\infty a_k \sin (kx)## converges for ##x=0##. We need to show that ##\sum_{k=n}^\infty \tilde{f}_k'## converges uniformly, which is easy to do using the Weierstrass M-test: If ##|\tilde{f}_k' (x)| \leq M_k## for each ##x \in I## and the sequence ##\{M_k\}## is summable, then the series ##\sum_{k=n}^\infty \tilde{f}_k'## converges uniformly on ##I##. Write ##M_k = {1 \over k^2}##. Obviously, ##| k a_k \cos (kx)| \leq M_k## and ##\sum_{k=n}^\infty M_k## converges.

We have established that the series ##\sum_{k=n}^\infty a_k \sin (kx)## converges uniformly on ##I## to a differentiable sum function ##f_n (x)##, and that the series of derivatives converges uniformly to ##f_n' (x)##.

Now let us turn to the sequence ##\{ f_n \}_{n \in \mathbb{N}}## where:

##
f_n (x) = \sum_{k=n}^\infty \tilde{f}_k (x) = \sum_{k=1}^\infty \tilde{f}_k (x) - \sum_{k=1}^{n-1} \tilde{f}_k (x) = f_1(x) - \sum_{k=1}^{n-1} \tilde{f}_k (x)
##

First the ##f_n## are a sequence of functions on ##I = [0, 1]## such that each ##f_n## is differentiable on ##I##. The sequence ##f_n## converges uniformly to the zero function on ##I##. The derivatives are uniformly convergent to the zero function on ##I## because:

##
f_n'(x) = f_1' (x) - \sum_{k=1}^{n-1} \tilde{f}_k' (x)
##

Note: A continuous function on a closed bounded interval is bounded and attains its bounds. Every uniformly convergent sequence of bounded functions is uniformly bounded.
Suppose that ##f_n \rightarrow f## uniformly on ##[0,1]## and that for each ##n##, there exists ##C_n## such that ##|f_n (x)| \leq C_n## for any ##x \in I##. Let ##N## be such that

##
|f_n (x) - f (x)| = |f_n (x)| < 1 ,
##

whenever ##n \geq N## and ##x \in I## (in our case ##f(x)## is the zero function). Put

##
C = \max \{ C_1, C_2 , \dots , C_{N-1} , 1 \}
##

Then, for any ##x \in I##,

##
|f_n (x)| \leq C \quad \text{for } n=1,2,3, \dots
##

Thus ##f_n## is uniformly bounded.

Definition: a sequence of functions is uniformly equicontinuous if, for every ##\epsilon > 0##, there exists a ##\delta > 0## such that

##
|f_n (x) - f_n (y)| < \epsilon
##

whenever ##|x - y| < \delta## for all functions ##f_n## in the sequence.

We now use the above results proved for ##\{ f_n \}## to prove that they are uniformly equicontinuous. By the mean value theorem

##
|f_n (y) - f_n (x)| = \left| {df_n \over dx} (c) \right| |y - x| \quad \text{for some } c \text{ where } 0 \leq x < c < y \leq 1 .
##

The derivatives being uniformly bounded implies

##
|f_n (y) - f_n (x)| \leq M |y - x|
##

where ##M## is the supremum of the derivatives of the functions in the sequence and is independent of ##n##. Take ##\epsilon > 0## and choose ##\delta = \epsilon / M##, then for ##|x - y| < \delta## we have

##
|f_n (y) - f_n (x)| \leq M |y - x| < M \delta = \epsilon .
##

That is, we have shown that a uniformly bounded sequence ##\{ f_n \}## of differentiable functions with uniformly bounded derivatives is uniformly equicontinuous.

We then apply the Ascoli-Arzela theorem, which states that: Given a sequence of functions ##\{ f_n \}_{n \in \mathbb{N}}## defined on a closed bounded interval ##[a,b]## of the real line. If the sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence ##\{ f_{n_k} \}_{k \in \mathbb{N}}## that converges uniformly.

@julian I think this was a bit long, but you are right: Arzelà-Ascoli was the crucial point, and checking why it can be applied was the task. Well done.

##4^n## and ##5^n## are positive integers. Any positive integer ##x## can be written as the product of a positive real number between 1 (inclusive) and 10 (exclusive) and a non-negative integer power of 10, i.e. ##x = A \times 10^k## where ##1 \leq A \lt 10## and ##k \in \mathbb{N}_{0}##. Taking the base-10 logarithm, we get ##\log_{10} {x} = k + \log_{10} {A}##. It is easy to see that the first digit of ##x## is determined entirely by ##A##, and hence knowing ##\log_{10} {A}## (the mantissa) means knowing that first digit.

Hence, if ##x = 5^n## and ##y = 4^n## have the same first digit, then their respective mantissas ##a, b## must be such that they yield the same first digit, i.e. the integer parts of ##10^{a}## and ##10^{b}## should be the same.

##\log_{10} {5^n} = n \log_{10} {5}## (Eq. 1)
##\log_{10} {4^n} = 2n \log_{10} {2} = 2n \log_{10} {\frac {10} {5}} = 2n (1 - \log_{10} {5}) = 2n - 2n \log_{10} {5}## (Eq. 2)

##n \log_{10} {5}## must be a real number and can therefore be written as ##n \log_{10} {5} = m + a## (Eq. 3)

where ##m \in \mathbb{N}_{0}## and ##0 \lt a \lt 1##. Here ##a## is the mantissa corresponding to ##x##. Note that ##a = 0## does not arise since ##5^n## is never expressible as an integer power of 10.

Combining equations (1), (2) and (3), we find that ##\log_{10} {4^n} = 2n - 2(m + a) = 2n - 2m - 2a## (Eq. 4)

Since ##\log_{10} {5} \approx 0.699##, we get ##0 \lt n \log_{10} {5} \lt n \Rightarrow m \in \{0, 1, ..., n-1\}##, i.e., ##m## is a non-negative integer less than ##n##, i.e. ##m \leq (n - 1) \Rightarrow 2n - 2m \geq 2##. Using this property in (Eq. 4), we see that ##\log_{10} {4^n}## can be written as ##p + 2 - 2a## where ##p## is a non-negative integer given by ##(2n - 2m - 2)##.

Given that ##0 \lt a \lt 1##, we consider two subcases and express the mantissa of ##4^n## in terms of the mantissa of ##5^n##:
1. ##0 < a < 0.5 \Rightarrow 0 < 2a < 1##. In this case, ##\log_{10} {4^n}## can be expressed as ##(p + 1) + b## where ##b = (1-2a) \in (0, 1)##. Here the fraction ##b = (1-2a)## is the mantissa
2. ##0.5 \leq a < 1 \Rightarrow 1 \leq 2a < 2##. In this case, ##\log_{10} {4^n}## can be expressed as ##p + b## where ##b = (2-2a) \in (0, 1)##. Here the fraction ##b = (2-2a)## is clearly the mantissa
Thus, if we know the mantissa corresponding to ##5^n##, then the mantissa corresponding to ##4^n## is uniquely determined.

We now consider various possible values for the first digit of ##5^n## (obviously that digit is determined uniquely by ##n##), find what the corresponding range of values for mantissa ##a## would be, derive the possible range of values of ##b## (mantissa of ##4^n##) for the given range of values of ##a## and therefore find what are the possible values for the first digit of ##4^n##. Note: The table gives only approximate values for the fractions in the range (as most of those values are irrational numbers), but the ranges are such that even with the approximation, the first digits can be determined without error.

First digit of ##5^n##Range of ##a##Range of ##b##Possible values for first digit of ##4^n##
1##(\log_{10} {1}, \log_{10} {2}) = (0, 0.301)####(1 - 2 \times 0.301, 1 - 2 \times 0) = (0.3979, 1)##, i.e. ##b > 0.3979##Integers less than 10 and greater than ##10^{0.3979}##, i.e. ##\{2, 3, 4, ..., 9\}##
2##(\log_{10} {2}, \log_{10} {3}) = (0.301, 0.477)####(1 - 2 \times 0.477, 1 - 2 \times 0.301) = (0.046, 0.3979)####\{1, 2\}##
3##(\log_{10} {3}, \log_{10} {4}) = (0.477, 0.602)####(1 - 2 \times 0.5, 1 - 2 \times 0.477)## and ##(2 - 2 \times 0.602, 2 - 2 \times 0.5)##
i.e. ##(0, 0.046)## and ##(0.7959, 1)##
##\{1, 6, 7, 8, 9\}##
4##(\log_{10} {4}, \log_{10} {5}) = (0.602, 0.699)####(2 - 2 \times 0.699, 2 - 2 \times 0.602) = (0.602, 0.7959)####\{4, 5, 6\}##
5##(\log_{10} {5}, \log_{10} {6}) = (0.699, 0.778)####(2 - 2 \times 0.778, 2 - 2 \times 0.699) = (0.4437, 0.602)####\{2, 3, 4\}##
6##(\log_{10} {6}, \log_{10} {7}) = (0.778, 0.845)####(2 - 2 \times 0.845, 2 - 2 \times 0.778) = (0.3098, 0.4437)####\{2\}##
7##(\log_{10} {7}, \log_{10} {8}) = (0.845, 0.903)####(0.1938, 0.3098)####\{1, 2\}##
8##(\log_{10} {8}, \log_{10} {9}) = (0.903, 0.954)####(0.0915, 0.1938)####\{1\}##
9##\log_{10} {9}, \log_{10} {10}) = (0.954, 1)####(0, 0.0915)####\{1\}##

From the above table, it is seen that only in cases where the first digit of ##5^n## is either 2 or 4 can the first digit of ##4^n## too take the same value (2 and 4 respectively). For values of ##n## that lead to first digit of ##5^n## being neither 2 nor 4, the values of ##4^n## would be such that their first digit will always be different from the first digit of ##5^n##. Hence the claim stated in the the question is proved.

Not anonymous said:
##4^n## and ##5^n## are positive integers. Any positive integer ##x## can be written as the product of a real number in ##[1, 10)## and a non-negative integer power of 10, i.e. ##x = A \times 10^k## where ##1 \leq A \lt 10## and ##k \in \mathbb{N}_{0}##. Taking the base-10 logarithm gives ##\log_{10} {x} = k + \log_{10} {A}##. The first digit of ##x## is determined entirely by ##A##, so knowing ##\log_{10} {A}## (the mantissa) means knowing that first digit.

Hence, if ##x = 5^n## and ##y = 4^n## have the same first digit, then their respective mantissas ##a, b## must yield the same leading digit, i.e. the integer parts of ##10^{a}## and ##10^{b}## must be equal.
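As an illustrative check (not part of the original argument), the observation that the mantissa determines the first digit can be verified numerically; the helper name `first_digit_via_mantissa` is ours:

```python
import math

def first_digit_via_mantissa(x):
    """Leading decimal digit of x, read off from the mantissa of log10(x)."""
    a = math.log10(x) % 1.0        # mantissa: fractional part of log10(x)
    return int(10 ** a)            # integer part of 10^a is the first digit

# agrees with reading the first character of the decimal expansion
for x in (25, 125, 4**10, 5**13, 2**30):
    assert first_digit_via_mantissa(x) == int(str(x)[0])
```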

##\log_{10} {5^n} = n \log_{10} {5}## (Eq. 1)
##\log_{10} {4^n} = 2n \log_{10} {2} = 2n \log_{10} {\frac {10} {5}} = 2n (1 - \log_{10} {5}) = 2n - 2n \log_{10} {5}## (Eq. 2)

Since ##n \log_{10} {5}## is a positive real number, it can be written as ##n \log_{10} {5} = m + a## (Eq. 3)

where ##m \in \mathbb{N}_{0}## and ##0 \lt a \lt 1##. Here ##a## is the mantissa of ##5^n##. Note that ##a = 0## does not arise, since ##5^n## is never an integer power of 10.

Combining equations (1), (2) and (3), we find that ##\log_{10} {4^n} = 2n - 2(m + a) = 2n - 2m - 2a## (Eq. 4)

Since ##0 \lt \log_{10} {5} \lt 1##, we get ##0 \lt n \log_{10} {5} \lt n##, so ##m \in \{0, 1, \ldots, n-1\}##; that is, ##m \leq n - 1## and hence ##2n - 2m \geq 2##. Using this in (Eq. 4), we see that ##\log_{10} {4^n}## can be written as ##p + 2 - 2a##, where ##p = 2n - 2m - 2## is a non-negative integer.

Given that ##0 \lt a \lt 1## (and noting that ##a = 0.5## cannot occur, since ##n \log_{10} {5}## is irrational), we consider two subcases and express the mantissa of ##4^n## in terms of the mantissa of ##5^n##:
1. ##0 < a < 0.5 \Rightarrow 0 < 2a < 1##. In this case, ##\log_{10} {4^n} = (p + 1) + b## where ##b = (1-2a) \in (0, 1)## is the mantissa.
2. ##0.5 < a < 1 \Rightarrow 1 < 2a < 2##. In this case, ##\log_{10} {4^n} = p + b## where ##b = (2-2a) \in (0, 1)## is the mantissa.
Thus, if we know the mantissa corresponding to ##5^n##, then the mantissa corresponding to ##4^n## is uniquely determined.
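The two subcases can be checked numerically; this is a sketch assuming double-precision floats are accurate enough for small ##n## (the function name `mantissa` is ours):

```python
import math

def mantissa(t):
    """Fractional part of a positive real t."""
    return t - math.floor(t)

# b = 1 - 2a when a < 0.5, and b = 2 - 2a when a > 0.5
for n in range(1, 60):
    a = mantissa(n * math.log10(5))
    b = mantissa(n * math.log10(4))
    expected = 1 - 2*a if a < 0.5 else 2 - 2*a
    assert abs(b - expected) < 1e-9
```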

We now consider each possible value for the first digit of ##5^n## (that digit is determined uniquely by ##n##), find the corresponding range of values for the mantissa ##a##, derive the resulting range of values of ##b## (the mantissa of ##4^n##), and from it read off the possible values for the first digit of ##4^n##. Note: the table gives only approximate values for the endpoints of the ranges (most are irrational numbers), but the ranges are such that, even with the approximation, the first digits can be determined without error.

| First digit of ##5^n## | Range of ##a## | Range of ##b## | Possible first digits of ##4^n## |
| --- | --- | --- | --- |
| 1 | ##(\log_{10} {1}, \log_{10} {2}) = (0, 0.301)## | ##(1 - 2 \times 0.301, 1 - 2 \times 0) = (0.3979, 1)## | ##\{2, 3, 4, \ldots, 9\}## |
| 2 | ##(\log_{10} {2}, \log_{10} {3}) = (0.301, 0.477)## | ##(1 - 2 \times 0.477, 1 - 2 \times 0.301) = (0.046, 0.3979)## | ##\{1, 2\}## |
| 3 | ##(\log_{10} {3}, \log_{10} {4}) = (0.477, 0.602)## | ##(0, 0.046) \cup (0.7959, 1)## (the two subcases ##a \lessgtr 0.5##) | ##\{1, 6, 7, 8, 9\}## |
| 4 | ##(\log_{10} {4}, \log_{10} {5}) = (0.602, 0.699)## | ##(2 - 2 \times 0.699, 2 - 2 \times 0.602) = (0.602, 0.7959)## | ##\{4, 5, 6\}## |
| 5 | ##(\log_{10} {5}, \log_{10} {6}) = (0.699, 0.778)## | ##(2 - 2 \times 0.778, 2 - 2 \times 0.699) = (0.4437, 0.602)## | ##\{2, 3, 4\}## |
| 6 | ##(\log_{10} {6}, \log_{10} {7}) = (0.778, 0.845)## | ##(2 - 2 \times 0.845, 2 - 2 \times 0.778) = (0.3098, 0.4437)## | ##\{2\}## |
| 7 | ##(\log_{10} {7}, \log_{10} {8}) = (0.845, 0.903)## | ##(0.1938, 0.3098)## | ##\{1, 2\}## |
| 8 | ##(\log_{10} {8}, \log_{10} {9}) = (0.903, 0.954)## | ##(0.0915, 0.1938)## | ##\{1\}## |
| 9 | ##(\log_{10} {9}, \log_{10} {10}) = (0.954, 1)## | ##(0, 0.0915)## | ##\{1\}## |

From the above table, only when the first digit of ##5^n## is 2 or 4 can the first digit of ##4^n## take the same value (2 and 4 respectively). For every other leading digit of ##5^n##, the first digit of ##4^n## is necessarily different. Hence the claim stated in the question is proved.
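The conclusion can be cross-checked by brute force with exact integer arithmetic (the helper `first_digit` and the sample index ##n = 52## are our additions):

```python
def first_digit(x):
    """Leading decimal digit of a positive integer x."""
    return int(str(x)[0])

# whenever 4^n and 5^n share a first digit, that digit is 2 or 4
for n in range(1, 500):
    d4, d5 = first_digit(4**n), first_digit(5**n)
    if d4 == d5:
        assert d4 in (2, 4)

# the shared-digit case does occur, e.g. at n = 52 both powers start with 2
assert first_digit(4**52) == first_digit(5**52) == 2
```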
This looks like the published solution, except that you made your life artificially difficult by dealing with logarithms and real numbers. You can do the same by setting
\begin{align*}
z\cdot 10^r &\leq 4^n=2^{2n}< (z+1)\cdot 10^r\\
z\cdot 10^s &\leq 5^n < (z+1)\cdot 10^s
\end{align*}
Then square the second and multiply both.
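Spelling that hint out (our elaboration, assuming the displayed inequalities with ##z## the common first digit): squaring the second line and multiplying by the first gives
\begin{align*}
z^3\cdot 10^{r+2s} \leq 4^n\cdot 5^{2n} = 10^{2n} < (z+1)^3\cdot 10^{r+2s},
\end{align*}
so ##z^3 \leq 10^{2n-r-2s} < (z+1)^3##, i.e. some power of 10 lies in ##[z^3, (z+1)^3)##. For a digit ##z \in \{1, \ldots, 9\}## this happens only for ##z \in \{1, 2, 4\}## (since ##1 \leq 1 < 8##, ##8 \leq 10 < 27## and ##64 \leq 100 < 125##), and the case ##z = 1## is ruled out by the first row of the table above.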

Given ##a, b, c \gt 0## satisfying ##a+b+c+2 = abc##, we can solve for ##c## (assuming ##ab \neq 1##): ##c = \dfrac{a+b+2}{ab-1}## (Eq. 1)

##(a+1)(b+1)(c+1)## can therefore be expressed as a function ##f## of variables ##a, b## alone.

##(a+1)(b+1)(c+1) \equiv f(a, b) = (a+1)(b+1)\left(\dfrac{a+b+2}{ab-1} + 1\right)##
##= (a+1)(b+1)\dfrac{(a+1)(b+1)}{ab-1} = \dfrac{(a+1)^2(b+1)^2}{ab-1}## (Eq. 2)
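A quick numeric spot-check of this simplification (our illustration; the sample points are arbitrary choices with ##ab > 1##):

```python
# verify (a+1)(b+1)(c+1) == (a+1)^2 (b+1)^2 / (ab-1) with c from (Eq. 1)
for a, b in [(2.0, 2.0), (1.5, 3.0), (0.8, 2.5), (5.0, 0.3)]:
    c = (a + b + 2) / (a*b - 1)
    lhs = (a + 1) * (b + 1) * (c + 1)
    rhs = (a + 1)**2 * (b + 1)**2 / (a*b - 1)
    assert abs(lhs - rhs) < 1e-9
    assert abs((a + b + c + 2) - a*b*c) < 1e-9   # original constraint holds
```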

Since ##a, b > 0##, the numerator ##a+b+2## in (Eq. 1) is positive. Since we also need ##c > 0##, the denominator must be positive as well, hence ##ab > 1##.

To find the minimum value of ##f(a, b)## for a fixed value of ##b##, take the partial derivative of the function with respect to ##a## and set it to zero:
##\dfrac {\partial f} {\partial a} = \dfrac{(b+1)^2 (a+1) (ab - 2 - b)}{(ab-1)^2} = 0##. Since ##a + 1 > 0## and ##ab > 1##, the only root comes from ##ab - 2 - b = 0##, i.e. ##a = \dfrac{2+b}{b}## (Eq. 3)

That the above is a minimum, not a maximum, can be proven by looking at the sign of ##\dfrac{\partial^2 f} {\partial a^2}## at ##a = \frac{2+b}{b}##.
##\dfrac{\partial^2 f} {\partial a^2} = \dfrac{(b+1)^2}{(ab-1)^2} \left( \dfrac{-2b(a+1)(ab - 2 - b)}{ab - 1} + (2ab - 2) \right)##, which when evaluated at ##a = \dfrac{2+b}{b}## is equal to
##\dfrac{1}{b+1} \left( -2(2+2b)(2+b-2-b) + 2(b+1)^2 \right) = 2(b+1)##. Since this value is positive for any positive value of ##b## (in fact it is positive for ##b > -1##), ##a = \frac{2+b}{b}## must correspond to a local minimum, not a maximum.
##f(\frac{2+b}{b}, b)## is therefore the minimum value of ##f(a, b)## for each fixed ##b##, and as ##b## varies, these minima can be expressed as a function of ##b## alone.

##g(b) \equiv f(\frac{2+b}{b}, b) = \dfrac{4(b+1)^3}{b^2}##.
The minimum possible value of ##(a+1)(b+1)(c+1)## is the minimum value of ##f(a, b)## across all allowed values of ##a, b## and this is ##\min_{a>0, b>0} f(a, b) = \min_{b>0} f(\frac{2+b}{b}, b) = \min_{b>0} g(b)##

The minimum value of ##g(b)## for ##b > 0## can be found by finding the local minima of ##g(b)##.
##g'(b) = \dfrac{-8(b+1)^3}{b^3} + \dfrac{12(b+1)^2}{b^2} = \dfrac{(b+1)^2}{b^3}(4b-8)##, so ##g'(b) = 0 \Rightarrow b=2## for ##b > 0## (Eq. 4)

That the above is a local minimum, not a maximum, can be seen by evaluating ##g''(b)## at ##b=2##: one finds ##g''(2) = \frac{9}{2} = 4.5 > 0##.
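A finite-difference check of ##g''(2)## (our sketch; the step size is an arbitrary choice balancing truncation and rounding error):

```python
def g(b):
    # g(b) = 4 (b+1)^3 / b^2, the restricted objective from the text
    return 4 * (b + 1)**3 / b**2

h = 1e-4
g2 = (g(2 + h) - 2*g(2) + g(2 - h)) / h**2   # central second difference
assert abs(g2 - 4.5) < 1e-4                  # matches g''(2) = 9/2
assert g(2) == 27                            # the candidate minimum value
```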

Thus, the minimum value of ##(a+1)(b+1)(c+1)## under the stated conditions is ##g(2) = \dfrac{4(2+1)^3}{2^2} = 3^3 = 27##. This proves that ##(a+1)(b+1)(c+1) \geq 27##, with equality when ##b=2##; using (Eq. 3) and (Eq. 1), the corresponding values of ##a## and ##c## are then also 2, i.e. ##a=b=c=2## attains the minimum.
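Finally, the bound can be sanity-checked over a grid (our sketch; the grid spacing and tolerance are arbitrary choices):

```python
def f(a, b):
    # (a+1)(b+1)(c+1) with c eliminated via (Eq. 1); valid for ab > 1
    return (a + 1)**2 * (b + 1)**2 / (a*b - 1)

# f(2, 2) attains the claimed minimum of 27
assert abs(f(2, 2) - 27) < 1e-12

# f >= 27 on a grid of points with ab > 1
vals = [f(i/10, j/10) for i in range(1, 101) for j in range(1, 101)
        if (i/10) * (j/10) > 1.001]
assert min(vals) >= 27 - 1e-9
```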