Math Challenge - October 2020

SUMMARY

The forum discussion centers on advanced mathematical concepts including Functional Analysis, Algebras, Measure Theory, Differential Geometry, and Optimization. Key problems addressed include uniform convergence of sequences of functions, properties of *-algebras, Riemann integrability, and Gaussian curvature of the pseudosphere. Notable contributions were made by users such as @julian, @Jarvis323, and @Fred Wright, who provided solutions and insights into complex mathematical proofs and theorems.

PREREQUISITES
  • Understanding of Functional Analysis and its applications
  • Familiarity with *-algebras and C*-norms
  • Knowledge of Riemann integration and Borel measurability
  • Concepts of Differential Geometry, particularly Gaussian curvature
NEXT STEPS
  • Study the properties of C*-algebras and their applications in quantum mechanics
  • Explore the implications of the Schwarz–Pick theorem in complex analysis
  • Investigate the relationship between measure theory and functional analysis
  • Learn about the applications of Gaussian curvature in modern geometry
USEFUL FOR

Mathematicians, graduate students in mathematics, and researchers focusing on advanced topics in analysis, geometry, and algebra.

fresh_42
Summary:: Functional Analysis, Algebras, Measure Theory, Differential Geometry, Calculus, Optimization, Algorithms, Integration, Lie Algebras.

1. (solved by @julian ) Let ##(a_n)\subseteq\mathbb{R}## be a sequence of real numbers such that ##|a_n| \leq n^{-3}## for all ##n\in \mathbb{N}.## Given the family ##\mathcal{A}## of functions ##f_n\, : \,[0,1]\longrightarrow \mathbb{R}## defined by ##f_n(x)=\sum_{k=n}^\infty a_k\sin(kx)## for ##n\in \mathbb{N},## show that every sequence ##(g_n)\subseteq\mathcal{A}## contains a subsequence ##(g_{n_k})## which converges uniformly on ##[0,1].##

2. (a) Show that if a ##*##-algebra ##A## admits a complete ##C^*##-norm, then it is the only ##C^*##-norm on ##A##.
(b) Let ##A## be a ##*##-algebra. Show that there is a ##*##-isomorphism ##M_n(\Bbb{C}) \otimes A \cong M_n(A)##.
(c) Deduce that the ##C^*##-algebra ##M_n(\Bbb{C})## is nuclear for all ##n \geq 1##. (MQ)

3. Give an example of a Riemann integrable function that is not Borel measurable. (MQ)

4. Let ##C## be the Cantor set. Show that ##\frac{1}{2}C + \frac{1}{2}C = [0,1]##. Deduce that the sum of two sets of measure ##0## need not have measure ##0##. (MQ)
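A quick computational illustration of this (a sketch, not a proof; the level-##n## approximation ##C_n## and the helper names below are our own choices): at every finite level the averaged sum ##\frac{1}{2}C_n+\frac{1}{2}C_n## already covers ##[0,1]##, even though the total length of ##C_n## is only ##(2/3)^n##.

Python:
from fractions import Fraction

def cantor_level(n):
    """Intervals of the level-n approximation C_n of the Cantor set."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        refined = []
        for a, b in intervals:
            third = (b - a) / 3
            refined.append((a, a + third))      # keep the left third
            refined.append((b - third, b))      # keep the right third
        intervals = refined
    return intervals

def covers_unit_interval(n):
    """Check that (1/2)C_n + (1/2)C_n contains [0,1]."""
    parts = cantor_level(n)
    sums = sorted((a1 / 2 + a2 / 2, b1 / 2 + b2 / 2)
                  for a1, b1 in parts for a2, b2 in parts)
    reach = Fraction(0)
    for a, b in sums:                            # sweep and merge the summed intervals
        if a > reach:
            return False                         # a gap in the cover
        reach = max(reach, b)
    return reach >= 1

print([covers_unit_interval(n) for n in range(1, 7)])   # expect all True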

5. Let ##\pi\, : \,\mathbb{R}^n\longrightarrow \mathbb{T}^n## be the canonical projection and ##f:=\pi|_{[0,1]^n}## its restriction to the closed unit cube. Show, with the help of ##f\, : \,[0,1]^n\longrightarrow\mathbb{T}^n##, that a quotient map does not have to be open in general.

6. Let ##D=\{\,z\in \mathbb{C}\, : \,|z|<1\,\}## be the complex open unit disk and let ##0<a<1## be a real number. Suppose ##f\, : \,D\longrightarrow \mathbb{C}## is a holomorphic function such that ##f(a)=1## and ##f(-a)=-1.##
(a) Prove that ##\sup_{z\in D}\{|f(z)|\}\geq \dfrac{1}{a}.##
(b) Prove that if ##f## has no root, then ##\sup_{z\in D}\{|f(z)|\}\geq \exp\left(\dfrac{1-a^2}{4a}\,\pi\right).##

7. (solved by @julian )
(a) Let ##0<p\leq a,b,c,d,e \leq q## and show that
$$
(a+b+c+d+e)\left(\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c}+\dfrac{1}{d}+\dfrac{1}{e}\right) \leq 25+6\left(\sqrt{\dfrac{p}{q}}-\sqrt{\dfrac{q}{p}}\right)^2.
$$
(b) This is a special case of a general inequality. What is the general case and how is it proven?

8. Let ##n>1## be an integer. There are ##n## lamps ##L_0,\ldots,L_{n-1}## arranged in a circle. Each lamp is either ON ##(1)## or OFF ##(0)##. A sequence of steps ##S_0,\ldots,S_i,\ldots## is carried out. Step ##S_j## affects the state of ##L_j## only (leaving the states of all other lamps unaltered) as follows:
If ##L_{j-1}## is ON, ##S_j## changes the state of ##L_j## from ON to OFF or from OFF to ON.
If ##L_{j-1}## is OFF, ##S_j## leaves the state of ##L_j## unchanged.
The lamps are labeled modulo ##n##, that is ##L_{-1}=L_{n-1}, L_0=L_n,## etc. Initially all lamps are ON.
Show that
(a) (solved by @Jarvis323 ) there is a positive integer ##M(n)## such that after ##M(n)## steps all the lamps are ON again;
(b) if ##n=2^k,## then all lamps are ON after ##(n^2-1)## steps;
(c) if ##n=2^k+1,## then all lamps are ON after ##(n^2-n+1)## steps.

9. (solved by @Fred Wright ) The pseudosphere is the rotational surface of the tractrix, e.g. parameterized by
$$
f\, : \,\mathbb{R}^2\longrightarrow \mathbb{R}^3\; , \;f(x,y)=\begin{bmatrix} \cos (y)/ \cosh (x)\\ \sin (y)/ \cosh (x)\\x- \tanh (x)\end{bmatrix}.
$$
Show that the pseudosphere has a constant negative Gauß curvature.

[Image: Rotationstraktrix.png]

10. Let ##\mathfrak{g}## be a Lie algebra with trivial center ##\mathfrak{Z(g)}=\{0\}## over a field of characteristic not equal two and
\begin{align*}
\mathfrak{A(g)}&=\{\varphi:\mathfrak{g}\stackrel{linear}{\longrightarrow}\mathfrak{g}\,|\,[\varphi(X),Y]=[\varphi(Y),X]\text{ for all }X,Y\in \mathfrak{g}\}\\
&=\operatorname{lin}\{\alpha,\beta\neq 0\,|\,[\alpha,\beta]=\alpha\beta-\beta\alpha=\beta\}
\end{align*}
Show that image ##\operatorname{im}\beta## and kernel ##\operatorname{ker}\beta## of ##\beta## are ideals in ##\mathfrak{g}.##
Hint: ##\mathfrak{A(g)}## is a ##\mathfrak{g}##-module by ##X.\varphi =[\operatorname{ad}X,\varphi].##
High Schoolers only

11.
(solved by @etotheipi ) Let ##I## and ##J## be bounded open intervals in ##\Bbb{R}## with ##I \cap J \neq \emptyset## and the length of ##J## is greater than the length of ##I##. Show that ##I \subseteq 3*J##, where ##3*J## is the interval with length ##3## times the length of ##J## and with the same centre as ##J##. (MQ)

12. Given a positive integer ##n##, assume that ##4^n## and ##5^n## start with the same digit in the decimal system.
Show that this digit has to be ##2## or ##4.##
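A quick empirical check (illustration only, not a proof; the scan range is arbitrary): among the first few hundred exponents, every ##n## for which ##4^n## and ##5^n## share a leading digit gives the digit ##2## or ##4##, e.g. ##n=11## with ##4^{11}=4194304## and ##5^{11}=48828125##.

Python:
hits = []
for n in range(1, 500):
    d4 = int(str(4 ** n)[0])    # leading decimal digit of 4^n
    d5 = int(str(5 ** n)[0])    # leading decimal digit of 5^n
    if d4 == d5:
        hits.append((n, d4))

print(hits[:10])
print({d for _, d in hits})     # expect a subset of {2, 4}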

13. A parcel service charges a price per box proportional to the sum of its height, length, and width.
Can it happen that it is cheaper to put a more expensive package inside a cheaper box?

14. Let ##a## be a positive integer and ##(a_n)_{n\in \mathbb{N}_0}## the sequence defined by
$$
a_0:=1\; , \;a_n:=a+\prod_{k=0}^{n-1}a_k \quad (n\geq 1)
$$
(a) Show that there are infinitely many primes which divide at least one number of the sequence.
(b) Show that there is a prime which does not divide any of the numbers in the sequence.
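A small numerical illustration (exploration only; the helper `sequence` is our own, and sympy is used just for factoring): for ##a=1## the sequence is ##1,2,3,7,43,1807,\ldots## and its terms keep picking up new prime factors, which is what (a) asserts in general.

Python:
from sympy import factorint

def sequence(a, length):
    """a_0 = 1, a_n = a + a_0*a_1*...*a_{n-1}."""
    terms, product = [1], 1
    for _ in range(length):
        terms.append(a + product)
        product *= terms[-1]
    return terms

for a in (1, 2, 3):
    terms = sequence(a, 6)
    print(a, terms, [sorted(factorint(t)) for t in terms[1:]])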

15. Let ##a,b,c## be positive real numbers such that ##a+b+c+2=abc.##
Show that ##(a+1)(b+1)(c+1)\geq 27.## Under which condition does equality hold?
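A quick numerical sanity check (a sketch; parametrising the constraint by ##c=(a+b+2)/(ab-1)## is our own choice): random points on the constraint surface never drop below ##27##, and ##a=b=c=2## gives exactly ##27##.

Python:
import random

def minimum_found(samples=100000):
    best = float("inf")
    for _ in range(samples):
        a = random.uniform(0.1, 10.0)
        b = random.uniform(0.1, 10.0)
        if a * b <= 1:
            continue                          # need ab > 1 for a positive c
        c = (a + b + 2) / (a * b - 1)         # enforces a + b + c + 2 = abc
        best = min(best, (a + 1) * (b + 1) * (c + 1))
    return best

print(minimum_found())                        # expect a value >= 27, close to 27
print((2 + 1) * (2 + 1) * (2 + 1), 2 + 2 + 2 + 2 == 2 * 2 * 2)   # equality case a = b = c = 2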
 
Not a high schooler, but very confused by 11

isn't I=(-10,1) and J=(0,2) a counterexample? It feels like you needed to include something about the radius of I vs J
 
Probl. 1
The described sequence of functions is uniformly continuous since all the derivatives are estimated as follows: ##|f'_k(x)|\le \sum_n\frac{1}{n^2}##.
The Arzelà–Ascoli theorem finishes the proof.
 
wrobel said:
Probl. 1
The described sequence of functions is uniformly continuous since all the derivatives are estimated as follows: ##|f'_k(x)|\le \sum_n\frac{1}{n^2}##.
The Arzelà–Ascoli theorem finishes the proof.
Too thin.
 
Office_Shredder said:
Not a high schooler, but very confused by 11

isn't I=(-10,1) and J=(0,2) a counterexample? It feels like you needed to include something about the radius of I vs J

Thanks. There was an assumption missing: that the length of ##I## is smaller than the length of ##J##.
 
Probl. 6 b
It looks like a consequence of the Schwarz–Pick theorem (https://en.wikipedia.org/wiki/Schwarz_lemma). Indeed, take a branch of a function ##w(z)=\log f(z)## such that ##w(-a)=i\pi,\quad w(a)=0##. Applying the Schwarz–Pick theorem to the function $$F(z)=\frac{w(z)}{\sup_{|z|<1}|w(z)|},\quad z_1=a,\quad z_2=-a$$ we get $$\sup_{|z|<1}|w(z)|\ge \frac{(1+a^2)\pi}{2a}.$$ It looks like the required estimate, but it is not the same. I am a little bit confused.
 
wrobel said:
I am a little bit confused.
Yes. Check your solution with
$$
f(z)=-i\,\exp\left(\dfrac{iz-a^2}{iz+1} \cdot \dfrac{\pi}{2a}\right).
$$
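For the curious, a quick numerical look at this function (a rough cmath sampling, nothing rigorous): it takes the prescribed values at ##\pm a##, has no zero, and its modulus on the disk appears to top out at ##\exp\left(\frac{1-a^2}{4a}\pi\right)##, which suggests the bound in 6(b) is sharp.

Python:
import cmath, math

a = 0.3
f = lambda z: -1j * cmath.exp((1j * z - a**2) / (1j * z + 1) * math.pi / (2 * a))

print(f(a), f(-a))                        # expect approximately 1 and -1

# sample |f| on a polar grid inside the unit disk
rs = [k / 200 for k in range(200)]
ts = [2 * math.pi * k / 200 for k in range(200)]
sup = max(abs(f(r * cmath.exp(1j * t))) for r in rs for t in ts)
print(sup, math.exp((1 - a**2) / (4 * a) * math.pi))   # should nearly agree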
 
Not sure if I still qualify for high-school, but I'll take a shot at Math's question anyway
fresh_42 said:
11. Let ##I## and ##J## be bounded open intervals in ##\Bbb{R}## with ##I \cap J \neq \emptyset## and the length of ##J## is greater than the length of ##I##.
Show that ##I \subseteq 3*J##, where ##3*J## is the interval with length ##3## times the length of ##J## and with the same centre as ##J##. (MQ)

Let ##J = (a-d, a+d)## and ##3*J = (a-3d, a+3d)##. Also, let ##I = (b-e, b+e)##. Since ##I \cap J \neq \emptyset##, we either have ##b-e < a+d## or ##a-d < b+e##, or both. Then, since it's given that ##d > e##,$$\begin{align*}\text{Case 1:} \quad b-e < a+d &\implies b+e < a + d + 2e < a + 3d \\ \text{Case 2:} \quad b+e > a-d &\implies b-e > a - d - 2e > a - 3d\end{align*}$$In the first case ##\text{sup}(I) < \text{sup}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{inf}(I) > a - d - 2e > a - 3d = \text{inf}(3*J)##, we have ##\text{inf}(I) > \text{inf}(3*J)##. Then ##I \subseteq 3*J##.

In the second case ##\text{inf}(I) > \text{inf}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{sup}(I) < a + d + 2e < a + 3d = \text{sup}(3*J)##, we have ##\text{sup}(I) < \text{sup}(3*J)##. Then ##I \subseteq 3*J##.
 
fresh_42 said:
Check your solution with
yes, I see the mistake:)
 
  • #10
wrobel said:
yes, I see the mistake:)
Schwarz was the correct idea, but the function to which it is applied is a bit tricky.
 
  • #11
etotheipi said:
Not sure if I still qualify for high-school, but I'll take a shot at Math's question anyway
Let ##J = (a-d, a+d)## and ##3*J = (a-3d, a+3d)##. Also, let ##I = (b-e, b+e)##. Since ##I \cap J \neq \emptyset##, we either have ##b-e < a+d## or ##a-d < b+e##, or both. Then, since it's given that ##d > e##,$$\begin{align*}\text{Case 1:} \quad b-e < a+d &\implies b+e < a + d + 2e < a + 3d \\ \text{Case 2:} \quad b+e > a-d &\implies b-e > a - d - 2e > a - 3d\end{align*}$$In the first case ##\text{sup}(I) < \text{sup}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{inf}(I) > a - d - 2e > a - 3d = \text{inf}(3*J)##, we have ##\text{inf}(I) > \text{inf}(3*J)##. Then ##I \subseteq 3*J##.

In the second case ##\text{inf}(I) > \text{inf}(3*J)##, and also since ##I \cap J \neq \emptyset \implies \text{sup}(I) < a + d + 2e < a + 3d = \text{sup}(3*J)##, we have ##\text{sup}(I) < \text{sup}(3*J)##. Then ##I \subseteq 3*J##.

Looks ok to me! For reference, you can use this exercise to prove the Vitali covering theorem in ##\mathbb{R}##. More general versions of this exercise can also be proven.

Btw, you still count as a high school student until you have had your first course at the university.
 
  • #12
For 4) what is the meaning of a constant times a set? Point-wise multiplication and perhaps a shift?
 
  • #13
benorin said:
For 4) what is the meaning of a constant times a set? Point-wise multiplication and perhaps a shift?
$$\frac{1}{2}C + \frac{1}{2}C := \left\{\frac{1}{2}c + \frac{1}{2}c': c,c' \in C\right\}$$
 
  • #14
question about 6b: so does the assumption "no root" mean that f has no zero in D?
 
  • #15
mathwonk said:
question about 6b: so does the assumption "no root" mean that f has no zero in D?
Yes, nonzero on ##D##. This is necessary to have a logarithm.
 
  • #16
Hint for 6a:

It seems to me that the Schwarz Lemma (plus the fact that the automorphism group of the disc acts transitively) implies any holomorphic map from the disc to itself with more than one fixed point is the identity. Hence if we multiply the function f by a, and assume the sup (of |a·f|) is < 1, we get a contradiction.
 
  • #17
about post #6: do you really get to specify two values of your log (your map w)? I.e. isn't a lift of a map, on a connected space, through a covering map (like exp), determined by the choice of just one value?
 
  • #18
mathwonk said:
Hint for 6a:

It seems to me that the Schwarz Lemma (plus the fact that the automorphism group of the disc acts transitively) implies any holomorphic map from the disc to itself with more than one fixed point is the identity. Hence if we multiply the function f by a, and assume the sup (of |a·f|) is < 1, we get a contradiction.
My proof is easier. It essentially uses only the triangle inequality.
 
  • #19
Nice. Does the easy proof also identify the function as ##f(z) = (1/a)z## when the sup equals ##1/a##?
 
  • #20
mathwonk said:
Nice. Does the easy proof also identify the function as ##f(z) = (1/a)z## when the sup equals ##1/a##?
The maximum principle is used as well, same as in Schwarz.
 
  • #21
mathwonk said:
about post #6: do you really get to specify two values of your log (your map w)? I.e. isn't a lift of a map, on a connected space, through a covering map (like exp), determined by the choice of just one value?
this is exactly the mistake I have already acknowledged
 
  • #22
Problem 9.
We have the parameterized function for the surface,
$$
r(x,y)=\begin{pmatrix}
\frac{\cos(y)}{\cosh(x)}\\
\frac{\sin(y)}{\cosh(x)}\\
x-\tanh(x)
\end{pmatrix}
$$
The Gaussian curvature can be expressed as,
$$
K=\frac{eg - f^2}{EG-F^2}
$$
where the coefficients ##E, F, G## and ##e, f, g## are defined in terms of the partial derivatives of ##r## and the unit normal ##\hat n## as follows:
$$
E=r_{xx}\cdot\hat n
$$
$$
G=r_{yy}\cdot\hat n
$$
$$
F=r_{xy}\cdot\hat n=r_{yx}\cdot\hat n
$$
$$
e=r_{x}\cdot\hat n_x
$$
$$
g=r_{y}\cdot\hat n_y
$$
$$
f=r_{y}\cdot\hat n_x=r_{x}\cdot\hat n_y
$$
where the subscripts indicate partial differentiation and the normal to the surface is,
$$
\hat n = \frac{r_x\times r_y}{|r_x\times r_y|}
$$
We have,
$$
r_x=\begin{pmatrix}
\frac{-\cos(y)\sinh(x)}{\cosh^2(x)}\\
\frac{-\sin(y)\sinh(x)}{\cosh^2(x)}\\
\tanh^2(x)
\end{pmatrix}
$$
$$
r_y=\begin{pmatrix}
\frac{-\sin(y)}{\cosh(x)}\\
\frac{\cos(y)}{\cosh(x)}\\
0
\end{pmatrix}
$$
$$
\hat n=\begin{pmatrix}
-\cos(y)\tanh(x)\\
-\sin(y)\tanh(x)\\
\frac{1}{\cosh(x)}
\end{pmatrix}
$$
$$
\hat n_x=\begin{pmatrix}
\frac{-\cos(y)}{\cosh^2(x)}\\
\frac{-\sin(y)}{\cosh^2(x)}\\
\frac{\sinh(x)}{\cosh^2(x)}
\end{pmatrix}
$$
$$
\hat n_y=\begin{pmatrix}
-\sin(y)\tanh(x)\\
\cos(y)\tanh(x)\\
0
\end{pmatrix}
$$
$$
r_{xx}=\begin{pmatrix}
\frac{\cos(y)(\sinh^2(x)-1)}{\cosh^3(x)}\\
\frac{\sin(y)(\sinh^2(x)-1)}{\cosh^3(x)}\\
\frac{2\sinh(x)}{\cosh^3(x)}
\end{pmatrix}
$$
$$
r_{yy}=\begin{pmatrix}
\frac{-\cos(y)}{\cosh(x)}\\
\frac{-\sin(y)}{\cosh(x)}\\
0
\end{pmatrix}
$$
$$
r_{yx}=r_{xy}=\begin{pmatrix}
\frac{\sin(y)\sinh(x)}{\cosh^2(x)}\\
\frac{-\cos(y)\sinh(x)}{\cosh^2(x)}\\
0
\end{pmatrix}
$$
$$
E=r_{xx}\cdot\hat n=-\frac{\sinh(x)(\sinh^2(x)+1)}{\cosh^4(x)}
$$
$$
e=r_{x}\cdot\hat n_x=\frac{\sinh(x)(\sinh^2(x)+1)}{\cosh^4(x)}=-E
$$
$$
G=r_{yy}\cdot\hat n=\frac{\sinh(x)}{\cosh^2(x)}
$$
$$
g=r_{y}\cdot\hat n_y=\frac{\sinh(x)}{\cosh^2(x)}=G
$$
$$
F=r_{xy}\cdot\hat n=r_{yx}\cdot\hat n=0
$$
$$
f=r_{y}\cdot\hat n_x=r_{x}\cdot\hat n_y=0
$$
and thus the Gaussian curvature,
$$
K=-1
$$
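An independent symbolic cross-check (a sketch in sympy, using the textbook convention: ##E,F,G## from the tangent vectors for the first fundamental form, ##e,f,g## from the unit normal for the second; the final simplification may take a moment):

Python:
import sympy as sp

x = sp.symbols('x', positive=True)    # x > 0 avoids sign/abs issues in the norm
y = sp.symbols('y', real=True)
r = sp.Matrix([sp.cos(y) / sp.cosh(x),
               sp.sin(y) / sp.cosh(x),
               x - sp.tanh(x)])

rx, ry = r.diff(x), r.diff(y)
E, F, G = rx.dot(rx), rx.dot(ry), ry.dot(ry)     # first fundamental form

n = rx.cross(ry)
n = sp.simplify(n / n.norm())                    # unit normal

e = sp.simplify(r.diff(x, 2).dot(n))             # second fundamental form
f = sp.simplify(r.diff(x).diff(y).dot(n))
g = sp.simplify(r.diff(y, 2).dot(n))

K = sp.simplify((e * g - f**2) / (E * G - F**2)) # Gaussian curvature
print(K)                                         # expect -1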
 
  • #23
I'm confused by problem 1.
What's the difference between f_0 and f_1?
 
  • #24
pop_ianosd said:
I'm confused by problem 1.
What's the difference between f_0 and f_1?
You are right. This is a typo. I corrected it.
 
  • #25
There must be a simpler way to prove this, but I think this works.

Problem 8a.
Let a configuration ##C_t## (of the given process) be defined by the ON/OFF states of each lamp before applying the ##t^{th}## step, combined with the current lamp index ##I_t = t \mod n##.

Because there is a finite number of configurations, some configuration must be repeated eventually.

Let ##C_a## be the first repeating configuration and assume for contradiction that ##a \neq 0##. Let ##C_b## be the configuration preceding the second occurrence of a configuration equal to ##C_a##. Then ##C_{a} = C_{b+1}## but ##C_{a-1} \neq C_b## because if ##C_{a-1} = C_{b}##, then ##C_{a-1}## is a repeating configuration that appears before ##C_a##.

Since ##C_{b}## and ##C_{a-1}## both precede a configuration equal to ##C_a##, ##I_{(b \mod n)} = I_{((a-1) \mod n)}##. Since ##C_b \neq C_{a-1}## but their step indices agree, they must differ in their lamp states. Since a single step can change the state of at most ##1## lamp, and both configurations are ##1## step away from being equal, it follows that they must differ only in ##L_{((a-1) \mod n)} \equiv L_{(b \mod n)}##. Thus ##L_{((a-2) \mod n)} \equiv L_{((b-1) \mod n)}## must be in the same state in both ##C_{b}## and ##C_{a-1}##.

Case 1: That state is OFF. Then neither lamp state (neither in ##C_{b}## nor in ##C_{a-1}##) would change in the next step. But the two states differ, and both configurations are assumed to directly precede a configuration equal to ##C_{a}##, which is a contradiction.

Case 2: That state is ON. Then the next step changes both lamp states (in ##C_{b}## as well as in ##C_{a-1}##), namely by flipping ##L_{(b \mod n)}##. But since the two configurations already differed in ##L_{(b \mod n)}## before this step, they still differ in ##L_{(b \mod n)}## afterwards.

All cases are covered, and each contradicts the assumption that ##a\neq 0##.

It follows that the first repeating configuration must be the initial configuration.

##Q.E.D.##

EDIT: Tried to make the notation and definitions more consistent.
 
  • #26
Jarvis323 said:
There must be a simpler way to prove this, but I think this works.

Problem 8a.
Let a configuration ##C_t## (of the given process) be defined by the ON/OFF states of each lamp before applying the ##t^{th}## step, combined with the current lamp index ##I_t = t \mod n##.

Because there is a finite number of configurations, some configuration must be repeated eventually.

Let ##C_a## be the first repeating configuration and assume for contradiction that ##a \neq 0##. Let ##C_b## be the configuration preceding the second occurrence of a configuration equal to ##C_a##. Then ##C_{a} = C_{b+1}## but ##C_{a-1} \neq C_b## because if ##C_{a-1} = C_{b}##, then ##C_{a-1}## is a repeating configuration that appears before ##C_a##.

Since ##C_{b}## and ##C_{a-1}## both precede a configuration equal to ##C_a##, ##I_{(b \mod n)} = I_{((a-1) \mod n)}##. Since ##C_b \neq C_{a-1}## but their step indices agree, they must differ in their lamp states. Since a single step can change the state of at most ##1## lamp, and both configurations are ##1## step away from being equal, it follows that they must differ only in ##L_{((a-1) \mod n)} \equiv L_{(b \mod n)}##. Thus ##L_{((a-2) \mod n)} \equiv L_{((b-1) \mod n)}## must be in the same state in both ##C_{b}## and ##C_{a-1}##.

Case 1: That state is OFF. Then neither lamp state (neither in ##C_{b}## nor in ##C_{a-1}##) would change in the next step. But the two states differ, and both configurations are assumed to directly precede a configuration equal to ##C_{a}##, which is a contradiction.

Case 2: That state is ON. Then the next step changes both lamp states (in ##C_{b}## as well as in ##C_{a-1}##), namely by flipping ##L_{(b \mod n)}##. But since the two configurations already differed in ##L_{(b \mod n)}## before this step, they still differ in ##L_{(b \mod n)}## afterwards.

All cases are covered, and each contradicts the assumption that ##a\neq 0##.

It follows that the first repeating configuration must be the initial configuration.

##Q.E.D.##

EDIT: Tried to make the notation and definitions more consistent.
If we phrase it positively, then you have proven that each step can be reversed. Hence, since there are only finitely many possible states, any repetition can be traced back to a repetition of the initial state.

Since you mentioned the notation, here is the one I have used:
Lamp ##L_j## is in state ##x_j\in \{0,1\}##
initial state: ##v_0=(1,1,\ldots,1)\text{ since }x_{-n}=x_{-n+1}=\ldots=x_{-1}=1##
configuration before step ##S_j## is applied: ##v_j=(x_{j-n},\ldots,x_{j-1})##

The trick is to find a formula ##x_j=x_j(x_{j-n},\ldots,x_{j-1}).##
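A direct simulation of the process (a quick sketch; the helper `lamp_steps` is ours) confirms (b) and (c) for the first few admissible values of ##n##:

Python:
def lamp_steps(n, num_steps):
    """Run num_steps steps of the process; lamps start all ON."""
    lamps = [1] * n
    for j in range(num_steps):
        if lamps[(j - 1) % n] == 1:     # S_j toggles L_j iff L_{j-1} is ON
            lamps[j % n] ^= 1
    return lamps

# (b): n = 2^k  ->  all ON after n^2 - 1 steps
for k in range(1, 6):
    n = 2 ** k
    print(n, lamp_steps(n, n * n - 1) == [1] * n)

# (c): n = 2^k + 1  ->  all ON after n^2 - n + 1 steps
for k in range(1, 6):
    n = 2 ** k + 1
    print(n, lamp_steps(n, n * n - n + 1) == [1] * n)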
 
  • #27
7 seems to be a strengthening of Kantorovich's Inequality... interesting.
 
  • #28
Problem #7:

We take ##0 < p \leq a_1 \leq a_2 \leq a_3 \leq a_4 \leq a_5 \leq q##. Write

\begin{align*}
M = (a_1 + a_2 + a_3 + a_4 + a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\end{align*}

This is increased when all the weights are on ##a_1## and ##a_5##, i.e. when:

\begin{align*}
M \leq (m a_1 + (5-m) a_5) \left( \frac{m}{a_1} + \frac{5-m}{a_5} \right)
\end{align*}

where ##1 \leq m \leq 4##. We prove this by induction. Consider the general formula

\begin{align*}
M = \left( \sum_i \lambda_i a_i \right) \left( \sum_i \frac{\lambda_i}{a_i} \right) .
\end{align*}

where the ##\lambda_i##'s are integers, some of which are zero, such that ##\sum_{i=1}^n \lambda_i = n##. Consider the ##a_j## term with next term to the left ##a_{j-l}## and next term to the right ##a_{j+l'}##. Define the function

\begin{align*}
M (x) = \left( \sum_{i \not= j} \lambda_i a_i + \lambda_j x \right) \left( \sum_{i \not= j} \frac{\lambda_i}{a_i} + \frac{\lambda_j}{x} \right) .
\end{align*}

where ##x## varies between ##a_{j-l}## and ##a_{j+l'}##. We find that

\begin{align*}
\frac{d}{dx} M (x) & = \lambda_j \left( \sum_{i \not= j} \frac{\lambda_i}{a_i} + \frac{\lambda_j}{x} \right) - \left( \sum_{i \not= j} \lambda_i a_i + \lambda_j x \right) \frac{\lambda_j}{x^2}
\\
& = \lambda_j \left( \sum_{i \not= j} \frac{\lambda_i}{a_i} \right) - \lambda_j \left( \sum_{i \not= j} \lambda_i a_i \right) \frac{1}{x^2}
\end{align*}

and

\begin{align*}
\frac{d^2}{dx^2} M (x) = 2 \lambda_j \left( \sum_{i \not= j} \lambda_i a_i \right) \frac{1}{x^3} > 0 .
\end{align*}

So ##M(x)## is convex, which means it has a maximum value at one of the end points. Therefore, ##M## is increased by replacing ##a_j## by either ##a_{j-l}## or ##a_{j+l'}##. This completes the inductive argument.

So that

\begin{align*}
& \; (a_1 + a_2 + a_3 + a_4 +a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\\
& \leq (m a_1 + (5-m) a_5) \left( \frac{m}{a_1} + \frac{5-m}{a_5} \right)
\\
& = m^2 + (5-m)^2 + m (5-m) \left( \frac{a_1}{a_5} + \frac{a_5}{a_1} \right)
\\
& = m^2 + (5-m)^2 + 2 m (5-m) + m (5-m) \left( \frac{a_1}{a_5} + \frac{a_5}{a_1} - 2 \right)
\\
& = 25 + m (5-m) \left( \sqrt{\frac{a_1}{a_5}}- \sqrt{\frac{a_5}{a_1}} \right)^2
\end{align*}

This is maximised by taking ##m=2##:

\begin{align*}
(a_1 + a_2 + a_3 + a_4 +a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\leq 25 + 6 \left( \sqrt{\frac{a_1}{a_5}}- \sqrt{\frac{a_5}{a_1}} \right)^2 \quad (*)
\end{align*}

Define

\begin{align*}
f (x) = \left( \sqrt{\frac{a_1}{x}} - \sqrt{\frac{x}{a_1}} \right)^2
\end{align*}

and take the derivative

\begin{align*}
\frac{d}{dx} f (x) & = \left( \sqrt{\frac{a_1}{x}}- \sqrt{\frac{x}{a_1}} \right) \left( - \frac{1}{x} \sqrt{\frac{a_1}{x}} - \sqrt{\frac{1}{a_1 x}} \right)
\\
& = - \frac{1}{x} \left( \sqrt{\frac{a_1}{x}} - \sqrt{\frac{x}{a_1}} \right) \left( \sqrt{\frac{a_1}{x}} + \sqrt{\frac{x}{a_1}} \right)
\\
& = \frac{1}{x} \left( \frac{x}{a_1} - \frac{a_1}{x} \right)
\end{align*}

which is ##\geq 0## for ##a_1 \leq x < \infty##. This means we can replace ##a_5## in ##(*)## with any number, denote it ##q##, larger than or equal to ##a_5##.

Define

\begin{align*}
g (x) = \left( \sqrt{\frac{x}{q}} - \sqrt{\frac{q}{x}} \right)^2
\end{align*}

and take the derivative

\begin{align*}
\frac{d}{dx} g (x) = \frac{1}{x} \left( \frac{x}{q} - \frac{q}{x} \right)
\end{align*}

which is ##\leq 0## for ##0 < x \leq q##. This means that, after having replaced ##a_5## with ##q##, we can replace ##a_1## in ##(*)## with any number, denote it ##p > 0##, smaller than or equal to ##a_1##. So finally we have

\begin{align*}
(a_1 + a_2 + a_3 + a_4 +a_5) \left( \frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \frac{1}{a_4} + \frac{1}{a_5} \right)
\leq 25 + 6 \left( \sqrt{\frac{p}{q}}- \sqrt{\frac{q}{p}} \right)^2 .
\end{align*}

where ##0 < p \leq a_1, a_2 , a_3 , a_4 , a_5 \leq q##.

It is easy to generalise these arguments. We have

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right)
& \leq (m a_1 + (n-m) a_n) \left( \frac{m}{a_1} + \frac{n-m}{a_n} \right)
\\
& = m^2 + (n-m)^2 + m (n-m) \left( \frac{a_1}{a_n} + \frac{a_n}{a_1} \right)
\\
& = m^2 + (n-m)^2 + 2 m (n-m) + m (n-m) \left( \frac{a_1}{a_n} + \frac{a_n}{a_1} - 2 \right)
\\
& = n^2 + m (n-m) \left( \sqrt{\frac{a_1}{a_n}}- \sqrt{\frac{a_n}{a_1}} \right)^2 .
\end{align*}

For ##n## odd this is maximised by choosing ##m= \frac{n-1}{2}##,

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right) \leq n^2 + \frac{n^2 - 1}{4} \left( \sqrt{\frac{a_1}{a_n}}- \sqrt{\frac{a_n}{a_1}} \right)^2
\end{align*}

For ##n## even this is maximised by choosing ##m= \frac{n}{2}##,

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right) \leq n^2 + \frac{n^2}{4} \left( \sqrt{\frac{a_1}{a_n}}- \sqrt{\frac{a_n}{a_1}} \right)^2
\end{align*}

We further maximise, as before, by introducing ##p## and ##q## such that ##0 < p \leq a_1, a_2 , \dots, a_n \leq q##. So the general formulas are (##n## odd):

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right)
\leq n^2 + \frac{n^2-1}{4} \left( \sqrt{\frac{p}{q}}- \sqrt{\frac{q}{p}} \right)^2
\end{align*}

and (##n## even):

\begin{align*}
\left( \sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n \frac{1}{a_i} \right)
\leq n^2 + \frac{n^2}{4} \left( \sqrt{\frac{p}{q}}- \sqrt{\frac{q}{p}} \right)^2 .
\end{align*}
 
  • #29
Well done, @julian. The last estimation allows a final step
$$
\left(\sum_{i=1}^{n}a_i\right)\left(\sum_{i=1}^n\dfrac{1}{a_i}\right)\leq \dfrac{n^2}{4}\cdot \dfrac{(p+q)^2}{pq}
$$
It is indeed a special case of Kantorovich's inequality (with weights all one) as @StoneTemplePython rightly noted in the previous post. The only difference is a closer look at the optimization problem, i.e. the shape of the feasible set, a polyhedron.
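A quick random test of these bounds (illustration only; the sample sizes and the range ##[p,q]=[1,4]## below are arbitrary):

Python:
import random

def worst_and_bounds(n, p, q, trials=20000):
    # bound from the (n odd / n even) formulas above
    bound = n**2 + (n**2 - (n % 2)) / 4 * ((p / q)**0.5 - (q / p)**0.5)**2
    worst = 0.0
    for _ in range(trials):
        a = [random.uniform(p, q) for _ in range(n)]
        worst = max(worst, sum(a) * sum(1 / x for x in a))
    final = (n**2 / 4) * (p + q)**2 / (p * q)    # the final estimate above
    return worst, bound, final

for n in (5, 6, 7):
    print(n, worst_and_bounds(n, 1.0, 4.0))      # expect worst <= bound <= final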
 
  • #30
Problem #1

First let us consider an individual function ##f_n \in \mathcal{A}## for arbitrary ##n##:

##
f_n (x) = \lim_{m \rightarrow \infty} f_m^{(n)} (x) = \lim_{m \rightarrow \infty} \sum_{k=n}^m \tilde{f}_k (x) , \qquad \text{where } \tilde{f}_k (x) := a_k \sin (kx) .
##

Let ##\{ f_m^{(n)} \}_{m \in \mathbb{N} , m \geq n}## be a sequence of functions on ##I = [0, 1]## such that each ##f_m^{(n)}## has continuous derivative on ##I##. If ##\sum_{k=n}^\infty \tilde{f}_k (x)## converges for some ##x_0 \in I## and the series of derivatives ##\sum_{k=n}^\infty \tilde{f}_k'## converges uniformly on ##I## to a sum function ##s_n##, then ##\sum_{k=n}^\infty \tilde{f}_k## converges uniformly on ##I## to a differentiable sum function ##f_n## on ##I##. Moreover, ##s_n = f_n'##. That is, ##\{ f_m^{(n)'} \}## converges uniformly to ##f_n'## on ##I##.

In our case ##f_m^{(n)}## corresponds to

##
f_m^{(n)} = \sum_{k=n}^m a_k \sin (kx) .
##

First ##\sum_{k=n}^\infty a_k \sin (kx)## converges for ##x=0##. We need to show that ##\sum_{k=n}^\infty \tilde{f}_k'## converges uniformly, which is easy to do using the Weierstrass M-test: If ##|\tilde{f}_k' (x)| \leq M_k## for each ##x \in I## and the sequence ##(M_k)## is summable, then the series ##\sum_{k=n}^\infty \tilde{f}_k'## converges uniformly on ##I##. Write ##M_k = {1 \over k^2}##. Obviously, ##|\tilde{f}_k' (x)| = | k a_k \cos (kx)| \leq M_k## and ##\sum_{k=n}^\infty M_k## converges.

We have established that the series ##\sum_{k=n}^\infty a_k \sin (kx)## converges uniformly on ##I## to a differentiable sum function ##f_n (x)##, and that the series of derivatives converges uniformly to ##f_n' (x)##.

Now let us turn to the sequence ##\{ f_n \}_{n \in \mathbb{N}}## where:

##
f_n (x) = \sum_{k=n}^\infty \tilde{f}_k (x) = \sum_{k=1}^\infty \tilde{f}_k (x) - \sum_{k=1}^{n-1} \tilde{f}_k (x) = f_1(x) - \sum_{k=1}^{n-1} \tilde{f}_k (x)
##

First, the ##f_n## are a sequence of functions on ##I = [0, 1]## such that each ##f_n## is differentiable on ##I##. The sequence ##f_n## converges uniformly to the zero function on ##I##, since ##f_n = f_1 - \sum_{k=1}^{n-1} \tilde{f}_k## and the partial sums converge uniformly to ##f_1##. The derivatives are also uniformly convergent to the zero function on ##I## because:

##
f_n'(x) = f_1' (x) - \sum_{k=1}^{n-1} \tilde{f}_k' (x)
##

Note: A continuous function on a closed bounded interval is bounded and attains its bounds. Every uniformly convergent sequence of bounded functions is uniformly bounded.
Suppose that ##f_n \rightarrow f## uniformly on ##[0,1]## and that for each ##n##, there exists ##C_n## such that ##|f_n (x)| \leq C_n## for any ##x \in I##. Let ##N## be such that

##
|f_n (x) - f (x)| = |f_n (x)| < 1 ,
##

whenever ##n \geq N## and ##x \in I## (in our case ##f(x)## is the zero function). Put

##
C = \max \{ C_1, C_2 , \dots , C_{N-1} , 1 \}
##

Then, for any ##x \in I##,

##
|f_n (x)| \leq C \quad \text{for } n=1,2,3, \dots
##

Thus ##f_n## is uniformly bounded.

Definition: a sequence of functions is uniformly equicontinuous if, for every ##\epsilon > 0##, there exists a ##\delta > 0## such that

##
|f_n (x) - f_n (y)| < \epsilon
##

whenever ##|x - y| < \delta## for all functions ##f_n## in the sequence.

We now use the above results proved for ##\{ f_n \}## to prove that they are uniformly equicontinuous. By the mean value theorem

##
|f_n (y) - f_n (x)| = \left| {df_n \over dx} (c) \right| \, |y - x| \quad \text{for some } c \text{ where } 0 \leq x < c < y \leq 1 .
##

The derivatives being uniformly bounded implies

##
|f_n (y) - f_n (x)| \leq M |y - x|
##

where ##M## is the supremum of the absolute values of the derivatives of the functions in the sequence and is independent of ##n##. Take ##\epsilon > 0## and choose ##\delta = \epsilon / M##, then for ##|x - y| < \delta## we have

##
|f_n (y) - f_n (x)| \leq M |y - x| < M \delta = \epsilon .
##

That is, we have shown that a uniformly bounded sequence ##\{ f_n \}## of differentiable functions with uniformly bounded derivatives is uniformly equicontinuous.

We then apply the Arzelà–Ascoli theorem: given a sequence of functions ##\{ f_n \}_{n \in \mathbb{N}}## defined on a closed bounded interval ##[a,b]## of the real line, if the sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence ##\{ f_{n_k} \}_{k \in \mathbb{N}}## that converges uniformly. Since every sequence ##(g_n)\subseteq\mathcal{A}## is drawn from the uniformly bounded, uniformly equicontinuous family ##\{ f_n \}##, the theorem yields a uniformly convergent subsequence of ##(g_n)##, as required.
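A small numerical illustration of the two uniform bounds used above (a sketch; we simply take the extreme case ##a_k = k^{-3}## and truncate the series):

Python:
import math

def f(n, x, terms=5000):
    return sum(k**-3 * math.sin(k * x) for k in range(n, terms))

def f_prime(n, x, terms=5000):
    return sum(k**-2 * math.cos(k * x) for k in range(n, terms))

xs = [i / 100 for i in range(101)]               # grid on [0, 1]
for n in (1, 2, 5, 20):
    sup_f = max(abs(f(n, x)) for x in xs)
    sup_fp = max(abs(f_prime(n, x)) for x in xs)
    print(n, round(sup_f, 4), round(sup_fp, 4))  # uniformly bounded; sup|f_n| shrinks with n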
 
