Math Challenge - April 2020

The Math Challenge - April 2020 discussion covers various mathematical problems and their solutions, including topics in functional analysis, statistics, differential equations, and number theory. Key problems involve showing the unique continuation of linear operators in normed spaces, testing hypotheses for normally distributed variables, and solving initial value problems for differential equations. Participants also explore properties of continuous functions and polynomials, as well as combinatorial problems related to tiling rectangles. The thread emphasizes the importance of clear, complete solutions and rigorous mathematical reasoning.
  • #61
QuantumQuest said:
I mean that I made a mistake in copying the expression. The one that is written now in post #56 is the one I'm talking about. In it you took the limit of the left-hand side only, while you have to take the limits of both sides, but as you answered in post #57 it's understood, so that's OK. Still, taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
Was my explanation of why I took only the left-side limit (I can do that if we move everything to one side and then use the rule "the limit of a sum equals the sum of the limits") unsatisfactory?
 
  • #62
Adesh said:
Yes, I thought writing the limit before the whole equation meant the limit was applied to everything, but as I have experienced, it has caused some ambiguities. I apologize for the lack of clarity, but I really meant limits on both sides.

I think that you understand my point about taking both limits by now. But, again, taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
 
  • #63
QuantumQuest said:
I think that you understand my point about taking both limits by now. But, again, taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
Am I allowed to explain my solution or do I need to come up with a new one?
 
  • #64
Adesh said:
Am I allowed to explain my solution or do I need to come up with a new one?

It's up to you, but if you want to keep your current solution you must make the appropriate corrections, present it in a more understandable way, and, of course, reach the correct result. Alternatively, I'd recommend taking my hint into account and working that way, which leads to a nice solution.
 
  • #65
Question 7:
This solution follows the hint given by @QuantumQuest.

By Taylor Expansion:
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!} f''(a) + \frac{h^3}{3!} f'''(a) + \dots $$
From the given expression we can write $$ f(a+h) = f(a+h) - \frac{h^2}{2!} f''(a +\theta h) + \frac{h^2}{2!} f''(a) \frac{h^3}{3!} + f'''(a) + \dots $$ As ##h## goes to zero we can ignore higher-order terms, hence we can write
$$\frac{h^2}{2!} \left( f''(a+\theta h) - f''(a)\right) = \frac{h^3}{3!} f'''(a) \\
\frac{6}{2} \left( \frac{f''(a+\theta h) - f''(a)}{\theta h} \right) = \frac{f'''(a)}{\theta} $$
In the limit as ##h## goes to zero we have $$ 3 f'''(a) = \frac{f'''(a)}{\theta} \\
3 \theta = 1 \\
\theta = 1/3 $$
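As a numerical sanity check (my own addition, not part of the original solution), one can solve ##f(a+h) = f(a) + h f'(a) + \frac{h^2}{2}f''(a+\theta h)## for ##\theta## at shrinking ##h## and watch it approach ##1/3##. The choice ##f = \sin## and the helper `theta_of` are illustrative assumptions, not from the thread:

```python
import math

# Sanity check of theta -> 1/3 (editor's addition). For a sample smooth f
# (f = sin here; any f with f'''(a) != 0 works), solve
#   f(a+h) = f(a) + h f'(a) + (h^2/2) f''(a + theta*h)
# for theta by bisection and watch theta approach 1/3 as h shrinks.
a = 0.7

def fpp(x):          # second derivative of sin
    return -math.sin(x)

def theta_of(h):
    # value that f''(a + theta*h) must equal
    target = (math.sin(a + h) - math.sin(a) - h * math.cos(a)) / (h * h / 2)
    lo, hi = 0.0, 1.0  # g(theta) = f''(a + theta*h) - target changes sign here
    for _ in range(100):
        mid = (lo + hi) / 2
        if (fpp(a + mid * h) - target) * (fpp(a + lo * h) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print([round(theta_of(h), 5) for h in (0.1, 0.01, 0.001)])
```

The printed values drift toward ##1/3## as ##h## decreases, consistent with the limit computed above.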
 
  • Like
Likes QuantumQuest
  • #66
Problem 6:

As ##\ln (x)## is a concave function we have:

##
\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)
##

Taking the exponential, we obtain:

##
m_1 a_1 + m_2 a_2 \geq a_1^{m_1} a_2^{m_2} .
##

Equality holds if ##m_1 = 0## or ##m_2 = 0##. Also, equality holds if ##a_1 = a_2##.

 
  • #67
fresh_42 said:
I assume that means that y(x)=x is the solution.

Have you cross checked this expression? Mine is different.

Edit: This could either mean that there is more than one solution or one of us is wrong. I checked your result and couldn't retrieve the differential equation, but I'm not sure whether I didn't make a mistake. Your shift by 1/2 makes a big mess in the calculations.
Hi fresh_42,
I simplified my expression for ##y(0)=2## to ##y=x + \frac{1}{(1-\frac{1}{2} e^x)}##. My solution gives,$$
y(0)=0\\
y=x\\
y(0)=1\\
y=x+1\\
y(0)=2\\
y=x + \frac{1}{(1-\frac{1}{2} e^x)}
$$I cross checked these solutions and they check out.
 
  • Like
Likes fresh_42
  • #68
Adesh said:
Qustiom7:
This solution is achieved only by the hint given by @QuantumQuest .

By Taylor Expansion:
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!} f''(a) + \frac{h^3}{3!} f'''(a) + \dots $$
From the given expression we can write $$ f(a+h) = f(a+h) - \frac{h^2}{2!} f''(a +\theta h) + \frac{h^2}{2!} f''(a) \frac{h^3}{3!} + f'''(a) + \dots $$ As ##h## goes to zero we can ignore higher-order terms, hence we can write
$$\frac{h^2}{2!} \left( f''(a+\theta h) - f''(a)\right) = \frac{h^3}{3!} f'''(a) \\
\frac{6}{2} \left( \frac{f''(a+\theta h) - f''(a)}{\theta h} \right) = \frac{f'''(a)}{\theta} $$
In the limit as ##h## goes to zero we have $$ 3 f'''(a) = \frac{f'''(a)}{\theta} \\
3 \theta = 1 \\
\theta = 1/3 $$

The solution is essentially correct. Just two comments. First, you must correct the typo in the addition and the multiplication at the end of the second expression so that it is correctly written for the people who will read your solution. Second, it's best to write the expansion like this

##f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!} f''(a) + \frac{h^3}{3!} f'''(a + \theta_1 h) ##

as what is given is that ##f## is a three-times differentiable function. In the limit ##h \rightarrow 0##, ##\theta_1 h## vanishes.
 
  • #69
julian said:
Problem 6:

As ##\ln (x)## is a concave function we have:

##
\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)
##

Taking the exponential, we obtain:

##
m_1 a_1 + m_2 a_2 \geq a_1^{m_1} a_2^{m_2} .
##

Equality holds if ##m_1 = 0## or ##m_2 = 0##. Also, equality holds if ##a_1 = a_2##.


Yes, I see what you mean but in order for other people to understand it, I think that it is best to show how to utilize the derivatives and the appropriate theorem in order to reach the point where ##\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)##.
 
  • #70
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=\sum_{\substack{k=0\\ k\text{ even}}}^{n-2}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)+\frac{x^n}{n!}$$
I am going to prove that each parenthesis is non-negative. For even ##k## and ##-1\leq x<0## we have ##x^k=|x|^k\geq|x|^{k+1}=\left|x^{k+1}\right|## and ##k!\leq(k+1)!##, so
$$\begin{align*}
\frac{x^k}{k!}\geq\frac{\left|x^{k+1}\right|}{(k+1)!}&\Leftrightarrow-\frac{x^k}{k!}\leq\frac{x^{k+1}}{(k+1)!}\leq\frac{x^k}{k!}\\
&\Leftrightarrow0\leq\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
Since the last term ##\frac{x^n}{n!}## is strictly positive (##n## is even and ##x\neq0##), it follows that ##p_n(x)>0##.
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}>0## on ##\mathbb{R}##. From this I can say that ##p_n(x)## is strictly increasing, with no local maxima or minima, hence injective. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
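The claimed parity pattern can be spot-checked numerically (an illustration, not a proof). The helper name `real_root_count` is my own; it counts the approximately real roots `numpy` finds for the partial sums of ##e^x##:

```python
from math import factorial

import numpy as np

# Spot-check (editor's addition, an illustration rather than a proof) of the
# claimed pattern: the partial sums p_n(x) = sum_{k=0}^n x^k/k! have no real
# roots when n is even and exactly one real root when n is odd.

def real_root_count(n, tol=1e-7):
    # numpy.roots expects the highest-degree coefficient first
    coeffs = [1 / factorial(k) for k in range(n, -1, -1)]
    return sum(1 for r in np.roots(coeffs) if abs(r.imag) < tol)

for n in range(1, 12):
    assert real_root_count(n) == (0 if n % 2 == 0 else 1)
print("no real roots for even n, exactly one for odd n (n = 1..11)")
```

For moderate ##n## the complex roots of these partial sums are well separated from the real axis, so the imaginary-part tolerance is unproblematic here.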
 
  • #71
archaic said:
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=\sum_{\substack{k=0\\ k\text{ even}}}^{n-2}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)+\frac{x^n}{n!}$$
I am going to prove that each parenthesis is non-negative. For even ##k## and ##-1\leq x<0## we have ##x^k=|x|^k\geq|x|^{k+1}=\left|x^{k+1}\right|## and ##k!\leq(k+1)!##, so
$$\begin{align*}
\frac{x^k}{k!}\geq\frac{\left|x^{k+1}\right|}{(k+1)!}&\Leftrightarrow-\frac{x^k}{k!}\leq\frac{x^{k+1}}{(k+1)!}\leq\frac{x^k}{k!}\\
&\Leftrightarrow0\leq\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
Since the last term ##\frac{x^n}{n!}## is strictly positive (##n## is even and ##x\neq0##), it follows that ##p_n(x)>0##.
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}>0## on ##\mathbb{R}##. From this I can say that ##p_n(x)## is strictly increasing, with no local maxima or minima, hence injective. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
I am not exactly sure, but maybe I can use a recursive algorithm. I'll think about it later.
 
  • #72
@archaic If ##n## is even, then ##p_n## achieves a minimum at some point ##x_0##. Think about how to use the fact that ##p_n'(x_0)=0##.
 
  • #73
Infrared said:
@archaic If ##n## is even, then ##p_n## achieves a minimum at some point ##x_0##. Think about how to use the fact that ##p_n'(x_0)=0##.
I had thought of that, but I didn't know how to prove the uniqueness of the minimum.
 
  • #74
You don't need uniqueness. If ##x_0## is any point where ##p_n## achieves its minimum, then it's enough to show that ##p_n(x_0)>0##.

But uniqueness does follow from ##p_n'(x_0)=p_{n-1}(x_0)=0## since you know that ##p_{\text{odd}}## only has one real root.
 
  • #75
Infrared said:
But uniqueness does follow from ##p_n'(x_0)=p_{n-1}(x_0)=0## since you know that ##p_{\text{odd}}## only has one real root.
What? I do? Do you mean that I am allowed to say "since we know that ##p_{odd}(x)## has only one real root"?
I am probably misunderstanding you, since a polynomial of odd degree ##n## can have up to ##n## real roots.
 
  • #76
By induction you can assume that ##p_{2n-1}## has exactly one root. Then arguing like this shows that ##p_{2n}## has no roots, which by your post shows that ##p_{2n+1}## has exactly one root, etc.
 
  • #77
Infrared said:
By induction you can assume that ##p_{2n-1}## has exactly one root. Then arguing like this shows that ##p_{2n}## has no roots, which by your post shows that ##p_{2n+1}## has exactly one root, etc.
But to argue like this, I should already have found that ##p_{2n}(x)\geq0##, no (*)? Assuming that ##p_{2n-1}## has just one root only tells me that ##p_{2n}## has just one extremum.
I realized that my second-to-last post made no sense because of (*). Maybe that's why I tried to prove that ##p_{2n}(x)\geq0##.
 
  • #78
Yes, you need to show that if ##x_0## is an extremum for ##p_{2n}##, then ##p_{2n}(x_0)>0##. Think about how to use ##p_{2n}'(x_0)=p_{2n-1}(x_0)=0##.
 
  • #79
Infrared said:
Yes, you need to show that if ##x_0## is an extremum for ##p_{2n}##, then ##p_{2n}(x_0)>0##. Think about how to use ##p_{2n}'(x_0)=p_{2n-1}(x_0)=0##.
I tried supposing that ##p_n(x)## and ##p_{n-2}(x)## have only one root each, which lets me say that ##p_{n-1}(x)\geq 0##. But I can't really say much about ##p_{n+1}(x)## (it might be negative in some interval) or ##p_{n-3}(x)##, which would be positive, though possibly wavy.
If I only suppose that ##p_n(x)## has a minimum, then ##p_{n-1}(x)## might also be squiggly.
 
  • #80
Do you want another hint? I think if I say much more, I might give away the solution.
 
  • #81
fresh_42 said:
6. If ##a_1,a_2 > 0## and ##m_1,m_2 ≥ 0## with ##m_1 + m_2 = 1## show that ##a_1^{m_1}a_2^{m_2} < m_1 a_1 + m_2 a_2##. (QQ)

I believe that the question inadvertently missed some conditions. With the specified criteria, it is possible for LHS of the inequality to be equal to the RHS. For example, if ##a_1 = a_2 = a##, we get ##a_1^{m_1}a_2^{m_2} = a^{m_1}a^{m_2} = a^{m_1+m_2} = a##, while the RHS of the question's inequality becomes ##m_1 a + (1-m_1) a = a##, so the inequality becomes an equality. Another example is when ##m_1 = 0##. Here the LHS becomes ##a_1^0 a_2^1 = a_2## and the RHS becomes ##0 \times a_1 + 1 \times a_2 = a_2##, again giving an equality. Therefore, I prove the inequality with the following additional or modified assumptions: ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.

Without loss of generality, let us assume that ##a_1 \lt a_2##. The proof for the opposite case, ##a_1 \gt a_2## will be similar but with the variable ##x## corresponding to ##\frac {a_2}{a_1}## instead of ##\frac {a_1}{a_2}##.

The given inequality is equivalent to ##\dfrac {a_1^{m_1}a_2^{1-m_1}} {m_1 a_1 + (1-m_1) a_2} < 1##,
or equivalently, obtained by dividing both numerator and denominator by ##a_2 = a_2^{m_1} a_2^{1-m_1}##,
$$\dfrac { \left( \dfrac{a_1}{a_2} \right)^{m_1}} {m_1 \left( \dfrac{a_1}{a_2} \right) + (1 - m_1)} < 1
$$

For a fixed value of ##m_1##, the LHS of the above inequality can be written as a function of ##x=\frac{a_1}{a_2}##, so we need to prove that ##f(x) = \dfrac {x ^ {p}} {x p + (1-p)} < 1## for ##x \in (0, 1)## and ##p = m_1 \in (0, 1)##.
Note that since ##a_1 \lt a_2 \text{ and } a_1, a_2> 0, \text{ we must have } x \in (0,1)##.

We now prove that ##f(x)## is a strictly increasing function of ##x## when ##0 \lt x \lt 1##.
$$
f'(x) = \dfrac {px^{p-1}}{x p + (1-p)} - \dfrac {px^p}{(x p + (1-p))^2} = \dfrac {px^{p-1}} {(x p + (1-p))^2} (px + (1-p) - x)= \\
\dfrac {px^{p-1}(1-p) (1-x)} {(x p + (1-p))^2}
$$

It is obvious that ##f'(x) \gt 0## for ##x \in (0, 1), p \in (0, 1)##, since under those conditions both the numerator and the denominator of the above expression for ##f'(x)## are positive. We also notice that ##f(1) = 1##. Since ##f(x)## is strictly increasing for ##0 \lt x \lt 1## and reaches the value 1 at ##x=1##, it follows that ##f(x) \lt 1 \ \forall x \in (0, 1)##. Hence, the question's inequality is proved subject to the additional conditions mentioned earlier.
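A quick numerical spot-check of both conclusions (my own addition, not part of the proof; `f` and `fprime` simply transcribe the formulas above):

```python
import random

# Spot-check (editor's addition): sample x and p in (0, 1) and verify
# f(x) < 1 together with positivity of the derivative formula derived
# in the post: f'(x) = p x^(p-1) (1-p)(1-x) / (xp + 1-p)^2.
random.seed(0)

def f(x, p):
    return x ** p / (x * p + (1 - p))

def fprime(x, p):
    return p * x ** (p - 1) * (1 - p) * (1 - x) / (x * p + (1 - p)) ** 2

for _ in range(10000):
    x = random.uniform(1e-6, 1 - 1e-6)
    p = random.uniform(1e-6, 1 - 1e-6)
    assert f(x, p) < 1 and fprime(x, p) > 0
print("f(x) < 1 and f'(x) > 0 on 10000 random samples")
```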
 
  • Like
Likes QuantumQuest
  • #82
@Infrared I think the only possible way of proving that is proof by contradiction. I'm in need of some discussion (please, no hints). @mathwonk said something but I couldn't decode it.
 
  • #83
Not anonymous said:
I believe that the question inadvertently missed some conditions. With the specified criteria, it is possible for LHS of the inequality to be equal to the RHS. For example, if ##a_1 = a_2 = a##, we get ##a_1^{m_1}a_2^{m_2} = a^{m_1}a^{m_2} = a^{m_1+m_2} = a##, while the RHS of the question's inequality becomes ##m_1 a + (1-m_1) a = a##, so the inequality becomes an equality. Another example is when ##m_1 = 0##. Here the LHS becomes ##a_1^0 a_2^1 = a_2## and the RHS becomes ##0 \times a_1 + 1 \times a_2 = a_2##, again giving an equality. Therefore, I prove the inequality with the following additional or modified assumptions: ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.

Without loss of generality, let us assume that ##a_1 \lt a_2##. The proof for the opposite case, ##a_1 \gt a_2## will be similar but with the variable ##x## corresponding to ##\frac {a_2}{a_1}## instead of ##\frac {a_1}{a_2}##.

The given inequality is equivalent to ##\dfrac {a_1^{m_1}a_2^{1-m_1}} {m_1 a_1 + (1-m_1) a_2} < 1##,
or equivalently, obtained by dividing both numerator and denominator by ##a_2 = a_2^{m_1} a_2^{1-m_1}##,
$$\dfrac { \left( \dfrac{a_1}{a_2} \right)^{m_1}} {m_1 \left( \dfrac{a_1}{a_2} \right) + (1 - m_1)} < 1
$$

For a fixed value of ##m_1##, the LHS of the above inequality can be written as a function of ##x=\frac{a_1}{a_2}##, so we need to prove that ##f(x) = \dfrac {x ^ {p}} {x p + (1-p)} < 1## for ##x \in (0, 1)## and ##p = m_1 \in (0, 1)##.
Note that since ##a_1 \lt a_2 \text{ and } a_1, a_2> 0, \text{ we must have } x \in (0,1)##.

We now prove that ##f(x)## is a strictly increasing function of ##x## when ##0 \lt x \lt 1##.
$$
f'(x) = \dfrac {px^{p-1}}{x p + (1-p)} - \dfrac {px^p}{(x p + (1-p))^2} = \dfrac {px^{p-1}} {(x p + (1-p))^2} (px + (1-p) - x)= \\
\dfrac {px^{p-1}(1-p) (1-x)} {(x p + (1-p))^2}
$$

It is obvious that ##f'(x) \gt 0## for ##x \in (0, 1), p \in (0, 1)##, since under those conditions both the numerator and the denominator of the above expression for ##f'(x)## are positive. We also notice that ##f(1) = 1##. Since ##f(x)## is strictly increasing for ##0 \lt x \lt 1## and reaches the value 1 at ##x=1##, it follows that ##f(x) \lt 1 \ \forall x \in (0, 1)##. Hence, the question's inequality is proved subject to the additional conditions mentioned earlier.


Well, I was given this exercise as an undergrad some decades ago just as I'm giving it in this challenge thread, i.e., with no modifications. While there are of course the cases you mention, the question is looking for the general case where ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.
 
  • Informative
Likes Not anonymous
  • #84
As Question 6 is already solved, I find it advisable to post my own solution as well.

Let's take the function ##f: (0,+\infty) \rightarrow \mathbb{R}## with ##f(x) = \ln{x}##.
We have: ##f'(x) = \frac{1}{x}## and ##f''(x) = -\frac{1}{x^2} \lt 0 \hspace{0.5cm} \forall x \in (0,+\infty)##.

Suppose ##a_1 \lt a_2## ##(^*)##. We apply the Mean Value Theorem over the intervals ##[a_1, \frac{a_1 + ma_2}{1 + m}]## and ##[\frac{a_1 + ma_2}{1 + m}, a_2]## for ##m \gt 0##. So ##\exists \xi_1 \in (a_1, \frac{a_1 + ma_2}{1 + m}), \xi_2 \in (\frac{a_1 + ma_2}{1 + m}, a_2)## such that

##\frac{f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2) - f(a_1)}{\frac{a_1 + ma_2}{1 + m} - a_1} = f'(\xi_1), \frac{f(a_2) - f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2)}{a_2 - \frac{a_1 + ma_2}{1 + m}} = f'(\xi_2)##.

We also have that ##f'## is a strictly decreasing function in ##[a_1, a_2]## as ##f'' \lt 0## in ##(0,+\infty)##.
Now, because ##\xi_1 \lt \xi_2## we have ##f'(\xi_1) \gt f'(\xi_2)##, so

##\frac{f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2) - f(a_1)}{\frac{m(a_2 - a_1)}{1 + m}} \gt \frac{f(a_2) - f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2)}{\frac{a_2 - a_1}{1 + m}}##.

Putting ##\frac{1}{1 + m} = m_1##, ##\frac{m}{1 + m} = m_2## so ##m_1, m_2 \in \mathbb{R}_+## and ##m_1 + m_2 = 1## are satisfied, we have

##\frac{\ln{(m_1a_1 + m_2a_2)} - \ln{a_1}}{m_2} \gt \frac{\ln{a_2} - \ln{(m_1a_1 + m_2a_2)}}{m_1}## so
##m_1\ln{(m_1a_1 + m_2a_2)} - m_1\ln{a_1} \gt m_2\ln{a_2} - m_2\ln{(m_1a_1 + m_2a_2)}##,

so ##(m_1 + m_2) \ln{(m_1a_1 + m_2a_2)} \gt m_1\ln{a_1} + m_2\ln{a_2}## hence

##\ln{(m_1a_1 + m_2a_2)} \gt \ln{(a_1^{m_1}a_2^{m_2})}##, so finally

##m_1a_1 + m_2a_2 \gt a_1^{m_1}a_2^{m_2}## Q.E.D.

##(^*)##: Although it could be that ##a_1 = a_2##, this would lead to an equality in the expression to be proved, but only the strict inequality is asked here, so we implicitly assume ##a_1 \neq a_2##. Also, if ##m_1 = 0## or ##m_2 = 0##, we again end up with an equality, so we implicitly assume ##m_1, m_2 \neq 0##.
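The final inequality can also be spot-checked directly by random sampling (my own addition, an illustration rather than part of the proof; the sampling ranges are arbitrary choices):

```python
import random

# Direct numerical spot-check (editor's addition) of the weighted AM-GM
# inequality proved above: a1^m1 * a2^m2 < m1*a1 + m2*a2 whenever
# a1 != a2 and m1, m2 in (0, 1) with m1 + m2 = 1.
random.seed(1)
for _ in range(10000):
    a1 = random.uniform(0.01, 100.0)
    a2 = random.uniform(0.01, 100.0)
    if abs(a1 - a2) < 1e-9:
        continue  # equality case, excluded by the problem's assumptions
    m1 = random.uniform(1e-6, 1 - 1e-6)
    m2 = 1.0 - m1
    assert a1 ** m1 * a2 ** m2 < m1 * a1 + m2 * a2
print("strict inequality holds on all sampled cases")
```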
 
  • #85
QuantumQuest said:
Yes, I see what you mean but in order for other people to understand it, I think that it is best to show how to utilize the derivatives and the appropriate theorem in order to reach the point where ##\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)##.
The ##\ln(x)## function is continuous, with ##d \ln (x) / dx = 1/x## positive for ##x > 0##, and ##d^2 \ln (x) / dx^2 = - 1/x^2## negative for ##x > 0##. So the derivative is monotonically decreasing, the function curves down, and as a consequence the line segment (chord) between any two points on the graph (for ##x > 0##) lies below or on the graph - see the attached figure. Specifically, for any point ##a_1 < x < a_2## the height of the chord is less than the height of ##\ln(x)## (they are equal at ##x=a_1## and ##x=a_2##). Therefore:

##
\ln (m_1 a_1 + (1-m_1) a_2) \geq m_1 \ln (a_1) + (1-m_1) \ln (a_2)
##

for ##0 \leq m_1 \leq 1##.
 

Attachments

  • convex.jpg
  • Like
Likes QuantumQuest
  • #86
Question 9 :

Claim 1: ##f(x)## is one-one, i.e., ##f(x)=f(y)\Rightarrow x=y##.
Proof: $$f(x)=f(y)$$
$$\Rightarrow f(f(x))=f(f(y))$$
$$\Rightarrow f(f(f(x)))=f(f(f(y)))$$
$$\Rightarrow x=y$$
So, ##f(x)## is one-one.

Claim 2: Since ##f(x)## is continuous in the domain ##[0,1]##, so ##f(x)## is either strictly increasing or strictly decreasing within the interval.
Proof: Suppose ##f(x)## is a concave downward function. Then there will be a maximum within the interval, viz. ##f(c)## for some value ##c## with ##a(=0)<c<b(=1)##. As ##f(x)## is continuous on ##[a,c]## and ##[c,b]##, with ##f(c)>f(a)## and ##f(c)>f(b)##, by the intermediate value theorem ##\exists\, x_1, x_2## with ##a<x_1<c## and ##c<x_2<b## such that ##f(x_1)=f(x_2)##, which is a contradiction since ##f(x)## is one-one.

It can be similarly proved that ##f(x)## cannot be a concave upward function.

Claim 3: ##f(x)## is strictly increasing.
Proof: If ##f(x)## is strictly decreasing, then for
$$y>x \Rightarrow f(y)<f(x)$$
$$\Rightarrow f(f(y))>f(f(x))$$
$$\Rightarrow f(f(f(y)))<f(f(f(x))) \Rightarrow y<x$$
a contradiction.
So, ##f(x)## is strictly increasing.

Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)\ne x##,
Then either ##f(x)>x## or ##f(x)<x##.

If
\begin{equation} f(x)>x \end{equation}
\begin{equation} \Rightarrow f(f(x))>f(x) \end{equation}
\begin{equation} \Rightarrow f(f(f(x)))>f(f(x)) \end{equation}
\begin{equation} \Rightarrow x>f(f(x)) \end{equation}
From (1), (2) and (4)
\begin{equation} x>f(f(x))>f(x)>x \end{equation}
which is impossible.

Similarly,
If
\begin{equation} f(x)<x \end{equation}
\begin{equation} \Rightarrow f(f(x))<f(x) \end{equation}
\begin{equation} \Rightarrow f(f(f(x)))<f(f(x)) \end{equation}
\begin{equation} \Rightarrow x<f(f(x)) \end{equation}
From (5), (6) and (8)
\begin{equation} x<f(f(x))<f(x)<x \end{equation}
which is impossible.
Hence,
$$f(x)=x$$

As the proof is general, the result is also true for ##f:\mathfrak {R} \rightarrow \mathfrak {R}##.
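A small illustration of the drift step in Claim 4 (my own example, not from the posts): take the strictly increasing ##f(x)=\sqrt{x}## on ##[0,1]##, which satisfies ##f(x)>x## on ##(0,1)##, and watch the iterates climb, so ##f(f(f(x)))=x## cannot hold there:

```python
import math

# Editor's illustration of the drift argument in Claim 4: if f is strictly
# increasing and f(x) > x at some point, then applying f repeatedly only
# pushes the value up, so f(f(f(x))) > x there and f(f(f(x))) = x fails.
# Here f(x) = sqrt(x), strictly increasing on [0, 1] with f(x) > x on (0, 1).
f = math.sqrt
for x in [0.1, 0.3, 0.5, 0.9]:
    chain = [x, f(x), f(f(x)), f(f(f(x)))]
    assert chain == sorted(chain) and len(set(chain)) == 4  # strictly climbing
    assert f(f(f(x))) != x
print("f(f(f(x))) > x wherever f(x) > x, matching the contradiction chain")
```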
 
  • #87
It is given that $$ f \left( f \left( f(x) \right) \right) = x$$ for all ##x \in [0,1]##. That "for all ##x \in [0,1]##" part is very important.

Let's assume that ##f(x) \neq x## (that is, ##f## is not the identity function). Then ##f(x)## is either greater than or less than ##x## on some sub-interval of ##[0,1]##. Suppose ##f(x) \gt x## there, and let's assume that ##f## is increasing (there is no possibility of it being a constant function; it will either increase or decrease on some sub-interval, the same sub-interval on which it is greater/less than ##x##). Then $$ f \left( f(x) \right) \gt f(x) \gt x \implies f \left( f(x) \right) \gt x $$ $$ f \left( f \left( f(x) \right) \right) \gt f \left( f(x) \right) \implies x \gt f \left( f(x) \right) $$ That is a contradiction. Now, I said that ##f(x)## is greater than ##x## only on some sub-interval, but that's enough, because ##f \left( f \left( f(x) \right) \right) = x## holds for all ##x \in [0,1]##, so a contradiction at even one ##x## suffices.

Similarly, we can reach a contradiction if we assume ##f(x) \lt x## on some sub-interval, or if we assume ##f## to be decreasing; if it's decreasing the inequality signs flip.
 
  • #88
Lament said:
Claim 2: Since ##f(x)## is continuous in the domain ##[0,1]##, so ##f(x)## is either strictly increasing or strictly decreasing within the interval.
Proof: Suppose ##f(x)## is a concave downward function. Then there will be a maximum within the interval, viz. ##f(c)## for some value ##c## with ##a(=0)<c<b(=1)##. As ##f(x)## is continuous on ##[a,c]## and ##[c,b]##, with ##f(c)>f(a)## and ##f(c)>f(b)##, by the intermediate value theorem ##\exists\, x_1, x_2## with ##a<x_1<c## and ##c<x_2<b## such that ##f(x_1)=f(x_2)##, which is a contradiction since ##f(x)## is one-one.

It can be similarly proved that ##f(x)## cannot be a concave upward function.
You have the right idea here, but the wording is a little off. Your argument that having a maximum or a minimum contradicts the injectivity of ##f## is good, but concavity doesn't have much to do with it. Being "concave down" does not mean that a maximum is achieved on the interior (e.g. the square root function is concave down, but still strictly increasing on its domain). The last detail that you should check for this argument to be airtight is that a continuous function defined on an interval is either monotonic or has (interior) local extrema.

Lament said:
Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)\ne x##,
Then either ##f(x)>x## or ##f(x)<x##.

Why are the only three possible cases that ##f(x)=x, f(x)>x##, and ##f(x)<x## (for all ##x##)? Why can't it be mixed?
 
  • #89
@Adesh Please fix your LaTeX spacing - currently a part of your solution extends very far to the right and is hard to read.

Anyway, you suppose monotonicity and that ##x<f(x)## in some sub-interval ##J##. This doesn't imply that ##f(x)<f(f(x))## though, because ##f(x)## might not be in ##J##.

Adesh said:
Now, I said that ##f(x)## is greater than ##f(x) =x## for some sub-interval, but that’s enough because $$f \left(
f \left( f(x) \right ) \right) $$ is true for all ##x \in [0,1]## and now that we have proved it at least for some ##x## therefore it’s enough.

I don't understand this part. Why is it enough for ##f(f(f(x)))## to be equal to ##x## for some ##x##, as opposed to all ##x##?

Edit: Sorry, my last sentence should read "I don't understand this part. Why is it enough for ##f(x)## to be equal to ##x## for some ##x##, as opposed to all ##x##?"
 
  • #90
Infrared said:
I don't understand this part. Why is it enough for ##f(f(f(x)))## to be equal to ##x## for some ##x##, as opposed to all ##x##?
Actually I meant something else. I wanted to say that I proved the contradiction on some interval ##J##; in other cases one would have to prove it for all ##x##, but proving it for even a single ##x## suffices here, because the condition in the question holds for all ##x##.
 
