Math Challenge - April 2020

  • #51
Adesh said:
Sir, I tried your hint. I Taylor expanded everything but, you know... I couldn't get anything.

So, let me help a little more. What you're interested in is putting the third derivative into the expansion. From there, you can do some algebra, also taking the given expression into account.
 
  • #52
QuantumQuest said:
Well, to be honest, I don't quite understand. If you have an equality that holds true, then either you take the limit of both sides or you don't. I don't really know whether you wanted to write something else and wrote it like this instead.
Let's consider this example (no relation to problem 7) $$ \lim_{h\to 0} \frac{ f(a+h) - f(a)}{h} = \frac{f''(a+h)}{h}$$ I think your argument is that the equality above holds only if I take the limit on both sides. I have something like this in mind: $$ \lim_{h\to 0} \frac{f(a+h) -f(a)}{h} = \frac{ f''(a+h)}{h} \\
\lim_{h\to 0} \frac{f(a+h) - f(a)}{h} - \frac{ f''(a+h)}{h} =0$$ Now, if we use the law that "the limit of a sum is the sum of the limits", then we write $$ f'(a) -\lim_{h\to 0} \frac{f''(a+h)}{h} = 0 $$
 
  • #53
Adesh said:
Question 13 a) (after the error pointed out by @archaic; he deserves the credit)

$$ \lim_{x\to \pi/2} \frac{x-\pi/2}{\sqrt{1-\sin x}} $$ Let ##y= x- \pi/2##: $$ \lim_{y\to 0} \frac{y}{\sqrt{1- \sin (y+\pi/2)}} = \lim_{y\to 0} \frac{y}{\sqrt{1- \cos y}} = \lim_{y\to 0} \frac{y}{\sqrt{2 \sin^2 (y/2)}} = \frac{1}{\sqrt 2}\lim_{y\to 0} \frac{y}{\sqrt{\sin^2 (y/2)}} $$
Now, let's note a crucial fact: ##\sqrt{x^2} = |x|##. We cannot simply write ##\sqrt{x^2} = x##, because that breaks down when ##x## is negative, so when ##x## can take both positive and negative values we must use the absolute value.
Coming back to our limit: $$ \frac{1}{\sqrt 2} \lim_{y\to 0} \frac{y}{|\sin (y/2)|} $$
$$ \frac{1}{\sqrt 2} \lim_{y\to 0^-} \frac{y/(y/2)}{-\sin (y/2)/(y/2)} = -\sqrt 2 \quad\text{and}\quad \frac{1}{\sqrt 2} \lim_{y\to 0^+} \frac{y/(y/2)}{\sin (y/2)/(y/2)} = \sqrt 2 $$
Hence, the limit doesn't exist.

Well done @Adesh.
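For anyone who wants a quick numerical sanity check of the two one-sided limits, here is a minimal Python sketch (the helper function g is my own choice, not from the thread):

```python
import math

# Evaluate g(x) = (x - pi/2) / sqrt(1 - sin x) on both sides of pi/2.
def g(x):
    return (x - math.pi / 2) / math.sqrt(1 - math.sin(x))

for h in (1e-2, 1e-4, 1e-6):
    left = g(math.pi / 2 - h)   # approach pi/2 from below
    right = g(math.pi / 2 + h)  # approach pi/2 from above
    print(f"h={h:.0e}  left={left:.5f}  right={right:.5f}")

# The left values tend to -sqrt(2) ≈ -1.41421 and the right values to
# +sqrt(2), so the two-sided limit does not exist.
```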
 
  • #54
Adesh said:
Let's consider this example (no relation to problem 7) $$ \lim_{h\to 0} \frac{ f(a+h) - f(a)}{h} = \frac{f''(a+h)}{h}$$ I think your argument is that the equality above holds only if I take the limit on both sides. I have something like this in mind: $$ \lim_{h\to 0} \frac{f(a+h) -f(a)}{h} = \frac{ f''(a+h)}{h} \\
\lim_{h\to 0} \frac{f(a+h) - f(a)}{h} - \frac{ f''(a+h)}{h} =0$$ Now, if we use the law that "the limit of a sum is the sum of the limits", then we write $$ f'(a) -\lim_{h\to 0} \frac{f''(a+h)}{h} = 0 $$

No. In the example you give here there is already a limit on the LHS. What I said was about having an equality without limits, as was the case in your solution for question ##7##, where you took the limit of the left side only.
 
  • #55
QuantumQuest said:
No. In the example you give here there is already a limit on the LHS. What I said was about having an equality without limits, as was the case in your solution for question ##7##, where you took the limit of the left side only.
Are you talking about this step
Adesh said:
$$\lim_{h\to 0} \frac{f'(a+h) - f'(a)}{h} = f''(a +\theta h) + \frac{\theta h}{2} f'''(a +\theta h) \\
f''(a) = f''(a+ \theta h) + \frac{\theta h}{2} f'''(a + \theta h)$$
Well, it is like this $$\lim_{h\to 0} \frac{f'(a+h) - f'(a) }{h} = f''(a +\theta h) + \frac{\theta h}{2} f'''(a +\theta h) \\
f''(a) = \lim_{h\to 0} f''(a+ \theta h) +\frac{\theta h}{2} f'''(a + \theta h)$$
 
  • #56
Adesh said:
Are you talking about this step

Well, it is like this $$\lim_{h\to 0} \frac{f'(a+h) - f'(a) }{h} = f''(a +\theta h) + \frac{\theta h}{2} f'''(a +\theta h) \\
f''(a) = \lim_{h\to 0} f''(a+ \theta h) +\frac{\theta h}{2} f'''(a + \theta h)$$

So, in fact, you're taking the limits of both sides in the expression ##\frac{f'(a+h) - f'(a)}{h} = f''(a+\theta h) + \frac{\theta h}{2} f'''(a+ \theta h)## and not the way you wrote it in your solution to question ##7##. That's what I'm talking about.
 
  • #57
QuantumQuest said:
So, in fact, you're taking both limits in the expression ##f'(a+h) = f'(a) + h f''(a+ \theta h) + \frac{\theta h^2}{2} f'''(a +\theta h) \\ \frac{f'(a+h) - f'(a)}{h} = f''(a+\theta h) + \frac{\theta h}{2} f'''(a+ \theta h)## and not the way you wrote it in your solution to ##7##. That's what I'm talking about.
Yes, I thought writing the limit before the whole equation meant the limit was applied to everything, but as I have seen it caused some ambiguity. I apologize for the lack of clarity; I really meant the limits to be on both sides.
 
  • #58
Adesh said:
Yes, I thought writing the limit before the whole equation meant the limit was applied to everything, but as I have seen it caused some ambiguity. I apologize for the lack of clarity; I really meant the limits to be on both sides.

See my post #56 again; there was a mistake when I copied the expression, which I have corrected.
 
  • #59
QuantumQuest said:
See my post #56 again; there was a mistake when I copied the expression, which I have corrected.
Sorry but I’m unable to spot the mistake. Please help me in seeing it.
 
  • #60
Adesh said:
Sorry but I’m unable to spot the mistake. Please help me in seeing it.

I mean that I made a mistake in copying the expression. The one now written in post #56 is the one I'm talking about. In it you took the limit of the left-hand side only, while you have to take the limits of both sides, but as you answered in post #57 this is understood, so it's OK. However, even taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
 
  • #61
QuantumQuest said:
I mean that I made a mistake in copying the expression. The one now written in post #56 is the one I'm talking about. In it you took the limit of the left-hand side only, while you have to take the limits of both sides, but as you answered in post #57 this is understood, so it's OK. However, even taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
Was my explanation of why I took only the left-side limit (I can do that if we take everything to one side and then use the rule "the limit of a sum is equal to the sum of the limits") unsatisfactory?
 
  • #62
Adesh said:
Yes, I thought writing the limit before the whole equation meant the limit was applied to everything, but as I have seen it caused some ambiguity. I apologize for the lack of clarity; I really meant the limits to be on both sides.

I think you understand my point about taking both limits by now. But, again, even taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
 
  • #63
QuantumQuest said:
I think you understand my point about taking both limits by now. But, again, even taking both limits into account, and since there are further steps that I don't understand, I don't know where this solution leads.
Am I allowed to explain my solution or do I need to come up with a new one?
 
  • #64
Adesh said:
Am I allowed to explain my solution or do I need to come up with a new one?

It's up to you, but if you want to keep your current solution, you must make the appropriate corrections, present it in a more understandable way and, of course, reach the correct result. Alternatively, I'd recommend taking my hint into account and working that way, which leads to a nice solution.
 
  • #65
Question 7:
This solution was reached only thanks to the hint given by @QuantumQuest.

By Taylor Expansion:
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!} f''(a) + \frac{h^3}{3!} f'''(a) +... $$
From the expression which is given we can write $$ f(a+h) = f(a+h) - \frac{h^2}{2!} f''(a +\theta h) + \frac{h^2}{2!} f''(a) \frac{h^3}{3!} + f'''(a) +... $$ As ##h## goes to zero we can ignore higher terms, hence we can write
$$\frac{h^2}{2!} \left( f''(a+\theta h) - f''(a)\right) = \frac{h^3}{3!} f'''(a) \\
\frac{6}{2} \left( \frac{f''(a+\theta h) - f''(a)}{\theta h} \right) = \frac{f'''(a)}{\theta} $$
In the limit as ##h## goes to zero we have $$ 3 f'''(a) = \frac{f'''(a)}{\theta} \\
3 \theta = 1 \\
\theta = 1/3 $$
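A quick numerical illustration of ##\theta \to 1/3## (a minimal sketch, assuming the concrete choice ##f(x)=e^x## and ##a=0##, which is my own and not part of the posted proof):

```python
import math

# With f(x) = e^x and a = 0, the defining relation
#   f(a+h) = f(a) + h f'(a) + (h^2/2) f''(a + theta*h)
# becomes e^(theta*h) = 2(e^h - 1 - h)/h^2, so
#   theta = ln(2(e^h - 1 - h)/h^2) / h.
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    theta = math.log(2 * (math.expm1(h) - h) / h**2) / h
    print(f"h={h:.0e}  theta={theta:.6f}")
# The printed values approach 0.333333..., consistent with theta -> 1/3.
```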
 
  • #66
Problem 6:

As ##\ln (x)## is a convex up function we have:

##
\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)
##

Taking the exponential, we obtain:

##
m_1 a_1 + m_2 a_2 \geq a_1^{m_1} a_2^{m_2} .
##

Equality holds if ##m_1 = 0## or ##m_2 = 0##. Also, equality holds if ##a_1 = a_2##.
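A quick numerical spot-check of this weighted AM-GM inequality (illustrative only; the random sample values below are my own, not from the problem):

```python
import random

# Check a1^m1 * a2^m2 <= m1*a1 + m2*a2 for a few random positive inputs
# with m1 + m2 = 1.
random.seed(0)
for _ in range(5):
    a1, a2 = random.uniform(0.1, 10), random.uniform(0.1, 10)
    m1 = random.uniform(0, 1)
    m2 = 1 - m1
    lhs = a1**m1 * a2**m2
    rhs = m1 * a1 + m2 * a2
    print(f"a1={a1:.3f} a2={a2:.3f} m1={m1:.3f}  {lhs:.4f} <= {rhs:.4f}: {lhs <= rhs}")
```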

 
  • #67
fresh_42 said:
I assume that means that y(x)=x is the solution.

Have you cross checked this expression? Mine is different.

Edit: This could either mean that there is more than one solution or one of us is wrong. I checked your result and couldn't retrieve the differential equation, but I'm not sure whether I didn't make a mistake. Your shift by 1/2 makes a big mess in the calculations.
Hi fresh_42,
I simplified my expression for ##y(0)=2## to ##y=x + \frac{1}{(1-\frac{1}{2} e^x)}##. My solution gives $$
y(0)=0\\
y=x\\
y(0)=1\\
y=x+1\\
y(0)=2\\
y=x + \frac{1}{(1-\frac{1}{2} e^x)}
$$ I cross-checked these solutions and they check out.
 
  • #68
Adesh said:
Question 7:
This solution was reached only thanks to the hint given by @QuantumQuest.

By Taylor Expansion:
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!} f''(a) + \frac{h^3}{3!} f'''(a) +... $$
From the expression which is given we can write $$ f(a+h) = f(a+h) - \frac{h^2}{2!} f''(a +\theta h) + \frac{h^2}{2!} f''(a) \frac{h^3}{3!} + f'''(a) +... $$ As ##h## goes to zero we can ignore higher terms, hence we can write
$$\frac{h^2}{2!} \left( f''(a+\theta h) - f''(a)\right) = \frac{h^3}{3!} f'''(a) \\
\frac{6}{2} \left( \frac{f''(a+\theta h) - f''(a)}{\theta h} \right) = \frac{f'''(a)}{\theta} $$
In the limit as ##h## goes to zero we have $$ 3 f'''(a) = \frac{f'''(a)}{\theta} \\
3 \theta = 1 \\
\theta = 1/3 $$

The solution is essentially correct. Just two comments. First, you must correct the typo in the addition and the multiplication at the end of the second expression so that it reads correctly for the people who will read your solution. Second, it's best to write the expansion like this

##f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!} f''(a) + \frac{h^3}{3!} f'''(a + \theta_1 h) ##

since what is given is that ##f## is a three-times differentiable function. In the limit ##h \rightarrow 0##, ##\theta_1 h## vanishes.
 
  • #69
julian said:
Problem 6:

As ##\ln (x)## is a convex up function we have:

##
\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)
##

Taking the exponential, we obtain:

##
m_1 a_1 + m_2 a_2 \geq a_1^{m_1} a_2^{m_2} .
##

Equality holds if ##m_1 = 0## or ##m_2 = 0##. Also, equality holds if ##a_1 = a_2##.


Yes, I see what you mean, but for other people to understand it, I think it is best to show how to use the derivatives and the appropriate theorem to reach the point where ##\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)##.
 
  • #70
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=1+\sum_{\substack{k=1 \\ \text{step }2}}^{n-1}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)$$
I am going to prove that the parenthesis is always positive:
$$\begin{align*}
x^k\geq |x|^{k+1}&\Leftrightarrow\frac{x^k}{k!}>\frac{x^k}{(k+1)!}\geq\frac{|x|^{k+1}}{(k+1)!} \\
&\Leftrightarrow\frac{x^k}{k!}>\frac{|x|^{k+1}}{(k+1)!}\\
&\Leftrightarrow\frac{x^k}{k!}>\frac{x^{k+1}}{(k+1)!}>-\frac{x^k}{k!}\\
&\Leftrightarrow-\frac{x^k}{k!}<-\frac{x^{k+1}}{(k+1)!}<\frac{x^k}{k!}\\
&\Leftrightarrow0<\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}>0## for ##x\in\mathbb{R}##. From this I can say that ##p_n(x)## is a bijection, since it is strictly increasing, with no absolute/local maximum or minimum. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
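A numerical illustration (not a proof) of the claimed root counts, using numpy.roots on the partial sums ##p_n##; purely a sanity check:

```python
import numpy as np
from math import factorial

# Count (approximately) the real roots of p_n(x) = sum_{k=0}^{n} x^k / k!.
for n in range(1, 9):
    # numpy.roots expects coefficients from the highest degree down
    coeffs = [1 / factorial(k) for k in range(n, -1, -1)]
    roots = np.roots(coeffs)
    real_roots = [r.real for r in roots if abs(r.imag) < 1e-8]
    print(f"n={n}: {len(real_roots)} real root(s)")
# Prints one real root for odd n and none for even n, matching the claim.
```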
 
  • #71
archaic said:
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=1+\sum_{\substack{k=1 \\ \text{step }2}}^{n-1}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)$$
I am going to prove that the parenthesis is always positive:
$$\begin{align*}
x^k\geq |x|^{k+1}&\Leftrightarrow\frac{x^k}{k!}>\frac{x^k}{(k+1)!}\geq\frac{|x|^{k+1}}{(k+1)!} \\
&\Leftrightarrow\frac{x^k}{k!}>\frac{|x|^{k+1}}{(k+1)!}\\
&\Leftrightarrow\frac{x^k}{k!}>\frac{x^{k+1}}{(k+1)!}>-\frac{x^k}{k!}\\
&\Leftrightarrow-\frac{x^k}{k!}<-\frac{x^{k+1}}{(k+1)!}<\frac{x^k}{k!}\\
&\Leftrightarrow0<\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}>0## for ##x\in\mathbb{R}##. From this I can say that ##p_n(x)## is a bijection, since it is strictly increasing, with no absolute/local maximum or minimum. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
I am not exactly sure, but maybe I can use a recursive algorithm. I'll think about it later.
 
  • #72
@archaic If ##n## is even, then ##p_n## achieves a minimum at some point ##x_0##. Think about how to use the fact that ##p_n'(x_0)=0##.
 
  • #73
Infrared said:
@archaic If ##n## is even, then ##p_n## achieves a minimum at some point ##x_0##. Think about how to use the fact that ##p_n'(x_0)=0##.
I think I had thought of that, but I didn't know how to prove the uniqueness of the minimum.
 
  • #74
You don't need uniqueness. If ##x_0## is any point where ##p_n## achieves its minimum, then it's enough to show that ##p_n(x_0)>0##

But uniqueness does follow from ##p_n'(x_0)=p_{n-1}(x_0)=0## since you know that ##p_{\text{odd}}## only has one real root.
 
  • #75
Infrared said:
But uniqueness does follow from ##p_n'(x_0)=p_{n-1}(x_0)=0## since you know that ##p_{\text{odd}}## only has one real root.
What? I do? Do you mean that I am allowed to say "since we know that ##p_{odd}(x)## has only one real root"?
I probably am misunderstanding you, since an odd polynomial of degree ##n## has up to ##n## real roots.
 
  • #76
By induction you can assume that ##p_{2n-1}## has exactly one root. Then arguing like this shows that ##p_{2n}## has no roots, which by your post shows that ##p_{2n+1}## has exactly one root, etc.
 
  • #77
Infrared said:
By induction you can assume that ##p_{2n-1}## has exactly one root. Then arguing like this shows that ##p_{2n}## has no roots, which by your post shows that ##p_{2n+1}## has exactly one root, etc.
But, to argue like this, I should have already found out that ##p_{2n}(x)\geq0##, no (*)? Since assuming that ##p_{2n-1}## has just one root only tells me that ##p_{2n}## has just one extremum.
I realized that my second-to-last post made no sense because of (*). Maybe that's why I tried to prove that ##p_{2n}(x)\geq0##.
 
Last edited:
  • #78
Yes, you need to show that if ##x_0## is an extremum for ##p_{2n}##, then ##p_{2n}(x_0)>0##. Think about how to use ##p_{2n}'(x_0)=p_{2n-1}(x_0)=0##.
 
  • #79
Infrared said:
Yes, you need to show that if ##x_0## is an extremum for ##p_{2n}##, then ##p_{2n}(x_0)>0##. Think about how to use ##p_{2n}'(x_0)=p_{2n-1}(x_0)=0##.
I tried supposing that ##p_n(x)## and ##p_{n-2}(x)## have only one root, which lets me say that ##p_{n-1}(x)\geq 0##. But I can't really say much about ##p_{n+1}(x)## (it might be negative in some interval) or ##p_{n-3}(x)##, which would be positive, though possibly wavy.
If I only suppose that ##p_n(x)## has a minimum, then ##p_{n-1}(x)## might also be squiggly.
 
  • #80
Do you want another hint? I think if I say much more, I might give away the solution.
 
  • #81
fresh_42 said:
6. If ##a_1,a_2 > 0## and ##m_1,m_2 ≥ 0## with ##m_1 + m_2 = 1## show that ##a_1^{m_1}a_2^{m_2} < m_1 a_1 + m_2 a_2##. (QQ)

I believe that the question inadvertently missed some conditions. With the specified criteria, it is possible for LHS of the inequality to be equal to the RHS. For example, if ##a_1 = a_2 = a##, we get ##a_1^{m_1}a_2^{m_2} = a^{m_1}a^{m_2} = a^{m_1+m_2} = a##, while the RHS of the question's inequality becomes ##m_1 a + (1-m_1) a = a##, so the inequality becomes an equality. Another example is when ##m_1 = 0##. Here the LHS becomes ##a_1^0 a_2^1 = a_2## and the RHS becomes ##0 \times a_1 + 1 \times a_2 = a_2##, again giving an equality. Therefore, I prove the inequality with the following additional or modified assumptions: ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.

Without loss of generality, let us assume that ##a_1 \lt a_2##. The proof for the opposite case, ##a_1 \gt a_2## will be similar but with the variable ##x## corresponding to ##\frac {a_2}{a_1}## instead of ##\frac {a_1}{a_2}##.

The given inequality is equivalent to ##\dfrac {a_1^{m_1}a_2^{1-m_1}} {m_1 a_1 + (1-m_1) a_2} < 1##,
or equivalently, obtained by dividing both numerator and denominator by ##a_2 = a_2^{m_1} a_2^{1-m_1}##,
$$\dfrac { \left( \dfrac{a_1}{a_2} \right)^{m_1}} {m_1 \left( \dfrac{a_1}{a_2} \right) + (1 - m_1)} < 1
$$

For a fixed value of ##m_1##, the LHS of the above inequality can be written as a function of ##x=\frac{a_1}{a_2}##, so we need to prove that ##f(x) = \dfrac {x ^ {p}} {x p + (1-p)} < 1## for ##x \in (0, 1)## and ##p = m_1 \in (0, 1)##.
Note that since ##a_1 \lt a_2 \text{ and } a_1, a_2> 0, \text{ we must have } x \in (0,1)##.

We now prove that ##f(x)## is a strictly increasing function of ##x## when ##0 \lt x \lt 1##.
$$
f'(x) = \dfrac {px^{p-1}}{x p + (1-p)} - \dfrac {px^p}{(x p + (1-p))^2} = \dfrac {px^{p-1}} {(x p + (1-p))^2} (px + (1-p) - x)= \\
\dfrac {px^{p-1}(1-p) (1-x)} {(x p + (1-p))^2}
$$

It is obvious that ##f'(x) \gt 0## for ##x \in (0, 1), p \in (0, 1)##, since when those conditions are met, both the numerator and the denominator of the above expression for ##f'(x)## are positive. We also notice that ##f(1) = 1##. Since ##f(x)## is strictly increasing for ##0 \lt x \lt 1## and reaches a value of 1 when ##x=1##, it follows that ##f(x) \lt 1 \, \forall x \in (0, 1)##. Hence, the question's inequality is proved subject to the additional conditions mentioned earlier.
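A symbolic spot-check of the derivative computation above, as a small SymPy sketch (my own verification, not part of the original post):

```python
import sympy as sp

# With f(x) = x^p / (x*p + (1 - p)), the claim is
#   f'(x) = p * x^(p-1) * (1-p) * (1-x) / (x*p + (1-p))^2.
x, p = sp.symbols('x p', positive=True)
f = x**p / (x*p + (1 - p))
claimed = p * x**(p - 1) * (1 - p) * (1 - x) / (x*p + (1 - p))**2
residual = sp.simplify(sp.diff(f, x) - claimed)
print(residual)  # expected to print 0
```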
 
  • #82
@Infrared I think the only possible way to prove that is by contradiction. I'm in need of some discussion (please, no hints); @mathwonk said something but I couldn't decode what he said.
 
  • #83
Not anonymous said:
I believe that the question inadvertently missed some conditions. With the specified criteria, it is possible for LHS of the inequality to be equal to the RHS. For example, if ##a_1 = a_2 = a##, we get ##a_1^{m_1}a_2^{m_2} = a^{m_1}a^{m_2} = a^{m_1+m_2} = a##, while the RHS of the question's inequality becomes ##m_1 a + (1-m_1) a = a##, so the inequality becomes an equality. Another example is when ##m_1 = 0##. Here the LHS becomes ##a_1^0 a_2^1 = a_2## and the RHS becomes ##0 \times a_1 + 1 \times a_2 = a_2##, again giving an equality. Therefore, I prove the inequality with the following additional or modified assumptions: ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.

Without loss of generality, let us assume that ##a_1 \lt a_2##. The proof for the opposite case, ##a_1 \gt a_2## will be similar but with the variable ##x## corresponding to ##\frac {a_2}{a_1}## instead of ##\frac {a_1}{a_2}##.

The given inequality is equivalent to ##\dfrac {a_1^{m_1}a_2^{1-m_1}} {m_1 a_1 + (1-m_1) a_2} < 1##,
or equivalently, obtained by dividing both numerator and denominator by ##a_2 = a_2^{m_1} a_2^{1-m_1}##,
$$\dfrac { \left( \dfrac{a_1}{a_2} \right)^{m_1}} {m_1 \left( \dfrac{a_1}{a_2} \right) + (1 - m_1)} < 1
$$

For a fixed value of ##m_1##, the LHS of the above inequality can be written as a function of ##x=\frac{a_1}{a_2}##, so we need to prove that ##f(x) = \dfrac {x ^ {p}} {x p + (1-p)} < 1## for ##x \in (0, 1)## and ##p = m_1 \in (0, 1)##.
Note that since ##a_1 \lt a_2 \text{ and } a_1, a_2> 0, \text{ we must have } x \in (0,1)##.

We now prove that ##f(x)## is a strictly increasing function of ##x## when ##0 \lt x \lt 1##.
$$
f'(x) = \dfrac {px^{p-1}}{x p + (1-p)} - \dfrac {px^p}{(x p + (1-p))^2} = \dfrac {px^{p-1}} {(x p + (1-p))^2} (px + (1-p) - x)= \\
\dfrac {px^{p-1}(1-p) (1-x)} {(x p + (1-p))^2}
$$

It is obvious that ##f'(x) \gt 0## for ##x \in (0, 1), p \in (0, 1)##, since when those conditions are met, both the numerator and the denominator of the above expression for ##f'(x)## are positive. We also notice that ##f(1) = 1##. Since ##f(x)## is strictly increasing for ##0 \lt x \lt 1## and reaches a value of 1 when ##x=1##, it follows that ##f(x) \lt 1 \, \forall x \in (0, 1)##. Hence, the question's inequality is proved subject to the additional conditions mentioned earlier.


Well, I was given this exercise as an undergrad some decades ago just as I'm giving it in this challenge thread, i.e., with no modifications. While there are of course the cases you mention, the question is looking for the general case where ##a_1 \neq a_2## and ##m_1, m_2 \neq 0##.
 
  • #84
As Question ##6## has already been solved, I find it advisable to post my own solution as well.

Let's take the function ##f: (0,+\infty) \rightarrow \mathbb{R}## with ##f(x) = \ln{x}##.
We have: ##f'(x) = \frac{1}{x}## and ##f''(x) = -\frac{1}{x^2} \lt 0 \hspace{0.5cm} \forall x \in (0,+\infty)##.

Suppose ##a_1 \lt a_2## ##(^*)##. We apply the Mean Value Theorem on the intervals ##[a_1, \frac{a_1 + ma_2}{1 + m}]## and ##[\frac{a_1 + ma_2}{1 + m}, a_2]##, where ##m \gt 0##. So ##\exists\, \xi_1 \in (a_1, \frac{a_1 + ma_2}{1 + m})##, ##\xi_2 \in (\frac{a_1 + ma_2}{1 + m}, a_2)## such that

##\frac{f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2) - f(a_1)}{\frac{a_1 + ma_2}{1 + m} - a_1} = f'(\xi_1), \frac{f(a_2) - f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2)}{a_2 - \frac{a_1 + ma_2}{1 + m}} = f'(\xi_2)##.

We also have that ##f'## is a strictly decreasing function in ##[a_1, a_2]## as ##f'' \lt 0## in ##(0,+\infty)##.
Now, because ##\xi_1 \lt \xi_2## we have ##f'(\xi_1) \gt f'(\xi_2)##, so

##\frac{f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2) - f(a_1)}{\frac{m(a_2 - a_1)}{1 + m}} \gt \frac{f(a_2) - f(\frac{1}{1 + m}a_1 + \frac{m}{1 + m}a_2)}{\frac{a_2 - a_1}{1 + m}}##.

Putting ##\frac{1}{1 + m} = m_1##, ##\frac{m}{1 + m} = m_2## so ##m_1, m_2 \in \mathbb{R}_+## and ##m_1 + m_2 = 1## are satisfied, we have

##\frac{\ln{(m_1a_1 + m_2a_2)} - \ln{a_1}}{m_2} \gt \frac{\ln{a_2} - \ln{(m_1a_1 + m_2a_2)}}{m_1}## so
##m_1\ln{(m_1a_1 + m_2a_2)} - m_1\ln{a_1} \gt m_2\ln{a_2} - m_2\ln{(m_1a_1 + m_2a_2)}##,

so ##(m_1 + m_2) \ln{(m_1a_1 + m_2a_2)} \gt m_1\ln{a_1} + m_2\ln{a_2}## hence

##\ln{(m_1a_1 + m_2a_2)} \gt \ln{(a_1^{m_1}a_2^{m_2})}##, so finally

##m_1a_1 + m_2a_2 \gt a_1^{m_1}a_2^{m_2}## Q.E.D.

##(^*)##: Although it could be ##a_1 = a_2##, this would lead to an equality in the expression that is asked to be proved, but here only the inequality part is asked, so we implicitly assume that ##a_1 \neq a_2##. Also, if ##m_1 = 0## or ##m_2 = 0##, we again end up with an equality, but here only the inequality part is asked, so we implicitly assume that ##m_1, m_2 \neq 0##.
 
  • #85
QuantumQuest said:
Yes, I see what you mean, but for other people to understand it, I think it is best to show how to use the derivatives and the appropriate theorem to reach the point where ##\ln (m_1 a_1 + m_2 a_2) \geq m_1 \ln (a_1) + m_2 \ln (a_2)##.
The ##\ln(x)## function is continuous, with ##d \ln (x) / dx = 1/x## positive for ##x > 0## and ##d^2 \ln (x) / dx^2 = - 1/x^2## negative for ##x > 0##. So the derivative is monotonically decreasing, the function curves downward, and as a consequence the line segment (chord) between any two points on the graph (for ##x > 0##) lies below or on the graph; see the attached figure. Specifically, for any point ##a_1 < x < a_2## the height of the chord is less than the height of ##\ln(x)## (while they are equal at ##x=a_1## and ##x=a_2##). Therefore:

##
\ln (m_1 a_1 + (1-m_1) a_2) \geq m_1 \ln (a_1) + (1-m_1) \ln (a_2)
##

for ##0 \leq m_1 \leq 1##.
 

Attachments

  • convex.jpg (figure referenced above: the chord between two points on the graph of ##\ln(x)## lies below the graph)
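In case the attachment does not display, a few lines of matplotlib reproduce the idea it illustrates (my own sketch, with arbitrary sample points ##a_1=1## and ##a_2=6##):

```python
import numpy as np
import matplotlib.pyplot as plt

# The chord between two points on y = ln(x) lies below the graph,
# because ln curves downward on (0, infinity).
a1, a2 = 1.0, 6.0
x = np.linspace(0.5, 7, 400)
plt.plot(x, np.log(x), label="ln(x)")
plt.plot([a1, a2], [np.log(a1), np.log(a2)], "--", label="chord")
plt.legend()
plt.show()
```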
  • #86
Question 9 :

Claim 1: ##f(x)## is one-one, i.e., ##f(x)=f(y)\Rightarrow x=y##.
Proof: $$f(x)=f(y)$$
$$\Rightarrow f(f(x))=f(f(y))$$
$$\Rightarrow f(f(f(x)))=f(f(f(y)))$$
$$\Rightarrow x=y$$
So, ##f(x)## is one-one.

Claim 2: Since ##f(x)## is continuous on the domain ##[0,1]##, it is either strictly increasing or strictly decreasing on the interval.
Proof: Suppose ##f(x)## is a concave downward function. Then there will be a maximum within the interval, viz. ##f(c)## for some value ##c## such that ##a(=0)<c<b(=1)##. As ##f(x)## is continuous on ##[a,c]## and ##[c,b]##, and ##f(c)>f(a)## and ##f(c)>f(b)##, by the intermediate value theorem ##\exists\, x_1, x_2## with ##a<x_1<c<x_2<b## and ##f(x_1)=f(x_2)##, which is a contradiction since ##f(x)## is one-one.

It can be similarly proved that ##f(x)## cannot be a concave upward function.

Claim 3: ##f(x)## is strictly increasing.
Proof: If ##f(x)## is strictly decreasing, then for
$$y>x \Rightarrow f(y)<f(x)$$
$$\Rightarrow f(f(y))>f(f(x))$$
$$\Rightarrow f(f(f(y)))<f(f(f(x))) \Rightarrow y<x$$
a contradiction.
So, ##f(x)## is strictly increasing.

Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)\ne x##,
Then either ##f(x)>x## or ##f(x)<x##.

If
\begin{equation} f(x)>x \end{equation}
\begin{equation} \Rightarrow f(f(x))>f(x) \end{equation}
\begin{equation} \Rightarrow f(f(f(x)))>f(f(x)) \end{equation}
\begin{equation} \Rightarrow x>f(f(x)) \end{equation}
From (1), (2) and (4)
\begin{equation} x>f(f(x))>f(x)>x \end{equation}
which is impossible.

Similarly,
If
\begin{equation} f(x)<x \end{equation}
\begin{equation} \Rightarrow f(f(x))<f(x) \end{equation}
\begin{equation} \Rightarrow f(f(f(x)))<f(f(x)) \end{equation}
\begin{equation} \Rightarrow x<f(f(x)) \end{equation}
From (5), (6) and (8)
\begin{equation} x<f(f(x))<f(x)<x \end{equation}
which is impossible.
Hence,
$$f(x)=x$$

As the proof is general, the result is also true for ##f:\mathbb{R} \rightarrow \mathbb{R}##.
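A tiny numerical illustration (not a proof) of the mechanism behind Claim 4: for a strictly increasing ##f## with ##f(x)>x##, iterating ##f## only moves points further up, so ##f(f(f(x)))=x## must fail. The particular ##f(x)=x^{0.8}## below is my own example, not taken from the thread:

```python
# f is strictly increasing on [0, 1] and satisfies f(x) > x for 0 < x < 1.
def f(x):
    return x ** 0.8

x0 = 0.5
print(f(x0), f(f(x0)), f(f(f(x0))))  # each iterate is larger than the last
print(f(f(f(x0))) > x0)              # True, so f(f(f(x))) = x fails at x0
```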
 
  • #87
It is given that $$ f \left( f \left( f(x) \right) \right) = x$$ for all ##x \in [0,1]##. That "for all ##x \in [0,1]##" part is very important.

Let's assume that ##f(x) \neq x## (that is, ##f## is not the identity function). Then ##f(x)## is either greater than or less than ##x## on some sub-interval of ##[0,1]##. Let's assume that ##f## is an increasing function (there is no possibility for it to be a constant function; it will either increase or decrease on some sub-interval, that same sub-interval on which it is greater/less than ##x##) and that on that sub-interval $$ f(x) \gt x. $$ Then $$ f \left( f(x) \right) \gt f(x) \gt x \implies f \left( f(x) \right) \gt x \\
f \left( f \left( f(x) \right) \right) \gt f\left( f(x) \right) \\
\implies x \gt f \left( f(x) \right)$$ That is a contradiction. Now, I said that ##f(x)## is greater than ##x## only on some sub-interval, but that's enough because $$f \left( f \left( f(x) \right) \right) = x$$ holds for all ##x \in [0,1]##, and now that we have proved it at least for some ##x##, it's enough.

Similarly, we can reach a contradiction if we assume ##f(x) \lt x## on some sub-interval, or if we assume ##f## to be decreasing; if it's decreasing, the inequality sign would flip.
 
  • #88
Lament said:
Claim 2: Since ##f(x)## is continuous on the domain ##[0,1]##, it is either strictly increasing or strictly decreasing on the interval.
Proof: Suppose ##f(x)## is a concave downward function. Then there will be a maximum within the interval, viz. ##f(c)## for some value ##c## such that ##a(=0)<c<b(=1)##. As ##f(x)## is continuous on ##[a,c]## and ##[c,b]##, and ##f(c)>f(a)## and ##f(c)>f(b)##, by the intermediate value theorem ##\exists\, x_1, x_2## with ##a<x_1<c<x_2<b## and ##f(x_1)=f(x_2)##, which is a contradiction since ##f(x)## is one-one.

It can be similarly proved that ##f(x)## cannot be a concave upward function.
You have the right idea here, but the wording is a little off. Your argument that having a maximum or a minimum contradicts the injectivity of ##f## is good, but concavity doesn't have much to do with it. Being "concave down" does not mean that a maximum is achieved on the interior (e.g. the square root function is concave down, but still strictly increasing on its domain). The last detail that you should check for this argument to be airtight is that a continuous function defined on an interval is either monotonic or has (interior) local extrema.

Lament said:
Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##
Proof: Suppose ##f(x)\ne x##,
Then either ##f(x)>x## or ##f(x)<x##.

Why are the only three possible cases that ##f(x)=x, f(x)>x##, and ##f(x)<x## (for all ##x##)? Why can't it be mixed?
 
  • #89
@Adesh Please fix your LaTeX spacing; currently a part of your solution extends very far to the right and is hard to read.

Anyway, you suppose monotonicity and that ##x<f(x)## in some sub-interval ##J##. This doesn't imply that ##f(x)<f(f(x))## though, because ##f(x)## might not be in ##J##.

Adesh said:
Now, I said that ##f(x)## is greater than ##x## only on some sub-interval, but that's enough because $$f \left( f \left( f(x) \right) \right) = x$$ holds for all ##x \in [0,1]##, and now that we have proved it at least for some ##x##, it's enough.

I don't understand this part. Why is it enough for ##f(f(f(x)))## to be equal to ##x## for some ##x##, as opposed to all ##x##?

Edit: Sorry, my last sentence should read "I don't understand this part. Why is it enough for ##f(x)## to be equal to ##x## for some ##x##, as opposed to all ##x##?"
 
  • #90
Infrared said:
I don't understand this part. Why is it enough for ##f(f(f(x)))## to be equal to ##x## for some ##x##, as opposed to all ##x##?
Actually I meant something else. I wanted to say that I proved the contradiction on some interval ##J##; otherwise I would have to prove it for all ##x##, but proving it for even a single ##x## suffices, because in the question the condition holds for all ##x##.
 
  • #91
I still don't understand. What if ##x## is not in such an interval ##J##?

And can you address my question about why ##f(f(x))>f(x)##?
 
  • #92
Infrared said:
And can you address my question about why ##f(f(x))>f(x)##?
Okay, you mean that ##f## may not be defined on ##f(x)## because we only made that assumption about ##f## on the sub-interval ##J##?
 
  • #93
Because you're only assuming that ##f## is increasing on the interval ##J##. If ##f(x)## is outside of the interval ##J##, then ##x<f(x)## does not imply ##f(x)<f(f(x))##.

It's not hard to fix this, though. You should say why ##f## must be strictly monotonic on the whole interval, not just on some sub-interval.
 
  • #94
So ##f## is monotonically increasing because of injectivity. Say ##f(x)## intersects the function ##x## at a finite number of points, which you use to define sub-intervals. Then ##f(x)## will alternate between convex down and convex up as you pass from one interval to the next. In a convex down interval ##[a,b]## you arrive at the contradiction that ##c \equiv f(f(f(c))) < c## for any ##c## such that ##a < c < b##. And in a convex up interval ##[p,q]## you arrive at the contradiction that ##r \equiv f(f(f(r))) > r## for any ##r## such that ##p < r < q##.
 
  • #95
julian said:
So ##f## is monotonically increasing because of injectivity.
Or decreasing

julian said:
Say ##f(x)## intersects the function ##x## at a finite number of points, which you use to define sub-intervals.

There don't have to be only finitely many intersections.

julian said:
Then ##f(x)## will alternate between convex down and convex up as you pass from one interval to the next.
Why can't there be mixed concavity on each interval? The points where ##f(x)## changes concavity won't be the same points where ##f(x)=x## in general.

julian said:
In a convex down interval ##[a,b]## you arrive at the contradiction that ##c \equiv f(f(f(c)))<c## for any ##c## such that ##a<c<b##. And in a convex up interval,
I don't see how you are using concavity for this.

@archaic has an almost complete solution.
 
  • #96
Infrared said:
It's not hard to fix this, though. You should say why ##f## must be strictly monotonic on the whole interval, not just on some sub-interval.
If I prove the monotonicity, then how would I ensure that ##f## will be monotone on ##f(x)##? I'm thinking about it; I hope I will come up with something shortly.
 
  • #97
I'm not sure what you mean by "monotone on ##f(x)##", but @Lament gave an argument for why ##f## is (strictly) monotonic on all of ##[0,1]##, with some confusion about concavity. See posts 86 and 88.

You can assume that the range of ##f## lies inside of ##[0,1]## since this must be the case for ##f(f(f(x)))## to be defined.
 
  • #98
Infrared said:
I'm not sure what you mean by "monotone on ##f(x)##", but @Lament gave an argument for why ##f## is (strictly) monotonic on all of ##[0,1]##, with some confusion about concavity. See posts 86 and 88.
Even if ##f## is monotone, how would I go about proving that ##f \circ f## is also monotone?
 
  • #99
If ##f## is increasing, then ##f\circ f## is increasing. If ##f## is decreasing, then ##f\circ f## is increasing.
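Spelled out (a one-line check using only the definition, added for completeness): if ##f## is decreasing, then for ##x<y## $$x < y \;\Rightarrow\; f(x) > f(y) \;\Rightarrow\; f(f(x)) < f(f(y)),$$ so ##f\circ f## is increasing; if ##f## is increasing, both applications preserve the inequality, and ##f\circ f## is again increasing.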
 
  • #100
Infrared said:
If ##f## is increasing, then ##f\circ f## is increasing. If ##f## is decreasing, then ##f\circ f## is increasing.
I will prove that part (the implication that ##x \lt f(x) \implies f(x) \lt f(f(x))##).
 
