Math Challenge - April 2020

The Math Challenge - April 2020 discussion covers various mathematical problems and their solutions, including topics in functional analysis, statistics, differential equations, and number theory. Key problems involve showing the unique continuation of linear operators in normed spaces, testing hypotheses for normally distributed variables, and solving initial value problems for differential equations. Participants also explore properties of continuous functions and polynomials, as well as combinatorial problems related to tiling rectangles. The thread emphasizes the importance of clear, complete solutions and rigorous mathematical reasoning.
  • #121
Adesh said:
I can give an informal proof of why an invertible function would be monotonic.
PROOF: A function being invertible implies it is injective, that is, it passes the horizontal line test: every horizontal line meets the graph at most once. This is possible only if the function keeps on increasing (or keeps on decreasing); if it were to change its nature, say from increasing to decreasing (or vice versa), it would intersect at least one horizontal line at least twice and hence would not be injective.

The claim "an invertible function would be monotonic" is false. A counterexample is ##f:[0,1]\to [0,1]## defined by ##f(x)=x## for ##x\in (0,1)## and ##f(0)=1, f(1)=0##. Continuity is necessary. You don't mention continuity anywhere in your explanation, so I don't think it can be right.
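This counterexample is easy to check numerically. The sketch below (the sample points are an arbitrary choice, used only for illustration) verifies that the map is its own inverse, hence bijective, yet fails to be monotonic:

```python
# Counterexample: an invertible (bijective) but non-monotonic map on [0, 1].
# f swaps the endpoints 0 and 1 and fixes everything in between.
def f(x):
    if x == 0.0:
        return 1.0
    if x == 1.0:
        return 0.0
    return x

# f is its own inverse, hence bijective.
pts = [0.0, 0.25, 0.5, 0.75, 1.0]
assert all(f(f(x)) == x for x in pts)

# But f is not monotonic: it drops from f(0) = 1 to f(0.25) = 0.25, then rises.
assert f(0.0) > f(0.25) and f(0.25) < f(0.5)
```

Of course, this ##f## is discontinuous at the endpoints, which is exactly why continuity is the missing hypothesis.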

Adesh said:
Was my proof of “##f## is invertible” right?
A bit wordy. I think it's fine to just say "##f## is invertible because ##f\circ f## is an inverse function to ##f##."
 
  • #122
Infrared said:
It is not given that ##f## is differentiable. The function ##f(x)=|x|## is minimized at ##0##, but is not differentiable there.
True, I overlooked that. I used a stationary point to define the extremum.
Infrared said:
Even for differentiable functions, I think your MVT argument is flawed- I can't tell how you're applying MVT, but it looks like you're claiming that any function whose derivative has a zero cannot be injective. The function ##f(x)=x^3## is a counterexample to that.
It is supposed to be IVT (a typographical error on my part). What I meant to do was to take an interval around the extremum, if it exists, and then show that since any horizontal line within the range of that interval would intersect the curve at least twice, the injectivity of ##f## makes this impossible; so local extrema can only occur at the end-points of the interval, making the function either strictly monotonically increasing or decreasing.
I wouldn't consider ##f(x)=x^3## at ##x=0## because it is not an extremum even though it is a stationary point.
Infrared said:
It is also not clear how you can make each interval contain only one such extremum, since in general functions can change monotonicity infinitely often in a finite interval, like ##f(x)=x\sin(1/x), f(0)=0## near ##x=0##.
Well, if we take an extremum, then a small neighbourhood of it will contain only that extremum. Yes, it can be done away with. Even if we consider an arbitrary interval that contains an extremum, then again with IVT we can prove that no local extrema are present within the interior of the interval. I thought of having an interval with only one such extremum because then we can eventually conclude that the function is monotonic within each such interval.
Also, regarding your example ##f(x)=x\sin(1/x),\ f(0)=0## near ##x=0##: I didn't think about such badly behaved functions.
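The wild behaviour of this example can be made concrete numerically. The sketch below (grid spacing and interval are arbitrary choices for illustration) samples ##f(x)=x\sin(1/x)## on a fine grid inside a single small interval and counts sign changes of the finite differences; each sign change brackets a genuine local extremum of the continuous function:

```python
import math

def f(x):
    return x * math.sin(1.0 / x) if x != 0 else 0.0

# Sample f on a fine grid inside (0, 0.1) and count sign changes of the
# finite differences: whenever consecutive differences change sign, the
# continuous function has a local extremum between those sample points.
xs = [0.001 + i * 1e-6 for i in range(99001)]  # grid in [0.001, 0.1]
ys = [f(x) for x in xs]
diffs = [b - a for a, b in zip(ys, ys[1:])]
sign_changes = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)

# Hundreds of extrema crowd into this one small interval, and the count
# only grows as the interval's left end approaches 0.
assert sign_changes > 100
```

This is why no partition of ##[0,1]## into intervals each containing a single extremum can exist for such a function.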

Infrared said:
As before, dividing into intervals adds some confusion. Here the problem is less severe because the set where ##x\neq f(x)## is open, so it must contain an interval if it's non-empty, but it's still easier to avoid it: suppose for contradiction that ##f(x)\neq x## for some ##x##. Then either ##x<f(x)## or ##x>f(x)##, and apply your argument to the point ##x##, not to an entire interval.
I understand that the proof is easier when considering point by point, but I do not understand why the interval argument raises confusion: if ##f(x)\neq x## for some ##x##, then the same must hold in a small neighbourhood around that point (##f## is strictly monotonic, after all).

Am I still missing something?
 
  • #123
Question 9
We are given that ##f## is a continuous function satisfying $$ f (f ( f(x) ) ) =x$$ for all ##x \in [0,1]##. Since ##f## undoes the effect of ##f \circ f##, the map ##f \circ f## is the inverse of ##f##. Hence, ##f## is an invertible function.
Any continuous invertible function is always monotonic (an informal proof is given in post #120).

Let’s assume that ##f(x) \neq x## on some sub-interval, that is, ##f## is not the identity function. On that sub-interval ##f## is either greater or smaller than ##x##, and it is continuous and strictly monotonic (increasing or decreasing).

Let’s assume ##f## is greater than ##x## in that sub-interval and is monotonically increasing there: $$ x \lt f(x) \\ f(x) \lt f (f(x)) \implies x \lt f ( f(x)) ~~~~~(1)\\ f(f(x)) \lt f ( f( f (x) ) ) \implies f(f(x)) \lt x$$ The last inequality above contradicts inequality (1). Hence, our assumption was wrong.

We can show similar contradictions in the other cases: ##f## monotonically decreasing, ##f## less than ##x##, and so on.

Let’s do it for ##f## being less than ##x## (in some sub-interval) and monotonically decreasing. $$ f(x) \lt x \\
f ( f(x)) \gt f(x) \\
f( f ( f (x) ) ) \lt f ( f (x) ) \\
x \lt f (f (x) ) \\
f(x) \gt f ( f ( f (x) ) ) \implies f(x) \gt x $$
The last inequality above contradicts our assumption that ##f(x)## is less than ##x## on the considered sub-interval.

These contradictions show that our assumption that ##f## is not the identity function was false.

Hence, ##f## must be the identity function.
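The inequality chain in the increasing case can be watched propagate numerically. The sketch below uses ##f(x)=\sqrt{x}## on ##[0,1]## purely as an illustrative increasing function with ##f(x)>x## on ##(0,1)## (it does not itself satisfy ##f\circ f\circ f=\mathrm{id}##; that is the point):

```python
import math

# Illustration of the contradiction: take an increasing f with f(x) > x
# somewhere, e.g. f(x) = sqrt(x) on [0, 1], and apply f repeatedly.
f = math.sqrt
x0 = 0.25
assert f(x0) > x0              # x < f(x)
assert f(f(x0)) > f(x0)        # applying increasing f preserves the inequality
assert f(f(f(x0))) > f(f(x0))  # ... and again
# So f(f(f(x0))) > x0, and f∘f∘f cannot be the identity for this f.
assert f(f(f(x0))) > x0
```

Any increasing ##f## with ##f(x_0)>x_0## is ruled out the same way, which is exactly the contradiction used in the proof.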
 
  • #124
Lament said:
I wouldn't consider ##f(x)=x^3## at ##x=0## because it is not an extremum even though it is a stationary point.
Yes, this is my point. To me, it looked like your argument was roughly:

"##f## has a local extremum" ##\implies## "##f'(c)=0## for some ##c##" ##\implies## "##f## is not injective"

But ##f(x)=x^3## is a counterexample to the second implication. If this was not the shape of your argument, could you clarify how you were trying to use the fact that ##f'(c)=0##?

Lament said:
Well, if we take an extremum, then a small neighbourhood of it will contain only that extremum. Yes, it can be done away with. Even if we consider an arbitrary interval that contains an extremum, then again with IVT we can prove that no local extrema are present within the interior of the interval. I thought of having an interval with only one such extremum because then we can eventually conclude that the function is monotonic within each such interval.
Also, regarding your example ##f(x)=x\sin(1/x),\ f(0)=0## near ##x=0##: I didn't think about such badly behaved functions.
I think my function ##f(x)=x\sin(1/x)## has local extrema in every small interval ##(0,\varepsilon)##, so you can't always make these intervals. But anyway, there is no need to try to make these intervals.

Lament said:
I understand that the proof is easier when considering point by point, but I do not understand why the interval argument raises confusion: if ##f(x)\neq x## for some ##x##, then the same must hold in a small neighbourhood around that point (##f## is strictly monotonic, after all).

Am I still missing something?
It is okay, but the reasoning shouldn't involve monotonicity- only continuity. The function ##x\mapsto f(x)-x## is continuous, so if it is negative (or positive) at a point, it is negative (or positive) in a neighborhood of the point. In general, it's better to avoid making diversions like this that aren't necessary for the proof.
 
  • #125
fresh_42 said:
9. Let ##f:[0,1]\to [0,1]## be a continuous function such that ##f(f(f(x)))=x## for all ##x\in [0,1]##. Show that ##f## must be the identity function ##f(x)=x.## (IR)
Since ##f(f(f(x)))=x## for every ##x\in[0,1]##, the map ##f\circ f## is a two-sided inverse of ##f##. As such, ##f:[0,1]\to[0,1]## is bijective.
Since ##f## is a continuous bijection on ##[0,1]##, then it is either strictly decreasing or strictly increasing, the same for its inverse (which has the same direction of variation as ##f##).

Proving ##\forall x\in [0,1],\,f(f(f(x)))=x\implies f(x)=x## is, by contraposition, the same as proving that if ##f(x)\neq x## for some ##x##, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

Suppose that ##f(x)\neq x##, i.e, there exists a real number in ##[0,1]## such that its image by ##f## is different than itself. In the following, by ##a## I mean this real.
All inequalities will be strict because ##f## is a continuous bijection.
Suppose that ##f(x)## is strictly increasing, and let there be ##a\in[0,1]## such that ##f(a)=b\neq a##, then:
If ##b<a##, then ##f(b)=c<f(a)=b##, and thus ##f(c)<f(a)=b##. It follows that ##f(f(f(a)))<b<a##.​
If ##b>a##, then ##f(b)=c>f(a)=b##, and thus ##f(c)>f(a)=b##. It follows that ##f(f(f(a)))>b>a##.​
I have proved that if ##f(x)## is strictly increasing, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

Suppose that ##f(x)## is strictly decreasing, and let there be ##a\in[0,1]## such that ##f(a)=b\neq a##, then:
If ##b<a##, then ##f(b)=c>f(a)=b##.​
If ##b<c<a##, then ##f(b)=c>f(c)>f(a)=b##. It follows that ##f(f(f(a)))>f(a)##​
If ##a<c##, then ##f(a)=b>f(c)##. It follows that ##f(f(f(a)))<f(a)##​
If ##b>a##, then ##f(b)=c<f(a)=b##.​
If ##b>c>a##, then ##f(b)=c<f(c)<f(a)=b##. It follows that ##f(f(f(a)))<f(a)##​
If ##a>c##, then ##f(a)=b<f(c)##. It follows that ##f(f(f(a)))>f(a)##​
I have proved that if ##f(x)## is strictly decreasing, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

I have proved that ##f(x)\neq x\implies\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##!
 
  • #126
archaic said:
Not yet a full solution; missing a detail. I'd be grateful if someone could give me a hint for 3) of the first part.
We know that ##p_n(0)=1## for all ##n##, ##p_0(x)=1## has no zeroes, and ##p_1(x)## has only one zero (linear function). For the rest, I suppose that ##n\geq2##.
Let ##n## be an even natural number.
1) ##\forall x\geq0,\,p_n(x)>0## since we have a sum of positive numbers.
2) ##\forall x\in[-1,\,0),\,p_n(x)>0##.
I can write ##p_n(x)## like this:
$$p_n(x)=1+\sum_{\substack{k=1\\ k\text{ odd}}}^{n-1}\left(\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}\right)$$
I am going to prove that the parenthesis is always positive:
$$\begin{align*}
x^k\geq |x|^{k+1}&\Leftrightarrow\frac{x^k}{k!}>\frac{x^k}{(k+1)!}\geq\frac{|x|^{k+1}}{(k+1)!} \\
&\Leftrightarrow\frac{x^k}{k!}>\frac{|x|^{k+1}}{(k+1)!}\\
&\Leftrightarrow\frac{x^k}{k!}>\frac{x^{k+1}}{(k+1)!}>-\frac{x^k}{k!}\\
&\Leftrightarrow-\frac{x^k}{k!}<-\frac{x^{k+1}}{(k+1)!}<\frac{x^k}{k!}\\
&\Leftrightarrow0<\frac{x^k}{k!}+\frac{x^{k+1}}{(k+1)!}
\end{align*}$$
3) ##\forall x<-1,\,p_n(x)>0##. (WIP)
Since ##p_n(x)>0## for all real values of ##x##, it has no real zeroes when ##n## is even.

Let ##n## be an odd natural number.
I notice that ##p'_n(x)=p_{n-1}(x)##. We have that ##n-1## is even, so ##p_{n-1}(x)>0## for all ##x\in\mathbb{R}##. From this I can say that ##p_n(x)## is a bijection, since it is strictly increasing, with no absolute/local maximum or minimum. ##(*)##
We also have that:
$$\lim_{x\to\pm\infty}p_n(x)=\lim_{x\to\pm\infty}\frac{x^n}{n!}=\pm\infty$$
1) This tells me that there exists a real number ##N##, such that ##p_n(N)p_n(-N)<0##.
2) Since ##p_n(x)## is a polynomial, it is continuous over all ##\mathbb{R}##.
Using 1) and 2), I conclude from the intermediate value theorem that there exists a real number ##c## such that ##p_n(c)=0##. Moreover, using ##(*)##, I also conclude that ##c## is unique.

Conclusion:
If ##n## is even, then ##p_n(x)## has no real zeroes. Else, it has only one.
I had someone show me this; it proves that for even ##n##, ##p_n(x)=\sum_{k=0}^n\frac{x^k}{k!}>0## for ##x<0##, thus proving 2) and 3).
Since ##e^x## is infinitely differentiable, it is in particular differentiable ##n+1## times, and ##p_n(x)## is its ##n##-th Taylor polynomial at ##0##. Using Taylor's theorem with the Lagrange remainder term, we get:
$$e^x=p_n(x)+E(x)\Leftrightarrow p_n(x)=e^x-E(x)$$
Where ##E(x)## is the remainder, and is given by:
$$E(x)=\frac{e^\zeta}{(n+1)!}x^{n+1},\text{ with }x<\zeta<0$$
Since ##x<0## and ##n## is even, then ##-x^{n+1}=|x|^{n+1}##:
$$p_n(x)=e^x-E(x)=e^x+\frac{|x|^{n+1}e^\zeta}{(n+1)!}>0$$
I think that my proof is now complete.
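The conclusion (no real zeros for even ##n##, exactly one for odd ##n##) can be sanity-checked numerically. The sketch below (grid bounds and step are arbitrary choices, wide enough to bracket the single real root for the small odd ##n## tested) counts sign changes of ##p_n## on a grid; each sign change brackets at least one real root, and since ##p_n## is strictly increasing for odd ##n##, one sign change means exactly one root:

```python
from math import factorial

def p(n, x):
    """Partial sum p_n(x) = sum_{k=0}^{n} x^k / k!."""
    return sum(x**k / factorial(k) for k in range(n + 1))

def sign_changes_on_grid(n, lo=-30.1, hi=30.1, steps=6000):
    """Count sign changes of p_n sampled on [lo, hi]; each sign change
    brackets at least one real root of the polynomial."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    ys = [p(n, x) for x in xs]
    return sum(1 for a, b in zip(ys, ys[1:]) if a * b < 0)

for n in range(0, 12):
    changes = sign_changes_on_grid(n)
    if n % 2 == 0:
        assert changes == 0   # even n: p_n stays positive on the grid
    else:
        assert changes == 1   # odd n: exactly one real root is crossed
```

This is only a finite-grid check, not a proof, but it matches the conclusion for every ##n## tested.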
 
  • #127
fresh_42 said:
I still don't like the inaccuracy of "behaves like" because it doesn't capture the lower-order terms quantitatively. We have ##\dfrac{1}{x^n}<\dfrac{1}{x^{n-1}}## (for ##x>1##), so why may we neglect those terms of lower order? In other words: why is ##g(x)=x^{-n}## suited, if ##f(x)=P(x)^{-1}## is actually bigger than ##g(x)?##

A better function ##g(x)## should be chosen, one that is actually greater than ##x^{-n}## and not only "near" it, and the estimate ##\int 1/P(x)\, dx < \int g(x)\,dx## should be made precise.

Your argument goes: "I have a function (##x^{-n}##), which is possibly smaller than ##1/P(x)##, so it cannot guarantee convergence, but I don't care, since my error is small." However, you did not show that this heuristic is allowed.
Hey fresh! Thank you very much. For some reason, I was set on finding an equivalent at infinity for ##P(x)##, and disregarded that I can also find a function (whose integral over that interval converges) such that ##1/P(x)## is a little-o of it, as you suggested.
 
  • #128
@QuantumQuest Sir, I'm having some doubts regarding question number 12, please pardon my lack of knowledge.

When you wrote
A table has an interval moveable surface used for slicing bread.
I couldn't understand what "internal" meant - how is it internal? And does "surface" mean one of the faces of that "cuboidal" machine?
 
  • #129
Infrared said:
Yes, this is my point. To me, it looked like your argument was roughly:

"##f## has a local extremum" ##\implies## "##f'(c)=0## for some ##c##" ##\implies## "##f## is not injective"

But ##f(x)=x^3## is a counterexample to the second implication. If this was not the shape of your argument, could you clarify how you were trying to use the fact that ##f'(c)=0##?

I think my function ##f(x)=x\sin(1/x)## has local extrema in every small interval ##(0,\varepsilon)##, so you can't always make these intervals. But anyway, there is no need to try to make these intervals.

@Infrared As per your suggestions I have reworded my claim 2. Is it airtight now, or did I miss something?

Claim 2: Since ##f(x)## is continuous bijection in the domain ##[0,1]##, so ##f(x)## is either strictly increasing or strictly decreasing within the interval.

Proof: WLOG, consider an interval ##[a,b]##. Suppose it contains a local extremum ##f(c)## with ##a \lt c \lt b##. Then by the intermediate value theorem, ##\exists x_1, x_2## with ##a \lt x_1 \lt c## and ##c \lt x_2 \lt b## such that ##x_2 \gt x_1## but ##f(x_2)=f(x_1)##. This is a contradiction, since ##f(x)## is injective. So a local extremum cannot lie in the interval ##(a,b)##; hence the extrema must be at the end-points of the interval, i.e. either ##f(b) \gt f(a)## (strictly monotonically increasing) or ##f(b) \lt f(a)## (strictly monotonically decreasing). The monotonicity is strict because otherwise ##f(x)## would be constant on some interval, which contradicts the injectivity of ##f(x)##. This result extends to the entire domain of ##f##, indicating that no local extrema occur in the interior.

Infrared said:
It is okay, but the reasoning shouldn't involve monotonicity- only continuity. The function ##x\mapsto f(x)-x## is continuous, so if it is negative (or positive) at a point, it is negative (or positive) in a neighbourhood of the point. In general, it's better to avoid making diversions like this that aren't necessary for the proof.

Yes, I have changed it to the following:

Claim 4: ##f(x)## satisfies the identity relation, i.e., ##f(x)=x##

Proof: Suppose for some ##x \in [0,1]##, ##f(x) \neq x##; then either ##f(x)>x## or ##f(x)<x## at that point.
 
  • #130
Adesh said:
@QuantumQuest Sir, I'm having some doubts regarding question number 12, please pardon my lack of knowledge.

When you wrote

A table has an internal moveable surface used for slicing bread.

I couldn't understand what "internal" meant, how it is internal? And does the "surface" means one of those faces of that "cuboidal" machine?

(corrected "interval" you had in your quote)

Internal means inside, in the interior of something. Now, if you read the second phrase right after the first one, it's easy to see that there is an internal surface inside the table that can be pulled outside by using two small handles.

fresh_42 said:
A table has an internal movable surface used for slicing bread. In order to be able to be pulled to the outside, there are two small handles on its front face...

To explain even more simply, the internal surface I mention is like a drawer that can be pulled outside, used to do its job, and pushed back inside again using the handles after it has done its job. So, there is no surface of a "cuboidal" machine.

EDIT: Also, keep in mind that Question ##12## is only for people currently in high school.
 
  • #131
QuantumQuest said:
Now, if you read the second phrase right after the first one it's easy to see that there is an internal surface inside table that can be pulled outside by using two small handles.
What is the table in that figure? What does that cuboid represent in the figure?
 
  • #132
fresh_42 said:
10. Let ##f:[0,1]\to\mathbb{R}## be a continuous function, differentiable on ##(0,1)## that satisfies ##f(0)=f(1)=0##. Prove that ##f(x)=f'(x)## for some ##x\in (0,1)##. Note that ##f'## is not assumed to be continuous. (IR)

I am not sure about the correctness and completeness of the "proof" I have attempted. Even if it turns out to be correct, it may have loose ends, because my knowledge of calculus and continuity is basic and I haven't had to solve proof-type problems of this type as part of my curriculum.
The proof should be obvious for the trivial case where ##f## is the constant function ##f(x) = 0 \, \forall x \in [0,1]##. If ##f## is not the constant function, the proof is as follows. Let ##a \in (0, 1]## be the smallest positive number for which ##f(a) = 0 = f(0)##. Obviously, ##a## exists since at least ##a=1## will meet this requirement even if no other value in ##(0,1)## does.
Observation 1: By definition of ##a## and since ##f## is continuous, ##f(x)## must be strictly positive or strictly negative for all ##x \in (0,a)##
Observation 2: By Rolle's theorem, there must exist a value ##b \in (0, a)## for which ##f'(b) = 0##. And by definition of ##a##, ##f(b) \neq 0##

Let us define a function ##g:[0,a]\to\mathbb{R}## as follows: $$
g(x) =
\begin{cases}
-\log (f(x)) & \text{if } f(\frac{a} {2}) > 0 \\
-\log (-f(x)) & \text{if } f(\frac{a} {2}) < 0
\end{cases}
$$

It is easy to see that ##g## too must be a continuous function and differentiable in ##(0, a)##.

Observation 3: ##g'(x) = -\dfrac {f'(x)} {f(x)}##, and since ##f'(b) = 0, f(b) \neq 0##, we must have ##g'(b) = 0##
Observation 4: ##g(x)## tends to ##\infty## as ##x## tends to 0 or ##a## (##x## within the range ##[0,a]##)

(I don't have a proof for the following claim, but it appears "intuitively" correct.) Since ##g(x)## is unbounded on the finite interval ##[0,a]##, the derivative ##g'(x)## cannot be bounded, so it can take arbitrarily extreme negative or arbitrarily extreme positive values as ##x## tends to ##0## or ##a## respectively. More specifically, since ##g(x)\to\infty## as ##x\to 0^+## while ##g(b)## is finite, ##g'(x)## should take arbitrarily large negative values as ##x## tends to ##0## from the right. So, if we choose an arbitrary bound ##v_0##, there must exist some ##p_0 \in (0, b)## such that ##g'(p_0) \lt v_0##.

Now, we can apply Darboux's theorem of real analysis, since ##g## is differentiable. Since ##g'(b) = 0## and ##g'(p_0) \leq v_0 \lt 0## for some ##p_0 \lt b##, for any value ##v \in (v_0, 0)## there must exist a point ##c \in (p_0, b)## such that ##g'(c) = v##. Since we claimed that ##g'## is unbounded on the interval on which it is defined, we can choose ##v_0 = -2##. This implies that there must exist ##v_1 \in (p_0, b)## (##p_0## being the largest ##x \in (0, b)## for which ##g'(x) \leq v_0##) such that ##g'(v_1) = -1## (since ##-2 < -1 < 0##).

$$
g'(v_1) = -1 \Rightarrow -\dfrac {f'(v_1)} {f(v_1)} = -1 \Rightarrow \dfrac {f'(v_1)} {f(v_1)} =1 \\
\Rightarrow f'(v_1) = f(v_1)
$$
Thus there exists a point ##x = v_1 \in (0, 1)## (more precisely ##x = v_1 \in (0, b)##) for which ##f'(x) = f(x)##

(I got to know of Darboux's theorem only when trying to understand non-continuous derivatives after looking at this question - it is a theorem in real analysis, a topic that I have never studied as my math knowledge is mostly limited to high-school curriculum. So don't be surprised if I have interpreted or applied the theorem incorrectly :oops:)
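The statement of problem 10 itself is easy to illustrate numerically. The sketch below picks one concrete admissible function, ##f(x)=x(1-x)## (an arbitrary choice for illustration; it satisfies ##f(0)=f(1)=0##), and locates a point with ##f'(x)=f(x)## by bisection:

```python
# Illustration of problem 10 for a concrete f: f(x) = x(1 - x) satisfies
# f(0) = f(1) = 0, and we look for x in (0, 1) with f'(x) = f(x),
# i.e. a zero of h(x) = f(x) - f'(x).
def f(x):
    return x * (1.0 - x)

def fprime(x):
    return 1.0 - 2.0 * x

def h(x):
    return f(x) - fprime(x)

# h(0) = -1 < 0 and h(0.5) = 0.25 > 0, so bisect on [0, 0.5].
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(mid) < 0:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)

# The exact solution of x - x^2 = 1 - 2x is x = (3 - sqrt(5))/2 ≈ 0.382.
assert abs(h(x)) < 1e-9
```

One example proves nothing, of course, but it makes the statement concrete before wrestling with the general proof.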
 
  • #133
@Adesh This is good- to make it complete, you should have a proof of your claim that a continuous bijection ##f:[0,1]\to [0,1]## must be strictly monotonic. You can also cut down a bit on the case-work by noting that ##f## decreasing contradicts ##f\circ f\circ f\circ f=\text{Id}.##

@Lament I think it is good now, but there are two details in your claim 2 that could be expanded upon. It's not immediately clear how you are applying IVT (not MVT), and how you are concluding that ##f## is strictly monotonic from the fact that it does not attain local extrema in its interior.

Since there have been a few nearly complete arguments, I'll post a solution. It should look a lot like several of the preceding posts.

First of all, ##f:[0,1]\to [0,1]## is a bijection because ##f\circ f## is inverse to ##f##. In particular, ##f## is injective. We check that a continuous injection ##f:[0,1]\to [0,1]## must be strictly monotonic.

Suppose for contradiction that ##f## is neither increasing nor decreasing. Since ##f## is not decreasing, there exist ##a,b\in [0,1]## such that ##a<b## and ##f(a)<f(b)##. Similarly, there are ##c,d\in [0,1]## such that ##c<d## and ##f(c)>f(d)## (we may take the inequalities to be strict by injectivity).

Suppose that ##a<c## and ##b<d##. Let ##M=\max(f(b),f(c))## and let ##p\in\{b,c\}## be the corresponding input. Let ##y## be a real number in the interval ##(\max(f(a),f(d)),M)##. By the intermediate value theorem, there exist ##x_1\in (a,p)## and ##x_2\in (p,d)## such that ##f(x_1)=y=f(x_2)##, contradicting injectivity. The other cases are analogous.

A very slick alternative way to prove this is to consider the set ##S=\{(x,y)\in [0,1]\times [0,1]: x>y\},## and define ##F:S\to\mathbb{R}## by ##F(x,y)=f(x)-f(y)##. Since ##S## is connected and ##F## is continuous, the image ##F(S)## is an interval in ##\mathbb{R}##. This interval does not contain ##0## by injectivity, so ##F(x,y)## is either always positive or always negative, corresponding to ##f## being increasing or decreasing, respectively.

From here, many of you have given a correct argument: The function ##f\circ f\circ f=\text{Id}## has the same monotonicity as ##f##, so ##f## must be increasing. Suppose for contradiction that ##f## is not the identity, so ##x\neq f(x) ## for some ##x##, meaning either ##x<f(x)## or ##x>f(x)##. In the first case, we have that ##x<f(x)<f(f(x))<f(f(f(x)))=x## is a contradiction, and similarly if ##x>f(x)##.
 
  • #134
@archaic Your solution to problem ##8## looks correct, but there isn't any need to do estimates like this. It really is just induction like I promised!

We check by induction that ##p_n## has no roots when ##n## is even, and exactly one root when ##n## is odd:

The base case ##n=0## is clear. Now suppose that this claim is true for ##p_{n-1}##, and consider the polynomial ##p_n##.

If ##n## is odd, then ##p_n'=p_{n-1}## has no roots, and so is always positive. This means that ##p_n## is a strictly increasing function and so can have at most one root. Also ##\lim_{x\to\pm\infty}p_n(x)=\pm\infty## since ##n## is odd, so by IVT, the polynomial ##p_n## does in fact have a root.

If instead ##n## is even, the polynomial ##p_n## must achieve a minimum value (since ##\lim_{x\to\pm\infty}p_n(x)=\infty##), say at ##x=c##. Then ##p_n'(c)=p_{n-1}(c)=0##, so ##p_n(c)=p_{n-1}(c)+c^n/n!=c^n/n!>0## because ##n## is even (and we can rule out ##c=0## as a special case, since ##p_n(0)=1##). So ##p_n## has no real roots.
 
  • #135
Not anonymous said:
Let ##a \in (0, 1]## be the smallest positive number for which ##f(a) = 0 = f(0)##. Obviously, ##a## exists
I don't think I agree with this. Supposing that the zero set of ##f## is ##\{0\}\cup\{1,1/2,1/3,1/4,1/5,\ldots\}##, what is your ##a##?

Not anonymous said:
(I don't have a proof for the following claim but it just appears to be "intuitively" correct).
I think what you're trying to say is: If ##f:(a,b)\to\mathbb{R}## is differentiable, and ##f'## is bounded, then ##f## is bounded. You can show this with the mean value theorem.

The rest of your proof actually looks pretty good!
 
  • #137
Problem 10

It is clear there is an interval ##[a,b]\subseteq[0,1]## such that ##f(a)=f(b)=0,\quad |f|\mid_{(a,b)}>0##.
Consider a function ##F(x)=\ln|f(x)|-x##. We have ##F(x)\to-\infty## as ##x\to b-## or ##x\to a+##.
Thus ##F## attains a maximum at some ##\tilde x\in(a,b)##, where ##F'(\tilde x)=f'(\tilde x)/f(\tilde x)-1=0##, i.e. ##f'(\tilde x)=f(\tilde x)##.
This solves the problem.
 
  • #138
Problem 1 is a very standard fact. I would propose proving the following generalization.
Let ##(X,d_X)## and ##(Y,d_Y)## be metric spaces, with ##Y## complete. Let ##U\subset X## be a subset of ##X##.

Let a function ##f:U\to Y## be uniformly continuous:
For any ##\varepsilon>0## there is ##\delta=\delta(\varepsilon)## such that
$$d_X(x_1,x_2)<\delta\Longrightarrow d_Y(f(x_1),f(x_2))\le\varepsilon.$$
Then there is a unique continuous function ##\tilde f:\overline U\to Y## such that ##\tilde f\mid_U=f##; moreover, ##\tilde f## is uniformly continuous with the same ##\delta(\varepsilon)##.

(##\overline U## stands for the closure of ##U##)
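The mechanism behind the extension can be sketched numerically. Below, ##f(x)=\sin(x)/x## on ##U=(0,1]## is an arbitrary illustrative choice of a uniformly continuous function; the extension's value at the boundary point ##0\in\overline U## is obtained as the limit of the images of a sequence in ##U## converging to ##0## (the images form a Cauchy sequence in the complete space ##\mathbb{R}##, so they converge):

```python
import math

# Sketch: f(x) = sin(x)/x is uniformly continuous on U = (0, 1], so it has
# a unique continuous extension f_tilde to the closure [0, 1].
def f(x):
    return math.sin(x) / x

# Take a sequence x_n -> 0 inside U; the images f(x_n) form a Cauchy
# sequence in R (complete), so they converge, and the limit defines
# the value f_tilde(0) of the extension at the boundary point.
seq = [10.0 ** (-n) for n in range(1, 12)]
images = [f(x) for x in seq]
f_tilde_at_0 = images[-1]

# Consecutive images are already extremely close (Cauchy behaviour) ...
assert abs(images[-1] - images[-2]) < 1e-9
# ... and the limit is 1, the value the continuous extension must take.
assert abs(f_tilde_at_0 - 1.0) < 1e-9
```

Uniform continuity is what guarantees the images of any Cauchy sequence are themselves Cauchy, and that the limit does not depend on the chosen sequence.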
 
  • #139
Infrared said:
I don't think I agree with this. Supposing that the zero set of ##f## is ##\{0\}\cup\{1,1/2,1/3,1/4,1/5,\ldots\}##, what is your ##a##?

That is a nice counterexample! I was wary of infinity, and yet not sure whether my proof correctly dealt with it in its various forms. I missed the case of ##f## having infinitely many zeros (other than the constant function ##f(x) = 0 \, \forall x \in [0,1]##). The reason my attempted proof defined ##a## was to get an interval over which I could define ##g(x)##: specifically, an interval whose endpoints are zeros of ##f(x)## and on whose interior ##f## is either strictly positive or strictly negative. Is it valid to assume that for any differentiable function ##f:[0,1] \rightarrow \mathbb{R}## that is not the constant zero function but has infinitely many zeros, its zeros form a countably infinite set? Or that for any such function, there is some interval ##I \subset [0,1]## containing two or more zeros of ##f##, with all the zeros falling in that interval forming a finite or countably infinite set (i.e. the set of zeros of ##f## contains a finite or countably infinite subset, with all remaining zeros, if any, lying outside the range defined by this subset)? If yes, would the following changes make the proof correct? (If my understanding of "countably infinite" is correct, the zero set example you have provided is countably infinite.)

Instead of choosing the range over which ##g## is defined as before, we now choose it to be ##[a_{i}, a_{i+1}] \subseteq [0,1]## where ##a_{i}, a_{i+1}## (##a_{i} \lt a_{i+1}##) are any two adjacent elements in the zero set of ##f## (meaning ##f## has no zeros in ##(a_{i}, a_{i+1})##).

##g:[a_{i}, a_{i+1}] \rightarrow \mathbb{R}## is defined as follows: $$g(x) =
\begin{cases}
-\log(f(x)) & \text{if } f(\frac{a_{i} + a_{i+1}} {2}) > 0 \\
-\log(-f(x)) & \text{if } f(\frac{a_{i} + a_{i+1}} {2}) < 0
\end{cases}
$$

We note that ##g(x)## tends to ##\infty## as ##x## tends to ##a_{i}## (from the right) or ##a_{i+1}## (from the left), with ##x## in the range ##[a_{i},a_{i+1}]##. And by Rolle's theorem, there must exist some ##b \in (a_{i},a_{i+1})## such that ##f'(b) = 0##. The rest of the proof is essentially the same as before, the main changes being the use of ##[a_{i},a_{i+1}]## instead of the earlier ##[0,a]## and ##(a_{i}, b)## instead of the earlier ##(0,b)##. The ranges in which ##p_0, v_0, v_1## etc. were defined in the earlier proof will change accordingly.
 
  • #140
wrobel said:
Problem 1 is a very very standard fact.
However, we neither have a Hilbert space nor is ##X## required to be complete in question 1. Hence it is actually close to what you have proposed instead.
 
  • #141
wrobel said:
It is clear there is an interval ##[a,b]\subseteq[0,1]## such that ##f(a)=f(b)=0,\quad |f|\mid_{(a,b)}>0##.
Consider a function ##F(x)=\ln|f(x)|-x##. We have ##F(x)\to-\infty## as ##x\to b-## or ##x\to a+##.
Thus for some ##\tilde x\in(a,b)## we have ##F'(\tilde x)=0##
This solves the problem

This is right, but I do think your first claim deserves a proof. If ##f## is the constant zero function, then the problem is solved. Otherwise, suppose ##f(p)\neq 0## for some ##p##. Then the set ##\{x\in [0,p]: f(x)=0\}## is nonempty, closed and so has a maximal element ##a## (that is not ##p##), and similarly, there is a minimal zero ##b## larger than ##p##. Then ##[a,b]## is the interval you are looking for.

Also, there are still some outstanding problems in the March II thread (https://www.physicsforums.com/threads/math-challenge-march-2020-part-ii.985311/) that you might be interested in :)
Not anonymous said:
Is it valid to assume that for any differentiable function ##f:[0,1] \rightarrow \mathbb{R}## that is not the constant-value function but has infinite number of zeros, its zeros will form a countably infinite set?
This is not true, consider ##f(x)=\begin{cases}
0 & x\leq 1/2 \\
(x-1/2)^2 & x\geq 1/2\\
\end{cases}.##

Not anonymous said:
Or that for any such function, there will be some interval ##I ⊂[0,1]## containing 2 or more zeros of ##f## with all zeros falling in that interval forming a finite or a countably infinite set (i.e. set of zeros of ##f## contains a finite or countably infinite subset, with all remaining zeros, if any, lying outside the range defined by this finite or countably infinite subset)?

The issue isn't having countably or uncountably many zeros; see my above response to @wrobel for how to find the interval you're looking for. You should be wary of taking "consecutive elements" of a countable set. For example, there are no consecutive elements in ##\mathbb{Q}##.

Also, this problem has a cute one-line solution: apply Rolle's theorem to the function ##g(x)=e^{-x}f(x)## on the interval ##[0,1].## This is just the exponential of @wrobel's function, and works for the same reason, but has the advantage of being defined everywhere, so there isn't any need to find a sub-interval to work with.
 
  • #142
@Infrared I would like you to have a look at my latest solution to question 9.
archaic said:
Since ##f(f(f(x)))=x## for every ##x\in[0,1]##, the map ##f\circ f## is a two-sided inverse of ##f##. As such, ##f:[0,1]\to[0,1]## is bijective.
Since ##f## is a continuous bijection on ##[0,1]##, then it is either strictly decreasing or strictly increasing, the same for its inverse (which has the same direction of variation as ##f##).

Proving ##\forall x\in [0,1],\,f(f(f(x)))=x\implies f(x)=x## is, by contraposition, the same as proving that if ##f(x)\neq x## for some ##x##, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

Suppose that ##f(x)\neq x##, i.e, there exists a real number in ##[0,1]## such that its image by ##f## is different than itself. In the following, by ##a## I mean this real.
All inequalities will be strict because ##f## is a continuous bijection.
Suppose that ##f(x)## is strictly increasing, and let there be ##a\in[0,1]## such that ##f(a)=b\neq a##, then:
If ##b<a##, then ##f(b)=c<f(a)=b##, and thus ##f(c)<f(a)=b##. It follows that ##f(f(f(a)))<b<a##.

If ##b>a##, then ##f(b)=c>f(a)=b##, and thus ##f(c)>f(a)=b##. It follows that ##f(f(f(a)))>b>a##.

I have proved that if ##f(x)## is strictly increasing, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

Suppose that ##f(x)## is strictly decreasing, and let there be ##a\in[0,1]## such that ##f(a)=b\neq a##, then:
If ##b<a##, then ##f(b)=c>f(a)=b##.

If ##b<c<a##, then ##f(b)=c>f(c)>f(a)=b##. It follows that ##f(f(f(a)))>f(a)##

If ##a<c##, then ##f(a)=b>f(c)##. It follows that ##f(f(f(a)))<f(a)##

If ##b>a##, then ##f(b)=c<f(a)=b##.

If ##b>c>a##, then ##f(b)=c<f(c)<f(a)=b##. It follows that ##f(f(f(a)))<f(a)##

If ##a>c##, then ##f(a)=b<f(c)##. It follows that ##f(f(f(a)))>f(a)##

I have proved that if ##f(x)## is strictly decreasing, then ##\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##.

I have proved that ##f(x)\neq x\implies\exists x\in [0,1]\,|\,f(f(f(x)))\neq x##!
 
  • #143
Adesh said:
What is the table in that figure? What does that cuboid represent in the figure?

@Adesh I think that after my explanations in post #130 things are sufficiently clear. What you see in Fig. 1 is the surface of a table. Beneath the surface you see two handles, which are used to pull an internal movable surface - as I said in post #130, you can think of it as a drawer. This "drawer" has an upper and a lower surface; it is a rectangular parallelepiped, so there is friction between its left and right sides (as we see them in Fig. 1) and the internal part of the respective sides of the table. Effectively, then, we have two parallelepipeds: the external one is the upper part of the table as seen in Fig. 1, i.e. without the table's legs, and the internal one is the movable surface inside it.
 
  • #144
For #10, SPOILER: I had in mind something like this: as Infrared pointed out, ##f-f'## does have the Darboux property (intermediate value property), because it is a derivative: it is the derivative of ##F-f##, where ##F## is an antiderivative of ##f## (which exists by the fundamental theorem of calculus).

Thus either the problem is true, or else ##f-f'## is everywhere positive or everywhere negative on ##(0,1)##. Suppose ##f-f' > 0## on ##(0,1)##; then ##f > 0## at every critical point in ##(0,1)##, so the global minimum is ##0## and occurs only at the endpoints. Let ##a## be a point in ##(0,1)## where ##f## has a global maximum on ##[0,1]##, so ##f(a) > 0##. By the MVT on ##[0,a]##, there is a point ##b## with ##0 < b < a## where ##f'(b) = \frac{f(a)-f(0)}{a} = \frac{f(a)}{a}##. Then we get the contradiction ##f(b) > f'(b) = f(a)/a > f(a)##.

If ##f' > f## on ##(0,1)##, then ##f < 0## at every critical point in ##(0,1)##, so choose ##a \in (0,1)## as a global minimum with ##f(a) < 0##. We get a similar contradiction at a point ##b## with ##0 < b < a##: ##f(b) < f'(b) = \frac{f(a)-f(0)}{a} < f(a)##.
 
  • #145
Adesh said:
Question 12
Questions 11-15 are for high schoolers only!
 
  • #146
archaic said:
Saying ##f(f(f(x)))=x## is like saying ##f(u\in[0,1])=x##
This is not properly written. I think what you're trying to say is that for any ##x\in [0,1]##, there exists ##u\in [0,1]## such that ##f(u)=x##. This only means that ##f## is surjective, but you need injectivity to conclude monotonicity. Anyway, ##f## is a bijection because ##f\circ f## is a (both left and right) inverse function.

archaic said:
Since ##f## is a continuous bijection on ##[0,1]##, then it is either strictly decreasing or strictly increasing
This is true, but I think it's worthwhile to write out a proof. See my post 133 for two arguments, but maybe you want to think about it yourself too.

The casework that follows looks okay, but it could be cleaned up a little bit. For example, the condition ##f(f(f(x)))=x## is automatically inconsistent with ##f## being strictly decreasing: a composition of three decreasing functions is decreasing, while the identity is increasing. So there is no need to check all of those cases (and even in this work, I think the sub-cases relating to ##c## are unnecessary). Also, introducing the new variable ##c=f(b)## makes things harder to read, at least for me.
 
  • #147
Infrared said:
This is not properly written. I think what you're trying to say is that for any ##x\in [0,1]##, there exists ##u\in [0,1]## such that ##f(u)=x##. This only means that ##f## is surjective, but you need injectivity to conclude monotonicity. Anyway, ##f## is a bijection because ##f\circ f## is a (both left and right) inverse function.
I meant that since ##f## is taking ##[0,1]## as an entry point and giving back an ordered ##[0,1]##, or ##x##, then it's bijective as the RHS also is. There can't be a point that has two images because it is a function, and it has pin point the same ##[0,1]## "elements".
 
  • #148
archaic said:
I meant that since ##f## is taking ##[0,1]## as an entry point and giving back an ordered ##[0,1]##, or ##x##
I don't know what this means.

But think about the function ##f:[0,2]\to [0,2]## (re-scaling just to make the numbers easier) given by ##f(x)=2(x-1)^2.## This map is not a bijection because it is not injective, but if we set ##g(x)=\sqrt{\frac{x}{2}}+1##, then we still have ##f(g(x))=x##.
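To make the point concrete, here is a quick numerical check of this example (my own sketch; the function names are just for illustration): ##f## fails to be injective, yet ##g## is still a right inverse of ##f##.

```python
import math

# f: [0, 2] -> [0, 2] is NOT injective, so it is not a bijection.
def f(x):
    return 2 * (x - 1) ** 2

# g is nevertheless a right inverse of f, i.e. f(g(x)) == x on [0, 2].
def g(x):
    return math.sqrt(x / 2) + 1

# f is not injective: two distinct inputs share an output.
assert f(0.5) == f(1.5)

# g is a right inverse of f at sample points 0.0, 0.1, ..., 2.0.
for k in range(21):
    x = k / 10
    assert math.isclose(f(g(x)), x, abs_tol=1e-12)

# But g is not a left inverse: g(f(0.5)) is 1.5, not 0.5.
print(g(f(0.5)))  # → 1.5
```

The asymmetry between left and right inverses is exactly what breaks the claim: having some ##g## with ##f\circ g = \mathrm{id}## only forces ##f## to be surjective.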
 
  • #149
Let ##U\subseteq X## be dense and ##A:U\to Y## bounded linear, where ##Y## is a Banach space. Fix ##x\in X##. By density there is a sequence ##u^n\in U##, ##n\in\mathbb N##, such that ##u^n\xrightarrow[n\to\infty]{} x##.

The sequence ##u^n##, ##n\in\mathbb N##, is Cauchy. We show that ##A(u^n)##, ##n\in\mathbb N##, is Cauchy in ##Y##; hence, by completeness, the limit ##A(u^n)\xrightarrow[n\to\infty]{} y_x\in Y## exists.

Let ##\varepsilon > 0##. By boundedness of ##A## there is ##M>0## such that ##\|A(u)\|_Y \leq M\|u\|_X## for every ##u\in U##. Since ##u^n##, ##n\in\mathbb N##, is Cauchy, pick a suitable ##K(\frac{\varepsilon}{M})\in\mathbb N## by definition. For ##m,n>K##,
$$\|A(u^m) - A(u^n)\| = \|A(u^m-u^n)\| \leq M\|u^m-u^n\| < \varepsilon.$$
Therefore ##A(u^n)##, ##n\in\mathbb N##, is Cauchy.

Define ##\tilde{A} : X\to Y## by ##\tilde{A}(x) = y_x## (if ##x\in U##, the sequence can be taken constant, hence ##\tilde{A}(x) = A(x)##). We have ##\tilde{A}|_U = A##. Linearity follows from properties of the limit for continuous maps and linearity of ##A##. Additionally, by continuity of the norm,
$$\|\tilde{A}(x)\| = \left\|\lim_{n\to\infty} A(u^n)\right\| = \lim_{n\to\infty} \|A(u^n)\| \leq \|A\|\lim_{n\to\infty} \|u^n\| = \|A\|\|x\|. \tag{1}$$
Thus ##\tilde{A}## is bounded (i.e. continuous).

Next, we show equality of the operator norms. On the one hand, ##\|A\| \leq \|\tilde{A}\|## holds trivially, because extending cannot make the norm smaller. By definition, the operator norm of ##T:X\to Y## is
$$\|T\| := \sup \left\{ \frac{\|T(x)\|}{\|x\|} : x \neq 0\right\}.$$
By (1) we have ##\|\tilde{A}\| \leq \|A\|##. Therefore the operator norms are equal.

Note also that if ##B:X\to Y## were another such extension, then for any sequence ##v^n\in U## with ##v^n\to x##,
$$\|B(x) - \tilde{A}(x)\| = \lim_{n\to\infty} \|A(v^n) - A(u^n)\| = \lim_{n\to\infty} \|A(v^n-u^n)\| = 0,$$
because the restrictions to ##U## agree and ##u^n, v^n## both converge to ##x##. Thus the extension is unique.
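As an aside (my addition, not part of the original solution), this is the classical bounded-linear-transformation extension theorem; a standard application is the Plancherel construction of the Fourier transform on ##L^2##:

```latex
% The Fourier transform is isometric (hence bounded) on the dense
% subspace L^1 \cap L^2 of L^2(\mathbb{R}), by Plancherel's theorem:
\|\mathcal{F}f\|_{L^2} = \|f\|_{L^2},
  \qquad f \in L^1(\mathbb{R}) \cap L^2(\mathbb{R}),
% so the result above extends \mathcal{F} uniquely to a bounded
% (in fact isometric) operator on all of L^2(\mathbb{R}).
```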
 
  • #150
Infrared said:
You should be wary of taking "consecutive elements" of a countable set. For example, there are no consecutive elements in ##\mathbb{Q}##.

I agree. Dealing with infinity or infinite sized sets is very tricky, more so for novices. Like everyone else, I have an infinite amount of things to learn, but the infinity I am yet to learn is larger than those of the experts in this forum :smile:

I have a doubt, since you mentioned ##\mathbb{Q}##. Is there any function, expressible using only common standard functions (i.e. not requiring an infinite series), that is continuous and differentiable on ##(0,1)## and whose zeros are exactly the rational numbers there, with no other values as zeros?

Infrared said:
Also, this problem has a cute one-line solution: apply Rolle's theorem to the function ##g(x)=e^{-x}f(x)## on the interval ##[0,1]##. This is just the exponential of @wrobel's function, and works for the same reason, but has the advantage of being defined everywhere, so there isn't any need to find a sub-interval to work with.

Thanks for that example! It is nice to be able to prove something using just one simple theorem. But I am a bit confused about this function's relation to @wrobel's. @wrobel's function uses absolute value of ##f(x)## whereas yours does not, so your example appears to be of similar form but not exactly the same as the exponential of @wrobel's function. Am I missing something?
 
