# Prove f(x) is zero in range [-1,1]

WWGD
Gold Member
My idea is that ##f(x)=2xf(x^2-1)=2x\cdot 2(x^2-1)f((x^2-1)^2-1)=\dots=2^np_n(x)f(q_n(x))##, where ##p_n## and ##q_n## are polynomials of degree ##2^n-1## and ##2^n## respectively. If we can prove that ##\lim_{n\to\infty}q_n(x)=0##, then I think we have it.
But will the iterates ##q_n(x)## remain within ##[-1,1]##?
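A quick numerical look at the iteration (Python; the starting point ##0.5## is an arbitrary choice) suggests the iterates do stay in ##[-1,1]##, but they do not converge to ##0##:

```python
# Iterate q_{n+1}(x) = q_n(x)^2 - 1, i.e. the map g(x) = x^2 - 1,
# from a sample starting point.
x = 0.5
orbit = [x]
for _ in range(20):
    x = x * x - 1
    orbit.append(x)

# Every iterate stays inside [-1, 1] ...
assert all(-1 <= v <= 1 for v in orbit)
# ... but the sequence approaches the 2-cycle {0, -1} rather than a limit:
# odd-index iterates end up near -1, even-index ones near 0.
assert abs(orbit[19] + 1) < 1e-9 and abs(orbit[20]) < 1e-9
```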

fresh_42
Mentor
I still think that ##F = F \circ g## with ##g(x)=x^2-1\, , \,F'(x)=f(x)## and ##F(1)=F(0)=F(-1)## is the key. The equation is written this way too invitingly. Why else should there be the factor two?
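A numerical illustration of why the factor ##2x## is so suggestive: for any differentiable ##F##, the chain rule gives ##\frac{d}{dx}F(x^2-1)=2x\,F'(x^2-1)##. A sketch (Python; ##F=\sin## and ##f=\cos## are arbitrary stand-ins with ##F'=f##, not the ##f## of the problem):

```python
import math

# Chain rule check: d/dx F(x^2 - 1) = 2x * f(x^2 - 1) when F' = f.
# F = sin and f = cos are arbitrary stand-ins, not the f of the problem.
F, f = math.sin, math.cos
g = lambda x: x * x - 1

x, h = 0.7, 1e-6
numeric = (F(g(x + h)) - F(g(x - h))) / (2 * h)   # central difference
exact = 2 * x * f(g(x))
assert abs(numeric - exact) < 1e-6
```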

Delta2
Homework Helper
Gold Member
I still think that ##F = F \circ g## with ##g(x)=x^2-1\, , \,F'(x)=f(x)## and ##F(1)=F(0)=F(-1)## is the key. The equation is written this way too invitingly. Why else should there be the factor two?

Yes, you have a point here: the factor ##2## "screams" that ##2x## is the derivative of ##x^2-1##.

Still, I believe it can be done my way, without integrals and derivatives. My main difficulty is proving that ##2^np_n(x)## is bounded, but I believe it can be done.

We have ##p_n(x)=q_1(x)q_2(x)\cdots q_{n-1}(x)##.
For ##q_n(x)## we have ##\lim_{n\to\infty}q_{2n}(x)=0##, hence we can find a ##k(x)## such that ##q_{2n}(x)>-\frac{1}{4}## for ##n>k(x)##.

So ##p_n(x)=q_1(x)\cdots q_{2k(x)}(x)\,q_{2k(x)+1}(x)\cdots q_n(x)=P_{2k(x)}(x)\,q_{2k(x)+1}(x)\cdots q_n(x)##.

To prove that ##2^np_n(x)## is bounded, we break it into
##2^{2k(x)}P_{2k(x)}(x)\cdot 2^{n-2k(x)}q_{2k(x)+1}(x)\cdots q_n(x)##.

##2^{2k(x)}P_{2k(x)}(x)## is bounded because it is a finite product. The remaining part of the product is, I believe, also bounded for any ##n##, because ##q_{2i}(x)>-\frac{1}{4}## for any ##i>k(x)##.
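A numerical check of this boundedness claim at a sample point (Python; ##x=0.5## is an arbitrary choice, and the product tracked below is ##2^n p_n(x)=\prod_{k=0}^{n-1} 2\,q_k(x)## with ##q_0(x)=x##):

```python
# Track 2^n p_n(x) = product of the factors 2*q_k(x) along the orbit
# q_{k+1}(x) = q_k(x)^2 - 1, starting from q_0(x) = x.
x = 0.5
prod = 1.0
history = []
for _ in range(30):
    prod *= 2 * x      # multiply in the factor 2*q_k(x)
    x = x * x - 1      # advance to q_{k+1}(x)
    history.append(prod)

# At this sample point the product is not just bounded: once the even
# iterates get close to 0, it collapses toward 0 very quickly.
assert all(abs(p) < 3 for p in history)
assert abs(history[-1]) < 1e-12
```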

fresh_42
Mentor
Yes, you have a point here: the factor ##2## "screams" that ##2x## is the derivative of ##x^2-1##. Still, I believe it can be done my way, without integrals and derivatives. My main difficulty is proving that ##2^np_n(x)## is bounded, but I believe it can be done.
But this still leaves the same convergence problem as with Ray's idea: ##q_n(x)## alternates between ##-1## and ##0## and I'm not sure that the boundaries won't cause problems. The compactness of ##[-1,1]## might play a role, but it is simultaneously an obstacle to get a "norm" below ##1##.

Delta2
Homework Helper
Gold Member
But this still leaves the same convergence problem as with Ray's idea: ##q_n(x)## alternates between ##-1## and ##0## and I'm not sure that the boundaries won't cause problems. The compactness of ##[-1,1]## might play a role, but it is simultaneously an obstacle to get a "norm" below ##1##.
##q_n(x)## alternates, but ##f(q_n(x))## converges because ##f(-1)=f(0)=0##.

fresh_42
Mentor
##q_n(x)## alternates, but ##f(q_n(x))## converges because ##f(-1)=f(0)=0##.
I haven't checked, but what happens at ##\xi = \dfrac{1}{2}(1-\sqrt{5})## with ##q_n(\xi)## and ##p_n(\xi)\,##?

Delta2
Homework Helper
Gold Member
##q_n(\xi)=\xi## and ##p_n(\xi)=\xi^n##. No problem here either, because ##f(\xi)=0##.

fresh_42
Mentor
Yes, but ##2^n\xi^n## explodes, and since we don't know much about ##f##, we cannot say that this is compensated by ##f(q_n(x))##; at least we don't know it.
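Indeed ##\xi## is the fixed point of ##g(x)=x^2-1## in ##(-1,0)##, and ##|2\xi|=\sqrt{5}-1\approx 1.236>1##, so ##|2^n\xi^n|\to\infty##. A quick check (Python):

```python
import math

xi = (1 - math.sqrt(5)) / 2          # fixed point: xi^2 - 1 = xi
assert abs(xi * xi - 1 - xi) < 1e-12
assert abs(2 * xi) > 1               # |2*xi| = sqrt(5) - 1 ≈ 1.236
# hence |(2*xi)**n| grows without bound:
assert abs((2 * xi) ** 100) > 1e8
```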

Delta2
Homework Helper
Gold Member
You can prove ##f(\xi)=0## another way; check post #8. We must exclude ##\xi## from this treatment and handle it as a special case.

fresh_42
Mentor
You can prove ##f(\xi)=0## another way; check post #8. We must exclude ##\xi## from this treatment and handle it as a special case.
That's not my point. ##f(\xi)=0## follows directly from the defining equation. But excluding ##\xi## might not be sufficient, because ##2^n (\xi \pm \varepsilon)^n## also diverges and polynomials are continuous, so at first glance the construction looks unstable.

Delta2
Homework Helper
Gold Member
Even for ##x## neighbouring ##\xi##, ##2^np_n(x)## is totally different from ##2^n(\xi \pm \epsilon)^n##.

fresh_42
Mentor
Even for ##x## neighbouring ##\xi##, ##2^np_n(x)## is totally different from ##2^n(\xi \pm \epsilon)^n##. I don't think the limit ##\lim_{x\to\xi}p_n(x)##, for ##x## near but not equal to ##\xi##, is equal to ##2^n(\xi \pm \epsilon)^n##.
I'm not saying it won't work, only that it requires more care. It's not obvious, at least to me, why it will work. I assume that some topological argument might be needed to make the step from iterations to the continuum. I would prefer a theorem before cases, but that's a personal view.

WWGD
Gold Member
Even for ##x## neighbouring ##\xi##, ##2^np_n(x)## is totally different from ##2^n(\xi \pm \epsilon)^n##.

If you are using that to approximate ##f## on ##[0,1]##, then the limit should be finite, since a continuous function on a compact set is bounded. But then you have a limit of the type ##\infty \cdot \epsilon##, which must be dominated by the ##\epsilon## part in order for the limit to be finite.

WWGD
Gold Member
Part of the problem is that there are continuous functions on compact intervals with uncountably many zeros, even with the added condition that there are nonzero values between any two zeros, i.e., without ##f## vanishing on an entire interval ##(a,b)##; e.g. the distance function ##d(x,C)##, for ##C## a Cantor set. So it is difficult to rule out many candidates on these grounds alone. Basically, any closed subset can be the zero set of a continuous function. This allows a great number of functions with uncountably many zeros in a compact space; I had thought such functions would be rare.

StoneTemplePython
Gold Member
It seems like a lot more brain power has gone into this than perhaps the question wanted...

Another approach would be to show that for any ##x \in (-1,0)##, after a large enough finite number of iterations the resulting ##x## becomes 'trapped' in ##(-1, -0.99] \cup [-0.01, 0)##, and that, with respect to the magnitude of the function, the effect of the ##x## scaling over two iterations is a multiple less than one fourth, which overwhelms the scaling of ##2^2 = 4##; hence the infinite product decays toward zero. This can be shown for 3 iterations at a time as well. It is sufficient to show this for 10 iterations starting from ##x = -0.5##.
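This trapping behaviour is easy to probe numerically. A minimal sketch (Python; the starting point ##-0.5## and the cut-off of 6 iterations are illustrative choices, not the exact constants of the argument):

```python
# Orbit of g(x) = x^2 - 1 from a sample starting point in (-1, 0).
x = -0.5
orbit = [x]
for _ in range(20):
    x = x * x - 1
    orbit.append(x)

# After a handful of iterations the consecutive iterates pair up as
# (near -1, near 0), so |x_k * x_{k+1}| < 1/4, and the combined factor
# 2^2 * |x_k * x_{k+1}| over two iterations is below 1.
pairs = [abs(orbit[k] * orbit[k + 1]) for k in range(6, 19)]
assert all(p < 0.25 for p in pairs)
```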

With respect to the ##2x## factor: another way to think about it is via ##GM \leq AM##, where the inequality is strict unless all values are equal. If we want to have some fun, look at the magnitudes of the scalars ##\big\vert \approx 0\big\vert## and ##\big\vert \approx -1\big\vert##: their arithmetic mean is ##\frac{1}{2}##, which is strictly larger than their geometric mean. Hence the effect of scaling by ##x## in our infinite product must overwhelm the scaling by ##2##, and the infinite product decays toward zero.

We've only been taught limits, derivatives and integration.
I tried using derivatives.
f(x) = 2x f(x^2 - 1)
Let f(x^2 - 1) = g(x)
So
f(x) = 2x g(x)
f'(x) = 2x g'(x) + 2 g(x)
f'(x) = 2x f'(x)(2x) + 2 f(x^2 - 1)
f'(x) (1 - 4x^2) = 2 f(x^2 - 1)
So we get f'(x) = 2 f(x^2 - 1)/(1 - 4x^2)

Now we have that f(-1) = f(1) = 0.
So if f(x) is nonzero at any point between -1 and 1, then f'(x) must be positive somewhere and negative somewhere.
But above we see that f'(x) is an even function.
So f'(x) does not satisfy the condition of being both positive and negative. It's either positive or negative, and if it's one of them then it can have only one zero.
And so we can say that f(x) is constantly zero.

Aren't all continuous functions differentiable? I'm not sure what the difference is. Sine and cosine are continuous and differentiable, so I guess it should be true.

PeroK
Homework Helper
Gold Member
2020 Award
Aren't all continuous functions differentiable? I'm not sure what the difference is. Sine and cosine are continuous and differentiable, so I guess it should be true.

Not all continuous functions are differentiable. One example is the modulus ##|x|##, which is not differentiable at ##x = 0##.

There is, in fact, the Weierstrass function, which is continuous but nowhere differentiable!

https://en.wikipedia.org/wiki/Weierstrass_function
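The one-sided difference quotients of ##|x|## at ##0## make this concrete (a minimal numerical check in Python):

```python
# One-sided difference quotients of f(x) = |x| at x = 0.
f = abs
h = 1e-8
right = (f(0 + h) - f(0)) / h     # -> +1
left = (f(0 - h) - f(0)) / (-h)   # -> -1
assert abs(right - 1) < 1e-9
assert abs(left + 1) < 1e-9
# The two limits disagree, so f'(0) does not exist.
```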

So I guess we can consider two cases:
1) f(x) is differentiable; then we can go with my solution.
2) f(x) is not differentiable. Any idea how to prove it in this case?

PeroK
Homework Helper
Gold Member
2020 Award
We've only been taught limits, derivatives and integration.
I tried using derivatives.
f(x) = 2x f(x^2 - 1)
Let f(x^2 - 1) = g(x)
So
f(x) = 2x g(x)
f'(x) = 2x g'(x) + 2 g(x)
f'(x) = 2x f'(x)(2x) + 2 f(x^2 - 1)
f'(x) (1 - 4x^2) = 2 f(x^2 - 1)
So we get f'(x) = 2 f(x^2 - 1)/(1 - 4x^2)

Now we have that f(-1) = f(1) = 0.
So if f(x) is nonzero at any point between -1 and 1, then f'(x) must be positive somewhere and negative somewhere.
But above we see that f'(x) is an even function.
So f'(x) does not satisfy the condition of being both positive and negative. It's either positive or negative, and if it's one of them then it can have only one zero.
And so we can say that f(x) is constantly zero.

First:

##g'(x) = 2xf'(x^2-1) \ne 2xf'(x)##

But also, I don't understand how you get that an even function can't take positive and negative values.

I wonder where you got this question, because it seems to be a little beyond your level at the moment.

PeroK
Homework Helper
Gold Member
2020 Award
So I guess we can consider two cases:
1) f(x) is differentiable; then we can go with my solution.
2) f(x) is not differentiable. Any idea how to prove it in this case?


See @Ray Vickson's solution in post #14.

Delta2
Homework Helper
Gold Member
If you are allowed to use integrals and derivatives, follow Ray Vickson's solution in post #14. It is mostly correct, and it uses the theorem that ##F(x)=\int_0^x f(y)\,dy## is differentiable with ##F'(x)=f(x)## (the fundamental theorem of calculus). It doesn't need the assumption that f(x) is differentiable.

If you are not allowed to use integrals and derivatives, then you can follow my approach, explained in posts #15, #20 and #28, which I believe is also correct.

Delta2
Homework Helper
Gold Member
If you are using that to approximate ##f## on ##[0,1]##, then the limit should be finite, since a continuous function on a compact set is bounded. But then you have a limit of the type ##\infty \cdot \epsilon##, which must be dominated by the ##\epsilon## part in order for the limit to be finite.

In ##2^np_n(x)## the factor ##p_n(x)## dominates: for any ##x\neq\xi## we have ##\lim_{n\to\infty}p_n(x)=0##, and the convergence of ##p_n(x)## to ##0## is fast enough to overcome ##2^n##. I believe I prove this in post #28.
