# Math Challenge - April 2020

Going from ##\lim_{y\to 0} \frac{y}{\sqrt{\sin^2{y/2}}}## to ##\lim_{y \to 0} \frac{y}{\sin{y/2}}## was wrong since approaching ##0## from the left is different from approaching it from the right when it comes to the sine function. Remember that ##\sqrt{u^2}=|u|##.
You also made an error when you applied this ##\sin^2{(x/2)}=\frac{1-\cos x}{2}##.
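A quick numerical sanity check of the one-sided behaviour (a plain-Python sketch; the helper name `f` is mine):

```python
import math

def f(y):
    # y / sqrt(sin^2(y/2)) equals y / |sin(y/2)|, which has different
    # one-sided limits at 0: +2 from the right, -2 from the left
    return y / math.sqrt(math.sin(y / 2) ** 2)

print(f(1e-8))   # close to  2
print(f(-1e-8))  # close to -2
```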

Last edited:
Infrared
Gold Member
#10: Hint:

Even if ##f'## is not continuous, it still has the intermediate value property, in common with continuous functions. Hence either the problem is true, or else ##f-f'## is always either positive or negative on ##(0,1)##.

I'm sure you're aware of this, but the set of functions satisfying the intermediate value property is not closed under addition. Here it should be fine anyway though, since ##f+f'=(F+f)'## where ##F## is a primitive for ##f##.

mathwonk
Going from ##\lim_{y\to 0} \frac{y}{\sqrt{\sin^2{y/2}}}## to ##\lim_{y \to 0} \frac{y}{\sin{y/2}}## was wrong since approaching ##0## from the left is different from approaching it from the right when it comes to the sine function. Remember that ##\sqrt{u^2}=|u|##
You mean that approaching from the left would make ##\sin## negative and approaching from the right would make ##\sin## positive?

archaic
You also made an error when you applied this ##\sin^2{(2x)}=\frac{1-\cos x}{2}##.
I corrected it upon your advice, thank you for that. But please correct that ##\sin 2x## to ##\sin x/2##

archaic
You mean that approaching from the left would make ##\sin## negative and approaching from the right would make ##\sin## positive?
Yep!

Mentor
We have ##n\in\mathbb{N}=\{0,\,1,\,2,\,...\}## (just clarifying because I initially thought of ##\mathbb{N}## as not having ##0##).
Since all the zeroes of ##P(x)## are negative, and its domain is ##\mathbb{R}##, ##\frac{1}{P(x)}## is well defined for ##x\geq0##.
Moreover, the sign of ##\frac{1}{P(x)}## is constant for ##x\geq r_0##, where ##r_0## is the biggest zero of ##P(x)##, and is the same as ##P(x)##'s. As such, it is positive because ##P(x)\underset{\infty}{\sim}x^n## and ##x^n\geq0## when ##x\to+\infty##.
The integral ##\int_1^\infty\frac{dx}{P(x)}## is improper only at ##\infty##, because of what is stated above about the zeroes of ##P(x)##. I will use this test for the convergence of improper integrals:
If ##f## and ##g## are two (piecewise) continuous and positive real functions on ##[a,\,b)##, where ##a\in\mathbb{R}## and ##b## is either real or infinite, and ##f(x)\underset{b}{\sim}g(x)##, then ##\int_a^bf(x)\,dx## converges ##\Leftrightarrow\int_a^bg(x)\,dx## converges.
$$\lim_{x\to\infty}\frac{\frac{1}{P(x)}}{\frac{1}{x^n}}=\lim_{x\to\infty}\frac{x^n}{P(x)}=\lim_{x\to\infty}\frac{x^n}{x^n\left(1+\frac{a_{n-1}}{x}+\frac{a_{n-2}}{x^2}+...+\frac{a_0}{x^n}\right)}=1$$
$$\int_1^\infty\frac{dx}{x^n}\text{ converges }\Leftrightarrow n>1\text{, by p-test}$$
Using the theorem stated above, ##\int_1^\infty\frac{dx}{P(x)}## converges if and only if ##n>1##, i.e. ##n\geq2##.
I still don't like the inaccuracy of "behaves like" because it doesn't capture the lower-order terms quantitatively. We have ##\dfrac{1}{x^n}<\dfrac{1}{x^{n-1}}##, so why can we neglect those terms of lower order? In other words: why is ##g(x)=x^{-n}## suited, if ##f(x)=P(x)^{-1}## is actually bigger than ##g(x)##?

A better function ##g(x)## should be chosen, one which is actually greater than ##x^{-n}## and not only "near" it, and the estimate ##\int 1/P(x)\, dx < \int g(x)\,dx## should be made precise.

Your argument goes: "I have a function (##x^{-n}##), which is possibly smaller than ##1/P(x)##, so it cannot guarantee convergence, but I don't care, since my error is small." However, you did not show that this heuristic is allowed.

I still don't like the inaccuracy of "behaves like" because it doesn't capture the lower-order terms quantitatively. We have ##\dfrac{1}{x^n}<\dfrac{1}{x^{n-1}}##, so why can we neglect those terms of lower order? In other words: why is ##g(x)=x^{-n}## suited, if ##f(x)=P(x)^{-1}## is actually bigger than ##g(x)##?

A better function ##g(x)## should be chosen, one which is actually greater than ##x^{-n}## and not only "near" it, and the estimate ##\int 1/P(x)\, dx < \int g(x)\,dx## should be made precise.

Your argument goes: "I have a function (##x^{-n}##), which is possibly smaller than ##1/P(x)##, so it cannot guarantee convergence, but I don't care, since my error is small." However, you did not show that this heuristic is allowed.
It is actually a theorem:

Do you want me to prove it?

Mentor
It is actually a theorem:
View attachment 259904
Do you want me to prove it?
You don't need the proof for a specific example. An idea, i.e. a better choice of ##g(x)## would do the job without the machinery. The proof must use a measure for the error, too, so this idea should be applied to the example. I have only 4 short lines for it, but it makes clear why this is true despite the fact that ##1/x^n < 1/x^{n-1}##. I mean this is the crucial point of the entire question, and hiding it in a theorem doesn't really help understanding.

You don't need the proof for a specific example. An idea, i.e. a better choice of ##g(x)## would do the job without the machinery. The proof must use a measure for the error, too, so this idea should be applied to the example. I have only 4 short lines for it, but it makes clear why this is true despite the fact that ##1/x^n < 1/x^{n-1}##. I mean this is the crucial point of the entire question, and hiding it in a theorem doesn't really help understanding.
Hm, maybe you're looking for ##f(x)=\frac{c}{x^n}## where ##c>1##, so that ##1/P(x)<f(x)## for large ##x##?
With that, I can also do a comparison test:
Let ##g(x)=1/f(x)##. We know that after some ##x=N>0## we have ##P(x)\geq g(x)## since ##\lim_{x\to\infty}\left(g(x)-P(x)\right)=-\infty<0##.
It follows that for ##x\geq N## we get ##\frac{1}{P(x)}\leq\frac{c}{x^n}##, and so ##\int_N^\infty\frac{dx}{P(x)}\leq\int_N^\infty c\frac{dx}{x^n}##.
If ##n\geq2##, the second integral converges by the p-test, so the first also does by comparison test. And since ##\int_1^N\frac{dx}{P(x)}## converges (because ##\frac{1}{P(x)}## is well defined for ##x\geq0##), ##\int_1^N\frac{dx}{P(x)}+\int_N^\infty\frac{dx}{P(x)}=\int_1^\infty\frac{dx}{P(x)}## also converges.
If ##n=1##, then ##\int_1^\infty\frac{dx}{x+a_0}=\int_{1+a_0}^\infty\frac{du}{u}## which diverges by the p-test.
If ##n=0##, then ##\int_1^\infty dx## surely diverges.
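A numerical sketch of the inequality ##\frac{1}{P(x)}\leq\frac{c}{x^n}## for a sample polynomial (my own choice of ##P##, ##c##, and sample points):

```python
def P(x):
    return (x + 1.0) * (x + 5.0)  # sample n = 2 polynomial, roots -1 and -5

c, n = 1.5, 2
# 1/P(x) <= c/x^n is equivalent to x^n / c <= P(x)
ok = all(x**n / c <= P(x) for x in (1.0, 10.0, 100.0, 1000.0))
print(ok)  # True for this sample
```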

Mentor
Hm, maybe you're looking for ##f(x)=\frac{c}{x^n}## where ##c>1##, so that ##1/P(x)<f(x)## for large ##x##?
With that, I can also do a comparison test:
Let ##g(x)=1/f(x)##. We know that after some ##x=N>0## we have ##P(x)\geq g(x)## since ##\lim_{x\to\infty}\left(g(x)-P(x)\right)=(1-c)/c<0##.
It follows that for ##x\geq N## we get ##\frac{1}{P(x)}\leq\frac{c}{x^n}##, and so ##\int_N^\infty\frac{dx}{P(x)}\leq\int_N^\infty c\frac{dx}{x^n}##.
If ##n\geq2##, the second integral converges by the p-test, so the first also does by comparison test. And since ##\int_1^N\frac{dx}{P(x)}## converges (because ##\frac{1}{P(x)}## is well defined for ##x\geq0##), ##\int_1^N\frac{dx}{P(x)}+\int_N^\infty\frac{dx}{P(x)}=\int_1^\infty\frac{dx}{P(x)}## also converges.
If ##n=1##, then ##\int_1^\infty\frac{dx}{x+a_0}=\int_{1+a_0}^\infty\frac{du}{u}## which diverges by the p-test.
If ##n=0##, then ##\int_1^\infty dx## surely diverges.
Nothing follows, and how can we know what you claim without specifying the functions and constants you use? How do you find such a ##c##? What are ##f## and ##g##? You are sloppy and hand-waving in your arguments.

We have to show that
$$\int_1^\infty \left|\dfrac{1}{P(x)}\right|\,dx = \int_1^\infty \left|\dfrac{1}{x^n+a_{n-1}x^{n-1}+\ldots+a_0}\right|\,dx < \infty$$
If you use a comparison with ##\int_1^\infty \left|P(x)^{-1}\right|\,dx \leq \int_1^\infty g(x)\,dx <\infty## then I want to know what ##g(x)## is and why the inequality holds. The argument is obviously wrong if we integrate over zeroes of ##P(x)##. Where did you use that we do not have this case, except within the theorem somewhere? E.g. ##g(x)= x^n > x^n-x^2 =:P(x)## does not work. So which ##g(x)## can we use and why?

archaic
##x^n-x^2=:P(x)##
This can't be as it has a positive zero.
I have shown in my second solution post that ##P(x)## is positive after its greatest zero, which is negative, given the premises, so ##|P(x)|=P(x)## for ##x\geq0##.
##f(x)=c/x^n## where ##c\in(1,\,\infty)##, any number does the job, and ##g(x)=1/f(x)##.
Why does any ##c## in that interval do the job? Because (for ##n\geq2##):
$$\lim_{x\to\infty}\left(g(x)-P(x)\right)=\lim_{x\to\infty}\left(\frac{x^n}{c}-x^n-a_{n-1}x^{n-1}-\ldots-a_0\right)=-\infty<0$$
This tells me that, at some point, ##P(x)>g(x)\Leftrightarrow \frac{1}{P(x)}<f(x)##.
EDIT: I have initially written the result of that limit as ##(1-c)/c##, even in the previous post (I corrected it), sorry!

Mentor
This is what I meant by sloppy: the limit above is infinite, not ##(1-c)/c##, and one of the last occurrences of ##P(x)## should probably be ##P(x)^{-1}##. A constant is not sufficient, as long as you have no control over the other coefficients. They can either exceed or remain below the bounds any specific choice of ##c## gives. So you either choose a non-constant numerator, or you deal with the arbitrary terms of lower degree.

Of course, the negative zeroes are the key, but you only used the leading term of the polynomial, so all others can be arbitrary. The negative roots constrain this arbitrariness, but I cannot see how. Given any polynomial, you cannot see where the roots are, so the low order terms play a role and cannot be ignored.

The error lies less in the proof, which is valid if (and only if) we apply the theorem, and more in your logic.

Of course, the negative zeroes are the key, but you only used the leading term of the polynomial, so all others can be arbitrary. The negative roots constrain this arbitrariness, but I cannot see how. Given any polynomial, you cannot see where the roots are, so the low order terms play a role and cannot be ignored.
##P(x)## can be written as ##(x+|r_n|)(x+|r_{n-1}|)...(x+|r_1|)##, where ##r_i## are the negative roots, because it is a polynomial. Since ##|r_i|>0##, and thus the coefficients are all positive:
$$x^n \leq x^n+a_{n-1}x^{n-1}\leq...\leq P(x)\Leftrightarrow \frac{1}{x^n}\geq\frac{1}{P(x)}\Leftrightarrow\int_1^\infty\frac{dx}{x^n}\geq\int_1^\infty\frac{dx}{P(x)}$$
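The chain of inequalities can be checked numerically for a randomly chosen set of negative roots (a sketch; the factorization has all factors ##x+|r_i|\geq x## for ##x\geq0##, so the product is at least ##x^n##):

```python
import random

random.seed(0)
roots = [-random.uniform(0.1, 5.0) for _ in range(4)]  # four negative roots

def P(x):
    # P(x) = (x + |r_1|)...(x + |r_n|); each factor >= x when x >= 0
    prod = 1.0
    for r in roots:
        prod *= x + abs(r)
    return prod

n = len(roots)
ok = all(x**n <= P(x) for x in (0.0, 0.5, 1.0, 10.0, 100.0))
print(ok)  # True, hence 1/x^n >= 1/P(x) wherever both sides are defined
```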

This is what I meant by sloppy: the limit above is infinite, not ##(1-c)/c##, and one of the last occurrences of ##P(x)## should probably be ##P(x)^{-1}##.
Thank you, corrected it also.

A constant is not sufficient, as long as you have no control over the other coefficients. They can either exceed or remain below the bounds any specific choice of ##c## gives.
But since the limit of the difference is ##-\infty##, mustn't it be the case that, after some ##x##, ##P(x)## is strictly bigger than ##x^n/c## forever?

Mentor
##P(x)## can be written as ##x(x+|r_{n-1}|)...(x+|r_1|)##, where ##r_i## are the negative roots, because it is a polynomial. Since ##|r_i|>0##, and thus the coefficients are all positive:
$$x^n \leq x^n+a_{n-1}x^{n-1}\leq...\leq P(x)\Leftrightarrow \frac{1}{x^n}\geq\frac{1}{P(x)}\Leftrightarrow\int_1^\infty\frac{dx}{x^n}\geq\int_1^\infty\frac{dx}{P(x)}$$
Well, you used methods which are normally beyond the scope of high school students: here the fundamental theorem of algebra, the integral comparison earlier.

It would have been much easier and more straightforward with ##g(x)=x^{-n+1/2}##.
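A sketch of why this choice works: ##x^{n-1/2}\leq P(x)## eventually (their ratio tends to infinity), while ##\int_1^\infty x^{-n+1/2}\,dx=\frac{1}{n-3/2}## is finite for ##n\geq2##. The sample ##P## below is my own:

```python
def P(x):
    return (x + 2.0) * (x + 3.0)  # sample n = 2 polynomial with negative roots

n = 2
# x^(n - 1/2) <= P(x) here, since P(x) / x^(n - 1/2) ~ sqrt(x) -> infinity
ok = all(x ** (n - 0.5) <= P(x) for x in (1.0, 10.0, 1000.0))
integral = 1.0 / (n - 1.5)  # closed form of the integral of x^(-n + 1/2) on [1, inf)
print(ok, integral)
```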

And .... there is again a sloppiness: ##P(0)\neq 0##.

archaic
And .... there is again a sloppiness: ##P(0)\neq 0##.

Question 13 a) (after the error was pointed out by @archaic, he deserves the credit)

$$\lim_{x\to \pi/2} \frac{x-\pi/2} {\sqrt {1-\sin x} }$$ Let ##y= x- \pi/2 ## $$\lim_{y\to 0} \frac{y}{\sqrt { 1- \sin (y+\pi/2)}}$$

$$\lim_{y\to 0} \frac{y}{ \sqrt { 1- \cos y}} \\ \lim_{y\to 0} \frac{y}{\sqrt { 2 \sin^2 y/2}} \\ \lim_{y\to 0} \frac{1}{\sqrt 2} \frac{y} {\sqrt {\sin^2 (y/2)}}$$
Now, let's look at a crucial fact: ##\sqrt{x^2}=|x|##. One might write ##\sqrt{x^2} = \left( \sqrt x \right)^2##, but if ##x## is negative the RHS of that equality breaks down; therefore, if ##x## can take both +ve and -ve values, we must keep the absolute value, ##|x|##.
Coming back to our limit $$\frac{1}{\sqrt 2} \lim_{y\to 0} \frac{y}{|\sin y/2 |}$$
$$\frac{1}{\sqrt 2} \lim_{y\to 0^-} \frac{y/(y/2)}{-\sin(y/2)/(y/2)} = -\sqrt 2$$ And $$\frac{1}{\sqrt 2} \lim_{y\to 0^+} \frac{y/(y/2)}{\sin(y/2)/(y/2)} = \sqrt 2$$
Hence, the limit doesn't exist.
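The two one-sided limits can be confirmed numerically (a sketch; the helper name `h` is mine):

```python
import math

def h(x):
    # the original expression (x - pi/2) / sqrt(1 - sin x)
    return (x - math.pi / 2) / math.sqrt(1.0 - math.sin(x))

a = math.pi / 2
print(h(a + 1e-6))  # close to  sqrt(2) ~  1.414
print(h(a - 1e-6))  # close to -sqrt(2) ~ -1.414
```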

QuantumQuest
Problem #4
Solve$$y'(x)=y^2(x)-2\left(x+\frac{1}{2}\right)y(x) +x^2 +x+1$$We complete the square on the r.h.s.$$y'(x)=\left(y-\left(x+\frac{1}{2}\right)\right)^2 +\frac{3}{4}$$and make the substitution$$v(x)=y(x)-\left(x+\frac{1}{2}\right),\qquad v'(x)=y'(x)-1$$to obtain$$v'(x)=v^2(x)-\frac{1}{4}$$Separate variables:$$\int \frac{dv}{v^2-\frac{1}{4}}=\int dx,\qquad \int \frac{dv}{v^2-\frac{1}{4}}=\int \frac{dv}{\left(v+\frac{1}{2}\right)\left(v-\frac{1}{2}\right)}=\int\left[\frac{-1}{v+\frac{1}{2}}+\frac{1}{v-\frac{1}{2}}\right]dv=\ln\left(\frac{v-\frac{1}{2}}{v+\frac{1}{2}}\right)$$so$$\frac{v-\frac{1}{2}}{v+\frac{1}{2}}=Ce^x$$where ##C## is the constant of integration. Then$$v(x)=\frac{1}{2}\,\frac{1+Ce^x}{1-Ce^x},\qquad y(x)=\frac{1}{2}\left[\frac{1+Ce^x}{1-Ce^x}+2x+1\right],\qquad y(0)=\frac{1}{1-C}$$For the case ##y(0)=0## we take the limit as ##C## goes to infinity:$$\lim_{C \to \infty}y(x)=x$$For the case ##y(0)=1##, ##C=0## and$$y=1+x$$For the case ##y(0)=2##, ##C=\frac{1}{2}## and$$y(x)=\frac{1}{2}\left[\frac{1+\frac{1}{2}e^x}{1-\frac{1}{2}e^x}+2x+1\right]$$
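The cases can be cross-checked numerically by integrating the ODE with a classical Runge-Kutta step (a sketch; `rk4`, `rhs`, and the step count are my own, not part of the solution):

```python
def rhs(x, y):
    # right-hand side of y' = y^2 - 2(x + 1/2) y + x^2 + x + 1
    return y * y - 2.0 * (x + 0.5) * y + x * x + x + 1.0

def rk4(y0, x_end, steps=1000):
    # classical 4th-order Runge-Kutta from x = 0 to x = x_end
    h = x_end / steps
    x, y = 0.0, y0
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h * k1 / 2)
        k3 = rhs(x + h / 2, y + h * k2 / 2)
        k4 = rhs(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

print(rk4(0.0, 1.0))  # y(0) = 0 should follow y(x) = x,     so about 1.0
print(rk4(1.0, 1.0))  # y(0) = 1 should follow y(x) = 1 + x, so about 2.0
```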

Mentor
Problem #4

Solve$$y'(x)=y^2(x)-2\left(x+\frac{1}{2}\right)y(x) +x^2 +x+1$$We complete the square on the r.h.s.$$y'(x)=\left(y-\left(x+\frac{1}{2}\right)\right)^2 +\frac{3}{4}$$and make the substitution$$v(x)=y(x)-\left(x+\frac{1}{2}\right),\qquad v'(x)=y'(x)-1$$to obtain$$v'(x)=v^2(x)-\frac{1}{4}$$Separate variables:$$\int \frac{dv}{v^2-\frac{1}{4}}=\int dx,\qquad \int \frac{dv}{v^2-\frac{1}{4}}=\int \frac{dv}{\left(v+\frac{1}{2}\right)\left(v-\frac{1}{2}\right)}=\int\left[\frac{-1}{v+\frac{1}{2}}+\frac{1}{v-\frac{1}{2}}\right]dv=\ln\left(\frac{v-\frac{1}{2}}{v+\frac{1}{2}}\right)$$so$$\frac{v-\frac{1}{2}}{v+\frac{1}{2}}=Ce^x$$where ##C## is the constant of integration. Then$$v(x)=\frac{1}{2}\,\frac{1+Ce^x}{1-Ce^x},\qquad y(x)=\frac{1}{2}\left[\frac{1+Ce^x}{1-Ce^x}+2x+1\right],\qquad y(0)=\frac{1}{1-C}$$For the case ##y(0)=0## we take the limit as ##C## goes to infinity
I assume that means that y(x)=x is the solution.
$$\lim_{C \to \infty}y(x)=x$$For the case ##y(0)=1##, ##C=0## and$$y=1+x$$For the case ##y(0)=2##, ##C=\frac{1}{2}## and$$y(x)=\frac{1}{2}\left[\frac{1+\frac{1}{2}e^x}{1-\frac{1}{2}e^x}+2x+1\right]$$
Have you cross-checked this expression? Mine is different.

Edit: This could either mean that there is more than one solution, or that one of us is wrong. I checked your result and couldn't retrieve the differential equation, but I'm not sure whether I made a mistake myself. Your shift by ##1/2## makes a big mess in the calculations.

QuantumQuest
Gold Member
Well ##2!## is just 2.

And by the definition of derivative $$\lim_{h\to 0} \frac{f'(a+h) - f'(a)}{h} = f''(a)$$

Yes, I (obviously) know that ##2!## is 2. I mentioned it just in case you missed it. Also, it may be useful in the context of the hint I gave.

Now, going on: the definition of the derivative is what you write, but you have an equality, you take the limit of the left side only, you equate it with the right-hand side, and then you write an equality. I can't make sense of it. In any case, if you take both limits from the start they're both ##f''(a)##, at least as I see it, so what does it give anyway?

QuantumQuest
Gold Member
Question 13 a)

$$\lim_{x\to \pi/2} \frac{x-\pi/2}{\sqrt { 1-\sin x}}$$ Let ##y = x - \pi/2## $$\lim_{y\to 0} \frac{y}{\sqrt{ 1- \sin(y+ \pi/2)}}$$. $$\lim_{y\to 0} \frac{y}{\sqrt {1- \cos y}}$$ $$\lim_{y\to 0} \frac{y}{\sqrt{2\sin^2 y/2}}$$
$$\lim_{y \to 0} \frac{y}{\sqrt 2 \sin y/2} \\ \lim_{y\to 0} \frac{ \frac{y}{y/2} } {\sqrt 2\frac {\sin y/2}{y/2} }$$ The denominator in the above limit goes to ##\sqrt2## and the numerator goes to ##2##, hence the answer is ##\sqrt 2##.

Not correct. It's easy to find the mistake and it has to do with some absolute value you missed.

Now, going on: the definition of the derivative is what you write, but you have an equality, you take the limit of the left side only, you equate it with the right-hand side, and then you write an equality. I can't make sense of it. In any case, if you take both limits from the start they're both ##f''(a)##, at least as I see it, so what does it give anyway?
I can explain it this way: write everything on one side; then we can use the theorem “the limit of a sum is the sum of the limits”, so we can take the limit of that “derivative” part and leave everything else as it is, with the limit sign still there. Is that legitimate?

Also, it may be useful in the context of the hint I gave.
Sir, I tried your hint. I Taylor expanded everything but, you know... I couldn't get anything.

QuantumQuest