# Math Challenge - November 2018

Rules:

a) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored. Solutions will be posted around the 15th of the following month.
b) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
c) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
d) You are allowed to use Google, WolframAlpha, or any other resource. However, you are not allowed to search for the question directly. So if the question is to solve an integral, you are allowed to obtain numerical answers from software, and you are allowed to search for useful integration techniques, but you cannot type the integral into WolframAlpha to see its solution.

Referees:

@QuantumQuest
@StoneTemplePython
@Infrared
@wrobel
@fresh_42

Hints: on demand

Questions:

Basics:

1. (solved by @Delta² ) Let $(x_n)$ be a sequence of positive real numbers such that $x_{n+1}\geq\dfrac{x_n+x_{n+2}}{2}$ for each $n$. Show that the sequence is (weakly) increasing, i.e. $x_n\leq x_{n+1}$ for each $n$.
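A quick numerical sanity check of the statement (not a proof; the two concave sample sequences below are arbitrary choices of mine that happen to satisfy the hypothesis):

```python
import math

def satisfies_condition(x, tol=1e-12):
    """Check x[n+1] >= (x[n] + x[n+2]) / 2 for all valid n."""
    return all(x[n + 1] >= (x[n] + x[n + 2]) / 2 - tol for n in range(len(x) - 2))

def weakly_increasing(x, tol=1e-12):
    return all(x[n] <= x[n + 1] + tol for n in range(len(x) - 1))

# Positive "concave-type" sequences satisfying the hypothesis:
samples = [
    [math.sqrt(n) for n in range(1, 200)],
    [math.log(n + 1) for n in range(1, 200)],
]
for x in samples:
    assert satisfies_condition(x)
    assert weakly_increasing(x)   # consistent with the claim
```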

2. (solved by @timetraveller123 ) Let $p(x)$ be a non-constant real polynomial. Suppose that there exists a real number $a$ such that $p(a)\neq 0$ and $p'(a)=p''(a)=0\,.$ Show that not all of the roots of $p$ are real.
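One concrete witness to keep in mind (my own example, not part of the problem): ##p(x)=x^3+1## with ##a=0## satisfies the hypotheses, and indeed it has a conjugate pair of non-real roots.

```python
import cmath

# Hypothetical witness polynomial: p(x) = x^3 + 1, with a = 0.
p   = lambda x: x**3 + 1
dp  = lambda x: 3 * x**2   # p'
d2p = lambda x: 6 * x      # p''

a = 0
assert p(a) != 0 and dp(a) == 0 and d2p(a) == 0

# Its roots are -1 and the non-real pair exp(±iπ/3):
root = cmath.exp(1j * cmath.pi / 3)
assert abs(p(root)) < 1e-9
assert abs(root.imag) > 0.5   # genuinely non-real
```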

3. (solved by @julian ) Find the area enclosed by the curve ##r^2 = a^2\cos 2\theta\,##.

4. (solved by @Buzz Bloom ) (You may use wolframalpha.com for calculations. It is biological nonsense, cp. post #4, but for the sake of the problem, we will make the following assumptions.) One tiny nocturnal and long-living beetle decided one night to climb a sequoia. The tree was exactly ##100\, m## high at this time. Every night the beetle made a distance of ##10\, cm##. The tree grew every day evenly ##20\, cm## along its entire length.
Did the beetle eventually reach the top of the tree? And if so, how many nights will he need at least?
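The setup can be simulated directly; a sketch, assuming the beetle climbs first each night and the uniform daytime growth then rescales its position proportionally (so the fraction of the tree already climbed is preserved):

```python
def nights_to_top(h0=100.0, climb=0.10, growth=0.20):
    """Number of nights until the climbed fraction of the tree reaches 1,
    or None if a (generous) iteration bound is hit first."""
    frac, height, nights = 0.0, h0, 0
    while frac < 1.0:
        frac += climb / height   # night: climb 10 cm of the current height
        height += growth         # day: tree grows 20 cm uniformly
        nights += 1
        if nights > 10**7:
            return None          # would mean: never reaches the top
    return nights

print(nights_to_top())
```

The loop does terminate, i.e. under these assumptions the climbed fraction eventually exceeds 1; the harmonic-series flavour of the increments is the reason.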

5. (solved by @julian ) Show that ##\lim_{n \to \infty} \sqrt[n]{p_1a_1^n + p_2a_2^n + \cdots + p_ka_k^n} = \max \{a_1, a_2, \cdots ,a_k\}## where ##p_1, p_2, \cdots, p_k \gt 0## and ##a_1, a_2, \cdots, a_k \geq 0##.
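A numerical illustration of the limit (the sample ##p_i, a_i## are arbitrary choices of mine):

```python
def root_of_sum(p, a, n):
    """n-th root of p_1 a_1^n + ... + p_k a_k^n."""
    return sum(pi * ai**n for pi, ai in zip(p, a)) ** (1.0 / n)

p, a = [2.0, 0.5, 3.0], [1.0, 4.0, 2.5]
# As n grows, the n-th root approaches max(a) = 4 from above:
vals = [root_of_sum(p, a, n) for n in (10, 50, 200)]
assert abs(vals[-1] - max(a)) < 0.05
assert all(v >= 0.9 * max(a) for v in vals)
```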

6. (solved by @julian ) Show that the sequence ##(a_n)## with ##a_1 = \sqrt[]{b}\; , \;b>0\,##, ##a_{n+1} = \sqrt[]{a_n + b}## for ##n = 1, 2, 3, \ldots## converges to the positive root of the equation ##x^2 - x - b = 0##.
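A quick iteration check against the closed-form positive root ##\frac{1+\sqrt{1+4b}}{2}## of ##x^2-x-b=0## (a sketch; the step count is an arbitrary choice):

```python
import math

def limit_of_recursion(b, steps=200):
    a = math.sqrt(b)               # a_1 = sqrt(b)
    for _ in range(steps):
        a = math.sqrt(a + b)       # a_{n+1} = sqrt(a_n + b)
    return a

for b in (0.5, 1.0, 7.0, 100.0):
    root = (1 + math.sqrt(1 + 4 * b)) / 2
    assert abs(limit_of_recursion(b) - root) < 1e-9
```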

Combinatorics and Probabilities:

7. (solved by @Jacob Nie ) How many times a day is it impossible to determine what time it is, if you have a clock with hour and minute hands of the same length (identical looking), supposing that we always know whether it's morning or evening (i.e. we know whether it's a.m. or p.m.)?
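A brute-force cross-check is possible. The reduction below is my own sketch, not necessarily the intended argument: writing ##t## in hours on ##[0,12)##, swapping the hands gives a valid time exactly when ##\operatorname{frac}(12t)=\operatorname{frac}(t/12)##, which forces ##t=12k/143##; instants where the hands coincide are still readable.

```python
from fractions import Fraction

ambiguous = []
for k in range(143):                 # t = 12k/143 solves 143t/12 ∈ Z
    t = Fraction(12 * k, 143)
    hands_coincide = (t * 11 / 12) % 1 == 0   # then the reading is unique
    if not hands_coincide:
        ambiguous.append(t)

per_day = 2 * len(ambiguous)         # am/pm is known, so just double the 12h count
assert len(ambiguous) == 132
assert per_day == 264
```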

8. For the class of ##n \times n## matrices whose entries, if flattened and sorted, would be ##1, 2, 3, \ldots, n^2 -1, n^2##, prove that there always exist two neighboring entries (in the same row or same column) that differ by at least ##n##.
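The claim can be verified exhaustively for small ##n## (a sketch, not the requested proof):

```python
from itertools import permutations

def max_adjacent_diff(flat, n):
    """Largest |difference| between horizontally or vertically adjacent
    entries of the n x n matrix read row-by-row from `flat`."""
    m = [flat[i * n:(i + 1) * n] for i in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                best = max(best, abs(m[i][j] - m[i + 1][j]))
            if j + 1 < n:
                best = max(best, abs(m[i][j] - m[i][j + 1]))
    return best

# Exhaustive check for n = 2 and n = 3: every arrangement of 1..n^2 has
# two neighbours differing by at least n.
for n in (2, 3):
    assert all(max_adjacent_diff(p, n) >= n
               for p in permutations(range(1, n * n + 1)))
```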

9. There are ##r## sports 'enthusiasts' in a certain city. They are forming various teams to bet on upcoming events. A pair of people dominated last year, so there are new rules in place this year. The peculiar rules are:
each team must have an odd number of members
each and every 2 teams must have an even number of members in common.
For avoidance of doubt, nothing in the rules say a given player can only be on one team.
With these rules in place, is it possible to form more than ##r## teams?

Hint: Model these rules with matrix multiplication and select a suitable field.
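For a small number of people the rule system can also be brute-forced directly (a sketch for ##r=4##; this is not the matrix argument the hint asks for):

```python
from itertools import combinations

r = 4
people = range(r)
# All possible teams of odd size:
odd_teams = [frozenset(c) for size in range(1, r + 1, 2)
             for c in combinations(people, size)]

def legal(family):
    """Every two distinct teams share an even number of members."""
    return all(len(a & b) % 2 == 0 for a, b in combinations(family, 2))

best = max(len(f) for k in range(1, len(odd_teams) + 1)
           for f in combinations(odd_teams, k) if legal(f))
assert best == r    # never more than r teams, matching the claim
```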

Calculus:

10. a) (solved by @eys_physics ) Determine ##\int_1^\infty \frac{\log(x)}{x^3}\,dx\,.##
10. b) (solved by @bhobba ) Determine for which ##\alpha## the integral ##\int_0^\infty x^2\exp(-\alpha x)\,dx## converges.
10. c) (solved by @Keith_McClary ) Find a sequence of functions ##f_n\, : \,\mathbb{R}\longrightarrow \mathbb{R}\, , \,n\in \mathbb{N}## such that $$\sum_\mathbb{N}\int_\mathbb{R}f_n(x)\,dx \neq \int_\mathbb{R}\left(\sum_\mathbb{N}f_n(x) \right) \,dx$$
10. d) (solved by @Delta2 ) Find a family of functions ##f_r\, : \,\mathbb{R} \longrightarrow \mathbb{R}\, , \,r>0## such that
$$\lim_{r \to 0}\int_\mathbb{R}f_r(x)\,dx \neq \int_\mathbb{R} \lim_{r \to 0} f_r(x) \,dx$$
10. e) (resolved in https://www.physicsforums.com/threa...cannot-always-be-swapped.960617/#post-6092583) Find an example for which
$$\dfrac{d}{dx}\int_\mathbb{R}f(x,y)\,dy \neq \int_\mathbb{R}\dfrac{\partial}{\partial x}f(x,y)\,dy$$
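For 10. d), a standard candidate family to keep in mind (my own choice, not necessarily the posted solution) is ##f_r=\frac{1}{r}\cdot\mathbf{1}_{(0,r)}##: every member integrates to ##1##, while the pointwise limit vanishes almost everywhere. A quick numerical illustration:

```python
def f(r, x):
    """f_r = (1/r) * indicator of (0, r)."""
    return 1.0 / r if 0 < x < r else 0.0

def integral(r, n=100000):
    """Midpoint Riemann sum over (-2, 2), which contains the support."""
    h = 4.0 / n
    return sum(f(r, -2 + (i + 0.5) * h) for i in range(n)) * h

for r in (1.0, 0.1, 0.01):
    assert abs(integral(r) - 1.0) < 0.05   # left side: always 1
# pointwise limit at a fixed x > 0 is 0 as soon as r < x:
assert f(0.001, 0.5) == 0.0
```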

11. (solved by @Delta² ) Let ##f##, ##g##: ##\mathbb{R} \rightarrow \mathbb{R}## be two functions with ##f\,''(x) + f\,'(x)g(x) - f(x) = 0##. Show that if ##f(a) = f(b) = 0## then ##f(x) = 0## for all ##x\in [a,b]##.

12. a) Let ##f\, : \,\mathbb{R}^2 \longrightarrow \mathbb{R}## be defined as
$$f(x,y) = \begin{cases} 1 & \text{if } x \geq 0 \text{ and }x \leq y < x+1\\ -1 & \text{if } x \geq 0 \text{ and }x+1 \leq y < x+2\\ 0 & \text{elsewhere } \end{cases}$$
Now calculate ##\int_\mathbb{R}\left[\int_\mathbb{R}f(x,y)\,d\lambda(x) \right] \,d\lambda(y)## and ##\int_\mathbb{R}\left[\int_\mathbb{R}f(x,y)\,d\lambda(y) \right]\,d\lambda(x)\,,## and explain why this is not a contradiction to Fubini's theorem.
12. b) (solved by @Keith_McClary ) Show that the integral
$$\int_A \dfrac{1}{x^2+y}\,d\lambda(x,y)$$
with ##A=(0,1)\times (0,1)\subseteq \mathbb{R}^2## is finite.
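A numerical cross-check of 12. b) (a sketch; the closed form ##\ln 2+\pi/2\approx 2.264##, obtained from ##\int_0^1\ln(1+1/x^2)\,dx##, is my own computation):

```python
import math

def midpoint_sum(n=800):
    """Midpoint Riemann sum for ∬ 1/(x^2+y) over (0,1)^2; the integrand is
    singular at the origin but the midpoints stay away from it."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += 1.0 / (x * x + y)
    return total * h * h

val = midpoint_sum()
assert 2.0 < val < 2.5       # finite, near ln 2 + π/2 ≈ 2.264
```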

13. Let ##f\, : \,(0,1)\longrightarrow \mathbb{R}## be Lebesgue integrable and $$Y := \{\,(x_1,x_2)\in\mathbb{R}^2\,|\,x_1,x_2\geq 0\, , \,x_1+x_2\leq 1\,\}$$
Show that for any ##\alpha_1\, , \,\alpha_2 > 0##
$$\int_Y f(x_1+x_2)x_1^{\alpha_1}x_2^{\alpha_2}\,d\lambda(x_1,x_2) = \left[\int_0^1 f(u)u^{\alpha_1+\alpha_2+1}\,d\lambda(u) \right]\cdot \left[\int_0^1 v^{\alpha_1}(1-v)^{\alpha_2}\,d\lambda(v) \right]$$
Hint: Consider ##\phi\, : \,(0,1)^2\longrightarrow \mathbb{R}^2## with ##\phi(u,v)=(vu,(1-v)u)\,.## and apply Fubini's theorem.
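The identity can be spot-checked numerically. With ##f\equiv 1## and ##\alpha_1=\alpha_2=1## (my own test case) both sides evaluate to ##\frac{1}{24}##:

```python
def lhs(n=400):
    """Midpoint sum of x1*x2 over the triangle x1, x2 >= 0, x1 + x2 <= 1."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * h
        for j in range(n):
            x2 = (j + 0.5) * h
            if x1 + x2 <= 1:
                total += x1 * x2
    return total * h * h

def rhs(n=20000):
    h = 1.0 / n
    iu = sum(((i + 0.5) * h) ** 3 for i in range(n)) * h                   # ∫ u^3 du
    iv = sum(((i + 0.5) * h) * (1 - (i + 0.5) * h) for i in range(n)) * h  # ∫ v(1-v) dv
    return iu * iv

assert abs(lhs() - 1.0 / 24) < 2e-3
assert abs(lhs() - rhs()) < 2e-3
```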

14. (solved by @Delta² ) Let ##f## be a differentiable function in ##\mathbb{R}##. If ##f'## is invertible and ##(f')^{-1}## is differentiable in ##\mathbb{R}##, show that ##[I_A (f')^{-1} - f \circ [(f')^{-1} ]]' = (f')^{-1}## where ##I_A## with ##I_A(x) = x## is the identity function ##I_A : A \to A##

Hint: Let ##y = f'(x)## then ##(f')^{-1}(y) = x##. By differentiating, we can get to a useful result including the second derivative of ##f(x)##. Next, we can utilize ##[f\circ (f')^{-1}]'## and incorporate the identity function ##I_A##.

Linear Algebra:

15. Given the Heisenberg algebra $$\mathcal{H}=\left\{\,\begin{bmatrix} 0&x&z\\0&0&y\\0&0&0 \end{bmatrix}\,\right\}=\langle X,Y,Z\,:\,[X,Y]=Z \rangle$$ and $$\mathfrak{A(\mathcal{H})}=\{\,\alpha\, : \,\mathcal{H}\longrightarrow \mathcal{H}\, : \,[\alpha(X),Y]=[\alpha(Y),X]\,\forall\,X,Y\in \mathcal{H}\,\}$$
Since ##\mathfrak{A(\mathcal{H})}## is a Lie algebra and $$[X,\alpha]=[\operatorname{ad}(X),\alpha]=\operatorname{ad}(X)\circ \alpha - \alpha \circ \operatorname{ad}(X)$$ a Lie multiplication, we can define
\begin{align*}
\mathcal{H}_0 &:= \mathcal{H}\\
\mathcal{H}_{n+1} &:= \mathcal{H}_n \ltimes \mathfrak{A(\mathcal{H}_n)}
\end{align*}
and get a series of subalgebras $$\mathcal{H}_0 \leq \mathcal{H}_1 \leq \mathcal{H}_2 \leq \ldots$$
Show that
##\mathfrak{sl}(2)<\mathcal{H}_n## is a proper subalgebra for all ##n\ge 1##
##\dim \mathcal{H}_{n} \ge 3 \cdot (2^{n+1}-1)## for all ##n\ge 0##, i.e. the series is infinite and doesn't become stationary.
As a counterexample, if we started with ##\mathcal{H}=\mathfrak{su}(2)\text{ or }\mathfrak{su}(3)## we would get ##\mathcal{H}_n=\mathcal{H}_0## and would be stationary right from the start, which can easily be seen by solving the corresponding system of linear equations.

Hint: The multiplication in ##\mathcal{H}_n## is given by $$[(X,\alpha),(Y,\beta)]=([X,Y],[\alpha,\beta]+[\operatorname{ad}(X),\beta]-[\operatorname{ad}(Y),\alpha])$$ However, it is not really needed here. Calculate ##\mathfrak{A}(\mathcal{H})=\mathfrak{A}(\mathcal{H}_0)## and find a copy of ##\mathfrak{sl}(2)## in it, i.e. a ##2 \times 2## block with zero trace. Then note that all ##\mathcal{H}_n## have a central element (= commutes with all others), and consider its implication for ##\mathfrak{A}(\mathcal{H}_n)\,.## Proceed by induction.

16. (solved by @julian ) Consider the Lie algebra of skew-Hermitian ##2\times 2## matrices ##\mathfrak{g}:=\mathfrak{su}(2,\mathbb{C})## and the Pauli matrices (note that Pauli matrices are not a basis!)
$$\sigma_1=\begin{bmatrix}0&1\\1&0\end{bmatrix}\, , \,\sigma_2=\begin{bmatrix}0&-i\\i&0\end{bmatrix}\, , \,\sigma_3=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$$
Now we define an operation on ##V:=\mathbb{C}_2[x,y]##, the vector space of all complex polynomials of degree less than three in the variables ##x,y## by
\begin{align*}
\varphi(\alpha_1\sigma_1 +\alpha_2\sigma_2+\alpha_3\sigma_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy)= \\
&= x(-i \alpha_1 a_3 +\alpha_2 a_3 - \alpha_3 a_1 )+\\
&+ x^2(2i\alpha_1 a_5 +2 \alpha_2 a_5 + 2\alpha_3 a_2 )+\\
&+ y(-i\alpha_1 a_1 -\alpha_2 a_1 +\alpha_3 a_3 )+\\
&+ y^2(2i\alpha_1 a_5 -2\alpha_2 a_5 -2\alpha_3 a_4 )+\\
&+ xy(-i\alpha_1 a_2 -i\alpha_1 a_4 +\alpha_2 a_2 -\alpha_2 a_4 )
\end{align*}
Show that
16. a) an adjusted ##\varphi## defines a representation of ##\mathfrak{su}(2,\mathbb{C})## on ##\mathbb{C}_2[x,y]##
16. b) Determine its irreducible components.
16. c) Compute a vector of maximal weight for each of the components.

Hint: This is an easy example of a ##\mathfrak{su}(2,\mathbb{C})## representation which shall demonstrate how the ladder up and down operators actually work. Choose ##(1,x,y,x^2,xy,y^2)## as ordered basis for the representation space ##V=\mathbb{C}_2[x,y]## and verify ##[\varphi(\alpha_1,\alpha_2,\alpha_3),\varphi(\alpha'_1,\alpha'_2,\alpha'_3)]=\varphi([(\alpha_1,\alpha_2,\alpha_3),(\alpha'_1,\alpha'_2,\alpha'_3)])## with the adjusted transformation
$$\varphi(\alpha_1,\alpha_2,\alpha_3):=\varphi(\alpha_1\cdot (i\sigma_1),\alpha_2\cdot (i\sigma_2),\alpha_3\cdot (i\sigma_3))$$
and decompose ##V## into three invariant subspaces. To determine the weights, consider the ##\mathbb{C}-##basis $$H=\sigma_3,X=\dfrac{1}{2}\sigma_1+\dfrac{1}{2}i\sigma_2,Y=\dfrac{1}{2}\sigma_1-\dfrac{1}{2}i\sigma_2$$

Abstract Algebra:

17. a) Let $n$ be a positive integer. Let $a_1,\ldots,a_k$ be (positive) factors of $n$ such that $\gcd(a_1,\ldots,a_k,n)=1$. How many solutions $(x_1,\ldots,x_k)$ does the equation $a_1x_1+\ldots+a_kx_k\equiv 0\mod n$ have subject to the restriction that $0\leq x_i<n/a_i$ for each $i$?
17. b) How does the solution change if $\gcd(a_1,\ldots,a_k,n)=d>1$?
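The count can be explored by brute force for tiny cases (a sketch; it does not reveal the general formula):

```python
from itertools import product

def count_solutions(n, a):
    """Brute-force count of tuples (x_1,...,x_k) with 0 <= x_i < n/a_i and
    a_1 x_1 + ... + a_k x_k ≡ 0 (mod n); each a_i is assumed to divide n."""
    ranges = [range(n // ai) for ai in a]
    return sum(1 for xs in product(*ranges)
               if sum(ai * xi for ai, xi in zip(a, xs)) % n == 0)

# Tiny hand-checkable cases, both with gcd(a_1,...,a_k,n) = 1:
assert count_solutions(4, [1, 2]) == 2   # (0,0) and (2,1)
assert count_solutions(3, [1, 1]) == 3   # x2 ≡ -x1 (mod 3)
```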

18. A function ##|\,.\,|\, : \,\mathbb{F}\longrightarrow \mathbb{R}_{\geq 0}## on a field ##\mathbb{F}## is called a value function if
\begin{align*}
&|x|=0 \Longleftrightarrow x=0 \\
&|xy| = |x|\;|y|\\
&|x+y| \leq |x|+|y|
\end{align*}
It is called Archimedean, if for any two elements ##a,b\,\,(a\neq 0)## there is a natural number ##n## such that ##|na|>|b|\,.## We consider the rational numbers. The usual absolute value
$$|x| = \begin{cases} x &\text{ if }x\geq 0 \\ -x &\text{ if }x<0\end{cases}$$
is Archimedean, whereas the trivial value
$$|x|_0 = \begin{cases} 0 &\text{ if }x = 0 \\ 1 &\text{ if }x\neq 0\end{cases}$$
is not.
Determine all non-trivial and non-Archimedean value functions on ##\mathbb{Q}\,.##

Hint: This is indeed a bit tricky. Since ##|\,.\,|## is non-Archimedean, there are elements ##a,b## with ##|n|<\frac{|b|}{|a|}## for all ##n\in \mathbb{N}\,.## If ##|n| > 1## for a natural number, then ##|n^k|=|n|^k## goes to infinity and cannot be bounded. Thus ##|n|\leq 1## for all ##n\in \mathbb{N}\,.## Note that ##|.|## is non-trivial. Pick a smallest natural number ##n_0=ab## and investigate it.
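The standard examples to test the axioms against are the ##p##-adic absolute values ##|x|_p=c^{v_p(x)}## with ##0<c<1## (whether these are exactly the answer is left to the problem). A sketch of the axiom check with ##p=3## and ##c=\tfrac12##:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def abs_p(x, p, c=0.5):
    """Candidate non-Archimedean value function |x|_p = c^{v_p(x)}."""
    return 0.0 if x == 0 else c ** vp(Fraction(x), p)

p = 3
xs = [Fraction(9, 2), Fraction(5, 27), Fraction(-6), Fraction(1, 3)]
for x in xs:
    for y in xs:
        assert abs(abs_p(x * y, p) - abs_p(x, p) * abs_p(y, p)) < 1e-12
        if x + y != 0:
            # ultrametric inequality, stronger than the triangle inequality
            assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p)) + 1e-12
# non-Archimedean: |n| <= 1 for every natural number n
assert all(abs_p(Fraction(n), p) <= 1 for n in range(1, 200))
```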

19. Let $f(x)\in\mathbb{Q}[x]$ be a degree 5 polynomial with splitting field $K$. Suppose that there is a unique extension $F/\mathbb{Q}$ such that $F\subset K$ and $[K:F]=3$. Show that $f(x)$ is divisible by a degree 3 irreducible element of $\mathbb{Q}[x]$.

20. (solved by @julian ) Let's consider complex functions in one variable and especially the involutions
$$\mathcal{I}=\{\, z\stackrel{p}{\mapsto} z\; , \; z\stackrel{q}{\mapsto} -z\; , \;z\stackrel{r}{\mapsto} z^{-1}\; , \;z\stackrel{s}{\mapsto}-z^{-1}\,\}$$
We also consider the two functions $$\mathcal{J}=\{\,z\stackrel{u}{\longmapsto}\frac{1}{2}(-1+i \sqrt{3})z\; , \;z\stackrel{v}{\longmapsto}-\frac{1}{2}(1+i \sqrt{3})z\,\}$$
and the set ##\mathcal{F}## of functions which we get, if we combine any of them: ##\mathcal{F}=\langle\mathcal{I},\mathcal{J} \rangle## by consecutive applications. We now define for ##\mathcal{K}\in \{\mathcal{I},\mathcal{J}\}## a relation on ##\mathcal{F}## by
$$f(z) \sim_\mathcal{K} g(z)\, :\Longleftrightarrow \, (\forall \,h_1\in \mathcal{K})\,(\exists\,h_2\in \mathcal{K})\,: f(h_1(z))=g(h_2(z))$$
20. a) Show that ##\sim_\mathcal{K}## defines an equivalence relation.
20. b) Show that ##\mathcal{F}/\sim_\mathcal{I}## admits a group structure on its equivalence classes by consecutive application.
20. c) Show that ##\mathcal{F}/\sim_\mathcal{J}## does not admit a group structure on its equivalence classes by consecutive applications.

Hint: Determine the groups ##F=\langle \mathcal{F}\rangle\, , \,I=\langle \mathcal{I} \rangle\, , \,J=\langle \mathcal{J} \rangle## and what distinguishes ##F/I## from ##F/J\,.##
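The closure ##F=\langle \mathcal{I},\mathcal{J}\rangle## can be enumerated mechanically by encoding each map as a pair ##(k,e)## with ##z\mapsto \zeta^k z^{e}##, ##\zeta=e^{i\pi/3}## and ##e=\pm 1## (my own encoding; a sketch):

```python
def compose(f, g):
    """(f ∘ g) for maps z -> ζ^k z^e:  ζ^{k1} (ζ^{k2} z^{e2})^{e1}."""
    (k1, e1), (k2, e2) = f, g
    return ((k1 + e1 * k2) % 6, e1 * e2)

# p, q, r, s from I and u, v from J:  -1 = ζ^3,  u = ζ^2,  v = ζ^4
gens = {(0, 1), (3, 1), (0, -1), (3, -1), (2, 1), (4, 1)}
F = set(gens)
while True:
    new = {compose(f, g) for f in F for g in F} - F
    if not new:
        break
    F |= new

assert len(F) == 12   # all maps z -> ζ^k z^{±1} with k = 0..5
```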

Topology:

21. (solved by @julian ) A covering space ##\tilde{X} ## of ##X## is a topological space together with a continuous surjective map ##p\, : \,\tilde{X} \longrightarrow X\,,## such that for every ##x \in X## there is an open neighborhood ##U\subseteq X## of ##x,## such that ##p^{-1}(U)\subseteq \tilde{X}## is a union of pairwise disjoint open sets ##V_\iota## each of which is homeomorphically mapped onto ##U## by ##p##. A deck transformation with respect to ##p## is a homeomorphism ##h\, : \,\tilde{X} \longrightarrow \tilde{X}## with ##p \circ h=p\,.## Let ##\mathcal{D}(p)## be the set of all deck transformations with respect to ##p##.
Show that ##\mathcal{D}(p) ## is a group.
If ##\tilde{X}## is a connected Hausdorff space and ##h \in \mathcal{D}(p)## with ##h(\tilde{x})=\tilde{x}## for some point ##\tilde{x}\in \tilde{X}\,,## then ##h=\operatorname{id}_{\tilde{X}}\,.##
Hint: Show that ##\mathcal{D}(p)## is closed under inversion and multiplication. Then consider ##A:=\{\,\tilde{x}\in \tilde{X}\, : \,h(\tilde{x})=\tilde{x}\,\}\,.##

22. We define an equivalence relation on the topological two-dimensional unit sphere ##\mathbb{S}^2\subseteq \mathbb{R}^3## by ##x \sim y \Longleftrightarrow x \in \{\,\pm y\,\}## and the projection ##q\, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2/\sim \,.## Furthermore we consider the homeomorphism ##\tau \, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2## defined by ##\tau (x)=-x\,.## Note that for ##A \subseteq \mathbb{S}^2## we have ##q^{-1}(q(A))= A \cup \tau(A)\,.## Show that
a) ##q## is open and closed.
b) ##\mathbb{S}^2/\sim ## is compact, i.e. Hausdorff and covering compact.
c) Let ##U_x=\{\,y\in \mathbb{S}^2\,:\,||y-x||<1\,\}## be an open neighborhood of ##x \in \mathbb{S}^2\,.## Show that ##U_x \cap U_{-x} = \emptyset \; , \;U_{-x}=\tau(U_x)\; , \;q(U_x)=q(U_{-x})## and ##q|_{U_{x}}## is injective. Conclude that ##q## is a covering.
Hint: For (a) consider ##O\cup \tau(O)## for an open set ##O\,.## For (b) use part (a) and that ##\mathbb{S}^2## is Hausdorff. For (c) a covering map is a local homeomorphism, so we need continuity, openness, closedness and bijectivity. Again use the previous parts.

Especially for Physicists:

23. (solved by @PeroK ) On the occasion of the centenary of Emmy Noether's theorem.
This example requires some introduction for all members who aren't familiar with the matter, so let me first give some background information.
The action on a classical particle is the integral of an orbit ##\gamma\, : \,t \rightarrow \gamma(t)## $$S(\gamma)=S(x(t))= \int \mathcal{L}(t,x,\dot{x})\,dt$$ over the Lagrange function ##\mathcal{L}##, which describes the system considered. Now we consider smooth coordinate transformations
\begin{align*}
x &\longmapsto x^* := x +\varepsilon \psi(t,x,\dot{x})+O(\varepsilon^2)\\
t &\longmapsto t^* := t +\varepsilon \varphi(t,x,\dot{x})+O(\varepsilon^2)
\end{align*}
and we compare $$S=S(x(t))=\int \mathcal{L}(t,x,\dot{x})\,dt\text{ and }S^*=S(x^*(t^*))=\int \mathcal{L}(t^*,x^*,\dot{x}^*)\,dt^*$$
Since the functional ##S## determines the law of motion of the particle, $$S=S^*$$ means, that the action on this particle is unchanged, i.e. invariant under these transformations, and especially
\begin{equation*}
\dfrac{\partial S}{\partial \varepsilon}=0 \quad\text{ resp. }\quad \left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right) = 0 ~~(*)
\end{equation*}
Emmy Noether showed exactly one hundred years ago that under these circumstances (invariance) there is a conserved quantity ##Q##. ##Q## is called the Noether charge. $$S=S^* \Longrightarrow \left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right) = 0 \Longrightarrow \dfrac{d}{dt}Q(t,x,\dot{x})=0$$
with $$Q=Q(t,x,\dot{x}):= \sum_{i=1}^N \dfrac{\partial \mathcal{L}}{\partial \dot{x}_i}\,\psi_i + \left(\mathcal{L}-\sum_{i=1}^N \dfrac{\partial \mathcal{L}}{\partial \dot{x}_i}\,\dot{x}_i\right)\varphi = \text{ constant}$$
The general way to proceed is:
A. Determine the functions ##\psi,\varphi##, i.e. the transformations, which are considered.
B. Check the symmetry by equation (*).
C. If the symmetry condition holds, then compute the conservation quantity ##Q## with ##\mathcal{L},\psi,\varphi\,.##
Example: Given a particle of mass ##m## in the potential ##U(\vec{r})=\dfrac{U_0}{\vec{r\,}^{2}}## with a constant ##U_0##. At time ##t=0## the particle is at ##\vec{r}_0## with velocity ##\dot{\vec{r}}_0\,.##

Hint: The Lagrange function with ##\vec{r}=(x,y,z)=(x_1,x_2,x_3)## of this problem is $$\mathcal{L}=T-U=\dfrac{m}{2}\,\dot{\vec{r}}\,^2-\dfrac{U_0}{\vec{r\,}^{2}}$$
a) Give a reason why the energy of the particle is conserved, and what is its energy?
b) Consider the following transformations with infinitesimal ##\varepsilon##
$$\vec{r} \longmapsto \vec{r}\,^*=(1+\varepsilon)\,\vec{r}\,\, , \,\,t\longmapsto t^*=(1+\varepsilon)^2\,t$$
and verify the condition (*) to E. Noether's theorem.
c) Compute the corresponding Noether charge ##Q## and evaluate ##Q## for ##t=0##.
Use ##\psi_i=0\, , \,\varphi=1## for the time invariance (energy) and consider $$\left. \dfrac{d}{d\varepsilon}\right|_{\varepsilon =0}\left(\mathcal{L}\left(t^*,x^*,\dot{x}^*\right)\cdot \dfrac{dt^*}{dt} \right)$$ For the last part we have ##\psi_1=x\; , \;\psi_2=y\; , \;\psi_3=z## and ##\varphi =2t\,.##

24. Solve, and describe the solution step by step in quadratures, a Lagrangian differential equation with Lagrangian
$$L(t,x,\dot x)=\frac{1}{2}\dot x^2-\frac{t}{x^4}.$$

25. We consider the vector field ##X\, : \,\mathbb{R}^2\longrightarrow T\mathbb{R}^2## given by ##X(p) := \left(p,\begin{pmatrix} 1\\0 \end{pmatrix}\right)\,.##
Compute the derivative ##d\phi\, : \,T\mathbb{R}^2\longrightarrow T\mathbb{R}^3## of the stereographic projection to the north pole, i.e. plane to sphere with ##\phi(0,0)=(0,0,-1)##, and describe the tangent bundle ##T\mathbb{S}^2## of ##\mathbb{S}^2##. Show that position vectors and tangent vectors are orthogonal.
Compute the vector field ##d\phi(X)## on ##\mathbb{S}^2##. How is it related to the curves ##\gamma(t)=\phi(t,y_0)\,?##
Is ##d\phi(X)## a continuous vector field on ##\mathbb{S}^2## without zeros?
Hint: The stereographic projection to the north pole is given by
\begin{align*}
\phi\, &: \, \mathbb{R}^2\longrightarrow \mathbb{R}^3\\
\phi(x,y)&=\dfrac{1}{x^2+y^2+1} \begin{bmatrix}
2x\\2y\\x^2+y^2-1
\end{bmatrix}
\end{align*}


Homework Helper
Gold Member
I think I solved 1) from basics.

given that ##x_{n+1}\geq\frac{x_n+x_{n+2}}{2}## we can get that ##2x_{n+1}\geq x_n+x_{n+2}\Rightarrow x_{n+1}-x_{n}+x_{n+1}-x_{n+2}\geq 0\Rightarrow x_{n+1}-x_n\geq x_{n+2}-x_{n+1}##

From this last inequality we can prove with induction on k that ##x_{n+1}-x_n\geq x_{n+k}-x_{n+k-1}## for any ##k\geq 1##

Adding all these inequalities together we get ##p(x_{n+1}-x_{n})\geq \sum_{k=1}^{p} (x_{n+k}-x_{n+k-1})## or equivalently ##p(x_{n+1}-x_{n})\geq x_{n+p}-x_n##

From this last inequality, given that ##x_{n+p}> 0##, we can conclude that ##x_{n+1}\geq \frac{p-1}{p} x_n##, and this holds for any natural number ##p##. Letting ##p\to\infty## gives ##x_{n+1}\geq x_n##.

Gold Member
I think I solved 1) from basics. […]

This is correct, good job!

Mentor
4. (You may use wolframalpha.com for calculations) One tiny night-active and long-living beetle decided one night to climb a sequoia. The tree was exactly ##100\, m## high at this time. Every night the beetle made a distance of ##10\, cm##. The tree grew every day evenly ##20\, cm## along its entire length.
The part in red is fine as a math problem. It is not at all correct biologically. If Daniel Boone had carved his famous "kilt a b'ar" quote on the tree, say in 1803, it would be exactly the same height off ground level today as it was back in 1803. Trees grow from the very tip of every branch. Not along the entire length.
Google for the term 'apical meristem'.
https://www.oldewoodltd.com/blog/d-boone-kilt-a-bar Daniel Boone citation.

night-active
which is nocturnal

7. How many times a day is it impossible to determine what time it is, if you have a clock with same length (identical looking) hour and minutes hands, supposing that we always know if it's morning or evening (i.e. we know whether it's am or pm).
Does your “time” count in minutes? Because it seems like there are infinitely many fractions of time. Or does this not affect the question?

Mentor
2021 Award
Does your “time” count in minutes? Because it seems like there are infinitely many fractions of time. Or does this not affect the question?
The minimal number of nights will do.

The minimal number of nights will do.
Sorry. I can’t get it.

Mentor
2021 Award
Sorry. I can’t get it.
What? You are traveling a certain distance in an expanding universe at a certain speed, where the expansion rate is given. Will you make it to your destination or not, and if so, how many days will it need? And, yes, non-relativistic.

Gold Member
Are you saying the tree only grows during daylight hours?

Mentor
2021 Award
Are you saying the tree only grows during daylight hours?
No. But it doesn't matter at which time of day the tree actually grows, as long as it does so daily and along its entire length.

What? You are traveling a certain distance in an expanding universe at a certain speed, where the expansion rate is given. Will you make it to your destination or not, and if so, how many days will it need? And, yes, non-relativistic.
@fresh_42 now I get it. I am talking about the clock with identical hands question, but you are talking about the beetle and the growing tree question.

Gold Member
Problem 2:

If ##p' (a) = p'' (a) = 0## for ##p (a) \not= 0## then ##p (x)## has a point of inflection at ##x = a## with zero slope there and with ##p (a) \not= 0##.

If the point of inflection had occurred for ##p (a) = 0## then there would have been at least three repeated real roots at ##x=a##.

But our inflection occurs for ##p (a) \not= 0##. The inflection 'replaces' a trough-peak and hence ##p (x)## will intersect the ##x##-axis too few times for all its roots to be real. There is a root, ##\alpha##, of ##p (x)## that is complex. But as the polynomial is real there must also be another root that is the complex conjugate of ##\alpha##. So there are at least two complex roots.

Problem 3:

Note ##r## is real only if ##\cos 2 \theta \geq 0##, i.e. if

##- \pi /2 \leq 2 \theta \leq \pi / 2## or ##3 \pi / 2 \leq 2 \theta \leq 5 \pi / 2##

or

##- \pi / 4 \leq \theta \leq \pi / 4## or ##3 \pi / 4 \leq \theta \leq 5 \pi / 4##

using just these values of ##\theta##:

##
A = {1 \over 2} \int r^2 d \theta
##
##
= {1 \over 2} \int_{- \pi / 4}^{\pi / 4} r^2 d \theta + {1 \over 2} \int_{3 \pi / 4}^{5 \pi / 4} r^2 d \theta
##

or since the two 'lobes' are identical:

##
A = \int_{- \pi / 4}^{\pi / 4} r^2 d \theta
##
##
= a^2 \int_{- \pi / 4}^{\pi / 4} \cos 2 \theta d \theta
##
##
= a^2 \int_{- \pi / 2}^{\pi / 2} \cos \phi {d \phi \over 2}
##
##
= a^2 {1 \over 2} [\sin \phi]_{- \pi / 2}^{\pi / 2}
##
##
= a^2
##
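The closed form ##A=a^2## can be cross-checked numerically (a sketch; the grid size is an arbitrary choice):

```python
import math

def lemniscate_area(a, n=200000):
    # A = (1/2) ∫ r^2 dθ over both lobes = a^2 ∫_{-π/4}^{π/4} cos 2θ dθ
    lo, hi = -math.pi / 4, math.pi / 4
    h = (hi - lo) / n
    s = sum(math.cos(2 * (lo + (i + 0.5) * h)) for i in range(n)) * h
    return a * a * s

for a in (1.0, 2.5):
    assert abs(lemniscate_area(a) - a * a) < 1e-6
```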

Gold Member
Does your “time” counts in minutes?because it seems like there are infinite amount of fractions of time.Or does this not effect this question?
You may be overthinking this. There is a finite answer. This is a regular non-digital clock that you'd hang on the wall.

You may be overthinking this. There is a finite answer. This is a regular non-digital clock that you'd hang on the wall.
Oh. So do you mean that the clock ticks from one minute to the next, rather than gradually changing all the time? I know both kinds of non-digital clock exist.
The ticking one:
https://tenor.com/view/design-time-clock-tick-tock-gif-3428153
The non ticking one:
https://www.pinterest.co.uk/pin/535083999469459637/
That one is a little weird but you can get the point.
With this kind of clock there are infinitely many fractions of time, which is what I am talking about.

Gold Member
Oh. So do you mean that the clock ticks from one minute to the next, rather than gradually changing all the time? I know both kinds of non-digital clock exist.

ummm no jumps on the minute hand...

This problem is from a puzzle book and they didn't feel the need to state that it isn't a jumping minute hand clock -- it was just assumed to be more natural. If you want you can solve both, but the answer I'm looking for is for non-jumping minute hands.

Mentor
Been reviewing dominated convergence for my delving into divergent series so 10(c) is easy - an answer is ##f_n(t) = \cos(t+n)## where the integral is from ##0## to ##2\pi##. Doing the integral first, each is zero so the sum is 0. The reverse is not even defined (e.g. ##\sum_{n=0}^{\infty} \cos (t+n)##), except maybe as a distribution - or divergent series - but that would be cheating - or would it?

Thanks
Bill

Homework Helper
Gold Member
I think I got the solution for 11.

We know that ##f## is twice differentiable, hence ##f## is continuous in ##[a,b]##. Hence ##f## has a maximum and a minimum in ##[a,b]##. So let ##f(x_M)## be the maximum and ##f(x_m)## the minimum.
We know that ##f'(x_M)=f'(x_m)=0## (from Fermat's theorem, and since the (absolute or global) extrema are also local extrema) and from the given relation ##f''(x)+f'(x)g(x)-f(x)=0## we can infer that
##f''(x_M)=f(x_M)## (1) and ##f''(x_m)=f(x_m)## (2). From (1) and (2), and since ##f(x_M)## is the maximum, it is ##f(x_M)\geq f(x_m)##, hence we can conclude that ##f''(x_M)\geq f''(x_m)## (3).
Also, since ##f(x_M)## is the maximum it must be that ##f''(x_M)\leq 0##, and since ##f(x_m)## is the minimum, ##f''(x_m)\geq 0## (second derivative test for local extrema). So ##f''(x_M)\leq f''(x_m)##. (4) From (3) and (4) we end up with ##f''(x_M)=f''(x_m)=0##, and from this and (1) and (2) we can infer that ##f(x_M)=f(x_m)=0##. So the maximum and minimum are both equal to zero, so the function is constant and equal to zero in ##[a,b]##.

EDIT: We need the hypothesis ##f(a)=f(b)=0## to be able to conclude that ##x_M## and ##x_m## are internal points of ##[a,b]##, so that Fermat's theorem for local extrema applies.

EDIT 2: OK, I think in order for the proof to be absolutely complete I need to take care of the other two cases:

1) ##x_M## and ##x_m## are at the end points of ##[a,b]##. This case is straightforward: then the maximum and the minimum are both equal to zero, as it is given that ##f(a)=f(b)=0##.

2) ##x_M## is internal and ##x_m## is at the end points (the other case, where ##x_m## is internal and ##x_M## at the end points, is treated similarly). Then, working as above for ##x_M##, we can conclude that ##f''(x_M)=f(x_M)\leq 0## from the relation given and from the second derivative condition for the extrema (since ##x_M## is internal we can use that ##f'(x_M)=0## to get the desired result from the given relation). But since ##f(x_M)## is the maximum, it is ##f(x_M)\geq f(x_m)=0## (it is given that ##f(a)=f(b)=0## and ##x_m=a## or ##x_m=b## in this case). Hence again we conclude that ##f''(x_M)=0##, and because ##f''(x_M)=f(x_M)##, we conclude that the maximum is also zero.

Mentor
Problem 2:
Missing a part:
While the general idea is good, there doesn't have to be an inflection point there.

Mentor
2021 Award
Been reviewing dominated convergence for my delving into divergent series so 10(c) is easy - an answer is ##f_n(t) = \cos(t+n)## where the integral is from ##0## to ##2\pi##. Doing the integral first, each is zero so the sum is 0. The reverse is not even defined (e.g. ##\sum_{n=0}^{\infty} \cos (t+n)##), except maybe as a distribution - or divergent series - but that would be cheating - or would it?

Thanks
Bill
Not really cheating, but I would prefer an example with both sides defined. The idea, however, is similar. Formally it is not a solution because of the different integration intervals.

Gold Member
Missing a part:
While the general idea is good there doesn't have to be an inflection point there.

We also have:

So write the polynomial as:

##
p(x) = a_n (x-a)^n + a_{n-1} (x-a)^{n-1} + \dots + a_4 (x-a)^4 + a_3 (x-a)^3 + a_0
##

where ##a_0 \not= 0##. We have an undulation point at ##x=a## if ##a_3 = 0## and the first non-zero term after it has even power in ##(x-a)##. Say this first non-zero term is ##a_{2m} (x-a)^{2m}##. For ##x## close to ##x=a## we have

##
p (x) \approx a_{2m} (x-a)^{2m} + a_0
##

so the corresponding trough or peak does not intersect the ##x##-axis and again we must have complex roots.

Mentor
Not really cheating, but I would prefer an example with both sides defined. The idea, however, is similar. Formally it is not a solution because of the different integration intervals.

Drats - yes I notice the wording does imply formally it is not an answer - OK - let's try 10b which should succumb to integration by parts - here goes:

##\int_0^\infty x^2 e^{-\alpha x}\,dx = \left[-\frac{1}{\alpha}x^2 e^{-\alpha x}\right]_0^\infty + \int_0^\infty \frac{2}{\alpha}x e^{-\alpha x}\,dx##. If ##\alpha>0## the first term is ##0## so it becomes ##\int_0^\infty \frac{2}{\alpha}x e^{-\alpha x}\,dx = \left[-\frac{2}{\alpha^2}x e^{-\alpha x}\right]_0^\infty + \int_0^\infty \frac{2}{\alpha^2}e^{-\alpha x}\,dx##. Again the first term is zero if ##\alpha>0## and you have ##\int_0^\infty \frac{2}{\alpha^2}e^{-\alpha x}\,dx = \left[-\frac{2}{\alpha^3}e^{-\alpha x}\right]_0^\infty##. Again if ##\alpha > 0## this becomes ##\frac{2}{\alpha^3}##.

Strictly speaking I should look at L'Hôpital on terms like ##-\frac{1}{\alpha}x^2 e^{-\alpha x} = -\frac{1}{\alpha}\frac{x^2}{e^{\alpha x}}##, but the growth of ##e^{\alpha x}## always swamps any polynomial - I will spell it out if required (no matter how big the polynomial, if you keep differentiating both numerator and denominator you eventually end up with ##C/e^{\alpha x}##, which at infinity is zero).
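A quick numerical cross-check of the closed form ##2/\alpha^3## (a sketch; the truncation ##T## and grid size are arbitrary choices of mine):

```python
import math

def numeric(alpha, T=60.0, n=200000):
    """Midpoint rule for ∫_0^T x^2 e^{-αx} dx; the tail beyond T is
    negligible for the sample values of α used below."""
    h = T / n
    return sum(((i + 0.5) * h) ** 2 * math.exp(-alpha * (i + 0.5) * h)
               for i in range(n)) * h

for alpha in (0.5, 1.0, 2.0):
    assert abs(numeric(alpha) - 2.0 / alpha**3) < 1e-4
```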

Probably made a goof - but what the heck will put it out there.

Thanks
Bill

Homework Helper
Gold Member
@fresh_42, @Infrared, or the other referees: what do you think about my solution for 11 in post #18? I know I am on the right track, but maybe the proof is not 100% correct.

Gold Member
Is this problem 5:

The result is trivial if all the ##a_i##'s are zero.

First assume all the ##a_i##'s are non-zero. Suppose the maximum is attained by a unique ##a_k##; then we can write

##
\lim_{n \rightarrow \infty} \big( [p_1 (a_1 / a_k)^n + p_2 (a_2 / a_k)^n + \dots + p_k] a_k^n \big)^{1/n}
##
##
= a_k \lim_{n \rightarrow \infty} \big( [p_1 (a_1 / a_k)^n + p_2 (a_2 / a_k)^n + \dots + p_k] \big)^{1/n}
##
##
= a_k
##

since the bracket converges to ##p_k > 0## and ##c_n^{1/n} \rightarrow 1## whenever ##c_n \rightarrow c > 0##.

Say the maximum is attained by two of the ##a_i##'s, say ##a_{k-1} = a_k##; then:

##
\lim_{n \rightarrow \infty} \big( [p_1 (a_1 / a_k)^n + p_2 (a_2 / a_k)^n + \dots + p_{k-1} + p_k] a_k^n \big)^{1/n}
##
##
= a_k \lim_{n \rightarrow \infty} \big( [p_1 (a_1 / a_k)^n + p_2 (a_2 / a_k)^n + \dots + p_{k-1} + p_k] \big)^{1/n}
##
##
= a_k
##

where we have used ##p_{k-1} + p_k > 0## which follows from the assumption ##p_1, p_2 , \dots , p_k> 0##.

The result is easily generalised to the case where the maximum occurs an arbitrary number of times.

If there are some ##a_i##'s that are zero, we would simply omit them from the expression

##
\lim_{n \rightarrow \infty} \big( p_1 a_1^n + p_2 a_2^n + \dots + p_k a_k^n \big)^{1/n}
##

and the limit would be equal to the maximum of the non-zero ##a_i##'s.
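
A numerical sketch of the claimed limit (the weights ##p_i## and values ##a_i## below are my own choice):

```python
# lim_n (p_1 a_1^n + ... + p_k a_k^n)^(1/n) should equal max_i a_i when all p_i > 0.
p = [0.5, 1.0, 2.0]
a = [1.0, 3.0, 2.0]  # the maximum is 3.0, attained once

def term(n):
    s = sum(pi * ai**n for pi, ai in zip(p, a))
    return s ** (1.0 / n)

for n in (10, 50, 100):
    print(n, term(n))  # approaches 3.0 as n grows
```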

QuantumQuest
Mentor
2021 Award
@fresh_42, @Infrared, or the other referees: what do you think about my solution for 11 in post #18? I know I am on the right track, but maybe the proof is not 100% correct.
@QuantumQuest moderates this one.

Delta2
Mentor
2021 Award
Drats - yes I notice the wording does imply formally it is not an answer - OK - let's try 10b which should succumb to integration by parts - here goes:

∫x^2*e^-αx = -1/α*x^2*e^-αx| (0 to ∞) + ∫2/α*x*e^-αx. If α>0 the first term is 0 so it becomes ∫2/α*x*e^-αx = -2/α^2*x*e^-αx| (0 to ∞) + ∫2/α^2*e^-αx. Again the first term is zero if α>0 and you have ∫2/α^2*e^-αx = -2/α^3*e^-αx| (0 to ∞). Again if α > 0 this becomes 2/α^3

Strictly speaking I should look at L'Hôpital on terms like -1/α*x^2*e^-αx = -1/α*x^2/e^αx, but the growth of e^αx always swamps any polynomial - I will spell it out if required (it doesn't matter how big the polynomial is: if you keep differentiating both numerator and denominator you eventually end up with C/e^αx, which at infinity is zero).

Probably made a goof - but what the heck will put it out there.

Thanks
Bill
Can you put this in a readable form? And what about ##\alpha \leq 0\,##?

Gold Member
Problem 6:

Put

##
x = \sqrt{b + \sqrt{b + \sqrt{b + \dots}}}
##

then ##x^2 = b + \sqrt{b + \sqrt{b + \dots}}## and so ##x^2 - b = \sqrt{b + \sqrt{b + \dots}} = x##. We get the quadratic ##x^2 - x - b = 0##; since ##x > 0##, the limit must be the positive root ##x = \frac{1 + \sqrt{4b+1}}{2}##. It remains to show convergence.

We will use that "Every bounded monotonic sequence converges" (on this see, for example, the book "Introduction to Metric and Topological Spaces" by W. A. Sutherland).

Monotonic increasing:

We use induction. First

##
a_1 = \sqrt{b} < \sqrt{b + \sqrt{b}} = a_2 .
##

Now assume the result for some ##k##, that is ##a_k < a_{k+1}##. Then ##b + a_k < b + a_{k+1}## and

##
a_{k+1} = \sqrt{b + a_k} < \sqrt{b + a_{k+1}} = a_{k+2} .
##

By induction ##a_n < a_{n+1}## is true for all ##n##.

Bounded:

We wish to prove that ##a_n < f_b## for all ##n##, where ##f_b## is some finite positive number depending on ##b##. To prove boundedness we use induction. For this purpose we will need the number ##f_b## to satisfy:

##
a_1 = \sqrt{b} < f_b \quad \text{and} \quad \sqrt{b + f_b} < f_b .
##

Why this is needed will become clear in a moment. The second condition can be restated as ##b + f_b < f_b^2## (for a positive number ##f_b## - if it exists) or as

##
0 < f_b^2 - f_b - b
##

But the polynomial ##p(x) = x^2 -x - b## is an upward-opening quadratic. Hence, in order to satisfy the above inequality, we just have to choose the value of ##f_b## to be greater than the positive root ##{1 + \sqrt{4b+1} \over 2}##. To be concrete, we choose the number to be

##
f_b = {1 + \sqrt{4b+1} \over 2} + \sqrt{b} .
##

We now turn to the inductive argument. First we have

##
a_1 = \sqrt{b} < f_b
##

Now assume the result for some ##k##, that is ##a_k < f_b##. Then ##b + a_k < b + f_b## and

##
a_{k+1} = \sqrt{b+ a_k} < \sqrt{b + f_b} < f_b
##

where we have used the inequality that ##f_b## satisfies. By induction ##a_n < f_b## is true for all ##n##.
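
A numerical sketch of the recursion (the value ##b = 2## and the iteration count are my own choice): iterating ##a_{n+1} = \sqrt{b + a_n}## should approach the positive root of ##x^2 - x - b = 0##.

```python
import math

b = 2.0
a_n = math.sqrt(b)           # a_1 = sqrt(b)
for _ in range(60):          # iterate a_{n+1} = sqrt(b + a_n)
    a_n = math.sqrt(b + a_n)

# positive root of x^2 - x - b = 0; equals 2.0 for b = 2
root = (1.0 + math.sqrt(4.0 * b + 1.0)) / 2.0
print(a_n, root)  # both ~2.0
```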

QuantumQuest
Mentor
Can you put this in a readable form? And what about ##\alpha \leq 0\,##?

Ok here goes. Using integration by parts: (0 to ∞) ∫x^2*e^-αx = limit x → ∞ (-1/α*x^2*e^-αx) - (-1/α*0^2*e^-α0) + (0 to ∞) ∫2/α*x*e^-αx.

Let's look at -1/α*0^2*e^-α0 first. e^-α0 = 1. So -1/α*0^2*e^-α0 = -0/α = 0 unless α = 0. If α = 0 you get the undefined 0/0. So α cannot be zero.

Two other cases α < 0 and α > 0. Let's look at limit x → ∞ -1/α*x^2*e^-αx

Case 1: α < 0
e^-αx → ∞ as x → ∞. So limit x → ∞ -1/α*x^2*e^-αx = ∞. Hence α can not be less than 0.

Case 2 α > 0
Using L'Hôpital on limit x → ∞ -1/α*x^2*e^-αx we have limit x → ∞ (-1/α*x^2)/(e^αx) = limit x → ∞ (-1/α*2x)/(α*e^αx) = limit x → ∞ (-2/α)/(α^2*e^αx). Because α > 0, e^αx → ∞ as x → ∞, hence limit x → ∞ (-2/α)/(α^2*e^αx) = 0 and limit x → ∞ -1/α*x^2*e^-αx = 0.

Ok, the integral exists iff α > 0 and is (0 to ∞) ∫2/α*x*e^-αx. We do integration by parts again and get limit x → ∞ (-2/α^2*x*e^-αx) - (-2/α^2*0*e^-α0) + ∫2/α^2*e^-αx. Again, as e^-α0 = 1, we have -2/α^2*0*e^-α0 = 0. And using L'Hôpital again, limit x → ∞ (-2/α^2*x*e^-αx) = limit x → ∞ (-2/α^2*x)/(e^αx) = limit x → ∞ (-2/α^2)/(α*e^αx) = 0.

Hence (0 to ∞) ∫x^2*e^-αx =(0 to ∞) ∫2/α^2*e^-αx = 2/α^2 (0 to ∞) ∫e^-αx = 2/α^2 [limit x → ∞ (-1/α*e^-αx) +(1/α*e^-α0)] = 2/α^3 if α > 0.

So the integral exists only if α > 0.

And yes I made a goof in my first attempt.

The integral is of course the Laplace transform of x^2 and you could probably use some theorems from Laplace transform theory to do it, but the above does it from first principles.

Hope people can follow it now. I am not proficient these days with LaTeX, so how I express it without that may be hard to follow. Sorry - but it's been so long it would take me too long that way.

Some minor goofs fixed.

Thanks
Bill

Last edited:
Mentor
2021 Award
Ok here goes. Using integration by parts: (0 to ∞) ∫x^2*e^-αx = limit x → ∞ (-1/α*x^2*e^-αx) - (-1/α*0^2*e^-α0) + (0 to ∞) ∫2/α*x*e^-αx.

Let's look at -1/α*0^2*e^-α0 first. e^-α0 = 1. So -1/α*0^2*e^-α0 = -0/α = 0 unless α = 0. If α = 0 you get the undefined 0/0. So α cannot be zero.

Two other cases α < 0 and α > 0. Let's look at limit x → ∞ -1/α*x^2*e^-αx

Case 1: α < 0
e^-αx → ∞ as x → ∞. So limit x → ∞ -1/α*x^2*e^-αx = ∞. Hence α can not be less than 0.

Case 2 α > 0
Using L'Hôpital on limit x → ∞ -1/α*x^2*e^-αx we have limit x → ∞ (-1/α*x^2)/(e^αx) = limit x → ∞ (-1/α*2x)/(α*e^αx) = limit x → ∞ (-2/α)/(α^2*e^αx). Because α > 0, e^αx → ∞ as x → ∞, hence limit x → ∞ (-2/α)/(α^2*e^αx) = 0 and limit x → ∞ -1/α*x^2*e^-αx = 0.

Ok, the integral exists iff α > 0 and is (0 to ∞) ∫2/α*x*e^-αx. We do integration by parts again and get limit x → ∞ (-2/α^2*x*e^-αx) - (-2/α^2*0*e^-α0) + ∫2/α^2*e^-αx. Again, as e^-α0 = 1, we have -2/α^2*0*e^-α0 = 0. And using L'Hôpital again, limit x → ∞ (-2/α^2*x*e^-αx) = limit x → ∞ (-2/α^2*x)/(e^αx) = limit x → ∞ (-2/α^2)/(α*e^αx) = 0.

Hence (0 to ∞) ∫x^2*e^-αx = (0 to ∞) ∫2/α^2*e^-αx = 2/α^2 (0 to ∞) ∫e^-αx = 2/α^2 [limit x → ∞ (-1/α*e^-αx) + (1/α*e^-α0)] = 2/α^3 if α > 0.

So the integral exists only if α > 0.

And yes I made a goof in my first attempt.

The integral is of course the Laplace transform of x^2 and you could probably use some theorems from Laplace transform theory to do it, but the above does it from first principles.

Hope people can follow it now. I am not proficient these days with LaTeX, so how I express it without that may be hard to follow. Sorry - but it's been so long it would take me too long that way.

Thanks
Bill
It was more the lack of LaTeX code than the trivial ##\alpha \leq 0## cases that I had meant; it's a bit troublesome to read those inline formulas. I didn't use de L'Hôpital, but let me just add my solution for the sake of readability:
For ##\alpha = 0## we get an infinite integral. Let us now assume ##\alpha \neq 0##. By partial integration twice, we get
\begin{align*}
\int_0^\infty x^2\exp(-\alpha x)\,dx &=\left. -\dfrac{x^2 \exp(-\alpha x)}{\alpha }\right|_{0}^\infty + \dfrac{2}{\alpha}\int_0^\infty x\exp(-\alpha x)\,dx \\
&= \left. -\dfrac{x^2 \exp(-\alpha x)}{\alpha }\right|_{0}^\infty\\ &+\dfrac{2}{\alpha}\left(-\left.\dfrac{x}{\alpha}\exp(-\alpha x)\right|_0^\infty +\dfrac{1}{\alpha} \int_0^\infty \exp(-\alpha x)\,dx \right)\\
&=\dfrac{\exp(-\alpha x)}{\alpha}\left[-x^2-\dfrac{2}{\alpha}x-\dfrac{2}{\alpha^2} \right]_0^\infty \\
&=\dfrac{2}{\alpha^3} -\lim_{x \to \infty}\exp(-\alpha x)\left( \dfrac{x^2}{\alpha}+\dfrac{2x}{\alpha^2}+\dfrac{2}{\alpha^3}
\right) \\
&=\begin{cases}
\dfrac{2}{\alpha^3} &\text{ if }\alpha > 0 \\
\text{ not existent } &\text{ if }\alpha \leq 0
\end{cases}
\end{align*}
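
As a cross-check of the closed form (the quadrature scheme, cutoff ##L## and step count are my own), one can approximate the integral numerically and compare with ##2/\alpha^3##:

```python
import math

# Simpson's rule approximation of ∫_0^∞ x^2 exp(-a x) dx, truncated at x = L.
def integral(a, L=60.0, steps=200000):
    h = L / steps
    def f(x):
        return x * x * math.exp(-a * x)
    s = f(0.0) + f(L)
    s += 4.0 * sum(f((2 * k - 1) * h) for k in range(1, steps // 2 + 1))
    s += 2.0 * sum(f(2 * k * h) for k in range(1, steps // 2))
    return s * h / 3.0

for a in (0.5, 1.0, 2.0):
    print(a, integral(a), 2.0 / a**3)  # the two values agree closely
```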

bhobba
Homework Helper
Gold Member
Problem 2 I would call obvious, so I'm not sure what is required.

a) Assume we can take it that an n-degree polynomial has n roots and at most n real roots,
and that
b) between any two successive real roots of p there must be a turning point - that is, a point where p changes from increasing with increasing x to decreasing, or vice versa (that is, p' changes sign). This follows from continuity.

That is, a point where p' = 0 but p'' ≠ 0. If instead at a point p' = 0 and p'' = 0, that point is not a turning point (p' does not change sign); it is also a double (at least) root of p'. Therefore if there is such a point in the interval between two successive real roots of p, there must be at least one other point in this interval where p' = 0, in order for there to be one turning point. Thus in this case there are at least three real roots of p' between the two successive roots of p that we were considering. Therefore outside this interval there are at most (n - 4) real roots of p', and therefore at most (n - 3) real roots of p. Combined with the two already considered, there are at most (n - 1) real roots, so from (a) at least one non-real root.

I thought it was obvious, but when I came to write it out it looks farraginous - will this do?

Last edited:
Delta2
Gold Member
Is this problem 5:

Well done @julian

Gold Member
@epenguin I think you have the right idea, but it's a bit hard for me to follow what you've written.
If instead at a point p' = 0 and p'' = 0, that point is not a turning point
If $p(x)=x^3$, then $p'(0)=p''(0)=0$ and $0$ is a turning point according to your definition.

Therefore if there is such a point in the interval between two successive real roots of p, there must be at least one other point in this interval where p' = 0, in order for there to be one turning point.
I don't understand this. If $p(x)=x^4-1$, then the point $0$ is 'such a point' ($p'(0)=p''(0)=0$), but the only zero of $p'$ is zero. Do you mean to say that there are at least two roots of $p'$ counted with multiplicity (because the point is already a zero of $p'$ of multiplicity at least $2$)? Anyways, what if the point $a$ where $p'(a)=p''(a)=0$ and $p(a)\neq 0$ is not in the interval between two roots (is smaller than the smallest root or larger than the largest root)?

Where are you using the assumption $p(a)\neq 0$? The problem statement is false if you don't assume this.