Intermediate Math Challenge - June 2018

  • #1
QuantumQuest
Summer is coming and brings a new intermediate math challenge! Enjoy! If you find the problems difficult to solve don't be disappointed! Just check our other basic level math challenge thread!

RULES:
1) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored.
2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
3) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
4) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.
5) Mentors, advisors and homework helpers are kindly requested not to post solutions, not even in spoiler tags, for the challenge problems until the 16th of each month. This gives other people, including but not limited to students, the opportunity to feel more comfortable in dealing with / solving the challenge problems. In case of an inadvertent posting of a solution the post will be deleted by @fresh_42.

##1.## (solved by @tnich ) Let ##\mathbf A \in \mathbb R^{m \times m}## for a natural number ##m \geq 2## and ##p \in (0,1)##, where

$$\mathbf A := \left[\begin{matrix}
-p & 1-p & 1-p & \dots & 1-p & 1-p & 1-p \\
p & -1 & 0 & \dots & 0 & 0 & 0 \\
0 & p & -1 & \dots & 0 & 0 & 0 \\
0 & 0 & p & \dots & 0 & 0 & 0 \\
0 & 0 & 0 & \ddots & -1 & 0 & 0 \\
0 & 0 & 0 & \dots & p & -1 & 0 \\
0 & 0 & 0 & \dots & 0 & p & p-1
\end{matrix}\right]$$

prove that the minimal polynomial of ##\mathbf A## satisfies

##0 = \mathbf A^{m} + \binom{m-1}{1}\mathbf A^{m-1} + \binom{m-1}{2}\mathbf A^{m-2} + \binom{m-1}{3}\mathbf A^{m-3} + \dots + \mathbf A## ##\space## ##\space## (by @StoneTemplePython)
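Since ##\mathbf A## and ##\mathbf I## commute, the claimed sum equals ##\mathbf A(\mathbf A+\mathbf I)^{m-1}## by the binomial theorem. A quick exact-arithmetic sanity check for a few small ##m## (not a proof, and it only tests that the polynomial annihilates ##\mathbf A##, not minimality; the helper names are mine):

```python
from fractions import Fraction

def matmul(X, Y):
    """Multiply two square matrices of Fractions."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def build_A(m, p):
    """Build the m x m matrix A from the problem statement."""
    A = [[Fraction(0)] * m for _ in range(m)]
    A[0][0] = -p
    for j in range(1, m):
        A[0][j] = 1 - p
    for i in range(1, m - 1):
        A[i][i - 1] = p
        A[i][i] = Fraction(-1)
    A[m - 1][m - 2] = p
    A[m - 1][m - 1] = p - 1
    return A

for m in (2, 3, 4, 5):
    p = Fraction(1, 3)
    A = build_A(m, p)
    I = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    ApI = [[A[i][j] + I[i][j] for j in range(m)] for i in range(m)]
    # A (A+I)^{m-1} expands to the claimed binomial sum
    Q = A
    for _ in range(m - 1):
        Q = matmul(Q, ApI)
    assert all(entry == 0 for row in Q for entry in row), (m, Q)
```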

##2.## (solved by @tnich ) Find the volume of the solid ##S## which is defined by the relations ##z^2 - y \geq 0,## ##\space## ##x^2 - z \geq 0,## ##\space## ##y^2 - x \geq 0,## ##\space## ##z^2 \leq 2y,## ##\space## ##x^2 \leq 2z,## ##\space## ##y^2 \leq 2x## ##\space## ##\space## (by @QuantumQuest)

##3.## (solved by @julian ) Determine with analytical methods, i.e. using at most a calculator, the wavelengths of all local maxima of the radiation intensity of a black body of temperature ##T##, given the following radiation function, to three digits:
$$
J(\lambda) =\dfrac{c^2h}{\lambda^5\cdot\left(\exp\left(\dfrac{ch}{\lambda \kappa T}\right)-1\right)}
$$
(by @fresh_42)

##4.## (solved by @tnich ) 100 gamers walk into an arcade with 8 different video games. Each gamer plays every video game once and only once. For each video game, at least 65 of the gamers beat the final level. (These are easy games.) Prove that there must exist at least one pair of gamers who collectively beat every game / every final level. For the avoidance of doubt, this means the following:

if we record a 1 indicating that player ##k## beat the final level of game ##i##, and a 0 otherwise,

e.g. if the ##k##th player beat the final level of games 1, 3, 7 and 8 but lost on the others, we'd have


$$\mathbf p_k = \begin{bmatrix}
1\\
0\\
1\\
0\\
0\\
0\\
1\\
1
\end{bmatrix}$$

the task is to prove there must exist (at least one) vector ##\mathbf a## where

##\mathbf a := \mathbf p_k + \mathbf p_j##

and ##\mathbf a \gt 0## (i.e. each component of ##\mathbf a## is strictly positive) ##\space## ##\space## (by @StoneTemplePython)
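Not a substitute for a proof, but the statement can be stress-tested on random instances (the helper names are mine; exactly 65 winners per game is the extreme case of the hypothesis):

```python
import random
from itertools import combinations

def has_covering_pair(scores):
    """scores[k][i] == 1 iff gamer k beat the final level of game i."""
    return any(all(a | b for a, b in zip(scores[k], scores[j]))
               for k, j in combinations(range(len(scores)), 2))

random.seed(0)
for trial in range(200):
    # 100 gamers, 8 games; each game is beaten by exactly 65 random gamers
    scores = [[0] * 8 for _ in range(100)]
    for game in range(8):
        for gamer in random.sample(range(100), 65):
            scores[gamer][game] = 1
    assert has_covering_pair(scores)
```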

##5.## (solved by @Math_QED ) Solve the differential equation ##(2x - 4y + 6)dx + (x + y - 3)dy = 0## ##\space## ##\space## (by @QuantumQuest)

##6.## (resolved in post #56) Consider ##\mathfrak{su}(3)=\operatorname{span}\{\,T_3,Y,T_{\pm},U_{\pm},V_{\pm}\,\}## given by the basis elements
$$
\begin{align*}
T_3&=\frac{1}{2}\lambda_3\; , \;Y=\frac{1}{\sqrt{3}}\lambda_8\; ,\\
T_{\pm}&=\frac{1}{2}(\lambda_1\pm i\lambda_2)\; , \;U_{\pm}=\frac{1}{2}(\lambda_6\pm i\lambda_7)\; , \;V_{\pm}=\frac{1}{2}(\lambda_4\pm i\lambda_5)
\end{align*}$$

(cp. https://www.physicsforums.com/insights/representations-precision-important) where the ##\lambda_i## are the Gell-Mann matrices, and its maximal solvable Borel subalgebra

$$\mathfrak{B}:=\langle T_3,Y,T_+,U_+,V_+ \rangle$$

Now ##\mathfrak{A(B)}=\{\,\alpha: \mathfrak{g} \to \mathfrak{g}\, : \,[X,\alpha(Y)]=[Y,\alpha(X)]\,\,\forall \,X,Y\in \mathfrak{B}\,\}## is the one-dimensional Lie algebra spanned by ##\operatorname{ad}(V_+)## because ##\mathbb{C}V_+## is a one-dimensional ideal in ##\mathfrak{B}## (Proof?). Then ##\mathfrak{g}:=\mathfrak{B}\ltimes \mathfrak{A(B)}## is again a Lie algebra by the multiplication ##[X,\alpha]=[\operatorname{ad}X,\alpha]## for all ##X\in \mathfrak{B}\; , \;\alpha \in \mathfrak{A(B)}##. (For a proof see problem 9 in https://www.physicsforums.com/threads/intermediate-math-challenge-may-2018.946386/ )

a) Determine the center of ##\mathfrak{g}## , and whether ##\mathfrak{g}## is semisimple, solvable, nilpotent or neither.
b) Show that ##(X,Y) \mapsto \alpha([X,Y])## defines another Lie algebra structure on ##\mathfrak{B}## , which one?
c) Show that ##\mathfrak{A(g)}## is at least two-dimensional. ##\space## ##\space## (by @fresh_42)
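Regarding the "(Proof?)" that ##\mathbb{C}V_+## is a one-dimensional ideal in ##\mathfrak{B}##: with the stated conventions one gets ##T_+=E_{12}##, ##U_+=E_{23}##, ##V_+=E_{13}## (elementary matrices), so the required commutation relations can at least be checked numerically. A small sketch, assuming the standard Gell-Mann normalization:

```python
import numpy as np

def E(i, j):
    """Elementary 3x3 matrix with a 1 in row i, column j."""
    M = np.zeros((3, 3), dtype=complex)
    M[i - 1, j - 1] = 1
    return M

T3 = np.diag([0.5, -0.5, 0.0]).astype(complex)   # lambda_3 / 2
Y  = np.diag([1/3, 1/3, -2/3]).astype(complex)   # lambda_8 / sqrt(3)
Tp, Up, Vp = E(1, 2), E(2, 3), E(1, 3)

def comm(A, B):
    return A @ B - B @ A

# C V+ is an ideal of B: [X, V+] is a scalar multiple of V+ for every basis X
for X in (T3, Y, Tp, Up, Vp):
    C = comm(X, Vp)
    coeff = C[0, 2]          # V+ has its only nonzero entry at position (1,3)
    assert np.allclose(C, coeff * Vp)
```

This only verifies ##[T_3,V_+]=\tfrac12 V_+##, ##[Y,V_+]=V_+## and ##[T_+,V_+]=[U_+,V_+]=0## numerically; the ideal property follows from these relations.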

##7.## (solved by @julian ) Given ##m## distinct nonzero complex numbers ##x_1, x_2, \dots, x_m##, prove that

##\sum_{k=1}^m \frac{1}{x_k} \prod_{j \neq k} \frac{1}{x_k - x_j} = \frac{(-1)^{m+1}}{x_1 x_2 ... x_m}##

hint: first consider the polynomial

##p(x) = -1 + \sum_{k=1}^m \prod_{j\neq k} \frac{x - x_j}{x_k - x_j}## ##\space## ##\space## (by @StoneTemplePython)

##8.## (solved by @julian ) Find the last ##1000## digits of the number ##a = 1 + 50 + 50^2 + 50^3 + \cdots + 50^{999}## ##\space## ##\space## (by @QuantumQuest)

##9.## (solved by @julian ) Consider the Hilbert space ##\mathcal{H}=L_2([a,b])## of Lebesgue square integrable functions on ##[a,b]## , i.e.

$$\langle \psi,\chi \rangle = \int_{a}^{b}\psi(x)\chi(x)\,dx$$

The functions ##\{\,\psi_n:=x^n\, : \,n\in \mathbb{N}_0\,\}## form a system of linearly independent functions which can be used to find an orthonormal basis by the Gram-Schmidt procedure. Show that the Legendre polynomials

$$p_n(x):=\dfrac{1}{(b-a)^n\,n!}\,\sqrt{\dfrac{2n+1}{b-a}}\,\dfrac{d^n}{dx^n}[(x-a)(x-b)]^n\; , \;n\in \mathbb{N}_0$$

build an orthonormal system. ##\space## ##\space## (by @fresh_42)

##10.## a) (solved by @I like Serena ) Give an example of an integral domain (not a field) which has common divisors but does not have greatest common divisors.
b) (solved by @tnich ) Show that there are infinitely many units (invertible elements) in ##\mathbb{Z}[\sqrt{3}]##.
c) (solved by @I like Serena ) Determine the units of ##\{\,\frac{1}{2}a+ \frac{1}{2}b\sqrt{-3}\,\vert \,a+b \text{ even }\}##.
d) (solved by @I like Serena ) The ring ##R## of integers in ##\mathbb{Q}(\sqrt{-19})## is the ring of all elements, which are roots of monic polynomials with integer coefficients. Show that ##R## is built by all elements of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd. ##\space## ##\space## (by @fresh_42)
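Quick sanity checks for b) and d) (not proofs; representing elements as coefficient pairs and the candidate minimal polynomial ##x^2 - x + 5## are my own choices):

```python
from fractions import Fraction

# b) Powers of the unit 2 + sqrt(3) in Z[sqrt(3)]: track (a, b) for a + b*sqrt(3)
def mul(u, v):
    a, b = u
    c, d = v
    return (a * c + 3 * b * d, a * d + b * c)

u = (2, 1)                          # 2 + sqrt(3); norm a^2 - 3 b^2 = 1
x = (1, 0)
seen = set()
for _ in range(10):
    x = mul(x, u)
    a, b = x
    assert a * a - 3 * b * b == 1   # each power is again a unit
    seen.add(x)
assert len(seen) == 10              # all distinct, suggesting infinitely many units

# d) (1 + sqrt(-19))/2 is a root of the monic integer polynomial x^2 - x + 5;
#    represent r + s*sqrt(-19) as the pair (r, s) with rational r, s
r, s = Fraction(1, 2), Fraction(1, 2)
sq = (r * r - 19 * s * s, 2 * r * s)        # the square of the element
val = (sq[0] - r + 5, sq[1] - s)            # evaluate x^2 - x + 5
assert val == (0, 0)
```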
 
  • #2
I think I have done 7:

If

##
p(x) = -1 + \sum_{k=1}^m \prod_{j \not= k} \frac{x-x_j}{x_k-x_j}
##

then

\begin{align}
p(x_l) & = -1 +\sum_{k=1}^m \prod_{j \not= k} \frac{x_l-x_j}{x_k-x_j}
\nonumber \\
& = -1 +\sum_{k=1}^m \delta_{lk}
\nonumber \\
&= 0
\nonumber
\end{align}

for ##l = 1, 2, \dots , m##. So ##p(x)## apparently has ##m## distinct roots; however, ##p(x)## is a polynomial of degree at most ##m-1##. The only resolution is that ##p(x)## is identically zero! Therefore we have:

##
\sum_{k=1}^m \prod_{j \not= k} \frac{x-x_j}{x_k-x_j} = 1 .
##

We put ##x = 0##:

##
(-1)^{m-1} \sum_{k=1}^m \prod_{j \not= k} \frac{x_j}{x_k-x_j} = 1
##

and then divide both sides by ##(-1)^{m+1} (x_1 x_2 \dots x_m)##, noting that ##\prod_{j \neq k} x_j = (x_1 x_2 \dots x_m)/x_k##, to obtain

##
\sum_{k=1}^m \frac{1}{x_k} \prod_{j \not= k} \frac{1}{x_k-x_j} = \frac{(-1)^{m+1}}{x_1 x_2 \dots x_m}
##

which is the desired result.
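The identity can also be confirmed in exact rational arithmetic for concrete values (a sanity check of the result above, not part of the proof; the function names are mine):

```python
from fractions import Fraction

def lhs(xs):
    """Sum over k of (1/x_k) * prod over j != k of 1/(x_k - x_j)."""
    total = Fraction(0)
    for k, xk in enumerate(xs):
        term = Fraction(1) / xk
        for j, xj in enumerate(xs):
            if j != k:
                term /= (xk - xj)
        total += term
    return total

def rhs(xs):
    """(-1)^(m+1) / (x_1 x_2 ... x_m)."""
    prod = Fraction(1)
    for xk in xs:
        prod *= xk
    return Fraction((-1) ** (len(xs) + 1)) / prod

xs = [Fraction(1), Fraction(2), Fraction(-3), Fraction(5, 2), Fraction(7)]
assert lhs(xs) == rhs(xs)
```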
 
  • #3
I have proven that we have an orthonormal system for problem 9:

We prove the orthonormality of the ##p_n (x)##'s.

We first derive a preliminary result that will be needed in the calculations that follow. Applying Leibniz's rule:

##
\frac{d^p}{dx^p} (f (x) g(x)) = \sum_{q=0}^p
\begin{pmatrix}
p \\ q
\end{pmatrix}
\frac{d^{p-q}}{dx^{p-q}} f (x) \frac{d^q}{dx^q} g(x)
##

to

##
\frac{d^l}{dx^l} [(x-a)^n (x-b)^n]
##

we obtain

\begin{align}
& \frac{d^l}{dx^l} [(x-a)^n (x-b)^n] =
\nonumber \\
& = \sum_{p=0}^l
\begin{pmatrix}
l \\ p
\end{pmatrix}
\frac{d^{l-p}}{dx^{l-p}} (x-a)^n \frac{d^p}{dx^p} (x-b)^n
\nonumber \\
& =
\sum_{p=0}^l
\begin{pmatrix}
l \\ p
\end{pmatrix}
\frac{n!}{(n-l+p)!} (x-a)^{n-l+p} \frac{n!}{(n-p)!} (x-b)^{n-p}
\nonumber
\end{align}

Note that for ##l < n## we have

##
\frac{d^l}{dx^l} [(x-a)^n (x-b)^n] \Big|_{x=a} = \frac{d^l}{dx^l} [(x-a)^n (x-b)^n] \Big|_{x=b} = 0 . \qquad (1)
##

This is the result we sought and which we will use repeatedly in the following.

Orthogonality:

We first show the orthogonality of ##p_n (x)## and ##p_m (x)## where ##n \not= m## by proving:

##
\int_a^b \frac{d^n}{dx^n} [(x-a) (x-b)]^n \frac{d^m}{dx^m} [(x-a) (x-b)]^m dx = 0 .
##

Without loss of generality we take ##n > m##, then by repeated integration by parts we obtain:

\begin{align}
& \int_a^b \frac{d^n}{dx^n} [(x-a) (x-b)]^n \frac{d^m}{dx^m} [(x-a) (x-b)]^m dx =
\nonumber \\
& = \Big[ \frac{d^{n-1}}{dx^{n-1}} [(x-a) (x-b)]^n \frac{d^m}{dx^m} [(x-a) (x-b)]^m \Big]_a^b
\nonumber \\
& \quad - \int_a^b \frac{d^{n-1}}{dx^{n-1}} [(x-a) (x-b)]^n \frac{d^{m+1}}{dx^{m+1}} [(x-a) (x-b)]^m dx
\nonumber \\
&= \qquad \vdots
\nonumber \\
& = (-1)^{l+1} \Big[ \frac{d^{n-l}}{dx^{n-l}} [(x-a) (x-b)]^n \frac{d^{m+l-1}}{dx^{m+l-1}} [(x-a) (x-b)]^m \Big]_a^b
\nonumber \\
& \quad + (-1)^l \int_a^b \frac{d^{n-l}}{dx^{n-l}} [(x-a) (x-b)]^n \frac{d^{m+l}}{dx^{m+l}} [(x-a) (x-b)]^m dx
\nonumber \\
& = (-1)^l \int_a^b \frac{d^{n-l}}{dx^{n-l}} [(x-a) (x-b)]^n \frac{d^{m+l}}{dx^{m+l}} [(x-a) (x-b)]^m dx
\nonumber
\end{align}

where we have used (1). By taking ##l = m+1 \; (\leq n)## we have the desired result because

##
\frac{d^{2m+1}}{dx^{2m+1}} [(x-a) (x-b)]^m = 0 .
##

Normalization:

We write

##
p_n (x) = \mathcal{N} \frac{d^n}{dx^n} [(x-a) (x-b)]^n
##

and calculate ##\mathcal{N}## by requiring

##
\int_a^b [p_n (x)]^2 dx = 1 .
##

We consider the integral:

\begin{align}
I & = \mathcal{N}^2 \int_a^b \frac{d^n}{dx^n} [(x-a)(x-b)]^n \frac{d^n}{dx^n} [(x-a)(x-b)]^n dx
\nonumber \\
& = - \mathcal{N}^2 \int_a^b \frac{d^{n-1}}{dx^{n-1}} [(x-a)(x-b)]^n \frac{d^{n+1}}{dx^{n+1}} [(x-a)(x-b)]^n dx
\nonumber \\
& \qquad \vdots
\nonumber \\
& = \mathcal{N}^2 (-1)^n \int_a^b [(x-a)(x-b)]^n \frac{d^{2n}}{dx^{2n}} [(x-a)(x-b)]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int_a^b [(x-a)(x-b)]^n dx
\nonumber
\end{align}

where we have again used repeated integration by parts and (1). We now simplify ##I## further:

\begin{align}
I & = \mathcal{N}^2 (-1)^n (2n)! \int_a^b [(x-a)(x-b)]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int_a^b [x^2 - (a+b) x + ab]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int_a^b \Big[ \Big( x - \frac{a+b}{2} \Big)^2 - \Big( \frac{a-b}{2} \Big)^2 \Big]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int^{\frac{b-a}{2}}_{- \frac{b-a}{2}} \Big[ y^2 - \Big( \frac{a-b}{2} \Big)^2 \Big]^n dy \qquad \quad (y = x- \frac{a+b}{2})
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \Big( \frac{a-b}{2} \Big)^{2n} \int^{\frac{b-a}{2}}_{- \frac{b-a}{2}} \Big[ \Big( \frac{2 y}{b-a} \Big)^2 - 1 \Big]^n dy
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \Big( \frac{b-a}{2} \Big)^{2n+1} \int^1_{-1} (u^2 - 1)^n du \qquad \quad (u = \frac{2y}{b-a})
\nonumber
\end{align}

We use repeated integration by parts in order to solve the above integral:

\begin{align}
& \int^1_{-1} 1 \cdot (u^2 - 1)^n du =
\nonumber \\
& = [u (u^2 - 1)^n]_{-1}^1 - 2 n \int^1_{-1} u^2 (u^2 - 1)^{n-1} du
\nonumber \\
& = - 2n \Big[ \frac{u^3}{3} (u^2 - 1)^{n-1} \Big]_{-1}^1 + 2^2 n (n-1) \int^1_{-1} \frac{u^4}{3} (u^2 - 1)^{n-2} du
\nonumber \\
& \qquad \vdots
\nonumber \\
& = (-1)^{n+1} 2^{n-1} n! \Big[ \frac{u^{2n-1}}{(2n-1)!!} (u^2 - 1) \Big]_{-1}^1
\nonumber \\
& \quad + (-1)^n 2^n n! \int^1_{-1} \frac{u^{2n}}{(2n-1)!!} du
\nonumber \\
& = (-1)^n \frac{2^n n!}{(2n-1)!!} \Big[ \frac{u^{2n+1}}{2n+1} \Big]_{-1}^1
\nonumber \\
& = (-1)^n\frac{2}{2n+1} \frac{2^n n!}{(2n-1)!!}
\nonumber \\
& = (-1)^n \frac{2}{2n+1} \frac{2^n n! 2n (2n-2) \dots 2}{2n (2n-1) (2n -2) \dots 2}
\nonumber \\
& = (-1)^n \frac{2}{2n+1} \frac{2^{2n} (n!)^2}{(2n)!}
\nonumber
\end{align}

So that

\begin{align}
I & = \mathcal{N}^2 (-1)^n (2n)! \Big( \frac{b-a}{2} \Big)^{2n+1} \times (-1)^n \frac{2}{(2n+1)} \frac{2^{2n} (n!)^2}{(2n)!}
\nonumber \\
& = \mathcal{N}^2 (b-a)^{2n} (n!)^2 \frac{b-a}{2n+1}
\nonumber
\end{align}

Requiring ##I = 1## implies:

\begin{align}
\mathcal{N} = \frac{1}{(b-a)^n n!} \sqrt{\frac{2n+1}{b-a}}
\nonumber
\end{align}

the desired result.
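For a concrete interval, e.g. ##[a,b]=[0,1]##, the orthonormality of the first few ##p_n## can be confirmed symbolically (a sanity check of the result, not part of the proof):

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, 1          # concrete interval for the check

def p(n):
    """Rodrigues-type formula from the problem statement, on [a, b]."""
    poly = ((x - a) * (x - b)) ** n
    return (sp.Rational(1) / ((b - a) ** n * sp.factorial(n))
            * sp.sqrt(sp.Rational(2 * n + 1, b - a))
            * sp.diff(poly, x, n))

for n in range(4):
    for m in range(4):
        ip = sp.integrate(sp.expand(p(n) * p(m)), (x, a, b))
        expected = 1 if n == m else 0
        assert sp.simplify(ip - expected) == 0, (n, m, ip)
```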
 
  • #4
My solution for Problem 5:
We must solve
$$ (2x - 4y + 6) dx + (x + y - 3) dy = 0 $$
My first step is to shift ##x## and ##y##: ##x \to x + 1##, ##y \to y + 2##.
This gives:
$$ (2x - 4y) dx + (x + y) dy = 0 $$
This equation is now homogeneous, and we can solve it by letting ##y = x\,z(x)##. This gives a differential equation for ##z## in ##x##:
$$ x \frac{dz}{dx} = - \frac{(z-1) (z-2)}{z+1} $$
Rearranging gives us
$$ \frac{dx}{x} = - \frac{(1+z) dz}{(1-z)(2-z)} $$
This has solution
$$ x = x_0 \frac{(1-z)^2}{(2-z)^3} $$

Using the original variables,
$$ x = 1 + x_0 \frac{(1-z)^2}{(2-z)^3} $$
$$ y = 2 + x_0 \frac{z (1-z)^2}{(2-z)^3} $$
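Eliminating the parameter, the solution above says ##G(x,y) = (x-1)(2-z)^3/(1-z)^2## with ##z=(y-2)/(x-1)## is constant along solution curves. A symbolic check that such implicit curves do satisfy the original equation (the name ##G## for the conserved quantity is mine):

```python
import sympy as sp

x, y = sp.symbols('x y')
z = (y - 2) / (x - 1)
# conserved quantity read off from the parametric solution
G = (x - 1) * (2 - z) ** 3 / (1 - z) ** 2

# along G = const:  dy/dx = -Gx/Gy; the ODE demands dy/dx = -(2x - 4y + 6)/(x + y - 3)
dydx_curve = -sp.diff(G, x) / sp.diff(G, y)
dydx_ode = -(2 * x - 4 * y + 6) / (x + y - 3)
assert sp.simplify(dydx_curve - dydx_ode) == 0
```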
 
  • #5
julian said:
I have proven that we have an orthonormal system for problem 9:...
That's correct. You could have omitted the quadratic integration of ##I## by starting with the integration by parts on ##(x-a)^n(x-b)^n##, but this way we have seen an extra integration trick.
 
  • #6
lpetrich said:
My solution for Problem 5:...

The solutions of the differential equation must be expressed in terms of ##x##, ##y##, numbers and constants.
 
  • #7
My run at Problem 2:

I used a sort of brute force algebraic approach and got a strange object for "S"... I never disposed of all the inequalities, so the resulting "solid" has a kind of "prerogative volume"...

Interpreting the problem as presented
x^2 - z >= 0 and x^2 <= 2z
y^2 - x >= 0 and y^2 <= 2x
z^2 - y >= 0 and z^2 <= 2y

Reducing left side inequalities
x^2 >= z
y^2 >= x
z^2 >= y

Combining back with right side inequalities
z <= x^2 <= 2z
x <= y^2 <= 2x
y <= z^2 <= 2y

Substituting
z <= (y^2)^2 <= 2z
x <= (z^2)^2 <= 2x
y <= (x^2)^2 <= 2y

Substituting again to get all three expressed in their respective variable
z <= ((z^2)^2)^2 <= 2z
x <= ((x^2)^2)^2 <= 2x
y <= ((y^2)^2)^2 <= 2y

Separating and simplifying the inequalities to examine the variables
z <= z^8 when 0<= z and z^8 <= 2z when z <= 2^(1/7)
x <= x^8 when 0<= x and x^8 <= 2x when x <= 2^(1/7)
y <= y^8 when 0<= y and y^8 <= 2y when y <= 2^(1/7)

Aggregating the above
0 <= z <= 2^(1/7)
0 <= x <= 2^(1/7)
0 <= y <= 2^(1/7)

Unfortunately this describes the solid "S" as a rather variable rectangular solid with one of its vertices at the origin. The solid resides in the positive octant where x, y, and z are all >= 0, but each of x, y, and z is allowed to range from 0 to 2^(1/7). So the shape of "S" takes infinitely many rectangular forms within the boundaries of a cube of side 2^(1/7), also with one of its vertices at the origin... the volume of solid "S" may range from 0 to (2^(1/7))^3, or from 0 to about 1.3459
 
  • #8
bahamagreen said:
Substituting
z <= (y^2)^2 <= 2z
[...]
That step only works for some special cases at the "faces" of the volume.
While y^2 can equal x for some points, it is not true for all points in the volume.
 
  • #9
I think I have done problem 8:

We have that

##
1 + 50 + 50^2 + \cdots + 50^{999} = \frac{50^{1000} - 1}{49}
##

The decimal expansion of a rational number always either terminates after a finite number of digits or begins to repeat the same finite sequence of digits over and over. We could consider

##
\frac{5^{1000}}{49} \cdot 10^{1000}
##

(and then subtract off ##1/49## to get an integer) but this appears intractable.

We need to come up with a neat way of "splitting" the number.

But first note we have an additional freedom in that

##
1 + 50 + 50^2 + \cdots + 50^{999} + \sum_{q=1}^p 50^{999+q}
##

has the same last 1000 digits as ##1 + 50 + 50^2 + \cdots + 50^{999}## for any integer ##p \geq 1## as

\begin{align}
50^{1000} & = 5^{1000} \times 10^{1000} = \dots \dots 25 \underbrace{00 \dots \dots 00 \;}_{1000 \; \text{digits}}
\nonumber \\
50^{1001} & = 5^{1001} \times 10^{1001} = \dots \dots 25 \underbrace{000 \dots \dots 00 \;}_{1001 \; \text{digits}}
\nonumber \\
& \text{etc}
\nonumber
\end{align}

We take this auxiliary sum

##
1 + 50 + 50^2 + \cdots + 50^{999} + \sum_{q=1}^p 50^{999+q}
##

and try the following "splitting":

\begin{align}
& \frac{50^{1000+p} - 1}{49} =
\nonumber \\
& = \frac{5^{1000+p} \times 10^{1000+p} - 1}{49}
\nonumber \\
& = \frac{(k \cdot 49 + b) \times 10^{1000+p} - 1}{49}
\nonumber \\
& = k 10^{1000+p} + \frac{b \times 10^{1000+p} - 1}{49}
\nonumber
\end{align}

where ##k## and ##b## are integers such that

\begin{align}
\frac{5^{1000+p} - b}{49} = k .
\nonumber
\end{align}

The problem is then to find the last 1000 digits of

\begin{align}
\frac{b}{49} \times 10^{1000+p} - \frac{1}{49}
\nonumber
\end{align}

when suitable values of ##p##, ##k## and ##b## are chosen.

We introduce some elements from number theory (see for example "Elementary Number Theory" by Underwood Dudley). Recall ##x = y \; \text{mod} \; n## means ##x = k \cdot n + y## for some integer ##k##.

Euler's theorem then states:

If ##n## is a positive integer and ##a##, ##n## are coprime, then ##a^{\varphi (n)} = 1 \; \text{mod} \; n##, where ##\varphi (n)## is Euler's totient function. (The totient ##\varphi (n)## of a positive integer ##n## is defined to be the number of positive integers less than or equal to ##n## that are coprime to ##n##.)

There is a formula for calculating ##\varphi (n)## from the prime-power decomposition ##n = p_1^{e_1} p_2^{e_2} \dots p_k^{e_k}##:

##
\varphi (n) = n \Big( 1 - \frac{1}{p_1} \Big) \Big( 1 - \frac{1}{p_2} \Big) \dots \Big( 1 - \frac{1}{p_k} \Big) .
##

Now ##49## has the prime-power decomposition ##49 = 7^2##, so that we find ##\varphi (49) = 49 - 7 = 42##. With a bit of experimentation you find that ##1008 / 42 = 24##, so we will take ##p=8##. Now ##5^{24}## and ##49## are coprime; to see this, note that the divisors of ##49## are just ##1##, ##7##, ##49## and that neither ##7## nor ##49## divides ##5^{24}##. Thus we can employ Euler's theorem:

\begin{align}
5^{1008} - 1 & = 5^{24 \times 42} - 1
\nonumber \\
& = (5^{24})^{42} - 1
\nonumber \\
& = (5^{24})^{\varphi (49)} - 1
\nonumber \\
& = k \cdot 49
\nonumber
\end{align}

(we've taken ##b = 1## in the above "splitting").

So we have arrived at the fact that the integer

\begin{align}
\frac{10^{1008} - 1}{49} = \frac{10^{1008}}{49} - \frac{1}{49}
\nonumber
\end{align}

has the same last 1000 digits as ##1+ 50 + 50^2 + \cdots + 50^{999}##. We consider

\begin{align}
\frac{1}{49} \times 10^{1008}
\nonumber
\end{align}

which will be a lot more tractable than the original fraction we looked at. We could find ##1/49## using long division or wolfram. The result is the repeating decimal:

##
\frac{1}{49} = 0.\underbrace{020408163265306122448979591836734693877551 \; \; \; \;}_{\text{period} \; 42}
##

The period is ##42## but recall that ##1008/42 = 24##, so that multiplying ##1/49## by ##10^{1008}## produces:

\begin{align}
& \frac{1}{49} \times 10^{1008} =
\nonumber \\
& = 020408163265306122448979591836734693877551
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551
\nonumber \\
& \qquad \vdots
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551 \qquad (24th \; \text{row})
\nonumber \\
& \quad .020408163265306122448979591836734693877551 \dots \dots
\nonumber
\end{align}

Subtracting off ##1/49## then removes the part to the right of the decimal point.

So that finally we can say that the last 1000 digits of the original sum are given by

\begin{align}
& \quad \; 3265306122448979591836734693877551
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551
\nonumber \\
& \qquad \vdots
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551 \qquad (24th \; \text{row})
\nonumber
\end{align}
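The whole computation can be confirmed directly with big-integer arithmetic (a sanity check of the result, not needed for the argument):

```python
# Direct big-integer check of the claimed last 1000 digits
a = sum(50 ** i for i in range(1000))
period = "020408163265306122448979591836734693877551"   # repeating block of 1/49
assert len(period) == 42
claimed = (period * 24)[-1000:]      # 23 full rows preceded by the last 34 digits
assert str(a)[-1000:] == claimed
assert claimed.startswith("3265306122448979591836734693877551")
```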
 
  • #10
mfb said:
That step only works for some special cases at the "faces" of the volume.
While y^2 can equal x for some points, it is not true for all points in the volume.

Is this a problem of powers within inequalities or something else?
Do you mean that the substitution requires that y^2 = x ?
To be clear, if z <= x^2 and x <= y^2 this does not mean z <= (y^2)^2 ?
 
  • #11
bahamagreen said:
My run at Problem 2:

I used a sort of brute force algebraic approach and got a strange object for "S"... I never disposed of all the inequalities, so the resulting "solid" has a kind of "prerogative volume"...

You start correctly using the given inequalities but take note of what @mfb points out in post #8. In other words, the solid ##S## is surrounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. As an extra hint I would point you towards the direction of making some transformation(s) and seeing if the notion of a determinant is any useful in this case.
 
  • #12
QuantumQuest said:
You start correctly using the given inequalities but take note of what @mfb points out in post #8. In other words, the solid ##S## is surrounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. As an extra hint I would point you towards the direction of making some transformation(s) and seeing if the notion of a determinant is any useful in this case.

Conceptual rather than algebraic... thanks mfb and QQ.
 
  • #13
julian said:
I think I have done problem 8:

Well done @julian. Just one comment: You can decide about ##\frac{1}{49}## in the early phase i.e. when we are at ##1 + 50 + 50^2 + \cdots + 50^{999} = \frac{50^{1000} - 1}{49}## and then seek for a faster way in order to decide about the difference (the aforementioned fraction split in two pieces). There is absolutely no problem in your reasoning but I leave this faster way for you - if you want / have the time to think about it, or anyone else wanting to solve it which will also get credit.
 
  • #14
bahamagreen said:
To be clear, if z <= x^2 and x <= y^2 this does not mean z <= (y^2)^2 ?
That is correct, but in general it is a weaker condition, and it doesn’t work combined with the other inequality you used (the comparison to 2z).

Every multiplication by 5 shifts the repeating pattern of 1/49 by a fixed number of digits.
 
  • #15
I think I've done problem 3:

We have the formula for intensity as a function of the wavelength ##\lambda##,

\begin{align}
J (\lambda) = \frac{c^2 h}{\lambda^5 \cdot \Big( \exp \Big( \frac{ch}{\lambda \kappa T} \Big) - 1 \Big)} .
\nonumber
\end{align}

Put ##\alpha = \frac{ch}{\kappa T}## to simplify things slightly. Then

\begin{align}
J (\lambda) = \frac{c^2 h}{\lambda^5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)} .
\nonumber
\end{align}

We note some properties of the function ##J (\lambda)##. First

\begin{align}
\lim_{\lambda \rightarrow 0} J (\lambda) & = \lim_{\lambda \rightarrow 0} \frac{c^2 h}{\lambda^5 \exp \Big( \frac{\alpha}{\lambda} \Big)}
\nonumber \\
& = \lim_{\mu \rightarrow \infty} \frac{c^2 h \mu^5}{ \exp (\alpha \mu)} \qquad (\mu = 1 / \lambda)
\nonumber \\
& = \lim_{\mu \rightarrow \infty} \frac{c^2 h 5!}{\alpha^5 \exp (\alpha \mu)} \quad (\text{repeated use of L'Hôpital's rule})
\nonumber \\
& = 0 .
\nonumber
\end{align}

Next

\begin{align}
\lim_{\lambda \rightarrow \infty} J (\lambda) & = \lim_{\lambda \rightarrow \infty} \frac{c^2 h}{\lambda^5 \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1\Big)}
\nonumber \\
& = \lim_{\mu \rightarrow 0} \frac{c^2 h \mu^5}{\exp ( \alpha \mu ) - 1} \qquad (\mu = 1 / \lambda)
\nonumber \\
& = \lim_{\mu \rightarrow 0} \frac{c^2 h 5 \mu^4}{\alpha \exp (\alpha \mu)} \qquad (\text{use of L'Hôpital's rule})
\nonumber \\
& = 0 .
\nonumber
\end{align}

Also, it is easy to see that ##J (\lambda)## is a non-negative function (as it should be physically).

We now look for local extrema in the intensity in the region ##0< \lambda < \infty##. We differentiate ##J (\lambda)## and put the result equal to zero

\begin{align}
\frac{d}{d \lambda} J (\lambda) & = \frac{d}{d \lambda} \frac{c^2 h}{\lambda^5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)}
\nonumber \\
& = - c^2 h \frac{ \frac{d}{d \lambda} \Big\{ \lambda^5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big) \Big\} }{\lambda^{10} \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)^2}
\nonumber \\
& = - c^2 h \frac{ \Big\{ 5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big) + \lambda \cdot \Big( - \frac{\alpha}{\lambda^2} \Big) \exp \Big( \frac{\alpha}{\lambda} \Big) \Big\} }{\lambda^6 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)^2}
\nonumber \\
& = 0 .
\nonumber
\end{align}

This implies

\begin{align}
- 5 \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big) + \frac{\alpha}{\lambda} \exp \Big( \frac{\alpha}{\lambda} \Big) = 0
\nonumber
\end{align}

Put

##
x = \frac{ch}{\lambda k T} = \frac{\alpha}{\lambda} .
##

Then the above equation becomes ##-5 ( \exp ( x ) - 1 ) + x \exp ( x ) = 0## or

\begin{align}
- 5 + x + 5 e^{-x} = 0 .
\nonumber
\end{align}

Obviously ##x = 0## is a root and corresponds to ##\lambda = \infty##. Consider the diagram in which the functions ##5 - x## and ##5 e^{-x}## are plotted on top of each other. We already know they intersect at ##x = 0##, and since ##5 e^{-x}## is strictly convex and ##5 - x## is a straight line, they can intersect at most twice in total, i.e. at most once more. To see that they do indeed intersect again, and for positive ##x##, we simply look at the derivatives of the two functions at ##x = 0##:

\begin{align}
\frac{d}{dx} (5 e^{-x}) \Big|_{x=0} & = -5
\nonumber \\
\frac{d}{dx} (5 - x) \Big|_{x=0} & = -1
\nonumber
\end{align}

which imply that initially ##5e^{-x}## goes below ##5 - x## as we move away from ##x = 0## in the direction of positive ##x##. And since ##5 - x \rightarrow -\infty## as ##x \rightarrow \infty## while ##5 e^{-x} > 0## for all ##x##, the two graphs must cross once more at some positive ##x##.

Given we have this one other root (##x > 0##), and given that ##\lim_{\lambda \rightarrow 0 } J (\lambda) = 0##, ##\lim_{\lambda \rightarrow \infty} J (\lambda) = 0## and that ##J (\lambda)## is non-negative, we know that there will be just the one local maximum in ##J (\lambda)## (actually a global maximum then).

I will use trial and error to estimate this root of ##- 5 + x + 5 e^{-x} = 0##, which we denote ##x_m##. Notice that as ##x## increases, ##\lambda## decreases. Write ##f(x) = 5 e^{-x} - (5 - x)##; a maximum in ##J (\lambda)## will be indicated by the sign of ##f (x)## going from negative to positive as ##x## increases.

But first:

Planck's constant ##h = 6.626070040(81) \times 10^{-34} \; J \cdot s##
Speed of light ##c = 299,792,458 \; m/s##
Boltzmann's constant ##k = 1.38064852(79) \times 10^{-23} \; J/K## .

So

##
\frac{hc}{k} = \frac{6.626070 \times 10^{-34} \times 299792458}{1.3806485 \times 10^{-23}} = 0.01438777 \; m \cdot K .
##

Our wavelength of the maximum intensity, ##\lambda_m##, will be

##
\lambda_m = \frac{hc}{x_m k T} = \frac{1}{T} \frac{1}{x_m} \times 0.01438777 \; m .
##

I now use trial and error to approximate ##x_m## using my calculator. First note

##5 e^{-4} = 0.0916## and ##5-4 = 1## ##\qquad \qquad \qquad \; f (4) < 0##
##5 e^{-5} = 0.0337## and ##5-5 = 0## ##\qquad \qquad \qquad \; f (5) > 0##

Implying the root ##x_m## will lie between ##x = 4## and ##x = 5##. Try:

##5 e^{-4.7} = 0.0455## and ##5-4.7 = 0.3## ##\qquad \qquad f (4.7) < 0##
##5 e^{-4.8} = 0.0411## and ##5-4.8 = 0.2## ##\qquad \qquad f (4.8) < 0##
##5 e^{-4.9} = 0.0372## and ##5-4.9 = 0.1## ##\qquad \qquad f (4.9) < 0##
##5 e^{-4.95} = 0.0354## and ##5-4.95 = 0.05## ##\qquad \quad f (4.95) < 0##
##5 e^{-4.96} = 0.0351## and ##5-4.96 = 0.04## ##\qquad \quad f (4.96) < 0##
##5 e^{-4.97} = 0.0347## and ##5-4.97 = 0.03## ##\qquad \quad f (4.97) > 0## .

This implies the root ##x_m## lies between ##4.96## and ##4.97##. Next try:

##5 e^{-4.961} = 0.03502959## and ##5-4.961 = 0.039## ##\qquad \quad f (4.961) < 0##
##5 e^{-4.962} = 0.03499458## and ##5-4.962 = 0.038## ##\qquad \quad f (4.962) < 0##
##5 e^{-4.963} = 0.03495960## and ##5-4.963 = 0.037## ##\qquad \quad f (4.963) < 0##
##5 e^{-4.964} = 0.03492466## and ##5-4.964 = 0.036## ##\qquad \quad f (4.964) < 0##
##5 e^{-4.965} = 0.03488975## and ##5-4.965 = 0.035## ##\qquad \quad f (4.965) < 0##
##5 e^{-4.966} = 0.03485488## and ##5-4.966 = 0.034## ##\qquad \quad f (4.966) > 0## .

This implies the root ##x_m## lies between ##4.965## and ##4.966##. It is easy to get more accuracy. Try:

##5 e^{-4.9651} = 0.03488626## and ##5-4.9651 = 0.0349## ##\quad \quad f (4.9651) < 0##
##5 e^{-4.9652} = 0.03488278## and ##5-4.9652 = 0.0348## ##\quad \quad f (4.9652) > 0##

and then try

##5 e^{-4.96511} = 0.03488592## and ##5-4.96511 = 0.03489## ##\; \; f (4.96511) < 0##
##5 e^{-4.96512} = 0.03488557## and ##5-4.96512 = 0.03488## ##\; \; f (4.96512) > 0## .

So the root ##x_m## lies between ##x = 4.96511## and ##x = 4.96512##, and so ##1/x_m## lies between

##
0.2014050013 \leq 1/x_m \leq 0.2014054069 .
##

So we can say

##
1/x_m \approx 0.201405
##

Then approximately we have

\begin{align}
\lambda_m & = \frac{1}{T} \frac{1}{x_m} \times \frac{ch}{k}
\nonumber \\
& = \frac{1}{T} 0.201405 \times 0.01438777
\nonumber \\
& = \frac{1}{T} 2.89777 \times 10^{-3}\; m .
\nonumber
\end{align}

Or to 3 significant figures:

\begin{align}
\lambda_m & = \frac{1}{T} 2.90 \times 10^{-3} \; m .
\nonumber
\end{align}
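The hand search above is easy to automate. A short bisection sketch (my own check, not part of the derivation; it brackets the root of ##f(x) = 5e^{-x} - (5 - x)## on ##[4,5]## and then forms the Wien constant):

```python
import math

def f(x):
    # f(x) = 5 e^{-x} - (5 - x); its positive root is x_m
    return 5 * math.exp(-x) - (5 - x)

# f(4) < 0 and f(5) > 0 bracket the root, so bisect.
lo, hi = 4.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
x_m = 0.5 * (lo + hi)       # approximately 4.965114

# Wien displacement constant b = h c / (x_m k), in m*K.
h = 6.62607004e-34          # Planck's constant, J*s
c = 299792458.0             # speed of light, m/s
k = 1.38064852e-23          # Boltzmann's constant, J/K
b = h * c / (x_m * k)       # approximately 2.89777e-3 m*K
print(x_m, b)
```

This agrees with the tabulated root and the ##2.90 \times 10^{-3} \; m \cdot K## result above.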
 
  • #16
Correct, and well done!

I think you're a bit sloppy with L'Hôpital (missing minus sign, and it goes ##\frac{f\,'}{g\,'}\to c## then ##\frac{f}{g} \to c##, which means a bit more care with the order of the argument) but the conclusion is correct.

I have a slightly different approach to determine the maximum which I like to mention in case you are interested, i.e. no criticism of your version. (I used copy and paste so the variable names are different, but it's not difficult to see the principle.)
$$
f(t):=5(1-e^{-t})\overset{!}{=}t \quad \text{ with } t=x^{-1}
$$
Because of ##f'(t)>1## on ##[0,\log 5]## the function ##f(t)-t## is strictly monotone increasing there and ##f(t)>t## since ##f(0)=0##. For ##t> \log 5## we get that ##f(t)-t## is strictly monotone decreasing and thus has at most one zero. As ##f(4)-4>0## and ##f(5)-5<0## there is exactly one zero ##t^*## in ##[4,5]## by the intermediate value theorem. Now
$$
q:=\sup\{\,|f'(t)|\, : \,t\in [4,5]\,\}=f'(4)=5e^{-4}=0.09157 < 1
$$
and by the fixed-point theorem and a sequence ##t_{n+1}:=f(t_n) \; , \; t_1=5## we have
$$
|t^*-t_n| < \dfrac{q}{1-q}\,\,|t_n-t_{n-1}|<0.1008\,\,|t_n-t_{n-1}|
$$
and thus ##t^* = 4.965114\,\pm\,10^{-6}## or ##x^*=0.2014052\,\pm\,10^{-7}## with only a few iterations.
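To illustrate how few iterations are needed, here is a minimal sketch of the scheme in the ##t##-variable (my own illustration; the stopping threshold ##10^{-6}## is an arbitrary choice):

```python
import math

# Fixed-point iteration t_{n+1} = 5 (1 - exp(-t_n)) starting at t_1 = 5,
# stopped via the a-posteriori bound |t* - t_n| < q/(1-q) |t_n - t_{n-1}|.
q = 5 * math.exp(-4)        # contraction constant on [4, 5]
t_prev, t = None, 5.0
steps = 0
while t_prev is None or q / (1 - q) * abs(t - t_prev) >= 1e-6:
    t_prev, t = t, 5 * (1 - math.exp(-t))
    steps += 1
print(steps, t)             # t converges to about 4.965114
```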
 
  • #17
Thread is now open to all. We still have unsolved problems: 1, 2, 5, 6 and 10.

So Homework Helpers, Science Advisors and Mentors, can you help?
 
  • #18
Here is a solution to 4.

For brevity, I am going to call beating the final level of a game a "win". Since each of the 8 games has at least 65 wins, there are at least 520 wins in total. This forces at least one gamer to have at least 6 wins: suppose not; then no gamer has more than 5 wins and the total number of wins for the 100 gamers is at most 500, which is a contradiction. This is an application of the pigeonhole principle.

Given that at least one gamer has at least 6 wins, we choose one of them, whom we will call Gamer X. Of the games on which Gamer X has achieved a win, we choose 6. We know that at least 65 gamers have wins on each of the two remaining games, which we will call Game 7 and Game 8. For each of those two games, at least 64 of those gamers are not Gamer X. Let ##S_7## be the set of gamers who have won Game 7, excluding Gamer X. Let ##S_8## be the set of gamers who have won Game 8, excluding Gamer X. Then ##|S_7| \geq 64## and ##|S_8| \geq 64##.

We need to show that there is at least one gamer (not Gamer X) who has wins on both Game 7 and Game 8. Suppose this is not the case. Then ##S_7\cap S_8 = \emptyset## and the number of gamers in ##S_7 \cup S_8## is ##|S_7| + |S_8| \geq 128##. But we know that the number of gamers is 100, so we have reached a contradiction. Therefore, at least one gamer has won both Game 7 and Game 8.

Now we have shown that at least one gamer has wins on 6 of the games, and another gamer has wins on the other two. So there is at least one collection of two gamers who collectively beat every game / every final level.
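The recipe in the proof can also be run on a concrete assignment. In this sketch the staggered construction (each game won by exactly 65 gamers) is my own, purely for illustration:

```python
# 100 gamers, 8 games; game g is won by the 65 gamers g*13, g*13 + 1, ...
# (mod 100), staggered so the winner sets of the games differ.
wins = [[False] * 8 for _ in range(100)]
for g in range(8):
    for i in range(65):
        wins[(g * 13 + i) % 100][g] = True

# Step 1 (pigeonhole): 520 wins over 100 gamers force someone to have >= 6.
x = max(range(100), key=lambda i: sum(wins[i]))
assert sum(wins[x]) >= 6

# Step 2: pick 6 games won by Gamer X; each of the two remaining games has
# at least 64 winners other than X, and 64 + 64 > 99, so some other gamer
# wins both of them.
six = [g for g in range(8) if wins[x][g]][:6]
rest = [g for g in range(8) if g not in six]
y = next(i for i in range(100) if i != x and all(wins[i][g] for g in rest))

# Together, Gamer X and Gamer Y cover all 8 games.
covered = all(wins[x][g] or wins[y][g] for g in six + rest)
print(x, y, covered)
```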
 
  • #19
tnich said:
Here is a solution to 4.

...

I had a much sneakier probabilistic argument in mind but your approach of pigeon hole + inclusion-exclusion is more direct. Well done.
 
  • #20
Here is a solution to 1.
Given
$$
\mathbf A := \left[\begin{matrix}
- p_{} & 1 - p_{}& 1 - p_{} & \dots &1 - p_{} &1 - p_{} & 1 - p_{}
\\p_{} & -1&0 &\dots & 0 & 0 & 0
\\0 & p_{}&-1 &\dots & 0 & 0 & 0
\\0 & 0& p_{}&\dots & 0 & 0 & 0
\\0 & 0&0 & \ddots & -1&0 &0
\\0 & 0&0 & \dots & p_{}&-1 &0
\\0 & 0&0 & \dots &0 & p_{} & p_{}-1
\end{matrix}\right]
$$
The characteristic equation of ##\mathbf A## is ##c_A(\lambda) \equiv |\mathbf A - \lambda \mathbf I|=0##

where
##|\mathbf A - \lambda \mathbf I| =
\begin{vmatrix}
- p-\lambda_{} & 1 - p_{}& 1 - p_{} & \dots &1 - p_{} &1 - p_{} & 1 - p_{}
\\p_{} & -1-\lambda&0 &\dots & 0 & 0 & 0
\\0 & p_{}&-1-\lambda &\dots & 0 & 0 & 0
\\0 & 0& p_{}&\dots & 0 & 0 & 0
\\0 & 0&0 & \ddots & -1-\lambda&0 &0
\\0 & 0&0 & \dots & p_{}&-1-\lambda &0
\\0 & 0&0 & \dots &0 & p_{} & p_{}-1-\lambda
\end{vmatrix}##

We can expand by minors about the top row of ##|\mathbf A - \lambda \mathbf I|## to get
##\begin{align}|\mathbf A - \lambda \mathbf I| &= (-p-\lambda)(-1-\lambda)^{m-2}(p-1-\lambda)\nonumber\\
&+\sum_{j=2}^{m-1}(1-p)(-p)^{m-j}(-1-\lambda)^{j-2}(p-1-\lambda)\nonumber\\
&+(1-p)(-p)^{m-1}\nonumber
\end{align}
##

By splitting each term of this expression except the last, we get a telescoping sum in which everything cancels except for ##-\lambda(-1-\lambda)^{m-1}##.
##|\mathbf A - \lambda \mathbf I| =\\
\begin{align}& ~~~~~-\lambda(-1-\lambda)^{m-1}&-(1-p)(-p)(-1-\lambda)^{m-2}&\nonumber\\
&~~~~~+\sum_{j=3}^{m}(1-p)(-p)^{m-j+1}(-1-\lambda)^{j-2}&+\sum_{j=2}^{m-1}-(1-p)(-p)^{m-j+1}(-1-\lambda)^{j-2}&\nonumber\\
&~~~~~+(1-p)(-p)^{m-1}&\nonumber\\ \nonumber\\
&=-\lambda(-1-\lambda)^{m-1}&\nonumber\\ \nonumber\\
&=(-1)^m \lambda(1+\lambda)^{m-1} &\nonumber\\
\end{align}##

Setting ##|\mathbf A - \lambda \mathbf I|## equal to zero, and noting that ##(-1)^m## cannot equal zero, we see that the characteristic equation is
##c_A(\lambda) = \lambda(1+\lambda)^{m-1}=0##

From this we can see that ##A## has two eigenvalues: ##\lambda_0=0## of multiplicity 1, and ##\lambda_1=-1## of multiplicity ##m-1##.

Applying the binomial theorem gives us
##c_A(\lambda)=\lambda^m + \binom {m-1} 1 \lambda^{m-1}+\binom {m-1} 2 \lambda^{m-2}+\dots +\lambda##
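The telescoping result is easy to spot-check exactly. A pure-Python sketch (my own check; ##m=5##, ##p=3/10## and the sample values of ##\lambda## are arbitrary choices) that evaluates ##\det(\mathbf A - \lambda \mathbf I)## by exact Gaussian elimination and compares it with ##(-1)^m \lambda (1+\lambda)^{m-1}##:

```python
from fractions import Fraction as F

m, p = 5, F(3, 10)

def build_A(m, p):
    # A per the problem statement: first row (-p, 1-p, ..., 1-p),
    # subdiagonal p, diagonal -1 in the middle rows, bottom-right p - 1.
    A = [[F(0)] * m for _ in range(m)]
    A[0][0] = -p
    for j in range(1, m):
        A[0][j] = 1 - p
    for i in range(1, m):
        A[i][i - 1] = p
        A[i][i] = F(-1)
    A[m - 1][m - 1] = p - 1
    return A

def det(M):
    # Fraction-exact Gaussian elimination; row swaps flip the sign.
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, F(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return F(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

A = build_A(m, p)
ok = all(
    det([[A[i][j] - (lam if i == j else 0) for j in range(m)] for i in range(m)])
    == (-1) ** m * lam * (1 + lam) ** (m - 1)
    for lam in (F(1, 2), F(2), F(-3))
)
print(ok)
```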

By definition, the minimal polynomial ##m_A(\lambda)## is the unique monic polynomial of lowest degree such that ##m_A(A)=0##. We know that ##m_A(\lambda)## divides ##c_A(\lambda)## (see http://www.math.uAlberta.ca/~xichen/math32517w/math325_notes_chap04.pdf), so it must have the form
##m_A(\lambda) = \lambda^j (\lambda+1)^k##
where ##j = 0## or ##1##
and ##0 \leq k \leq m-1##

By eliminating all other possibilities we can show that ##m_A(\lambda)=c_A(\lambda)##.

Because matrix ##A## has only two distinct eigenvalues we will represent it in Jordan Normal Form as
##A = PJP^{-1}##
where P is a basis formed from generalized eigenvectors and
##J =
\left[\begin{matrix}
0 & 0 & 0 & 0 &\dots & 0
\\0 & -1&1 & 0 &\dots & 0
\\0 & 0&-1 & 1 &\dots & 0
\\0 & 0& 0& -1 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & -1
\end{matrix}\right]##

Then ##c_A(A) = PJ(J+I_{mxm})^{m-1}P^{-1}##

Now
##J+I_{mxm} =
\left[\begin{matrix}
1 & 0 & 0 & 0 &\dots & 0
\\0 & 0&1 & 0 &\dots & 0
\\0 & 0&0 & 1 &\dots & 0
\\0 & 0& 0& 0 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & 0
\end{matrix}\right]##

So
##(J+I_{mxm})^2 =
\left[\begin{matrix}
1 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 1 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \ddots & 1
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\end{matrix}\right]##

We can see that in multiplying ##J+I_{mxm}## by ##J+I_{mxm}##, the 1s on the superdiagonal have been shifted one element to the right. With each additional multiplication by ##J+I_{mxm}##, these elements are again shifted to the right. The leftmost 1 begins in the 3rd column, and at the ##(m-2)##th shift it disappears from the matrix. So
##(J+I_{mxm})^{m-1} = (J+I_{mxm})(J+I_{mxm})^{m-2}\\
=\left[\begin{matrix}
1 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \ddots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\end{matrix}\right]##

but
##(J+I_{mxm})^{m-2} = (J+I_{mxm})(J+I_{mxm})^{m-3}\\
=\left[\begin{matrix}
1 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 1
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \ddots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\end{matrix}\right]##

We can eliminate the 1 in the first row of ##(J+I_{mxm})^{m-1}## by multiplying on the left by ##J##. But we cannot eliminate non-zero values from the other rows of ##(J+I_{mxm})^k## using that operation, and ##J## by itself is not a zero matrix. That leaves ##c_A(A) = A (A+I_{mxm})^{m-1} = PJ(J+I_{mxm})^{m-1}P^{-1}## as the product of factors of ##A## and ##(A + I_{mxm})## of least degree that is equal to 0. This proves that
##m_A(\lambda) = c_A(\lambda)=\lambda^m + \binom {m-1} 1 \lambda^{m-1}+\binom {m-1} 2 \lambda^{m-2}+\dots +\lambda##.
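The conclusion is easy to check exactly for a small case. A pure-Python sketch in rational arithmetic (my own verification; ##m=5## and ##p=3/10## are arbitrary choices): ##A(A+I)^{m-1}## vanishes identically while ##A(A+I)^{m-2}## does not, so no proper divisor of ##c_A## annihilates ##A##.

```python
from fractions import Fraction as F

m, p = 5, F(3, 10)

# Build A per the problem statement: first row (-p, 1-p, ..., 1-p),
# subdiagonal p, diagonal -1 in the middle rows, bottom-right entry p - 1.
A = [[F(0)] * m for _ in range(m)]
A[0][0] = -p
for j in range(1, m):
    A[0][j] = 1 - p
for i in range(1, m):
    A[i][i - 1] = p
    A[i][i] = F(-1)
A[m - 1][m - 1] = p - 1

I = [[F(int(i == j)) for j in range(m)] for i in range(m)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def matpow(X, n):
    R = I
    for _ in range(n):
        R = matmul(R, X)
    return R

AplusI = [[A[i][j] + I[i][j] for j in range(m)] for i in range(m)]

# Minimal polynomial = characteristic polynomial: A (A+I)^{m-1} = 0 ...
annihilated = all(v == 0 for row in matmul(A, matpow(AplusI, m - 1)) for v in row)
# ... but no lower power works: A (A+I)^{m-2} != 0.
lower = any(v != 0 for row in matmul(A, matpow(AplusI, m - 2)) for v in row)
print(annihilated, lower)
```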

This is not a pretty proof. I am sure that @StoneTemplePython has an elegant proof, and I look forward to seeing it.
 
  • #21
tnich said:
By eliminating all other possibilities we can show that ##m_A(\lambda)=c_A(\lambda)##.

Because matrix ##A## has only two distinct eigenvalues we will represent it in Jordan Normal Form as
##A = PJP^{-1}##
where P is a basis formed from generalized eigenvectors and
##J =
\left[\begin{matrix}
0 & 0 & 0 & 0 &\dots & 0
\\0 & -1&1 & 0 &\dots & 0
\\0 & 0&-1 & 1 &\dots & 0
\\0 & 0& 0& -1 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & -1
\end{matrix}\right]##

Then ##c_A(A) = PJ(J+I_{mxm})^{m-1}P^{-1}##

Now
##J+I_{mxm} =
\left[\begin{matrix}
1 & 0 & 0 & 0 &\dots & 0
\\0 & 0&1 & 0 &\dots & 0
\\0 & 0&0 & 1 &\dots & 0
\\0 & 0& 0& 0 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & 0
\end{matrix}\right]##

It's interesting, I generally detest expansion of minors, but I really do enjoy a good telescope -- I'll revisit that part tomorrow.

The part I didn't understand is: how do you justify the number of super diagonal ones in your Jordan block?

tnich said:
By definition, the minimal polynomial ##m_A(\lambda)## is the unique monic polynomial of lowest degree such that ##m_A(A)=0##. We know that ##m_A(\lambda)## divides ##c_A(\lambda)## (see http://www.math.uAlberta.ca/~xichen/math32517w/math325_notes_chap04.pdf), so it must have the form
##m_A(\lambda) = \lambda^j (\lambda+1)^k##
where ##j = 0## or ##1##
and ##0 \leq k \leq m-1##

A minor point, for ease of argument: suppose our scalar field is ##\mathbb C##. I don't believe ##j =0## or ##k=0## can ever be valid: if your matrix is annihilated by a polynomial that does not have ##\lambda_r## among its roots, then ##\lambda_r## is not a root of the characteristic polynomial. (Note the converse isn't true -- knowing that a polynomial with ##\lambda_r## as one of its roots annihilates your matrix doesn't mean per se that ##\lambda_r## is a root of the characteristic polynomial of your matrix -- though it's a good start.)

tnich said:
This is not a pretty proof. I am sure that @StoneTemplePython has an elegant proof, and I look forward to seeing it.

you betcha. I originally solved this with explicit recourse to Jordan forms, but I ultimately pushed them into the background as they aren't directly needed for the problem.
 
  • #22
StoneTemplePython said:
The part I didn't understand is: how do you justify the number of super diagonal ones in your Jordan block?
The multiplicity of eigenvalue ##\lambda_1=-1## is ##m-1##, so the corresponding Jordan block is ##(m-1) \times (m-1)##. Each superdiagonal element in a Jordan block is equal to 1, so this block has ##m-2## superdiagonal 1s.
The ##\lambda_0=0## eigenvalue has multiplicity 1, so its Jordan block is ##1 \times 1## with no superdiagonal elements. Did I miss something?
 
  • #23
tnich said:
The multiplicity of eigenvalue ##\lambda_1=-1## is ##m-1##, so the corresponding Jordan block is ##(m-1) \times (m-1)##. Each superdiagonal element in a Jordan block is equal to 1, so this block has ##m-2## superdiagonal 1s.
The ##\lambda_0=0## eigenvalue has multiplicity 1, so its Jordan block is ##1 \times 1## with no superdiagonal elements. Did I miss something?

so the algebraic multiplicity of ##\lambda_1## is ##m-1## as you've shown.

For simplicity, consider the negative of this matrix -- all eigenvalues are 0 or 1, right?
- - - -
edit: if you don't like rescaling the matrix, you can also shift it and consider ##\big(\mathbf A + \mathbf I\big)## -- this matrix, too is all zeros and 1s for eigenvalues.
- - - -

Now consider an idempotent matrix ##P## (sometimes called a projector) which obeys ##P^2 = P##. These matrices never have superdiagonal ones -- yet all eigenvalues are 0 or 1. That is the other extreme of the minimal polynomial spectrum. So what makes the matrix in the problem so different than an idempotent matrix? Equivalently, how do you justify having all of these superdiagonal ones?
 
  • #24
StoneTemplePython said:
so the algebraic multiplicity of ##\lambda_1## is ##m-1## as you've shown.

For simplicity, consider the negative of this matrix -- all eigenvalues are 0 or 1, right?
- - - -
edit: if you don't like rescaling the matrix, you can also shift it and consider ##\big(\mathbf A + \mathbf I\big)## -- this matrix, too is all zeros and 1s for eigenvalues.
- - - -

Now consider an idempotent matrix ##P## (sometimes called a projector) which obeys ##P^2 = P##. These matrices never have superdiagonal ones -- yet all eigenvalues are 0 or 1. That is the other extreme of the minimal polynomial spectrum. So what makes the matrix in the problem so different than an idempotent matrix? Equivalently, how do you justify having all of these superdiagonal ones?
Oh, I see what you are getting at. I didn't mention that ##\lambda_1 = -1## is a defective eigenvalue in that there is only one eigenvector ##X_1## such that ##X_1^T(A - \lambda_1 I) = 0##. (I chose to use a left eigenvector here for ease of demonstration.) That eigenvector is
##X_1 = \begin{bmatrix}p\\
p-1\\
p-1\\
\vdots\\
p-1
\end{bmatrix}##

Consider the solution to
##0= X^T(A-\lambda_1 I) =
X^T \begin{bmatrix}
1- p & 1-p & \dots & 1-p & 1-p & 1-p\\
p & 0 & \dots & 0 & 0& 0\\
0 & p & & 0 & 0& 0\\
\vdots & &\ddots & &\vdots&\vdots\\
0 & 0 & & p & 0& 0\\
0 & 0 & \dots & 0& p & p \\
\end{bmatrix}##

If we choose ##x_1## to be ##p##, then to make each element of ##X^T(A-\lambda_1 I)## zero, ##x_2, x_3, \dots , x_m## must be equal to ##p-1##. So, up to multiplication by a scalar, there is no solution other than ##X_1##. This means that we need ##m-2## generalized eigenvectors. Hence the ##m-2## superdiagonal 1s in the Jordan block for ##\lambda_1##.
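This left-eigenvector claim can be confirmed exactly for a small case; a quick sketch (##m=6## and ##p=3/10## are arbitrary choices of mine):

```python
from fractions import Fraction as F

m, p = 6, F(3, 10)

# A - lambda_1 I with lambda_1 = -1, i.e. A + I, per the matrix in the post:
# first row all 1-p, subdiagonal p, middle diagonal 0, bottom-right p.
B = [[F(0)] * m for _ in range(m)]
for j in range(m):
    B[0][j] = 1 - p
for i in range(1, m):
    B[i][i - 1] = p
B[m - 1][m - 1] = p

# Claimed left eigenvector X_1 = (p, p-1, ..., p-1)^T for lambda_1 = -1.
X1 = [p] + [p - 1] * (m - 1)

# X_1^T (A - lambda_1 I) should vanish identically.
residual = [sum(X1[i] * B[i][j] for i in range(m)) for j in range(m)]
print(residual)
```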
 
  • #25
tnich said:
Oh, I see what you are getting at. I didn't mention that ##\lambda_1 = -1## is a defective eigenvalue in that there is only one eigenvector...

If we choose ##x_1## to be p, then to make the each element of ##X^T(A-\lambda_1 I)## zero...

I think this works.

This is equivalent to stating (and proving) that the geometric multiplicity of ##\lambda_1## is 1 while the algebraic multiplicity is ##m-1## and hence the deficiency creates the number of superdiagonal ones you've shown -- a standard mathematical result.
- - - -

for what it's worth, the problem solution I have in mind requires neither expansion by minors nor knowledge of Jordan forms. The underlying matrix

##
\begin{bmatrix}
1- p & 1-p & \dots & 1-p & 1-p & 1-p\\
p & 0 & \dots & 0 & 0& 0\\
0 & p & & 0 & 0& 0\\
\vdots & &\ddots & &\vdots&\vdots\\
0 & 0 & & p & 0& 0\\
0 & 0 & \dots & 0& p & p \\
\end{bmatrix}##

is something I came up with to help explain (a special case of) linearity of expectations for streaks problems. Happy to share at end of the month.
 
  • #26
Here is my solution to #5
For the differential equation:$$(2x-4y+6)dx + (x +y-3)dy=0$$
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
$$(2x'-4y')dx'+(x'+y')dy'=0$$
The d.e. is clearly not exact so we seek a transformation that will separate variables. Let $$x'=rcos(\frac { \theta}{2}), y'=rsin(\frac { \theta}{2})
\\ dx'=cos(\frac { \theta}{2})dr -rsin(\frac { \theta}{2})d\theta
\\dy'=sin(\frac { \theta}{2})dr +rsin(\frac { \theta}{2})d\theta$$
and so the d.e. becomes
$$
(2cos(\frac { \theta}{2})-4sin(\frac { \theta}{2}))(cos(\frac { \theta}{2})dr-rsin(\frac { \theta}{2})d\theta) +
\\(cos(\frac { \theta}{2})+rsin(\frac { \theta}{2}))( sin(\frac { \theta}{2})dr+rcos(\frac { \theta}{2})d\theta)=0$$
arranging in terms of ##dr## and ##d\theta##,
$$(sin(\frac { \theta}{2})-cos(\frac { \theta}{2})(sin(\frac { \theta}{2})-2cos(\frac { \theta}{2}))dr +
\\r(2sin^2(\frac { \theta}{2})+\frac {cos(\frac { \theta}{2})}{2}(cos(\frac { \theta}{2})-sin(\frac { \theta}{2}))d\theta=0$$
Let
$$u=tan(\frac { \theta}{2}), du=\frac {d\theta}{2cos^2(\frac { \theta}{2})}, cos(\frac { \theta}{2})=\sqrt {\frac {1}{1+u^2}}, sin(\frac { \theta}{2})=\sqrt {\frac {u}{1+u^2}}$$
The d.e. is now
$$-\frac {dr}{r}= \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}+ \frac {du}{2(1+u^2)(1-\frac{u}{2})}$$
and integrating both sides we get
$$log(\frac {1}{r})=I_1 +I_2
\\I_1=\int \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}
\\I_2=\int \frac {du}{2(1+u^2)(1-\frac{u}{2})}
$$
Expanding the integrals in partial fractions:$$
\\I_1=2\int (\frac{au +b }{1+u^2}+ \frac {c}{1-\frac {u}{2}} +\frac {d}{1-u})du$$
We have four equations for the constants in terms of powers of ##u##. I omit the algebra. You can check me. I find ##a=\frac{1}{2}, b=-\frac {3}{4}, c=\frac {7}{4}, d=-1##
and
$$I_2=\frac {1}{2} \int ( \frac {au+ b}{(1+u^2)} +\frac {c}{(1-\frac{u}{2})})du$$
I find for ##I_2##, ##a=\frac{2}{5}, b=\frac{4}{5}, c=\frac{1}{5}##
I consult an old integral table and observe the forms,
$$\int \frac{dx}{1+x^2}=tan^{-1}(x)
\\\int \frac{xdx}{1+x^2}=\frac{1}{2}log\left | 1+ x^2 \right |$$
to find
$$log(\frac{1}{r})=\frac{7}{5}log(1+u^2) + \frac{19}{5}log\left | 1-\frac {u}{2} \right | - \frac {7}{2}log \left | 1-u \right | - \frac {13}{10}tan^{-1}(u) + C$$
where C is a constant of integration. I exponentiate and invert both sides of the equation to express r in terms of ##\theta##:
$$r=\frac {exp( \frac {13}{20} \theta + C)}{\left ( (1+ tan^2(\frac {\theta}{2}))^{\frac {7}{5}} + \left | 1-\frac {tan(\frac {\theta}{2})}{2} \right |^{\frac {19}{5}} + \left |1-tan(\frac {\theta}{2}) \right |^{-\frac {7}{2}}\right )}$$
Because ##r=0## for ##\theta = \pi (n+ \frac {1}{2})## with n an integer, I roughly describe ##r(\theta)##, looking down on the cylindrical axis, as four petals of increasing exponential extent.
Peace,
Fred
 
  • #27
I hope there is a mistake somewhere because that result looks ugly and it gets even worse if substituted back.
 
  • #28
mfb said:
I hope there is a mistake somewhere because that result looks ugly and it gets even worse if substituted back.
At least there are a lot of typos, which makes it difficult to read and to distinguish errors from typos.
  1. The substitution is the other way around.
  2. ##dy'## has two sine terms.
  3. The false sine has magically become a cosine again.
  4. Where's the ##-4## term of ##dr## gone?
  5. etc.
If it should be correct, what I doubt, then it definitely needs a review, and I suggest some further substitutions just to make it readable.
 
  • #29
Thank you @mfb and @fresh_42 for reading my submission.
@fresh_42
complaint #1: No, the substitution is correct.
complaint #2: I mistyped four equations in a furor of cutting and pasting. I humbly apologize. They should read:$$
\\ dx'=cos(\frac { \theta}{2})dr -\frac {1}{2}rsin(\frac { \theta}{2})d\theta
\\dy'=sin(\frac { \theta}{2})dr +\frac {1}{2}rcos(\frac { \theta}{2})d\theta
\\(2cos(\frac { \theta}{2})-4sin(\frac { \theta}{2}))(cos(\frac { \theta}{2})dr-\frac {1}{2}rsin(\frac { \theta}{2})d\theta) +
\\(cos(\frac { \theta}{2})+rsin(\frac { \theta}{2}))( sin(\frac { \theta}{2})dr+\frac {1}{2}rcos(\frac { \theta}{2})d\theta)=0$$
complaint #3: Resolved by the above correction.
complaint #4: Resolved by the above correction.
complaint #5: I couldn't find any other typos or omissions, but I fully understand your skepticism considering my misprint of the above equations.
@mfb: Yes, the result is ugly depending on your sense of aesthetics, but the d.e. started out ugly and unsymmetrical.
For the differential equation:$$(2x-4y+6)dx + (x +y-3)dy=0$$
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
$$(2x'-4y')dx'+(x'+y')dy'=0$$
The d.e. is clearly not exact so we seek a transformation that will separate variables. Let $$x'=rcos(\frac { \theta}{2}), y'=rsin(\frac { \theta}{2})
\\ dx'=cos(\frac { \theta}{2})dr -\frac {1}{2}rsin(\frac { \theta}{2})d\theta
\\dy'=sin(\frac { \theta}{2})dr +\frac {1}{2}rcos(\frac { \theta}{2})d\theta$$
and so the d.e. becomes
$$
(2cos(\frac { \theta}{2})-4sin(\frac { \theta}{2}))(cos(\frac { \theta}{2})dr-\frac {1}{2}rsin(\frac { \theta}{2})d\theta) +
\\(cos(\frac { \theta}{2})+rsin(\frac { \theta}{2}))( sin(\frac { \theta}{2})dr+\frac {1}{2}rcos(\frac { \theta}{2})d\theta)=0$$
arranging in terms of ##dr## and ##d\theta##,
$$(sin(\frac { \theta}{2})-cos(\frac { \theta}{2})(sin(\frac { \theta}{2})-2cos(\frac { \theta}{2}))dr +
\\r(2sin^2(\frac { \theta}{2})+\frac {cos(\frac { \theta}{2})}{2}(cos(\frac { \theta}{2})-sin(\frac { \theta}{2}))d\theta=0$$
Let
$$u=tan(\frac { \theta}{2}), du=\frac {d\theta}{2cos^2(\frac { \theta}{2})}, cos(\frac { \theta}{2})=\sqrt {\frac {1}{1+u^2}}, sin(\frac { \theta}{2})=\sqrt {\frac {u}{1+u^2}}$$
The d.e. is now
$$-\frac {dr}{r}= \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}+ \frac {du}{2(1+u^2)(1-\frac{u}{2})}$$
and integrating both sides we get
$$log(\frac {1}{r})=I_1 +I_2
\\I_1=\int \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}
\\I_2=\int \frac {du}{2(1+u^2)(1-\frac{u}{2})}
$$
Expanding the integrals in partial fractions:$$
\\I_1=2\int (\frac{au +b }{1+u^2}+ \frac {c}{1-\frac {u}{2}} +\frac {d}{1-u})du$$
We have four equations for the constants in terms of powers of ##u##. I omit the algebra. You can check me. I find ##a=\frac{1}{2}, b=-\frac {3}{4}, c=\frac {7}{4}, d=-1##
and
$$I_2=\frac {1}{2} \int ( \frac {au+ b}{(1+u^2)} +\frac {c}{(1-\frac{u}{2})})du$$
I find for ##I_2##, ##a=\frac{2}{5}, b=\frac{4}{5}, c=\frac{1}{5}##
I consult an old integral table and observe the forms,
$$\int \frac{dx}{1+x^2}=tan^{-1}(x)
\\\int \frac{xdx}{1+x^2}=\frac{1}{2}log\left | 1+ x^2 \right |$$
to find
$$log(\frac{1}{r})=\frac{7}{5}log(1+u^2) + \frac{19}{5}log\left | 1-\frac {u}{2} \right | - \frac {7}{2}log \left | 1-u \right | - \frac {13}{10}tan^{-1}(u) + C$$
where C is a constant of integration. I exponentiate and invert both sides of the equation to express r in terms of ##\theta##:
$$r=\frac {exp( \frac {13}{20} \theta + C)}{\left ( (1+ tan^2(\frac {\theta}{2}))^{\frac {7}{5}} + \left | 1-\frac {tan(\frac {\theta}{2})}{2} \right |^{\frac {19}{5}} + \left |1-tan(\frac {\theta}{2}) \right |^{-\frac {7}{2}}\right )}$$
Because ##r=0## for ##\theta = \pi (n+ \frac {1}{2})## with n an integer, I roughly describe ##r(\theta)##, looking down on the cylindrical axis, as four petals of increasing exponential extent.
Peace,
Fred
 
  • #30
Fred Wright said:
Here is my solution to #5
For the differential equation:$$(2x-4y+6)dx + (x +y-3)dy=0$$
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
$$(2x'-4y')dx'+(x'+y')dy'=0$$
Not that it matters, but ##2x'-4y'=2(x+1)-4(y+2)=2x-4y-6## and this is not the same as ##2x-4y+6##. O.k. it doesn't really matter. Nevertheless, it is wrong.
 
  • #31
fresh_42 said:
Not that it matters, but ##2x'-4y'=2(x+1)-4(y+2)=2x-4y-6## and this is not the same as ##2x-4y+6##. O.k. it doesn't really matter. Nevertheless, it is wrong.
Yes, you are right and I think it matters enough to disqualify my solution.

Peace,
Fred
 
  • #32
Fred Wright said:
Yes, you are right and I think it matters enough to disqualify my solution.

Peace,
Fred
Why? Just swap them: ##x'=x-1\; , \;y'=y-2## and everything else remains the same - as far as I could see. Although I expect there is also a solution in Cartesian coordinates. Re-substitution doesn't look to be a good idea at the end of your solution.
 
  • #34
QuantumQuest said:
##2.## Find the volume of the solid ##S## which is defined by the relations ##z^2 - y \geq 0,## ##\space## ##x^2 - z \geq 0,## ##\space## ##y^2 - x \geq 0,## ##\space## ##z^2 \leq 2y,## ##\space## ##x^2 \leq 2z,## ##\space## ##y^2 \leq 2x## ##\space## ##\space## (by @QuantumQuest)

As the 16th of the month has passed, let me try problem 2.
We have:
$$\begin{cases}z^2 - y \geq 0\\ x^2 - z \geq 0\\ y^2 - x \geq 0\\ z^2 \leq 2y\\ x^2 \leq 2z\\ y^2 \leq 2x
\end{cases} \Rightarrow
\begin{cases}0 \le z \le x^2 \le 2z\\ 0 \le x \le y^2 \le 2x\\ 0 \le y\le z^2 \le 2y \end{cases} \Rightarrow
\begin{cases}0 \le z^2 \le x^4 \le 4z^2 \\ 0 \le y \le x^2 \le 8y \\ 0 \le y^2 \le x^4 \le 64y^2 \\ 0 \le x \le x^4 \le128 x \end{cases} \Rightarrow
x=0 \lor 1\le x^3 \le 128 \Rightarrow
x=0\lor 1\le x\le4\sqrt[3] 2
$$
With ##x=0## it follows that ##y=z=0##, meaning this won't give any volume.
So we are left with (with limits from the 2nd step):
$$\text{Volume S} = \int_1^{4\sqrt[3] 2}\int_{\sqrt x}^{\sqrt{2x}}\int_{\sqrt y}^{\sqrt{2y}} 1 \,dz\,dy\,dx
= \int_1^{4\sqrt[3] 2}\int_{\sqrt x}^{\sqrt{2x}} (\sqrt{2}-1)\sqrt y \,dy\,dx
= \int_1^{4\sqrt[3] 2} (\sqrt{2}-1)\frac 23 y^{3/2}\Big|_{\sqrt x}^{\sqrt{2x}} \,dx \\
= \int_1^{4\sqrt[3] 2} (\sqrt{2}-1)\frac 23 \left[(2x)^{3/4}-x^{3/4}\right] \,dx
= \int_1^{4\sqrt[3] 2} (\sqrt{2}-1)\frac 23 (2^{3/4}-1)x^{3/4} \,dx
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 x^{7/4} \Big|_1^{4\sqrt[3] 2} \\
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 \Big((4\sqrt[3] 2)^{7/4} - 1\Big)
$$
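As a numerical sanity check of the iterated integral as written (my own sketch; it only checks the integration steps, not whether the limits describe the region, which is questioned below), a midpoint-rule evaluation:

```python
import math

# Closed form from the last line of the computation above.
closed = ((math.sqrt(2) - 1) * (2 / 3) * (2 ** 0.75 - 1) * (4 / 7)
          * ((4 * 2 ** (1 / 3)) ** 1.75 - 1))

# Midpoint rule for the double integral over x and y; the innermost
# z-integral is just the interval length sqrt(2y) - sqrt(y).
a, b = 1.0, 4 * 2 ** (1 / 3)
nx = ny = 400
hx = (b - a) / nx
total = 0.0
for i in range(nx):
    x = a + (i + 0.5) * hx
    ylo, yhi = math.sqrt(x), math.sqrt(2 * x)
    hy = (yhi - ylo) / ny
    for j in range(ny):
        y = ylo + (j + 0.5) * hy
        total += (math.sqrt(2 * y) - math.sqrt(y)) * hy * hx

print(closed, total)   # the two values agree closely
```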
 
  • #35
I like Serena said:
##0 \leq y \leq x^2 \leq 8y##
How is this justified?
 
