Intermediate Math Challenge - June 2018

Summer is coming and brings a new intermediate math challenge! Enjoy! If you find the problems difficult to solve don't be disappointed! Just check our other basic level math challenge thread!

RULES:
1) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored.
2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
3) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
4) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.
5) Mentors, advisors and homework helpers are kindly requested not to post solutions, not even in spoiler tags, for the challenge problems until the 16th of each month. This gives the opportunity to other people, including but not limited to students, to feel more comfortable in dealing with / solving the challenge problems. In case of an inadvertent posting of a solution the post will be deleted by @fresh_42.

##1.## (solved by @tnich ) Where ##\mathbf A \in \mathbb R^{m \times m}## for a natural number ##m \geq 2## and ##p \in (0,1)##,

$$\mathbf A := \left[\begin{matrix}
- p_{} & 1 - p_{}& 1 - p_{} & \dots &1 - p_{} &1 - p_{} & 1 - p_{}
\\p_{} & -1&0 &\dots & 0 & 0 & 0
\\0 & p_{}&-1 &\dots & 0 & 0 & 0
\\0 & 0& p_{}&\dots & 0 & 0 & 0
\\0 & 0&0 & \ddots & -1&0 &0
\\0 & 0&0 & \dots & p_{}&-1 &0
\\0 & 0&0 & \dots &0 & p_{} & p_{}-1
\end{matrix}\right]$$

prove that the minimal polynomial of ##\mathbf A## is

##\mu(\lambda) = \lambda^{m} + \binom{m-1}{1}\lambda^{m-1} + \binom{m-1}{2}\lambda^{m-2} + \binom{m-1}{3}\lambda^{m-3} +\dots+ \lambda##, i.e. that ##\mathbf A^{m} + \binom{m-1}{1}\mathbf A^{m-1} + \binom{m-1}{2}\mathbf A^{m-2} + \binom{m-1}{3}\mathbf A^{m-3} +\dots+ \mathbf A = \mathbf 0## and that no polynomial of lower degree annihilates ##\mathbf A##. ##\space## ##\space## (by @StoneTemplePython)

##2.## (solved by @tnich ) Find the volume of the solid ##S## which is defined by the relations ##z^2 - y \geq 0,## ##\space## ##x^2 - z \geq 0,## ##\space## ##y^2 - x \geq 0,## ##\space## ##z^2 \leq 2y,## ##\space## ##x^2 \leq 2z,## ##\space## ##y^2 \leq 2x## ##\space## ##\space## (by @QuantumQuest)

##3.## (solved by @julian ) Determine, with analytical methods (i.e. with a calculator only), the wavelengths of all local maxima of the radiation intensity of a black body of temperature ##T##, to three significant digits, given the following radiation function:
$$
J(\lambda) =\dfrac{c^2h}{\lambda^5\cdot\left(\exp\left(\dfrac{ch}{\lambda \kappa T}\right)-1\right)}
$$
(by @fresh_42)

##4.## (solved by @tnich ) 100 gamers walk into an arcade with 8 different video games. Each gamer plays every video game once and only once. For each video game, at least 65 of the gamers beat the final level. (These are easy games.) Prove that there must exist at least one collection of 2 gamers who collectively beat every game / every final level. For the avoidance of doubt, this means the following: record a 1 if player ##k## beat the final level of game ##i##, and a 0 otherwise. For example, if the ##k##th player beat the final level of games 1, 3, 7 and 8 but lost on the others, we'd have


$$\mathbf p_k = \begin{bmatrix}
1\\
0\\
1\\
0\\
0\\
0\\
1\\
1
\end{bmatrix}$$

the task is to prove there must exist (at least one) vector ##\mathbf a## where

##\mathbf a := \mathbf p_k + \mathbf p_j##

and ##\mathbf a \gt 0## (i.e. each component of ##\mathbf a## is strictly positive) ##\space## ##\space## (by @StoneTemplePython)

##5.## (solved by @Math_QED ) Solve the differential equation ##(2x - 4y + 6)dx + (x + y - 3)dy = 0## ##\space## ##\space## (by @QuantumQuest)

##6.## (resolved in post #56) Consider ##\mathfrak{su}(3)=\operatorname{span}\{\,T_3,Y,T_{\pm},U_{\pm},V_{\pm}\,\}## given by the basis elements
##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## $$
\begin{align*}
T_3&=\frac{1}{2}\lambda_3\; , \;Y=\frac{1}{\sqrt{3}}\lambda_8\; ,\\
T_{\pm}&=\frac{1}{2}(\lambda_1\pm i\lambda_2)\; , \;U_{\pm}=\frac{1}{2}(\lambda_6\pm i\lambda_7)\; , \;V_{\pm}=\frac{1}{2}(\lambda_4\pm i\lambda_5)
\end{align*}$$

(cp. https://www.physicsforums.com/insights/representations-precision-important) where the ##\lambda_i## are the Gell-Mann matrices and its maximal solvable Borel-subalgebra

##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space##
##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\mathfrak{B}:=\langle T_3,Y,T_+,U_+,V_+ \rangle##

Now ##\mathfrak{A(B)}=\{\,\alpha: \mathfrak{g} \to \mathfrak{g}\, : \,[X,\alpha(Y)]=[Y,\alpha(X)]\,\,\forall \,X,Y\in \mathfrak{B}\,\}## is the one-dimensional Lie algebra spanned by ##\operatorname{ad}(V_+)## because ##\mathbb{C}V_+## is a one-dimensional ideal in ##\mathfrak{B}## (Proof?). Then ##\mathfrak{g}:=\mathfrak{B}\ltimes \mathfrak{A(B)}## is again a Lie algebra by the multiplication ##[X,\alpha]=[\operatorname{ad}X,\alpha]## for all ##X\in \mathfrak{B}\; , \;\alpha \in \mathfrak{A(B)}##. (For a proof see problem 9 in https://www.physicsforums.com/threads/intermediate-math-challenge-may-2018.946386/ )

a) Determine the center of ##\mathfrak{g}## , and whether ##\mathfrak{g}## is semisimple, solvable, nilpotent or neither.
b) Show that ##(X,Y) \mapsto \alpha([X,Y])## defines another Lie algebra structure on ##\mathfrak{B}## , which one?
c) Show that ##\mathfrak{A(g)}## is at least two-dimensional. ##\space## ##\space## (by @fresh_42)

##7.## (solved by @julian ) Given m distinct nonzero complex numbers ## x_1, x_2, ..., x_m##, prove that

##\sum_{k=1}^m \frac{1}{x_k} \prod_{j \neq k} \frac{1}{x_k - x_j} = \frac{(-1)^{m+1}}{x_1 x_2 ... x_m}##

hint: first consider the polynomial

##p(x) = -1 + \sum_{k=1}^m \prod_{j\neq k} \frac{x - x_j}{x_k - x_j}## ##\space## ##\space## (by @StoneTemplePython)

##8.## (solved by @julian ) Find the last ##1000## digits of the number ##a = 1 + 50 + 50^2 + 50^3 + \cdots + 50^{999}## ##\space## ##\space## (by @QuantumQuest)

##9.## (solved by @julian ) Consider the Hilbert space ##\mathcal{H}=L_2([a,b])## of Lebesgue square integrable functions on ##[a,b]## , i.e.

##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space##
##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\langle \psi,\chi \rangle = \int_{a}^{b}\psi(x)\chi(x)\,dx##

The functions ##\{\,\psi_n:=x^n\, : \,n\in \mathbb{N}_0\,\}## form a system of linearly independent functions which can be used to find an orthonormal basis by the Gram-Schmidt procedure. Show that the Legendre polynomials

##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##p_n(x):=\dfrac{1}{(b-a)^n\,n!}\,\sqrt{\dfrac{2n+1}{b-a}}\,\dfrac{d^n}{dx^n}[(x-a)(x-b)]^n\; , \;n\in \mathbb{N}_0##

form an orthonormal system. ##\space## ##\space## (by @fresh_42)

##10.## a) (solved by @I like Serena ) Give an example of an integral domain (not a field), which has common divisors, but doesn't have greatest common divisors.
b) (solved by @tnich ) Show that there are infinitely many units (invertible elements) in ##\mathbb{Z}[\sqrt{3}]##.
c) (solved by @I like Serena ) Determine the units of ##\{\,\frac{1}{2}a+ \frac{1}{2}b\sqrt{-3}\,\vert \,a+b \text{ even }\}##.
d) (solved by @I like Serena ) The ring ##R## of integers in ##\mathbb{Q}(\sqrt{-19})## is the ring of all elements which are roots of monic polynomials with integer coefficients. Show that ##R## consists of all elements of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and either both are even or both are odd. ##\space## ##\space## (by @fresh_42)
 
I think I have done 7:

If

##
p(x) = -1 + \sum_{k=1}^m \prod_{j \not= k} \frac{x-x_j}{x_k-x_j}
##

then

\begin{align}
p(x_l) & = -1 +\sum_{k=1}^m \prod_{j \not= k} \frac{x_l-x_j}{x_k-x_j}
\nonumber \\
& = -1 +\sum_{k=1}^m \delta_{lk}
\nonumber \\
&= 0
\nonumber
\end{align}

for ##l = 1, 2, \dots , m##. We have the situation that there are apparently ##m## distinct roots, however ##p(x)## is blatantly a polynomial of degree ##m-1## at most. The only resolution to this is that ##p(x)## is identically zero! Therefore we have:

##
\sum_{k=1}^m \prod_{j \not= k} \frac{x-x_j}{x_k-x_j} = 1 .
##

We put ##x = 0##:

##
(-1)^{m-1} \sum_{k=1}^m \prod_{j \not= k} \frac{x_j}{x_k-x_j} = 1
##

and then, using ##\prod_{j \neq k} x_j = \frac{x_1 x_2 \dots x_m}{x_k}##, divide both sides by ##(-1)^{m+1} (x_1 x_2 \dots x_m)## (note that ##(-1)^{m-1} = (-1)^{m+1}##) to obtain

##
\sum_{k=1}^m \frac{1}{x_k} \prod_{j \not= k} \frac{1}{x_k-x_j} = \frac{(-1)^{m+1}}{x_1 x_2 \dots x_m}
##

which is the desired result.
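
As a quick numerical sanity check of the identity (clearly not part of the proof), here is a minimal Python sketch, assuming NumPy is available; the variable names are purely illustrative:

[CODE]
import numpy as np

rng = np.random.default_rng(0)
m = 6
# m random distinct nonzero complex numbers (distinct with probability 1)
x = rng.normal(size=m) + 1j * rng.normal(size=m)

lhs = 0.0
for k in range(m):
    term = 1.0 / x[k]
    for j in range(m):
        if j != k:
            term /= (x[k] - x[j])
    lhs += term

rhs = (-1) ** (m + 1) / np.prod(x)
print(abs(lhs - rhs))   # ~0 up to rounding error
[/CODE]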
 
I have proven that we have an orthonormal system for problem 9:

We prove the orthonormality of the ##p_n (x)##'s.

We first derive a preliminary result that will be needed in the calculations that follow. Applying the Leibniz rule:

##
\frac{d^p}{dx^p} (f (x) g(x)) = \sum_{q=0}^p
\begin{pmatrix}
p \\ q
\end{pmatrix}
\frac{d^{p-q}}{dx^{p-q}} f (x) \frac{d^q}{dx^q} g(x)
##

to

##
\frac{d^l}{dx^l} [(x-a)^n (x-b)^n]
##

we obtain

\begin{align}
& \frac{d^l}{dx^l} [(x-a)^n (x-b)^n] =
\nonumber \\
& = \sum_{p=0}^l
\begin{pmatrix}
l \\ p
\end{pmatrix}
\frac{d^{l-p}}{dx^{l-p}} (x-a)^n \frac{d^p}{dx^p} (x-b)^n
\nonumber \\
& =
\sum_{p=0}^l
\begin{pmatrix}
l \\ p
\end{pmatrix}
\frac{n!}{(n-l+p)!} (x-a)^{n-l+p} \frac{n!}{(n-p)!} (x-b)^{n-p}
\nonumber
\end{align}

Note that for ##l < n## we have

##
\frac{d^l}{dx^l} [(x-a)^n (x-b)^n] \Big|_{x=a} = \frac{d^l}{dx^l} [(x-a)^n (x-b)^n] \Big|_{x=b} = 0 . \qquad (1)
##

This is the result we sought and which we will use repeatedly in the following.

Orthogonality:

We first show the orthogonality of ##p_n (x)## and ##p_m (x)## where ##n \not= m## by proving:

##
\int_a^b \frac{d^n}{dx^n} [(x-a) (x-b)]^n \frac{d^m}{dx^m} [(x-a) (x-b)]^m dx = 0 .
##

Without loss of generality we take ##n > m##, then by repeated integration by parts we obtain:

\begin{align}
& \int_a^b \frac{d^n}{dx^n} [(x-a) (x-b)]^n \frac{d^m}{dx^m} [(x-a) (x-b)]^m dx =
\nonumber \\
& = \Big[ \frac{d^{n-1}}{dx^{n-1}} [(x-a) (x-b)]^n \frac{d^m}{dx^m} [(x-a) (x-b)]^m \Big]_a^b
\nonumber \\
& \quad - \int_a^b \frac{d^{n-1}}{dx^{n-1}} [(x-a) (x-b)]^n \frac{d^{m+1}}{dx^{m+1}} [(x-a) (x-b)]^m dx
\nonumber \\
&= \qquad \vdots
\nonumber \\
& = (-1)^{l+1} \Big[ \frac{d^{n-l}}{dx^{n-l}} [(x-a) (x-b)]^n \frac{d^{m+l-1}}{dx^{m+l-1}} [(x-a) (x-b)]^m \Big]_a^b
\nonumber \\
& \quad + (-1)^l \int_a^b \frac{d^{n-l}}{dx^{n-l}} [(x-a) (x-b)]^n \frac{d^{m+l}}{dx^{m+l}} [(x-a) (x-b)]^m dx
\nonumber \\
& = (-1)^l \int_a^b \frac{d^{n-l}}{dx^{n-l}} [(x-a) (x-b)]^n \frac{d^{m+l}}{dx^{m+l}} [(x-a) (x-b)]^m dx
\nonumber
\end{align}

where we have used (1). By taking ##l = m+1 \; (\leq n)## we have the desired result because

##
\frac{d^{2m+1}}{dx^{2m+1}} [(x-a) (x-b)]^m = 0 .
##

Normalization:

We write

##
p_n (x) = \mathcal{N} \frac{d^n}{dx^n} [(x-a) (x-b)]^n
##

and calculate ##\mathcal{N}## by requiring

##
\int_a^b [p_n (x)]^2 dx = 1 .
##

We consider the integral:

\begin{align}
I & = \mathcal{N}^2 \int_a^b \frac{d^n}{dx^n} [(x-a)(x-b)]^n \frac{d^n}{dx^n} [(x-a)(x-b)]^n dx
\nonumber \\
& = - \mathcal{N}^2 \int_a^b \frac{d^{n-1}}{dx^{n-1}} [(x-a)(x-b)]^n \frac{d^{n+1}}{dx^{n+1}} [(x-a)(x-b)]^n dx
\nonumber \\
& \qquad \vdots
\nonumber \\
& = \mathcal{N}^2 (-1)^n \int_a^b [(x-a)(x-b)]^n \frac{d^{2n}}{dx^{2n}} [(x-a)(x-b)]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int_a^b [(x-a)(x-b)]^n dx
\nonumber
\end{align}

where we have again used repeated integration by parts and (1). We now simplify ##I## further:

\begin{align}
I & = \mathcal{N}^2 (-1)^n (2n)! \int_a^b [(x-a)(x-b)]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int_a^b [x^2 - (a+b) x + ab]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int_a^b \Big[ \Big( x - \frac{a+b}{2} \Big)^2 - \Big( \frac{a-b}{2} \Big)^2 \Big]^n dx
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \int^{\frac{b-a}{2}}_{- \frac{b-a}{2}} \Big[ y^2 - \Big( \frac{a-b}{2} \Big)^2 \Big]^n dy \qquad \quad (y = x- \frac{a+b}{2})
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \Big( \frac{b-a}{2} \Big)^{2n} \int^{\frac{b-a}{2}}_{- \frac{b-a}{2}} \Big[ \Big( \frac{2 y}{b-a} \Big)^2 - 1 \Big]^n dy
\nonumber \\
& = \mathcal{N}^2 (-1)^n (2n)! \Big( \frac{b-a}{2} \Big)^{2n+1} \int^1_{-1} (u^2 - 1)^n du \qquad \quad (u = \frac{2y}{b-a})
\nonumber
\end{align}

We use repeated integration by parts in order to solve the above integral (here ##(2n-1)!! := 1\cdot 3\cdot 5 \cdots (2n-1)##):

\begin{align}
& \int^1_{-1} 1 \cdot (u^2 - 1)^n du =
\nonumber \\
& = [u (u^2 - 1)^n]_{-1}^1 - 2 n \int^1_{-1} u^2 (u^2 - 1)^{n-1} du
\nonumber \\
& = - 2n \Big[ \frac{u^3}{3} (u^2 - 1)^{n-1} \Big]_{-1}^1 + 2^2 n (n-1) \int^1_{-1} \frac{u^4}{3} (u^2 - 1)^{n-2} du
\nonumber \\
& \qquad \vdots
\nonumber \\
& = (-1)^{n+1} 2^{n-1} n! \Big[ \frac{u^{2n-1}}{(2n-1)!!} (u^2 - 1) \Big]_{-1}^1
\nonumber \\
& \quad + (-1)^n 2^n n! \int^1_{-1} \frac{u^{2n}}{(2n-1)!!} du
\nonumber \\
& = (-1)^n \frac{2^n n!}{(2n-1)!!} \Big[ \frac{u^{2n+1}}{2n+1} \Big]_{-1}^1
\nonumber \\
& = (-1)^n\frac{2}{2n+1} \frac{2^n n!}{(2n-1)!!}
\nonumber \\
& = (-1)^n \frac{2}{2n+1} \frac{2^n n! 2n (2n-2) \dots 2}{2n (2n-1) (2n -2) \dots 2}
\nonumber \\
& = (-1)^n \frac{2}{2n+1} \frac{2^{2n} (n!)^2}{(2n)!}
\nonumber
\end{align}

So that

\begin{align}
I & = \mathcal{N}^2 (-1)^n (2n)! \Big( \frac{b-a}{2} \Big)^{2n+1} \times (-1)^n \frac{2}{(2n+1)} \frac{2^{2n} (n!)^2}{(2n)!}
\nonumber \\
& = \mathcal{N}^2 (b-a)^{2n} (n!)^2 \frac{b-a}{2n+1}
\nonumber
\end{align}

Requiring ##I = 1## implies:

\begin{align}
\mathcal{N} = \frac{1}{(b-a)^n n!} \sqrt{\frac{2n+1}{b-a}}
\nonumber
\end{align}

the desired result.
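
For anyone who wants to cross-check the orthonormality numerically for a concrete interval, here is a minimal Python/SymPy sketch (an editorial addition, with ##[a,b]=[1,3]## chosen arbitrarily):

[CODE]
import sympy as sp

x = sp.symbols('x')
a, b = sp.Integer(1), sp.Integer(3)

def p(n):
    # p_n(x) = 1/((b-a)^n n!) * sqrt((2n+1)/(b-a)) * d^n/dx^n [(x-a)(x-b)]^n
    return (sp.diff(((x - a) * (x - b))**n, x, n)
            * sp.sqrt((2 * n + 1) / (b - a))
            / ((b - a)**n * sp.factorial(n)))

for n in range(4):
    for m in range(4):
        val = sp.simplify(sp.integrate(p(n) * p(m), (x, a, b)))
        print(n, m, val)   # 1 if n == m, else 0
[/CODE]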
 
My solution for Problem 5:
We must solve
$$ (2x - 4y + 6) dx + (x + y - 3) dy = 0 $$
My first step is to shift x and y:
x -> x + 1
y -> y + 2
This gives:
$$ (2x - 4y) dx + (x + y) dy = 0 $$
This equation is now homogeneous, and we can solve it by letting ##y = x\,z(x)##. This gives a differential equation for ##z## as a function of ##x##:
$$ x \frac{dz}{dx} = - \frac{(z-1) (z-2)}{z+1} $$
Rearranging gives us
$$ \frac{dx}{x} = - \frac{(1+z) dz}{(1-z)(2-z)} $$
This has solution
$$ x = x_0 \frac{(1-z)^2}{(2-z)^3} $$

Using the original variables (with ##z = \frac{y-2}{x-1}## as a parameter),
$$ x = 1 + x_0 \frac{(1-z)^2}{(2-z)^3} $$
$$ y = 2 + x_0 \frac{z (1-z)^2}{(2-z)^3} $$
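
A quick numerical consistency check of this implicit solution (a sketch only, assuming NumPy and SciPy are available; the helper names are illustrative): integrate ##dy/dx = -(2x-4y+6)/(x+y-3)## from an arbitrary starting point and verify that ##x_0 = (x-1)\,(2-z)^3/(1-z)^2## with ##z=(y-2)/(x-1)## stays constant along the trajectory.

[CODE]
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, y):
    # dy/dx from (2x - 4y + 6) dx + (x + y - 3) dy = 0
    return -(2 * x - 4 * y + 6) / (x + y - 3)

def invariant(x, y):
    # x0 = (x - 1) * (2 - z)^3 / (1 - z)^2  with  z = (y - 2) / (x - 1)
    z = (y - 2) / (x - 1)
    return (x - 1) * (2 - z)**3 / (1 - z)**2

sol = solve_ivp(rhs, (2.0, 2.5), [2.5], dense_output=True, rtol=1e-10, atol=1e-12)
xs = np.linspace(2.0, 2.5, 5)
ys = sol.sol(xs)[0]
print([invariant(xv, yv) for xv, yv in zip(xs, ys)])   # (nearly) constant list
[/CODE]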
 
julian said:
I have proven that we have an orthonormal system for problem 9: ...
That's correct. You could have omitted the quadratic integration of ##I## by starting with the integration by parts on ##(x-a)^n(x-b)^n##, but this way we have seen an extra integration trick.
 
lpetrich said:
My solution for Problem 5:...

The solutions for the differential equation of the problem must be expressed using ##x##, ##y##, number(s) and constants.
 
My run at Problem 2:

I used a sort of brute force algebraic approach and got a strange object for "S"... I never disposed of all the inequalities, so the resulting "solid" has a kind of "prerogative volume"...

Interpreting the problem as presented
x^2 - z >= 0 and x^2 <= 2z
y^2 - x >= 0 and y^2 <= 2x
z^2 - y >= 0 and z^2 <= 2y

Reducing left side inequalities
x^2 >= z
y^2 >= x
z^2 >= y

Combining back with right side inequalities
z <= x^2 <= 2z
x <= y^2 <= 2x
y <= z^2 <= 2y

Substituting
z <= (y^2)^2 <= 2z
x <= (z^2)^2 <= 2x
y <= (x^2)^2 <= 2y

Substituting again to get all three expressed in their respective variable
z <= ((z^2)^2)^2 <= 2z
x <= ((x^2)^2)^2 <= 2x
y <= ((y^2)^2)^2 <= 2y

Separating and simplifying the inequalities to examine the variables
z <= z^8 when 0<= z and z^8 <= 2z when z <= 2^(1/7)
x <= x^8 when 0<= x and x^8 <= 2x when x <= 2^(1/7)
y <= y^8 when 0<= y and y^8 <= 2y when y <= 2^(1/7)

Aggregating the above
0 <= z <= 2^(1/7)
0 <= x <= 2^(1/7)
0 <= y <= 2^(1/7)

Unfortunately this describes the solid "S" as a rather variable rectangular solid with one of its vertices at the origin. The solid resides in the positive octant where x, y, and z are all >= 0, but each of x, y, and z is allowed to range from 0 to 2^(1/7), so the shape of "S" takes infinitely many rectangular forms within the boundaries of a cube of side 2^(1/7), also with one of its vertices at the origin... the volume of solid "S" may range from 0 to (2^(1/7))^3, or from 0 to about 1.3459
 
bahamagreen said:
Substituting
z <= (y^2)^2 <= 2z
[...]
That step only works for some special cases at the "faces" of the volume.
While y^2 can be x for some points, it is not true for all points in the volume.
 
I think I have done problem 8:

We have that

##
1 + 50 + 50^2 + \cdots + 50^{999} = \frac{50^{1000} - 1}{49}
##

The decimal expansion of a rational number always either terminates after a finite number of digits or begins to repeat the same finite sequence of digits over and over. We could consider

##
\frac{5^{1000}}{49} \cdot 10^{1000}
##

(and then subtract off ##1/49## to get an integer) but this appears intractable.

We need to come up with a neat way of "splitting" the number.

But first note we have an additional freedom in that

##
1 + 50 + 50^2 + \cdots + 50^{999} + \sum_{q=1}^p 50^{999+q}
##

has the same last 1000 digits as ##1 + 50 + 50^2 + \cdots + 50^{999}## for any integer ##p \geq 1## as

\begin{align}
50^{1000} & = 5^{1000} \times 10^{1000} = \dots \dots 25 \underbrace{00 \dots \dots 00 \;}_{1000 \; \text{digits}}
\nonumber \\
50^{1001} & = 5^{1001} \times 10^{1001} = \dots \dots 25 \underbrace{000 \dots \dots 00 \;}_{1001 \; \text{digits}}
\nonumber \\
& \text{etc}
\nonumber
\end{align}

We take this auxiliary sum

##
1 + 50 + 50^2 + \cdots + 50^{999} + \sum_{q=1}^p 50^{999+q}
##

and try the following "splitting":

\begin{align}
& \frac{50^{1000+p} - 1}{49} =
\nonumber \\
& = \frac{5^{1000+p} \times 10^{1000+p} - 1}{49}
\nonumber \\
& = \frac{(k \cdot 49 + b) \times 10^{1000+p} - 1}{49}
\nonumber \\
& = k 10^{1000+p} + \frac{b \times 10^{1000+p} - 1}{49}
\nonumber
\end{align}

where ##k## and ##b## are integers such that

\begin{align}
\frac{5^{1000+p} - b}{49} = k .
\nonumber
\end{align}

The problem is then to find the last 1000 digits of

\begin{align}
\frac{b}{49} \times 10^{1000+p} - \frac{1}{49}
\nonumber
\end{align}

when suitable values of ##p##, ##k## and ##b## are chosen.

We introduce some elements from number theory (see for example "Elementary Number Theory" by Underwood Dudley). Recall ##x = y \; \text{mod} \; n## means ##x = k \cdot n + y## for some integer ##k##.

Euler's theorem then states:

If ##n## is a positive integer and ##a##, ##n## are coprime, then ##a^{\varphi (n)} = 1 \; \text{mod} \; n## where ##\varphi (n)## is the Euler's totient function (The totient ##\varphi (n)## of a positive integer ##n## greater than ##1## is defined to be the number of positive integers less than or equal to ##n## that are coprime to ##n##.)

There is a formula for calculating ##\varphi (n)## from the prime-power decomposition ##n = p_1^{e_1} p_2^{e_2} \dots p_k^{e_k}##:

##
\varphi (n) = n \Big( 1 - \frac{1}{p_1} \Big) \Big( 1 - \frac{1}{p_2} \Big) \dots \Big( 1 - \frac{1}{p_k} \Big) .
##

Now ##49## has the prime-power decomposition ##49 = 7^2##, so that we find ##\varphi (49) = 49 - 7 = 42##. With a bit of experimentation you find that ##1008 = 24 \times 42##, so we will take ##p=8##. Now ##5^{24}## and ##49## are coprime; to see this, note that the divisors of ##49## are just ##1##, ##7##, ##49##, and that neither ##7## nor ##49## divides ##5^{24}##. Thus we can employ Euler's theorem:

\begin{align}
5^{1008} - 1 & = 5^{24 \times 42} - 1
\nonumber \\
& = (5^{24})^{42} - 1
\nonumber \\
& = (5^{24})^{\varphi (49)} - 1
\nonumber \\
& = k \cdot 49
\nonumber
\end{align}

(we've taken ##b = 1## in the above "splitting").

So we have arrived at the fact that the integer

\begin{align}
\frac{10^{1008} - 1}{49} = \frac{10^{1008}}{49} - \frac{1}{49}
\nonumber
\end{align}

has the same last 1000 digits as ##1+ 50 + 50^2 + \cdots + 50^{999}##. We consider

\begin{align}
\frac{1}{49} \times 10^{1008}
\nonumber
\end{align}

which will be a lot more tractable than the original fraction we looked at. We could find ##1/49## using long division or wolfram. The result is the repeating decimal:

##
\frac{1}{49} = 0.\underbrace{020408163265306122448979591836734693877551 \; \; \; \;}_{\text{period} \; 42}
##

The period is ##42## but recall that ##1008/42 = 24##, so that multiplying ##1/49## by ##10^{1008}## produces:

\begin{align}
& \frac{1}{49} \times 10^{1008} =
\nonumber \\
& = 020408163265306122448979591836734693877551
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551
\nonumber \\
& \qquad \vdots
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551 \qquad (24th \; \text{row})
\nonumber \\
& \quad .020408163265306122448979591836734693877551 \dots \dots
\nonumber
\end{align}

Subtracting off ##1/49## then removes the part to the right of the decimal point.

So that finally we can say that the last 1000 digits of the original sum are given by

\begin{align}
& \quad \; 3265306122448979591836734693877551
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551
\nonumber \\
& \qquad \vdots
\nonumber \\
& \quad \; 020408163265306122448979591836734693877551 \qquad (24th \; \text{row})
\nonumber
\end{align}
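
Since Python has exact big-integer arithmetic, the whole argument can be cross-checked directly in a couple of lines (an editorial sketch, not needed for the proof):

[CODE]
# Compare the last 1000 digits of the original sum with those of (10**1008 - 1) // 49.
s = sum(50**k for k in range(1000))              # 1 + 50 + 50^2 + ... + 50^999
direct = s % 10**1000                            # its last 1000 digits
via_euler = ((10**1008 - 1) // 49) % 10**1000    # the number derived above
print(direct == via_euler)                       # True
print(str(direct)[:34])                          # 3265306122448979591836734693877551
[/CODE]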
 
  • #10
mfb said:
That step only works for some special cases at the "faces" of the volume.
While y^2 can be x for some points, it is not true for all points in the volume.

Is this a problem of powers within inequalities or something else?
Do you mean that the substitution requires that y^2 = x ?
To be clear, if z <= x^2 and x <= y^2 this does not mean z <= (y^2)^2 ?
 
  • #11
bahamagreen said:
My run at Problem 2:

I used a sort of brute force algebraic approach and got a strange object for "S"... I never disposed of all the inequalities, so the resulting "solid" has a kind of "prerogative volume"...

You start correctly using the given inequalities but take note of what @mfb points out in post #8. In other words, the solid ##S## is surrounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. As an extra hint I would point you towards the direction of making some transformation(s) and seeing if the notion of a determinant is any useful in this case.
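
(As a purely numerical aid, and emphatically not a solution: the volume of ##S## can be estimated by Monte Carlo sampling to sanity-check whatever a transformation/Jacobian computation gives. A minimal Python sketch, assuming NumPy; the bounding box follows from ##x^2\leq 2z,\ z^2\leq 2y,\ y^2\leq 2x##, which force ##0\leq x,y,z## and ##x^8\leq 128x##, i.e. ##x,y,z\leq 2##.)

[CODE]
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
pts = rng.uniform(0.0, 2.0, size=(N, 3))   # sample the bounding box [0, 2]^3
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

inside = ((y <= z**2) & (z**2 <= 2 * y) &
          (z <= x**2) & (x**2 <= 2 * z) &
          (x <= y**2) & (y**2 <= 2 * x))

print(8.0 * inside.mean())   # 8 = volume of the bounding box
[/CODE]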
 
  • #12
QuantumQuest said:
You start correctly using the given inequalities but take note of what @mfb points out in post #8. In other words, the solid ##S## is surrounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. As an extra hint I would point you towards the direction of making some transformation(s) and seeing if the notion of a determinant is any useful in this case.

Conceptual rather than algebraic... thanks mfb and QQ.
 
  • #13
julian said:
I think I have done problem 8:

Well done @julian. Just one comment: you can decide about ##\frac{1}{49}## in the early phase, i.e. when we are at ##1 + 50 + 50^2 + \cdots + 50^{999} = \frac{50^{1000} - 1}{49}##, and then seek a faster way to handle the difference (the aforementioned fraction split into two pieces). There is absolutely no problem in your reasoning, but I leave this faster way for you - if you want / have the time to think about it - or anyone else wanting to solve it, who will also get credit.
 
  • #14
bahamagreen said:
To be clear, if z <= x^2 and x <= y^2 this does not mean z <= (y^2)^2 ?
That is correct, but in general it is a weaker condition, and it doesn’t work combined with the other inequality you used (the comparison to 2z).

Every multiplication by 5 shifts the repeating pattern of 1/49 by a fixed number of digits.
 
  • #15
I think I've done problem 3:

We have the formula for intensity as a function of the wavelength ##\lambda##,

\begin{align}
J (\lambda) = \frac{c^2 h}{\lambda^5 \cdot \Big( \exp \Big( \frac{ch}{\lambda \kappa T} \Big) - 1 \Big)} .
\nonumber
\end{align}

Put ##\alpha = \frac{ch}{\kappa T}## to simplify things slightly. Then

\begin{align}
J (\lambda) = \frac{c^2 h}{\lambda^5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)} .
\nonumber
\end{align}

We note some properties of the function ##J (\lambda)##. First

\begin{align}
\lim_{\lambda \rightarrow 0} J (\lambda) & = \lim_{\lambda \rightarrow 0} \frac{c^2 h}{\lambda^5 \exp \Big( \frac{\alpha}{\lambda} \Big)}
\nonumber \\
& = \lim_{\mu \rightarrow \infty} \frac{c^2 h \mu^5}{ \exp (\alpha \mu)} \qquad (\mu = 1 / \lambda)
\nonumber \\
& = \lim_{\mu \rightarrow \infty} \frac{c^2 h 5!}{\alpha^5 \exp (\alpha \mu)} \quad (\text{repeated use of L'Hôpital's rule})
\nonumber \\
& = 0 .
\nonumber
\end{align}

Next

\begin{align}
\lim_{\lambda \rightarrow \infty} J (\lambda) & = \lim_{\lambda \rightarrow \infty} \frac{c^2 h}{\lambda^5 \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1\Big)}
\nonumber \\
& = \lim_{\mu \rightarrow 0} \frac{c^2 h \mu^5}{\exp ( \alpha \mu ) - 1} \qquad (\mu = 1 / \lambda)
\nonumber \\
& = \lim_{\mu \rightarrow 0} \frac{c^2 h 5 \mu^4}{\alpha \exp (\alpha \mu)} \qquad (\text{use of L'Hôpital's rule})
\nonumber \\
& = 0 .
\nonumber
\end{align}

Also, it is easy to see that ##J (\lambda)## is a non-negative function (as it should be physically).

We now look for local extrema in the intensity in the region ##0< \lambda < \infty##. We differentiate ##J (\lambda)## and put the result equal to zero

\begin{align}
\frac{d}{d \lambda} J (\lambda) & = \frac{d}{d \lambda} \frac{c^2 h}{\lambda^5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)}
\nonumber \\
& = - c^2 h \frac{ \frac{d}{d \lambda} \Big\{ \lambda^5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big) \Big\} }{\lambda^{10} \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)^2}
\nonumber \\
& = - c^2 h \frac{ \Big\{ 5 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big) + \lambda \cdot \Big( - \frac{\alpha}{\lambda^2} \Big) \exp \Big( \frac{\alpha}{\lambda} \Big) \Big\} }{\lambda^6 \cdot \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big)^2}
\nonumber \\
& = 0 .
\nonumber
\end{align}

This implies

\begin{align}
- 5 \Big( \exp \Big( \frac{\alpha}{\lambda} \Big) - 1 \Big) + \frac{\alpha}{\lambda} \exp \Big( \frac{\alpha}{\lambda} \Big) = 0
\nonumber
\end{align}

Put

##
x = \frac{ch}{\lambda \kappa T} = \frac{\alpha}{\lambda} .
##

Then the above equation becomes ##-5 ( \exp ( x ) - 1 ) + x \exp ( x ) = 0## or

\begin{align}
- 5 + x + 5 e^{-x} = 0 .
\nonumber
\end{align}

Obviously ##x = 0## is a root and corresponds to ##\lambda = \infty##. Consider the diagram in which the function ##5 - x## and the function ##5 e^{-x}## are plotted on top of each other. We already know they intersect at ##x = 0##, and as ##5 e^{-x}## is convex and as ##5 - x## is a straight line these functions can only intersect again at most once. To see that they do indeed intersect again, and for positive ##x##, we simply look at the derivatives of the two functions at ##x = 0##:

\begin{align}
\frac{d}{dx} (5 e^{-x}) \Big|_{x=0} & = -5
\nonumber \\
\frac{d}{dx} (5 - x) \Big|_{x=0} & = -1
\nonumber
\end{align}

which imply that initially ##5e^{-x}## goes below ##5 - x## as we move away from ##x = 0## in the direction of positive ##x##.

Given we have this one other root (##x > 0##), and given that ##\lim_{\lambda \rightarrow 0 } J (\lambda) = 0##, ##\lim_{\lambda \rightarrow \infty} J (\lambda) = 0## and that ##J (\lambda)## is non-negative, we know that there will be just the one local maximum in ##J (\lambda)## (actually a global maximum then).

I will use trial and error to estimate this root of ##- 5 + x + 5 e^{-x} = 0##, which we denote ##x_m##. Notice that as ##x## increases, ##\lambda## decreases. Write ##f(x) = 5 e^{-x} - (5 - x)##; a maximum in ##J (\lambda)## will be indicated by the sign of ##f (x)## going from negative to positive as ##x## increases.

But first:

Planck's constant ##h = 6.626070040(81) \times 10^{-34} \; J \cdot s##
Speed of light ##c = 299,792,458 \; m/s##
Boltzmann's constant ##k = 1.38064852(79) \times 10^{-23} \; J/K## .

So

##
\frac{hc}{k} = \frac{6.626070 \times 10^{-34} \times 299792458}{1.3806485 \times 10^{-23}} = 0.01438777 \; m \cdot K .
##

Our wavelength of the maximum intensity, ##\lambda_m##, will be

##
\lambda_m = \frac{hc}{x_m k T} = \frac{1}{T} \frac{1}{x_m} \times 0.01438777 \; m .
##

I now use trial and error to approximate ##x_m## using my calculator. First note

##5 e^{-4} = 0.0916## and ##5-4 = 1## ##\qquad \qquad \qquad \; f (4) < 0##
##5 e^{-5} = 0.0337## and ##5-5 = 0## ##\qquad \qquad \qquad \; f (5) > 0##

Implying the root ##x_m## will lie between ##x = 4## and ##x = 5##. Try:

##5 e^{-4.7} = 0.0455## and ##5-4.7 = 0.3## ##\qquad \qquad f (4.7) < 0##
##5 e^{-4.8} = 0.0411## and ##5-4.8 = 0.2## ##\qquad \qquad f (4.8) < 0##
##5 e^{-4.9} = 0.0372## and ##5-4.9 = 0.1## ##\qquad \qquad f (4.9) < 0##
##5 e^{-4.95} = 0.0354## and ##5-4.95 = 0.05## ##\qquad \quad f (4.95) < 0##
##5 e^{-4.96} = 0.0351## and ##5-4.96 = 0.04## ##\qquad \quad f (4.96) < 0##
##5 e^{-4.97} = 0.0347## and ##5-4.97 = 0.03## ##\qquad \quad f (4.97) > 0## .

This implies the root ##x_m## lies between ##4.96## and ##4.97##. Next try:

##5 e^{-4.961} = 0.03502959## and ##5-4.961 = 0.039## ##\qquad \quad f (4.961) < 0##
##5 e^{-4.962} = 0.03499458## and ##5-4.962 = 0.038## ##\qquad \quad f (4.962) < 0##
##5 e^{-4.963} = 0.03495960## and ##5-4.963 = 0.037## ##\qquad \quad f (4.963) < 0##
##5 e^{-4.964} = 0.03492466## and ##5-4.964 = 0.036## ##\qquad \quad f (4.964) < 0##
##5 e^{-4.965} = 0.03488975## and ##5-4.965 = 0.035## ##\qquad \quad f (4.965) < 0##
##5 e^{-4.966} = 0.03485488## and ##5-4.966 = 0.034## ##\qquad \quad f (4.966) > 0## .

This implies the root ##x_m## lies between ##4.965## and ##4.966##. It is easy to get more accuracy. Try:

##5 e^{-4.9651} = 0.03488626## and ##5-4.9651 = 0.0349## ##\quad \quad f (4.9651) < 0##
##5 e^{-4.9652} = 0.03488278## and ##5-4.9652 = 0.0348## ##\quad \quad f (4.9652) > 0##

and then try

##5 e^{-4.96511} = 0.03488592## and ##5-4.96511 = 0.03489## ##\; \; f (4.96511) < 0##
##5 e^{-4.96512} = 0.03488557## and ##5-4.96512 = 0.03488## ##\; \; f (4.96512) > 0## .

So the root ##x_m## lies between ##x = 4.96511## and ##x = 4.96512##, and so ##1/x_m## lies between

##
0.2014050013 \leq 1/x_m \leq 0.2014054069 .
##

So we can say

##
1/x_m \approx 0.201405
##

Then approximately we have

\begin{align}
\lambda_m & = \frac{1}{T} \frac{1}{x_m} \times \frac{ch}{\kappa}
\nonumber \\
& = \frac{1}{T} 0.201405 \times 0.01438777
\nonumber \\
& = \frac{1}{T} 2.89777 \times 10^{-3}\; m .
\nonumber
\end{align}

Or to 3 significant figures:

\begin{align}
\lambda_m & = \frac{1}{T} 2.90 \times 10^{-3} \; m .
\nonumber
\end{align}
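
The same root can also be pinned down by a short bisection, as a cross-check of the hand iteration above (a minimal Python sketch, using only the standard library):

[CODE]
import math

def f(x):
    # f(x) = 5 e^{-x} - (5 - x); its positive root is the x_m above
    return 5.0 * math.exp(-x) - (5.0 - x)

lo, hi = 4.0, 5.0              # f(lo) < 0 < f(hi), as in the tables above
for _ in range(60):            # each bisection step halves the bracket
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid

x_m = 0.5 * (lo + hi)
print(x_m)                     # ~4.965114
print(0.01438777 / x_m)        # ~2.898e-3 m*K; divide by T for the wavelength
[/CODE]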
 
  • #16
Correct, and well done!

I think you're a bit sloppy with de L'Hôpital (missing minus sign, and it goes ##\frac{f\,'}{g\,'}\to c## then ##\frac{f}{g} \to c##, which means a bit more care with the order of the argument) but the conclusion is correct.

I have a slightly different approach to determine the maximum which I like to mention in case you are interested, i.e. no criticism of your version. (I used copy and paste so the variable names are different, but it's not difficult to see the principle.)
$$
f(t):=5(1-e^{-t})=t \text{ with } t=x^{-1}
$$
Because of ##f'(t)>1## on ##[0,\log 5]## the function ##f(t)-t## is strictly monotone increasing there and ##f(t)>t## since ##f(0)=0##. For ##t> \log 5## we get that ##f(t)-t## is strictly monotone decreasing and thus has at most one zero. As ##f(4)-4>0## and ##f(5)-5<0## there is exactly one zero ##t^*## in ##[4,5]## by the intermediate value theorem. Now
$$
q:=\sup\{\,|f'(t)|\, : \,t\in [4,5]\,\}=f'(4)=5e^{-4}=0.09157 < 1
$$
and by the fixed-point theorem and a sequence ##t_{n+1}:=f(t_n) \; , \; t_1=5## we have
$$
|t^*-t_n| < \dfrac{q}{1-q}\,\,|t_n-t_{n-1}|<0.1008\,\,|t_n-t_{n-1}|
$$
and thus ##t^* = 4.965114\,\pm\,10^{-6}## or ##x^*=0.2014052\,\pm\,10^{-7}## with only a few iterations.
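
For the curious, the iteration itself is only a few lines; a minimal Python sketch of the scheme ##t_{n+1}=5(1-e^{-t_n})##, ##t_1=5##:

[CODE]
import math

t = 5.0                                  # t_1 = 5
for n in range(1, 9):
    t_next = 5.0 * (1.0 - math.exp(-t))  # contraction map on [4, 5]
    print(n, t_next, abs(t_next - t))
    t = t_next
# t converges rapidly to ~4.965114, i.e. 1/t ~ 0.2014052
[/CODE]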
 
  • #17
Thread is now open to all. We still have unsolved problems: 1, 2, 5, 6, 10.

So Homework Helpers, Science Advisors and Mentors: can you help?
 
  • #18
Here is a solution to 4.

For brevity, I am going to call beating the final level of a game a "win". Since each of the 8 games has at least 65 wins, there is a total of 520 wins. To achieve 520 wins, at least one player must have at least 6 wins. For suppose not - then no gamer has more than 5 wins and the total wins for the 100 gamers is no more than 500, which is a contradiction. So at least one player must have at least 6 wins. This is an application of the pigeon hole principle.

Given that at least one gamer has at least 6 wins, we choose one of them, whom we will call Gamer X. Of the games on which Gamer X has achieved a win, we choose 6. We know that at least 65 gamers have wins on each of the two remaining games, which we will call Game 7 and Game 8. For each of those two games, at least 64 of those gamers are not Gamer X. Let ##S_7## be the set of gamers who have won Game 7, excluding Gamer X. Let ##S_8## be the set of gamers who have won Game 8, excluding Gamer X. Then ##|S_7| \geq 64## and ##|S_8| \geq 64##.

We need to show that there is at least one gamer (not Gamer X) who has wins on both Game 7 and Game 8. Suppose this is not the case. Then ##S_7\cap S_8 = \emptyset## and the number of gamers in ##S_7 \cup S_8## is ##|S_7| + |S_8| \geq 128##. But we know that the number of gamers is 100, so we have reached a contradiction. Therefore, at least one gamer has won Game 7 and Game 8.

Now we have shown that at least one gamer has wins on 6 of the games, and another gamer has wins on the other two. So there is at least one collection of two gamers who collectively beat every game / every final level.
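
(Not a substitute for the proof, of course, but the combinatorial claim is easy to stress-test numerically. A minimal Python sketch, assuming NumPy; the helper names are just illustrative:)

[CODE]
import numpy as np

rng = np.random.default_rng(0)

def random_win_table(gamers=100, games=8, min_wins=65):
    # wins[k, i] = 1 if gamer k beat the final level of game i
    wins = np.zeros((gamers, games), dtype=int)
    for i in range(games):
        winners = rng.choice(gamers, size=min_wins, replace=False)
        wins[winners, i] = 1
    return wins

def has_covering_pair(wins):
    # is there a pair of gamers whose combined record covers every game?
    n = wins.shape[0]
    for k in range(n):
        for j in range(k + 1, n):
            if np.all(wins[k] + wins[j] > 0):
                return True
    return False

print(all(has_covering_pair(random_win_table()) for _ in range(20)))   # expect True
[/CODE]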
 
  • #19
tnich said:
Here is a solution to 4. ...

I had a much sneakier probabilistic argument in mind but your approach of pigeon hole + inclusion-exclusion is more direct. Well done.
 
  • #20
Here is a solution to 1.
Given
$$
\mathbf A := \left[\begin{matrix}
- p_{} & 1 - p_{}& 1 - p_{} & \dots &1 - p_{} &1 - p_{} & 1 - p_{}
\\p_{} & -1&0 &\dots & 0 & 0 & 0
\\0 & p_{}&-1 &\dots & 0 & 0 & 0
\\0 & 0& p_{}&\dots & 0 & 0 & 0
\\0 & 0&0 & \ddots & -1&0 &0
\\0 & 0&0 & \dots & p_{}&-1 &0
\\0 & 0&0 & \dots &0 & p_{} & p_{}-1
\end{matrix}\right]
$$
The characteristic equation of ##\mathbf A## is ##c_A(\lambda) \equiv |\mathbf A - \lambda \mathbf I|=0##

where
##|\mathbf A - \lambda \mathbf I| =
\begin{vmatrix}
- p-\lambda_{} & 1 - p_{}& 1 - p_{} & \dots &1 - p_{} &1 - p_{} & 1 - p_{}
\\p_{} & -1-\lambda&0 &\dots & 0 & 0 & 0
\\0 & p_{}&-1-\lambda &\dots & 0 & 0 & 0
\\0 & 0& p_{}&\dots & 0 & 0 & 0
\\0 & 0&0 & \ddots & -1-\lambda&0 &0
\\0 & 0&0 & \dots & p_{}&-1-\lambda &0
\\0 & 0&0 & \dots &0 & p_{} & p_{}-1-\lambda
\end{vmatrix}##

We can expand by minors about the top row of ##|\mathbf A - \lambda \mathbf I|## to get
##\begin{align}|\mathbf A - \lambda \mathbf I| &= (-p-\lambda)(-1-\lambda)^{m-2}(p-1-\lambda)\nonumber\\
&+\sum_{j=2}^{m-1}(1-p)(-p)^{m-j}(-1-\lambda)^{j-2}(p-1-\lambda)\nonumber\\
&+(1-p)(-p)^{m-1}\nonumber
\end{align}
##

By splitting each term of this expression except the last, we get a telescoping sum in which everything cancels except for ##-\lambda(-1-\lambda)^{m-1}##.
##|\mathbf A - \lambda \mathbf I| =\\
\begin{align}& ~~~~~-\lambda(-1-\lambda)^{m-1}&-(1-p)(-p)(-1-\lambda)^{m-2}&\nonumber\\
&~~~~~+\sum_{j=3}^{m}(1-p)(-p)^{m-j+1}(-1-\lambda)^{j-2}&+\sum_{j=2}^{m-1}-(1-p)(-p)^{m-j+1}(-1-\lambda)^{j-2}&\nonumber\\
&~~~~~+(1-p)(-p)^{m-1}&\nonumber\\ \nonumber\\
&=-\lambda(-1-\lambda)^{m-1}&\nonumber\\ \nonumber\\
&=(-1)^m \lambda(1+\lambda)^{m-1} &\nonumber\\
\end{align}##

Setting ##|\mathbf A - \lambda \mathbf I|## equal to zero, and noting that ##(-1)^m## cannot equal zero, we see that the characteristic equation is
##c_A(\lambda) = \lambda(1+\lambda)^{m-1}=0##

From this we can see that ##\mathbf A## has two distinct eigenvalues, ##\lambda_0=0## of multiplicity 1 and ##\lambda_1=-1## of multiplicity ##m-1##.

Applying the binomial theorem gives us
##c_A(\lambda)=\lambda^m + \binom {m-1} 1 \lambda^{m-1}+\binom {m-1} 2 \lambda^{m-2}+\dots +\lambda##

By definition, the minimal polynomial ##m_A(\lambda)## is the unique monic polynomial of lowest degree such that ##m_A(A)=0##. We know that ##m_A(\lambda)## divides ##c_A(\lambda)## (see http://www.math.uAlberta.ca/~xichen/math32517w/math325_notes_chap04.pdf), so ##m_A(A)## must have the form
##m_A(A) = A^j (A+I_{mxm})^k##
where ##j = 0## or ##1##
and ##0 \leq k \leq m-1##

By eliminating all other possibilities we can show that ##m_A(A)=c_A(A)##.

Because matrix ##A## has only two distinct eigenvalues we will represent it in Jordan Normal Form as
##A = PJP^{-1}##
where P is a basis formed from generalized eigenvectors and
##J =
\left[\begin{matrix}
0 & 0 & 0 & 0 &\dots & 0
\\0 & -1&1 & 0 &\dots & 0
\\0 & 0&-1 & 1 &\dots & 0
\\0 & 0& 0& -1 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & -1
\end{matrix}\right]##

Then ##c_A(A) = PJ(J+I_{mxm})^{m-1}P^{-1}##

Now
##J+I_{mxm} =
\left[\begin{matrix}
1 & 0 & 0 & 0 &\dots & 0
\\0 & 0&1 & 0 &\dots & 0
\\0 & 0&0 & 1 &\dots & 0
\\0 & 0& 0& 0 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & 0
\end{matrix}\right]##

So
##(J+I_{mxm})^2 =
\left[\begin{matrix}
1 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 1 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \ddots & 1
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\end{matrix}\right]##

We can see that in multiplying ##J+I_{mxm}## by ##J+I_{mxm}##, the 1s on the off-diagonal have been shifted one element to the right. With each additional multiplication by ##J+I_{mxm}##, these elements are again shifted to the right. The leftmost 1 begins in the 3rd column, and at the ##m-2##th shift it disappears from the matrix. So
##(J+I_{mxm})^{m-1} = (J+I_{mxm})(J+I_{mxm})^{m-2}\\
=\left[\begin{matrix}
1 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \ddots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\end{matrix}\right]##

but
##(J+I_{mxm})^{m-2} = (J+I_{mxm})(J+I_{mxm})^{m-3}\\
=\left[\begin{matrix}
1 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 1
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \ddots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\\0 & 0 & 0 & 0 & \dots & 0
\end{matrix}\right]##

We can eliminate the 1 in the first row of ##(J+I_{mxm})^{m-1}## by multiplying on the left by ##J##. But we cannot eliminate non-zero values from the other rows of ##(J+I_{mxm})^k## using that operation, and ##J## by itself is not a zero matrix. That leaves ##c_A(A) = A (A+I_{mxm})^{m-1} = PJ(J+I_{mxm})^{m-1}P^{-1}## as the product of factors of ##A## and ##(A + I_{mxm})## of least degree that equals the zero matrix. This proves that
##m_A(\lambda) = c_A(\lambda)=\lambda^m + \binom {m-1} 1 \lambda^{m-1}+\binom {m-1} 2 \lambda^{m-2}+\dots +\lambda##.

This is not a pretty proof. I am sure that @StoneTemplePython has an elegant proof, and I look forward to seeing it.
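
(A quick numerical spot-check of the conclusion for one concrete ##m## and ##p## — not part of the proof — can be done with NumPy, evaluating ##\lambda(\lambda+1)^{m-1}## at ##\mathbf A## together with one lower-degree candidate:)

[CODE]
import numpy as np
from numpy.linalg import matrix_power

m, p = 7, 0.3                        # any m >= 2 and p in (0, 1)

A = np.zeros((m, m))
A[0, 0] = -p
A[0, 1:] = 1 - p
for i in range(1, m):
    A[i, i - 1] = p                  # subdiagonal p's
    A[i, i] = -1
A[m - 1, m - 1] = p - 1              # bottom-right entry is p - 1

I = np.eye(m)
full = A @ matrix_power(A + I, m - 1)    # lambda (lambda + 1)^(m-1) evaluated at A
lower = A @ matrix_power(A + I, m - 2)   # one degree lower

print(np.max(np.abs(full)))    # ~0: this polynomial annihilates A
print(np.max(np.abs(lower)))   # clearly nonzero: this lower-degree candidate does not
[/CODE]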
 
  • #21
tnich said:
By eliminating all other possibilities can show that ##m_A(A)=c_A(A)##.

Because matrix ##A## has only two distinct eigenvalues we will represent it in Jordan Normal Form as
##A = PJP^{-1}##
where P is a basis formed from generalized eigenvectors and
##J =
\left[\begin{matrix}
0 & 0 & 0 & 0 &\dots & 0
\\0 & -1&1 & 0 &\dots & 0
\\0 & 0&-1 & 1 &\dots & 0
\\0 & 0& 0& -1 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & -1
\end{matrix}\right]##

Then ##c_A(A) = PJ(J+I_{mxm})^{m-1}P^{-1}##

Now
##J+I_{mxm} =
\left[\begin{matrix}
1 & 0 & 0 & 0 &\dots & 0
\\0 & 0&1 & 0 &\dots & 0
\\0 & 0&0 & 1 &\dots & 0
\\0 & 0& 0& 0 & \dots & 0
\\0 & 0&0 & 0 & \ddots & 1
\\0 & 0&0 & 0 & \dots & 0
\end{matrix}\right]##

It's interesting, I generally detest expansion of minors, but I really do enjoy a good telescope -- I'll revisit that part tomorrow.

The part I didn't understand is: how do you justify the number of super diagonal ones in your Jordan block?

tnich said:
By definition, the minimal polynomial is the unique polynomial of lowest degree ##m_A(\lambda)## such that ##m_A(A)=0##. We know that ##m_A(A)## divides ##c_A(A)## (see http://www.math.uAlberta.ca/~xichen/math32517w/math325_notes_chap04.pdf), so it must have the form
##m_A(A) = A^j (A+I_{mxm})^k##
where ##j = 0## or ##1##
and ##0 \leq k \leq m-1##

a minor point: for ease of argument: suppose our scalar field in general is ##\mathbb C## , I don't believe ##j =0## or ##k=0## can ever be valid. If your matrix is annihilated by a polynomial not containing ##\lambda_r## then ##\lambda_r## is not a root of the characteristic polynomial. (Note the converse isn't true -- knowing that a polynomial with ##\lambda_r## as one of its roots annihilates your matrix, doesn't mean per se that ##\lambda_r## is a root of the characteristic polynomial of your matrix-- though it's a good start.)

tnich said:
This is not a pretty proof. I am sure that @StoneTemplePython has an elegant proof, and I look forward to seeing it.

you betcha. I originally solved this with explicit recourse to Jordan forms, but I ultimately pushed them into the background as they aren't directly needed for the problem.
 
  • #22
StoneTemplePython said:
The part I didn't understand is: how do you justify the number of super diagonal ones in your Jordan block?
The multiplicity of eigenvalue ##\lambda_1=-1## is ##m-1##, so the corresponding Jordan block is ##(m-1) \times (m-1)##. Each superdiagonal element in a Jordan block is equal to 1, so this block has ##m-2## superdiagonal 1s.
The ##\lambda_0=0## eigenvalue has multiplicity 1, so its Jordan block is ##1 \times 1## with no superdiagonal elements. Did I miss something?
 
  • #23
tnich said:
The multiplicity of eigenvalue ##\lambda_1=-1## is ##m-1##, so the corresponding Jordan block is ##(m-1) \times (m-1)##. Each superdiagonal element in a Jordan block is equal to 1, so this block has ##m-2## superdiagonal 1s.
The ##\lambda_0=0## eigenvalue has multiplicity 1, so its Jordan block is ##1 \times 1## with no superdiagonal elements. Did I miss something?

so the algebraic multiplicity of ##\lambda_1## is ##m-1## as you've shown.

For simplicity, consider the negative of this matrix -- all eigenvalues are 0 or 1, right?
- - - -
edit: if you don't like rescaling the matrix, you can also shift it and consider ##\big(\mathbf A + \mathbf I\big)## -- this matrix, too is all zeros and 1s for eigenvalues.
- - - -

Now consider an idempotent matrix ##P## (sometimes called a projector) which obeys ##P^2 = P##. These matrices never have superdiagonal ones -- yet all eigenvalues are 0 or 1. That is the other extreme of the minimal polynomial spectrum. So what makes the matrix in the problem so different than an idempotent matrix? Equivalently, how do you justify having all of these superdiagonal ones?
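
(To make the contrast concrete, here is a small NumPy illustration of the idempotent case: an orthogonal projector has only the eigenvalues 0 and 1, yet it is already annihilated by the degree-two polynomial ##\lambda(\lambda-1)## — no superdiagonal ones anywhere. The construction below is just one example of such a ##P##.)

[CODE]
import numpy as np

rng = np.random.default_rng(2)
# Orthogonal projector onto a random 3-dimensional subspace of R^6.
Q, _ = np.linalg.qr(rng.normal(size=(6, 3)))
P = Q @ Q.T

print(np.max(np.abs(P @ P - P)))           # ~0: P(P - I) = 0
print(np.round(np.linalg.eigvalsh(P), 6))  # eigenvalues are only 0's and 1's
[/CODE]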
 
  • #24
StoneTemplePython said:
so the algebraic multiplicity of ##\lambda_1## is ##m-1## as you've shown.

For simplicity, consider the negative of this matrix -- all eigenvalues are 0 or 1, right?
- - - -
edit: if you don't like rescaling the matrix, you can also shift it and consider ##\big(\mathbf A + \mathbf I\big)## -- this matrix, too is all zeros and 1s for eigenvalues.
- - - -

Now consider an idempotent matrix ##P## (sometimes called a projector) which obeys ##P^2 = P##. These matrices never have superdiagonal ones -- yet all eigenvalues are 0 or 1. That is the other extreme of the minimal polynomial spectrum. So what makes the matrix in the problem so different than an idempotent matrix? Equivalently, how do you justify having all of these superdiagonal ones?
Oh, I see what you are getting at. I didn't mention that ##\lambda_1 = -1## is a defective eigenvalue in that there is only one eigenvector ##X_1## such that ##X_1^T(A - \lambda_1 I) = 0##. (I chose to use a left eigenvector here for ease of demonstration.) That eigenvector is
##X_1 = \begin{bmatrix}p\\
p-1\\
p-1\\
\vdots\\
p-1
\end{bmatrix}##

Consider the solution to
##0= X^T(A-\lambda_1 I) =
X^T \begin{bmatrix}
1- p & 1-p & \dots & 1-p & 1-p & 1-p\\
p & 0 & \dots & 0 & 0& 0\\
0 & p & & 0 & 0& 0\\
\vdots & &\ddots & &\vdots&\vdots\\
0 & 0 & & p & 0& 0\\
0 & 0 & \dots & 0& p & p \\
\end{bmatrix}##

If we choose ##x_1## to be ##p##, then to make each element of ##X^T(A-\lambda_1 I)## zero, ##x_2, x_3, \dots , x_m## must be equal to ##p-1##. So except for multiplication by a scalar, there is no other solution than ##X_1##. This means that we need ##m-2## generalized eigenvectors. Hence the ##m-2## superdiagonal 1s in the Jordan block for ##\lambda_1##.
 
  • #25
tnich said:
Oh, I see what you are getting at. I didn't mention that ##\lambda_1 = -1## is a defective eigenvalue in that there is only one eigenvector...

If we choose ##x_1## to be p, then to make the each element of ##X^T(A-\lambda_1 I)## zero...

I think this works.

This is equivalent to stating (and proving) that the geometric multiplicity of ##\lambda_1## is 1 while the algebraic multiplicity is ##m-1## and hence the deficiency creates the number of superdiagonal ones you've shown -- a standard mathematical result.
- - - -

for what is worth, the problem solution I have in mind requires neither expansion of minors nor knowledge of Jordan Forms. The underlying matrix

##
\begin{bmatrix}
1- p & 1-p & \dots & 1-p & 1-p & 1-p\\
p & 0 & \dots & 0 & 0& 0\\
0 & p & & 0 & 0& 0\\
\vdots & &\ddots & &\vdots&\vdots\\
0 & 0 & & p & 0& 0\\
0 & 0 & \dots & 0& p & p \\
\end{bmatrix}##

is something I came up with to help explain (a special case of) linearity of expectations for streaks problems. Happy to share at end of the month.
 
  • #26
Here is my solution to #5
For the differential equation:$$(2x-4y+6)dx + (x +y-3)dy=0$$
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
$$(2x'-4y')dx'+(x'+y')dy'=0$$
The d.e. is clearly not exact so we seek a transformation that will separate variables. Let $$x'=rcos(\frac { \theta}{2}), y'=rsin(\frac { \theta}{2})
\\ dx'=cos(\frac { \theta}{2})dr -rsin(\frac { \theta}{2})d\theta
\\dy'=sin(\frac { \theta}{2})dr +rsin(\frac { \theta}{2})d\theta$$
and so the d.e. becomes
$$
(2cos(\frac { \theta}{2})-4sin(\frac { \theta}{2}))(cos(\frac { \theta}{2})dr-rsin(\frac { \theta}{2})d\theta) +
\\(cos(\frac { \theta}{2})+rsin(\frac { \theta}{2}))( sin(\frac { \theta}{2})dr+rcos(\frac { \theta}{2})d\theta)=0$$
arranging in terms of ##dr## and ##d\theta##,
$$(sin(\frac { \theta}{2})-cos(\frac { \theta}{2})(sin(\frac { \theta}{2})-2cos(\frac { \theta}{2}))dr +
\\r(2sin^2(\frac { \theta}{2})+\frac {cos(\frac { \theta}{2})}{2}(cos(\frac { \theta}{2})-sin(\frac { \theta}{2}))d\theta=0$$
Let
$$u=tan(\frac { \theta}{2}), du=\frac {d\theta}{2cos^2(\frac { \theta}{2})}, cos(\frac { \theta}{2})=\sqrt {\frac {1}{1+u^2}}, sin(\frac { \theta}{2})=\sqrt {\frac {u}{1+u^2}}$$
The d.e. is now
$$-\frac {dr}{r}= \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}+ \frac {du}{2(1+u^2)(1-\frac{u}{2})}$$
and integrating both sides we get
$$log(\frac {1}{r})=I_1 +I_2
\\I_1=\int \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}
\\I_2=\int \frac {du}{2(1+u^2)(1-\frac{u}{2})}
$$
Expanding the integrals in partial fractions:$$
\\I_1=2\int (\frac{au +b }{1+u^2}+ \frac {c}{1-\frac {u}{2}} +\frac {d}{1-u})du$$
We have four equations of the constants in terms of powers of u. I omit the algebra. You can check me. I find ##a=\frac{1}{2}, b=-\frac {3}{4}, c=\frac {7}{4}, d=-1##
and
$$I_2=\frac {1}{2} \int ( \frac {au+ b}{(1+u^2)} +\frac {c}{(1-\frac{u}{2})})du$$
I find for ##I_2##, ##a=\frac{2}{5}, b=\frac{4}{5}, c=\frac{1}{5}##
I consult an old integral table and observe the forms,
$$\int \frac{dx}{1+x^2}=tan^{-1}(x)
\\\int \frac{xdx}{1+x^2}=\frac{1}{2}log\left | 1+ x^2 \right |$$
to find
$$log(\frac{1}{r})=\frac{7}{5}log(1+u^2) + \frac{19}{5}log\left | 1-\frac {u}{2} \right | - \frac {7}{2}log \left | 1-u \right | - \frac {13}{10}tan^{-1}(u) + C$$
where C is a constant of integration. I exponentiate and invert both sides of the equation to express r in terms of ##\theta##:
$$r=\frac {exp( \frac {13}{20} \theta + C)}{\left ( (1+ tan^2(\frac {\theta}{2}))^{\frac {7}{5}} + \left | 1-\frac {tan(\frac {\theta}{2})}{2} \right |^{\frac {19}{5}} + \left |1-tan(\frac {\theta}{2}) \right |^{-\frac {7}{2}}\right )}$$
Because ##r=0## for ##\theta = \pi (n+ \frac {1}{2})## with n an integer, I roughly describe ##r(\theta)##, looking down on the cylindrical axis, as four petals of increasing exponential extent.
Peace,
Fred
 
  • #27
I hope there is a mistake somewhere because that result looks ugly and it gets even worse if substituted back.
 
  • #28
mfb said:
I hope there is a mistake somewhere because that result looks ugly and it gets even worse if substituted back.
At least there are a lot of typos, which make it difficult to read and to distinguish errors from typos.
  1. The substitution is the other way around.
  2. ##dy'## has two sine terms.
  3. The false sine has magically become a cosine again.
  4. Where's the ##-4## term of ##dr## gone?
  5. etc.
If it should be correct, what I doubt, then it definitely needs a review, and I suggest some further substitutions just to make it readable.
 
  • #29
Thank you @mfb and @fresh_42 for reading my submission.
@fresh_42
complaint #1: No, the substitution is correct.
complaint #2: I mistyped four equations in a furor of cutting and pasting. I humbly apologize. They should read:$$
\\ dx'=cos(\frac { \theta}{2})dr -\frac {1}{2}rsin(\frac { \theta}{2})d\theta
\\dy'=sin(\frac { \theta}{2})dr +\frac {1}{2}rcos(\frac { \theta}{2})d\theta
\\(2cos(\frac { \theta}{2})-4sin(\frac { \theta}{2}))(cos(\frac { \theta}{2})dr-\frac {1}{2}rsin(\frac { \theta}{2})d\theta) +
\\(cos(\frac { \theta}{2})+rsin(\frac { \theta}{2}))( sin(\frac { \theta}{2})dr+\frac {1}{2}rcos(\frac { \theta}{2})d\theta)=0$$
complaint #3: Resolved by the above correction.
complaint #4: Resolved by the above correction.
complaint #5: I couldn't find any other typos or omissions, but I fully understand your skepticism given my misprint of the equations above.
@mfb: Yes, the result is ugly, depending on your sense of aesthetics, but the d.e. started out ugly and asymmetric.
For the differential equation:$$(2x-4y+6)dx + (x +y-3)dy=0$$
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
$$(2x'-4y')dx'+(x'+y')dy'=0$$
The d.e. is clearly not exact so we seek a transformation that will separate variables. Let $$x'=rcos(\frac { \theta}{2}), y'=rsin(\frac { \theta}{2})
\\ dx'=cos(\frac { \theta}{2})dr -\frac {1}{2}rsin(\frac { \theta}{2})d\theta
\\dy'=sin(\frac { \theta}{2})dr +\frac {1}{2}rcos(\frac { \theta}{2})d\theta$$
and so the d.e. becomes
$$
(2cos(\frac { \theta}{2})-4sin(\frac { \theta}{2}))(cos(\frac { \theta}{2})dr-\frac {1}{2}rsin(\frac { \theta}{2})d\theta) +
\\(cos(\frac { \theta}{2})+rsin(\frac { \theta}{2}))( sin(\frac { \theta}{2})dr+\frac {1}{2}rcos(\frac { \theta}{2})d\theta)=0$$
arranging in terms of ##dr## and ##d\theta##,
$$(sin(\frac { \theta}{2})-cos(\frac { \theta}{2})(sin(\frac { \theta}{2})-2cos(\frac { \theta}{2}))dr +
\\r(2sin^2(\frac { \theta}{2})+\frac {cos(\frac { \theta}{2})}{2}(cos(\frac { \theta}{2})-sin(\frac { \theta}{2}))d\theta=0$$
Let
$$u=tan(\frac { \theta}{2}), du=\frac {d\theta}{2cos^2(\frac { \theta}{2})}, cos(\frac { \theta}{2})=\sqrt {\frac {1}{1+u^2}}, sin(\frac { \theta}{2})=\sqrt {\frac {u}{1+u^2}}$$
The d.e. is now
$$-\frac {dr}{r}= \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}+ \frac {du}{2(1+u^2)(1-\frac{u}{2})}$$
and integrating both sides we get
$$log(\frac {1}{r})=I_1 +I_2
\\I_1=\int \frac {2udu}{(1+u^2)(1-u)(1-\frac{u}{2})}
\\I_2=\int \frac {du}{2(1+u^2)(1-\frac{u}{2})}
$$
Expanding the integrals in partial fractions:$$
\\I_1=2\int (\frac{au +b }{1+u^2}+ \frac {c}{1-\frac {u}{2}} +\frac {d}{1-u})du$$
We have four equations of the constants in terms of powers of u. I omit the algebra. You can check me. I find ##a=\frac{1}{2}, b=-\frac {3}{4}, c=\frac {7}{4}, d=-1##
and
$$I_2=\frac {1}{2} \int ( \frac {au+ b}{(1+u^2)} +\frac {c}{(1-\frac{u}{2})})du$$
I find for ##I_2##, ##a=\frac{2}{5}, b=\frac{4}{5}, c=\frac{1}{5}##
I consult an old integral table and observe the forms,
$$\int \frac{dx}{1+x^2}=tan^{-1}(x)
\\\int \frac{xdx}{1+x^2}=\frac{1}{2}log\left | 1+ x^2 \right |$$
to find
$$log(\frac{1}{r})=\frac{7}{5}log(1+u^2) + \frac{19}{5}log\left | 1-\frac {u}{2} \right | - \frac {7}{2}log \left | 1-u \right | - \frac {13}{10}tan^{-1}(u) + C$$
where C is a constant of integration. I exponentiate and invert both sides of the equation to express r in terms of ##\theta##:
$$r=\frac {exp( \frac {13}{20} \theta + C)}{\left ( (1+ tan^2(\frac {\theta}{2}))^{\frac {7}{5}} + \left | 1-\frac {tan(\frac {\theta}{2})}{2} \right |^{\frac {19}{5}} + \left |1-tan(\frac {\theta}{2}) \right |^{-\frac {7}{2}}\right )}$$
Because ##r=0## for ##\theta = \pi (n+ \frac {1}{2})## with ##n## an integer, I roughly describe ##r(\theta)##, looking down on the cylindrical axis, as four petals of increasing exponential extent.
Peace,
Fred
 
  • #30
Fred Wright said:
Here is my solution to #5
For the differential equation:$$(2x-4y+6)dx + (x +y-3)dy=0$$
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
$$(2x'-4y')dx'+(x'+y')dy'=0$$
Not that it matters, but ##2x'-4y'=2(x+1)-4(y+2)=2x-4y-6## and this is not the same as ##2x-4y+6##. O.k. it doesn't really matter. Nevertheless, it is wrong.
 
  • #31
fresh_42 said:
Not that it matters, but ##2x'-4y'=2(x+1)-4(y+2)=2x-4y-6## and this is not the same as ##2x-4y+6##. O.k. it doesn't really matter. Nevertheless, it is wrong.
Yes, you are right and I think it matters enough to disqualify my solution.

Peace,
Fred
 
  • #32
Fred Wright said:
Yes, you are right and I think it matters enough to disqualify my solution.

Peace,
Fred
Why? Just swap them: ##x'=x-1\; , \;y'=y-2## and everything else remains the same - as far as I could see. Although I expect there is also a solution in Cartesian coordinates. Re-substitution at the end of your solution doesn't look like a good idea, though.
 
  • #34
QuantumQuest said:
##2.## Find the volume of the solid ##S## which is defined by the relations ##z^2 - y \geq 0,## ##\space## ##x^2 - z \geq 0,## ##\space## ##y^2 - x \geq 0,## ##\space## ##z^2 \leq 2y,## ##\space## ##x^2 \leq 2z,## ##\space## ##y^2 \leq 2x## ##\space## ##\space## (by @QuantumQuest)

As the 16th of the month has passed, let me try problem 2.
We have:
$$\begin{cases}z^2 - y \geq 0\\ x^2 - z \geq 0\\ y^2 - x \geq 0\\ z^2 \leq 2y\\ x^2 \leq 2z\\ y^2 \leq 2x
\end{cases} \Rightarrow
\begin{cases}0 \le z \le x^2 \le 2z\\ 0 \le x \le y^2 \le 2x\\ 0 \le y\le z^2 \le 2y \end{cases} \Rightarrow
\begin{cases}0 \le z^2 \le x^4 \le 4z^2 \\ 0 \le y \le x^2 \le 8y \\ 0 \le y^2 \le x^4 \le 64y^2 \\ 0 \le x \le x^4 \le128 x \end{cases} \Rightarrow
x=0 \lor 1\le x^3 \le 128 \Rightarrow
x=0\lor 1\le x\le4\sqrt[3] 2
$$
With ##x=0## it follows that ##y=z=0##, meaning this won't give any volume.
So we are left with (with limits from the 2nd step):
$$\text{Volume S} = \int_1^{4\sqrt[3] 2}\int_{\sqrt x}^{\sqrt{2x}}\int_{\sqrt y}^{\sqrt{2y}} 1 \,dz\,dy\,dx
= \int_1^{4\sqrt[3] 2}\int_{\sqrt x}^{\sqrt{2x}} (\sqrt{2}-1)\sqrt y \,dy\,dx
= \int_1^{4\sqrt[3] 2} (\sqrt{2}-1)\frac 23 y^{3/2}\Big|_{\sqrt x}^{\sqrt{2x}} \,dx \\
= \int_1^{4\sqrt[3] 2} (\sqrt{2}-1)\frac 23 \left[(2x)^{3/4}-x^{3/4}\right] \,dx
= \int_1^{4\sqrt[3] 2} (\sqrt{2}-1)\frac 23 (2^{3/4}-1)x^{3/4} \,dx
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 x^{7/4} \Big|_1^{4\sqrt[3] 2} \\
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 \Big((4\sqrt[3] 2)^{7/4} - 1\Big)
$$
 
  • #35
I like Serena said:
##0 \leq y \leq x^2 \leq 8y##
How is this justified?
 
  • #36
Fred Wright said:
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
##(2x'-4y')dx'+(x'+y')dy'=0##

I don't think that the substitution is carried out right.

EDIT: The result of this problem is not ugly at all. It just needs some different thinking about simplification in the beginning in order to manage things properly.
 
  • #37
QuantumQuest said:
How is this justified?

That should be ##y \le x^4 \le 8y##, which follows from ##0 \le z^2 \le x^4 \le 4z^2## and ##0\le y\le z^2 \le 2y##.
Consequently the solution was unfortunately wrong. It should be:
$$\begin{cases}z^2 - y \geq 0\\ x^2 - z \geq 0\\ y^2 - x \geq 0\\ z^2 \leq 2y\\ x^2 \leq 2z\\ y^2 \leq 2x
\end{cases} \Rightarrow
\begin{cases}0 \le z \le x^2 \le 2z\\ 0 \le x \le y^2 \le 2x\\ 0 \le y\le z^2 \le 2y \end{cases} \Rightarrow
\begin{cases}0 \le z^2 \le x^4 \le 4z^2 \\ 0 \le y \le x^4 \le 8y \\ 0 \le y^2 \le x^8 \le 64y^2 \\ 0 \le x \le x^8 \le128 x \end{cases} \Rightarrow
x=0 \lor 1\le x^7 \le 128 \Rightarrow
x=0\lor 1\le x\le 2
$$
With ##x=0## it follows that ##y=z=0##, meaning this won't give any volume.
So we are left with (with limits from the 2nd step):
$$\text{Volume S} = \int_1^{2}\int_{\sqrt x}^{\sqrt{2x}}\int_{\sqrt y}^{\sqrt{2y}} 1 \,dz\,dy\,dx
= \int_1^{2}\int_{\sqrt x}^{\sqrt{2x}} (\sqrt{2}-1)\sqrt y \,dy\,dx
= \int_1^{2} (\sqrt{2}-1)\frac 23 y^{3/2}\Big|_{\sqrt x}^{\sqrt{2x}} \,dx \\
= \int_1^{2} (\sqrt{2}-1)\frac 23 \left[(2x)^{3/4}-x^{3/4}\right] \,dx
= \int_1^{2} (\sqrt{2}-1)\frac 23 (2^{3/4}-1)x^{3/4} \,dx
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 x^{7/4} \Big|_1^{2} \\
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 \Big(2^{7/4} - 1\Big)
$$
 
  • #38
I like Serena said:
That should be ##y \le x^4 \le 8y##, which follows from ##0 \le z^2 \le x^4 \le 4z^2## and ##0\le y\le z^2 \le 2y##.
Consequently the solution was unfortunately wrong. It should be:

The volume you find is not correct. I'll just ask how you found / deduced ##0 \le x \le x^8 \le 128 x##, because I cannot see it. I don't know how exactly you solve all these inequalities, but if it is of any help: the bounds you give for ##x##, i.e. ##1\le x\le 2##, are the right bounds, but not for ##x## alone. You must find some other expression(s) from the givens of the problem that is / are bounded this way.
 
  • #39
QuantumQuest said:
As an extra hint I would point you in the direction of making some transformation(s) and seeing whether the notion of a determinant is of any use in this case.
We don't seem to be making much progress on this problem, so let's take what we've got so far and try to come up with an approach to solve it.

I can tell you that doing the integration in the given cartesian coordinates results in a bunch of triple integrals to account for all of the combinations of constraints. I think it is solvable that way, but tedious in the extreme. It would be difficult to know if you arrived at the correct answer with that approach because you could never be sure there was no mistake somewhere unless you automated the whole process, and in that case you might as well use Mathematica.

@QuantumQuest gives us a couple of hints. The idea of using a matrix determinant suggests transforming the solid defined by the constraints into a parallelepiped. The volume of a parallelepiped is equal to the absolute value of the determinant of a ##3\times 3## matrix built from the edge vectors at one of its vertices, i.e. constructed from four of its vertices. This idea seems credible. The volume in question resembles a parallelepiped in that it has 6 faces, each of which forms an edge with four of the other faces. The faces and edges are warped and twisted, though, so a transformation would be needed to make them linear. I have not been able to coax Mathematica into drawing a plot of this shape, but you can get an idea of it by looking at the constraints:

##y \leq z^2 \leq 2y##
##z \leq x^2 \leq 2z##
##x \leq y^2 \leq 2x##

A negative value of ##x, y,## or ##z## would violate a constraint, so we can assume they are non-negative. Then we can bound their values further to within the cube ##1\leq x \leq 2##, ##1\leq y \leq 2##, ##1\leq z \leq 2##. This follows from a chain of inequalities:
##~~~~~~~~~~~~~~~~~~~y \leq z^2 \leq 2y##
##~~~~~~~~~x\leq y^2 \leq z^4 \leq 4y^2 \leq 8x##
##z\leq x^2\leq y^4 \leq z^8 \leq 16y^4 \leq 64x^2\leq 128z##

##1\leq z^7 \leq 128##

The other variables can be bounded similarly. It is also clear that both ##(x,y,z) = (1,1,1)## and ##(x,y,z) = (2,2,2)## satisfy all of the constraints.

Now consider one face, say ##y=z^2##. Within the cube, this face cannot intersect ##z^2=2y##, but it does intersect each of the other four faces. For example, its intersection with ##z=x^2## is described by the parametric curve ##(x, x^4, x^2)##.

It is not obvious what kind of transformation would make that curve (and each of the five other edges) look linear. I tried what I thought was the simplest possibility. Since the volume is trilaterally symmetric about the line ##w\vec S## where ##w## is a real-valued parameter and ##\vec S = \frac 1 {\sqrt 3}(1, 1, 1)##, it would be really nice if the distance between ##\vec T(x) \equiv (x-1, x^4-1, x^2-1)## and ##w \vec S## varied linearly. (I subtract 1 from each coordinate because the curve passes through the point (1,1,1), at which point its distance from ##w \vec S## is 0.)

Specifically, if ##|\vec T(x) -(\vec T(x) \cdot \vec S) \vec S| = a(x-1)## where ##a## is some constant (and assuming the same transformation worked for the other edges) we could just apply a rotation depending on the values of ##x, y,## and ##z## and turn our shape into a parallelepiped. It is easy to factor ##x-1## out of this expression, but the other factor looks nothing like a constant because the coefficients of ##x^n## terms for ##n>0## fail to cancel out. So this transformation would not work. Does anyone have an idea for a transformation that would work?
 
  • #40
tnich said:
We don't seem to be making much progress on this problem, so let's take what we've got so far and try to come up with an approach to solve it.

Although the post of @tnich does not ask me for an answer or a hint, I think it is fair to answer myself, in appreciation of the effort put in by all members who tried this particular problem. I see that both you, @tnich, and @I like Serena put good effort into solving the problem, so I owe a further and better hint. A first observation is that the solid is bounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. Now try the transformation ##\frac{z^2}{y} = u##, ##\frac{x^2}{z} = v##, ##\frac{y^2}{x} = w##.
 
  • #41
QuantumQuest said:
Although the post of @tnich does not ask me for an answer or a hint, I think it is fair to answer myself, in appreciation of the effort put in by all members who tried this particular problem. I see that both you, @tnich, and @I like Serena put good effort into solving the problem, so I owe a further and better hint. A first observation is that the solid is bounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. Now try the transformation ##\frac{z^2}{y} = u##, ##\frac{x^2}{z} = v##, ##\frac{y^2}{x} = w##.
It looks obvious now that you have shown it to us. The Jacobian determinant of that transformation would be
##\begin{vmatrix}
0 &-\frac {z^2} {y^2} & \frac {2z} y \\
\frac {2x} z & 0 & -\frac {x^2} {z^2} \\
-\frac {y^2} {x^2} & \frac {2y} x & 0
\end{vmatrix} = 7##

So ##dx~dy~dz = \frac 1 7 du~dv~dw## and the volume of the solid is

##\int_1^2\int_1^2\int_1^2 \frac 1 7 du~dv~dw = \frac 1 7##
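
As a sanity check (only a sketch, assuming Python with sympy is available; the variable names and the Monte Carlo sample size are ad hoc choices), both the Jacobian value ##7## and the volume ##\frac 1 7## can be confirmed independently:

```python
import random
import sympy as sp

# Symbolic check of the Jacobian of (u, v, w) = (z^2/y, x^2/z, y^2/x) w.r.t. (x, y, z)
x, y, z = sp.symbols('x y z', positive=True)
J = sp.Matrix([z**2/y, x**2/z, y**2/x]).jacobian([x, y, z])
print(sp.simplify(J.det()))        # expected: 7, hence dx dy dz = (1/7) du dv dw

# Crude Monte Carlo estimate of the volume of S, sampling the bounding cube [0, 2]^3
random.seed(0)
hits, N = 0, 200_000
for _ in range(N):
    X, Y, Z = (2 * random.random() for _ in range(3))
    if Y <= Z*Z <= 2*Y and Z <= X*X <= 2*Z and X <= Y*Y <= 2*X:
        hits += 1
print(8 * hits / N)                # should come out near 1/7 ≈ 0.143
```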
 
  • #42
(5) looks like a standard differential equation. When my last exam is done tomorrow, I will attempt it.

Here is already a sketch of what I would attempt:

- Calculate the intersection point of the two lines appearing in the DE.
- Substitute ##u = x- x_0, v = y - y_0 ## with ##(x_0,y_0)## the intersection point.
- The DE becomes homogeneous: divide everything by ##u## and substitute ##z:= v/u##
- The DE becomes solvable by separation of variables.

and then we are done.
 
  • #43
QuantumQuest said:
##10.## b) Show that there are infinitely many units (invertible elements) in ##\mathbb{Z}[\sqrt{3}]##.
Any pair of integers ##(x, y)## such that ##x^2 - 3 y^2 = 1## forms two invertible elements ##x \pm \sqrt 3 y##. For the first few pairs with this property ##(1,0), (2,1), (7,4), (26,15)##, both the ##x_n## and the ##y_n## satisfy the linear difference equation ##z_n=4z_{n-1}-z_{n-2}##. The general solution of this difference equation is ##z_n = a(2+\sqrt 3)^n + b(2-\sqrt 3)^n##. For ##x_n##, ##a_x = b_x = \frac 1 2##. For ##y_n##, ##a_y = -b_y = \frac 1 {2\sqrt 3}##.

Claim. Let ##x_n = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]##, and ##y_n = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]##. Then for ##\forall n \in \mathbb Z^+##, ##x_n, y_n \in \mathbb Z## and ##x_n^2 - 3 y_n^2 = 1##.

Proof.
##\forall n \in \mathbb Z^+, x_n \in \mathbb Z##:
$$\begin{align}x_n & = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 2 \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j + \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For odd j, the terms in the two sums cancel, leaving
$$x_n = \sum_{k=0}^{\lfloor \frac n 2 \rfloor} \binom n {2k} 2^{n-2k}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+, y_n \in \mathbb Z##:
$$\begin{align}y_n & = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 {2\sqrt 3} \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j - \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For even j, the terms in the two sums cancel, leaving
$$y_n = \sum_{k=0}^{\lfloor \frac {n-1} 2 \rfloor} \binom n {2k+1} 2^{n-2k-1}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+##, ##x_n^2 - 3 y_n^2 = 1##:
First note that ##x_1 = 2, y_1 = 1## and ##x_1^2-3y_1^2 = (2+\sqrt 3)(2-\sqrt 3)=1##.
Now
$$\begin{align}x_n^2 - 3 y_n^2 & = \left(\frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]\right)^2-3\left(\frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]\right)^2\nonumber \\
& = \frac 1 4 [(2+\sqrt 3)^{2n} + 2+(2-\sqrt 3)^{2n}]-3\frac 1 {12} [(2+\sqrt 3)^{2n} - 2+(2-\sqrt 3)^{2n}]\nonumber \\
& = 1\nonumber \\
\end{align}$$
This shows that there is an infinite number of invertible elements in ##\mathbb Z[\sqrt 3]## formed from pairs ##(x_n, y_n)## for which ##(x_n+\sqrt 3 y_n)(x_n-\sqrt 3 y_n) = 1##.
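
As a numerical spot check (not part of the proof; a sketch assuming Python with sympy, with ad hoc variable names), the closed forms can be compared against the recurrence and against Pell's equation for the first few ##n##:

```python
import sympy as sp

s = sp.sqrt(3)
x, y = [1, 2], [0, 1]                    # seeds (x_0, y_0) = (1, 0) and (x_1, y_1) = (2, 1)
for _ in range(6):                       # extend both sequences via z_n = 4 z_{n-1} - z_{n-2}
    x.append(4*x[-1] - x[-2])
    y.append(4*y[-1] - y[-2])
for n in range(8):
    xn = sp.expand(((2 + s)**n + (2 - s)**n) / 2)
    yn = sp.expand(((2 + s)**n - (2 - s)**n) / (2*s))
    assert xn == x[n] and yn == y[n]     # closed forms reproduce the integer sequences
    assert x[n]**2 - 3*y[n]**2 == 1      # each pair solves x^2 - 3 y^2 = 1
print(list(zip(x, y)))                   # (1, 0), (2, 1), (7, 4), (26, 15), ...
```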
 
  • #44
tnich said:
Any pair of integers ##(x, y)## such that ##x^2 - 3 y^2 = 1## forms two invertible elements ##x \pm \sqrt 3 y##. For the first few pairs with this property ##(1,0), (2,1), (7,4), (26,15)##, both the ##x_n## and the ##y_n## satisfy the linear difference equation ##z_n=4z_{n-1}-z_{n-2}##. The general solution of this difference equation is ##z_n = a(2+\sqrt 3)^n + b(2-\sqrt 3)^n##. For ##x_n##, ##a_x = b_x = \frac 1 2##. For ##y_n##, ##a_y = -b_y = \frac 1 {2\sqrt 3}##.

Claim. Let ##x_n = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]##, and ##y_n = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]##. Then for ##\forall n \in \mathbb Z^+##, ##x_n, y_n \in \mathbb Z## and ##x_n^2 - 3 y_n^2 = 1##.

Proof.
##\forall n \in \mathbb Z^+, x_n \in \mathbb Z##:
$$\begin{align}x_n & = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 2 \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j + \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For odd j, the terms in the two sums cancel, leaving
$$x_n = \sum_{k=0}^{\lfloor \frac n 2 \rfloor} \binom n {2k} 2^{n-2k}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+, y_n \in \mathbb Z##:
$$\begin{align}y_n & = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 {2\sqrt 3} \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j - \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For even j, the terms in the two sums cancel, leaving
$$y_n = \sum_{k=0}^{\lfloor \frac {n-1} 2 \rfloor} \binom n {2k+1} 2^{n-2k-1}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+##, ##x_n^2 - 3 y_n^2 = 1##:
First note that ##x_1 = 2, y_1 = 1## and ##x_1^2-3y_1^2 = (2+\sqrt 3)(2-\sqrt 3)=1##.
Now
$$\begin{align}x_n^2 - 3 y_n^2 & = \left(\frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]\right)^2-3\left(\frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]\right)^2\nonumber \\
& = \frac 1 4 [(2+\sqrt 3)^{2n} + 2+(2-\sqrt 3)^{2n}]-3\frac 1 {12} [(2+\sqrt 3)^{2n} - 2+(2-\sqrt 3)^{2n}]\nonumber \\
& = 1\nonumber \\
\end{align}$$
This shows that there is an infinite number of invertible elements in ##\mathbb Z[\sqrt 3]## formed from pairs ##(x_n, y_n)## for which ##(x_n+\sqrt 3 y_n)(x_n-\sqrt 3 y_n) = 1##.
Yes, this is correct, although a bit too much work. With your setting of ##x_n## and ##y_n## you immediately have
$$x_n-\sqrt{3}y_n\; , \;x_n+\sqrt{3}y_n \in \mathbb{Z}[\sqrt{3}]$$
and by the first line and the calculation at the end (3 lines) that these are units. You didn't need to prove ##x_n,y_n## are integers.

So the proof reduces to:
  • 1st line
  • definition of ##x_n,y_n##
    but without the scaling factors: ##\pm (2 \pm \sqrt{3})^n## do the job
  • last five lines to prove they are units
So your proof is correct, just a bit too complicated.
 
  • #45
fresh_42 said:
You didn't need to prove ##x_n,y_n## are integers.

fresh_42 said:
  • definition of ##x_n,y_n## but without the scaling factors: ##\pm (2 \pm \sqrt{3})^n## do the job
Even better, noting that ##(2+\sqrt 3)(2-\sqrt 3) = 1##, it follows directly that ##(2+\sqrt 3)^n(2-\sqrt 3)^n = 1~ \forall n \in \mathbb Z^+##. So ##\{(2 \pm \sqrt 3)^n: n \in \mathbb Z^+\}## is an infinite set of units in ##\mathbb Z[\sqrt 3]##.
 
  • #46
tnich said:
Even better, noting that ##(2+\sqrt 3)(2-\sqrt 3) = 1##, it follows directly that ##(2+\sqrt 3)^n(2-\sqrt 3)^n = 1~ \forall n \in \mathbb Z^+##. So ##\{(2 \pm \sqrt 3)^n: n \in \mathbb Z^+\}## is an infinite set of units in ##\mathbb Z[\sqrt 3]##.
Yep. I think that with the factors ##\pm 1## included they are also all units, but I haven't checked. So if you're interested, go ahead.
 
  • #47
QuantumQuest said:
##10.## a) Give an example of an integral domain (not a field) which has common divisors, but doesn't have greatest common divisors.
b) (solved by @tnich ) Show that there are infinitely many units (invertible elements) in ##\mathbb{Z}[\sqrt{3}]##.
c) Determine the units of ##\{\,\frac{1}{2}a+ \frac{1}{2}b\sqrt{-3}\,\vert \,a+b \text{ even }\}##.
d) The ring ##R## of integers in ##\mathbb{Q}(\sqrt{-19})## is the ring of all elements, which are roots of monic polynomials with integer coefficients. Show that ##R## is built by all elements of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd. ##\space## ##\space## (by @fresh_42)

a) The Eisenstein Integers ##\mathbb Z[\sqrt{-3}]## (same as in (c)). For instance the numbers ##a=4=2\cdot 2=(1+\sqrt{-3})(1-\sqrt{-3})## and ##b=2(1+\sqrt{-3})## have both ##2## and ##1+\sqrt{-3}## as maximal common divisor. As they have the same norm ##N=2^2=1^2+3\cdot 1^2##, that means that ##a## and ##b## do not have a greatest common divisor.

b) Skipping as @tnich already solved it.

c) Every unit has norm ##\pm 1##. So we must have ##a^2+3b^2=\pm 1##. It follows that ##|a|\le 1## and ##|b|\le 1##. When enumerating the possibilities we find the six units ##\pm 1## and ##\pm1 \pm \sqrt{-3}##.

d) Let ##p+q\sqrt{-19}## be an element in ##R## with rational p and q. Then ##p-q\sqrt{-19}## is also in ##R##, and ##(x-(p+q\sqrt{-19}))(x-(p-q\sqrt{-19}))=x^2-2px + (p^2+19 q^2)## must be a monic polynomial with integer coefficients.
It follows that p must be of the form ##\frac 12 a## where ##a## is an integer.
If p is an integer, then ##19q^2## must be an integer as well, and since 19 is prime, q must be an integer.
If ##p=\frac 12 a## with odd ##a##, then ##p^2+19 q^2=(\frac 12 a)^2+19 q^2## must be an integer as well; it follows that ##a^2+19(2q)^2 \equiv 1 + 19(2q)^2 \equiv 0 \pmod 4 \Rightarrow (2q)^2 \equiv 19^{-1}\cdot (-1) \equiv 1 \pmod 4##. Therefore ##2q## must be an odd integer as well.
Thus any element in R is of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd.
 
  • #48
I like Serena said:
a) The Eisenstein Integers ##\mathbb Z[\sqrt{-3}]## (same as in (c)). For instance the numbers ##a=4=2\cdot 2=(1+\sqrt{-3})(1-\sqrt{-3})## and ##b=2(1+\sqrt{-3})## have both ##2## and ##1+\sqrt{-3}## as maximal common divisor. As they have the same norm ##N=2^2=1^2+3\cdot 1^2##, that means that ##a## and ##b## do not have a greatest common divisor.
Correct. It also works for other rings of the form ##\mathbb{Z}[\sqrt{-p}]##. But it is not the example in c).
b) Skipping as @tnich already solved it.

c) Every unit has norm ##\pm 1##. So we must have ##a^2+3b^2=\pm 1##. It follows that ##|a|\le 1## and ##|b|\le 1##. When enumerating the possibilities we find the six units ##\pm 1## and ##\pm1 \pm \sqrt{-3}##.
Correct ... in a way, but you could have been a bit less sloppy here. As the elements are defined as ##\dfrac{1}{2}a+ \dfrac{1}{2}b\sqrt{-3}## with ##a+b## even, this ring is not ##\mathbb{Z}[\sqrt{-3}]##, as you suggested in a) and apparently assumed here, so the condition on units is actually ##a^2+3b^2=+4## (or ##a^2+3b^2=+1## in ##\mathbb{Z}[\sqrt{-3}]##); it is not ##\pm 1##, since the norm here is positive definite and can never be ##-1##.

Therefore the possibilities are ##|a|=2\; , \;b=0## or ##|a|=|b|=1##. The six units are then exactly the powers ##\left(\dfrac{1}{2}\left( 1+\sqrt{-3}\right)\right)^n## for ##n=0,\ldots ,5##, i.e. the sixth roots of unity.
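
A quick cross-check (a sketch assuming Python with sympy; `zeta` is just the name chosen here) that the condition ##a^2+3b^2=4## with ##a+b## even yields exactly six elements, and that these are precisely the powers of ##\frac{1}{2}(1+\sqrt{-3})##:

```python
import sympy as sp

zeta = (1 + sp.sqrt(-3)) / 2                      # primitive sixth root of unity
powers = {sp.expand(zeta**n) for n in range(6)}

units = set()
for a in range(-2, 3):
    for b in range(-2, 3):
        # ring elements (a + b*sqrt(-3))/2 with a + b even and norm (a^2 + 3b^2)/4 = 1
        if (a + b) % 2 == 0 and a**2 + 3*b**2 == 4:
            units.add(sp.expand(sp.Rational(a, 2) + sp.Rational(b, 2) * sp.sqrt(-3)))

print(len(units), powers == units)                # expected: 6 True
```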

I will take your solution as solved, since the basic idea is the same as for the ring I actually used, and I'd have to post a solution today anyway; which I herewith did.
d) Let ##p+q\sqrt{-19}## be an element in ##R## with rational p and q. Then ##p-q\sqrt{-19}## is also in ##R##,...
Why does ##p-q\sqrt{-19}## have to be in ##R\,?## The reason is that the integers are, ergo ##p## is, and thus so is ##p-q\sqrt{-19}##. So ##\mathbb{Z}\subseteq R## is crucial here, which means it deserves mention.
... and ##(x-(p+q\sqrt{-19}))(x-(p-q\sqrt{-19}))=x^2-2px + (p^2+19 q^2)## must be a monic polynomial with integer coefficients.
It follows that p must be of the form ##\frac 12 a## where ##a## is an integer.
You should have mentioned how you use ##a## and ##b## here, since it is not clear which quantifiers apply in your proof. This has to do with the fact that you didn't clearly say which direction of the proof you're in. This makes the entire proof unnecessarily hard to read.
If p is an integer, then ##19q^2## must be an integer as well, and since 19 is prime, q must be an integer.
... and ##p \equiv q \mod 2## which follows immediately from the fact that ##19## is prime.
If ##p=\frac 12 a## with odd ##a##, then ##p^2+19 q^2=(\frac 12 a)^2+19 q^2## must be an integer as well; it follows that ##a^2+19(2q)^2 \equiv 1 + 19(2q)^2 \equiv 0 \pmod 4 \Rightarrow (2q)^2 \equiv 19^{-1}\cdot (-1) \equiv 1 \pmod 4##. Therefore ##2q## must be an odd integer as well.
... and ##b=2q\,.##

What happened to the case where both ##a## and ##b## are even? As it is part of the statement, it does need a reason. That the integers are in ##R## is somewhat trivial, but ##\mathbb{Z}+\mathbb{Z}[\sqrt{-19}]## needs a little argument. Both could (should?) have been done at the beginning. Also, the modulo calculations could have been a bit more explicit, since you always have to think about whom you are writing those lines for - and it is not me!
Thus any element in R is of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd.
Formally you have only shown a necessary condition and not sufficiency, i.e. that those elements are indeed within ##R##. I know that it is no big deal and is implicitly covered by what you actually wrote, but I think we should be less sloppy here, since sloppiness must be earned: only if you can write a proof properly, such that everybody can follow it, are you allowed to be sloppy. Students should not get used to it unless they have the ability to do it cleanly.

So although your proof contains all necessary parts (somewhere), let me give an example of what I meant:

We recall that ##R## is defined as all integers in ##\mathbb{Q}(\sqrt{-19})##, i.e. all elements of ##\mathbb{Q}(\sqrt{-19})## with a minimal polynomial in ##\mathbb{Z}[x]##.

##"\Rightarrow "## : Elements of the stated form are integers in ##\mathbb{Q}(\sqrt{-19})##.

For ##r=\frac{1}{2}(a+b\sqrt{-19})\in R## we have ##\frac{1}{2}(a+b\sqrt{-19})\cdot \frac{1}{2}(a-b\sqrt{-19})=\frac{1}{4}(a^2+19b^2)\in \mathbb{Z}##, because ##a+b\equiv 0 (2)##. So $$(x-\frac{1}{2}(a+b\sqrt{-19}))(x-\frac{1}{2}(a-b\sqrt{-19}))=x^2-ax+\frac{1}{4}(a^2+19b^2)\in \mathbb{Z}[x]$$

##"\Leftarrow "## : Integers in ##\mathbb{Q}(\sqrt{-19})## are of the stated form.

If we have ##r \in \mathbb{Q}(\sqrt{-19})## an integer, then ##r^2+ar+b=0## for some ##a,b \in \mathbb{Z}##, i.e. ##2r=-a\pm \sqrt{a^2-4b}##. As ##r\in \mathbb{Q}(\sqrt{-19})## we have ##\mathbb{Z} \ni a^2-4b=(\alpha + \beta \sqrt{-19})^2=\alpha^2 -19\beta^2 +2\alpha \beta\sqrt{-19}## for some ##\alpha,\beta\in\mathbb{Q}##, and thus ##\alpha \beta=0##.

Case 1: ##\beta = 0##. Then ##b=\frac{a-\alpha}{2}\cdot \frac{a+\alpha}{2}\in \mathbb{Z}## and ##r_{1,2}=-\frac{1}{2}(a \pm \alpha)## and ##a\pm \alpha \equiv 0(2)## which means ##r_{1,2}\in R##.

Case 2: ##\alpha =0##. Then ##b=\frac{1}{4}(a^2+19\beta^2)\in \mathbb{Z}## and ##r_{1,2}=-\frac{1}{2}(a \pm \beta \sqrt{-19})## and we have to show that ##-a\pm \beta \equiv 0 (2)##. Let us assume this is not the case and ##\beta^2=(2k+a+1)^2##. Then
$$
4\,|\,(a^2+19\beta^2)=20a^2+76\cdot(k^2+ak+k)+38a+19 \not\equiv 0\,(4)
$$
which is a contradiction. Therefore we have again ##r_{1,2}\in R##.
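
For completeness, here is a small sympy sketch (assuming Python with sympy; the sample pairs ##(a,b)## are arbitrary) showing that the quadratic ##x^2-ax+\frac{1}{4}(a^2+19b^2)## has integer coefficients exactly for pairs of equal parity:

```python
import sympy as sp

x = sp.symbols('x')
# pairs with a ≡ b (mod 2) should print True, mixed-parity pairs should print False
for a, b in [(1, 1), (3, 5), (2, 0), (4, 2), (1, 0), (2, 1), (3, 2)]:
    r = sp.Rational(a, 2) + sp.Rational(b, 2) * sp.sqrt(-19)
    poly = sp.expand((x - r) * (x - r.conjugate()))   # x^2 - a*x + (a^2 + 19*b^2)/4
    coeffs = sp.Poly(poly, x).all_coeffs()
    print((a, b), poly, all(c.is_integer for c in coeffs))
```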
 
  • #49
Here is my attempt for 5.

We want to solve the differential equation ##(2x-4y+6)dx + (x+y-3)dy = 0##

Perform the substitution ##u = x-1, v = y- 2, du = dx, dv = dy##.

The DE becomes: ##(2(u+1)-4(v+2) +6)du + (u+1 + v+2 - 3)dv = (2u - 4v)du + (u+v)dv=0##

This implies that ##(2-4v/u)du + (1+v/u)dv = 0##

Substitute ##z:= v/u##, so that ##dv/du = d(zu)/du = u\, dz/du + z##

The DE becomes: ##(2-4z) + (1+z)(z'u + z) = 0##

Or equivalently:

##\dfrac{1+z}{-z^2 + 3z -2}\, dz = \dfrac{du}{u}##

We integrate both sides, and find:

##2\log|1-z| - 3\log|2-z| = \log|u| + C##

Thus, the solution is given by:

##2 \log|1 - v/u| - 3\log|2-v/u| = \log|u| + C##

or

##2 \log|1-(y-2)/(x-1)| - 3 \log|2 - (y-2)/(x-1)| = \log|x-1| + C##

A quick calculation leads to:

##\log\frac{(x-y+1)^2}{(2x-y)^3} = C \implies \frac{(x-y+1)^2}{(2x-y)^3} = K, K > 0##

assuming all the absolute values fall away (this gives conditions on the domain on which the solution is defined).
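
A quick symbolic check (a sketch, assuming Python with sympy) that the implicit solution really satisfies the original differential equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = (x - y + 1)**2 / (2*x - y)**3                  # implicit solution F(x, y) = K
dy_dx = -sp.diff(F, x) / sp.diff(F, y)             # slope along a level curve of F
residual = (2*x - 4*y + 6) + (x + y - 3) * dy_dx   # substitute into the original d.e.
print(sp.simplify(residual))                       # expected: 0
```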
 
  • #50
Math_QED said:
Integrating both sides and substituting back the variables, we obtain the solution.

Could you do that?
 
