Math Challenge - May 2021

SUMMARY

The forum discussion centers on advanced mathematical topics including Group Theory, Integrals, and Representation Theory. Key problems addressed include the evaluation of double integrals, properties of free groups, and the behavior of sequences defined by recursive relations. Notable contributions were made by users @Infrared, @julian, and @etotheipi, who provided solutions to complex integrals and proofs related to algebraic structures and geometric properties.

PREREQUISITES
  • Understanding of Group Theory and free groups
  • Familiarity with multivariable calculus and double integrals
  • Knowledge of Representation Theory and linear mappings
  • Basic concepts of sequences and limits in real analysis
NEXT STEPS
  • Study advanced topics in Group Theory, focusing on free groups and their properties
  • Explore techniques for evaluating complex double integrals in multivariable calculus
  • Learn about the applications of Representation Theory in various mathematical contexts
  • Investigate the behavior of recursive sequences and their convergence properties
USEFUL FOR

Mathematicians, graduate students in mathematics, and anyone interested in advanced topics in algebra, calculus, and mathematical proofs.

  • #91
fresh_42 said:
These are two. So you have 8/10 now.
I think I already have 10,
$$(x,y)=(-9,-12);(-9,12);(-8,0);(-7,0);(-4,-12);(-4,12);(-1,0);(0,0);(1,12);(1,-12)$$

Edit: I'll add an explanation for t=0 as well.
 
Likes: fresh_42
  • #92
Problem #12 (second attempt)
$$y^2=x\cdot (x+1)\cdot (x+7)\cdot (x+8)$$
substitute ##x+4 \rightarrow t##
then the equation becomes,
$$\begin{align}
y^2&=(t-4)\cdot(t-3)\cdot(t+3)\cdot(t+4)\nonumber\\
y^2&=(t^2-16)\cdot(t^2-9)\nonumber
\end{align}$$
so, for the R.H.S. to be a perfect square, the only possibilities are ##t=0,\pm3,\pm4,\pm5##,

because for a product of two numbers (say ##a,b##) to be a perfect square, the only possibilities are
##a=b##, ##a=0##, ##b=0##, ##(a=l^2\space \text{and}\space b=m^2)##, or ##(a=-p^2\space \text{and}\space b=-q^2)## (where ##l,m,p,q## are integers).

If we set ##t^2-16=t^2-9##, we get no solutions.

From ##t^2-16=l^2\space \text{&} \space t^2-9=m^2##
we get,
##t^2=16+l^2=m^2+9##
clearly ##t=5,-5## are the only possibilities, as ##(3,4,5)## is a Pythagorean triple.

And from ##t^2-16=-p^2\space \text{&} \space t^2-9=-q^2##
we get,
##t^2+p^2=16## & ##t^2+q^2=9##
As mentioned earlier, ##3^2+4^2=5^2## is the only Pythagorean triple containing 3 and 4, so for the above equations to hold we need either ##(t=0,p=\pm4)## or ##(t=\pm4,p=0)##, and similarly either ##(t=0,q=\pm3)## or ##(t=\pm3,q=0)##.

Also, we can have ##t^2-9=0## or ##t^2-16=0##,
which gives ##t=3,4,-3,-4##.

So, putting the obtained values of ##t## back into ##t=x+4##, we get ##x=-9,-8,-7,-4,-1,0,1##

So the ordered pairs ##(x,y)## are ##(-9,-12);(-9,12);(-8,0);(-7,0);(-4,-12);(-4,12);(-1,0);(0,0);(1,12);(1,-12)##
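
As a quick sanity check (a minimal brute-force sketch, assuming plain Python 3; the search range for ##x## is an assumption, motivated by ##|t|=|x+4|\leq 5## from the analysis above):

Code:
# Brute-force search for integer points on y^2 = x(x+1)(x+7)(x+8).
from math import isqrt

solutions = set()
for x in range(-20, 21):          # range is an assumption; |x + 4| <= 5 suffices
    rhs = x * (x + 1) * (x + 7) * (x + 8)
    if rhs < 0:
        continue
    y = isqrt(rhs)
    if y * y == rhs:
        solutions.add((x, y))
        solutions.add((x, -y))

print(sorted(solutions))
# Prints exactly the ten pairs listed above.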
 
Likes: fresh_42
  • #93
Problem #12 (alternate method)
$$y^2=x\cdot (x+1)\cdot (x+7)\cdot (x+8)$$
substitute ##x+4 \rightarrow t##
then the equation becomes,
$$\begin{align}
y^2&=(t-4)\cdot(t-3)\cdot(t+3)\cdot(t+4)\nonumber\\
y^2&=(t^2-16)\cdot(t^2-9)\nonumber\\
y^2&=t^4-25t^2+144\nonumber\\
y^2&=\left(t^2-\frac {25} 2\right)^2+144-\frac{625} 4\nonumber\\
4y^2&=(2t^2-25)^2-49\nonumber
\end{align}$$
from this we get,$$\begin{align}(7)^2+(2y)^2&=(2t^2-25)^2\nonumber\end{align}$$
But we know that there is only one possible Pythagorean triple containing 7, i.e.,
$$(7)^2+(24)^2=(25)^2$$
So, now there are only two possible cases,

either,
##y=0## and ##2t^2-25=\pm7\Rightarrow t=\pm3,\pm4##

or,
##y=\pm12## and ##2t^2-25=\pm25\Rightarrow t=0,\pm5##

Putting the respective values of ##t## in ##x=t-4## we get,
$$(x,y)\equiv (-9,-12);(-9,12);(-8,0);(-7,0);(-4,-12);(-4,12);(-1,0);(0,0);(1,12);(1,-12)$$
 
Likes: fresh_42
  • #94
I do stupid things when very sleep-deprived, but here is a go at #3 anyway:

\begin{align*}
I = \int_0^{\pi} \int_0^{\pi} \int_0^{\pi} \frac{1}{1 - \cos x \cos y \cos z} dx dy dz
\end{align*}

From the standard half angle formula substitution:

\begin{align*}
\int_0^{\pi} f ( \cos x ) dx = \int_0^\infty \frac{2}{1 + t^2} f ( \frac{1 - t^2}{1 + t^2} ) dt
\end{align*}

(I didn't notice this was given as a hint until after writing this up!). So we can write

\begin{align*}
I & = \int_0^{\pi} \int_0^{\pi} \int_0^{\pi} \frac{1}{1 - \cos x \cos y \cos z} dx dy dz
\nonumber \\
& = \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{1 - \frac{1 - t^2}{1 + t^2} \frac{1 - u^2}{1 + u^2} \frac{1 - v^2}{1 + v^2}} \frac{2}{1 + t^2} \frac{2}{1 + u^2} \frac{2}{1 + v^2} \, dt \, du \, dv
\nonumber \\
& = 4 \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{t^2 + u^2 + v^2 + t^2 u^2 v^2} dt du dv
\nonumber \\
& = 4 \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{t^2 u^2 + u^2 v^2 + v^2 t^2 + 1} dt du dv
\end{align*}

where in the last step we made the substitution ##t \rightarrow 1/t##, ##u \rightarrow 1/u##, ##v \rightarrow 1/v##.

Write ##p = tu##, ##q = uv##, ##r = vt##.

\begin{align*}
J (p,q,r) & =
\left|
\frac{\partial ( t , u , v) }{\partial ( p , q , r) }
\right|
\nonumber \\
& = 1/
\left|
\frac{ \partial ( p , q , r) }{\partial ( t , u , v)}
\right|
\nonumber \\
& = 1/
\begin{vmatrix}
u & t & 0 \\
0 & v & u \\
v & 0 & t
\end{vmatrix}
\nonumber \\
&= \frac{1}{2tuv}
\nonumber \\
&= \frac{1}{2 \sqrt{pqr}}
\end{align*}

So ##J (p,q,r) = \frac{1}{2 \sqrt{pqr}}## with ##0 \leq p < \infty##, ##0 \leq q < \infty##, ##0 \leq r < \infty##:

\begin{align*}
I & = 2 \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{p^2 + q^2 + r^2 + 1} \frac{dp dq dr}{\sqrt{pqr}}
\nonumber \\
&= \frac{1}{4} \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{x + y + z + 1} \frac{dx dy dz}{(xyz)^{3/4}} \qquad (\text{used } x = p^2 \text { etc})
\nonumber \\
& = \frac{1}{4} \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{(xyz)^{3/4}} \left( \int_0^\infty e^{- \alpha (x + y + z + 1)} d \alpha \right) dx dy dz
\nonumber \\
& = \frac{1}{4} \int_0^\infty e^{- \alpha} \left( \int_0^\infty \frac{1}{x^{3/4}} e^{- \alpha x} dx \right)^3 d \alpha
\nonumber \\
& = \frac{1}{4} \left( \int_0^\infty \frac{e^{- \alpha}}{\alpha^{3/4}} d \alpha \right) \left( \int_0^\infty \frac{1}{x^{3/4}} e^{-x} dx \right)^3
\nonumber \\
& = \frac{1}{4} \left( \int_0^\infty e^{-x} x^{\frac{1}{4} - 1} dx \right)^4
\nonumber \\
& = \frac{1}{4} \left[ \Gamma \left( \frac{1}{4} \right) \right]^4
\end{align*}
 
Likes: romsofia and Infrared
  • #95
julian said:
[solution to problem #3 quoted in full; see post #94]
The integral is known as the Watson integral. Its value is
$$
\int_0^\pi \int_0^\pi \int_0^\pi \dfrac{1}{1-\cos x\,\cos y\,\cos z}\,dx\,dy\,dz=\dfrac{1}{4}\,\Gamma\left(\dfrac{1}{4}\right)^4=2\pi {\overline\omega}^2=2G^2\pi^3\approx 43.198
$$
with the Gauß constant ##G=\displaystyle{\dfrac{2}{\pi}}\int_0^1\dfrac{ds}{\sqrt{1-s^4}}\,.##
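
A quick numerical cross-check of the stated closed forms (a sketch assuming Python with scipy available; it only verifies that the constants agree, not the integral itself):

Code:
# Check that Gamma(1/4)^4 / 4 and 2*G^2*pi^3 agree numerically (~43.198),
# with the Gauss constant G computed from its integral definition.
import math
from scipy.integrate import quad

G = (2 / math.pi) * quad(lambda s: 1 / math.sqrt(1 - s**4), 0, 1)[0]
print(G)                               # ~0.8346268 (Gauss constant)
print(math.gamma(0.25) ** 4 / 4)       # ~43.198
print(2 * G**2 * math.pi**3)           # ~43.198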
 
Likes: romsofia
  • #96
I have been trying problem #14 for a long, long time; I hope that I finally got it right.

$$f(x)=a_nx^n+\ldots+a_1x+a_0$$

(case I)

Let ##a_n>0##; then ##f(x)>0 \space \forall \space x \in\mathbb{R}##,
and ##f(x) \to +\infty## as ##x \to \pm \infty## (since ##f(x)## has no real roots, ##n## must be even).

Also, the coefficient of ##x^n## in ##F(x)## is again ##a_n##,
##\therefore F(x) \to +\infty## as ##x \to \pm \infty## (as ##n## is even),
##\therefore F(x)##, being continuous, attains a global minimum on ##\mathbb{R}##.

Now,
$$\begin{align}
F(x)&=f(x)+h\cdot f'(x)+h^2\cdot f''(x)+\ldots+h^n\cdot f^{(n)}(x)\nonumber\\
F(x)&=f(x)+h\left(f'(x)+h\cdot f''(x)+\ldots+h^{(n-1)}\cdot f^{(n)}(x)\right)\nonumber\\
F(x)&=f(x)+h\cdot F'(x)
\end{align}$$

Suppose the minimum of ##F(x)## is attained at ##x=a##; then ##F'(a)=0##, so using equation (1),
$$F(a)=f(a)>0\space \left(\text{as}\space f(x)>0 \space \forall \space x \in\mathbb{R}\right)$$
##\therefore## the minimum value of ##F(x)## is ##F(a)##, which is greater than zero,
##\therefore F(x)>0 \space \forall \space x \in\mathbb{R}##

(case II)

Similarly, if ##a_n<0##, then ##f(x)<0 \space \forall \space x \in\mathbb{R}##,
and ##f(x) \to -\infty## as ##x \to \pm \infty##.

##\therefore F(x) \to -\infty## as ##x \to \pm \infty##,
##\therefore F(x)## attains a global maximum on ##\mathbb{R}##.

Suppose the maximum of ##F(x)## is attained at ##x=b##; then ##F'(b)=0##, and using equation (1),
$$F(b)=f(b)<0\space \left(\text{as}\space f(x)<0 \space \forall \space x \in\mathbb{R}\right)$$
##\therefore## the maximum value of ##F(x)## is ##F(b)##, which is less than zero,
##\therefore F(x)<0 \space \forall \space x \in\mathbb{R}##

Thus, we can see that ##F(x)## is never equal to zero for any real value of ##x##
##\therefore## it doesn't have any real zeros.
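
A small symbolic spot check of this argument (a sketch assuming Python with sympy; the particular ##f## and ##h## below are arbitrary choices for illustration, not from the problem statement):

Code:
# For a sample f with no real roots, build F = f + h f' + h^2 f'' + ... + h^n f^(n)
# and confirm that F has no real zeros either.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**4 + x**2 + 1          # sample polynomial with no real roots (assumption)
h = sp.Rational(3, 2)        # sample value of h (assumption)
n = int(sp.degree(f, x))

F = sum(h**k * sp.diff(f, x, k) for k in range(n + 1))
print(sp.expand(F))
print(sp.real_roots(sp.Poly(F, x)))   # expected: [] (no real zeros)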
 
Likes: fresh_42
  • #97
Problem 13
$$\underbrace{\left|x-\dfrac{\sin(x)(14+\cos(x))}{9+6\cos(x)}\right|}_{=:f(x)}\leq 10^{-4}\text{ for } x\in \left[0,\dfrac{\pi}{4}\right]$$
let,
$$\begin{align}
f(x)&=x-\frac{\sin(x)(14+\cos(x))}{9+6\cos(x)}\nonumber\\
f'(x)&=1-\frac{\cos(x)(14+\cos(x))(9+6\cos(x))-\sin^2(x)(9+6\cos(x))+6\sin^2(x)(14+\cos(x))}{(9+6\cos(x))^2}\nonumber\\
f'(x)&=\frac{(9+6\cos(x))^2-(14\cos(x)+\cos^2(x))(9+6\cos(x))+(1-\cos^2(x))(9+6\cos(x))-6(1-\cos^2(x))(14+\cos(x))}{(9+6\cos(x))^2}\nonumber\\
f'(x)&=\frac{81+36\cos^2(x)+108\cos(x)-126\cos(x)-84\cos^2(x)-9\cos^2(x)-6\cos^3(x)+9+6\cos(x)-9\cos^2(x)-6\cos^3(x)-84-6\cos(x)+84\cos^2(x)+6\cos^3(x)} {(9+6\cos(x))^2}\nonumber\\
f'(x)&=\frac{6-18\cos(x)+18\cos^2(x)-6\cos^3(x)} {(9+6\cos(x))^2}\nonumber\\
f'(x)&=\frac{6(1-\cos(x))^3}{(9+6\cos(x))^2}\nonumber
\end{align}$$
We can see that ##f'(x)\geq 0 \space \forall \space x \in \mathbb{R}##, with equality only where ##\cos(x)=1##,
##\therefore f(x)## is increasing.

Thus, the minimum value of ##f(x)\text{ for } x\in \left[0,\dfrac{\pi}{4}\right]## is 0 at ##x=0##
And, maximum value is,
$$\begin{align}
f\left(\frac{\pi}{4}\right)=\frac{\pi}{4}-\frac{\frac{1}{\sqrt2}\left(14+\frac{1}{\sqrt2}\right)}{(9+3\sqrt2)}\nonumber\\
f\left(\frac{\pi}{4}\right)=\frac{\pi}{4}-\frac{(14\sqrt2+1)}{(6(3+\sqrt2))}\nonumber\\
f\left(\frac{\pi}{4}\right)=\frac{\pi}{4}-\frac{(14\sqrt2+1)(3-\sqrt2)}{42}\nonumber\\
f\left(\frac{\pi}{4}\right)=\frac{\pi}{4}-\frac{42\sqrt2-28+3-\sqrt2}{42}\nonumber\\
f\left(\frac{\pi}{4}\right)=\frac{\pi}{4}-\frac{41\sqrt2}{42}+\frac{25}{42}\nonumber
\end{align}$$
Substituting numerical values for ##\pi## and ##\sqrt2##, we get
$$f(x)_{max}\approx 9.445 \times 10^{-5}$$
##\therefore f(x)\leq 10^{-4}\text{ for } x\in \left[0,\dfrac{\pi}{4}\right]##
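
A numerical cross-check of this bound (a sketch assuming Python with numpy; it simply samples ##f## on the interval and is independent of the analytic argument above):

Code:
# Sample |x - sin(x)(14 + cos(x)) / (9 + 6 cos(x))| on [0, pi/4].
import numpy as np

x = np.linspace(0.0, np.pi / 4, 200001)
f = np.abs(x - np.sin(x) * (14 + np.cos(x)) / (9 + 6 * np.cos(x)))
print(f.max())              # ~9.4448e-05, attained at x = pi/4 (f is increasing)
print(f.max() <= 1e-4)      # True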
 
  • #98
kshitij said:
[solution to problem #14 quoted in full; see post #96]
You can abbreviate the second case by simply mentioning that we can use ##-f(x)## instead.
 
Likes: kshitij
  • #99
fresh_42 said:
You can abbreviate the second case by simply mentioning that we can use ##-f(x)## instead.
As I said, I had been trying this problem for a long time; once I got an idea of how to prove it, I didn't think much further, I was just too excited to post it here.
 
Likes: fresh_42
  • #100
kshitij said:
[solution to problem #13 quoted in full; see post #97]
You should have used the approximations that I gave in the problem statement, not a calculator, so that the final conclusion would read:
Now ##\pi/4= 0.7853975+\dfrac{\delta}{4} < 0.7854## and ##\dfrac{41\sqrt{2}-25}{42}=\dfrac{41\cdot 1.41421-25+41 \varepsilon }{42}>\dfrac{32.98261}{42}>0.7853\,,## i.e. ##0\leq f(x)<0.7854-0.7853=10^{-4}.##
but, yes, this is correct.

The reason is: if you use a calculator, then you implicitly assume that it is more precise than the values you have been given. That is probably correct, as long as you didn't use a slide rule. Nevertheless, it is an assumption about a device you have no control over, and you should be aware of it, e.g. when you write up the protocol of an experiment.
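
To illustrate the point about precision, the comparison can also be certified with exact rational arithmetic, using only the bounds ##\pi/4<0.7854## and ##\sqrt{2}>1.41421## quoted above (a sketch assuming Python's standard fractions module):

Code:
# Certify f(pi/4) = pi/4 - (41*sqrt(2) - 25)/42 < 10^-4 from rational bounds alone.
from fractions import Fraction

pi4_upper   = Fraction('0.7854')     # upper bound for pi/4
sqrt2_lower = Fraction('1.41421')    # lower bound for sqrt(2)

bound = pi4_upper - (41 * sqrt2_lower - 25) / 42
print(bound)                         # 419/4200000, i.e. about 9.976e-05
print(bound < Fraction(1, 10000))    # True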
 
Likes: kshitij
  • #101
fresh_42 said:
[reply quoted in full; see post #100]
I'll keep that in mind next time
 
  • #102
kshitij said:
I'll keep that in mind next time
It is not important in a case like this. I still have the dream that people can learn from these problems now and then, and this problem was all about precision. Hence the lesson to be learned is that the precision of a result is only as good as the precision of the measurements or calculating devices involved. My remark was meant to sharpen your senses, not to criticize anything.
 
Likes: kshitij
  • #103
fresh_42 said:
7. Let ##\alpha ## be an algebraic number of degree ##n\geq 1.## Then there is a real number ##c>0## such that for all ##\mathbb{Q}\ni\dfrac{p}{q}\neq \alpha ##
$$
\left|\alpha -\dfrac{p}{q}\right|\geq \dfrac{c}{q^n}
$$
Hi guys, I didn't read all your posts, but it seems that no one has said anything about Q7. I recall it is a beautiful theorem, Liouville's theorem, which describes algebraic and transcendental numbers a little bit. Here's a hint:
Let ##f(x)## be the polynomial for ##\alpha##. Consider the ##p/q## nearest to ##\alpha##, where ##p## is variable and ##q## is fixed. Then look at ##f(\alpha)-f(p/q)##, factorize it, and extract ##\alpha - p/q##.
 
  • #104
I may have done it, must go to bed now:

The ##\alpha## is the root of an ##n-##th order polynomial with integer coefficients:

\begin{align*}
f (x) = \sum_{i=0}^n a_i x^i
\end{align*}

Put

\begin{align*}
\tilde{\alpha} = \frac{p}{q} .
\end{align*}

If ##f (x)## has rational roots then denote them as ##\{ r_1 , \dots , r_k \}##.

Case (a) ##\tilde{\alpha} \not\in \{ r_1 , \dots , r_k \}## and ##|\alpha - \tilde{\alpha}| \leq 1##.

We have the identity

\begin{align*}
f (x) & = \sum_{i=1}^n a_i x^i - \sum_{i=1}^n a_i a^i
\nonumber \\
& = \sum_{i=1}^n a_i (x - a) (x^{i-1} + x^{i-2} a + \cdots + a^{i-1})
\nonumber \\
& = (x - a) \sum_{i=1}^n a_i \sum_{j=0}^{i-1} x^{i-1-j} a^j
\end{align*}

Application of the triangle inequality ##|\tilde{\alpha}| = |\tilde{\alpha} - \alpha + \alpha| \leq |\alpha - \tilde{\alpha}| + |\alpha|## together with ##|\alpha - \tilde{\alpha}| \leq 1## implies ##|\tilde{\alpha}| \leq 1 + |\alpha|##. We use this to obtain the inequality:

\begin{align*}
|f (\alpha) - f (\tilde{\alpha})| & \leq |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i| \sum_{j=0}^{i-1} |\alpha^{i-1-j} \tilde{\alpha}^j |
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i| \sum_{j=0}^{i-1} |\alpha^{i-1}| | \alpha^{-j} (1 + |\alpha|)^j |
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i \alpha^{i-1}| \sum_{j=0}^{i-1} \left( 1 + \frac{1}{|\alpha|} \right)^j
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i \alpha^{i-1}| \frac{\left( 1 + \frac{1}{|\alpha|} \right)^i - 1}{\left( 1 + \frac{1}{|\alpha|} \right) - 1}
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i| \left( (|\alpha| + 1)^i - |\alpha|^i \right)
\nonumber \\
& = |\alpha - \tilde{\alpha}| C_\alpha
\end{align*}

where ##C_\alpha = \sum_{i=1}^n |a_i| \left( (|\alpha| + 1)^i - |\alpha|^i \right) > 0##. We have

\begin{align*}
|f ( \tilde{\alpha} )| & = \left|\frac{a_0 q^n + a_1 q^{n-1} p + \cdots + a_n p^n}{q^n} \right| \geq \frac{1}{q^n}
\end{align*}

as the numerator is a nonzero integer, and so

\begin{align*}
|\alpha - \tilde{\alpha}| & \geq \frac{1}{C_\alpha} |f ( \tilde{\alpha} )| \geq \frac{1}{C_\alpha} \frac{1}{q^n}
\end{align*}

Case (b) ##\tilde{\alpha} \not\in \{ r_1 , \dots , r_k \}## and ##|\alpha - \tilde{\alpha}| > 1##.

\begin{align*}
|\alpha - \tilde{\alpha}| & > 1 \geq \frac{1}{q^n}
\end{align*}

Case (c) ##\alpha \not\in \{ r_1 , \dots , r_k \}##, ##\tilde{\alpha} \in \{ r_1 , \dots , r_k \}##.

Choose ##C_r = \min_i | \alpha - r_i| > 0##. Then

\begin{align*}
|\alpha - \tilde{\alpha}| & \geq C_r \geq \frac{C_r}{q^n}
\end{align*}

(If there are no rational roots, ignore case (c)).

Case (d) Say ##\alpha \in \{ r_1 , \dots , r_k \}## and ##\tilde{\alpha} \in \{ r_1 , \dots , r_k \}## (in which case ##k > 1##). Set ##\overline{C}_r = \min_{i \not= j}|r_i - r_j| > 0##,

\begin{align*}
|\alpha - \tilde{\alpha}| & \geq \overline{C}_r \geq \overline{C}_r \frac{1}{q^n}
\end{align*}

(If there are no rational roots, ignore case (d)).

Finally, we can write

\begin{align*}
c = \min \left\{
\begin{matrix}
\frac{1}{C_\alpha} & : |\alpha - \frac{p}{q}| \leq 1, \quad \frac{p}{q} \not\in \{ r_1 , \dots , r_k \} \\
1 & : |\alpha - \frac{p}{q}| > 1, \quad \frac{p}{q} \not\in \{ r_1 , \dots , r_k \} \\
C_r & : \alpha \not\in \{ r_1 , \dots , r_k \} , \quad \frac{p}{q} \in \{ r_1 , \dots , r_k \} \\
\overline{C}_r & : \alpha , \frac{p}{q} \in \{ r_1 , \dots , r_k \}
\end{matrix}
\right.
\end{align*}

then

\begin{align*}
\left| \alpha - \frac{p}{q} \right| & \geq \frac{c}{q^n}
\end{align*}

where ##c > 0##.
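
A quick numerical illustration (a sketch assuming plain Python; the choice ##\alpha=\sqrt 2##, ##n=2##, and the search bound are assumptions made only for the demonstration):

Code:
# For alpha = sqrt(2) (algebraic of degree n = 2), q^n * |alpha - p/q| should be
# bounded away from 0 over all rationals p/q; take the best p for each q.
import math

alpha = math.sqrt(2)
worst = min(q**2 * abs(alpha - round(alpha * q) / q) for q in range(1, 100001))
print(worst)   # ~0.3431 (at q = 2): bounded below by a positive constant, as claimed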
 
  • #105
julian said:
I may have done it, must go to bed now:

The ##\alpha## is the root of an ##n-##th order polynomial with integer coefficients:

\begin{align*}
f (x) = \sum_{i=0}^n a_i x^i
\end{align*}

Put

\begin{align*}
\tilde{\alpha} = \frac{p}{q} .
\end{align*}

If ##f (x)## has rational roots then denote them as ##\{ r_1 , \dots , r_k \}##.

Case (a) ##\tilde{\alpha} \not\in \{ r_1 , \dots , r_k \}## and ##|\alpha - \tilde{\alpha}| \leq 1##.

We have the identity

\begin{align*}
f (x) & = \sum_{i=1}^n a_i x^i - \sum_{i=1}^n a_i a^i
\nonumber \\
& = \sum_{i=1}^n a_i (x - a) (x^{i-1} + x^{i-2} a + \cdots + a^{i-1})
\nonumber \\
& = (x - a) \sum_{i=1}^n a_i \sum_{j=0}^{i-1} x^{i-1-j} a^j
\end{align*}
The ##a## on the right-hand side should be an ##\alpha .##
julian said:
Application of the triangle inequality ##|\tilde{\alpha}| = |\tilde{\alpha} - \alpha + \alpha| \leq |\alpha - \tilde{\alpha}| + |\alpha|## together with ##|\alpha - \tilde{\alpha}| \leq 1## implies ##|\tilde{\alpha}| \leq 1 + |\alpha|##. We use this to obtain the inequality:

\begin{align*}
|f (\alpha) - f (\tilde{\alpha})| & \leq |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i| \sum_{j=0}^{i-1} |\alpha^{i-1-j} \tilde{\alpha}^j |
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i| \sum_{j=0}^{i-1} |\alpha^{i-1}| | \alpha^{-j} (1 + |\alpha|)^j |
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i \alpha^{i-1}| \sum_{j=0}^{i-1} \left( 1 + \frac{1}{|\alpha|} \right)^j
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i \alpha^{i-1}| \frac{\left( 1 + \frac{1}{|\alpha|} \right)^i - 1}{\left( 1 + \frac{1}{|\alpha|} \right) - 1}
\nonumber \\
& = |\alpha - \tilde{\alpha}| \sum_{i=1}^n |a_i| \left( (|\alpha| + 1)^i - |\alpha|^i \right)
\nonumber \\
& = |\alpha - \tilde{\alpha}| C_\alpha
\end{align*}
The first equality sign has to be less or equal.
julian said:
where ##C_\alpha = \sum_{i=1}^n |a_i| \left( (|\alpha| + 1)^i - |\alpha|^i \right) > 0##. We have

\begin{align*}
|f ( \tilde{\alpha} )| & = \left|\frac{a_0 q^n + a_1 q^{n-1} p + \cdots + a_n p^n}{q^n} \right| \geq \frac{1}{q^n}
\end{align*}

as the numerator is a nonzero integer, and so

\begin{align*}
|\alpha - \tilde{\alpha}| & \geq \frac{1}{C_\alpha} |f ( \tilde{\alpha} )| \geq \frac{1}{C_\alpha} \frac{1}{q^n}
\end{align*}

Case (b) ##\tilde{\alpha} \not\in \{ r_1 , \dots , r_k \}## and ##|\alpha - \tilde{\alpha}| > 1##.

\begin{align*}
|\alpha - \tilde{\alpha}| & > 1 \geq \frac{1}{q^n}
\end{align*}
Sloppy, since ##q^n < 0## isn't ruled out and ##c## cannot swallow the sign.
julian said:
Case (c) ##\alpha \not\in \{ r_1 , \dots , r_k \}##, ##\tilde{\alpha} \in \{ r_1 , \dots , r_k \}##.
I assume this is meant to be the other way around since we already covered all cases of ##\alpha \not\in \{ r_1 , \dots , r_k \}.##

The good news is that it is irrelevant because we may assume ##f(x)## to be irreducible over ##\mathbb{Q}##.
julian said:
Choose ##C_r = \min_i | \alpha - r_i| > 0##. Then

\begin{align*}
|\alpha - \tilde{\alpha}| & \geq C_r \geq \frac{C_r}{q^n}
\end{align*}

(If there are no rational roots, ignore case (c)).

Case (d) Say ##\alpha \in \{ r_1 , \dots , r_k \}## and ##\tilde{\alpha} \in \{ r_1 , \dots , r_k \}## (in which case ##k > 1##). Set ##\overline{C}_r = \min_{i \not= j}|r_i - r_j| > 0##,

\begin{align*}
|\alpha - \tilde{\alpha}| & \geq \overline{C}_r \geq \overline{C}_r \frac{1}{q^n}
\end{align*}

(If there are no rational roots, ignore case (d)).

Finally, we can write

\begin{align*}
c = \min \left\{
\begin{matrix}
\frac{1}{C_\alpha} & : |\alpha - \frac{p}{q}| \leq 1, \quad \frac{p}{q} \not\in \{ r_1 , \dots , r_k \} \\
1 & : |\alpha - \frac{p}{q}| > 1, \quad \frac{p}{q} \not\in \{ r_1 , \dots , r_k \} \\
C_r & : \alpha \not\in \{ r_1 , \dots , r_k \} , \quad \frac{p}{q} \in \{ r_1 , \dots , r_k \} \\
\overline{C}_r & : \alpha , \frac{p}{q} \in \{ r_1 , \dots , r_k \}
\end{matrix}
\right.
\end{align*}

then

\begin{align*}
\left| \alpha - \frac{p}{q} \right| & \geq \frac{c}{q^n}
\end{align*}

where ##c > 0##.
It would have been shorter to write ##f(x)=(x-\alpha )g(x)\in \mathbb{C}[x]## and then discuss the neighborhood of ##\alpha ## by continuity of ##g(x)##.
 
  • #106
fresh_42 said:
The ##a## on the right-hand side should be an ##\alpha .##
I meant to write ##f(x) - f(a)## on the LHS. I was simply stating an identity; I make the appropriate replacements for ##x## and ##a## in the next part.
fresh_42 said:
The first equality sign has to be less or equal.
Typo.
fresh_42 said:
I assume this is meant to be the other way around since we already covered all cases of ##\alpha \not\in \{ r_1 , \dots , r_k \}.##

The good news is that it is irrelevant because we may assume ##f(x)## to be irreducible over ##\mathbb{Q}##.
I thought I had to consider ##\tilde{\alpha} \in \{ r_1 , \dots , r_k \}## separately, as the argument in case (a) used that ##f(\tilde{\alpha}) \not= 0##. But of course I didn't have to do this because, as you alluded to, if ##p/q## were a root then you could factor out ##x - p/q## from ##f (x)##, and ##\alpha## would satisfy a polynomial with rational coefficients whose degree is less than ##n##.
 
  • #107
@julian, your proof was fine. My remarks were mainly meant to assure you that I had actually read your proof, not to criticize it.
 
Likes: graphking
  • #108
julian said:
[proof for problem #7 quoted in full; see post #104]
Yeah, that was what I meant! But as @fresh_42 has pointed out, ##f(x)## can be taken to be the minimal polynomial (minimal in degree), which makes the proof shorter (e.g. cases (c) and (d) can be dropped). Also, as I said:
graphking said:
Consider the ##p/q## nearest to ##\alpha##, where ##p## is variable and ##q## is fixed.
so case (b) could be avoided. I think this quote captures the first key insight into the problem: we should treat ##p## as variable and ##q## as fixed.

By the way, @fresh_42, you mentioned your own way of doing it:
fresh_42 said:
It would have been shorter to write ##f(x)=(x-\alpha )g(x)\in \mathbb{C}[x]## and then discuss the neighborhood of ##\alpha## by continuity of ##g(x)##.
It is really short, indeed. But what if ##f(x)## contains the factor ##(x-\alpha)## more than once?
 
  • #109
graphking said:
It is really short, indeed. But what if ##f(x)## contains the factor ##(x-\alpha )## more than once?
Field extensions of fields with characteristic zero are separable, i.e. such a case cannot occur. However, as far as I could see, the proof doesn't change if we split ##f(x)=(x-\alpha )^kg(x).##
 
Likes: graphking
  • #110
Hi @graphking. I figured case (b) was needed because the question asked for a ##c > 0## for all rational numbers ##\not= \alpha##. In the combined cases (a) and (b) ##p## and ##q## are any integers you want them to be (as long as ##p/q \not= \alpha##) and I was able to eliminate ##p## from the determination of such a ##c## (assuming ##p/q \not= \alpha##).
 
  • #111
I mean that the inequality we need to prove does not depend on ##p##, so ##p## can vary while ##q## stays fixed. It is enough to prove the inequality for the ##p/q## nearest to ##\alpha## for that particular ##q##, because for any other ##p## the distance is larger, so the inequality holds whenever it holds in the nearest case.
 
  • #112
Q7 can be used to find transcendental numbers. I saw in Zorich's Analysis, in an exercise in chapter 2.2, the Liouville theorem: any irrational number that can be "well approximated" by rational numbers is a transcendental number, where "well approximated" means:
we call an irrational number ##\alpha## well approximated by rational numbers when for any natural numbers ##n, N## there exists a rational number ##p/q## satisfying ##\left|\alpha-\dfrac{p}{q}\right|<\dfrac{1}{N q^n}##.
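
A classical example fitting this criterion (a well-known one, added here only as an illustration) is the Liouville constant
$$
L=\sum_{k=1}^{\infty}10^{-k!}=0.110001000000000000000001\ldots
$$
Truncating the sum after ##m## terms gives a rational ##\dfrac{p}{q}## with ##q=10^{m!}## and ##\left|L-\dfrac{p}{q}\right|<2\cdot 10^{-(m+1)!}=\dfrac{2}{q^{\,m+1}}\,,## which is smaller than ##\dfrac{1}{Nq^{n}}## once ##m## is large enough, so ##L## is transcendental by this criterion.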
 
Likes: fresh_42
  • #113
If someone wants to complete the proof, i.e. find a proof where the choice of ##c## is independent of ##p,q## as required (##\forall \,\alpha \,\exists\, c \,\forall \,p,q##) instead of (##\forall \,\alpha \, \forall \,p,q\,\exists\, c##), which I think is what @julian has proven, you can either use my hint at the end of post #105 or investigate the problem along the lines of the cases ##\alpha\in\mathbb{R}## and ##\alpha =a+ib \not\in \mathbb{R}##, using the MVT.
 
  • #114
So @Fred Wright got for problem 1:

\begin{align*}
I = \Gamma (\frac{1}{3}) \Gamma (\frac{2}{3}) \sum_{n=0}^\infty (-1)^n \frac{x^{3n}}{(3n)!}
\end{align*}

where ##x = 3 \lambda##, which I'm getting as well. And he found that

\begin{align*}
\sum_{n=0}^\infty (-1)^n \frac{x^{3n}}{(3n)!} = \frac{2}{3} e^{\frac{x}{2}} \cos \left( \frac{\sqrt{3} x}{2} \right) + \frac{1}{3} e^{-x}
\end{align*}

@fresh_42 said the cosine function looks wrong, but I'm getting the same result as @Fred Wright.

@Fred Wright's answer can be simplified slightly by using the multiplication theorem:

\begin{align*}
\Gamma (z) \Gamma \left( z + \frac{1}{n} \right) \Gamma \left( z + \frac{2}{n} \right) \cdots \Gamma \left( z + \frac{n-1}{n} \right) = (2 \pi)^{(n-1)/2} n^{1/2 - nz} \Gamma (nz)
\end{align*}

From which we have

\begin{align*}
\Gamma \left( \frac{1}{3} \right) \Gamma \left( \frac{2}{3} \right)
&\ = \lim_{z \rightarrow 0} \Gamma \left( z + \frac{1}{3} \right) \Gamma \left( z + \frac{2}{3} \right)
\nonumber \\
& = (2 \pi) 3^{1/2} \lim_{z \rightarrow 0} \frac{\Gamma (3z)}{\Gamma (z)}
\nonumber \\
& = (2 \pi) 3^{1/2} \lim_{z \rightarrow 0} \frac{1}{3} \frac{3z \Gamma (3z)}{z \Gamma (z)}
\nonumber \\
& = 2 \pi \sqrt{3} \lim_{z \rightarrow 0} \frac{1}{3} \frac{\Gamma (3z + 1)}{\Gamma (z + 1)}
\nonumber \\
& = \frac{2 \pi}{\sqrt{3}} .
\end{align*}

So @Fred Wright's answer reads:

\begin{align*}
I = \frac{2 \pi}{3 \sqrt{3}} \left[ 2 e^{\frac{3 \lambda}{2}} \cos \left( \frac{3 \sqrt{3} \lambda}{2} \right) + e^{-3 \lambda} \right]
\end{align*}
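
A quick numerical check of the series identity quoted here (a sketch assuming plain Python; it only tests the sum ##\sum_n(-1)^n x^{3n}/(3n)!##, not the original integral, so it does not settle the disagreement about problem 1):

Code:
# Compare partial sums of sum_n (-1)^n x^(3n)/(3n)! with
# (2/3) e^(x/2) cos(sqrt(3) x / 2) + (1/3) e^(-x).
import math

def series(x, terms=20):
    return sum((-1)**n * x**(3*n) / math.factorial(3*n) for n in range(terms))

def closed_form(x):
    return (2/3) * math.exp(x/2) * math.cos(math.sqrt(3) * x / 2) + math.exp(-x) / 3

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, series(x), closed_form(x))   # the two values agree to machine precision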
 
  • #115
julian said:
[post #114 quoted in full; see above]
My answer reads ##\dfrac{2\pi}{\sqrt{3}}\cdot e^{-3\lambda }##.
 
