Math Challenge - July 2019

Summary
The Math Challenge from July 2019 featured a variety of problems, many of which were solved by participants. Key discussions included proving inequalities involving arctangents, demonstrating the uniqueness of real roots in polynomial equations, and finding orthogonal matrices for diagonalization. Participants also explored concepts in calculus, Galois theory, and properties of cyclic subspaces. The thread highlighted collaborative problem-solving and the importance of clear mathematical communication, especially in using LaTeX for presenting solutions. Overall, the discussions emphasized both the complexity and the enjoyment of tackling challenging mathematical problems.
  • #31
Pi-is-3 said:
Oh, come on ...!

No graphics, no handwriting, and above all, no uploads to external servers, please!
https://www.physicsforums.com/help/latexhelp/

Presentation is part of what should be learned with these riddles. Knowing a solution is far from being able to convince others that your solution is one.
 
  • #32
I am having a lot of problems with LaTeX. I am not that bad at LaTeX on AoPS or Stack Exchange, but I am not able to use it properly here :(. For example, the first line of my answer should be this:

By AM-GM

$$ \frac{(u+(1-v)+v+(1-w)+w+(1-u))}{6} \geq
(u(1-v)v(1-w)w(1-u))^{\frac{1}{6}$$

but the LaTeX is not working. What am I doing wrong?
 
  • #33
Pi-is-3 said:
I am having a lot of problems with LaTeX. I am not that bad at LaTeX on AoPS or Stack Exchange, but I am not able to use it properly here :(. For example, the first line of my answer should be this:

By AM-GM

$$ \frac{(u+(1-v)+v+(1-w)+w+(1-u))}{6} \geq
(u(1-v)v(1-w)w(1-u))^{\frac{1}{6}$$

but the LaTeX is not working. What am I doing wrong?

This is due to your laziness. If you copy and paste instead of actually typing it, you also copy hidden control sequences such as colors or fonts. These don't work with the LaTeX here.

If it helps, you can download a scripting program, e.g. AutoHotkey, which allows you to abbreviate certain key sequences with keyboard shortcuts. For example, I have \frac{}{} on Ctrl+F and \ begin{bmatrix} \ end{bmatrix} on Alt+M (without the blanks).
 
  • #34
@fresh_42 Thanks for pointing out my mistake! Now I can LaTeX again 😁
By AM-GM

$$ \frac {(u)+(1-v)+(v)+(1-w)+(w)+(1-u)} {6} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{6}}$$


Implies

$$ \frac {1} {2} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{6}}$$


Squaring both sides (works in this inequality as both sides are positive)

$$ \frac {1} {4} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{3}}$$


Now using GM-HM

$$ \frac {1} {4} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{3}} \geq \frac {3} {\frac{1}{u(1-v)}+\frac{1}{v(1-w)} + \frac{1}{w(1-u)}}$$


Implies

$$ \frac{1}{u(1-v)} + \frac{1}{v(1-w)} + \frac{1}{w(1-u)} \geq 12 $$


W.l.o.g. assume ## u(1-v) \leq v(1-w) \leq w(1-u) ##.

Then ## \frac{3}{u(1-v)} \geq \frac{1}{u(1-v)} + \frac{1}{v(1-w)} + \frac{1}{w(1-u)} \geq 12, ##

which implies $$\frac{1}{4} \geq u(1-v).$$

Hence proved.
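The conclusion can also be spot-checked numerically. The snippet below (an illustrative check, not part of the proof) samples random ##u, v, w \in (0,1)## and verifies that the smallest of the three products never exceeds 1/4:

```python
# Numeric spot-check: for random u, v, w in (0, 1), at least one of
# u(1-v), v(1-w), w(1-u) is at most 1/4, as the proof concludes.
import random

random.seed(1)  # reproducible sampling
for _ in range(100_000):
    u, v, w = random.random(), random.random(), random.random()
    smallest = min(u * (1 - v), v * (1 - w), w * (1 - u))
    assert smallest <= 0.25
print("no counterexample found")
```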
 
  • #35
Pi-is-3 said:
@fresh_42 Thanks for telling my mistake! Now I can latex again 😁
By AM-GM

$$ \frac {(u)+(1-v)+(v)+(1-w)+(w)+(1-u)} {6} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{6}}$$

Implies

$$ \frac {1} {2} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{6}}$$

Squaring both sides (works in this inequality as both sides are positive)

$$ \frac {1} {4} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{3}}$$

Now using GM-HM

$$ \frac {1} {4} \geq [(u)(1-v)(v)(1-w)(w)(1-u)]^{\frac{1}{3}} \geq \frac {3} {\frac{1}{u(1-v)}+\frac{1}{v(1-w)} + \frac{1}{w(1-u)}}$$

Implies

$$ \frac{1}{u(1-v)} + \frac{1}{v(1-w)} + \frac{1}{w(1-u)} \geq 12 $$

W.l.o.g. assume ## u(1-v) \leq v(1-w) \leq w(1-u) ##.

Then ## \frac{3}{u(1-v)} \geq \frac{1}{u(1-v)} + \frac{1}{v(1-w)} + \frac{1}{w(1-u)} \geq 12, ##

which implies $$\frac{1}{4} \geq u(1-v).$$

Hence proved.


Well done, and far better written than your first attempts; I do not mean the LaTeX, I mean the structure of your proof!

You should consider downloading this little helper. It saves so much time that I can almost type at normal speed despite all these special characters. I even use keys for \alpha, \beta, \omega, etc.

If anyone still wants to try: there is another proof using a binomial formula.
 
  • #36
#7
Isn't #7 just the variance of ##W##?
##\mbox{Var}(W) = E[W^2] - E^2[W]##
##E[W] = E^2[W] = 0##
So
##\mbox{Var}(W) = E[W^2] = t##
 
  • #37
Well, from science advisors and members of PF as experienced as you, we can certainly expect the ability to solve problems meant for the kids still in school.
 
  • #38
For anyone learning proof writing and presentation skills in mathematics, it's important to remember that a proof never becomes any better by making it look like something esoteric is taking place. Most "normal" people won't understand it anyway, even when it's made "as simple as possible, but no simpler" (a quote attributed to Einstein). When a professional mathematician writes a new long proof of some theorem, it's difficult enough to follow the reasoning even when it contains all the necessary detail and the reader is a professional of equal standing.
 
  • #39
Great, I am not normal :cry:
 
  • #40
Set ##A=1+i, B=1-i, C=-1-i, D=-1+i## as the vertices of the square.
Our circle passes through ##A##, ##D## and ##M=\frac{B+C}{2}=-i##.
The center lies on the intersection of the y-axis and the perpendicular bisector of ##DM##.
Equating slopes (or using the equations of the lines, as you prefer), we get the center ##K= \frac{i}{4}##.
Using the distance formula, we find that the circle intersects the side ##DC## at the point ##L=(-1,-0.5)##.
So the circle divides the side ##DC## in the ratio ##3:1##.
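The coordinate computation is easy to confirm numerically; the sketch below (an illustrative check using the same vertices and the claimed center ##K = i/4##) recomputes the intersection point and the ratio:

```python
# Verify: the circle through A=(1,1), D=(-1,1), M=(0,-1) cuts side DC in ratio 3:1.
from math import hypot, isclose, sqrt

K = (0.0, 0.25)                      # claimed center i/4 on the y-axis
r = hypot(1 - K[0], 1 - K[1])        # radius = |KA| = 1.25
assert isclose(hypot(0 - K[0], -1 - K[1]), r)   # M lies on the circle too

# Side DC is the segment x = -1, -1 <= y <= 1; intersect it with the circle.
dy = sqrt(r ** 2 - ((-1) - K[0]) ** 2)
y_hits = (K[1] + dy, K[1] - dy)      # y = 1 (vertex D) and y = -0.5 (point L)

L_pt = (-1.0, K[1] - dy)
ratio = abs(1 - L_pt[1]) / abs(L_pt[1] - (-1))   # |DL| / |LC|
print(ratio)   # 3.0
```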
 
  • #41
BWV said:
#7
Isn't #7 just the variance of ##W##?
##\mbox{Var}(W) = E[W^2] - E^2[W]##
##E[W] = E^2[W] = 0##
So
##\mbox{Var}(W) = E[W^2] = t##

It is not. Why do you think so?

Note that the question has been changed to calculating ##\mathbb{E}[X]##, which is a lot easier.
 
  • #42
I know P13 was solved, but I like the exercise, so I tried something too.
For every ##0<x<1## it holds that ##x(1-x)\leq 1/4## (find the vertex of the parabola). Assume ##u(1-v) > 1/4## and ##v(1-w) > 1/4##. We need to show ##w(1-u) \leq 1/4##.

If ##1-u \leq 1-w##, then
$$w(1-u) \leq w(1-w) \leq 1/4.$$

Assume ##1-u>1-w##. From
$$v(1-u) > v(1-w) > 1/4 \geq w(1-w)$$
we further obtain ##u<w<v##. But now
$$1/4 < u(1-v) < u(1-u) \leq 1/4,$$
which is impossible. Thus ##1-u \leq 1-w## must hold.
 
  • #43
Problem 9
I recognize this problem as the Cholesky decomposition (see Wikipedia).

The proof of uniqueness is by construction, and this construction uses what might be called the Cholesky-Banachiewicz-Crout algorithm. One must find a lower-triangular matrix L for matrix A such that they satisfy ## \mathbf{A} = \mathbf{L} \mathbf{L}^T ##. It is easy to show that taking the transpose of both sides yields this equation again. Here is that algorithm, for A of size n*n:

For j = 1 to n do:
$$ L_{jj} = \sqrt{ A_{jj} - \sum_{k=1}^{j-1} L_{jk}^2 } $$
For i = (j+1) to n do:
$$ L_{ij} = \frac{1}{L_{jj}} \left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk} \right) $$
Next i
Next j

Each new L component depends only on an A component and on previously-calculated L components, so after one pass, the calculation is complete.

I must also calculate L for
$$ A = \begin{pmatrix} 4 & 2 & 4 & 4 \\ 2 & 10 & 17 & 11 \\ 4 & 17 & 33 & 29 \\ 4 & 11 & 29 & 39 \end{pmatrix} $$

I used Mathematica's CholeskyDecomposition[] function and took the transpose of the result, since that function computes an upper triangular matrix. I then verified that the result is correct. It is
$$ L = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 1 & 3 & 0 & 0 \\ 2 & 5 & 2 & 0 \\ 2 & 3 & 5 & 1 \end{pmatrix} $$
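The quoted algorithm is short enough to transcribe directly; the following Python sketch (my own transcription, not the original post's code) reproduces the stated L for this A:

```python
# Cholesky-Banachiewicz-Crout: build the lower-triangular L column by column
# so that A = L L^T, following the loop structure given above.
from math import sqrt

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry uses the already-computed entries of row j.
        L[j][j] = sqrt(A[j][j] - sum(L[j][k] ** 2 for k in range(j)))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[4, 2, 4, 4], [2, 10, 17, 11], [4, 17, 33, 29], [4, 11, 29, 39]]
L = cholesky(A)
print(L)   # matches the matrix above: rows (2,0,0,0), (1,3,0,0), (2,5,2,0), (2,3,5,1)
```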
 
  • #44
Problem 8, partial solution
It is necessary to find the Galois group for the splitting field of ##x^4 - 2x^2 - 2## over Q.

The roots of that equation are ##\pm \sqrt{1 \pm \sqrt{3} } ##. Their four symmetry operations are (identity), (reverse outer square-root sign), (reverse inner square-root sign), and (reverse both square-root signs). The group of these operations is rather obviously ##Z_2 \times Z_2##.
 
  • #45
lpetrich said:
Problem 8, partial solution
It is necessary to find the Galois group for the splitting field of ##x^4 - 2x^2 - 2## over Q.

The roots of that equation are ##\pm \sqrt{1 \pm \sqrt{3} } ##. Their four symmetry operations are (identity), (reverse outer square-root sign), (reverse inner square-root sign), and (reverse both square-root signs). The group of these operations is rather obviously ##Z_2 \times Z_2##.

You are right about the roots, but that's about it. The Galois group is neither the Klein four-group, nor does it have order 4.
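A quick symbolic check is consistent with this correction (a sketch only, using sympy's minimal_polynomial; it does not compute the Galois group itself): the polynomial is the degree-4 minimal polynomial of the real root ##\sqrt{1+\sqrt{3}}##, but since ##1-\sqrt{3}<0##, two of the four roots are non-real and so cannot lie in the real field ##\mathbb{Q}(\sqrt{1+\sqrt{3}})##. The splitting field is therefore strictly larger than degree 4, so the Galois group has order greater than 4 (it is in fact of order 8).

```python
# sqrt(1+sqrt(3)) has minimal polynomial x^4 - 2x^2 - 2 over Q, so the field
# Q(sqrt(1+sqrt(3))) has degree 4 -- but it is real, while two roots of the
# polynomial are non-real; hence the splitting field is strictly larger.
from sympy import sqrt, symbols, minimal_polynomial, degree, expand

x = symbols('x')
alpha = sqrt(1 + sqrt(3))            # one of the two real roots
p = minimal_polynomial(alpha, x)
print(p, degree(p, x))               # x**4 - 2*x**2 - 2, degree 4
```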
 
  • #46
Hi! I just came across these questions and wondered if it would be okay to send you some math questions for the August 2019 math challenge!
 
  • #47
Not really my area, but here goes nothing. Based on the characterisation of Brownian motion under the section titled "Mathematics":
Put ##X := \int _0^t W_s^2 \,ds##. By conditions 1 and 4 of the characterisation we have ##W_s \sim \mathcal N(0,s)##, where ##s\geq 0##. Therefore
$$s = \mbox{var}(W_s) = \mathbb E (W_s - \mathbb EW_s)^2 = \mathbb E(W_s^2).$$
Fubini allows us to change the order of integration, so we get
$$\mathbb EX = \mathbb E \left ( \int _0^t W_s^2\,ds\right ) = \int _0^t \mathbb E(W_s^2)\,ds = \int_0^t s\,ds = \frac{t^2}{2}.$$
Initially, P7 asked for the variance, which turns out to be
$$\mbox{var}(X) = \mathbb E (X - \mathbb EX)^2 = \mathbb E\left (X - \frac{t^2}{2}\right )^2 = \mathbb E(X^2) - \frac{t^4}{4}.$$
So we need to calculate the second moment of ##X##. I'm not sure at the moment what's happening here.
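As a numerical aside, the value ##\mathbb EX = t^2/2## can be sanity-checked with a quick Monte Carlo simulation (an illustrative sketch only; path count, step count and seed are arbitrary choices):

```python
# Estimate E[ int_0^t W_s^2 ds ] by simulating Brownian paths and
# approximating the time integral with a Riemann sum.
import numpy as np

rng = np.random.default_rng(0)            # fixed seed for reproducibility
n_paths, n_steps, t = 10_000, 400, 1.0
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # increments ~ N(0, dt)
W = np.cumsum(dW, axis=1)                 # path values W_s at the grid points
X = (W ** 2).sum(axis=1) * dt             # Riemann sum for int_0^t W_s^2 ds

print(X.mean())   # close to t^2 / 2 = 0.5
```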
 
  • #48
nuuskur said:
Not really my area, but here goes nothing. Based on the characterisation of Brownian motion under the section titled "Mathematics":
Put ##X := \int _0^t W_s^2 \,ds##. By conditions 1 and 4 of the characterisation we have ##W_s \sim \mathcal N(0,s)##, where ##s\geq 0##. Therefore
$$s = \mbox{var}(W_s) = \mathbb E (W_s - \mathbb EW_s)^2 = \mathbb E(W_s^2).$$
Fubini allows us to change the order of integration, so we get
$$\mathbb EX = \mathbb E \left ( \int _0^t W_s^2\,ds\right ) = \int _0^t \mathbb E(W_s^2)\,ds = \int_0^t s\,ds = \frac{t^2}{2}.$$
Initially, P7 asked for the variance, which turns out to be
$$\mbox{var}(X) = \mathbb E (X - \mathbb EX)^2 = \mathbb E\left (X - \frac{t^2}{2}\right )^2 = \mathbb E(X^2) - \frac{t^4}{4}.$$
So we need to calculate the second moment of ##X##. I'm not sure at the moment what's happening here.

Correct! It should be noted that it is non-trivial that the map ##(t,\omega) \mapsto W_t(\omega)## is jointly measurable (i.e., measurable w.r.t. the product sigma-algebra), which is necessary to apply Fubini. But you may use this without proof.

The calculation of ##\mathbb{E}[X^2]## is harder. If you want to attempt it anyway, here is a hint to get you (or someone else who wants to try) started:

$$X^2 = \left(\int_0^t W_u^2 du\right) \left(\int_0^t W_v^2 dv\right)=\int_0^t \int_0^t W_u^2 W_v^2 dudv$$

Now, taking expectations on both sides you can use Fubini on a triple integral and the problem reduces to finding ##\mathbb{E}[W_u^2 W_v^2]##.
 
  • #49
It gets weird; there has to be some kind of algebraic trick involved, which I can't think of.
We could try
$$\mathbb E(W_uW_v)^2 = \mbox{var}(W_uW_v) + \mathbb E^2 (W_uW_v).$$
This brings more complications, though. By linearity we get
$$v\leq u \implies u-v=\mbox{var}(W_u-W_v) = \mathbb E(W_u-W_v)^2 = u - 2\mathbb E(W_uW_v) + v,$$
from which
$$v\leq u \implies \mathbb E(W_uW_v) = v.$$
Not sure how that's helpful, though.
 
  • #50
nuuskur said:
It gets weird; there has to be some kind of algebraic trick involved, which I can't think of.
We could try
$$\mathbb E(W_uW_v)^2 = \mbox{var}(W_uW_v) + \mathbb E^2 (W_uW_v).$$
This brings more complications, though. By linearity we get
$$v\leq u \implies u-v=\mbox{var}(W_u-W_v) = \mathbb E(W_u-W_v)^2 = u - 2\mathbb E(W_uW_v) + v,$$
from which
$$v\leq u \implies \mathbb E(W_uW_v) = v.$$
Not sure how that's helpful, though.

Your attempt shows in fact that ##E(W_u W_v) = \min\{u,v\}##, which is correct. But you don't need it here. What you need is ##E(W_u^2 W_v^2)##. Here is a hint:

Suppose ##u \geq v##. Then

$$E(W_u^2 W_v^2) = E[(\{W_u-W_v\}+W_v)^2 W_v^2]$$

and ##W_u- W_v, W_v## are independent variables. What do you know about the expectation of the product of two independent random variables?
 
  • Like
Likes nuuskur
  • #51
Here's what comes to mind
In the case of independent random variables, ##E(XY) = E(X)E(Y)##. First expand the expression inside the expected value:
$$\begin{align*}
&[(W_u-W_v)^2 + 2(W_u-W_v)W_v + W_v^2]W_v^2 \\
=&(W_u-W_v)^2W_v^2 + 2(W_uW_v -W_v^2)W_v^2 + W_v^4 \\
=&(W_u-W_v)^2W_v^2 + 2W_uW_v^3 - W_v^4
\end{align*}$$
To get the mgf of Brownian motion, one computes
$$M_W(x) = \mathbb E(e^{xW_s}) = \exp\left ( \frac{1}{2}x^2s\right ).$$
The fourth derivative (w.r.t. ##x##) at ##x=0## gives us the fourth moment, which is (if I calculated correctly) ##\mathbb E(W_v^4) = 3v^2##.

Since ##W_u-W_v## and ##W_v## are independent, squaring them preserves independence, so
$$\mathbb E[(W_u-W_v)^2W_v^2] = \mathbb E[(W_u-W_v)^2]\,\mathbb E(W_v^2) = (u-v)v.$$

Not sure what is happening with the middle part, though. Does it vanish?
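(As an aside, the fourth-moment claim is easy to double-check symbolically; the following is a small sympy sketch using the same mgf as above:)

```python
# Differentiate the mgf exp(x^2 * v / 2) four times and evaluate at x = 0;
# this yields the fourth moment E(W_v^4) of a N(0, v) variable.
from sympy import symbols, exp, diff, simplify

x, v = symbols('x v', positive=True)
mgf = exp(x ** 2 * v / 2)
fourth_moment = simplify(diff(mgf, x, 4).subs(x, 0))
print(fourth_moment)   # 3*v**2
```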
 
  • #52
nuuskur said:
Here's what comes to mind
In the case of independent random variables, ##E(XY) = E(X)E(Y)##. First expand the expression inside the expected value:
$$\begin{align*}
&[(W_u-W_v)^2 + 2(W_u-W_v)W_v + W_v^2]W_v^2 \\
=&(W_u-W_v)^2W_v^2 + 2(W_uW_v -W_v^2)W_v^2 + W_v^4 \\
=&(W_u-W_v)^2W_v^2 + 2W_uW_v^3 - W_v^4
\end{align*}$$
To get the mgf of Brownian motion, one computes
$$M_W(x) = \mathbb E(e^{xW_s}) = \exp\left ( \frac{1}{2}x^2s\right ).$$
The fourth derivative (w.r.t. ##x##) at ##x=0## gives us the fourth moment, which is (if I calculated correctly) ##\mathbb E(W_v^4) = 3v^2##.

Since ##W_u-W_v## and ##W_v## are independent, squaring them preserves independence, so
$$\mathbb E[(W_u-W_v)^2W_v^2] = \mathbb E[(W_u-W_v)^2]\,\mathbb E(W_v^2) = (u-v)v.$$

Not sure what is happening with the middle part, though. Does it vanish?

You don't have to expand the middle term:

$$E(2(W_u-W_v)W_v^3)=0$$

by independence: ##W_u-W_v## and ##W_v^3## are independent, and the third moment ##E(W_v^3)## is 0.
 
  • #53
Hmm, does it hold that ##X, Y## independent implies ##X, f(Y)## independent? I think so, now that I think about it. The sigma-algebra generated by ##f(Y)## can only be smaller than the one generated by ##Y##.
 
  • #54
nuuskur said:
Hmm, does it hold that ##X, Y## independent implies ##X, f(Y)## independent? I think so, now that I think about it. The sigma-algebra generated by ##f(Y)## can only be smaller than the one generated by ##Y##.

Yes, it does. If ##X,Y## are independent and ##f, g:\mathbb{R} \to \mathbb{R}## are measurable functions, then ##f(X), g(Y)## are independent. This follows from the following simple calculation:

$$P(f(X) \in A, g(Y) \in B) = P(X\in f^{-1}(A), Y\in g^{-1}(B)) = P(X\in f^{-1}(A))P(Y\in g^{-1}(B)) = P(f(X)\in A) P(g(Y)\in B)$$

where ##A,B## are arbitrary Borel sets.
 
  • #55
I may have computed something incorrectly. I get that ##\mbox{var}(X)## is negative.
 
  • #56
nuuskur said:
I may have computed something incorrectly. I get that ##\mbox{var}(X)## is negative.

Well, you didn't include any calculations, so I can't see where you made a mistake.

What expression did you get for ##E(W_u^2 W_v^2)##? Then you have to calculate a double integral, and one has to be careful with the bounds there, but nothing too difficult.
 
  • #57
I was assuming
$$\mbox{var}(X) = \mathbb E(X^2) - \frac{t^4}{4} = \int_0^t\int _0^t ((u-v)v - 3v^2)\, du\, dv - \frac{t^4}{4}.$$
I have my doubts about the integrand; most likely I computed the fourth moment incorrectly.
 
  • #58
nuuskur said:
I was assuming
$$\mbox{var}(X) = \mathbb E(X^2) - \frac{t^4}{4} = \int_0^t\int _0^t ((u-v)v - 3v^2)\, du\, dv - \frac{t^4}{4}.$$
I have my doubts about the integrand; most likely I computed the fourth moment incorrectly.

The integrand should contain a minimum! We assumed ##u \geq v## and got an expression. What happens when ##u \leq v## (use symmetry)? Then find the correct expression for ##E(W_u^2 W_v^2)##, which should involve ##\min\{u,v\}## somehow.
 
  • #59
*smacks forehead*
$$u\geq v \implies \mathbb E(W_u-W_v)^2\,\mathbb E(W_v^2) = (u-v)v,$$
because the second moment of ##W_u-W_v## is its variance. By symmetry it should hold that
$$v\geq u \implies \mathbb E(W_u-W_v)^2 = v-u.$$
So the inner integral should work out as follows:
$$\int _0^t \mathbb E(W_u^2W_v^2)\, du = \int _0^v ((v-u)v - 3v^2)\,du + \int _v^t ((u-v)v -3v^2)\,du$$

Still not right :/ I am braindead, making some elementary mistake.
 
  • #60
nuuskur said:
*smacks forehead*
$$u\geq v \implies \mathbb E(W_u-W_v)^2\,\mathbb E(W_v^2) = (u-v)v,$$
because the second moment of ##W_u-W_v## is its variance. By symmetry it should hold that
$$v\geq u \implies \mathbb E(W_u-W_v)^2 = v-u.$$
So the inner integral should work out as follows:
$$\int _0^t \mathbb E(W_u^2W_v^2)\, du = \int _0^v ((v-u)v - 3v^2)\,du + \int _v^t ((u-v)v -3v^2)\,du$$

Still not right :/ I am braindead, making some elementary mistake.

I get ##E(W_u^2 W_v^2) = 2 \min \{u^2,v^2\} + u v ##
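For completeness, and assuming this expression, the original variance question can be closed by evaluating the double integral (a sketch; the min-integral uses the symmetry between ##u## and ##v##):

$$\mathbb E(X^2) = \int_0^t\int_0^t \left(2\min\{u^2,v^2\} + uv\right)du\,dv = 2\cdot\frac{t^4}{6} + \frac{t^4}{4} = \frac{7t^4}{12},$$

since ##\int_0^t\int_0^t \min\{u^2,v^2\}\,du\,dv = 2\int_0^t\int_0^v u^2\,du\,dv = 2\int_0^t \frac{v^3}{3}\,dv = \frac{t^4}{6}##. Hence

$$\mbox{var}(X) = \mathbb E(X^2) - \left(\frac{t^2}{2}\right)^2 = \frac{7t^4}{12} - \frac{t^4}{4} = \frac{t^4}{3}.$$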
 
