Math Challenge - July 2019

In summary, we discussed various mathematical problems and their solutions: proving inequalities, finding roots of equations, calculating integrals, and establishing properties of matrices and other mathematical objects. Some problems required calculus and other advanced techniques, while others could be solved with basic concepts. We also discussed the importance of writing in LaTeX for readability.
  • #36
#7
Isn't #7 just the variance of W?
Var(W)=E[W^2]-E^2[W]
E[W]=E^2[W]=0
So
Var(W)=E[W^2]=t
 
Last edited:
  • #37
Well, from science advisors and such experienced members of PF as you, we can certainly expect solutions to problems meant for kids still in school.
 
  • #38
For anyone learning proof writing and presentation skills in mathematics, it's important to remember that a proof never becomes any better by making it look like something esoteric is taking place. Most "normal" people won't understand it anyway, even when it's made "as simple as possible, but no simpler" (a quote attributed to Einstein). When a professional mathematician writes a new long proof of some theorem, it's difficult enough to follow the reasoning even when it contains all the necessary detail and the reader is a professional of equal standing.
 
  • #39
Great, I am not normal :cry:
 
  • #40
Set ##A=1+i, B=1-i, C=-1-i, D=-1+i## as the vertices of the square.
Our circle passes through ##A, D##, and ##\frac{C+B}{2}=M=-i##.
The center lies on the intersection of the y-axis and the perpendicular bisector of ##DM##.
Equating slopes (or using the equations of the lines, whichever you prefer), we get the center ##K= \frac{i}{4}##.
Using the distance formula, we find that the circle intersects the side ##DC## at the point ##L=(-1,-0.5)##.
So the circle divides the side ##DC## in the ratio ##3:1##.
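For anyone who wants to double-check, here is a quick numerical sanity check of the construction (a sketch in Python; the variable names are mine):
[CODE=python]
# Vertices of the square and the midpoint M of BC, as complex numbers
A, B, C, D = 1 + 1j, 1 - 1j, -1 - 1j, -1 + 1j
M = (C + B) / 2                       # = -i

K = 0.25j                             # claimed center i/4
r = abs(K - A)                        # radius, = 5/4

# K is equidistant from A, D and M, so it really is the center
assert abs(abs(K - D) - r) < 1e-12 and abs(abs(K - M) - r) < 1e-12

# Intersection with side DC (the line x = -1), below D
L_pt = -1 - 0.5j
assert abs(abs(K - L_pt) - r) < 1e-12
print(abs(D - L_pt) / abs(L_pt - C))  # 3.0, i.e. the ratio 3:1
[/CODE]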
 
  • Like
Likes fresh_42
  • #41
BWV said:
#7
Isn't #7 just the variance of W?
Var(W)=E[W^2]-E^2[W]
E[W]=E^2[W]=0
So
Var(W)=E[W^2]=t

It is not. Why do you think so?

Note that the question has been changed to calculating ##\mathbb{E}[X]##, which is a lot easier.
 
  • #42
I know P13 was solved, but I like the exercise so I tried something, too.
For every [itex]0<x<1[/itex] it holds that [itex]x(1-x)\leq 1/4[/itex] (consider the vertex of the parabola). Assume [itex]u(1-v) > 1/4[/itex] and [itex]v(1-w) > 1/4[/itex]. We need to show [itex]w(1-u) \leq 1/4[/itex].

If [itex]1-u \leq 1-w[/itex], then
[tex]w(1-u) \leq w(1-w) \leq 1/4.[/tex]

Assume [itex]1-u>1-w[/itex]. From
[tex]
v(1-u) > v(1-w) > 1/4 \geq w(1-w)
[/tex]
we further obtain [itex]u<w<v[/itex]. But now
[tex]
1/4 < u(1-v) < u(1-u) \leq 1/4,
[/tex]
which is impossible. Thus [itex]1-u \leq 1-w[/itex] must hold.
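A brute-force check of the claim never hurts; here is a small Python sketch (assuming, as in the proof above, that [itex]u,v,w \in (0,1)[/itex]):
[CODE=python]
import random

# Claim: u(1-v), v(1-w), w(1-u) cannot all exceed 1/4 for u, v, w in (0,1)
random.seed(0)
for _ in range(100_000):
    u, v, w = random.random(), random.random(), random.random()
    products = (u * (1 - v), v * (1 - w), w * (1 - u))
    # At least one of the three products must be <= 1/4
    assert min(products) <= 0.25, (u, v, w)
print("no counterexample found")
[/CODE]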
 
  • #43
Problem 9
I recognize this problem as the Cholesky decomposition (see "Cholesky decomposition" on Wikipedia).

The proof of uniqueness is by construction, and this construction uses what might be called the Cholesky-Banachiewicz-Crout algorithm. One must find a lower-triangular matrix L for matrix A such that they satisfy ## \mathbf{A} = \mathbf{L} \mathbf{L}^T ##. It is easy to show that taking the transpose of both sides yields this equation again. Here is that algorithm, for A of size n×n:

For j = 1 to n do:
$$ L_{jj} = \sqrt{ A_{jj} - \sum_{k=1}^{j-1} L_{jk}^2 } $$
For i = (j+1) to n do:
$$ L_{ij} = \frac{1}{L_{jj}} \left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk} \right) $$
Next i
Next j

Each new L component depends only on an A component and on previously-calculated L components, so after one pass, the calculation is complete.

I must also calculate L for
$$ A = \begin{pmatrix} 4 & 2 & 4 & 4 \\ 2 & 10 & 17 & 11 \\ 4 & 17 & 33 & 29 \\ 4 & 11 & 29 & 39 \end{pmatrix} $$

I used Mathematica's CholeskyDecomposition[] function and took the transpose of the result, since that function calculates an upper-triangular matrix. I then verified that the result is correct. It is
$$ L = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 1 & 3 & 0 & 0 \\ 2 & 5 & 2 & 0 \\ 2 & 3 & 5 & 1 \end{pmatrix} $$
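For what it's worth, here is a minimal Python/NumPy sketch of the algorithm above (the function name is mine); it reproduces this L:
[CODE=python]
import numpy as np

def cholesky_lower(A):
    """Build L column by column so that A = L @ L.T, per the algorithm above."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: L_jj = sqrt(A_jj - sum_k L_jk^2)
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        # Entries below the diagonal in column j
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

A = np.array([[4, 2, 4, 4],
              [2, 10, 17, 11],
              [4, 17, 33, 29],
              [4, 11, 29, 39]], dtype=float)
L = cholesky_lower(A)
print(L)                           # matches the matrix above
print(np.allclose(L @ L.T, A))     # True
[/CODE]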
 
  • #44
Problem 8, partial solution
It is necessary to find the Galois group for the splitting field of ##x^4 - 2x^2 - 2## over Q.

The roots of that equation are ##\pm \sqrt{1 \pm \sqrt{3} } ##. Their four symmetry operations are (identity), (reverse outer square-root sign), (reverse inner square-root sign), and (reverse both square-root signs). The group of these operations is rather obviously ##Z_2 \times Z_2##.
 
  • #45
lpetrich said:
Problem 8, partial solution
It is necessary to find the Galois group for the splitting field of ##x^4 - 2x^2 - 2## over Q.

The roots of that equation are ##\pm \sqrt{1 \pm \sqrt{3} } ##. Their four symmetry operations are (identity), (reverse outer square-root sign), (reverse inner square-root sign), and (reverse both square-root signs). The group of these operations is rather obviously ##Z_2 \times Z_2##.

You are right about the roots, but that's about it. The Galois group is not the Klein four-group, nor does it have order 4.
 
  • #46
Hi! I just came across these questions and wondered if it would be okay to send you some math questions for the August 2019 math challenge!
 
  • #47
Not really my area, but here goes nothing. This is based on the characterisation of Brownian motion under the section titled "Mathematics".
Put [itex]X := \int _0^t W_s^2 ds[/itex]. Based on conditions 1 and 4 of the characterisation we have [itex]W_s \sim \mathcal N(0,s)[/itex], where [itex]s\geq 0[/itex]. Therefore
[tex]
s = \mbox{var}(W_s) = \mathbb E (W_s - \mathbb EW_s)^2 = \mathbb E(W_s^2).
[/tex]
Fubini allows us to change the order of integration, so we get
[tex]
\mathbb EX = \mathbb E \left ( \int _0^t W_s^2ds\right ) = \int _0^t \mathbb E(W_s^2)ds = \int_0^t sds = \frac{t^2}{2}
[/tex]
Initially, P7 asked for the variance, which turns out to be
[tex]
\mbox{var}(X) = \mathbb E (X - \mathbb EX)^2 = \mathbb E\left (X - \frac{t^2}{2}\right )^2 = \mathbb E(X^2) - \frac{t^4}{4}
[/tex]
So we need to calculate the second moment of [itex]X[/itex]. I'm not sure at the moment what's happening here.
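To make this more tangible, a quick Monte Carlo sanity check of [itex]\mathbb E X = t^2/2[/itex] in Python (a sketch only; the step count, number of paths, and seed are arbitrary):
[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
t, n_steps, n_paths = 2.0, 500, 5_000
dt = t / n_steps

# Simulate Brownian paths; approximate X = int_0^t W_s^2 ds by a Riemann sum
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
X = (W ** 2).sum(axis=1) * dt

print(X.mean(), t ** 2 / 2)   # both approximately 2.0
[/CODE]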
 
Last edited:
  • Like
Likes member 587159
  • #48
nuuskur said:
Not really my area, but here goes nothing. This is based on the characterisation of Brownian motion under the section titled "Mathematics".
Put [itex]X := \int _0^t W_s^2 ds[/itex]. Based on conditions 1 and 4 of the characterisation we have [itex]W_s \sim \mathcal N(0,s)[/itex], where [itex]s\geq 0[/itex]. Therefore
[tex]
s = \mbox{var}(W_s) = \mathbb E (W_s - \mathbb EW_s)^2 = \mathbb E(W_s^2).
[/tex]
Fubini allows us to change the order of integration, so we get
[tex]
\mathbb EX = \mathbb E \left ( \int _0^t W_s^2ds\right ) = \int _0^t \mathbb E(W_s^2)ds = \int_0^t sds = \frac{t^2}{2}
[/tex]
Initially, P7 asked for the variance, which turns out to be
[tex]
\mbox{var}(X) = \mathbb E (X - \mathbb EX)^2 = \mathbb E\left (X - \frac{t^2}{2}\right )^2 = \mathbb E(X^2) - \frac{t^4}{4}
[/tex]
So we need to calculate the second moment of [itex]X[/itex]. I'm not sure at the moment what's happening here.

Correct! It should be noted that it is non-trivial that the map ##(t,\omega) \mapsto W_t(\omega)## is jointly measurable (i.e. measurable w.r.t. the product sigma algebra), which is necessary to apply Fubini. But you may use this without proof.

The calculation of ##\mathbb{E}[X^2]## is harder. If you want to attempt it anyway, here is a hint to get you (or someone else who wants to try) started:

$$X^2 = \left(\int_0^t W_u^2 du\right) \left(\int_0^t W_v^2 dv\right)=\int_0^t \int_0^t W_u^2 W_v^2 dudv$$

Now, taking expectations on both sides you can use Fubini on a triple integral and the problem reduces to finding ##\mathbb{E}[W_u^2 W_v^2]##.
 
Last edited by a moderator:
  • Like
Likes nuuskur
  • #49
It gets weird; there has to be some kind of algebraic trick involved, which I can't think of.
We could try
[tex]
\mathbb E(W_uW_v)^2 = \mbox{var}(W_uW_v) + \mathbb E^2 (W_uW_v)
[/tex]
This brings more complications, though. By linearity we get
[tex]
v\leq u \implies u-v=\mbox{var}(W_u-W_v) = \mathbb E(W_u-W_v)^2 = u - 2\mathbb E(W_uW_v) + v
[/tex]
from which
[tex]
v\leq u \implies \mathbb E(W_uW_v) = v
[/tex]
Not sure how that's helpful, though.
 
  • Like
Likes member 587159
  • #50
nuuskur said:
It gets weird; there has to be some kind of algebraic trick involved, which I can't think of.
We could try
[tex]
\mathbb E(W_uW_v)^2 = \mbox{var}(W_uW_v) + \mathbb E^2 (W_uW_v)
[/tex]
This brings more complications, though. By linearity we get
[tex]
v\leq u \implies u-v=\mbox{var}(W_u-W_v) = \mathbb E(W_u-W_v)^2 = u - 2\mathbb E(W_uW_v) + v
[/tex]
from which
[tex]
v\leq u \implies \mathbb E(W_uW_v) = v
[/tex]
Not sure how that's helpful, though.

Your attempt shows in fact that ##E(W_u W_v) = \min\{u,v\}##, which is correct. But you don't need it here. What you need is ##E(W_u^2 W_v^2)##. Here is a hint:

Suppose ##u \geq v##. Then

$$E(W_u^2 W_v^2) = E[(\{W_u-W_v\}+W_v)^2 W_v^2]$$

and ##W_u- W_v, W_v## are independent variables. What do you know about the expectation of the product of two independent random variables?
 
  • Like
Likes nuuskur
  • #51
Here's what comes to mind.
In the case of independent random variables, [itex]E(XY) = E(X)E(Y)[/itex]. First, expand the expression inside the expected value:
[tex]
\begin{align*}
&[(W_u-W_v)^2 + 2(W_u-W_v)W_v + W_v^2]W_v^2 \\
=&(W_u-W_v)^2W_v^2 + 2(W_uW_v -W_v^2)W_v^2 + W_v^4 \\
=&(W_u-W_v)^2W_v^2 + 2W_uW_v^3 - W_v^4
\end{align*}
[/tex]
To get the MGF of Brownian motion, one computes
[tex]
M_W(x) = \mathbb E(e^{xW_s}) = \mbox{exp}\left ( \frac{1}{2}x^2s\right )
[/tex]
The fourth derivative (w.r.t. ##x##) at [itex]x=0[/itex] gives us the fourth moment, which is (if I calculated correctly) [itex]\mathbb E(W_v^4) = 3v^2[/itex].
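(A quick series check of that moment: expanding the exponential,
[tex]
M_W(x) = \sum_{k\geq 0}\frac{1}{k!}\left(\frac{x^2v}{2}\right)^k = 1 + \frac{v}{2}x^2 + \frac{v^2}{8}x^4 + \dots
[/tex]
so the coefficient of [itex]x^4[/itex] is [itex]v^2/8[/itex] and [itex]\mathbb E(W_v^4) = 4!\cdot\frac{v^2}{8} = 3v^2[/itex], confirming the value above.)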

Since [itex]W_u-W_v, W_v[/itex] are independent, squaring them preserves independence, so
[tex]
\mathbb E(W_u-W_v)^2W_v^2 = \mathbb E(W_u-W_v)^2\mathbb E(W_v^2) = (u-v)v
[/tex]

Not sure what is happening with the middle part, though. Does it vanish?
 
Last edited:
  • #52
nuuskur said:
Here's what comes to mind.
In the case of independent random variables, [itex]E(XY) = E(X)E(Y)[/itex]. First, expand the expression inside the expected value:
[tex]
\begin{align*}
&[(W_u-W_v)^2 + 2(W_u-W_v)W_v + W_v^2]W_v^2 \\
=&(W_u-W_v)^2W_v^2 + 2(W_uW_v -W_v^2)W_v^2 + W_v^4 \\
=&(W_u-W_v)^2W_v^2 + 2W_uW_v^3 - W_v^4
\end{align*}
[/tex]
To get the MGF of Brownian motion, one computes
[tex]
M_W(x) = \mathbb E(e^{xW_s}) = \mbox{exp}\left ( \frac{1}{2}x^2s\right )
[/tex]
The fourth derivative (w.r.t. ##x##) at [itex]x=0[/itex] gives us the fourth moment, which is (if I calculated correctly) [itex]\mathbb E(W_v^4) = 3v^2[/itex].

Since [itex]W_u-W_v, W_v[/itex] are independent, squaring them preserves independence, so
[tex]
\mathbb E(W_u-W_v)^2W_v^2 = \mathbb E(W_u-W_v)^2\mathbb E(W_v^2) = (u-v)v
[/tex]

Not sure what is happening with the middle part, though. Does it vanish?

You don't have to expand the middle term:

$$E(2(W_u-W_v)W_v^3)=0$$

by independence: ##E(W_u-W_v)=0##, and the third moment ##E(W_v^3)## vanishes as well.
 
  • #53
Hmm, does it hold that [itex]X,Y[/itex] independent implies [itex]X, f(Y)[/itex] independent? I think so, now that I think about it. The sigma algebra generated by [itex]f(Y)[/itex] can only be smaller than the one generated by [itex]Y[/itex].
 
  • #54
nuuskur said:
Hmm, does it hold that [itex]X,Y[/itex] independent implies [itex]X, f(Y)[/itex] independent? I think so, now that I think about it. The sigma algebra generated by [itex]f(Y)[/itex] can only be smaller than the one generated by [itex]Y[/itex].

Yes, it does. If ##X,Y## are independent and ##f, g:\mathbb{R} \to \mathbb{R}## are measurable functions, then ##f(X), g(Y)## are independent. This follows from the following simple calculation:

$$P(f(X) \in A, g(Y) \in B) = P(X\in f^{-1}(A), Y\in g^{-1}(B)) = P(X\in f^{-1}(A))P(Y\in g^{-1}(B)) = P(f(X)\in A) P(g(Y)\in B)$$

where ##A,B## are arbitrary Borel sets.
 
  • #55
I may have computed something incorrectly. I get that [itex]\mbox{var}(X)[/itex] is negative.
 
  • #56
nuuskur said:
I may have computed something incorrectly. I get that [itex]\mbox{var}(X)[/itex] is negative.

Well, you didn't include any calculations, so I can't see where you made a mistake.

What expression did you get for ##E(W_u^2 W_v^2)##? Then you have to calculate a double integral and one has to be careful with the bounds there, but nothing too difficult.
 
  • #57
I was assuming
[tex]
\mbox{var}(X) = \mathbb E(X^2) - \frac{t^4}{4} = \int_0^t\int _0^t ((u-v)v - 3v^2) dudv - \frac{t^4}{4}
[/tex]
I have my doubts about the integrand; most likely I computed the fourth moment incorrectly.
 
  • #58
nuuskur said:
I was assuming
[tex]
\mbox{var}(X) = \mathbb E(X^2) - \frac{t^4}{4} = \int_0^t\int _0^t ((u-v)v - 3v^2) dudv - \frac{t^4}{4}
[/tex]
I have my doubts about the integrand; most likely I computed the fourth moment incorrectly.

The integrand should contain a minimum! We assumed ##u \geq v## and got an expression. What happens when ##u \leq v## (use symmetry)? Then find the correct expression for ##E(W_u^2 W_v^2)## that should involve ##\min\{u,v\}## somehow.
 
  • #59
*smacks forehead*
[tex]
u\geq v \implies \mathbb E(W_u-W_v)^2\mathbb E(W_v^2) = (u-v)v,
[/tex]
because the second moment of [itex]W_u-W_v[/itex] is its variance. By symmetry it should hold
[tex]v\geq u \implies \mathbb E(W_u-W_v)^2 = v-u
[/tex]
So the inner integral should work out as follows,
[tex]
\int _0^t \mathbb E(W_u^2W_v^2) du = \int _0^v ((v-u)v - 3v^2)du + \int _v^t ((u-v)v -3v^2)du
[/tex]

Still not right :/ I am braindead, making some elementary mistake.
 
  • #60
nuuskur said:
*smacks forehead*
[tex]
u\geq v \implies \mathbb E(W_u-W_v)^2\mathbb E(W_v^2) = (u-v)v,
[/tex]
because the second moment of [itex]W_u-W_v[/itex] is its variance. By symmetry it should hold
[tex]v\geq u \implies \mathbb E(W_u-W_v)^2 = v-u
[/tex]
So the inner integral should work out as follows,
[tex]
\int _0^t \mathbb E(W_u^2W_v^2) du = \int _0^v ((v-u)v - 3v^2)du + \int _v^t ((u-v)v -3v^2)du
[/tex]

Still not right :/ I am braindead, making some elementary mistake.

I get ##E(W_u^2 W_v^2) = 2 \min \{u^2,v^2\} + u v ##
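A small simulation supports this formula (a Python sketch; the times and sample size are arbitrary):
[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
u, v, n = 0.7, 0.3, 1_000_000          # arbitrary times with u >= v

# Build W_v and W_u = W_v + (independent increment of variance u - v)
Wv = rng.normal(0.0, np.sqrt(v), n)
Wu = Wv + rng.normal(0.0, np.sqrt(u - v), n)

print((Wu**2 * Wv**2).mean())          # ~ 0.39
print(u * v + 2 * min(u, v)**2)        # = 0.39
[/CODE]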
 
  • #61
nuuskur said:
Curious how you arrived at your expression for [itex]\mathbb E(W_u^2W_v^2)[/itex].
Suppose ##u \geq v##. Then
$$E(W_u^2W_v^2) = E[((W_u-W_v) + W_v)^2 W_v^2]$$
$$= E[(W_u-W_v)^2W_v^2] + 2E[(W_u-W_v)W_v^3] + E[W_v^4]$$
$$= E[(W_u-W_v)^2]E[W_v^2] + 2E[W_u-W_v]E[W_v^3] + E[W_v^4]$$
$$= (u-v)v + 3v^2 = uv - v^2 + 3v^2 = uv + 2v^2$$
By symmetry:
$$v \geq u \implies E(W_u^2 W_v^2) = uv + 2u^2$$
Conclusion: $$E(W_u^2W_v^2) = uv + 2 \min\{u^2, v^2\}$$
_____________________________

Just like you, I get that ##E(X) = t^2/2##.

Hence,
$$Var(X) = EX^2 - (EX)^2 = E[X^2] - t^4/4$$
But unlike you I get
$$E[X^2] = 7t^4/12$$
So the end result should be
$$Var(X) = 7t^4/12 - t^4/4 = 7t^4/12 - 3t^4/12 = 4t^4/12 = t^4/3$$
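For the record, the double integral behind that value:
$$E[X^2] = \int_0^t\!\!\int_0^t \left(uv + 2\min\{u^2,v^2\}\right)du\,dv = \frac{t^4}{4} + 4\int_0^t\!\!\int_0^v u^2\,du\,dv = \frac{t^4}{4} + \frac{t^4}{3} = \frac{7t^4}{12}.$$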

Did you maybe make a mistake calculating the integral?
 
Last edited by a moderator:
  • #62
nuuskur said:
By symmetry
[tex]
v\geq u \implies \mathbb E(W_u^2W_v^2) = (v-u)v + 3v^2 = 4v^2 -uv.
[/tex]
What am I missing? :eek:

By symmetry I just mean that we can interchange the roles of ##u,v## in the final expression of the formula. I don't see how you get that. If you don't see it, you can just perform the same calculation that I did but with ##u## and ##v## interchanged and you will get the right answer.

A bit more formally maybe:

I have derived

$$\forall u \geq v \geq 0: E(W_u^2 W_v^2) = uv + 2v^2$$

Now, suppose ##v' \geq u'##. The above formula then implies:

##E(W_{u'}^2 W_{v'}^2) = u'v' + 2u'^2 ##

Now, substitute ##u' = u, v' = v##.
 
  • #63
nuuskur said:
Graah. I am lost again.
[tex]
v\geq u \implies \mathbb E(W_u-W_v)^2 = \mathbb E(W_v-W_u)^2 = v-u,
[/tex]
no?

Yes, that's correct. But why is this relevant? Also note my edit of the previous post.
 
  • #64
Nevermind, now I understand what you are applying symmetry to.
 
  • Like
Likes member 587159
  • #65
nuuskur said:
Nevermind, now I understand what you are applying symmetry to.

Yeah, it can be tricky. I think you learned a lot from this question! Thanks for participating in the harder one!
 
  • Like
Likes nuuskur
  • #66
Uhmm, guys, what am I missing here?
nuuskur said:
Eventually, the variance ought to be
$$\mbox{var}(X) = \mathbb E(X^2) - \frac{t^4}{4} = \frac{7}{12}t^4$$
Math_QED said:
But unlike you I get
$$E[X^2]=7t^4/12$$
P.S.: I just saw that one post has been edited, so maybe that is the reason for these apparently contradictory quotations. If that is the case, please mark edits as such, so that the thread remains readable and contradictions like the one above can be explained to readers.
 
  • #67
Oh boy, here we go...
The roots of [itex]x^4-2x^2-2[/itex] are given by [itex]x^2 = 1\pm\sqrt{3}[/itex]. The polynomial is irreducible.

Put [itex]\alpha := \sqrt{1-\sqrt{3}}, \beta := \sqrt{1+\sqrt{3}}[/itex]. Find
[tex]
[\mathbb Q(\alpha,\beta),\mathbb Q] = [\mathbb Q(\alpha,\beta),\mathbb Q(\beta)] [\mathbb Q(\beta),\mathbb Q] = 2[\mathbb Q(\alpha,\beta),\mathbb Q(\beta)]
[/tex]
where the degree of [itex]\mathbb Q(\beta) /\mathbb Q[/itex] is clearly [itex]2[/itex], since [itex]1,\beta[/itex] are linearly independent over [itex]\mathbb Q[/itex]. It should hold that [itex]1,\alpha[/itex] are a basis of [itex]\mathbb Q(\alpha,\beta)[/itex] over [itex]\mathbb Q(\beta)[/itex], but let's check. Linear independence is clear. Let [itex]k\cdot 1 + l\alpha + m\beta \in\mathbb Q(\alpha,\beta)[/itex], where [itex]k,l,m\in\mathbb Q(\beta)[/itex], then
[tex]
k + l\alpha + m\beta = k + m\beta + l\alpha = (k+m\beta) \cdot 1 + l\alpha
[/tex]
where [itex](k+m\beta)\in\mathbb Q(\beta)[/itex]. So this must mean
[tex]
[\mathbb Q(\alpha,\beta),\mathbb Q] = 4.
[/tex]
But apparently it doesn't agree with #45.

It remains to study the automorphisms, but I'm not sure of anything at the moment.
 
Last edited:
  • #68
nuuskur said:
The polynomial is irreducible.

Why?

nuuskur said:
Put [itex]\alpha := \sqrt{1-\sqrt{3}}, \beta := \sqrt{1+\sqrt{3}}[/itex]. Find
[tex]
[\mathbb Q(\alpha,\beta),\mathbb Q] = [\mathbb Q(\alpha,\beta),\mathbb Q(\beta)] [\mathbb Q(\beta),\mathbb Q] = 2[\mathbb Q(\alpha,\beta),\mathbb Q(\beta)]
[/tex]
where the degree of [itex]\mathbb Q(\beta) /\mathbb Q[/itex] is clearly [itex]2[/itex], since [itex]1,\beta[/itex] are linearly independent over [itex]\mathbb Q[/itex].

Linearly independent, yes, but generating? How would you write ##\beta^2 \in \mathbb{Q}(\beta)## as a ##\mathbb{Q}##-linear combination of ##1,\beta##?

The following can be very useful: consider a field extension ##K/F## with ##\alpha \in K## algebraic over ##F##. Then

$$[F(\alpha): F] = \deg \left(\min_F \alpha\right),$$

where ##\min_F \alpha## denotes the minimal polynomial of ##\alpha## over ##F##.
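For instance, ##[\mathbb{Q}(\sqrt{2}):\mathbb{Q}] = \deg(x^2-2) = 2##.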
 
  • #69
Wait, are we supposed to compute modulo the polynomial? It's irreducible by Eisenstein (take ##p=2##).
So we'd need ##\{1,\beta,\beta^2,\beta^3,\beta^4\}## to span ##\mathbb Q(\beta)##?
 
  • #70
nuuskur said:
Wait, are we supposed to compute modulo the polynomial? It's irreducible by Eisenstein (take ##p=2##).
So we'd need ##\{1,\beta,\beta^2,\beta^3,\beta^4\}## to span ##\mathbb Q(\beta)##?

Yes, irreducible by Eisenstein!

We don't need ##\beta^4##. Please look at the statement I wrote down with the minimal polynomial. What is the minimal polynomial of ##\beta## over ##\mathbb{Q}##?
 
