Challenge: Micromass' big September challenge

Summary
The September challenge on Physics Forums invites participants to solve various mathematical problems, emphasizing the need for full derivations or proofs for solutions to be considered valid. Notable problems include demonstrating the impossibility of finding four distinct squares in an arithmetic progression and showing that the roots of a polynomial depend continuously on its coefficients. Participants can use external resources for techniques but cannot directly search for solutions to the posed questions. The challenge encourages collaboration and rigorous proof-based problem-solving among mathematicians and enthusiasts alike.
  • #61
Biker said:
## \lim_{x \rightarrow 0+} { \frac{(2n)(2n-1)...(2n-k+1)x^{-2n+k}}{e^{\frac 1 x}}} ##

So you claim this is what is left after applying L'Hospital ##k## times? Can you prove this by induction?
 
  • #62
micromass said:
So you claim this is what is left after applying L'Hospital ##k## times? Can you prove this by induction?

Is 3 (high school challenges) entirely solved?
 
  • #63
Pretty much yes. The presentation might be a bit lacking, but that's normal for high school. So I consider it solved.
 
  • #64
micromass said:
Pretty much yes. The presentation might be a bit lacking, but that's normal for high school. So I consider it solved.

I don't see a Taylor series, though.
 
  • #65
Like I said, the presentation is lacking. But everything is there. He found all derivatives at ##0##, so finding the Taylor series is completely trivial now.
 
  • Like
Likes Biker
  • #66
micromass said:
So you claim this is what is left after applying L'Hospital ##k## times? Can you prove this by induction?
Base case:
## \frac{x^{-4}}{e^{\frac 1 x }} ##
Applying l'Hôpital's rule twice, or using the formula that I claim to be true, we get:
## \frac{12x^{-2}}{e^{\frac 1 x}} ##

Inductive step:
Assume our formula is true for ##k##. Substituting ##k+1## into the formula, we get
## \frac{(2n)(2n-1)...(2n-k) x^{-2n+k+1}}{e^{\frac 1 x}} ##
so taking the derivative of the numerator and the denominator of the expression after ##k## applications,
## \frac{(2n)(2n-1)...(2n-k+1) x^{-2n+k}}{e^{\frac 1 x }} ##
should yield the above. Taking the derivatives (the minus sign from the exponent cancels the minus sign from differentiating ##e^{\frac 1 x}##), we get:
## \frac{(2n)(2n-1)...(2n-k+1) (2n-k) x^{-2n+k-1}}{e^{\frac 1 x } \frac{1}{x^2}} ##
Taking ## x^2 ## to the top yields:
## \frac{(2n)(2n-1)...(2n-k+1) (2n-k) x^{-2n+k+1}}{e^{\frac 1 x }} ##

P.S. Sorry for the late reply. I was a bit busy.
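For a quick sanity check of the claimed formula, the iteration can be run symbolically and compared with the closed form for small ##n## and ##k##; this is only a sketch using sympy, and the helper names below are illustrative, not part of the argument.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def lhopital_step(P):
    # One l'Hopital step on P(x)/exp(1/x): differentiate top and bottom and
    # fold the factor -1/x**2 coming from d/dx exp(1/x) back into the numerator.
    return sp.expand(-x**2 * sp.diff(P, x))

def claimed_numerator(n, k):
    # (2n)(2n-1)...(2n-k+1) * x**(-2n+k), the numerator claimed after k steps.
    coeff = 1
    for j in range(k):
        coeff *= 2 * n - j
    return coeff * x**(-2 * n + k)

for n in range(1, 5):
    P = x**(-2 * n)
    for k in range(1, 2 * n + 1):
        P = lhopital_step(P)
        assert sp.simplify(P - claimed_numerator(n, k)) == 0
print("closed form matches repeated l'Hopital steps for the tested n, k")
```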
 
  • Like
Likes micromass
  • #67
Biker said:
Base case:
## \frac{x^{-4}}{e^{\frac 1 x }} ##
Applying l'Hôpital's rule twice, or using the formula that I claim to be true, we get:
## \frac{12x^{-2}}{e^{\frac 1 x}} ##

Inductive step:
Assume our formula is true for ##k##. Substituting ##k+1## into the formula, we get
## \frac{(2n)(2n-1)...(2n-k) x^{-2n+k+1}}{e^{\frac 1 x}} ##
so taking the derivative of the numerator and the denominator of the expression after ##k## applications,
## \frac{(2n)(2n-1)...(2n-k+1) x^{-2n+k}}{e^{\frac 1 x }} ##
should yield the above. Taking the derivatives (the minus sign from the exponent cancels the minus sign from differentiating ##e^{\frac 1 x}##), we get:
## \frac{(2n)(2n-1)...(2n-k+1) (2n-k) x^{-2n+k-1}}{e^{\frac 1 x } \frac{1}{x^2}} ##
Taking ## x^2 ## to the top yields:
## \frac{(2n)(2n-1)...(2n-k+1) (2n-k) x^{-2n+k+1}}{e^{\frac 1 x }} ##

P.S. Sorry for the late reply. I was a bit busy.

Right, so what will the Taylor series be?
 
  • #68
Math_QED said:
Is 3 (high school challenges) entirely solved?

You could look for a slightly more indirect proof. What's the crux of the argument?

For example, note that for ##x > 0##:

##f'(x) = \frac{1}{x^2}f(x)##

Can you do anything with that?
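Assuming the function under discussion is ##f(x)=e^{-1/x}## for ##x>0## (that is an assumption here, not stated in the post), the identity can be checked symbolically; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1/x)   # assumed x > 0 branch of the function in question

print(sp.simplify(sp.diff(f, x) - f / x**2))   # prints 0, i.e. f'(x) = f(x)/x^2
```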
 
  • #69
micromass said:
Right, so what will the Taylor series be?
Tbh, Taylor series are a bit advanced for me. All I have taken is Calculus I, which only covers derivatives at the moment; that is why I went with this approach.

But looking at Wikipedia, it seems that, since I proved all the derivatives of f(x) are zero at zero, the Taylor series is zero?

That is why I was a bit hesitant to reply. You could take away part b if you want. Thank you for the awesome question; I literally learned new concepts in calculus.

Anyway, I will keep working on the bells problem in my quest to find a rigorous proof :P
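The claim that every derivative vanishes at ##0## (so that the Maclaurin series is identically zero) can be sanity-checked symbolically, assuming the function in question is ##f(x)=e^{-1/x}## for ##x>0## and ##0## for ##x\le 0##; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1/x)   # assumed: f(x) = e^(-1/x) for x > 0, and f(x) = 0 for x <= 0

# Right-hand limits of the first few derivatives at 0; each one is 0, so every
# Maclaurin coefficient f^(k)(0)/k! vanishes and the Taylor series at 0 is 0.
for k in range(6):
    print(k, sp.limit(sp.diff(f, x, k), x, 0, dir='+'))
```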
 
  • Like
Likes micromass
  • #70
I believe I have a solution to problem 2.

Consider the complex polynomial ##p(z) = a_n z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0##. Then the roots ##r_i## of ##p(z)## are precisely the points of ##\mathbb{C}## such that
$$\oint_{C_\epsilon(r_i)} \frac{p'(z)}{p(z)}\, dz \neq 0 \tag{1}$$
for all ##\epsilon## sufficiently small, where ##C_\epsilon(r_i)## is the circle of radius ##\epsilon## about ##r_i##.

Now, fix ##\epsilon > 0##, and consider another polynomial ##q(z) = b_n z^n + b_{n-1} z^{n-1} + \cdots + b_1 z + b_0##. Take ##\mathbf{a} = (a_0, a_1, \cdots, a_n)##, ##\mathbf{b} = (b_0, b_1, \cdots, b_n)##. Define
$$f(\mathbf{b}) = \oint_{C_\epsilon(r_i)} \left(\frac{p'(z)}{p(z)} - \frac{q'(z)}{q(z)}\right) dz.$$
Now, ##f## is continuous at ##\mathbf{b} = \mathbf{a}##, and ##f(\mathbf{a}) = 0##. Consequently, for all ##\mathbf{b}## sufficiently close to ##\mathbf{a}##, we have that ##|f(\mathbf{b})| = |f(\mathbf{b}) - f(\mathbf{a})| < \pi##.
But, by the argument principle, ##f(\mathbf{b}) = 2\pi i (N_p - N_q)##, where ##N_p## and ##N_q## are the numbers of roots of ##p## and ##q##, respectively, inside ##C_\epsilon(r_i)##. Thus, the above inequality gives us that ##N_p = N_q##, so for all ##\mathbf{b}## sufficiently close to ##\mathbf{a}##, the roots of ##q## lie within a distance ##\epsilon## of the roots of ##p##. Since ##\epsilon## was arbitrary, the roots of a pair of polynomials can be made as close as we desire by making the coefficients of the polynomials sufficiently close. This is equivalent to the result we seek.
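The argument above is a proof, not an algorithm, but the conclusion is easy to illustrate numerically: perturbing the coefficients a little moves the roots only a little. A minimal numpy sketch; the test polynomial and the perturbation sizes are arbitrary choices, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

# p(z) = z^3 - 1, with roots at the three cube roots of unity.
a = np.array([1.0, 0.0, 0.0, -1.0])      # coefficients, highest degree first
base_roots = np.roots(a)

for eps in (1e-2, 1e-4, 1e-6):
    b = a + eps * rng.standard_normal(a.shape)   # slightly perturbed coefficients
    pert_roots = np.roots(b)
    # For each perturbed root, the distance to the nearest original root.
    dist = max(min(abs(r - s) for s in base_roots) for r in pert_roots)
    print(f"coefficient perturbation ~{eps:g}: roots move by at most ~{dist:.1e}")
```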
 
  • Like
Likes micromass and mfb
  • #71
Advanced Problem 7a.

I refer to post #102 here, and use the notation from that post. In the post, a real linear functional ##F## on ##\mathcal B## such that ##\underline\lim x_n\le F(\{x_n\}_{n=1}^\infty)\le\overline\lim x_n## for all ##\{x_n\}_{n=1}^\infty\in\mathcal B## is constructed, and Lemma 2 in the post holds for ##F##.

We will give two real linear operators ##G## and ##G'## on ##\mathcal B##, which both satisfy the conditions of Lemma 2, so that, by that Lemma, ##L=F\circ G## and ##L'=F\circ G'## both are generalized limits. We will also show that ##L\neq L'##, and thus we have two unequal generalized limits.

For ##\{x_n\}_{n=1}^\infty##, we put
##G(\{x_n\}_{n=1}^\infty)=\{y_n\}_{n=1}^\infty##, with ##y_n=\frac1{2^{2n-2}}\sum_{k=2^{2n-2}}^{2^{2n-1}-1}x_k##, and
##G'(\{x_n\}_{n=1}^\infty)=\{y'_n\}_{n=1}^\infty##, with ##y'_n=\frac1{2^{2n-1}}\sum_{k=2^{2n-1}}^{2^{2n}-1}x_k##, for all ##n\in\mathbb Z_+##.

It is easy to see that ##G## and ##G'## are real linear operators on ##\mathcal B##. Also, with ##\{x_n\}_{n=1}^\infty##, ##\{y_n\}_{n=1}^\infty##, and ##\{y'_n\}_{n=1}^\infty## as above, it is easy to see that for all ##N\in \mathbb Z_+##, ##\sup_{n\ge N} y_n\le \sup_{n\ge N} x_n##. Hence, ##\overline\lim y_n\le \overline \lim x_n##. Applying this to ##-\{x_n\}_{n=1}^\infty## and using linearity and ##\underline\lim x_n=-\overline\lim (-x_n)##, and likewise for ##\{y_n\}_{n=1}^\infty##, we obtain ##\underline\lim x_n\le\underline\lim y_n##, so that
##\underline\lim x_n\le\underline\lim y_n\le\overline\lim y_n\le\overline\lim x_n##, and a similar argument gives
##\underline\lim x_n\le\underline\lim y'_n\le\overline\lim y'_n\le\overline\lim x_n##.

Also, let ##u_n=x_{n+1}## for all ##n\in\mathbb Z_+##, and put ##\{z_n\}_{n=1}^\infty=G(\{u_n\}_{n=1}^\infty)## and ##\{z'_n\}_{n=1}^\infty=G'(\{u_n\}_{n=1}^\infty)##.
Then, for ##n\in\mathbb Z_+##: ##z_n-y_n=\frac1{2^{2n-2}}\left(\sum_{k=2^{2n-2}}^{2^{2n-1}-1}x_{k+1}-\sum_{k=2^{2n-2}}^{2^{2n-1}-1}x_k\right)=(x_{2^{2n-1}}-x_{2^{2n-2}})/2^{2n-2}\to 0## as ##n\to \infty##, and a similar calculation gives ##z'_n-y'_n=(x_{2^{2n}}-x_{2^{2n-1}})/2^{2n-1}\to 0## as ##n\to \infty##.
So, ##\lim_{n\to\infty}(z_n-y_n)=0## and ##\lim_{n\to\infty}(z'_n-y'_n)=0##.

All this holds for all ##\{x_n\}_{n=1}^\infty##. This means that both 1) and 2) of Lemma 2 are satisfied for both ##G## and ##G'##. Therefore, Lemma 2 gives that both ##L=F\circ G## and ##L'=F\circ G'## are generalized limits.

We must show that ##L\neq L'##. Put ##x_n=(-1)^{\lfloor\log_2 n\rfloor}##, for all ##n\in \mathbb Z_+##. With ##\{y_n\}_{n=1}^\infty=G(\{x_n\}_{n=1}^\infty)## and ##\{y'_n\}_{n=1}^\infty=G'(\{x_n\}_{n=1}^\infty)##, we have ##y_n=1## and ##y'_n=-1## for all ##n\in\mathbb Z_+##.
Since ##\{y_n\}_{n=1}^\infty## converges and ##\underline\lim y_n\le F(\{y_n\}_{n=1}^\infty)\le\overline\lim y_n##, we have ##L(\{x_n\}_{n=1}^\infty)=F(\{y_n\}_{n=1}^\infty)=\lim_{n\to\infty} y_n=1##. Likewise, ##L'(\{x_n\}_{n=1}^\infty)=F(\{y'_n\}_{n=1}^\infty)=\lim_{n\to\infty} y'_n=-1##.

Hence, ##L(\{x_n\}_{n=1}^\infty)\neq L'(\{x_n\}_{n=1}^\infty)##, so ##L\neq L'##.

We have proved that there exist two unequal generalized limits.
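The key computation (that the two families of block averages are identically ##1## and ##-1## on this sequence) is easy to check numerically; a minimal Python sketch, with illustrative helper names:

```python
def x(n):
    # The witness sequence x_n = (-1)**floor(log2 n); bit_length gives the floor exactly.
    return (-1) ** (n.bit_length() - 1)

def block_average(lo, hi):
    # Average of x_k over the index block lo <= k <= hi.
    return sum(x(k) for k in range(lo, hi + 1)) / (hi - lo + 1)

for n in range(1, 8):
    y  = block_average(2 ** (2 * n - 2), 2 ** (2 * n - 1) - 1)   # the block defining G
    yp = block_average(2 ** (2 * n - 1), 2 ** (2 * n) - 1)       # the block defining G'
    print(n, y, yp)    # prints 1.0 and -1.0 for every n
```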
 
  • Like
Likes micromass
  • #72
Advanced Problem 7d.

The generalized limit ##L## I construct in post #102 here has the desired property.
With the notation of that post, we have ##y_n=\frac1n\sum_{k=1}^n x_k##, so if ##\lim_{n\to\infty} y_n=\lim_{n\to \infty} \frac1n\sum_{k=1}^n x_k## exists and equals ##y##, then, since ##\underline\lim y_n\le F(\{y_n\}_{n=1}^\infty)\le\overline\lim y_n##, we have ##F(\{y_n\}_{n=1}^\infty)=\lim_{n\to \infty} y_n=y##.
Thus, in that case, ##L(\{x_n\}_{n=1}^\infty)=F(\{y_n\}_{n=1}^\infty)=y##.
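For a concrete illustration of this property (my own example, not from the post): the divergent sequence ##x_n=(-1)^n## has Cesàro means converging to ##0##, so this particular generalized limit assigns it the value ##0##. A minimal Python sketch:

```python
def cesaro_mean(n):
    # y_n = (1/n) * sum_{k=1}^{n} x_k for the divergent sequence x_k = (-1)**k.
    return sum((-1) ** k for k in range(1, n + 1)) / n

for n in (10, 100, 1000, 10001):
    print(n, cesaro_mean(n))   # tends to 0, so this L assigns the value 0
```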
 
  • Like
Likes micromass
  • #73
Advanced Problem 7b.

I use nonstandard analysis in the proof of the direct part (##\Rightarrow##). I am working on converting that to a standard proof, but such a standard proof will be long and ugly, while the nonstandard proof is short and pretty. :wink:
I don't use nonstandard analysis in the proof of the converse part (##\Leftarrow##), which is much simpler.

First, some preliminaries.

Let ##\mathcal B## be the space of all bounded real sequences ##\{x_n\}_{n=1}^\infty## (we will drop the sub- and superscripts).
##\mathcal B## is a Banach space with the sup-norm, given by ##\Vert \{x_n\}\Vert_\infty = \sup_{n\in \mathbb Z_+}|x_n|##, for ##\{x_n\}\in\mathcal B##.
Since each generalized limit ##L## is (real) linear and satisfies ##\underline\lim x_n \le L(\{x_n\}) \le \overline\lim x_n## for all ##\{x_n\}\in\mathcal B##, it follows that ##L## is a bounded, hence continuous, linear functional on ##\mathcal B##, with norm ##\Vert L\Vert \le1## in ##\mathcal B^*## (looking at a constant sequence, we in fact see that ##\Vert L\Vert =1##).

Let ##S:\mathcal B \to\mathcal B## be the shift operator, such that if ##u_n=x_{n+1}## for all ##n\in\mathbb Z_+##, then ##\{u_n\}=S(\{x_n\})##, for all ##\{x_n\}## in ##\mathcal B##.
Clearly, ##S## is a linear operator on ##\mathcal B##, and if ##S(\{x_n\})=\{u_n\}##, then
##\underline\lim x_n\le\underline\lim u_n\le\overline\lim u_n\le\overline\lim x_n##.
Thus, ##S## is a bounded linear operator on ##\mathcal B## with norm ##\Vert S\Vert =1## (we get equality by looking at a constant sequence).

For all ##m,p\in \mathbb Z_+##, and ##\{x_n\}\in \mathcal B##, we put ##M_p^m(\{x_n\})=\frac1p\sum_{k=0}^{p-1}x_{m+k}##.
It is easy to see that ##\underline\lim x_n\le M_p^m(\{x_n\})\le \overline\lim x_n##.
For each ##p\in\mathbb Z_+## and ##\{x_n\}\in \mathcal B##, we put ##M_p(\{x_n\})=\{y_n\}##, where ##y_m=M_p^m(\{x_n\})## for all ##m\in \mathbb Z_+##.
It is clear that for all ##p\in\mathbb Z_+##, ##M_p(\{x_n\})\in \mathcal B## for all ##\{x_n\}\in \mathcal B##. Indeed, it is easy to see that if ##M_p(\{x_n\})=\{y_n\}##, then ##\underline\lim x_n\le\underline\lim y_n\le\overline\lim y_n\le\overline\lim x_n##.

Notice that, for all ##p\in \mathbb Z_+##, ##M_p=\frac1p\sum_{k=0}^{p-1}S^{k}##. Thus, ##M_p## is a bounded linear operator on ##\mathcal B##, with (again, by looking at constant sequences) ##\Vert M_p\Vert =1##.

If ##L## is a generalized limit, then ##L\circ S=L##. It follows from this and the linearity of ##L## that ##L\circ M_p=L##, for all ##p\in \mathbb Z_+##.

Now let ##\{x_n\}\in \mathcal B## and ##l\in\mathbb R##.
We must prove that ##L(\{x_n\})=l## for all generalized limits ##L##, if and only if ##M_p^m(\{x_n\})\to l## as ##p\to \infty##, uniformly in ##m\in \mathbb Z_+##.

Assume first that ##L(\{x_n\})=l## for all generalized limits ##L##.

We work in a nonstandard extension of a superstructure containing ##\mathbb R##, with the transform denoted by ##^*##.

Pick ##K\in\,^*\mathbb Z_+## and ##H\in \,^*\mathbb Z_+\setminus\mathbb Z_+##.
For each ##\{t_n\}\in \mathcal B##, put ##F_H^K(\{t_n\})=\text{st}(\,^*M_H^K(\,^*\{t_n\}))## ("st" for "standard part"). (We can consider ##M## as a map assigning the operator ##M_p^m## to ##(m,p)\in(\mathbb Z_+)^2##. Then, we interpret ##\,^*M_H^K## as ##(\,^*M)_H^K##.)
From the corresponding property for all ##M_p^m## (##m,p\in\mathbb Z_+##), we obtain that ##\underline\lim t_n\le F_H^K(\{t_n\})\le \overline\lim t_n## (with ##\underline\lim## and ##\overline\lim## taken over all ##n\in\mathbb Z_+##). This implies that ##F_H^K## is a bounded (standard) functional on ##\mathcal B##, and it is also easy to see that ##F_H^K## is linear.
Also, ##\,^*M_H^K(\,^*(S(\{t_n\})))-\,^*M_H^K(\,^*\{t_n\})=\frac1H(t_{K+H}-t_K)\approx 0##, so ##F_H^K(S(\{t_n\}))=F_H^K(\{t_n\})##, for all ##\{t_n\}\in\mathcal B##.

All this means that ##F_H^K## is a generalized limit. Hence ##F_H^K(\{x_n\})=l##, so ##\,^*M_H^K(\,^*\{x_n\})\approx l##. This holds for all ##K\in\,^*\mathbb Z_+## and ##H\in \,^*\mathbb Z_+\setminus\mathbb Z_+##.
But this latter condition is the nonstandard formulation of our desired conclusion, that is:

##M_p^m(\{x_n\})\to l##, as ##p\to \infty##, uniformly in ##m\in \mathbb Z_+##. This conclusion therefore holds.

Conversely, assume that ##M_p^m(\{x_n\})\to l##, as ##p\to \infty##, uniformly in ##m\in \mathbb Z_+##.
Put ##l_n=l## for all ##n\in\mathbb Z_+##. Our assumption is then equivalent to ##\lim_{p\to\infty} M_p(\{x_n\})=\{l_n\}##, in the sup-norm of ##\mathcal B##. Since each generalized limit ##L## is continuous and ##L\circ M_p=L## for all ##p\in \mathbb Z_+##, it follows that ##L(\{x_n\})=L(M_p(\{x_n\}))\to L(\{l_n\})=l## as ##p\to\infty##.

Hence ## L(\{x_n\})=l##. This holds for all generalized limits ##L##.

We have proved that ##L(\{x_n\})=l## for all generalized limits ##L##, if and only if ##M_p^m(\{x_n\})\to l##, as ##p\to \infty##, uniformly in ##m\in \mathbb Z_+##, which solves the problem.
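The criterion can also be observed numerically on the sequence ##x_n=(-1)^{\lfloor\log_2 n\rfloor}## from Problem 7a, for which two generalized limits disagree: the moving averages ##M_p^m## keep oscillating between values near ##-1## and ##+1## however large ##p## is, so they do not converge uniformly in ##m##. A minimal Python sketch; the ranges of ##p## and ##m## below are arbitrary choices:

```python
def x(n):
    # The sequence from Problem 7a: x_n = (-1)**floor(log2 n).
    return (-1) ** (n.bit_length() - 1)

def M(p, m):
    # The moving average M_p^m({x_n}) = (1/p) * sum_{k=0}^{p-1} x_{m+k}.
    return sum(x(m + k) for k in range(p)) / p

# If L({x_n}) had the same value l for every generalized limit, these averages
# would have to approach l uniformly in m; instead their spread stays close to 2.
for p in (4, 16, 64, 256):
    vals = [M(p, m) for m in range(1, 4 * p)]
    print(p, min(vals), max(vals))
```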
 
  • Like
Likes micromass
  • #74
Advanced Problem 7c.

Put $$x_n=\begin{cases}2^q,\quad \text{if}\quad n=4^q,\quad \text{for some}\quad q\ge 0,\\ 0,\quad\text{otherwise.}\end{cases}$$
Then, with ##s=\lfloor\log_4 n\rfloor##,
$$0\le\frac1n\sum_{k=1}^n x_k\le \frac{2^{s+1}-1}{4^s}<\frac1{2^{s-1}}=\frac1{\sqrt{4^{s-1}}}<\frac1{\sqrt{4^{\log_4 n -2}}}=\frac 4{\sqrt n}\to 0,\quad\text{as}\quad n\to\infty.$$
Hence, ##\lim_{n\to \infty}\frac1n\sum_{k=1}^n x_k=0##.

On the other hand, pick ##p\ge 1##. Put ##r=\lceil\log_2 p\rceil## and ##m=4^r##. Then, since ##4^r+2^r-1<4^{r+1}##,
$$\frac1p\sum_{k=0}^{p-1}x_{m+k}=\frac1p\sum_{k=0}^{2^r-1}x_{m+k}>\frac1{2^r}\sum_{k=0}^{2^r-1}x_{m+k}=\frac{2^r}{2^r}=1.$$
Thus, to each ##p\ge 1## there is an ##m\ge 1## such that ##\frac1p\sum_{k=0}^{p-1}x_{m+k}> 1##.
Therefore, it is not the case that ##\frac1p\sum_{k=0}^{p-1}x_{m+k}\to 0## as ##p\to\infty##, uniformly in ##m\in\mathbb Z_+##.

So, this sequence ##\{x_n\}## satisfies the given conditions.
 
  • Like
Likes micromass
  • #75
Erland said:
Advanced Problem 7c.

Put $$x_n=\begin{cases}2^q,\quad \text{if}\quad n=4^q,\quad \text{for some}\quad q\ge 0,\\ 0,\quad\text{otherwise.}\end{cases}$$
Then, with ##s=\lfloor\log_4 n\rfloor##,
$$0\le\frac1n\sum_{k=1}^n x_k\le \frac{2^{s+1}-1}{4^s}<\frac1{2^{s-1}}=\frac1{\sqrt{4^{s-1}}}<\frac1{\sqrt{4^{\log_4 n -2}}}=\frac 4{\sqrt n}\to 0,\quad\text{as}\quad n\to\infty.$$
Hence, ##\lim_{n\to \infty}\frac1n\sum_{k=1}^n x_k=0##.

On the other hand, pick ##p\ge 1##. Put ##r=\lceil\log_2 p\rceil## and ##m=4^r##. Then, since ##4^r+2^r-1<4^{r+1}##,
$$\frac1p\sum_{k=0}^{p-1}x_{m+k}=\frac1p\sum_{k=0}^{2^r-1}x_{m+k}>\frac1{2^r}\sum_{k=0}^{2^r-1}x_{m+k}=\frac{2^r}{2^r}=1.$$
Thus, to each ##p\ge 1## there is an ##m\ge 1## such that ##\frac1p\sum_{k=0}^{p-1}x_{m+k}> 1##.
Therefore, it is not the case that ##\frac1p\sum_{k=0}^{p-1}x_{m+k}\to 0## as ##p\to\infty##, uniformly in ##m\in\mathbb Z_+##.

So, this sequence ##\{x_n\}## satisfies the given conditions.

Stupid mistake by me. This example doesn't work, because the sequence is not bounded!

It's embarrassing to make such a mistake when I just was appointed to Science Advisor here at PF. :oops:

The example I gave can, however, be modified:

Put $$x_n=\begin{cases}1,\quad \text{if}\quad 4^q\le n<4^q+2^q,\quad \text{for some}\quad q\ge 0,\\ 0,\quad\text{otherwise.}\end{cases}$$
Then, with ##s=\lfloor\log_4 n\rfloor##, since ##4^s+2^s-1<4^{s+1}##,
$$0<\frac1n\sum_{k=1}^n x_k\le \frac1{4^s}\sum_{k=1}^{4^s+2^s-1} x_k=\frac1{4^s}\sum_{q=0}^s 2^q= \frac{2^{s+1}-1}{4^s}<\frac1{2^{s-1}}=\frac1{\sqrt{4^{s-1}}}<\frac1{\sqrt{4^{\log_4 n -2}}}=\frac 4{\sqrt n}\to 0,\quad\text{as}\quad n\to\infty.$$
Hence, ##\lim_{n\to \infty}\frac1n\sum_{k=1}^n x_k=0##.

On the other hand, pick ##N\ge 1##. Put ##r=\lceil\log_2 N\rceil##, ##p=2^r##, and ##m=4^r##. Then, since ##p\ge N## and ##4^r+2^r-1<4^{r+1}##,
$$\frac1p\sum_{k=0}^{p-1}x_{m+k}=\frac1{2^r}\sum_{k=0}^{2^r-1}1=1.$$
Thus, to each ##N\ge 1## there is a ##p \ge N## and an ##m\ge 1## such that ##\frac1p\sum_{k=0}^{p-1}x_{m+k}= 1##.
Therefore, it is not the case that ##\frac1p\sum_{k=0}^{p-1}x_{m+k}\to 0## as ##p\to\infty##, uniformly in ##m\in\mathbb Z_+##.

And since this sequence ##\{x_n\}## is bounded, this is a valid solution of the problem... :wink:
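Both properties of the corrected sequence are easy to confirm numerically: the Cesàro means tend to ##0##, while the window averages with ##p=2^r##, ##m=4^r## equal ##1## for every ##r##. A minimal Python sketch (the checkpoints used below are arbitrary):

```python
def x(n):
    # x_n = 1 if 4**q <= n < 4**q + 2**q for some q >= 0, else 0.
    q = 0
    while 4 ** q <= n:
        if n < 4 ** q + 2 ** q:
            return 1
        q += 1
    return 0

# The Cesaro means (1/n) * sum_{k=1}^n x_k tend to 0 ...
total = 0
for k in range(1, 4 ** 7 + 1):
    total += x(k)
    if k in (4 ** 3, 4 ** 5, 4 ** 7):
        print("Cesaro mean at n =", k, ":", total / k)

# ... but the window averages with p = 2**r, m = 4**r equal 1 for every r,
# so the convergence cannot be uniform in m.
for r in (3, 5, 7):
    p, m = 2 ** r, 4 ** r
    print("window average, r =", r, ":", sum(x(m + k) for k in range(p)) / p)
```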
 
  • Like
Likes micromass
  • #76
Birds on a wire: Instead of taking the limit of bird number to infinity, we can also let the length of the wire go to infinity while keeping the bird density constant. The density is arbitrary, set it to 1. The problem is equivalent to "for a given arbitrary point, what is the probability that it is painted?" There are two possible ways to get it painted, due to the next bird to the left and due to the next bird to the right. In total we have to consider 4 birds, because we also need the left/right neighbor of those two birds. Let’s call the bird positions, from left to right, A, B, D and E, while our chosen point in the centre is C.
Due to symmetry, “A is closer to B” has the same probability as “D is closer to B”, and the same for bird C. The length between B and D correlates those events, however, so the probability that the wire is painted is not 3/4. We can solve this via an integral.

The length distributions of (AB), (BC), (CD) and (DE) all follow an exponential distribution with expectation value 1. Note that the expected length (BD) between two birds is now 2. This is the difference between “we pick a random bird interval” and “we pick a bird interval covering a random point” - where longer distances are overrepresented.

The sum of two exponential distributions is an Erlang distribution ##f(x;k,\lambda)=\frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}##, here k=2 and ##\lambda=2##, which simplifies to ##f(x;2,2)=4xe^{-2x}##. For a given length x of (BD), the probability that our point C is not painted is ##P(x) = \left(1-e^{-x}\right)^2##, as ##1-e^{-x}## is the probability to have a bird separation smaller than x, and we need two of those smaller bird separations.

The total probability is then given by integrating the Erlang function with this additional factor:
$$p_{clean}= \int_0^\infty 4 x e^{-2x } \left(1-e^{-x}\right)^2 dx$$
Collecting all the terms is a task for WolframAlpha; the result is 13/36, about 0.361.

The fraction of painted wire is then 23/36, about 0.639.
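The integral as stated can be checked symbolically; whether the underlying model is right is a separate question, taken up in the reply below. A minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The integral stated above: int_0^oo 4 x e^{-2x} (1 - e^{-x})^2 dx.
p_clean = sp.integrate(4 * x * sp.exp(-2 * x) * (1 - sp.exp(-x)) ** 2, (x, 0, sp.oo))
print(p_clean, float(p_clean))        # 13/36, about 0.361
print(1 - p_clean)                    # 23/36, the painted fraction claimed above
```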
 
  • #77
In my proof of Advanced Problem 7b (Post #73 here), I used nonstandard analysis in the proof of the direct part (##\Rightarrow##). But it could be instructive to also have a "standard" proof, without nonstandard analysis, so I converted the nonstandard proof to a standard proof.

We will rely on the notation and preliminaries in Post #73, and also on Post #102 here. In particular, we will use the ultrafilter ##\mathcal U##, the functional ##F##, and Lemma 2 in this latter post.

Let ##\{x_n\}\in\mathcal B## and ##l\in\mathbb R##. Assume that it is not the case that ##M_p^m(\{x_n\})\to l## as ##p\to\infty##, uniformly in ##m\in\mathbb Z_+##. We will prove that there exists a generalized limit ##L## such that ##L(\{x_n\})\neq l##.

By our assumption, there exists an ##\epsilon>0## such that there exists to each ##N\in\mathbb Z_+## a pair ##(m_N,p_N)\in(\mathbb Z_+)^2## with ##p_N\ge N##, such that ##|M_{p_N}^{m_N}(\{x_n\})-l|\ge\epsilon##.

We now define an operator ##G:\mathcal B\to\mathcal B## by ##G(\{t_n\})=\{s_N\}##, for all ##\{t_n\}\in\mathcal B##, where ##s_N=M_{p_N}^{m_N}(\{t_n\})## for all ##N\in \mathbb Z_+##.
(Notice that we used the axiom of choice in the definition of ##G##.) It is clear that, for all ##\{t_n\}\in\mathcal B##, ##\{s_N\}=G(\{t_n\})\in \mathcal B##, with ##\Vert\{s_N\}\Vert_\infty\le \Vert \{t_n\}\Vert_\infty##. Also, it is clear that ##G## is linear.

We will show that for ##\{t_n\}## and ##\{s_N\}## as above: ##\underline\lim t_n\le\underline\lim s_N\le \overline\lim s_N\le\overline\lim t_n##.

Put ##t=\overline\lim t_n##. Pick ##\delta>0##. Then, there is a ##K_1\in\mathbb Z_+## such that ##\sup_{n \ge K_1} t_n < t+\delta/4##. Choose ##K\in\mathbb Z_+## such that ##K>\max(K_1,\frac{4K_1\|\{t_n\}\|_\infty}\delta)##. Then, for ##N\ge K##, since ##p_N\ge N##:

##s_N=M_{p_N}^{m_N}(\{t_n\})=\frac1{p_N}\sum_{k=0}^{K_1-1}t_{m_N+k}+\frac1{p_N}\sum_{k=K_1}^{p_N-1}t_{m_N+k}\le\frac{K_1\|\{t_n\}\|_\infty}K+\frac{(p_N-K_1)(t+\delta/4)}{p_N}##
##<\frac{K_1\|\{t_n\}\|_\infty}K+(t+\delta/4)+\frac{K_1 \|\{t_n\}\|_\infty}{p_N}+\frac{K_1\,\delta/4}{p_N}\le\delta/4+(t+\delta/4)+\delta/4+\delta/4=t+\delta##.
Thus, ##s_N<t+\delta## if ##N\ge K##. There is such a ##K## for each ##\delta>0##. This means that ##\overline\lim_N s_N\le \overline\lim_n t_n##. By looking at ##-\{t_n\}## and using linearity, we obtain the corresponding result for ##\underline\lim##, so that ##\underline\lim t_n\le\underline\lim s_N\le \overline\lim s_N\le\overline\lim t_n##.

Also, if we put ##\{s_N\}=G(\{t_n\})## and ##\{z_N\}=G(S(\{t_n\}))##, for ##\{t_n\}\in \mathcal B##, then ##|z_N-s_N|=|M_{p_N}^{m_N}(S(\{t_n\}))-M_{p_N}^{m_N}(\{t_n\})|=\frac1{p_N}|t_{m_N+p_N}-t_{m_N}|\le \frac{2\|\{t_n\}\|_\infty}{p_N}\to 0## as ##N\to\infty##.
Thus, ##\lim_{N\to\infty}(z_N-s_N)=0##.

All this means that the operator ##G## satisfies the assumptions in Lemma 2 mentioned above. By that lemma, ##L=F\circ G## (with ##F## from the same post) is a generalized limit.

Next, put ##\{y_N\}=G(\{x_n\})## and ##r=L(\{x_n\})=F(\{y_N\})##. By the properties of ##F##, this implies that ##\{N\in\mathbb Z_+\,|\,|y_N-r|<\epsilon\}\in\mathcal U##. But ##y_N=M_{p_N}^{m_N}(\{x_n\})## for all ##N\in\mathbb Z_+##, so by the choice of the pairs ##(m_N,p_N)## we have ##\{N\in\mathbb Z_+\,|\,|y_N-l|<\epsilon\}=\varnothing\notin\mathcal U##. If ##r=l##, these two sets would be equal, a contradiction.
Thus, ##L(\{x_n\})=r\neq l##.

We have found a generalized limit ##L## such that ##L(\{x_n\})\neq l##.

Hence, if ##L(\{x_n\})=l## for all generalized limits ##L##, then ##M_p^m(\{x_n\})\to l## as ##p\to\infty##, uniformly in ##m\in\mathbb Z_+##. Q.E.D.

The converse part (##\Leftarrow##) was proved in the previous post without nonstandard analysis, so I leave the proof of the converse part unchanged.
 
  • Like
Likes micromass
  • #78
mfb said:
Birds on a wire: Instead of taking the limit of bird number to infinity, we can also let the length of the wire go to infinity while keeping the bird density constant. The density is arbitrary, set it to 1. The problem is equivalent to "for a given arbitrary point, what is the probability that it is painted?" There are two possible ways to get it painted, due to the next bird to the left and due to the next bird to the right. In total we have to consider 4 birds, because we also need the left/right neighbor of those two birds. Let’s call the bird positions, from left to right, A, B, D and E, while our chosen point in the centre is C.
Due to symmetry, “A is closer to B” has the same probability as “D is closer to B”, and the same for bird C. The length between B and D correlates those events, however, so the probability that the wire is painted is not 3/4. We can solve this via an integral.

The length distributions of (AB), (BC), (CD) and (DE) all follow an exponential distribution with expectation value 1. Note that the expected length (BD) between two birds is now 2. This is the difference between “we pick a random bird interval” and “we pick a bird interval covering a random point” - where longer distances are overrepresented.

The sum of two exponential distributions is an Erlang distribution ##f(x;k,\lambda)=\frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}##, here k=2 and ##\lambda=2##, which simplifies to ##f(x;2,2)=4xe^{-2x}##. For a given length x of (BD), the probability that our point C is not painted is ##P(x) = \left(1-e^{-x}\right)^2##, as ##1-e^{-x}## is the probability to have a bird separation smaller than x, and we need two of those smaller bird separations.

The total probability is then given by integrating the Erlang function with this additional factor:
$$p_{clean}= \int_0^\infty 4 x e^{-2x } \left(1-e^{-x}\right)^2 dx$$
Collecting all the terms is a task for WolframAlpha; the result is 13/36, about 0.361.

The fraction of painted wire is then 23/36, about 0.639.

Sorry mfb, that is not the correct answer to the problem. It's more than double the right answer.
 
