Micromass' big summer challenge

  • #36
@PeroK: The probabilities are not independent - if the drunk man is home after step 1, he is more likely to also be at home after step 3. All you can show that way (and it would need a proper proof) is that the expected number of visits to the origin is larger than 1.

I know the puzzle, so no answer from me.
 
  • #37
mfb said:
@PeroK: The probabilities are not independent - if the drunk man is home after step 1, he is more likely to also be at home after step 3. All you can show that way (and it would need a proper proof) is that the expected number of visits to the origin is larger than 1.

I know the puzzle, so no answer from me.

Those are cumulative probabilities, assuming he stops when he gets home. Have edited the post above.
 
  • #38
To me it looks very much like a classical Markov chain. But my stochastics exam was on a Rosenmontag (carnival Monday), and the only probability ##1## I can see is that I would have made some severe mistakes.
 
  • #39
Ah, should have looked at the probabilities more carefully, or checked the OEIS sequence.
 
  • #40
Problem 5

In this problem we have a simple random walk. A simple random walk on ##Z^d## is a random walk in which the probability of moving from a point to any one of its ##2d## (nearest) neighbors is ##\frac{1}{2d}##. In our problem ##d = 2##, so the aforementioned probability is ##\frac{1}{4}##.

Let ##rw## denote a random walk that starts at the origin ##(0,0)##.
##rw## is recurrent iff the probability that it never returns to the origin is ##0##.
##rw## is transient iff the probability that it never returns to the origin is ##>0##.

We'll utilize G. Pólya's theorem: a simple random walk on the ##d##-dimensional lattice ##Z^d## is recurrent for ##d = 1## and ##d = 2##, but transient for ##d \geq 3##.

We'll prove this theorem for ##d = 1## and ##d = 2##, where ##rw## is recurrent. Recurrence implies that the walk returns to the origin infinitely many times.

Before beginning our proof, let ##u_n = \Pr\{rw \text{ starting at } 0 \text{ is at } 0 \text{ after exactly } n \text{ steps}\}##, with ##u_0 = 1##.
Let ##X_n## be a random variable. We define $$X_n = \begin{cases}
1, & rw \text{ is at } 0 \text{ at time } n\\
0, & \text{otherwise}
\end{cases}$$

If ##TT## is the total number of times ##rw## is at ##0##, then ##TT = \sum_{n=0}^{\infty} X_n##. Let ##m## denote the expected total number of times at ##0##; then ##m = \sum_{n=0}^{\infty}E(X_n)##. Now ##E(X_n) = 1 \cdot u_n + 0 \cdot (1 - u_n) = u_n##, hence ##m = \sum_{n=0}^{\infty} u_n##. If this sum diverges then ##rw## is recurrent; if it converges, ##rw## is transient.
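(Not part of the proof, but here's a quick Monte Carlo sketch in Python — function and variable names are my own — illustrating the criterion: the truncated expectation ##\sum_{n \le N} u_n##, estimated empirically for ##d = 2##, keeps growing as the horizon ##N## grows.)

```python
import random

def expected_visits(horizon, trials=2000):
    """Monte Carlo estimate of E[sum_{n=0}^{horizon} X_n] for the 2D simple walk."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0
    for _ in range(trials):
        x = y = 0
        visits = 1  # X_0 = 1: the walk starts at the origin
        for _ in range(horizon):
            dx, dy = random.choice(moves)
            x += dx
            y += dy
            if x == 0 and y == 0:
                visits += 1
        total += visits
    return total / trials

# The estimate keeps growing with the horizon, which is what a divergent
# sum of u_n looks like numerically.
for horizon in (100, 1000, 10000):
    print(horizon, expected_visits(horizon))
```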

Proof

Let's begin with the case ##d = 1## (##rw## on a line). In order for the drunk man to return to ##0##, he must take the same number of steps to the left as to the right, so returns can only occur at even times. Hence ##u_{2n} = {2n\choose n}\left(\frac{1}{2}\right)^{2n}##. We can now use Stirling's formula, ##n! \approx \sqrt{2\pi n}\cdot e^{-n}\cdot n^n##, to approximate ##u_{2n}## asymptotically. We have:
$$\begin{align}
u_{2n} & = \frac{(2n)!}{n!(2n - n)!}(\frac{1}{2})^{2n} \nonumber \\
& \approx \frac{\sqrt{2\pi 2n}\cdot e^{-2n}(2n)^{2n}}{(\sqrt{2\pi n} \cdot e^{-n}\cdot n^n)^2 \cdot 2^{2n}} \nonumber \\
& = \frac{1}{\sqrt{\pi n}} \nonumber
\end{align}$$
So $$\sum_{n} u_{2n} \approx \sum_{n}\frac{1}{\sqrt{\pi n}},$$ and this last series diverges, so the simple ##rw## in ##d = 1## is recurrent.
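As a numerical sanity check (a sketch of my own, using nothing beyond the formula just derived): the exact ##u_{2n}## tracks ##1/\sqrt{\pi n}## closely, and the partial sums grow without bound.

```python
from math import comb, pi, sqrt

# Exact u_{2n} = C(2n, n) / 2^{2n} for d = 1, compared with Stirling's
# 1/sqrt(pi*n); the running partial sum keeps growing, as divergence demands.
total = 0.0
for n in range(1, 2001):
    u = comb(2 * n, n) / 4 ** n
    total += u
    if n in (1, 10, 100, 1000, 2000):
        print(f"n={n:5d}  u_2n={u:.6f}  1/sqrt(pi n)={1 / sqrt(pi * n):.6f}  partial sum={total:.3f}")
```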

##d = 2##
Now, this is the case we're interested in. Analogously to the previous case, the drunk man must take the same number of steps left as right and the same number of steps up as down. So we consider paths that return in ##2n## steps, but now each path occurs with probability ##\left(\frac{1}{4}\right)^{2n}##. Here ##u_{2n}## is the square of its counterpart in ##d = 1## (to see it, we can formulate ##u_{2n}## analogously to the previous case as ##u_{2n} = \left(\frac{1}{4}\right)^{2n}\cdot \sum_{k = 0}^{n} \frac{(2n)!}{k!(n - k)!k!(n - k)!}##, and doing the math we end up with ##u_{2n} = \left(\frac{1}{2^{2n}}{2n \choose n}\right)^2##). Now, $$m = \sum_{n} u_{2n} \approx \sum_{n} \frac{1}{\sqrt{\pi n}} \cdot \frac{1}{\sqrt{\pi n}} = \sum_{n}\frac{1}{\pi n}.$$ This last series diverges, so ##rw## in two dimensions is also recurrent.
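The squaring identity is easy to confirm numerically for small ##n## (again a quick sketch of my own, not part of the proof):

```python
from math import comb, factorial

# Check that the direct path count gives u_{2n} = (C(2n, n) / 4^n)^2 in d = 2.
for n in range(1, 8):
    paths = sum(factorial(2 * n) // (factorial(k) ** 2 * factorial(n - k) ** 2)
                for k in range(n + 1))
    u_direct = paths / 16 ** n              # each 2n-step path has probability (1/4)^(2n)
    u_squared = (comb(2 * n, n) / 4 ** n) ** 2
    print(n, u_direct, u_squared)
```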

Now, I won't go into the proof of the theorem for ##d = 3## and beyond, as this is outside the scope of our problem. Just for completeness: the proof proceeds in a way analogous to the previous cases, but with probability ##\left(\frac{1}{6}\right)^{2n}## for each path to occur, and the ##rw## is proved transient.

Back to our problem: the fact that the drunk man will return to the origin (i.e. the bar) infinitely many times, having followed completely random paths, means two things. First, he will necessarily visit all intersections of the grid infinitely many times, and in particular his home at ##(10,10)##; second, the probability of such an event is ##1## (in fact, there is a lemma to the effect that visiting infinitely many times implies probability ##1##).
 
  • #41
QuantumQuest said:
Back to our problem: the fact that the drunk man will return to the origin (i.e. the bar) infinitely many times, having followed completely random paths, means two things. First, he will necessarily visit all intersections of the grid infinitely many times, and in particular his home at ##(10,10)##; second, the probability of such an event is ##1## (in fact, there is a lemma to the effect that visiting infinitely many times implies probability ##1##).

If ##p## is the probability that he ever returns to the bar and ##p_n## is the probability that he returns precisely ##n## times, then:

##E(TT) = 1 + \sum_{n=1}^{\infty}np_n = 1 + \sum_{n=1}^{\infty}np^n(1-p) = 1 + \frac{p}{1-p} = \frac{1}{1-p} \ \ (p \ne 1)##

Hence ##E(TT)## is finite iff ##p < 1##
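(A quick numerical check of this geometric-series computation — my own throwaway snippet, truncating at a large ##N## with a sample value ##p = 0.7##:)

```python
# Numerical check of E(TT) = 1/(1 - p) when the number of returns is
# geometric: Pr(exactly n returns) = p^n (1 - p).
p = 0.7
e_tt = 1 + sum(n * p ** n * (1 - p) for n in range(1, 10_000))
print(e_tt, 1 / (1 - p))   # both should print ~3.3333
```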

For the 1D case I got some nice formulas but, unfortunately, they don't seem to extend to more than 1D. For one dimension:

(In what follows, all paths have length ##2n##.)

##x(n)## = number of paths that do not return to the starting point on or before ##2n## steps
##y(n)## = number of paths that return to the starting point on or before ##2n## steps
##r(n)## = number of paths that end at the starting point after ##2n## steps
##s(n)## = number of paths that return to the starting point for the first time after ##2n## steps, then:

The key recurrence relation is:

##y(n) = 4y(n-1) + s(n)##

And, we get:

##x(n) = r(n) = \binom{2n}{n}##

##y(n) = 4^n - x(n) = 4^n - \binom{2n}{n}##

##s(n) = 4r(n-1) - r(n) = 4\binom{2n-2}{n-1} - \binom{2n}{n}##

It's easy to show from this that the probability of returning to the starting point at some stage is 1, as ##y(n)/4^n \rightarrow 1## as ##n \rightarrow \infty##.
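Here is a small brute-force enumeration (my own verification sketch) confirming the formulas for ##x(n)##, ##y(n)##, ##r(n)## and ##s(n)## for small ##n##:

```python
from itertools import product
from math import comb

# Enumerate all 2^{2n} walks of length 2n on the line and classify them.
for n in range(1, 5):
    x = y = r = s = 0
    for steps in product((-1, 1), repeat=2 * n):
        pos, first_zero = 0, None
        for t, d in enumerate(steps, start=1):
            pos += d
            if pos == 0 and first_zero is None:
                first_zero = t
        if first_zero is None:
            x += 1          # never returned
        else:
            y += 1          # returned on or before 2n steps
            if first_zero == 2 * n:
                s += 1      # first return exactly at step 2n
        if pos == 0:
            r += 1          # ends at the starting point
    print(n, (x, r, comb(2 * n, n)),
          (y, 4 ** n - comb(2 * n, n)),
          (s, 4 * comb(2 * n - 2, n - 1) - comb(2 * n, n)))
```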

In two dimensions, we have:

##r(n) = \binom{2n}{n}^2##

The key recurrence relation is:

##y(n) = 16y(n-1) + s(n)##

And:

##s(n) = 4, 20, 176, 1876 \dots##

Which I also found on OEIS:

https://oeis.org/A054474

But, no formula is given. I couldn't find ##x(n)## or ##y(n)## on OEIS. They are:

##y(n) = 4, 84, 1520, 26196 \dots##
##x(n) = 12, 172, 2576, 39340 \dots##
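The values above can at least be reproduced from the standard first-return decomposition ##r(n) = \sum_{k=1}^{n} s(k)\, r(n-k)## (with ##r(0) = 1##); a computational sketch of my own, not the missing closed formula:

```python
from math import comb

# Rebuild s(n), y(n), x(n) for the 2D walk from r(n) = C(2n, n)^2
# via the first-return decomposition and the recurrence y(n) = 16 y(n-1) + s(n).
N = 4
r = [comb(2 * n, n) ** 2 for n in range(N + 1)]
s = [0]
y = [0]
for n in range(1, N + 1):
    s.append(r[n] - sum(s[k] * r[n - k] for k in range(1, n)))
    y.append(16 * y[n - 1] + s[n])
x = [16 ** n - y[n] for n in range(1, N + 1)]
print("s:", s[1:])   # [4, 20, 176, 1876]
print("y:", y[1:])   # [4, 84, 1520, 26196]
print("x:", x)       # [12, 172, 2576, 39340]
```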

Interesting and frustrating that the inductive solution for 1D doesn't extend to 2D. I thought I'd post this anyway.
 
  • #42
Some thoughts on a combinatorial approach to question (1), though it still needs some work. I use some common identities in the study of integer compositions, which can be found almost anywhere, including

https://en.wikipedia.org/wiki/Composition_(combinatorics)

Every distribution of birthdays determines a weak composition of length ##k = 365## of the integer ##n = 2016##; that is, a way of writing ##n## as the sum of ##k## non-negative integers, with distinct orderings counted separately. The total number of weak compositions is given by

## \begin{align*}
T &= {n+k-1 \choose k-1}
\end{align*}##

The number of weak compositions with exactly ##x## zeros, ##Z_x##, is given by the number of strict compositions of ##n## (that is, excluding zeros) with ##k-x## elements, times the number of choices for the locations of the zeros, giving

## \begin{align*}
Z_x &= {k \choose x} {n-1 \choose k-x-1}
\end{align*}##
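Both counts are easy to verify by brute force for tiny ##n## and ##k## (a throwaway check of my own):

```python
from itertools import product
from math import comb

# Brute-force check of T = C(n+k-1, k-1) and Z_x = C(k, x) C(n-1, k-x-1)
# for a small instance.
n, k = 6, 4
tuples = [parts for parts in product(range(n + 1), repeat=k) if sum(parts) == n]
print(len(tuples), comb(n + k - 1, k - 1))   # both 84
for x in range(k):
    count = sum(1 for parts in tuples if parts.count(0) == x)
    print(x, count, comb(k, x) * comb(n - 1, k - x - 1))
```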

The probability of ##x## zeros is then

## \begin{align*}
P(x) &= \frac{Z_x}{T} \\
&= {k \choose x} {n-1 \choose k-x-1} \bigg / {n+k-1 \choose k-1} \\
&\approx \frac{k^x}{x!} {n-1 \choose k-x-1} \bigg / {n+k-1 \choose k-1}
\end{align*}##

where we use the approximation

## {N \choose k} = \frac{N^k}{k!} \left ( 1 + \mathcal{O} \left ( \frac{1}{N}\right ) \right ) \text{for} \ k = \mathcal{O}(1)##

which gets us part way to something resembling a Poisson density function, though I'm not sure exactly how to deal with the remaining binomial coefficients. Using the above approximation, and some others, we can get

##\begin{align*}
P(x) \approx \frac{k^x}{x!} \left ( \frac{(n-1)^{k-x+1}}{(k-x+1)!} \frac{\sqrt{\pi (n+k-1)}}{2^{(n+k-1)/2}} \right )
\end{align*}##

though I'm not sure if this gets us closer.
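For reference, the exact ##P(x)## is cheap to evaluate directly for ##n = 2016##, ##k = 365##, which gives something concrete to compare any proposed approximation against (sketch of my own; the mean should match the closed form ##k(k-1)/(n+k-1) \approx 55.8## under this model):

```python
from math import comb

# Exact evaluation of P(x) = C(k, x) C(n-1, k-x-1) / C(n+k-1, k-1),
# no approximations.
n, k = 2016, 365
T = comb(n + k - 1, k - 1)
P = [comb(k, x) * comb(n - 1, k - x - 1) / T for x in range(k)]
print(sum(P))                                  # sanity check: ~1.0
print(sum(x * p for x, p in enumerate(P)))     # mean number of zeros, ~55.8
```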
 
  • #43
A hint on #8. I hope you don't mind, micromass, but this problem seems to be languishing. If this is inappropriate, feel free to remove this post, with my permission.
Hint:
Almost the only way to show a subgroup is normal is to find a homomorphism with that subgroup as its kernel. Every subgroup ##H## yields two natural homomorphisms into permutation groups, since every group ##G## acts both on the cosets and on the conjugates of a subgroup ##H##, i.e. by translation and by conjugation. In both cases, the kernel of the map is contained in the normalizer of ##H## (for the coset action, in ##H## itself). This is basic knowledge learned early in a beginning group theory class.

The second part of the problem seems a good bit harder to me, and I usually do it using a knowledge of semidirect products, as well as the elementary consequences of the class equation, e.g. that all groups of order ##p^2## are abelian. Perhaps people could be encouraged to publish partial results on this problem, so as to identify the hardest parts.
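To make the first hint concrete, here is a toy computation (my own, not part of the problem) with ##G = S_3## and ##H## generated by a transposition: the action on the three cosets gives a homomorphism into a permutation group, whose kernel — the core of ##H## — is trivial here, since ##H## is not normal.

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]], permutations as tuples."""
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]          # identity and the transposition (0 1)

# Left cosets gH, each stored as a frozenset of permutations.
cosets = []
for g in G:
    c = frozenset(compose(g, h) for h in H)
    if c not in cosets:
        cosets.append(c)

# Kernel of the action g . (xH) = (gx)H: elements fixing every coset.
kernel = [g for g in G
          if all(frozenset(compose(g, x) for x in c) == c for c in cosets)]
print(len(cosets), kernel)          # 3 cosets; kernel = [(0, 1, 2)], i.e. trivial
```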
 
