Micromass' big August challenge

In summary, this thread contains challenges aimed at the high school and college freshman level as well as at more advanced participants; previously unsolved challenges are not repeated here. A solution only counts if it comes with a full derivation or proof. Nontrivial results may be used, provided they are cited and can be considered common knowledge to all mathematicians. The challenges themselves can be found in the provided link.
  • #36
For high school question 4 I got the total area covered by the donkey = 7646.786. Is it correct? I will post my working if the answer is right.
 
  • #37
Question 3, previous high school challenges:
The numbers increase as the sequence progresses, and the square roots suggest that perfect squares are hiding inside; according to my calculations the answer should be 3.
This is because ##3 = \sqrt 9 = \sqrt{8+1}##; further, ##8 = \sqrt{64} = 2\sqrt{16}##; again, ##16## can be written as ##15+1## and ##15 = \sqrt{225} = 3\sqrt{25}##, where the factor 9 has been taken out of the root as 3, and the sequence keeps going on in this way.
 
  • Like
Likes micromass
  • #38
micromass said:
The idea is that your construction must work for all realizations of ##Y_i##, not just special cases. Sure for some realizations of ##Y_i##, you don't need much to make ##s## explode. But it must work for all realizations.
That is a weird min/max situation then. We are looking for potential issues with ##s##, but then we take the realization where those problems appear last, and ask when they can first appear.

Anyway, that case happens if all differences between distinct ##Y_i## are different: ##n(n-1)/2## different differences, and 0 is in the set as always, for a total of ##n(n-1)/2 + 1## elements.

We look for the largest ##k## such that the ##k## unchanged elements contribute fewer than half of the total number of elements in the set; they have ##k(k-1)/2 + 1## distinct differences (again including 0).
$$\frac{k(k-1)}{2} + 1 < \frac 1 2 \left( \frac{n(n-1)}{2} + 1\right)$$
$$2k^2 - 2k + 2 - n(n-1) < 0$$
$$k < \frac{1 + \sqrt{2n(n-1)-3}}{2}$$
The ##k## we look for is the largest integer strictly smaller than the expression on the right. This could be written in closed form in weird ways with ceil() of negated values, but that looks ugly.

The explosion value is then given by (n-k)/n.

As an example, for n = 10 we get k < 7.15 => k = 7 => explosion value 0.3.
If we change 3 elements, then the 7 unchanged values lead to 22 elements in the set of differences, while the 3 changed values can contribute 3*7+3*2/2 = 24 large differences => median blows up.
If we change just 2 elements, then the 8 unchanged values lead to 29 elements in the set of differences, while the 2 changed values can contribute 2*8+2*1/2 = 17 large differences => median does not blow up.
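A quick numerical sketch of this counting (the helper below is just my illustration, not part of the argument):

```python
# For a given n, find the largest k with k(k-1)/2 + 1 < (n(n-1)/2 + 1)/2 and report
# the resulting explosion value (n - k)/n; this reproduces the n = 10 example above.

def breakdown(n):
    total = n * (n - 1) // 2 + 1                  # distinct differences plus the 0
    k = max(k for k in range(n + 1)
            if k * (k - 1) // 2 + 1 < total / 2)  # unchanged values stay below half
    return k, (n - k) / n

for n in (5, 10, 20, 50):
    k, frac = breakdown(n)
    print(f"n = {n}: k = {k}, explosion value = {frac:.3f}")   # n = 10 gives 0.3
```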

-------------

Why is "1" twice in your set in high school challenge 5, and why did you leave out 9?
 
  • Like
Likes micromass
  • #39
Regarding the statement of advanced problem #6 I have three questions:

(1) Nitpicking: I suppose that for the ##\sigma##-additivity we must have ##A_k \in \mathcal{B}## for all ##k \in \mathbb{N}##.
(2) A similar remark can be made regarding the definition of "non-atomic".

Maybe these questions are superfluous, if the convention in probability is that sets are understood to be measurable if not stated otherwise? (I don't know, I'm not a probabilist.)

(3) I was a bit confused by the formulation of what needs to be proved. Should it be: "Prove that in a non-atomic probability space, for each ##A \in \mathcal{A}## with ##\mathbb{P}(A) > 0## and for every sequence ##p_1,p_2,\ldots \ge 0## with ##\sum_k{p_k} = 1## it holds that ##A## can be decomposed into a disjoint union ##A = \bigcup_k{A_k}## with ##A_k \in \mathcal{A}## and ##\mathbb{P}(A_k) = p_k\mathbb{P}(A)##."?

Thank you.
 
  • #40
Krylov said:
Regarding the statement of advanced problem #6 I have three questions:

(1) Nitpicking: I suppose that for the ##\sigma##-additivity we must have ##A_k \in \mathcal{B}## for all ##k \in \mathbb{N}##.
(2) A similar remark can be made regarding the definition of "non-atomic".

Maybe these questions are superfluous, if the convention in probability is that sets are understood to be measurable if not stated otherwise? (I don't know, I'm not a probabilist.)

(3) I was a bit confused by the formulation of what needs to be proved. Should it be: "Prove that in a non-atomic probability space, for each ##A \in \mathcal{A}## with ##\mathbb{P}(A) > 0## and for every sequence ##p_1,p_2,\ldots \ge 0## with ##\sum_k{p_k} = 1## it holds that ##A## can be decomposed into a disjoint union ##A = \bigcup_k{A_k}## with ##A_k \in \mathcal{A}## and ##\mathbb{P}(A_k) = p_k\mathbb{P}(A)##."?

Thank you.

That is all correct.
 
  • #41
For 1, it is easily noted that ##f(0) = 1##, and that ##f(x)## is non-zero for all ##x## (if ##f(a) = 0## for some ##a##, then ##f(x) = f(a)f(x-a) = 0## for all ##x##, contradicting ##f(1) = 2##). Assume continuity at some number ##x##. Then for every ##\epsilon>0##, there exists a ##\delta>0## such that ##|f(x+t)-f(x)| < \epsilon## whenever ##|t| < \delta##. I.e. ##|f(x)||f(t)-1| < \epsilon## whenever ##|t| < \delta##. Hence ##f(t) \to 1## when ##t \to 0##.

So let ##y## be any real number and ##\epsilon > 0##. Then choosing ##t## small enough forces the expression ##|f(y+t)-f(y)| = |f(y)||f(t)-1|## to be less than ##\epsilon##. This shows continuity at ##y##. So ##f## is continuous.

By induction we easily see that ##f(n) = 2^n## for every positive integer ##n##, and then ##f(-n) = 1/f(n) = 2^{-n}## since ##f(n)f(-n) = f(0) = 1##. Furthermore, for integers ##m,n## with ##n>0##, we have ##2^m = f(m) = f(\frac{mn}{n}) = f(\frac{m}{n})^n## which implies that ##f(\frac{m}{n}) = 2^{\frac{m}{n}}## (note ##f(t) = f(t/2)^2 > 0##, so the positive root is the right one). So ##f(x) = 2^x## for every rational ##x##. The same formula holds for all real numbers by approximating with rational numbers.
 
  • Like
Likes micromass
  • #42
@stevendaryl: Technically you have to show f(x) > 0 before you can take its logarithm, but replacing everything by f and 2^whatever gives the same proof without that issue.

Here is a shorter way to use continuity: we know ##g## is continuous at ##x##, and using ##g(q) = q## for rational ##q## we get ##g(x) = x##. Now assume ##g(y) \neq y## for some ##y##. Then ##g(y) = cy## for some ##c \neq 1## (and ##y \neq 0##), which implies ##g(y+q) = cy + q## for every rational ##q##. Now take a sequence of rationals ##q_i## such that ##y + q_i## converges to ##x##; then ##g(y+q_i) = cy + q_i \to cy + x - y##, so continuity at ##x## gives ##g(x) = x + (c-1)y \neq x##. Contradiction.
 
  • Like
Likes micromass
  • #43
Advanced 1b:

Clearly, ##f(x)=2^x## is a function satisfying the conditions in 1a (in this case including the continuity condition).

To find another one (which is then not continuous at any ##x\in \mathbb R##, by 1a), we choose a (real) irrational number ##s##. Let us consider ##\mathbb R## as vector space over ##\mathbb Q## (the rational numbers). Then ##\Bbb R## has a Hamel basis ##B## over ##\Bbb Q##, which means that every ##r\in\mathbb R## can be written as a unique linear combination ##r=\sum_{b\in B}r_b b##, where all ##r_b\in \Bbb Q## and only finitely many of them are ##\neq 0##, so the sum is well defined. We can also choose ##B## so that ##1,s\in B##. (All this can be proved by the Axiom of Choice. I can give the proof if you want.)

Next, choose ##t\in \mathbb R_+## such that ##t\neq 2^s##.

We can now define ##g:\mathbb R\to \mathbb R## by ##g(r)=2^{r_1}t^{r_s}## for all ##r\in \mathbb R##, with ##r_1, r_s## as above, associated to the given ##r##.

Now, we have, for ##r,r'\in\mathbb R##: ##g(r+r')=2^{r_1+r'_1}t^{r_s+r'_s}=2^{r_1}t^{r_s}2^{r'_1}t^{r'_s}=g(r)g(r').## Also ##g(1)=2^1t^0=2##.
But ##g(s)=2^0t^1=t\neq2^s=f(s)##.

Thus, ##f## and ##g## are two unequal functions satisfying the conditions in 1a, except the continuity condition (for ##g##).
 
  • Like
Likes mfb and micromass
  • #44
Here is part 1 of 2 of my attempt for advanced problem #6. All sets are measurable, unless stated otherwise. I have to say that although I am not such a fan of probability, I enjoyed this problem quite a bit, thank you @micromass. It made me open my neglected copy of Billingsley's "Probability and Measure", 3rd edition. There I found problem #6 as problem 2.19(d). Since I never saw it before, I believe I am allowed to try the solution?

In the preceding problem 2.19(c) it is asked: Show in the nonatomic case that ##0 \le x \le \mathbb{P}(A)## implies that there exists ##B \subseteq A## such that ##\mathbb{P}(B) = x##. Also, a hint is given.

Of course, this is a very intuitive, "intermediate value like" result, but it is not something that I consider obvious or "common knowledge". Probably its proof is somewhere else in the literature, but I find it nicer (and more instructive for myself) to prove it in a subsequent post in this thread. In the present post I will explain how [Billingsley, 2.19(c)] can be used to solve challenge #6.

Hence, let ##p_1,p_2,\ldots \ge 0## with ##\sum_k{p_k} = 1## be given and let ##A## be an event with ##\mathbb{P}(A) > 0##. Then
$$
0 \le p_1\mathbb{P}(A) \le \mathbb{P}(A)
$$
so by [Billingsley, 2.19(c)] there exists ##A_1 \subseteq A## such that ##\mathbb{P}(A_1) = p_1\mathbb{P}(A)##. To construct ##A_2## we note that
$$
0 \le p_2\mathbb{P}(A) \le (1 - p_1)\mathbb{P}(A) = \mathbb{P}(A) - \mathbb{P}(A_1) = \mathbb{P}(A \setminus A_1)
$$
so again by [Billingsley, 2.19(c)] there exists ##A_2 \subseteq A \setminus A_1## such that ##\mathbb{P}(A_2) = p_2\mathbb{P}(A)##. Clearly ##A_1## and ##A_2## are disjoint. This construction of ##A_1## and ##A_2## suggests how to proceed inductively. Suppose we have constructed the disjoint sets
$$
A_1 \subseteq A, \quad A_2 \subseteq A \setminus A_1, \quad \ldots \quad A_n \subseteq A \setminus \bigcup_{k=1}^{n-1}{A_k}
$$
such that ##\mathbb{P}(A_k) = p_k\mathbb{P}(A)## for all ##k = 1,\ldots,n##. Then
$$
0 \le p_{n+1}\mathbb{P}(A) \le \left(1 - \sum_{k=1}^n{p_k} \right)\mathbb{P}(A) = \mathbb{P}(A) - \sum_{k=1}^n{\mathbb{P}(A_k)} = \mathbb{P}\left(A \setminus \bigcup_{k=1}^n{A_k}\right)
$$
so by [Billingsley, 2.19(c)] there exists ##A_{n+1} \subseteq A \setminus \bigcup_{k=1}^n{A_k}## such that ##\mathbb{P}(A_{n+1}) = p_{n+1}\mathbb{P}(A)##.

What remains to be shown is that the union of all ##A_k## covers ##A##. We observe that ##\left(\bigcup_{k=1}^n{A_k} \right)_{n \in \mathbb{N}}## is increasing with respect to inclusion, with limit set ##A_{\infty} := \bigcup_{k=1}^{\infty}{A_k} \subseteq A##. By continuity of the probability measure from below and disjointness of the ##A_k## we have
$$
\mathbb{P}(A_{\infty}) = \lim_{n \to \infty}{\mathbb{P}\left(\bigcup_{k=1}^n{A_k} \right)} = \sum_{k=1}^{\infty}{\mathbb{P}(A_k)} = \mathbb{P}(A)
$$
Therefore the difference ##A_0 := A \setminus A_{\infty}## has probability zero and does not intersect any of the other ##A_k##. Hence I take the liberty to add ##p_0 := 0## satisfying ##\mathbb{P}(A_0) = p_0\mathbb{P}(A)## to the start of the sequence ##p_1,p_2,\ldots##. If this is acceptable, then what remains is the solution of [Billingsley, 2.19(c)] which I will post here today or tomorrow.
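As a concrete illustration of this construction (my own toy example, not part of the proof): on the unit interval with Lebesgue measure and ##A = [0,1)##, the ##A_k## can simply be taken as consecutive half-open intervals of length ##p_k\mathbb{P}(A)##. A short sketch:

```python
# Consecutive intervals A_k = [left, left + p_k * P(A)) are pairwise disjoint, have
# the prescribed measures, and exhaust A up to a remainder of measure
# (1 - sum of the used p_k) * P(A); here p_k = 2^-k, truncated for illustration.
p = [2.0 ** -k for k in range(1, 25)]     # a truncated example of the p_k
PA = 1.0                                  # P(A) for A = [0, 1)

intervals, left = [], 0.0
for pk in p:
    intervals.append((left, left + pk * PA))   # A_k = [left, left + p_k * P(A))
    left += pk * PA

print(all(abs((b - a) - pk * PA) < 1e-12 for (a, b), pk in zip(intervals, p)))  # True
print(1.0 - left)   # measure of the leftover A \ union(A_k) for this truncation
```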
 
  • Like
Likes micromass
  • #45
Krylov said:
Here is part 1 of 2 of my attempt for advanced problem #6. All sets are measurable, unless stated otherwise. I have to say that although I am not such a fan of probability, I enjoyed this problem quite a bit, thank you @micromass. It made me open my neglected copy of Billingsley's "Probability and Measure", 3rd edition. There I found problem #6 as problem 2.19(d). Since I never saw it before, I believe I am allowed to try the solution?

In the preceding problem 2.19(c) it is asked: Show in the nonatomic case that ##0 \le x \le \mathbb{P}(A)## implies that there exists ##B \subseteq A## such that ##\mathbb{P}(B) = x##. Also, a hint is given.

Of course, this is a very intuitive, "intermediate value like" result, but it is not something that I consider obvious or "common knowledge". Probably its proof is somewhere else in the literature, but I find it nicer (and more instructive for myself) to prove it in a subsequent post in this thread. In the present post I will explain how [Billingsley, 2.19(c)] can be used to solve challenge #6.

Hence, let ##p_1,p_2,\ldots \ge 0## with ##\sum_k{}{p_k} = 1## be given and let ##A## be an event with ##\mathbb{P}(A) > 0##. Then
$$
0 \le p_1\mathbb{P}(A) \le \mathbb{P}(A)
$$
so by [Billingsley, 2.19(c)] there exists ##A_1 \subseteq A## such that ##\mathbb{P}(A_1) = p_1\mathbb{P}(A)##. To construct ##A_2## we note that
$$
0 \le p_2\mathbb{P}(A) \le (1 - p_1)\mathbb{P}(A) = \mathbb{P}(A) - \mathbb{P}(A_1) = \mathbb{P}(A \setminus A_1)
$$
so again by [Billingsley, 2.19(c)] there exists ##A_2 \subseteq A \setminus A_1## such that ##\mathbb{P}(A_2) = p_2\mathbb{P}(A)##. Clearly ##A_1## and ##A_2## are disjoint. This construction of ##A_1## and ##A_2## suggests how to proceed inductively. Suppose we have constructed the disjoint sets
$$
A_1 \subseteq A, \quad A_2 \subseteq A \setminus A_1, \quad \ldots \quad A_n \subseteq A \setminus \bigcup_{k=1}^{n-1}{A_k}
$$
such that ##\mathbb{P}(A_k) = p_k\mathbb{P}(A)## for all ##k = 1,\ldots,n##. Then
$$
0 \le p_{n+1}\mathbb{P}(A) \le \left(1 - \sum_{k=1}^n{p_k} \right)\mathbb{P}(A) = \mathbb{P}(A) - \sum_{k=1}^n{\mathbb{P}(A_k)} = \mathbb{P}\left(A \setminus \bigcup_{k=1}^n{A_k}\right)
$$
so by [Billingsley, 2.19(c)] there exists ##A_{n+1} \subseteq A \setminus \bigcup_{k=1}^n{A_k}## such that ##\mathbb{P}(A_{n+1}) = p_{n+1}\mathbb{P}(A)##.

What remains to be shown, is that the union of all ##A_k## covers ##A##. We observe that ##\left(\bigcup_{k=1}^n{A_k} \right)_{n \in \mathbb{N}}## is increasing with respect to inclusion, say to a limit set ##A_{\infty} \subseteq A##. By continuity of the probability measure from below and disjointness of the ##A_k## we have
$$
\mathbb{P}(A_{\infty}) = \lim_{n \to \infty}{\mathbb{P}\left(\bigcup_{k=1}^n{A_k} \right)} = \sum_{k=1}^{\infty}{\mathbb{P}(A_k)} = \mathbb{P}(A)
$$
Therefore the difference ##A_0 := A \setminus A_{\infty}## has probability zero and does not intersect any of the other ##A_k##. Hence I take the liberty to add ##p_0 := 0## satisfying ##\mathbb{P}(A_0) = p_0\mathbb{P}(A)## to the start of the sequence ##p_1,p_2,\ldots##. If this is acceptable, then what remains is the solution of [Billingsley, 2.19(c)] which I will post here today or tomorrow.

That is a really neat solution. I see you even traced the problem back to the book I got it from! I await your solution to the lemma 2.19(c) that you used.
 
  • #46
I was wondering what kind of proof you expect for Question 3 of the high school challenges,
because I couldn't figure out another way to do it, and the second arithmetic proof seems simple and consistent.
 
  • #47
Biker said:
I was wondering what kind of proof you expect for Question 3 of the high school challenges,
because I couldn't figure out another way to do it, and the second arithmetic proof seems simple and consistent.

I am a bit unsure about the following (crucial) sentence. You'll need to formalize it.

Now in order to get back to your reversed sequence you have to trace back what you did. (doing what you have done in reversed)
 
  • #48
Following up on the lemma above for problem 6. Consider the set ##V## of measurable subsets ##X \subseteq A##, ordered by inclusion, and let ##a = \mathbb{P}(A)##. Then the set of values ##S = \{ \mathbb{P}(X) \mid X \in V\}## sits inside the interval ##[0,a]##. For each ##0 \leq p \leq 1##, let ##V_p## be the subset of ##V## consisting of those ##X## with ##\mathbb{P}(X) \leq pa##. If ##X_1 \subseteq X_2 \subseteq \ldots## is an increasing sequence of sets ##X_n \in V_p##, then ##X = \bigcup_i X_i## is also in ##V_p##. Since ##V_p## is also non-empty (it contains the empty set), by Zorn's lemma there exists a maximal element ##M \in V_p##.

Suppose that ##m := \mathbb{P}(M) < pa##. Let ##W_p## be the subset of ##V## consisting of those ##X## containing ##M## with ##\mathbb{P}(X) \geq pa##. By a similar argument ##W_p## has a minimal element ##N##. Now ##n:=\mathbb{P}(N) > pa## (otherwise ##N## is in ##V_p## and contradicts maximality of ##M##). But since the measure space is non-atomic, there exists a subset ##Y \subseteq N - M## with ##0 < y := \mathbb{P}(Y) < \mathbb{P}(N-M) = n-m##. Thus ##\mathbb{P}(M \cup Y) = m+y < n##. If ##m+y \geq pa##, then ##M \cup Y## lies in ##W_p##, contains ##M## and is strictly contained in ##N##, contradicting minimality of ##N##; if ##m+y < pa##, then ##M \cup Y## lies in ##V_p## and strictly contains ##M##, contradicting maximality of ##M##. The conclusion is that ##m = pa##.
 
  • Like
Likes micromass
  • #49
Krylov said:
what remains is the solution of [Billingsley, 2.19(c)] which I will post here today or tomorrow.
disregardthat said:
Following up on the lemma above for problem 6.
Thank you so much, @disregardthat. I see that you live up to your nickname.
Incidentally, Zorn's lemma (or any other version of A.C.) is unnecessary, see the hint for 2.19(c) ("the lemma") in Billingsley and the corresponding "Notes on the Problems" in the back of the book.
 
  • Like
Likes disregardthat
  • #50
My Solution to previously unsolved High School and First Year University Question 3:

My solution to Question 6:

We can start forming a nested radical expression for ##x##:

$$|x|=\sqrt{x^2}\\=\sqrt{1+(x-1)(x+1)}\\=\sqrt{1+(x-1)\sqrt{1+x(x+2)}}\\=\sqrt{1+(x-1)\sqrt{1+x\sqrt{1+(x+1)(x+3)}}}$$
In the same way if we progress indefinitely, we get the following nested radical expression for ##x##:
$$|x|=\sqrt{1+(x-1)\sqrt{1+x\sqrt{1+(x+1)\sqrt{1+(x+2)\sqrt{1+(x+3)\sqrt{\cdots}}}}}}$$
This expression matches the form of the given problem, so put ##x=3## to get:
$$\boxed{3=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{\cdots}}}}}$$
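A quick numerical sanity check of the boxed identity (just an illustration, not part of the derivation): truncate the nested radical at a finite depth and watch it approach 3.

```python
from math import sqrt

def nested(depth):
    """sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) truncated after `depth` radicals."""
    value = 1.0
    for n in range(depth + 1, 1, -1):   # innermost coefficient first
        value = sqrt(1 + n * value)
    return value

for d in (3, 6, 12, 25, 50):
    print(d, nested(d))   # increases toward 3 as the truncation depth grows
```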
 
  • Like
Likes Biker and mfb
  • #51
Krylov said:
Thank you so much, @disregardthat. I see that you live up to your nickname.
Incidentally, Zorn's lemma (or any other version of A.C.) is unnecessary, see the hint for 2.19(c) ("the lemma") in Billingsley and the corresponding "Notes on the Problems" in the back of the book.

I don't have the book, but I would be interested to see what the hint says. My attempts not using Zorn's lemma didn't lead anywhere.
 
  • #52
Here's my weak attempt at advanced number 7. I broke it into a bunch of separate cases. I assume that ##r,s>0## and ##\binom{n}{k}=0## for ##n<k##.

First, for the trivial case of ##r=s##, we have ##r_k = s_k## for all ##k##, and thus
$$\binom {r}{s} = \prod_k {\binom{r_k}{s_k}} = 1$$
So from now on, we'll let ##r>s##. For ##r,s<p##, we have that ##r = r_0## and ##s=s_0## with all other ##r_k, s_k = 0##, and again the two sides of the congruence are equal even without the modulo. For ##r=p##, the left side is ##\binom{p}{s}##, and since ##p## is prime, ##p## divides neither ##s!## nor ##(p-s)!## (by the fundamental theorem of arithmetic, neither has the factor of ##p## needed to cancel the ##p## in the numerator of the binomial coefficient). So the left side is ##0 \bmod p##. The right side is also ##0 \bmod p##, since ##r_0 = 0## and ##s_0 >0##, so ##\binom{r_0}{s_0}=0##.

Now we're left with the case when ##r>p##. This is where things get a little dodgy. For the time being, I'm going to be completely non-rigorous and ignore my gut and simply manipulate the symbols in the hopes of some deeper insight. The modular expression is equivalent to
$$\binom{r}{s} - \prod_k {\binom{r_k}{s_k}} = 0 \bmod p$$
or
$$\frac{r!\prod_k{s_k!(r_k-s_k)!}-s!(r-s)!\prod_k{r_k!}}{s!(r-s)!\prod_k{s_k!(r_k-s_k)!}} = 0 \bmod p$$
which is satisfied if the numerator is divisible by ##p##. Now ##r!## is automatically divisible by ##p## since ##r>p##, and if one (or both) of ##s## and ##(r-s)## is greater than or equal to ##p##, then the entire numerator is divisible by ##p## and we're done. The remaining case is when both ##s## and ##(r-s)## are less than ##p##. We turn to the expression
$$\binom{r}{s} - \prod_k {\binom{r_k}{s_k}} = 0 \bmod p$$
Since ##s, (r-s) <p##, we have that ##\binom{r}{s}=0\bmod p## (from ##r!## in the numerator). On the other hand, ##s = s_0## and all the other ##s_k =0##. In addition, since ##(r-s)<p##, we have that ##p<r<2p##, so ##r=p+r_0##. This means that ##p+r_0-s_0 < p##, or in other words, ##r_0<s_0##. This makes the ##k=0## term of the product equal to zero, so the second term on the left side is also ##0 \bmod p##.

This would complete the proof, except for the sleight of hand in the expression
$$\frac{r!\prod_k{s_k!(r_k-s_k)!}-s!(r-s)!\prod_k{r_k!}}{s!(r-s)!\prod_k{s_k!(r_k-s_k)!}} = 0 \bmod p$$
Here the issue is that ##(r_k-s_k)## might be negative, naively making the factorial infinite. I'm not sure how to make this rigorous, but I'm worried it involves promoting the factorials to gamma functions and looking at the limiting behavior, in which case, I'll take a pass.
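For what it's worth, here is a brute-force check of the congruence being proved (Lucas' theorem), purely as an illustration of the statement rather than of the proof; it assumes Python 3.8+ for math.comb and compares ##\binom{r}{s} \bmod p## with the digit-wise product for many small cases.

```python
from math import comb

def digits(n, p):
    out = []
    while n:
        out.append(n % p)
        n //= p
    return out or [0]

def lucas_rhs(r, s, p):
    rd, sd = digits(r, p), digits(s, p)
    sd += [0] * (len(rd) - len(sd))          # pad s with leading zeros
    prod = 1
    for rk, sk in zip(rd, sd):
        prod = prod * comb(rk, sk) % p       # comb(rk, sk) = 0 when rk < sk
    return prod

for p in (2, 3, 5, 7, 13):
    for r in range(1, 200):
        for s in range(0, r + 1):
            assert comb(r, s) % p == lucas_rhs(r, s, p)
print("Lucas' congruence holds for all r <= 200 and p in {2, 3, 5, 7, 13}")
```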
 
  • Like
Likes micromass
  • #53
I can kind of patch the hole from my post above (still not wildly confident about it though). Starting with
$$\binom{r}{s}-\prod_k{\binom{r_k}{s_k}} = 0\bmod p$$
multiply by ##s!(r-s)!## to get
$$r!-s!(r-s)!\prod_k{\binom{r_k}{s_k}} = 0\bmod p$$
Then the rest of the proof for the case where ##r>p## and one or both of ##s,(r-s)>p## is the same: since both terms in the difference are divisible by p, the whole thing is ##0\bmod p##. This way all the nasty negative factorials stay hidden in the binomial coefficients.
 
  • Like
Likes micromass
  • #54
Well, I'm going to give an outstanding problem a chance (previous advanced #1).

We start by writing ##G/H=\{g_iH \,|\, 1 \leq i \leq p\}##, where ##p \mid |G|## is the smallest of all prime divisors.
The left multiplication ##L_x : G/H \rightarrow G/H \; , \; L_x(g_iH) = xg_iH## defines a group homomorphism ##L## from ##G## to ##Sym(G/H)##, whose kernel is thus a normal subgroup of ##G## and whose image is a subgroup of ##Sym(G/H)##. This means ##im L## is of finite order, and the largest prime that can divide it is ##p##, because ##|Sym(G/H)|=p!##
Since ##im L \cong G/ker L## and ##p## is the smallest prime in ##|G|##, all greater primes in ##|G|## have to be canceled out by ##|ker L|##.
But there is only one factor ##p## available in ##p! \;##, so ##|im L| \in \{1,p\}##

##im L = \{1\}## would imply ##L_x(g_iH)=xg_iH=g_iH## for all ##x## of ##G## and all ##i##. However, this would imply ##g^{-1}xg \in H## for all ##g \in G## and ##x\in G##, which is obviously wrong for at least one element ##x=g_i \notin H## (take ##g \in H##).

Thus ##im L \cong \mathbb{Z}_p \cong G/ker L##. Since we haven't specified our cosets ##g_iH## so far, we may choose an element ##a \in G## such that ##L_a## generates ##im L## and ##a \notin H##. (## a \in H \Rightarrow g_iH = L_{g_i}(H) = L_{a^n}(H) = H## which cannot be the case for all ##i##.)

Now for ##x \in ker L## there is ##xH = L_x(H) = id_{G/H} (H) = H## which means ##ker L \subseteq H## and therefore ##ker L = H## because ##|G/H|\,=\,p\,\,=\, |im L| =\,|G/ ker L |##. As the kernel of a group homomorphism, ##H## has to be normal.

Let's now consider a group ##G## of order ##|G|=pq## with primes ##p \leq q##.
By the previous part we have a short exact sequence ##H=\mathbb{Z}_q \rightarrowtail G \twoheadrightarrow \mathbb{Z}_p = G/H##
Let ##\mathbb{Z}_q = <b\, | \, b^q = 1> \triangleleft \; G## be the normal subgroup and ##\mathbb{Z}_p = <a\, | \, a^p = 1>##
We have seen that ##L_x(\mathbb{Z}_q ) = a^n \mathbb{Z}_q## for some ##n##, i.e. we may write ##G = \mathbb{Z}_q \times \mathbb{Z}_p## as sets. Furthermore we have ##a\cdot b=b^m\cdot a## for some ##m##, because right and left cosets of ##\mathbb{Z}_q = <b>## are equal. In addition ##m## and the prime ##q## have to be coprime. Thus we have an equation ##1=rm+sq##.

Now ##\varphi (a)## with ##\varphi (a)(b) = a^{-1}b^ma## and ##\psi(a)## with ##\psi(a)(b) = ab^ra^{-1}## define two group homomorphisms of ##\mathbb{Z}_q## which are inverses of each other. Thus ##\varphi(a)## is an automorphism of ##\mathbb{Z}_q## and ##\varphi : \mathbb{Z}_p \rightarrow Aut(\mathbb{Z}_q) \cong \mathbb{Z}^*_q \cong \mathbb{Z}_{q-1}## a group homomorphism.
This establishes a semi-direct product ##G \cong \mathbb{Z}_q \rtimes_{\varphi} \mathbb{Z}_p##.

If ##\left. p \right| (q-1)## we get ##\frac{q-1}{p}## many automorphisms in the semi-direct product, which all give rise to isomorphic groups ##G##. (I'm not quite sure here. Maybe there are as many isomorphism classes as there are prime factors in ##\frac{q-1}{p}##.)

If ##p \nmid (q-1)## then the entire subgroup ## \mathbb{Z}_p## is mapped to the identity in ##Aut( \mathbb{Z}_q)## which covers the case of a direct product ##G \cong \mathbb{Z}_q \times \mathbb{Z}_p##.
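As a concrete sanity check of this semi-direct-product picture (my own toy example, not part of the argument): for ##p=3,\ q=7## we have ##p \mid q-1##, and ##m=2## has order 3 mod 7, so the rule ##a\cdot b = b^m\cdot a## really does define a nonabelian group of order 21.

```python
# Elements of Z_7 x| Z_3 are pairs (i, j); the multiplication encodes a*b = b^m*a:
# (i, j) * (k, l) = (i + m^j * k mod 7, j + l mod 3).  The asserts confirm the group
# axioms by brute force and that the group is nonabelian.
from itertools import product

p, q, m = 3, 7, 2   # m has multiplicative order p modulo q

def mul(x, y):
    (i, j), (k, l) = x, y
    return ((i + pow(m, j, q) * k) % q, (j + l) % p)

G = list(product(range(q), range(p)))
e = (0, 0)

assert all(mul(e, g) == mul(g, e) == g for g in G)                      # identity
assert all(any(mul(g, h) == e for h in G) for g in G)                   # inverses
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in G for b in G for c in G)                            # associativity
assert any(mul(a, b) != mul(b, a) for a in G for b in G)                # nonabelian
print("Z_7 x| Z_3 with m = 2 is a nonabelian group of order", len(G))
```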
 
  • Like
Likes micromass
  • #55
fresh_42 said:
Well, I'm going to give an outstanding problem a chance (previous advanced #1).

We start by writing ##G/H=\{g_iH| 1 \leq i \leq p\}## with ##\left. p \, \right| |G|## as the smallest of all prime divisors.
The left multiplication ##L_x : G/H \rightarrow G/H \; , \; L_x(g_iH) = xg_iH## defines a group homomorphism from ##G## to ##Sym(G/H)## whose kernel is thus a normal subgroup of ##G## and its image a subgroup of ##Sym(G/H)##. This means ##im L## is of finite order and the greatest possible divisor is ##p##, because ##|Sym(G/H)|=p! ##
Since ##im L \cong G/ker L## and ##p## is the smallest prime in ##|G|##, all greater primes have to be canceled out by ##|ker L|##.
But there is only one ##p## available in ##p! \;##, so ##|im L| \in \{1,p\}##

##im L = \{1\}## would imply ##L_x(g_iH)=xg_iH=g_iH## for all ##x## of ##G## and all ##i##. However, this would imply ##g^{-1}xg \in H## for all ##g \in G## and ##x\in H##, which is obviously wrong for at least one element ##x=g_i##.

Thus ##im L \cong \mathbb{Z}_p \cong G/ker L##. Since we haven't specified our cosets ##g_iH## so far, we may choose an element ##a \in G## such that ##L_a## generates ##im L## and ##a \notin H##. (## a \in H \Rightarrow g_iH = L_{g_i}(H) = L_{a^n}(H) = H## which cannot be the case for all ##i##.)

Now for ##x \in ker L## there is ##xH = L_x(H) = id_{G/H} (H) = H## which means ##ker L \subseteq H## and therefore ##ker L = H## because ##|G/H|\,=\,p\,\,=\, |im L| =\,|G/ ker L |##. As a kernel of a group homomorphism ##H## has to be normal.

This is ok.

Let's now consider a group ##G## of order ##|G|=pq## with primes ##p \leq q##.
By the previous we have a short exact sequence ##H=\mathbb{Z}_q \rightarrowtail G \twoheadrightarrow \mathbb{Z}_p = G/H##
Let ##\mathbb{Z}_q = <b\, | \, b^q = 1> \triangleleft \; G## be the normal subgroup and ##\mathbb{Z}_p = <a\, | \, a^p = 1>##
We have seen that ##L_x(\mathbb{Z}_q ) = a^n \mathbb{Z}_q## for some ##n##, i.e. we may write ##G = \mathbb{Z}_q \times \mathbb{Z}_p## as sets. Furthermore we have ##a\cdot b=b^m\cdot a## for some ##m##, because right and left cosets of ##\mathbb{Z}_q = <b>## are equal. In addition ##m## and the prime ##q## have to be coprime. Thus we have an equation ##1=rm+sq##.

Now ##\varphi (a)## with ##\varphi (a)(b) = a^{-1}b^ma## and ##\psi(a)## with ##\psi(a)(b) = ab^ra^{-1}## define two group homomorphisms of ##\mathbb{Z}_q## which are inverses of themselves. Thus ##\varphi(a)## is an automorphism of ##\mathbb{Z}_q## and ##\varphi : \mathbb{Z}_p \rightarrow Aut(\mathbb{Z}_q) \cong \mathbb{Z}^*_q \cong \mathbb{Z}_{q-1}## a group homomorphism.
This establishes a semi-direct product ##G \cong \mathbb{Z}_q \rtimes_{\varphi} \mathbb{Z}_p##.

If ##\left. p \right| (q-1)## we get ##\frac{q-1}{p}## many automorphisms in the semi-direct product, which all give rise to isomorphic groups ##G##. (I'm not quite sure here. Maybe there are as many isomorphism classes as there are prime factors in ##\frac{q-1}{p}##.)

If ##p \nmid (q-1)## then the entire subgroup ## \mathbb{Z}_p## is mapped to the identity in ##Aut( \mathbb{Z}_q)## which covers the case of a direct product ##G \cong \mathbb{Z}_q \times \mathbb{Z}_p##.

You should work out the case ##p~\vert~q-1## in some more detail. But also, your proof fails in the case ##p=q##; I'll let you find out where.
 
  • #56
micromass said:
You should work out the case ##p~\vert~q-1## in some more detail. But also, your proof fails in the case ##p=q##; I'll let you find out where.
Yep, found it. You caught me cheating on this semi-/ direct part.
I think I have it now. I'll type it in tomorrow.

...
(1st commandment: You shall not post when tired.)

 
  • Like
Likes ProfuselyQuarky
  • #57
(group question, continued ...)

The starting point: A group ##G## of order ##|G|=pq## with primes ##p \leq q##.
By the previous part we have a short exact sequence ##H=\mathbb{Z}_q = <b \,|\, b^q=1> \triangleleft \;\, G \twoheadrightarrow \mathbb{Z}_p = <a \,|\, a^p=1> = G/H##

Since ##\mathbb{Z}_q## is normal in ##G## we have ##a\mathbb{Z}_q = \mathbb{Z}_q a## and may write ##b=a^{-1}b^m a## for some ##m##.
We can now define a group homomorphism ##\varphi : \mathbb{Z}_p \longrightarrow Aut(\mathbb{Z}_q) \cong \mathbb{Z}^*_q \cong \mathbb{Z}_{q-1} ## by ##\varphi(a)(b)=aba^{-1}=b^m## and get a semi-direct product ##G \cong \mathbb{Z}_q \rtimes_\varphi \mathbb{Z}_p## because ##a \cdot b = \varphi(a)(b) \cdot a##
[Remark: Conversely, all such semi-direct products are groups of order ##pq##. And of course the direct product is also semi-direct, with ##\varphi(a) = id_{\mathbb{Z}_q}##.]

We now consider ##ker \varphi##. Since it is a (normal) subgroup of ##\mathbb{Z}_p## and ##p## is prime, ##ker \varphi \in \{1,\mathbb{Z}_p \}##.

If ##ker \varphi = \mathbb{Z}_p## then ##\varphi (a) = id_{\mathbb{Z}_q}##, so ##ab=ba## and ##G## is abelian; in particular ##\mathbb{Z}_p \; \triangleleft \; G##. The only groups with these conditions are the direct product ##G = \mathbb{Z}_q \times \mathbb{Z}_p## and the cyclic group ##G = \mathbb{Z}_{pq}## (which I previously and mistakenly omitted in post #54). ##\mathbb{Z}_q \times \mathbb{Z}_p \cong \mathbb{Z}_{pq}## in case ##p \neq q##, but we get two different groups in case ##p=q##. [To prove this, one may consider generating elements again, whose orders have to be in ##\{p,p^2\}##. Since ##G## is abelian, one doesn't have to bother about normality or the order of multiplication. ##\mathbb{Z}_{p^2}## has an element of order ##p^2## whereas ##\mathbb{Z}^2_p## has only elements of order ##p## (and ##1## of course). In the case ##p \neq q## we get a generator of ##\mathbb{Z}_{pq}## from the product of two generators of ##\mathbb{Z}_p## and ##\mathbb{Z}_q##.]

This leaves us with the case ##ker \varphi = 1## or a monomorphism ##\varphi : \mathbb{Z}_p \hookrightarrow Aut(\mathbb{Z}_q) \cong \mathbb{Z}_{q-1} =: \{\sigma \,|\, \sigma^{q-1} = 1\}##.
Let us consider two possible groups ##G_\varphi := \mathbb{Z}_q \rtimes_\varphi \mathbb{Z}_p## and ##G_\psi := \mathbb{Z}_q \rtimes_\psi \mathbb{Z}_p## with
##\varphi (a)(b) = b^m \,,\, (m,q)=1## and ##\varphi (a) = \sigma^\mu##
##\psi (a)(b) = b^n \,,\, (n,q)=1## and ##\psi (a) = \sigma^\nu##
We extend the generator ##\sigma## of ##Aut(\mathbb{Z}_q)## to a group homomorphism of (all) ##G## by setting ##\hat{\sigma} \vert_{\mathbb{Z}_q} = \sigma## and ##\hat{\sigma}\vert_{\mathbb{Z}_p} = id_{\mathbb{Z}_p}##.
This gives us an isomorphism ##\hat{\sigma}^{\mu-\nu} : G_\psi \longrightarrow G_\varphi##
$$a \cdot_{\varphi} b = \varphi (a)(b) \cdot_{\varphi} a = \hat{\sigma}^{\mu}(b) \cdot_{\varphi} a = \hat{\sigma}^{\mu-\nu} \psi (a) (b)\cdot_{\varphi} \hat{\sigma}^{\mu-\nu} (a) = \hat{\sigma}^{\mu-\nu} (a \cdot_{\psi} b)$$
and all proper semi-direct products ##{\mathbb{Z}_q} \rtimes {\mathbb{Z}_p}## are isomorphic. [The direct product with ##\varphi \equiv 1## is ruled out here by the condition on the kernel ##\ker \varphi = 1##.]
- hopefully I haven't missed something again -
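And a brute-force illustration of the last claim (again just a toy check, with names ##G_2## and ##G_4## of my own choosing): for ##q=7,\ p=3## the exponents ##m=2## and ##m=4## both give an injective ##\varphi##, and the two resulting semi-direct products are indeed isomorphic; an isomorphism can be found by searching over the images of the two generators.

```python
# G_2 and G_4 are Z_7 x| Z_3 built from m = 2 and m = 4 respectively. A candidate
# isomorphism is determined by where it sends b = (1, 0) and a = (0, 1); we search
# all pairs of images and test the homomorphism property by brute force.
from itertools import product

p, q = 3, 7

def make_mul(m):
    def mul(x, y):
        (i, j), (k, l) = x, y
        return ((i + pow(m, j, q) * k) % q, (j + l) % p)
    return mul

mul2, mul4 = make_mul(2), make_mul(4)
G = list(product(range(q), range(p)))

def power(mul, g, n):
    r = (0, 0)                                   # identity element
    for _ in range(n):
        r = mul(r, g)
    return r

def hom_from_images(b_img, a_img):
    """Send b^i a^j in G_2 to b_img^i a_img^j in G_4; return the map if it is a bijective homomorphism."""
    f = {}
    for i, j in product(range(q), range(p)):
        src = mul2(power(mul2, (1, 0), i), power(mul2, (0, 1), j))   # b^i a^j in G_2
        f[src] = mul4(power(mul4, b_img, i), power(mul4, a_img, j))
    if len(set(f.values())) != len(G):
        return None
    if all(f[mul2(x, y)] == mul4(f[x], f[y]) for x in G for y in G):
        return f
    return None

iso = next(f for bi in G for ai in G if (f := hom_from_images(bi, ai)) is not None)
print("G_2 and G_4 are isomorphic; images of b and a:", iso[(1, 0)], iso[(0, 1)])
```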
 
  • Like
Likes micromass
  • #58
@micromass Was mine just wrong (posts 52, 53)?
 
  • #59
TeethWhitener said:
@micromass Was mine just wrong (posts 52, 53)?
Nope, it was ok. I just missed it.
 
  • Like
Likes TeethWhitener
  • #60
High school questions, 4)

Consider the finite sequence k! + 2, k! + 3, k! + 4, ..., k! + k-1, k! + k with k > 1. Then it's obvious that there are no primes in this sequence. Because, using the definition of the factorial, we have the equivalent sequence k*(k-1)*(k-2)*...*2*1 + 2, k*(k-1)*(k-2)*...*3*2*1 + 3, ... , k*(k-1)*(k-2)*...*2*1 + k-1, k*(k-1)*(k-2)*...*2*1 + k,
and 2 divides the first term, 3 divides the second term, and in general i divides the term k! + i for every i with 2 ≤ i ≤ k.

Now take k = 2017; then we have the sequence:

2017! + 2, 2017! + 3, ... , 2017! + 2016, 2017! + 2017. This sequence contains 2016 consecutive integers, none of which is prime. QED.

Note: we can use the same reasoning to show that 'prime gaps' can be as large as we want them to be.
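A quick computational check of this construction for small k (illustration only, with a naive primality test):

```python
# Verify that k! + 2, ..., k! + k contains no primes and that k! + i is divisible by i.
from math import factorial, isqrt

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

for k in (5, 8, 11):
    f = factorial(k)
    run = [f + i for i in range(2, k + 1)]
    assert all(n % i == 0 for i, n in zip(range(2, k + 1), run))  # i divides k! + i
    assert not any(is_prime(n) for n in run)                      # no primes in the gap
    print(f"k = {k}: the {k - 1} consecutive integers k!+2 .. k!+k are all composite")
```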
 
  • Like
Likes MAGNIBORO, micromass, fresh_42 and 1 other person
  • #61
High school questions, 5a) Find all the 10-digit numbers such that each digit in {0,1,2,3,4,5,6,7,8,9} is used exactly once.

If I understand the question correctly, I have to find how many numbers there are satisfying this condition.

So obviously, 0 can't be the first digit of a number. So, for the first digit, we have 9 possibilities. For the second digit, we have 9 possibilities left (now we can choose 0 as well). For the third digit, 8 possibilities, ...

Ultimately, using the product rule, we have 9*9*8*7*6*5*4*3*2*1 = 3265920 possibilities where each digit is used exactly once.
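And a brute-force cross-check of that count (illustration only): enumerate all permutations of the ten digits and discard those starting with 0.

```python
from itertools import permutations
from math import factorial

count = sum(1 for perm in permutations("0123456789") if perm[0] != "0")
print(count, 9 * factorial(9))   # both are 3265920
```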
 
  • #62
I'm quite sure the numbers should satisfy (a) and (b) together, not separately.

The list in the problem statement seems to have a typo in it, and I don't understand why 9 is not listed.
 
  • Like
Likes member 587159
  • #63
mfb said:
I'm quite sure the numbers should satisfy (a) and (b) together, not separately.

This is right.

The list in the problem statement seems to have a typo in it, and I don't understand why 9 is not listed.

Thank you.
 
  • #64
The 1 is still in there twice.
Unless I missed follow-up posts to this post by @Biker, we don't have all possible points for high school #2 yet.
Biker said:
Challenges for high school:
2-
I will just pretend that the Earth is a perfect sphere.
If you move close to the south pole, you can mark a circle with a circumference of 1 mile around the Earth. Now move 1 mile further to the south and face north.
Move one mile north and you are on the circle; move one mile west and you will come back to the same place (going all the way around the circle); move 1 mile south and you are back at your starting point.

And you can be at the north pole: as the Earth is a sphere, you will form a triangle that leads you back to the same place (disregarding the method I suggested), or you could use the same method with minor changes at the north pole.

Who needs a GPS :P
 
  • #65
I have been trying to solve Advanced Problem no 4 for a long time now, but it seems impossible. Conditions 1 and 5 seem impossible to reconcile. I have been trying an Axiom of Choice (Zorn's Lemma) approach, but I don't get anywhere. Perhaps this is a bad approach...
 
  • #66
Erland said:
I have been trying to solve Advanced Problem no 4 for a long time now, but it seems impossible. Conditions 1 and 5 seem impossible to reconcile. I have been trying an Axiom of Choice (Zorn's Lemma) approach, but I don't get anywhere. Perhaps this is a bad approach...

It does require the axiom of choice. But it's not equivalent to it.

If it's not found by the beginning of September, I'll repost the question in the new challenge thread with a hint.
 
  • #67
micromass said:
It does require the axiom of choice.
I think it requires a result that depends on AC, but not AC directly? At least that is how I know the proof of this problem. (I will stick by the rules and not say more.)
 
  • #68
Krylov said:
I think it requires a result that depends on AC, but not AC directly? At least that is how I know the proof of this problem. (I will stick by the rules and not say more.)

That's correct.
 
  • #69
Hmm, the approach I was investigating does not need the AC, so I guess there is some counterexample to my definition. Can someone find one?

4. Let ##X## denote the set of all bounded real-valued sequences. Prove that a generalized limit exists for any such sequence. A generalized limit is any function ##L:X\rightarrow \mathbb{R}## such that
1) ##L((x_n + y_n)_n) = L((x_n)_n) + L((y_n)_n)##
2) ##L((\alpha x_n)_n) = \alpha L((x_n)_n)##
3) ##\liminf_n x_n \leq L((x_n)_n)\leq \limsup_n x_n##
4) If ##x_n\geq 0## for all ##n##, then ##L((x_n)_n)\geq 0##.
5) If ##y_n = x_{n+1}##, then ##L((x_n)_n) = L((y_n)_n)##
6) If ##x_n\rightarrow x##, then ##L((x_n)_n) = x##.
My first idea was to define a modified sequence where the nth element is the average of the first n elements, and then look for that limit.
$$y_n = \frac{1}{n} \sum_{i=1}^n x_i$$
$$L((x_n)_n) = \lim_{n \to \infty} y_n$$

It should be clear that this satisfies all 6 criteria if that limit exists. That catches all convergent sequences and many divergent sequences, but it does not catch sequences like (1,-1,-1,1,1,1,1, then 8 times -1, 16 times 1 and so on) where the limit does not exist. Let's repeat this step:

$$z_n = \frac{1}{n} \sum_{i=1}^n y_i = \frac{1}{n} \sum_{i=1}^n \frac{1}{i} \sum_{k=1}^i x_k$$

Now you can come up with a sequence where this does not converge either, and I can add a third step, ... but here is the idea: for each sequence, repeat this step as often as necessary to get convergence, and then define this limit as the generalized limit of the sequence.

I didn't find a proof that every sequence has to lead to convergence after a finite number of steps.
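For anyone who wants to play with it, here is a small numerical sketch of the iterated-averaging idea (the block sequence and the code are my own illustration, assuming NumPy is available):

```python
# Build the bouncing sequence from the post (blocks of +1/-1 whose lengths double),
# apply the Cesàro-averaging step repeatedly, and watch how the averages behave.
import numpy as np

def blocks(n_terms):
    """1, then -1 -1, then 1 1 1 1, ... : block j has length 2**j and sign (-1)**j."""
    out, j = [], 0
    while len(out) < n_terms:
        out.extend([(-1) ** j] * 2 ** j)
        j += 1
    return np.array(out[:n_terms], dtype=float)

x = blocks(2 ** 18)
for step in range(1, 4):
    x = np.cumsum(x) / np.arange(1, len(x) + 1)   # one Cesàro-averaging step
    print(f"after {step} averaging step(s): last value {x[-1]:+.4f}, "
          f"tail range {x[len(x)//2:].min():+.4f} .. {x[len(x)//2:].max():+.4f}")
```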
 
  • #70
@mfb, I have been thinking along the same lines, but I didn't consider more than one step, or that the averages need not converge... So you're smarter than me :)
 
