Challenge: Can You Solve These Math Challenges?

  • #51
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials in GF(8). These are cubic polynomials with coefficients in GF(2) that cannot be expressed as products of corresponding polynomials for GF(4) and GF(2). I will use x as an indeterminate here.

For GF(2), the primitive polynomials are ##x + (0 \text{ or } 1)##: x and x + 1.

For GF(4), we consider primitive-polynomial candidates ##x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##: ##x^2##, ##x^2 + 1 = (x + 1)^2##, ##x^2 + x = x(x + 1)##, and ##x^2 + x + 1##. That last one is the only primitive polynomial for GF(4).

For GF(8), we consider primitive-polynomial candidates ##x^3 + (0 \text{ or } 1)x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##: ##x^3##, ##x^3 + 1 = (x^2 + x + 1)(x + 1)##, ##x^3 + x = x(x + 1)^2##, ##x^3 + x + 1##, ##x^3 + x^2 = x^2(x + 1)##, ##x^3 + x^2 + 1##, ##x^3 + x^2 + x = x(x^2 + x + 1)##, and ##x^3 + x^2 + x + 1 = (x + 1)^3##.

Thus, GF(8) has primitive polynomials ##x^3 + x + 1## and ##x^3 + x^2 + 1##.
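As a mechanical check of the case analysis above (a sketch; the helper `value` and the tuple encoding are mine): a cubic over GF(2) is reducible exactly when it has a linear factor, i.e. a root in {0, 1}, so testing p(0) and p(1) suffices.

```python
# Find the irreducible monic cubics over GF(2) by brute force.
# A monic cubic x^3 + a*x^2 + b*x + c over GF(2) is reducible
# iff it has a linear factor, i.e. iff it has a root in {0, 1}.
from itertools import product

def value(coeffs, x):
    """Evaluate x^3 + a*x^2 + b*x + c at x, mod 2."""
    a, b, c = coeffs
    return (x**3 + a * x**2 + b * x + c) % 2

irreducible = [
    (a, b, c)
    for a, b, c in product((0, 1), repeat=3)
    if value((a, b, c), 0) != 0 and value((a, b, c), 1) != 0
]
print(irreducible)  # [(0, 1, 1), (1, 0, 1)] -> x^3+x+1 and x^3+x^2+1
```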

(b) There is a problem here. A basis is easy to define for addition: ##\{1, x, x^2\}##, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus ##(\mathbb{Z}_2)^3##. The multiplicative group is, however, ##\mathbb{Z}_7##, and it omits 0. That group has no nontrivial subgroups, so it's hard to identify a basis for it.

(c) That is a consequence of every finite field ##GF(p^n)## being a subfield of an infinite number of finite fields ##GF(p^{mn})##, each one with a nonzero number of primitive polynomials with coefficients in ##GF(p^n)##. Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.

I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, itself: for all elements a of F relative to F, (x - a) is primitive. Thus, F has N primitive polynomials. For ##GF(p^{mn})## relative to ##GF(p^n)##, I will call the number N(m). One can count all the possible candidate polynomials for ##GF(p^{mn})##, and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If N is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where μ is the Möbius function: ##(-1)^{\text{number of distinct prime factors}}## if its argument is squarefree, and 0 otherwise. I don't know if that is correct for a power of a prime.
 
  • #52
lpetrich said:
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials in GF(8). These are cubic polynomials with coefficients in GF(2) that cannot be expressed as products of corresponding polynomials for GF(4) and GF(2). I will use x as an indeterminate here.

For GF(2), the primitive polynomials are ##x + (0 \text{ or } 1)##: x and x + 1.

For GF(4), we consider primitive-polynomial candidates ##x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##: ##x^2##, ##x^2 + 1 = (x + 1)^2##, ##x^2 + x = x(x + 1)##, and ##x^2 + x + 1##. That last one is the only primitive polynomial for GF(4).

For GF(8), we consider primitive-polynomial candidates ##x^3 + (0 \text{ or } 1)x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##: ##x^3##, ##x^3 + 1 = (x^2 + x + 1)(x + 1)##, ##x^3 + x = x(x + 1)^2##, ##x^3 + x + 1##, ##x^3 + x^2 = x^2(x + 1)##, ##x^3 + x^2 + 1##, ##x^3 + x^2 + x = x(x^2 + x + 1)##, and ##x^3 + x^2 + x + 1 = (x + 1)^3##.

Thus, GF(8) has primitive polynomials ##x^3 + x + 1## and ##x^3 + x^2 + 1##.

(b) There is a problem here. A basis is easy to define for addition: ##\{1, x, x^2\}##, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus ##(\mathbb{Z}_2)^3##. The multiplicative group is, however, ##\mathbb{Z}_7##, and it omits 0. That group has no nontrivial subgroups, so it's hard to identify a basis for it.

(c) That is a consequence of every finite field ##GF(p^n)## being a subfield of an infinite number of finite fields ##GF(p^{mn})##, each one with a nonzero number of primitive polynomials with coefficients in ##GF(p^n)##. Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.

I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, itself: for all elements a of F relative to F, (x - a) is primitive. Thus, F has N primitive polynomials. For ##GF(p^{mn})## relative to ##GF(p^n)##, I will call the number N(m). One can count all the possible candidate polynomials for ##GF(p^{mn})##, and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If N is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where μ is the Möbius function: ##(-1)^{\text{number of distinct prime factors}}## if its argument is squarefree, and 0 otherwise. I don't know if that is correct for a power of a prime.
Although your language is in part a bit unusual, the basic ideas are correct.
To finish what you've deduced for part a), we can write
$$
\mathbb{F}_8 \cong \mathbb{F}_2[x]/(x^3+x+1) \cong \mathbb{F}_2[x] / (x^3+x^2+1)
$$
Part c) is basically correct, although I haven't checked the formulas. But to consider ##\mathbb{F}_{p^n}## is the right idea. The argument can be simplified a lot. All these fields are algebraic over their prime field ##\mathbb{F}_p##, so all of them have to be included in the algebraic closure ##\mathbb{A}_p##. With each new ##n## we get a larger field, all of them must be part of ##\mathbb{A}_p##, and ##n## doesn't stop. If we want to "construct" ##\mathbb{A}_p##, then it can be shown that
$$
\mathbb{A}_p = \cup_{n \in \mathbb{N}} \mathbb{F}_{p^{n!}} \text{ with the chain } \mathbb{F}_{p^{1!}} < \mathbb{F}_{p^{2!}} < \ldots
$$
where the inclusions of subfields are strict.

As a hint for part b) - and this is the usual way to do it in all of these cases - simply define a number ##\xi## which satisfies ##\xi^3+\xi +1=0##. It is the same thing we do in the step from ##\mathbb{R}## to ##\mathbb{C} \cong \mathbb{R}[x]/(x^2+1) \cong \mathbb{R}[ i ]##: we define a number ## i ## which satisfies ##i^2 + 1 = 0##, and we simply call it ##i##. Now try to express all elements of ##\mathbb{F}_8## in terms of ##\mathbb{F}_2=\{0,1\}## and ##\xi##, i.e. ##\mathbb{F}_8 = \mathbb{F}_2[\xi]\,.##
 
Last edited:
  • Like
Likes lpetrich
  • #53
QuantumQuest said:
I appreciate your efforts but can this way lead to any safe conclusion? Can you come up with some other way?
Maybe for another challenge :p
 
  • Like
Likes QuantumQuest
  • #54
I will now continue with Problem 1
About (b), one defines GF(8) in terms of GF(2) as polynomials ##(0 \text{ or } 1)\cdot x^2 + (0 \text{ or } 1)\cdot x + (0 \text{ or } 1)##. One defines addition and multiplication in the usual way for polynomials, but for multiplication, one must divide by one of the primitive polynomials and take the remainder. Another way of interpreting this is to suppose that x is one of the roots of one of the primitive polynomials. Those primitive polynomials: ##x^3 + x + 1## and ##x^3 + x^2 + 1##.

(c) I have an insight into the issue of how many primitive polynomials an extended field has relative to its original field. I had earlier found a formula for N(m), the number of primitive polynomials of a field that consists of polynomials up to degree m-1 in the original field: ##F' \cong (F)^m##. In terms of the number N of elements of F,
$$ \sum_{\sum_k k m_k = n} \prod_k P(N(k),m_k) = N^n $$
where P is the Pochhammer symbol ##P(a,m) = a(a+1)\cdots(a+m-1)/m!##.

This looks like a difficult equation to solve for the N(k)'s, but there is a nice trick for doing so. Multiply by ##s^n## and add:
$$ \prod_k \left( \sum_m P(N(k),m) s^{km} \right) = \sum_n (Ns)^n $$
Use the negative-power generalization of the binomial theorem:
$$ \prod_k (1 - s^k)^{-N(k)} = (1 - Ns)^{-1} $$
Take the logarithm:
$$ - \sum_k N(k) \log (1 - s^k) = - \log (1 - Ns) $$
Expand in powers of s, using the familiar log(1+x) series:
$$ \sum_{m|n} \frac1m N(n/m) = \frac1n N^n $$
Instead of complicated nonlinear equations in the N(k)'s, we have linear equations in them. That should make it easier to solve for them.
 
  • #55
lpetrich said:
I will now continue with Problem 1
About (b), one defines GF(8) in terms of GF(2) as polynomials ##(0 \text{ or } 1)\cdot x^2 + (0 \text{ or } 1)\cdot x + (0 \text{ or } 1)##. One defines addition and multiplication in the usual way for polynomials, but for multiplication, one must divide by one of the primitive polynomials and take the remainder. Another way of interpreting this is to suppose that x is one of the roots of one of the primitive polynomials. Those primitive polynomials: ##x^3 + x + 1## and ##x^3 + x^2 + 1##.

(c) I have an insight into the issue of how many primitive polynomials an extended field has relative to its original field. I had earlier found a formula for N(m), the number of primitive polynomials of a field that consists of polynomials up to degree m-1 in the original field: ##F' \cong (F)^m##. In terms of the number N of elements of F,
$$ \sum_{\sum_k k m_k = n} \prod_k P(N(k),m_k) = N^n $$
where P is the Pochhammer symbol ##P(a,m) = a(a+1)\cdots(a+m-1)/m!##.

This looks like a difficult equation to solve for the N(k)'s, but there is a nice trick for doing so. Multiply by ##s^n## and add:
$$ \prod_k \left( \sum_m P(N(k),m) s^{km} \right) = \sum_n (Ns)^n $$
Use the negative-power generalization of the binomial theorem:
$$ \prod_k (1 - s^k)^{-N(k)} = (1 - Ns)^{-1} $$
Take the logarithm:
$$ - \sum_k N(k) \log (1 - s^k) = - \log (1 - Ns) $$
Expand in powers of s, using the familiar log(1+x) series:
$$ \sum_{m|n} \frac1m N(n/m) = \frac1n N^n $$
Instead of complicated nonlinear equations in the N(k)'s, we have linear equations in them. That should make it easier to solve for them.
Please call the polynomials irreducible, because that's what counts. A polynomial is primitive if its coefficients are all coprime, which doesn't make much sense over a field in general and especially over ##\mathbb{F}_2##. There are primitive elements of a field extension, namely those which generate it, say ##\xi##. In our example, if we take the easier polynomial of the two, we have ##\mathbb{F}_8=\mathbb{F}_2[\xi]## with ##\xi^3+\xi+1=0##. The polynomial ##x^3+x+1## is called the minimal polynomial and it is irreducible over ##\mathbb{F}_2##, i.e. it cannot be factored in ##\mathbb{F}_2[x]##, which in this case automatically means it has no zeros in ##\mathbb{F}_2##.

Now obviously ##\xi \neq 0##, so ##\mathbb{F}^*_8 = \mathbb{F}_8 - \{0\} \cong \mathbb{Z}_7 = \langle a\,\vert \,a^7=1 \rangle## and we can choose ##a=\xi##. From there you can directly calculate all ##\xi^n## in terms of a linear ##\mathbb{F}_2-##basis of ##\mathbb{F}_8## given by ##\{1,\xi,\xi^2\}##. These seven vectors are what is asked for, because with them, we can see all addition and multiplication rules without having to fill in complete tables.
 
  • Like
Likes Janosh89
  • #56
As a hint for problem 3: termination conditions are stated to be:
- - - -

(For avoidance of doubt, this refers to playing one 'full' game, which is complete upon termination, and termination occurs on the first occurrence of (a) result of coin toss equals ##\text{Other}## or (b) the player elects to stop.)

If you like, you may include an additional condition: (c) the carnival operator doesn't want you to play all day and so will stop you and force a "cash out" if you've hit a payoff of 1 million (or 1,000,001 or 1,000,002 -- in the case of overshoots).

- - - -

another hint: there are several ways to solve the problem. One technique that I particularly like would be familiar to Pascal, and, I think, Cauchy would approve.
 
  • #57
StoneTemplePython said:
If you like, you may include an additional condition: (c) the carnival operator doesn't want you to play all day and so will stop you and force a "cash out" if you've hit a payoff of 1 million (or 1,000,001 or 1,000,002 -- in the case of overshoots).
At least for me "find a solution that would profit from this idea" is much more difficult than the original problem. I know ways to solve the problem, but no way where considering a payoff of 1 million would be involved.
 
  • #58
More on Problem 1
For a field F with N elements, the number of irreducible monic polynomials of degree n I call N(n), and I'd shown
$$ \sum_{\sum_k k m_k = n} \left( \prod_k P(N(k),m_k) \right) = N^n = \sum_{m|n} \frac{n}{m} N(n/m) $$
taking my final expression in my previous post and multiplying it by n.

This calls for a Möbius transform, where
$$ f(n) = \sum_{m|n} g(m) \longleftrightarrow g(n) = \sum_{m|n} \mu(m) f(n/m) $$
where ##\mu(n)## is the Möbius function of n.

Using it here gives
$$ N(n) = \frac{1}{n} \sum_{m|n} \mu(m) N^{n/m} $$
Since ##\mu(n)## can be negative as well as positive, proving that N(n) is always positive for N > 1 is a challenge.
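The formula can be sanity-checked numerically against a brute-force count over GF(2). This sketch hand-rolls μ and represents polynomials as integer bitmasks; all names are mine, nothing comes from a library:

```python
# Check N(n) = (1/n) * sum_{d|n} mu(d) * N^(n/d) against a brute-force
# count of irreducible monic polynomials over GF(2).
def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def formula(N, n):
    return sum(mobius(d) * N ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def poly_mul(p, q):
    """Carry-less (GF(2)) multiplication of polynomials stored as bitmasks."""
    r = 0
    while p:
        if p & 1:
            r ^= q
        p >>= 1
        q <<= 1
    return r

def count_irreducible(n):
    monic = lambda d: range(1 << d, 1 << (d + 1))   # degree-d monic polys
    reducible = set()
    for d in range(1, n):                            # all products of lower degrees
        for a in monic(d):
            for b in monic(n - d):
                reducible.add(poly_mul(a, b))
    return sum(1 for p in monic(n) if p not in reducible)

for n in range(1, 7):
    assert count_irreducible(n) == formula(2, n)
print([formula(2, n) for n in range(1, 7)])  # [2, 1, 2, 3, 6, 9]
```

Degree 3 gives 2, matching the two irreducible cubics found earlier in the thread.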
 
  • #59
mfb said:
At least for me "find a solution that would profit from this idea" is much more difficult than the original problem. I know ways to solve the problem, but no way where considering a payoff of 1 million would be involved.

I could argue it either way as to which setup is more intuitive... which is why I put (c) as optional.

The original problem, literally interpreted, has a countable set of outcomes / state space. The use of (c) truncates that and makes the set have a finite number of outcomes-- which, I think, may be easier for some to work with.
 
  • #60
Solution to #3
The average win for heads or tails is 2 with probability 4/5.
A risk neutral player will play as long as her expectation is positive. She will quit if her expectation is ≤ 0.
If she has accrued x, then her expectation for the next play is 2⋅4/5 - x⋅1/5. That is positive for x < 8 and ≤ 0 for x ≥ 8.
Thus she will quit when x ≥ 8, assuming "other" has not occurred.

We must now determine the expected value of that strategy. Let p = 2/5.
If she gets to x = 7, then with prob 2p her expected win will be 9.
There are various ways to get to x = 7,
by 7 1's with prob p⁷, or
by 4 1's and a 3 with prob 5⋅p⁵, or
by 2 3's and a 1 with prob 3⋅p³.
Thus the expected win via x = 7 is (p⁷⋅2p + 5p⁵⋅2p + 3p³⋅2p)⋅9.

She can also win by having x = 6 and getting a 3, and there are various ways to get x = 6,
by 6 1's with prob p⁶, or
by 3 1's and a 3 with prob 4p⁴, or
by 2 3's with prob p².
Each of these pay 9 with total prob = p⁷ + 4p⁵ + p³.

Finally, she can also win by having x = 5 and getting a 3. There are 2 ways of getting x = 5,
by 5 1's with prob p⁵, or
by 2 1's and a 3 with prob 3p³.
Each of these pay 8 with total prob = p⁶ + 3p⁴.

Adding the three cases we get the expected win to be 3.37, which is what a risk-neutral player should pay.
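For anyone wanting to check the arithmetic, the three cases tally up as follows (a quick sketch; the variable names are mine):

```python
# Tally the three winning cases above with p = 2/5.
p = 0.4
via_7 = 9 * 2 * p * (p**7 + 5 * p**5 + 3 * p**3)  # reach 7, then pay 8 or 10 (avg 9)
via_6 = 9 * (p**7 + 4 * p**5 + p**3)              # reach 6, then draw a 3 -> pays 9
via_5 = 8 * (p**6 + 3 * p**4)                     # reach 5, then draw a 3 -> pays 8
expected = via_7 + via_6 + via_5
print(round(expected, 4))  # 3.3694
```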
 
  • Like
Likes mfb and StoneTemplePython
  • #61
Zafa Pi said:
Solution to #3
The average win for heads or tails is 2 with probability 4/5.
A risk neutral player will play as long as her expectation is positive. She will quit if her expectation is ≤ 0.
If she has accrued x, then her expectation for the next play is 2⋅4/5 - x⋅1/5. That is positive for x < 8 and ≤ 0 for x ≥ 8.
Thus she will quit when x ≥ 8, assuming "other" has not occurred.

We must now determine the expected value of that strategy. Let p = 2/5.
If she gets to x = 7, then with prob 2p her expected win will be 9.
There are various ways to get to x = 7,
by 7 1's with prob p⁷, or
by 4 1's and a 3 with prob 5⋅p⁵, or
by 2 3's and a 1 with prob 3⋅p³.
Thus the expected win via x = 7 is (p⁷⋅2p + 5p⁵⋅2p + 3p³⋅2p)⋅9.

She can also win by having x = 6 and getting a 3, and there are various ways to get x = 6,
by 6 1's with prob p⁶, or
by 3 1's and a 3 with prob 4p⁴, or
by 2 3's with prob p².
Each of these pay 9 with total prob = p⁷ + 4p⁵ + p³.

Finally, she can also win by having x = 5 and getting a 3. There are 2 ways of getting x = 5,
by 5 1's with prob p⁵, or
by 2 1's and a 3 with prob 3p³.
Each of these pay 8 with total prob = p⁶ + 3p⁴.

Adding the three cases we get the expected win to be 3.37, which is what a risk-neutral player should pay.

Nicely done.

For the benefit of other readers, I'd belabor the point that you first came up with a stopping rule: to stop when the expected value is no longer positive.

The expected cost of playing another round goes up in proportion to your current stake -- specifically the proportion is ##\frac{1}{5}##. On the other hand the expected benefit of another play is constant. Hence there is a single point of interest -- the maximum -- which you found at ##8##.
- - - -
Part 2 then is figuring out the expected value, now that you have the stopping rule in place. Your careful approach is how -- I imagine-- Fermat would solve the problem.
- - - - - - - - - - - - - - - - - - - -
It may help some people to see a state diagram of the process to work through those path probabilities. I've dropped this in below.

There is a dummy state 's' which denotes the starting state. All other node labels correspond to the stake value at that state. This should be mostly interpretable, though the software I use is a bit clunky at times, e.g. with respect to node 5.

[Image: A.png (state diagram of the process)]


- - - -
And if one is thinking about the process in reverse -- i.e. (expected values probabilistically) flowing back to state 's', people may want to consider the reversed stated diagram, below.

[Image: A_reversed.png (reversed state diagram)]
 

Last edited:
  • #62
StoneTemplePython said:
Part 2 then is figuring out the expected value, now that you have the stopping rule in place. Your careful approach is how -- I imagine-- Fermat would solve the problem.
I have a picture of Fermat and he talks to me, so I can't take full credit.
QuantumQuest said:
optional:
How many rounds would it take on average for the game to terminate? (You may assume a mild preference for shorter vs longer games in the event of any tie breaking concerns.)

Now suppose the player doesn't care about the score and just loves flipping coins -- how long will the game take to terminate, on average, in this case? (by @StoneTemplePython)
Solution to optional:
The expected number of rounds for the game to terminate = ∑n⋅prob game ends on nth round, n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" is rolled at the nth turn + prob game terminates by a win at the nth roll.
Let p = ⅖.

Round)
1) ⅕ + 0
2) (either a 1 or a 3) 2p⋅⅕ + 0
3) (either 1&1, or 1&3, or ...) 4p²⋅⅕ + (3 3s) p³
4) (no 3s) p³⋅⅕ + (1 3 & 2 1s) 3p³⋅⅕ + (2 3s & 1 1) 3p³⋅⅕ + (2 3s & a 1 then any) 6p⁴
5) (no 3s) p⁴⋅⅕ + (1 3 & 3 1s) 4p⁴⋅⅕ + (3 1s & 3 then 3) 4p⁵
6) (no 3s) p⁵⋅⅕ + (1 3 & 4 1s) 5p⁵⋅⅕ + (4 1s & 3 then any) 10p⁶ + (5 1s then 3) p⁶
7) (no 3s) p⁶⋅⅕ + (6 1s then 3) p⁷
8) (no 3s) p⁷⋅⅕ + (7 1s then any) 2p⁸
Now multiply the round by its total prob and add them up to get about 2.8 expected rounds.

If she just keeps on flipping regardless of score the expected number of rounds = ∑n⋅(⅘)ⁿ⁻¹⋅⅕ for n ≥ 1, which = 5.
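The last figure is just the mean of a geometric distribution with success probability ⅕; a quick numeric check of the partial sum:

```python
# Expected rounds if she flips until "other" (probability 1/5 each round):
# geometric distribution with mean 1/(1/5) = 5.  Partial-sum check:
expected = sum(n * (4 / 5) ** (n - 1) * (1 / 5) for n in range(1, 1000))
print(round(expected, 6))  # 5.0
```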
 
  • #63
Zafa Pi said:
I have a picture of Fermat and he talks to me, so I can't take full credit.

I liked this quite a bit. It could also be quite useful for problem 6.

Zafa Pi said:
The expected number of rounds for the game to terminate = ∑n⋅prob game ends on nth round, n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" is rolled at the nth turn + prob game terminates by a win at the nth roll...

If she just keeps on flipping regardless of score the expected number of rounds = ∑n⋅(⅘)ⁿ⁻¹⋅⅕ for n ≥ 1, which = 5.

This is right, and gives a quick and easy upper bound on the expected number of rounds with the stopping rule in place.

Zafa Pi said:
Solution to optional:
The expected number of rounds for the game to terminate = ∑n⋅prob game ends on nth round, n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" is rolled at the nth turn + prob game terminates by a win at the nth roll.
Let p = ⅖.

Round)
1) ⅕ + 0
2) (either a 1 or a 3) 2p⋅⅕ + 0
3) (either 1&1, or 1&3, or ...) 4p²⋅⅕ + (3 3s) p³
4) (no 3s) p³⋅⅕ + (1 3 & 2 1s) 3p³⋅⅕ + (2 3s & 1 1) 3p³⋅⅕ + (2 3s & a 1 then any) 6p⁴
5) (no 3s) p⁴⋅⅕ + (1 3 & 3 1s) 4p⁴⋅⅕ + (3 1s & 3 then 3) 4p⁵
6) (no 3s) p⁵⋅⅕ + (1 3 & 4 1s) 5p⁵⋅⅕ + (4 1s & 3 then any) 10p⁶ + (5 1s then 3) p⁶
7) (no 3s) p⁶⋅⅕ + (6 1s then 3) p⁷
8) (no 3s) p⁷⋅⅕ + (7 1s then any) 2p⁸
Now multiply the round by its total prob and add them up to get about 2.8 expected rounds.

The result here is awfully close but not quite right. I'd encourage you to write out the exact probabilities in the post.

For others benefit, I'd state it as having a counter vector

##\mathbf c = \begin{bmatrix}
1\\
2\\
\vdots\\
7\\
8
\end{bmatrix}##

and a probability of terminating on that turn vector

##\mathbf p= \begin{bmatrix}
?\\
?\\
\vdots\\
?\\
?
\end{bmatrix}
##

all entries in ##\mathbf p## would be real non-negative. And since we know that the game must terminate in one of those 8 rounds, and the termination times correspond to mutually exclusive events, the probability vector sums to one, i.e.

##\mathbf 1^T \mathbf p = \sum_{k=1}^8 p_k = 1##
- - - -
So, for the expected number of turns calculation, you have

##\text{expected time till termination} = \mathbf c^T \mathbf p = \sum_{k=1}^8 (c_k) p_k = \sum_{k=1}^8 (k)p_k##

- - - -
Interesting point: if the game starts at a certain other state -- not "s" though-- one gets ##2.838## as the expected rounds till termination. The expected time from "s", then, is strictly greater than this (inclusive of any rounding effects).
 
Last edited:
  • #64
StoneTemplePython said:
The result here is awfully close but not quite right. I'd encourage you to write out the exact probabilities in the post.
Change ⅕ to p/2.
1•p/2 = .2
2•2p⋅p/2 = .32
3•(2p³ + p³ = 3p³) = .576
4•((p³ + 3p³ + 3p³ (corrected))⋅p/2 + 6p⁴ = 9.5p⁴) = .9728
5•(5p⁴⋅p/2 + 4p⁵ = 6.5p⁵) = .3328
6•(6p⁵⋅p/2 + 11p⁶ = 14p⁶) = .344
7•(p⁶⋅p/2 + p⁷ = 1.5p⁷) = .0172
8•(p⁷⋅p/2 + 2p⁸ = 2.5p⁸) = .013

Adding the bolds = 2.7758 ≈ 2.8.
You're right there was an error at round 4, but to the nearest tenth it's still the same. Good looking out.
I waited for others to solve it, but think no one did because it was a bit messy and needed careful attention. The only real idea was the stopping rule.
 
  • #65
StoneTemplePython said:
not quite right
Wait, I had round 4 correct (except the first bold + shouldn't have been bold).
So where is the error?
 
  • #66
Zafa Pi said:
Wait, I had round 4 correct (except the first bold + shouldn't have been bold).
So where is the error?

let me try to transcribe the below -- I'm not concerned about minor rounding nits -- to the first or second decimal is fine.

(disclaimer: what I show below may look inconsistent on significant figures as I'm copying and pasting from python, etc.)

Zafa Pi said:
Change ⅕ to p/2.
1•p/2 = .2
2•2p⋅p/2 = .32
3•(2p³ + p³ = 3p³) = .576
4•((p³ + 3p³ + 3p³ (corrected))⋅p/2 + 6p⁴ = 9.5p⁴) = .9728
5•(5p⁴⋅p/2 + 4p⁵ = 6.5p⁵) = .3328
6•(6p⁵⋅p/2 + 11p⁶ = 14p⁶) = .344
7•(p⁶⋅p/2 + p⁷ = 1.5p⁷) = .0172
8•(p⁷⋅p/2 + 2p⁸ = 2.5p⁸) = .013

Adding the bolds = 2.7758 ≈ 2.8.
You're right there was an error at round 4, but to the nearest tenth it's still the same. Good looking out.
I waited for others to solve it, but think no one did because it was a bit messy and needed careful attention. The only real idea was the stopping rule.

##\mathbf v :=
\left[\begin{matrix}0.2\\0.32\\0.576\\0.9728\\0.3328\\0.344\\0.0172\\0.013\end{matrix}\right] = \begin{bmatrix}
1\\
2\\
3\\
4\\
5\\
6\\
7\\
8
\end{bmatrix} \circ
\left[\begin{matrix}0.2\\0.16\\0.192\\0.2432\\0.06656\\0.0573333333333333\\0.00245714285714286\\0.001625\end{matrix}\right]
= \mathbf c \circ \mathbf p##

where ##\circ## denotes element wise multiplication (Hadamard product).

I believe I got the transcription right but let me know if I missed something.
- - - -

going through it:

as you've said:
##\mathbf c^T \mathbf p = \text{sum}\big(\mathbf v\big) = \mathbf 1^T \mathbf v = 2.7758##

But
##\mathbf 1^T \mathbf p = \sum_{k=1}^8 p_k = 0.923175 \lt 1## which means you haven't included all paths that lead to termination. For avoidance of doubt, we know that all paths terminate with probability one. We know it for several reasons... you've already implicitly shown this by getting the upper bound case -- i.e. the expected time of 5, for the person who just loves coin tossing.

- - - -
I know of at least two other approaches besides enumeration that could get you there -- I tend to think they are easier though enumeration is a perfectly valid approach.

- - - -
edit:
your probabilities vector is off by just under 8 percentage points in total. This is entirely attributable to just one of the slots... i.e. you are correct in ##7## of the ##8## cases you've listed, but one of them is off by ##\approx 8## percentage points
 
Last edited:
  • Like
Likes Zafa Pi
  • #67
StoneTemplePython said:
which means you haven't included all paths that lead to termination.
OK, OK, you are correct. Nice observation.
My error was in round 4. I omitted a 3p⁴ = .0768 which, with exact calculations, brings the total probability to 1.
The expected number of rounds then becomes 3.0831744.
So what is easier than enumeration? Run a zillion trials on a computer and take an average?
Thanks.
 
  • Like
Likes StoneTemplePython
  • #68
Zafa Pi said:
OK,OK, you are correct. Nice observation.
My error was in round 4. I omitted a 3p4 = .0768 which, with exact calculations brings the total probability to 1.
The expected number of rounds then becomes 3.0831744.
So what is easier than enumeration? Run a zillion trials on a computer and take an average?
Thanks.

Simulations work well in practice, though they wouldn't count for credit here-- something more 'exact' is needed in the spirit of the problem and rules 1 and 4.

My preferred approach is to (a) think forward to reachable termination points i.e. ##\{8,9,10,0\}## -- given you have the stopping rule in place-- and these become your base case(s). Now apply backward induction.

This was my opaque reference earlier:

StoneTemplePython said:
One technique that I particularly like would be familiar to Pascal, and, I think, Cauchy would approve.

though perhaps I could have mentioned Bellman and some others as well.

edit:
Strictly speaking I was thinking of the Pascal approach of drawing the tree and inducting backward in context of pricing this to be a 'fair game'. (Given that you have a stopping rule in place, this is not unlike a Problem of Points.)

The approach is quite flexible, though, and can be easily modified to get expected time till termination.
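A minimal sketch of the backward-induction idea described above, under the thread's setup (each round adds 1 or 3 with probability 2/5 each, "other" with probability 1/5 ends the game with nothing, and the player stops at a stake of 8 or more); the state encoding and names are mine:

```python
# Backward induction over stake states, per the stopping rule x >= 8.
# Terminal states {8, 9, 10} pay the stake; busting ("other") pays 0.
P = 2 / 5   # probability of drawing a 1, same for a 3; "other" is 1/5

value = {8: 8.0, 9: 9.0, 10: 10.0}   # fair value at each terminal stake
rounds = {8: 0.0, 9: 0.0, 10: 0.0}   # expected remaining rounds
for x in range(7, -1, -1):           # bust (prob 1/5) contributes 0 value
    value[x] = P * value[x + 1] + P * value[x + 3]
    rounds[x] = 1 + P * rounds[x + 1] + P * rounds[x + 3]

print(round(value[0], 4))    # 3.3694    -> fair price of the game
print(round(rounds[0], 7))   # 3.0831744 -> expected rounds till termination
print(round(rounds[1], 3))   # 2.838     -> from the state mentioned in post #63
```

One linear pass over 8 states answers both the pricing and the expected-rounds questions, with no paths to enumerate.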
 
Last edited:
  • #69
StoneTemplePython said:
Simulations work well in practice, though they wouldn't count for credit here-- something more 'exact' is needed in the spirit of the problem and rules 1 and 4.

My preferred approach is to (a) think forward to reachable termination points i.e. ##\{8,9,10,0\}## -- given you have the stopping rule in place-- and these become your base case(s). Now apply backward induction.

This was my opaque reference earlier:
though perhaps I could have mentioned Bellman and some others as well.

edit:
Strictly speaking I was thinking of the Pascal approach of drawing the tree and inducting backward in context of pricing this to be a 'fair game'. (Given that you have a stopping rule in place, this is not unlike a Problem of Points.)

The approach is quite flexible, though, and can be easily modified to get expected time till termination.
If I understand you correctly it seems as though you will eventually end up with all of the same paths as I did, and your approach isn't any faster.
 
  • #70
Zafa Pi said:
If I understand you correctly it seems as though you will eventually end up with all of the same paths as I did, and your approach isn't any faster.

It depends, I suppose, on how you are generating your paths. The idea is to collapse things/ take advantage of overlapping subproblems. Think trellis lattice or recombining tree (maybe an abuse of language but a common term in finance).

The approach is a single linear scan through the states-- O(n) with bare minimal coefficient-- to get the answer to both the fair pricing and expected rounds till termination questions-- again, assuming you have a stopping rule in place. The approach also has the virtue of making it impossible to double count or omit paths.

(For what it's worth, another way of tackling the problem reduces to (sparse) matrix multiplication.)

I'm happy to post the approach at the end once this thread is closed out.

- - - -
Problem 6 is still open.

I'd be much obliged if someone tackles it. Depending on how one sets up the problem, it is either straightforward or an impossible task.
 
  • #71
... and the ##\mathbb{F}_8-##basis over ##\mathbb{F}_2## with its multiplication rules.
 
  • #72
StoneTemplePython said:
Problem 6 is still open.
I find #6 ambiguous. If an urn has 3 whites and 1 black what do you think is the probability of drawing 3 that are either all white or all black?
 
  • #73
StoneTemplePython said:
Problem 6 is still open.

I'd be much obliged if someone tackles it
The only way I can make a reasonable problem from the way it was posed is as follows:
The probability of getting all white or all black from urn 2 is prob all white + prob all black.
In that case it is impossible due to Fermat's Theorem and n ≥ 3.

That's what Fermat told me.
 
  • Like
Likes StoneTemplePython
  • #74
Zafa Pi said:
The only way I can make a reasonable problem from the way it was posed is as follows:
The probability of getting all white or all black from urn 2 is prob all white + prob all black.
In that case it is impossible due to Fermat's Theorem and n ≥ 3.

That's what Fermat told me.

Can you expand on this a bit for others' benefit?

It's a mildly famous problem designed by E. C. Molina, to cast Fermat's Last Theorem as a probability problem.
 
Last edited:
  • #75
There are some trivial solutions for (6).
Zafa Pi said:
If an urn has 3 whites and 1 black what do you think is the probability of drawing 3 that are either all white or all black?
(1/4)³ + (3/4)³. We draw with replacement.
Zafa Pi said:
In that case it is impossible due to Fermat's Theorem and n ≥ 3.
Huh?
 
  • #76
mfb said:
Huh?
Let N be the number of balls in each urn, W the number of white balls in urn 1, b and w the number of black and white balls in urn 2, n > 2 the number drawn from each urn. With my interpretation in #73 of the problem we get:
##\left(\frac{W}{b+w}\right)^n = \left(\frac{b}{b+w}\right)^n + \left(\frac{w}{b+w}\right)^n.##
Multiplying by ##(b+w)^n## we have ##W^n = b^n + w^n##, which is impossible for n > 2 by Fermat's (Wiles') Theorem.
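Wiles' theorem is what rules this out in general, but the n = 3 case can at least be sanity-checked by brute force over a small range (the bound here is arbitrary):

```python
# Finite sanity check that W^n = b^n + w^n has no positive solutions
# for n = 3 (the full claim for all n >= 3 is Fermat's Last Theorem).
LIMIT = 100
solutions = [
    (W, b, w)
    for W in range(1, LIMIT)
    for b in range(1, W)
    for w in range(1, W)
    if W**3 == b**3 + w**3
]
print(solutions)  # [] -- no small counterexamples
```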
 
  • Like
Likes StoneTemplePython
  • #77
Ah. I interpreted "the probability that all white balls are drawn from the first urn" as probability that every white ball in the urn is drawn at least once.
 
  • #78
mfb said:
Ah. I interpreted "the probability that all white balls are drawn from the first urn" as probability that every white ball in the urn is drawn at least once.
I think that the way the problem was formulated is problematic. I attempted to point that out to @StoneTemplePython in post #72, but received no response.
For example, if an urn contains 3 whites and one black, then what is the probability of drawing (with replacement) three balls such that they are either all white or all black? I think the only reasonable answer is (¾)³, which = the probability of drawing three white balls. Thus making urn 1 and urn 2 the same is a solution.

Perhaps it should be said that the urns contain at least 3 (or n) of each color.
 
  • #79
Zafa Pi said:
I think that the way the problem was formulated is problematic. I attempted to point that out to @StoneTemplePython in post #72, but received no response.
For example, if an urn contains 3 whites and one black, then what is the probability of drawing (with replacement) three balls such that they are either all white or all black? I think the only reasonable answer is (¾)³, which = the probability of drawing three white balls. Thus making urn 1 and urn 2 the same is a solution.

Perhaps it should be said that the urns contain at least 3 (or n) of each color.

This problem and its wording, setting aside one or two nits, is verbatim from Mosteller's Fifty Challenging Problems in Probability. It is the last problem in the book, numbered 56 despite the title. (There are some bonus problems.)

People are free to ask questions to clarify, though posts #72 and #73, ##\lt 3 \text{ hours }## apart, both arrived while I was offline.

- - -
My read: Post #72 did not directly seek to clarify the wording of the question but instead said "what do you think...".

On the other hand, Post #73 was direct and had the answer. I responded to the most recent, accurate and direct post, #73.

- - -
n.b.

I also liked this:

Zafa Pi said:
That's what Fermat told me.
 
Last edited:
  • Like
Likes fresh_42
  • #80
Post 73 doesn't address the interpretation question for urn 1. As you marked it as solved I guess the different wording for urn 1 doesn't mean anything. "It is impossible" is a strange solution to "Find the number of [...]".

With the other interpretation I could rule out 1 and 2 white balls in urn 1 for 3 and 4 drawings, but the problem gets much more difficult.
 
  • #81
Just a couple more sub-questions to be answered. Incredible work, everyone!
 
  • #82
First day open to all. Open: 1.b.) ##\mathbb{F}_2-##basis of ##\mathbb{F}_8## and its multiplication and addition structure.
 
  • #83
Solution to 1.b)
QuantumQuest said:
From there determine a basis of ##\mathbb{F}_8## over ##\mathbb{F}_2## and write down its multiplication and addition laws.
This is actually the easiest part of all. Let's say we have the minimal polynomial ##x^3+x+1=0\,##, and we assume a root ##\xi \,##, i.e. an element which satisfies ##\xi^3+\xi +1 =0##. Then
$$
x^3+x+1 = (x+\xi)(x+\xi^2)(x+\xi+\xi^2)
$$
and ##\{1,\xi,\xi^2\}## is an ##\mathbb{F}_2##-basis of ##\mathbb{F}_8##. The elements are thus
$$
\{0,1,\xi,\xi+1 = \xi^3,\xi^2,\xi^2+1=\xi^6,\xi^2+\xi=\xi^4,\xi^2+\xi+1=\xi^5\}
$$
which defines the multiplicative group generated by ##\xi## as well as the addition table.
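The table of powers above can be generated mechanically by repeatedly multiplying by ##\xi## and reducing with ##\xi^3 = \xi + 1##. A small sketch, encoding elements as bitmasks (bit k is the coefficient of ##\xi^k##; the encoding is mine):

```python
# Powers of xi in F_8 = F_2[xi] with xi^3 = xi + 1, as bitmasks
# (bit k is the coefficient of xi^k; e.g. 0b011 = xi + 1).
def times_xi(v):
    v <<= 1
    if v & 0b1000:            # a xi^3 term appeared:
        v ^= 0b1011           # clear it and add xi + 1 in its place
    return v

powers = [1]                  # xi^0 = 1
for _ in range(7):
    powers.append(times_xi(powers[-1]))

print(powers)  # [1, 2, 4, 3, 6, 7, 5, 1] -> xi^3 = xi+1, ..., xi^7 = 1
```

The masks 3, 6, 7, 5 reproduce exactly the identities ##\xi^3=\xi+1##, ##\xi^4=\xi^2+\xi##, ##\xi^5=\xi^2+\xi+1##, ##\xi^6=\xi^2+1## listed above, and ##\xi^7 = 1## closes the cycle.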
 
