Basic Math Challenge - May 2018

I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials for GF(8). These are cubic polynomials with coefficients in GF(2) that cannot be expressed as products of corresponding polynomials for GF(4) and GF(2). I will use ##x## as an indeterminate here.

For GF(2), the primitive polynomials are ##x + (0 \text{ or } 1)##: ##x## and ##x+1##.

For GF(4), we consider primitive-polynomial candidates ##x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##: ##x^2##, ##x^2 + 1 = (x + 1)^2##, ##x^2 + x = x(x + 1)##, and ##x^2 + x + 1##. That last one is the only primitive polynomial for GF(4).

For GF(8), we consider primitive-polynomial candidates ##x^3 + (0 \text{ or } 1)x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##: ##x^3##, ##x^3 + 1 = (x^2 + x + 1)(x + 1)##, ##x^3 + x = x(x + 1)^2##, ##x^3 + x + 1##, ##x^3 + x^2 = x^2(x + 1)##, ##x^3 + x^2 + 1##, ##x^3 + x^2 + x = x(x^2 + x + 1)##, and ##x^3 + x^2 + x + 1 = (x + 1)^3##.

Thus, GF(8) has primitive polynomials ##x^3 + x + 1## and ##x^3 + x^2 + 1##.
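This enumeration can be brute-forced as a sanity check: a reducible cubic over GF(2) must have a linear factor, hence a root in GF(2). A minimal Python sketch (the function name is my own):

```python
# Find the monic cubics over GF(2) with no root in GF(2); for cubics,
# "no root" is equivalent to irreducible, since any factorization of a
# cubic must contain a linear factor.
def irreducible_cubics_gf2():
    irr = []
    for a in (0, 1):          # coefficient of x^2
        for b in (0, 1):      # coefficient of x
            for c in (0, 1):  # constant term
                # evaluate x^3 + a*x^2 + b*x + c at x = 0 and x = 1, mod 2
                if all((x**3 + a*x**2 + b*x + c) % 2 == 1 for x in (0, 1)):
                    irr.append((1, a, b, c))
    return irr

print(irreducible_cubics_gf2())  # → [(1, 0, 1, 1), (1, 1, 0, 1)]
```

The two survivors are ##x^3 + x + 1## and ##x^3 + x^2 + 1##, matching the hand enumeration.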

(b) There is a problem here. A basis is easy to define for addition: ##\{1, x, x^2\}##, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus ##(\mathbb{Z}_2)^3##. The multiplicative group is, however, ##\mathbb{Z}_7##, and it omits 0. That group has no nontrivial proper subgroups, so it is hard to identify a basis for it.

(c) That is a consequence of every finite field ##GF(p^n)## being a subfield of an infinite number of finite fields ##GF(p^{mn})##, each one with a nonzero number of primitive polynomials with coefficients in ##GF(p^n)##. Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.

I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, itself: for every element ##a## of ##F##, ##(x - a)## is primitive relative to ##F##. Thus, ##F## has ##N## primitive polynomials. For ##GF(p^{mn})## relative to ##GF(p^n)##, I will call that number ##N(m)##. One can count all the possible candidate polynomials of degree ##r## over ##GF(p^n)##, and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If ##N## is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where ##\mu## is the Möbius mu function: ##(-1)^{\text{number of distinct prime factors}}## if its argument is square-free, and 0 otherwise. I don't know if that is correct for a power of a prime.
 

fresh_42

Mentor
Although your language is in part a bit unusual, the basic ideas are correct.
To finish what you've deduced for part a), we can write
$$
\mathbb{F}_8 \cong \mathbb{F}_2[x]/(x^3+x+1) \cong \mathbb{F}_2[x]/(x^3+x^2+1)
$$
Part c) is basically correct, although I haven't checked the formulas. But to consider ##\mathbb{F}_{p^n}## is the right idea. The argument can be simplified a lot. All these fields are algebraic over their prime field ##\mathbb{F}_p##, so all of them have to be contained in the algebraic closure ##\mathbb{A}_p##. But with each new ##n## we get a larger field, all of which must be part of ##\mathbb{A}_p##, and ##n## doesn't stop. If we want to "construct" ##\mathbb{A}_p##, then it can be shown that
$$
\mathbb{A}_p = \cup_{n \in \mathbb{N}} \mathbb{F}_{p^{n!}} \text{ with the chain } \mathbb{F}_{p^{1!}} < \mathbb{F}_{p^{2!}} < \ldots
$$
where the inclusions of subfields are strict.

As a hint for part b) - and this is the usual way to do it in all of these cases - simply define a number ##\xi## which satisfies ##\xi^3+\xi +1=0##. It is the same thing we do for ##\mathbb{R} < \mathbb{C} \cong \mathbb{R}[x]/(x^2+1) \cong \mathbb{R}[ i ]##: we define a number ##i## which satisfies ##i^2 + 1 = 0## and simply call it ##i##. Now try to express all elements of ##\mathbb{F}_8## in terms of ##\mathbb{F}_2=\{0,1\}## and ##\xi##, i.e. ##\mathbb{F}_8 = \mathbb{F}_2[\xi]\,.##
 
I appreciate your efforts, but can this way lead to any safe conclusion? Can you come up with some other way?
Maybe for another challenge :p
 
I will now continue with Problem 1.
About (b), one defines GF(8) in terms of GF(2) as polynomials ##(0 \text{ or } 1)x^2 + (0 \text{ or } 1)x + (0 \text{ or } 1)##. One defines addition and multiplication in the usual way for polynomials, but for multiplication, one must divide by one of the primitive polynomials and take the remainder. Another way of interpreting this is to suppose that ##x## is a root of one of the primitive polynomials. Those primitive polynomials are ##x^3 + x + 1## and ##x^3 + x^2 + 1##.

(c) I have an insight into the issue of how many primitive polynomials an extended field has relative to its original field. I had earlier found a formula for ##N(m)##, the number of primitive polynomials of degree ##m## for a field ##F' \cong F^m## whose elements are polynomials of degree up to ##m-1## over the original field. In terms of the number ##N## of elements of ##F##,
$$ \sum_{\sum_k k m_k = n} \prod_k P(N(k),m_k) = N^n $$
where ##P## is the Pochhammer symbol ##P(a,m) = a(a+1)\cdots(a+m-1)/m!##.

This looks like a difficult system of equations to solve for the ##N(k)##'s, but there is a nice trick for doing so. Multiply by ##s^n## and sum over ##n##:
$$ \prod_k \left( \sum_m P(N(k),m) s^{km} \right) = \sum_n (Ns)^n $$
Use the negative-power generalization of the binomial theorem:
$$ \prod_k (1 - s^k)^{-N(k)} = (1 - Ns)^{-1} $$
Take the logarithm:
$$ - \sum_k N(k) \log (1 - s^k) = - \log (1 - Ns) $$
Expand in powers of ##s##, using the familiar ##\log(1+x)## series:
$$ \sum_{m|n} \frac1m N(n/m) = \frac1n N^n $$
Instead of complicated nonlinear equations in the ##N(k)##'s, we have linear equations in them. That should make it easier to solve for them.
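The product identity can be checked with truncated power series. Below is a Python sketch (all names are mine) using the known counts ##N(k) = 2, 1, 2, 3, 6, 9## of monic irreducible polynomials over ##\mathbb{F}_2## for degrees ##k = 1, \dots, 6##; the coefficient of ##s^n## on both sides should be ##2^n##:

```python
TRUNC = 7  # keep coefficients of s^0 .. s^6

def series_mul(f, g):
    # multiply two truncated power series given as coefficient lists
    h = [0] * TRUNC
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < TRUNC:
                h[i + j] += fi * gj
    return h

def geom(k):
    # truncated series of 1/(1 - s^k) = 1 + s^k + s^(2k) + ...
    return [1 if n % k == 0 else 0 for n in range(TRUNC)]

counts = {1: 2, 2: 1, 3: 2, 4: 3, 5: 6, 6: 9}  # N(k) over GF(2)
prod = [1] + [0] * (TRUNC - 1)
for k, Nk in counts.items():
    for _ in range(Nk):  # (1 - s^k)^(-N(k)) = (1/(1 - s^k))^N(k)
        prod = series_mul(prod, geom(k))

print(prod)  # → [1, 2, 4, 8, 16, 32, 64], the coefficients of 1/(1 - 2s)
```

Degrees above 6 cannot affect the truncated coefficients, so restricting the product to ##k \le 6## loses nothing here.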
 

fresh_42

Mentor
Please call the polynomials irreducible, because that's what counts. A polynomial is primitive if its coefficients are all coprime, which doesn't make much sense over a field in general and especially over ##\mathbb{F}_2##. There are primitive elements of a field extension, namely those which generate it, say ##\xi##. In our example, if we take the easier polynomial of the two, we have ##\mathbb{F}_8=\mathbb{F}_2[\xi]## with ##\xi^3+\xi+1=0##. The polynomial ##x^3+x+1## is called the minimal polynomial, and it is irreducible over ##\mathbb{F}_2##, i.e. it cannot be factored in ##\mathbb{F}_2[x]##, which in this case automatically means it has no zeros in ##\mathbb{F}_2##.

Now obviously ##\xi \neq 0##, so ##\mathbb{F}^*_8 = \mathbb{F}_8 - \{0\} \cong \mathbb{Z}_7 = \langle a\,\vert \,a^7=1 \rangle## and we can choose ##a=\xi##. From there you can directly calculate all ##\xi^n## in terms of a linear ##\mathbb{F}_2-##basis of ##\mathbb{F}_8## given by ##\{1,\xi,\xi^2\}##. These seven vectors are what is asked for, because with them we can see all addition and multiplication rules without having to fill in complete tables.
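Following this hint, the table of powers can be generated mechanically. A small Python sketch (names mine): represent each element by its coordinates in the basis ##\{1, \xi, \xi^2\}## and reduce with ##\xi^3 = \xi + 1## (signs vanish in characteristic 2):

```python
def xi_powers():
    # coordinates (c0, c1, c2) mean c0*1 + c1*xi + c2*xi^2 over GF(2)
    powers = []
    v = (1, 0, 0)  # xi^0 = 1
    for _ in range(7):
        powers.append(v)
        c0, c1, c2 = v
        # multiply by xi: c0*xi + c1*xi^2 + c2*xi^3, then reduce xi^3 -> xi + 1
        v = (c2, (c0 + c2) % 2, c1)
    return powers

for n, v in enumerate(xi_powers()):
    print(f"xi^{n} = {v}")
# xi^0 .. xi^6 run through all seven nonzero vectors of (GF(2))^3,
# and one more multiplication returns xi^7 = 1.
```

For instance ##\xi^3 = (1,1,0)##, i.e. ##\xi^3 = 1 + \xi##, and ##\xi^4 = \xi + \xi^2##.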
 

StoneTemplePython

Science Advisor
As a hint for problem 3, termination conditions are stated to be:
- - - -

(For avoidance of doubt, this refers to playing one 'full' game, which is complete upon termination, and termination occurs on the first occurrence of (a) result of coin toss equals ##\text{Other}## or (b) the player elects to stop.)

If you like, you may include an additional condition: (c) the carnival operator doesn't want you to play all day and so will stop you and force a "cash out" if you've hit a payoff of 1 million (or 1,000,001 or 1,000,002 -- in the case of overshoots).

- - - -

another hint: there are several ways to solve the problem. One technique that I particularly like would be familiar to Pascal, and, I think, Cauchy would approve.
 
If you like, you may include an additional condition: (c) the carnival operator doesn't want you to play all day and so will stop you and force a "cash out" if you've hit a payoff of 1 million (or 1,000,001 or 1,000,002 -- in the case of overshoots).
At least for me, "find a solution that would profit from this idea" is much more difficult than the original problem. I know ways to solve the problem, but none where a payoff of 1 million would be involved.
 
More on Problem 1
For a field ##F## with ##N## elements, I call the number of monic irreducible polynomials of degree ##n## ##N(n)##, and I had shown
$$ \sum_{\sum_k k m_k = n} \left( \prod_k P(N(k),m_k) \right) = N^n = \sum_{m|n} \frac{n}{m} N(n/m) $$
taking my final expression in my previous post and multiplying it by n.

This calls for a Möbius transform, where
$$ f(n) = \sum_{m|n} g(m) \longleftrightarrow g(n) = \sum_{m|n} \mu(m) f(n/m) $$
where ##\mu(n)## is the Möbius function of n.

Using it here gives
$$ N(n) = \frac{1}{n} \sum_{m|n} \mu(m) N^{n/m} $$
Since ##\mu(n)## can be negative as well as positive, proving that N(n) is always positive for N > 1 is a challenge.
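The positivity (and the formula itself) can at least be spot-checked numerically. A Python sketch with a naive Möbius function (all names mine):

```python
def mobius(n):
    # 0 if n has a squared prime factor, else (-1)^(number of distinct primes)
    primes = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            primes += 1
        else:
            d += 1
    if n > 1:
        primes += 1
    return (-1) ** primes

def num_irreducibles(N, n):
    # N(n) = (1/n) * sum over divisors m of n of mu(m) * N^(n/m)
    total = sum(mobius(m) * N ** (n // m) for m in range(1, n + 1) if n % m == 0)
    return total // n

print(num_irreducibles(2, 3))  # → 2, the two cubics from part (a)
assert all(num_irreducibles(N, n) > 0 for N in (2, 3, 4, 5) for n in range(1, 10))
```

The leading term ##N^n/n## dominates the Möbius sum for ##N > 1##, which is what the assertion samples.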
 

StoneTemplePython

Science Advisor
At least for me "find a solution that would profit from this idea" is much more difficult than the original problem. I know ways to solve the problem, but no way where considering a payoff of 1 million would be involved.
I could argue it either way as to which setup is more intuitive... which is why I put (c) as optional.

The original problem, literally interpreted, has a countable set of outcomes / state space. The use of (c) truncates that and makes the set have a finite number of outcomes-- which, I think, may be easier for some to work with.
 
Solution to #3
A scoring toss (heads or tails) occurs with probability 4/5 and has average value 2.
A risk-neutral player will play as long as her expectation is positive. She will quit if her expectation is ≤ 0.
If she has accrued x, then her expectation for the next play is 2⋅4/5 − x⋅1/5. That is positive for x < 8 and ≤ 0 for x ≥ 8.
Thus she will quit when x ≥ 8, assuming "other" has not occurred.

We must now determine the expected value of that strategy. Let p = 2/5.
If she gets to x = 7, then with prob 2p her expected win will be 9.
There are various ways to get to x = 7:
by 7 1's with prob p^7, or
by 4 1's and a 3 with prob 5p^5, or
by 2 3's and a 1 with prob 3p^3.
Thus the expected win via x = 7 is (p^7⋅2p + 5p^5⋅2p + 3p^3⋅2p)⋅9.

She can also win by having x = 6 and getting a 3, and there are various ways to get x = 6:
by 6 1's with prob p^6, or
by 3 1's and a 3 with prob 4p^4, or
by 2 3's with prob p^2.
Each of these pays 9, with total prob = p^7 + 4p^5 + p^3.

Finally, she can also win by having x = 5 and getting a 3. There are 2 ways of getting x = 5:
by 5 1's with prob p^5, or
by 2 1's and a 3 with prob 3p^3.
Each of these pays 8, with total prob = p^6 + 3p^4.

Adding the three cases, we get the expected win to be about 3.37, which is what a neutral player should pay.
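Plugging ##p = 2/5## into the three cases gives the quoted value; a quick arithmetic check in Python (variable names mine):

```python
p = 2 / 5
via_7 = 9 * 2 * p * (p**7 + 5 * p**5 + 3 * p**3)  # reach 7; either toss ends at 8 or 10
via_6 = 9 * (p**7 + 4 * p**5 + p**3)              # reach 6, then toss a 3 (pays 9)
via_5 = 8 * (p**6 + 3 * p**4)                     # reach 5, then toss a 3 (pays 8)
print(via_7 + via_6 + via_5)  # → 3.36939008 (up to float rounding), i.e. about 3.37
```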
 

StoneTemplePython

Science Advisor
Nicely done.

For the benefit of other readers, I'd belabor the point that you first came up with a stopping rule: to stop when the expected value is no longer positive.

The expected cost of playing another round goes up in proportion to your current stake -- specifically the proportion is ##\frac{1}{5}##. On the other hand the expected benefit of another play is constant. Hence there is a single point of interest -- the maximum -- which you found at ##8##.
- - - -
Part 2 then is figuring out the expected value, now that you have the stopping rule in place. Your careful approach is how -- I imagine-- Fermat would solve the problem.
- - - - - - - - - - - - - - - - - - - -
It may help some people to see a state diagram of the process to work through those path probabilities. I've dropped this in below.

There is a dummy state 's' which denotes the starting state. All other node labels correspond to the stake value at that state. This should be mostly interpretable, though the software I use is a bit clunky at times, e.g. with respect to node 5.

[Attachment: A.png, forward state diagram]


- - - -
And if one is thinking about the process in reverse -- i.e. expected values probabilistically flowing back to state 's' -- people may want to consider the reversed state diagram, below.

[Attachment: A_reversed.png, reversed state diagram]
 

Part 2 then is figuring out the expected value, now that you have the stopping rule in place. Your careful approach is how -- I imagine-- Fermat would solve the problem.
I have a picture of Fermat and he talks to me, so I can't take full credit.
optional:
How many rounds would it take on average for the game to terminate? (You may assume a mild preference for shorter vs longer games in the event of any tie breaking concerns.)

Now suppose the player doesn't care about the score and just loves flipping coins -- how long will the game take to terminate, on average, in this case? (by @StoneTemplePython)
Solution to optional:
The expected number of rounds for the game to terminate = ∑ n⋅(prob game ends on nth round), n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" occurs at the nth toss + prob game terminates by a win at the nth toss.
Let p = 2/5.

Round)
1) ⅕ + 0
2) (either a 1 or a 3) 2p⋅⅕ + 0
3) (either 1&1, or 1&3, or ...) 4p^2⋅⅕ + (3 3's) p^3
4) (no 3's) p^3⋅⅕ + (1 3 & 2 1's) 3p^3 + (2 3's & 1 1) 3p^3 + (2 3's & a 1, then any) 6p^4
5) (no 3's) p^4⋅⅕ + (1 3 & 3 1's) 4p^4 + (3 1's & 3, then 3) 4p^5
6) (no 3's) p^5⋅⅕ + (1 3 & 4 1's) 5p^5 + (4 1's & 3, then any) 10p^6 + (5 1's, then 3) p^6
7) (no 3's) p^6⋅⅕ + (6 1's, then 3) p^7
8) (no 3's) p^7⋅⅕ + (7 1's, then any) 2p^8
Now multiply the round number by its total prob and add them up to get about 2.8 expected rounds.

If she just keeps on flipping regardless of score, the expected number of rounds = ∑ n⋅(4/5)^{n-1}⋅(1/5) for n ≥ 1, which equals 5.
 

StoneTemplePython

Science Advisor
I have a picture of Fermat and he talks to me, so I can't take full credit.
I liked this quite a bit. It could also be quite useful for problem 6.

The expected number of rounds for the game to terminate = ∑ n⋅(prob game ends on nth round), n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" occurs at the nth toss + prob game terminates by a win at the nth toss...

If she just keeps on flipping regardless of score the expected number of rounds = ∑ n⋅(4/5)^{n-1}⋅(1/5) for n ≥ 1 = 5.
This is right, and gives a quick and easy upper bound on the expected number of rounds with the stopping rule in place.

Now multiply the round by its total prob and add them up to get about 2.8 expected rounds.
The result here is awfully close but not quite right. I'd encourage you to write out the exact probabilities in the post.

For others benefit, I'd state it as having a counter vector

##\mathbf c = \begin{bmatrix}
1\\
2\\
\vdots\\
7\\
8
\end{bmatrix}##

and a probability of terminating on that turn vector

##\mathbf p= \begin{bmatrix}
?\\
?\\
\vdots\\
?\\
?
\end{bmatrix}
##

all entries in ##\mathbf p## would be real non-negative. And since we know that the game must terminate in one of those 8 rounds, and the termination times correspond to mutually exclusive events, the probability vector sums to one, i.e.

##\mathbf 1^T \mathbf p = \sum_{k=1}^8 p_k = 1##
- - - -
So, for the expected number of turns calculation, you have

##\text{expected time till termination} = \mathbf c^T \mathbf p = \sum_{k=1}^8 (c_k) p_k = \sum_{k=1}^8 (k)p_k##

- - - -
Interesting point: if the game starts at a certain other state -- not "s" though-- one gets ##2.838## as the expected rounds till termination. The expected time from "s", then, is strictly greater than this (inclusive of any rounding effects).
 
The result here is awfully close but not quite right. I'd encourage you to write out the exact probabilities in the post.
Change ⅕ to p/2.
1⋅(p/2) = .2
2⋅(2p⋅p/2) = .32
3⋅(2p^3 + p^3 = 3p^3) = .576
4⋅((p^3 + 3p^3 + 3p^3 (corrected))⋅p/2 + 6p^4 = 9.5p^4) = .9728
5⋅(5p^4⋅p/2 + 4p^5 = 6.5p^5) = .3328
6⋅(6p^5⋅p/2 + 11p^6 = 14p^6) = .344
7⋅(p^6⋅p/2 + p^7 = 1.5p^7) = .0172
8⋅(p^7⋅p/2 + 2p^8 = 2.5p^8) = .013

Adding these values gives 2.7758 ≈ 2.8.
You're right, there was an error at round 4, but to the nearest tenth it's still the same. Good looking out.
I waited for others to solve it, but I think no one did because it was a bit messy and needed careful attention. The only real idea was the stopping rule.
 

StoneTemplePython

Science Advisor
Wait, I had round 4 correct.
So where is the error?
Let me try to transcribe the numbers -- I'm not concerned about minor rounding nits; to the first or second decimal is fine.

(disclaimer: what I show below may look inconsistent on significant figures as I'm copying and pasting from python, etc.)

##\mathbf v :=
\left[\begin{matrix}0.2\\0.32\\0.576\\0.9728\\0.3328\\0.344\\0.0172\\0.013\end{matrix}\right] = \begin{bmatrix}
1\\
2\\
3\\
4\\
5\\
6\\
7\\
8
\end{bmatrix} \circ
\left[\begin{matrix}0.2\\0.16\\0.192\\0.2432\\0.06656\\0.0573333333333333\\0.00245714285714286\\0.001625\end{matrix}\right]
= \mathbf c \circ \mathbf p##

where ##\circ## denotes element wise multiplication (Hadamard product).

I believe I got the transcription right but let me know if I missed something.
- - - -

going through it:

as you've said:
##\mathbf c^T \mathbf p = \text{sum}\big(\mathbf v\big) = \mathbf 1^T \mathbf v = 2.7758##

But
##\mathbf 1^T \mathbf p = \sum_{k=1}^8 p_k = 0.923175 \lt 1##


which means you haven't included all paths that lead to termination. For avoidance of doubt, we know that all paths terminate with probability one. We know it for several reasons... you've already implicitly shown this by getting the upper bound case -- i.e. the expected time of 5, for the person who just loves coin tossing.

- - - -
I know of at least two other approaches besides enumeration that could get you there -- I tend to think they are easier, though enumeration is a perfectly valid approach.

- - - -
edit:
your probability vector is off by just under 8 percentage points in total. This is entirely attributable to just one of the slots... i.e. you are correct in ##7## of the ##8## cases you've listed, but one of them is off by ##\approx 8## percentage points
 
which means you haven't included all paths that lead to termination.
OK, OK, you are correct. Nice observation.
My error was in round 4. I omitted a 3p^4 = .0768 which, with exact calculations, brings the total probability to 1.
The expected number of rounds then becomes 3.0831744.
So what is easier than enumeration? Run a zillion trials on a computer and take an average?
Thanks.
 

StoneTemplePython

Science Advisor
Simulations work well in practice, though they wouldn't count for credit here-- something more 'exact' is needed in the spirit of the problem and rules 1 and 4.

My preferred approach is to think forward to reachable termination points, i.e. ##\{8,9,10,0\}## -- given you have the stopping rule in place -- and these become your base case(s). Now apply backward induction.

This was my opaque reference earlier:

One technique that I particularly like would be familiar to Pascal, and, I think, Cauchy would approve.
though perhaps I could have mentioned Bellman and some others as well.

edit:
Strictly speaking I was thinking of the Pascal approach of drawing the tree and inducting backward in context of pricing this to be a 'fair game'. (Given that you have a stopping rule in place, this is not unlike a Problem of Points.)

The approach is quite flexible, though, and can be easily modified to get expected time till termination.
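To make this concrete, here is a short Python sketch of the backward induction (my own formulation): sweep once from the absorbing stakes ##\{8, 9, 10\}## down to 0, with the 1/5 "other" outcome contributing zero value and ending the game:

```python
def backward_induction():
    # V[x]: fair value of the game from stake x; T[x]: expected tosses remaining
    V = {8: 8.0, 9: 9.0, 10: 10.0}  # stopping rule: cash out at x >= 8
    T = {8: 0.0, 9: 0.0, 10: 0.0}
    for x in range(7, -1, -1):  # single linear scan over the states
        V[x] = 0.4 * V[x + 1] + 0.4 * V[x + 3]      # "other" (prob 1/5) pays 0
        T[x] = 1 + 0.4 * T[x + 1] + 0.4 * T[x + 3]  # every continuation costs a toss
    return V[0], T[0]

print(backward_induction())  # ≈ (3.3694, 3.0832)
```

This reproduces the fair price found by enumeration and the corrected expected number of rounds (3.0831744) in one pass; reading off state 1 instead of state 0 gives the 2.838 mentioned above.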
 
If I understand you correctly it seems as though you will eventually end up with all of the same paths as I did, and your approach isn't any faster.
 

StoneTemplePython

Science Advisor
If I understand you correctly it seems as though you will eventually end up with all of the same paths as I did, and your approach isn't any faster.
It depends, I suppose, on how you are generating your paths. The idea is to collapse things/ take advantage of overlapping subproblems. Think trellis lattice or recombining tree (maybe an abuse of language but a common term in finance).

The approach is a single linear scan through the states-- O(n) with bare minimal coefficient-- to get the answer to both the fair pricing and expected rounds till termination questions-- again, assuming you have a stopping rule in place. The approach also has the virtue of making it impossible to double count or omit paths.

(For what it's worth, another way of tackling the problem reduces to (sparse) matrix multiplication.)

I'm happy to post the approach at the end once this thread is closed out.

- - - -
Problem 6 is still open.

I'd be much obliged if someone tackles it. Depending on how one sets up the problem, it is either straightforward or an impossible task.
 

fresh_42

Mentor
... and the ##\mathbb{F}_8-##basis over ##\mathbb{F}_2## with its multiplication rules.
 
Problem 6 is still open.
I find #6 ambiguous. If an urn has 3 whites and 1 black, what do you think is the probability of drawing 3 that are either all white or all black?
 
Problem 6 is still open.

I'd be much obliged if someone tackles it
The only way I can make a reasonable problem from the way it was posed is as follows:
The probability of getting all white or all black from urn 2 is prob all white + prob all black.
In that case it is impossible due to Fermat's Theorem and n ≥ 3.

That's what Fermat told me.
 

StoneTemplePython

Science Advisor
The only way I can make a reasonable problem from the way it was posed is as follows:
The probability of getting all white or all black from urn 2 is prob all white + prob all black.
In that case it is impossible due to Fermat's Theorem and n ≥ 3.

That's what Fermat told me.
Can you expand on this a bit for others' benefit?

It's a mildly famous problem designed by E. C. Molina, to cast Fermat's Last Theorem as a probability problem.
 
There are some trivial solutions for (6).
If an urn has 3 whites and 1 black what do you think is the probability of drawing 3 that are either all white or all black?
##(1/4)^3 + (3/4)^3##. We draw with replacement.
In that case it is impossible due to Fermat's Theorem and n ≥ 3.
Huh?
 
