Basic Math Challenge - May 2018

  • #36
nuuskur said:
As previous attempts had failed, it's time for some good ol' induction.

@nuuskur you did well. Just try to be somewhat more straightforward and structured in your solutions and you'll be really fine. This isn't criticism or a negative comment; I'm just telling you for your own good.
 
  • Like
Likes nuuskur
  • #37
A try at question 10.
Let x = ⌊n!e⌋
⟺ (property) n!e − 1 < x ≤ n!e
⟺ (multiply by -1) - n!e ≤ - x < 1 - n!e
⟺ (add n!e to both sides) 0 ≤ n!e - x < 1
$$\lim_{n\to\infty} 0 ≤ \lim_{n\to\infty} (n!e - \lfloor n!e \rfloor) ≤ \lim_{n\to\infty} 1$$ ?
 
Last edited:
  • #38
@archaic
When you take the limit, strict inequalities become non-strict.
 
  • Like
Likes archaic
  • #39
nuuskur said:
@archaic
When you take the limit, strict inequalities become non-strict.
Thank you
 
  • #40
Number 5 seems like a fun exercise. I'll assume we already know these mappings are metrics.
French Metro. For every ##a,b\in\mathbb R^2 ##
[tex]
\rho (a,b) := \begin{cases} \lvert a-b\rvert, &\exists k: ka=b \\ |a|+|b|, &\mbox{otherwise}\end{cases}
[/tex]
where ##|a| ## denotes the Euclidean distance (from Paris). Describe the open ball ##B((2,1),3)##.
If one views ##R(2,1) ## as a bound vector (at Paris), multiplying it by some number just stretches this vector, so whenever ##\rho (a,b) = |a-b| ##, the points are situated on the straight line determined by Paris and Reims. Then we have the condition ##\rho (X,R)<3 ##, i.e.
[tex]
|(2k,k) - (2,1)| = |(2(k-1), k-1)| \overset{*}= |k-1||(2,1)| = |k-1|\sqrt{5}<3
[/tex]
(*) the mapping ##|\cdot| ## is actually a norm, so I can use its homogeneity property
therefore ##-\frac{3}{\sqrt{5}} +1 < k < \frac{3}{\sqrt{5}} +1##.
On the other hand, if ##X## does not lie on the line, the distance between Reims and ##X## is at least the distance between Paris and Reims. (Even if the other city is just 5 km north, you have to take the train back to Paris and only then travel to your destination, unless you don't mind walking.)
More specifically, we have ##\rho (X,R)<3 ##, i.e.
[tex]
|(x,y)| + |(2,1)| = \sqrt{x^2 + y^2} + \sqrt{5} < 3 \implies x^2+y^2 < \left (3-\sqrt{5}\right )^2
[/tex]
It's a circle around Paris with radius ##3-\sqrt{5} ##, boundary excluded.
The open ball consists of the segment with specified ##k## and the circle. (think lollipop)
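The lollipop description can be checked numerically. Here is a quick sketch (my own, not part of the solution) of the SNCF metric with Paris at the origin and ##R=(2,1)##; the collinearity test and sample points are my choices:

```python
import math

# French-metro metric: |a - b| if a, b lie on one line through Paris (origin),
# otherwise |a| + |b| (one must travel via Paris).
def rho(a, b, tol=1e-12):
    cross = a[0] * b[1] - a[1] * b[0]   # zero iff a, b, origin are collinear
    if abs(cross) < tol:
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return math.hypot(a[0], a[1]) + math.hypot(b[0], b[1])

R = (2, 1)
in_line = rho((4, 2), R)    # on the Paris-Reims line: sqrt(5) < 3, inside the ball
in_disc = rho((0.5, 0), R)  # off the line, |X| = 0.5 < 3 - sqrt(5): in the lollipop head
outside = rho((1, 1), R)    # off the line, |X| = sqrt(2) > 3 - sqrt(5): outside
print(in_line, in_disc, outside)
```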
---------------------------------------------------------------------------------------------------------------------------
Manhattan metric. For every ##(a,b),(c,d)\in\mathbb R^2 ##
[tex]
\rho ((a,b),(c,d)) := |a-c| + |b-d|.
[/tex]
Suppose ##\rho ((x,y),(2,1))<3 ## i.e
[tex]
|x-2| + |y-1| <3
[/tex]
Since we are adding two non-negative quantities, they are both bounded by ##3##. This means ##-1<x<5## and ##-2<y<4##. (So the open ball definitely lives inside this open rectangle.)

Firstly, suppose ##y\geq 1 ##. We have the condition ##|x-2| <3 - (y-1) = 4-y ##.
  1. If ##x\geq 2 ##, then ##x+y<6 ## (strictly under the line ##y = -x+6##)
  2. If ##x<2 ##, then ##y-x<2## (strictly under the line ##y=x+2##)
Secondly, suppose ##y<1 ##. We have the condition ##|x-2| < 3 +y-1 = y+2 ##.
  1. If ##x\geq 2##, then ##x-y<4## (strictly above the line ##y=x-4##)
  2. If ##x<2##, then ##x+y>0 ## (strictly above the line ##y=-x##).
Edit: the shape is a special case of a parallelogram with vertices lying on the intersections of the lines. The vertices are ##(-1,1), (2,4), (5,1), (2,-2) ##. The boundary is excluded.
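A small sanity check (my own, not from the post) that the claimed vertices sit exactly on the excluded boundary of the taxicab ball:

```python
# Taxicab (Manhattan) distance on R^2.
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

center = (2, 1)
vertices = [(-1, 1), (2, 4), (5, 1), (2, -2)]
on_boundary = [taxicab(v, center) for v in vertices]  # each should be exactly 3
inside = taxicab((3, 2), center)                      # strictly inside the open ball
print(on_boundary, inside)
```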
---------------------------------------------------------------------------------------------------------------------------
Maximum metric. For every ##(a,b),(c,d)\in\mathbb R^2 ##
[tex]
\rho ((a,b),(c,d)) := \max \lbrace |a-c|, |b-d|\rbrace
[/tex]
So, suppose ##\rho ((x,y),(2,1))<3 ## i.e
[tex]
\max \lbrace |x-2|, |y-1|\rbrace < 3
[/tex]
This is the rectangle I was talking about in the Manhattan exercise: ##-1<x<5## and ##-2<y<4##.

Edit: It's a square, sorry. With vertices ##(-1,4), (5,4), (5,-2), (-1,-2) ##. The boundary is excluded.
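The same kind of check (again my own sketch) for the maximum metric: the four corners lie on the excluded boundary, and a point just inside a corner does not.

```python
# Chebyshev (maximum) distance on R^2.
def chebyshev(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

center = (2, 1)
corners = [(-1, 4), (5, 4), (5, -2), (-1, -2)]
on_boundary = [chebyshev(c, center) for c in corners]  # each exactly 3
near_corner = chebyshev((4.9, -1.9), center)           # just inside the square
print(on_boundary, near_corner)
```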

The balls in general metric spaces exhibit some very "unball"-like characteristics o0)
 
Last edited:
  • #41
nuuskur said:
The balls in general metric spaces exhibit some very "unball"-like characteristics
That was the idea behind the question: a different metric is a different way of measuring, not just a different scale like meters and yards.

In the French Railway metric, we can easily travel to Luxembourg but won't reach - and that is the good news - Saint Quentin, although it is "closer" (in the ordinary sense). The ball is shaped like a hammer in a hammer throw competition: a rope of some fixed length and everything inside the Paris highway circle.
Your description and numbers are correct.

For the other two metrics, where a ball is shaped like a rectangle, can you name the shapes and their vertices?
 
  • #42
fresh_42 said:
For the other two metrics, where a ball is shaped like a rectangle, can you name the shapes and their vertices?
I've edited #40 with this information.
 
  • #43
nuuskur said:
I've edited #40 with this information.
Thanks. Btw. the "ball" in the Manhattan metric is called a rhombus, a square standing on one of its corners. Here's a quick illustration:

upload_2018-5-4_14-47-13.png
 

Last edited:
  • Like
Likes nuuskur
  • #44
I'll do Problem 10.
For that, we must start with the series
$$e = \sum_{k=0}^\infty \frac{1}{k!}$$
Multiply by n!:
$$n! e = \sum_{k=0}^\infty \frac{n!}{k!}$$
Split the series in two, after k = n: ##n! e = e_1 + e_2## where
$$ e_1 = \sum_{k=0}^n \frac{n!}{k!} ,\ e_2 = \sum_{k=n+1}^\infty \frac{n!}{k!} = \sum_{k=1}^\infty \frac{n!}{(n+k)!} $$
with k redefined for e2.

For e1, each of the terms is (k+1)*(k+2)*...*n, and is thus an integer. Therefore, e1 is an integer.

For e2, however, each of the terms is the reciprocal of (n+1)*(n+2)*...*(n+k), and each factor of that product is at least n+1. The product is therefore at least (n+1)^k, so each term is at most 1/(n+1)^k. Thus,
$$ e_2 = \sum_{k=1}^\infty \frac{1}{(n+1)(n+2) \cdots (n+k)} < \sum_{k=1}^\infty \frac{1}{(n+1)^k} = 1/n $$
using the familiar formula for the sum of a geometric series.

This means that e2 is always between 0 and 1/n for n >= 1, and thus that it is always between 0 and 1. That means that (n!*e) has integer part e1 and fractional part e2 for n >= 1. Thus, ## n!e - \lfloor n!e \rfloor = e_2 ##

Though e2 is positive, it can be made arbitrarily small with some suitable selection of n, and thus ## \lim_{n\to\infty} e_2 = 0 ##. Thus proving that
$$\lim_{n\to\infty}(n!e - \lfloor n!e \rfloor) = 0$$
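The key bound ##0 < e_2 < 1/n## can be verified with exact rational arithmetic. This is my own sketch, truncating the tail at 60 terms; since all terms are positive, the truncated sum lies below the full sum, so the upper bound must still hold for it:

```python
from fractions import Fraction

# e2 = sum_{k>=1} 1/((n+1)(n+2)...(n+k)); accumulate the partial products exactly.
def e2_partial(n, terms=60):
    total, prod = Fraction(0), Fraction(1)
    for k in range(1, terms + 1):
        prod *= Fraction(1, n + k)   # prod = 1/((n+1)...(n+k))
        total += prod
    return total

# Check 0 < e2 < 1/n for a range of n, with no floating-point error involved.
checks = [0 < e2_partial(n) < Fraction(1, n) for n in range(1, 20)]
print(all(checks))
```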
 
  • Like
Likes dRic2, StoneTemplePython, QuantumQuest and 2 others
  • #45
archaic said:
A try at question 10.

I appreciate your efforts, but can this way lead to any safe conclusion? Can you come up with some other way?
 
  • Like
Likes archaic
  • #47
I'll try problem 9.
For a space with coordinates ##x^i##, a one-form is ##\omega = \omega_i dx^i##, assuming summation over dummy indices like ##i## here. Integrating over curve C with parameter t, with ##x^i = x^i(t)##,
$$\int_C \omega = \int_C \omega_i \frac{dx^i}{dt} dt$$
Using Mathematica to do the mathematical grunt work, I find that this problem's integral has the value 5.

The exterior derivative of ω is
$$\nu = d\omega = \frac12 \left( \frac{\partial\omega_j}{\partial x^i} - \frac{\partial\omega_i}{\partial x^j}\right) dx^i \wedge dx^j$$
where the wedge denotes antisymmetry. For this problem, ##\nu = z dx \wedge dz##.

Taking a further exterior derivative, I find ##d\nu = dz \wedge dx \wedge dz = 0##, and ν is thus closed. This result can be proved more generally:

d(d(any n-form)) = 0

as a consequence of the antisymmetry of the wedge product together with the symmetry of mixed partial derivatives.
 
  • #48
lpetrich said:
I'll try problem 9.
For a space with coordinates ##x^i##, a one-form is ##\omega = \omega_i dx^i##, assuming summation over dummy indices like ##i## here. Integrating over curve C with parameter t, with ##x^i = x^i(t)##,
$$\int_C \omega = \int_C \omega_i \frac{dx^i}{dt} dt$$
Using Mathematica to do the mathematical grunt work, I find that this problem's integral has the value 5.

The exterior derivative of ω is
$$\nu = d\omega = \frac12 \left( \frac{\partial\omega_j}{\partial x^i} - \frac{\partial\omega_i}{\partial x^j}\right) dx^i \wedge dx^j$$
where the wedge denotes antisymmetry. For this problem, ##\nu = z dx \wedge dz##.

Taking a further exterior derivative, I find ##d\nu = dz \wedge dx \wedge dz = 0##, and ν is thus closed. This result can be proved more generally:

d(d(any n-form)) = 0

as a consequence of the antisymmetry of the exterior derivative.
The results are correct, but I want to see the steps. E.g. there are only six steps to get 5 and only 4 to calculate ##\nu##. Shouldn't be too many to write out.
 
  • #49
Here are some evaluations.
For doing the integral, the integrand is
$$ z^2 \frac{dx}{dt} + 2y \frac{dy}{dt} + x z \frac{dz}{dt} = 1^2 \frac{d(t^2)}{dt} + 2(2t) \frac{d(2t)}{dt} + t^2 \cdot 1 \frac{d(1)}{dt} = 2t + 8t + 0 = 10t $$
Integrating it is easy: ##\int (10 t) dt = 5 t^2##, and plugging in the limits of integration gives ##5(1^2) - 5(0^2) = 5##.

For taking the derivative, I do
$$ d(z^2 (dx) + 2y (dy) + x z (dz)) = 2z (dz \wedge dx) + 2 (dy \wedge dy) + z (dx \wedge dz) + x (dz \wedge dz) = - 2z (dx \wedge dz) + z (dx \wedge dz) = z (dx \wedge dz) $$
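A numerical cross-check of the line integral (my own sketch; the parametrization γ(t) = (t², 2t, 1) on [0, 1] is taken from the computation above):

```python
# Pull omega = z^2 dx + 2y dy + x z dz back along gamma(t) = (t^2, 2t, 1)
# and integrate over [0, 1] with a midpoint Riemann sum.
def pullback(t):
    x, y, z = t * t, 2 * t, 1.0
    dx, dy, dz = 2 * t, 2.0, 0.0
    return z * z * dx + 2 * y * dy + x * z * dz   # simplifies to 10 t

N = 100_000
integral = sum(pullback((i + 0.5) / N) for i in range(N)) / N
print(integral)   # should be very close to 5
```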
 
  • Like
Likes fresh_42
  • #50
lpetrich said:
Here are some evaluations.
For doing the integral, the integrand is
$$ z^2 \frac{dx}{dt} + 2y \frac{dy}{dt} + x z \frac{dz}{dt} = 1^2 \frac{d(t^2)}{dt} + 2(2t) \frac{d(2t)}{dt} + t^2 \cdot 1 \frac{d(1)}{dt} = 2t + 8t + 0 = 10t $$
Integrating it is easy: ##\int (10 t) dt = 5 t^2##, and plugging in the limits of integration gives ##5(1^2) - 5(0^2) = 5##.

For taking the derivative, I do
$$ d(z^2 (dx) + 2y (dy) + x z (dz)) = 2z (dz \wedge dx) + 2 (dy \wedge dy) + z (dx \wedge dz) + x (dz \wedge dz) = - 2z (dx \wedge dz) + z (dx \wedge dz) = z (dx \wedge dz) $$
Yes, and for the sake of completeness, the initial steps are formally:
$$\int_{\Gamma} \omega = \int_{[0,1]} \gamma^*(\omega)=\int_{[0,1]} \omega(d\gamma)=\int_{[0,1]} (z^2 dx +2ydy+xzdz)d\gamma $$
 
  • #51
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials in GF(8). These are cubic polynomials with coefficients in GF(2) that cannot be expressed as products of corresponding polynomials for GF(4) and GF(2). I will use x as an undetermined variable here.

For GF(2), the primitive polynomials are x + (0,1) = x, x + 1.

For GF(4), we consider primitive-polynomial candidates x^2 + (0,1)*x + (0,1): x^2, x^2 + 1 = (x + 1)^2, x^2 + x = x(x + 1), x^2 + x + 1. That last one is the only primitive polynomial for GF(4).

For GF(8), we consider primitive-polynomial candidates x^3 + (0,1)*x^2 + (0,1)*x + (0,1): x^3, x^3 + 1 = (x^2 + x + 1)*(x + 1), x^3 + x = x*(x + 1)^2, x^3 + x + 1, x^3 + x^2 = x^2*(x + 1), x^3 + x^2 + 1, x^3 + x^2 + x = x*(x^2 + x + 1), x^3 + x^2 + x + 1 = (x + 1)^3.

Thus, GF(8) has primitive polynomials x^3 + x + 1 and x^3 + x^2 + 1.

(b) There is a problem here. A basis is easy to define for addition: {1, x, x^2}, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus (Z_2)^3. The multiplicative group is, however, Z_7, and it omits 0. That group has no nontrivial subgroups, so it's hard to identify a basis for it.

(c) That is a consequence of every finite field GF(p^n) being a subfield of an infinite number of finite fields GF(p^{mn}), each one with a nonzero number of primitive polynomials with coefficients in GF(p^n). Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.
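The degree-3 list in part (a) can be brute-forced: a cubic over GF(2) is reducible exactly when it has a linear factor, i.e. a root in {0, 1}. A quick sketch of that check (my own, not from the post):

```python
# Evaluate the monic cubic x^3 + a x^2 + b x + c over GF(2).
def value(a, b, c, x):
    return (x ** 3 + a * x ** 2 + b * x + c) % 2

# Keep the cubics with no root in GF(2) = {0, 1}.
irreducible = [(a, b, c)
               for a in (0, 1) for b in (0, 1) for c in (0, 1)
               if value(a, b, c, 0) != 0 and value(a, b, c, 1) != 0]
print(irreducible)   # (a, b, c) = (0, 1, 1) and (1, 0, 1): x^3+x+1 and x^3+x^2+1
```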

I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, itself: for all elements a of F relative to F, (x - a) is primitive. Thus, F has N primitive polynomials. For GF(p^{mn}) relative to GF(p^n), I will call the number N(m). One can count all the possible candidate polynomials for GF(p^{mn}), and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If N is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where μ is the Möbius mu function: (-1)^(number of distinct prime factors) if the argument is square-free, and 0 otherwise. I don't know if that is correct for a power of a prime.
 
  • #52
lpetrich said:
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials in GF(8). These are cubic polynomials with coefficients in GF(2) that cannot be expressed as products of corresponding polynomials for GF(4) and GF(2). I will use x as an undetermined variable here.

For GF(2), the primitive polynomials are x + (0,1) = x, x + 1.

For GF(4), we consider primitive-polynomial candidates x^2 + (0,1)*x + (0,1): x^2, x^2 + 1 = (x + 1)^2, x^2 + x = x(x + 1), x^2 + x + 1. That last one is the only primitive polynomial for GF(4).

For GF(8), we consider primitive-polynomial candidates x^3 + (0,1)*x^2 + (0,1)*x + (0,1): x^3, x^3 + 1 = (x^2 + x + 1)*(x + 1), x^3 + x = x*(x + 1)^2, x^3 + x + 1, x^3 + x^2 = x^2*(x + 1), x^3 + x^2 + 1, x^3 + x^2 + x = x*(x^2 + x + 1), x^3 + x^2 + x + 1 = (x + 1)^3.

Thus, GF(8) has primitive polynomials x^3 + x + 1 and x^3 + x^2 + 1.

(b) There is a problem here. A basis is easy to define for addition: {1, x, x^2}, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus (Z_2)^3. The multiplicative group is, however, Z_7, and it omits 0. That group has no nontrivial subgroups, so it's hard to identify a basis for it.

(c) That is a consequence of every finite field GF(p^n) being a subfield of an infinite number of finite fields GF(p^{mn}), each one with a nonzero number of primitive polynomials with coefficients in GF(p^n). Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.

I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, itself: for all elements a of F relative to F, (x - a) is primitive. Thus, F has N primitive polynomials. For GF(p^{mn}) relative to GF(p^n), I will call the number N(m). One can count all the possible candidate polynomials for GF(p^{mn}), and one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If N is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where μ is the Möbius mu function: (-1)^(number of distinct prime factors) if the argument is square-free, and 0 otherwise. I don't know if that is correct for a power of a prime.
Although your language is in part a bit unusual, the basic ideas are correct.
To finish what you've deduced for part a), we can write
$$
\mathbb{F}_8 \cong \mathbb{F}_2[x]/(x^3+x+1) \cong \mathbb{F}_2[x]/(x^3+x^2+1)
$$
Part c) is basically correct, although I haven't checked the formulas. But to consider ##\mathbb{F}_{p^n}## is the right idea. The argument can be simplified a lot. All these fields are algebraic over their prime field ##\mathbb{F}_p##, so all of them have to be included in the algebraic closure ##\mathbb{A}_p##. But with each new ##n## we get a larger field and all of them must be part of ##\mathbb{A}_p##, and ##n## doesn't stop. If we want to "construct" ##\mathbb{A}_p##, then it can be shown, that
$$
\mathbb{A}_p = \cup_{n \in \mathbb{N}} \mathbb{F}_{p^{n!}} \text{ with the chain } \mathbb{F}_{p^{1!}} < \mathbb{F}_{p^{2!}} < \ldots
$$
where the inclusions of subfields are strict.

As a hint for part b) - and this is the usual way to do it in all of these cases - simply define a number ##\xi## which satisfies ##\xi^3+\xi +1=0##. It is the same thing we do to get from ##\mathbb{R}## to ##\mathbb{C} \cong \mathbb{R}[x]/(x^2+1) \cong \mathbb{R}[ i ]##: we define a number ## i ## which satisfies ##i^2 + 1 = 0## and simply call it ##i##. Now try to express all elements of ##\mathbb{F}_8## in terms of ##\mathbb{F}_2=\{0,1\}## and ##\xi##, i.e. ##\mathbb{F}_8 = \mathbb{F}_2[\xi]\,.##
 
Last edited:
  • Like
Likes lpetrich
  • #53
QuantumQuest said:
I appreciate your efforts but can this way lead to any safe conclusion?Can you come up with some other way?
Maybe for another challenge :p
 
  • Like
Likes QuantumQuest
  • #54
I will now continue with Problem 1
About (b), one defines GF(8) in terms of GF(2) as polynomials (0 or 1)*x^2 + (0 or 1)*x + (0 or 1). One defines addition and multiplication in the usual way for polynomials, but for multiplication, one must divide by one of the primitive polynomials and take the remainder. Another way of interpreting this is to suppose that x is one of the roots of one of the primitive polynomials. Those primitive polynomials: x^3 + x + 1 and x^3 + x^2 + 1.

(c) I have an insight into the issue of how many primitive polynomials an extended field has relative to its original field. I had earlier found a formula for N(m), the number of primitive polynomials of a field F' whose elements are polynomials up to degree m-1 over the original field F, so that F' ~ F^m. In terms of the number N of elements of F,
$$ \sum_{\sum_k k m_k = n} \prod_k P(N(k),m_k) = N^n $$
where P is the Pochhammer symbol ##P(a,m) = a(a+1)\cdots(a+m-1)/m!##.

This looks like a difficult equation to solve for the N(k)'s, but there is a nice trick for doing so. Multiply by s^n and add:
$$ \prod_k \left( \sum_m P(N(k),m) s^{km} \right) = \sum_n (Ns)^n $$
Use the negative-power generalization of the binomial theorem:
$$ \prod_k (1 - s^k)^{-N(k)} = (1 - Ns)^{-1} $$
Take the logarithm:
$$ - \sum_k N(k) \log (1 - s^k) = - \log (1 - Ns) $$
Expand in powers of s, using the familiar log(1+x) series:
$$ \sum_{m|n} \frac1m N(n/m) = \frac1n N^n $$
Instead of complicated nonlinear equations in the N(k)'s, we have linear equations in them. That should make it easier to solve for them.
 
  • #55
lpetrich said:
I will now continue with Problem 1
About (b), one defines GF(8) in terms of GF(2) as polynomials (0 or 1)*x^2 + (0 or 1)*x + (0 or 1). One defines addition and multiplication in the usual way for polynomials, but for multiplication, one must divide by one of the primitive polynomials and take the remainder. Another way of interpreting this is to suppose that x is one of the roots of one of the primitive polynomials. Those primitive polynomials: x^3 + x + 1 and x^3 + x^2 + 1.

(c) I have an insight into the issue of how many primitive polynomials an extended field has relative to its original field. I had earlier found a formula for N(m), the number of primitive polynomials of a field F' whose elements are polynomials up to degree m-1 over the original field F, so that F' ~ F^m. In terms of the number N of elements of F,
$$ \sum_{\sum_k k m_k = n} \prod_k P(N(k),m_k) = N^n $$
where P is the Pochhammer symbol ##P(a,m) = a(a+1)\cdots(a+m-1)/m!##.

This looks like a difficult equation to solve for the N(k)'s, but there is a nice trick for doing so. Multiply by s^n and add:
$$ \prod_k \left( \sum_m P(N(k),m) s^{km} \right) = \sum_n (Ns)^n $$
Use the negative-power generalization of the binomial theorem:
$$ \prod_k (1 - s^k)^{-N(k)} = (1 - Ns)^{-1} $$
Take the logarithm:
$$ - \sum_k N(k) \log (1 - s^k) = - \log (1 - Ns) $$
Expand in powers of s, using the familiar log(1+x) series:
$$ \sum_{m|n} \frac1m N(n/m) = \frac1n N^n $$
Instead of complicated nonlinear equations in the N(k)'s, we have linear equations in them. That should make it easier to solve for them.
Please call the polynomials irreducible, because that's what counts. A polynomial is primitive if its coefficients are all coprime, which doesn't make much sense over a field in general and especially over ##\mathbb{F}_2##. There are primitive elements of a field extension, namely those which generate it, say ##\xi##. In our example, if we take the easier polynomial of the two, we have ##\mathbb{F}_8=\mathbb{F}_2[\xi]## with ##\xi^3+\xi+1=0##. The polynomial ##x^3+x+1## is called the minimal polynomial, and it is irreducible over ##\mathbb{F}_2##, i.e. it cannot be factored in ##\mathbb{F}_2[x]##, which in this case automatically means it has no zeros in ##\mathbb{F}_2##.

Now obviously ##\xi \neq 0##, so ##\mathbb{F}^*_8 = \mathbb{F}_8 - \{0\} \cong \mathbb{Z}_7 = \langle a\,\vert \,a^7=1 \rangle## and we can choose ##a=\xi##. From there you can directly calculate all ##\xi^n## in terms of a linear ##\mathbb{F}_2-##basis of ##\mathbb{F}_8## given by ##\{1,\xi,\xi^2\}##. These seven vectors are what is asked for, because with them, we can see all addition and multiplication rules without having to fill in complete tables.
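The seven requested vectors can be generated mechanically. A sketch (my own), representing a₀ + a₁ξ + a₂ξ² as the tuple (a₀, a₁, a₂) and using ξ³ = ξ + 1:

```python
# Multiply an element of F_8 = F_2[xi] by xi, using xi^3 = xi + 1 over GF(2):
# xi * (a0 + a1 xi + a2 xi^2) = a2 + (a0 + a2) xi + a1 xi^2.
def times_xi(v):
    a0, a1, a2 = v
    return (a2, (a0 + a2) % 2, a1)

powers = [(1, 0, 0)]            # xi^0 = 1
for _ in range(7):
    powers.append(times_xi(powers[-1]))
print(powers[:8])               # coordinates of xi^0 .. xi^7 over {1, xi, xi^2}
```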
 
  • Like
Likes Janosh89
  • #56
As a hint for problem 3. Termination conditions are stated to be:
- - - -

(For avoidance of doubt, this refers to playing one 'full' game, which is complete upon termination, and termination occurs on the first occurrence of (a) result of coin toss equals ##\text{Other}## or (b) the player elects to stop.)

If you like, you may include an additional condition: (c) the carnival operator doesn't want you to play all day and so will stop you and force a "cash out" if you've hit a payoff of 1 million (or 1,000,001 or 1,000,002 -- in the case of overshoots).

- - - -

Another hint: there are several ways to solve the problem. One technique that I particularly like would be familiar to Pascal, and, I think, Cauchy would approve.
 
  • #57
StoneTemplePython said:
If you like, you may include an additional condition: (c) the carnival operator doesn't want you to play all day and so will stop you and force a "cash out" if you've hit a payoff of 1 million (or 1,000,001 or 1,000,002 -- in the case of overshoots).
At least for me "find a solution that would profit from this idea" is much more difficult than the original problem. I know ways to solve the problem, but no way where considering a payoff of 1 million would be involved.
 
  • #58
More on Problem 1
For a field F with N elements, the number of irreducible monic polynomials of degree n I call N(n), and I'd shown
$$ \sum_{\sum_k k m_k = n} \left( \prod_k P(N(k),m_k) \right) = N^n = \sum_{m|n} \frac{n}{m} N(n/m) $$
taking my final expression in my previous post and multiplying it by n.

This calls for a Möbius transform, where
$$ f(n) = \sum_{m|n} g(m) \longleftrightarrow g(n) = \sum_{m|n} \mu(m) f(n/m) $$
where ##\mu(n)## is the Möbius function of n.

Using it here gives
$$ N(n) = \frac{1}{n} \sum_{m|n} \mu(m) N^{n/m} $$
Since ##\mu(n)## can be negative as well as positive, proving that N(n) is always positive for N > 1 is a challenge.
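Whatever the sign pattern of μ, the formula can at least be verified for small n by brute force over GF(2). This is my own sketch, encoding polynomials as bitmasks (bit i holds the coefficient of x^i):

```python
# Moebius function by trial division.
def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # square factor
            res = -res
        p += 1
    return -res if n > 1 else res

def mulmod2(a, b):              # multiply polynomials over GF(2), no reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def monic(d):                   # bitmasks of all monic polynomials of degree d
    return range(1 << d, 1 << (d + 1))

def count_irreducible(n):       # brute force: no factor of degree <= n/2
    return sum(
        all(mulmod2(g, h) != f
            for d in range(1, n // 2 + 1)
            for g in monic(d) for h in monic(n - d))
        for f in monic(n))

def N_formula(n, N=2):
    return sum(mobius(m) * N ** (n // m) for m in range(1, n + 1) if n % m == 0) // n

results = [(count_irreducible(n), N_formula(n)) for n in range(1, 6)]
print(results)   # the pairs should agree for each n
```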
 
  • #59
mfb said:
At least for me "find a solution that would profit from this idea" is much more difficult than the original problem. I know ways to solve the problem, but no way where considering a payoff of 1 million would be involved.

I could argue it either way as to which setup is more intuitive... which is why I put (c) as optional.

The original problem, literally interpreted, has a countable set of outcomes / state space. The use of (c) truncates that and makes the set have a finite number of outcomes-- which, I think, may be easier for some to work with.
 
  • #60
Solution to #3
The average win for heads or tails is 2, with total probability 4/5.
A risk neutral player will play as long as her expectation is positive. She will quit if her expectation is ≤ 0.
If she has accrued x, then her expectation for the next play is 2⋅4/5 - x⋅1/5. That is positive for x < 8 and ≤ 0 for x ≥ 8.
Thus she will quit when x ≥ 8, assuming "other" has not occurred.

We must now determine the expected value of that strategy. Let p = 2/5.
If she gets to x = 7, then with prob 2p her expected win will be 9.
There are various ways to get to x = 7:
by 7 1's with prob p⁷, or
by 4 1's and a 3 with prob 5⋅p⁵, or
by 2 3's and a 1 with prob 3⋅p³.
Thus the expected win via x = 7 is (p⁷⋅2p + 5p⁵⋅2p + 3p³⋅2p)⋅9.

She can also win by having x = 6 and getting a 3, and there are various ways to get x = 6,
by 6 1's with prob p⁶, or
by 3 1's and a 3 with prob 4p⁴, or
by 2 3's with prob p².
Each of these pay 9 with total prob = p⁷ + 4p⁵ + p³.

Finally, she can also win by having x = 5 and getting a 3. There are 2 ways of getting x = 5,
by 5 1's with prob p⁵, or
by 2 1's and a 3 with prob 3p³.
Each of these pay 8 with total prob = p⁶ + 3p⁴.

Adding the three cases, we get the expected win to be about 3.37, which is what a risk neutral player should pay.
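As a cross-check, the same number falls out of a value recursion instead of path counting. A sketch (my own, under the same reading of the game: heads pays 1 and tails pays 3, each with prob 2/5, and "other" forfeits the stake):

```python
# V(x): expected payoff of the "quit once x >= 8" strategy from stake x.
# V(x) = x for x >= 8 (cash out); otherwise keep playing:
# V(x) = (2/5) V(x+1) + (2/5) V(x+3), the 1/5 "other" branch paying 0.
V = {x: float(x) for x in range(8, 11)}
for x in range(7, -1, -1):
    V[x] = 0.4 * V[x + 1] + 0.4 * V[x + 3]
print(round(V[0], 4))   # agrees with the path count of about 3.37
```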
 
  • Like
Likes mfb and StoneTemplePython
  • #61
Zafa Pi said:
Solution to #3
The average win for heads or tails is 2 with probability 4/5.
A risk neutral player will play as long as her expectation is positive. She will quit if her expectation is ≤ 0.
If she has accrued x, then her expectation for the next play is 2⋅4/5 - x⋅1/5. That is positive for x < 8 and ≤ 0 for x ≥ 8.
Thus she will quit when x ≥ 8, assuming "other" has not occurred.

We must now determine the expected value of that strategy. Let p = 2/5.
If she gets to x = 7, then with prob 2p her expected win will be 9.
There are various ways to get to x = 7:
by 7 1's with prob p⁷, or
by 4 1's and a 3 with prob 5⋅p⁵, or
by 2 3's and a 1 with prob 3⋅p³.
Thus the expected win via x = 7 is (p⁷⋅2p + 5p⁵⋅2p + 3p³⋅2p)⋅9.

She can also win by having x = 6 and getting a 3, and there are various ways to get x = 6,
by 6 1's with prob p⁶, or
by 3 1's and a 3 with prob 4p⁴, or
by 2 3's with prob p².
Each of these pay 9 with total prob = p⁷ + 4p⁵ + p³.

Finally, she can also win by having x = 5 and getting a 3. There are 2 ways of getting x = 5,
by 5 1's with prob p⁵, or
by 2 1's and a 3 with prob 3p³.
Each of these pay 8 with total prob = p⁶ + 3p⁴.

Adding the three cases we get the expected win to be 3.37 which is what a neutral player should pay.

Nicely done.

For the benefit of other readers, I'd belabor the point that you first came up with a stopping rule: to stop when the expected value is no longer positive.

The expected cost of playing another round goes up in proportion to your current stake -- specifically the proportion is ##\frac{1}{5}##. On the other hand the expected benefit of another play is constant. Hence there is a single point of interest -- the maximum -- which you found at ##8##.
- - - -
Part 2 then is figuring out the expected value, now that you have the stopping rule in place. Your careful approach is how -- I imagine-- Fermat would solve the problem.
- - - - - - - - - - - - - - - - - - - -
It may help some people to see a state diagram of the process to work through those path probabilities. I've dropped this in below.

There is a dummy state 's' which denotes the starting state. All other node labels correspond to the stake value at that state. This should be mostly interpretable, though the software I use is a bit clunky at times, e.g. with respect to node 5.

A.png


- - - -
And if one is thinking about the process in reverse -- i.e. expected values flowing probabilistically back to state 's' -- people may want to consider the reversed state diagram, below.

A_reversed.png
 

Last edited:
  • #62
StoneTemplePython said:
Part 2 then is figuring out the expected value, now that you have the stopping rule in place. Your careful approach is how -- I imagine-- Fermat would solve the problem.
I have a picture of Fermat and he talks to me, so I can't take full credit.
QuantumQuest said:
optional:
How many rounds would it take on average for the game to terminate? (You may assume a mild preference for shorter vs longer games in the event of any tie breaking concerns.)

Now suppose the player doesn't care about the score and just loves flipping coins -- how long will the game take to terminate, on average, in this case? (by @StoneTemplePython)
Solution to optional:
The expected number of rounds for the game to terminate = ∑n⋅prob game ends on nth round, n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" comes up on the nth toss + prob game terminates by a win on the nth toss.
Let p = ⅖.

Round)
1) ⅕ + 0
2) (either a 1 or a 3) 2p⋅⅕ + 0
3) (either 1&1, or 1&3, or ...) 4p² + (3 3s) p³
4) (no 3s) p³⋅⅕ + (1 3 & 2 1s) 3p³ + (2 3s & 1 1) 3p³ + (2 3s & a 1 then any) 6p⁴
5) (no 3s) p⁴⋅⅕ + (1 3 & 3 1s) 4p⁴ + (3 1s & 3 then 3) 4p⁵
6) (no 3s) p⁵⋅⅕ + (1 3 & 4 1s) 5p⁵ + (4 1s & 3 then any) 10p⁶ + (5 1s then 3) p⁶
7) (no 3s) p⁶ + (6 1s then 3) p⁷
8) (no 3s) p⁷ + (7 1s then any) 2p⁸
Now multiply the round by its total prob and add them up to get about 2.8 expected rounds.

If she just keeps on flipping regardless of score, the expected number of rounds = ∑ n⋅(⅘)ⁿ⁻¹⋅⅕ for n ≥ 1, which equals 5.
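The closing geometric-distribution claim is easy to confirm with a partial sum (my own quick check; the tail beyond n = 500 is negligible):

```python
# Each toss ends the game with probability 1/5, so with no stopping rule the
# number of rounds is geometric with mean 1 / (1/5) = 5.
s = sum(n * (4 / 5) ** (n - 1) * (1 / 5) for n in range(1, 501))
print(s)   # very close to 5
```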
 
  • #63
Zafa Pi said:
I have a picture of Fermat and he talks to me, so I can't take full credit.

I liked this quite a bit. It could also be quite useful for problem 6.

Zafa Pi said:
The expected number of rounds for the game to terminate = ∑n⋅prob game ends on nth round, n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" is rolled at the nth turn + prob game terminates by a win at the nth roll...

If she just keeps on flipping regardless of score the expected number of rounds = ∑ n⋅(⅘)ⁿ⁻¹⋅⅕ for n ≥ 1 = 5.

This is right, and gives a quick and easy upper bound on the expected number of rounds with the stopping rule in place.

Zafa Pi said:
Solution to optional:
The expected number of rounds for the game to terminate = ∑n⋅prob game ends on nth round, n ∈ [1,8] (see post #60).
Prob game ends on nth round = prob "other" is rolled at the nth turn + prob game terminates by a win at the nth roll.
Let p = ⅖.

Round)
1) ⅕ + 0
2) (either a 1 or a 3) 2p⋅⅕ + 0
3) (either 1&1, or 1&3, or ...) 4p² + (3 3s) p³
4) (no 3s) p³⋅⅕ + (1 3 & 2 1s) 3p³ + (2 3s & 1 1) 3p³ + (2 3s & a 1 then any) 6p⁴
5) (no 3s) p⁴⋅⅕ + (1 3 & 3 1s) 4p⁴ + (3 1s & 3 then 3) 4p⁵
6) (no 3s) p⁵⋅⅕ + (1 3 & 4 1s) 5p⁵ + (4 1s & 3 then any) 10p⁶ + (5 1s then 3) p⁶
7) (no 3s) p⁶ + (6 1s then 3) p⁷
8) (no 3s) p⁷ + (7 1s then any) 2p⁸
Now multiply the round by its total prob and add them up to get about 2.8 expected rounds.

The result here is awfully close but not quite right. I'd encourage you to write out the exact probabilities in the post.

For others' benefit, I'd state it as having a counter vector

##\mathbf c = \begin{bmatrix}
1\\
2\\
\vdots\\
7\\
8
\end{bmatrix}##

and a probability of terminating on that turn vector

##\mathbf p= \begin{bmatrix}
?\\
?\\
\vdots\\
?\\
?
\end{bmatrix}
##

all entries in ##\mathbf p## would be real non-negative. And since we know that the game must terminate in one of those 8 rounds, and the termination times correspond to mutually exclusive events, the probability vector sums to one, i.e.

##\mathbf 1^T \mathbf p = \sum_{k=1}^8 p_k = 1##
- - - -
So, for the expected number of turns calculation, you have

##\text{expected time till termination} = \mathbf c^T \mathbf p = \sum_{k=1}^8 (c_k) p_k = \sum_{k=1}^8 (k)p_k##

- - - -
Interesting point: if the game starts at a certain other state -- not "s" though-- one gets ##2.838## as the expected rounds till termination. The expected time from "s", then, is strictly greater than this (inclusive of any rounding effects).
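That remark can be reproduced with a time-to-termination recursion. A sketch (my own, assuming the start state 's' behaves like stake 0 and the "certain other state" is stake 1):

```python
# T(x): expected number of further tosses with the stop rule "quit at x >= 8".
# T(x) = 0 once stopped; otherwise one toss happens, then with prob 2/5 each
# we move to x+1 or x+3 (the 1/5 "other" branch also ends the game).
T = {x: 0.0 for x in range(8, 11)}
for x in range(7, -1, -1):
    T[x] = 1 + 0.4 * T[x + 1] + 0.4 * T[x + 3]
print(round(T[1], 3), round(T[0], 3))   # 2.838 from stake 1; larger from the start
```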
 
Last edited:
  • #64
StoneTemplePython said:
The result here is awfully close but not quite right. I'd encourage you to write out the exact probabilities in the post.
Change ⅕ to p/2.
1•p/2 = .2
2•2p⋅p/2 = .32
3•(2p³ + p³ = 3p³) = .576
4•((p³ + 3p³ + 3p³ (corrected))⋅p/2 + 6p⁴ = 9.5p⁴) = .9728
5•(5p⁴⋅p/2 + 4p⁵ = 6.5p⁵) = .3328
6•(6p⁵⋅p/2 + 11p⁶ = 14p⁶) = .344
7•(p⁶⋅p/2 + p⁷ = 1.5p⁷) = .0172
8•(p⁷⋅p/2 + 2p⁸ = 2.5p⁸) = .013

Adding the bolds = 2.7758 ≈ 2.8.
You're right there was an error at round 4, but to the nearest tenth it's still the same. Good looking out.
I waited for others to solve it, but think no one did because it was a bit messy and needed careful attention. The only real idea was the stopping rule.
 
  • #65
StoneTemplePython said:
not quite right
Wait, I had round 4 correct (except the first bold + shouldn't have been bold).
So where is the error?
 
  • #66
Zafa Pi said:
Wait, I had round 4 correct (except the first bold + shouldn't have been bold).
So where is the error?

let me try to transcribe the below -- I'm not concerned about minor rounding nits -- to the first or second decimal is fine.

(disclaimer: what I show below may look inconsistent on significant figures as I'm copying and pasting from python, etc.)

Zafa Pi said:
Change ⅕ to p/2.
1•p/2 = .2
2•2p⋅p/2 = .32
3•(2p³ + p³ = 3p³) = .576
4•((p³ + 3p³ + 3p³ (corrected))⋅p/2 + 6p⁴ = 9.5p⁴) = .9728
5•(5p⁴⋅p/2 + 4p⁵ = 6.5p⁵) = .3328
6•(6p⁵⋅p/2 + 11p⁶ = 14p⁶) = .344
7•(p⁶⋅p/2 + p⁷ = 1.5p⁷) = .0172
8•(p⁷⋅p/2 + 2p⁸ = 2.5p⁸) = .013

Adding the bolds = 2.7758 ≈ 2.8.
You're right there was an error at round 4, but to the nearest tenth it's still the same. Good looking out.
I waited for others to solve it, but think no one did because it was a bit messy and needed careful attention. The only real idea was the stopping rule.

##\mathbf v :=
\left[\begin{matrix}0.2\\0.32\\0.576\\0.9728\\0.3328\\0.344\\0.0172\\0.013\end{matrix}\right] = \begin{bmatrix}
1\\
2\\
3\\
4\\
5\\
6\\
7\\
8
\end{bmatrix} \circ
\left[\begin{matrix}0.2\\0.16\\0.192\\0.2432\\0.06656\\0.0573333333333333\\0.00245714285714286\\0.001625\end{matrix}\right]
= \mathbf c \circ \mathbf p##

where ##\circ## denotes element wise multiplication (Hadamard product).

I believe I got the transcription right but let me know if I missed something.
- - - -

going through it:

as you've said:
##\mathbf c^T \mathbf p = \text{sum}\big(\mathbf v\big) = \mathbf 1^T \mathbf v = 2.7758##

But
##\mathbf 1^T \mathbf p = \sum_{k=1}^8 p_k = 0.923175 \lt 1##, which means you haven't included all paths that lead to termination. For avoidance of doubt, we know that all paths terminate with probability one. We know it for several reasons... you've already implicitly shown this by getting the upper bound case -- i.e. the expected time of 5, for the person who just loves coin tossing.
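Spelling the check out in a few lines of Python, using the two vectors exactly as transcribed above (so the small rounding artifacts carry over):

```python
# Per-round contributions v (round number times its probability), as transcribed,
# and the implied per-round termination probabilities p = v / c.
v = [0.2, 0.32, 0.576, 0.9728, 0.3328, 0.344, 0.0172, 0.013]
c = list(range(1, 9))                  # round counter 1..8
p = [vk / ck for vk, ck in zip(v, c)]  # elementwise division recovers p

expected = sum(v)    # c^T p, equivalently 1^T v
total_prob = sum(p)  # 1^T p -- should be 1 if every terminating path is counted

print(round(expected, 4))    # 2.7758
print(round(total_prob, 6))  # 0.923175, strictly less than 1
```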

- - - -
I know of at least two other approaches besides enumeration that could get you there -- I tend to think they are easier though enumeration is a perfectly valid approach.

- - - -
edit:
Your probability vector is off by just under 8 percentage points in total. This is entirely attributable to just one of the slots... i.e. you are correct in ##7## of the ##8## cases you've listed, but one of them is off by roughly ##8## percentage points.
 
Last edited:
  • Like
Likes Zafa Pi
  • #67
StoneTemplePython said:
which means you haven't included all paths that lead to termination.
OK,OK, you are correct. Nice observation.
My error was in round 4. I omitted a 3p⁴ = .0768 term which, with exact calculations, brings the total probability to 1.
The expected number of rounds then becomes 3.0831744.
So what is easier than enumeration? Run a zillion trials on a computer and take an average?
Thanks.
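For completeness, here is a quick numeric check of the corrected figures: the per-round termination probabilities built from the coefficients worked out above, with the omitted 3p⁴ folded into round 4 (so 9.5p⁴ becomes 12.5p⁴).

```python
p = 0.4  # probability of rolling a 1 or a 3 (p = 2/5, as in the posts above)

# Per-round termination probabilities from the thread's coefficients,
# with round 4 corrected from 9.5*p**4 to 12.5*p**4.
probs = [0.5 * p, p**2, 3 * p**3, 12.5 * p**4,
         6.5 * p**5, 14 * p**6, 1.5 * p**7, 2.5 * p**8]

total = sum(probs)                                     # should be exactly 1
expected = sum(k * q for k, q in enumerate(probs, 1))  # expected rounds

print(round(total, 10))    # 1.0
print(round(expected, 7))  # 3.0831744
```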
 
  • Like
Likes StoneTemplePython
  • #68
Zafa Pi said:
OK,OK, you are correct. Nice observation.
My error was in round 4. I omitted a 3p4 = .0768 which, with exact calculations brings the total probability to 1.
The expected number of rounds then becomes 3.0831744.
So what is easier than enumeration? Run a zillion trials on a computer and take an average?
Thanks.

Simulations work well in practice, though they wouldn't count for credit here-- something more 'exact' is needed in the spirit of the problem and rules 1 and 4.

My preferred approach is to (a) think forward to reachable termination points i.e. ##\{8,9,10,0\}## -- given you have the stopping rule in place-- and these become your base case(s). Now apply backward induction.

This was my opaque reference earlier:

StoneTemplePython said:
One technique that I particularly like would be familiar to Pascal, and, I think, Cauchy would approve.

though perhaps I could have mentioned Bellman and some others as well.

edit:
Strictly speaking I was thinking of the Pascal approach of drawing the tree and inducting backward in context of pricing this to be a 'fair game'. (Given that you have a stopping rule in place, this is not unlike a Problem of Points.)

The approach is quite flexible, though, and can be easily modified to get expected time till termination.
 
Last edited:
  • #69
StoneTemplePython said:
Simulations work well in practice, though they wouldn't count for credit here-- something more 'exact' is needed in the spirit of the problem and rules 1 and 4.

My preferred approach is to (a) think forward to reachable termination points i.e. ##\{8,9,10,0\}## -- given you have the stopping rule in place-- and these become your base case(s). Now apply backward induction.

This was my opaque reference earlier:
though perhaps I could have mentioned Bellman and some others as well.

edit:
Strictly speaking I was thinking of the Pascal approach of drawing the tree and inducting backward in context of pricing this to be a 'fair game'. (Given that you have a stopping rule in place, this is not unlike a Problem of Points.)

The approach is quite flexible, though, and can be easily modified to get expected time till termination.
If I understand you correctly, it seems as though you will eventually end up with all of the same paths as I did, and your approach isn't any faster.
 
  • #70
Zafa Pi said:
If I understand you correctly it seems as though you will eventually end up with all of the same paths as I did, and your approach isn't any faster.

It depends, I suppose, on how you are generating your paths. The idea is to collapse things/ take advantage of overlapping subproblems. Think trellis lattice or recombining tree (maybe an abuse of language but a common term in finance).

The approach is a single linear scan through the states-- O(n) with bare minimal coefficient-- to get the answer to both the fair pricing and expected rounds till termination questions-- again, assuming you have a stopping rule in place. The approach also has the virtue of making it impossible to double count or omit paths.
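The one-pass scan is easiest to see on a stripped-down example. To be clear, the sketch below is not the game from this thread (whose transition rules aren't restated here); it is a hypothetical chain where each state either terminates or advances to the next state, which is just enough to illustrate the backward recursion ##E_i = 1 + (1 - t_i)E_{i+1}##:

```python
# Toy illustration of the single backward O(n) scan, NOT the actual game.
# State i terminates with probability stop_probs[i], otherwise advances to
# state i+1. Scanning from the terminal end backward, each state's expected
# time till termination is 1 step plus the continuation probability times
# the next state's expected time.
def expected_rounds(stop_probs):
    """stop_probs[i] = hypothetical termination probability in state i."""
    e_next = 0.0  # expected remaining time beyond the final state
    for t in reversed(stop_probs):
        e_next = 1.0 + (1.0 - t) * e_next
    return e_next

# Example: stop with probability 1/2 in each of three states, then a forced stop.
print(expected_rounds([0.5, 0.5, 0.5, 1.0]))  # 1.875
```

The same scan can carry a second accumulator for the fair-price question; double counting and omitted paths are impossible by construction, since each state is visited exactly once.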

(For what it's worth, another way of tackling the problem reduces to (sparse) matrix multiplication.)

I'm happy to post the approach at the end once this thread is closed out.

- - - -
Problem 6 is still open.

I'd be much obliged if someone tackles it. Depending on how one sets up the problem, it is either straightforward or an impossible task.
 
