Can Poker Chip Arrangements Be Optimized for Maximum Profit?

whdahl
Hey everyone. I was pondering how best to optimize a chip arrangement for a poker game. This is the scenario I've thought up:

There are 4 denominations of colored chips with a set value.
White (W) = 0.05
Red (R) = 0.25
Blue (B) = 1.00
Green (G) = 5.00

A player wants to purchase $40 worth of chips. If he must receive exactly 60 chips in total, what is the optimal number of each denomination to give the player?

These two conditions (the $40 buy-in and the 60-chip total) yield two equations:

xW + yR + zB + wG = 40
x + y + z + w = 60
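For example, both (x, y, z, w) = (45, 7, 1, 7) and (25, 27, 2, 6) satisfy this pair: 2.25 + 1.75 + 1.00 + 35.00 = 40 and 1.25 + 6.75 + 2.00 + 30.00 = 40, with 60 chips in each case, so the two equations by themselves clearly don't single out one answer.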

There are 2 equations and 4 unknowns. Where might I find two more equations so that I can solve the system, or is there some method using calculus that would yield a result?
 
What do you consider to be an "optimal" amount? This is a very vague term in this context.
 
whdahl said:
There are 2 equations and 4 unknowns. Where might I find two more equations so that I can solve the equations, or is there some method using calculus that would yield a result?
You also have the constraints that x >= 0, y >= 0, z >= 0, and w >= 0, and that x, y, z, w are integers. And you have your expressed desire for optimality.

In the absence of a constraint to integer values this might be an exercise in "linear programming" -- find the maximum of a linear function in n real-valued variables given a set of linear inequalities that those variables must satisfy.

With the restriction to integer values, this is at worst a matter of searching a finite number of possibilities for an optimum.
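Since the search space here is finite and tiny, you can simply enumerate it. A minimal sketch (Python purely for illustration, with variable names matching the x, y, z, w above) that lists every way to hand out exactly 60 chips worth exactly $40.00, working in cents to sidestep floating-point rounding:

```python
# Enumerate all ways to hand out exactly 60 chips worth exactly $40.00.
# Chip values in cents: white 5, red 25, blue 100, green 500.
TOTAL_CHIPS = 60
TOTAL_CENTS = 4000

feasible = []
for w in range(TOTAL_CHIPS + 1):                  # green chips
    for z in range(TOTAL_CHIPS + 1 - w):          # blue chips
        for y in range(TOTAL_CHIPS + 1 - w - z):  # red chips
            x = TOTAL_CHIPS - w - z - y           # white chips (forced by the count)
            if 5 * x + 25 * y + 100 * z + 500 * w == TOTAL_CENTS:
                feasible.append((x, y, z, w))

print(len(feasible), "feasible arrangements")
```

Every tuple in `feasible` satisfies both of the original equations; choosing among them still needs a ranking, which is the point below.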
 
Another constraint would be that x > y > z > w >= 0 because it is best to have the highest amount of smaller chips. Is there an efficient way, using matrices perhaps, of solving for solutions?
 
whdahl said:
Another constraint would be that x > y > z > w >= 0 because it is best to have the highest amount of smaller chips. Is there an efficient way, using matrices perhaps, of solving for solutions?
If your job were to hand out $42 using exactly 10 chips, would this "constraint" still apply?

When describing optimization problems, a "constraint" is a hard requirement which must be met. Any possible solution must meet each and every constraint. In addition to the constraints, you generally have a way to rank the possible solutions to see which one(s) are best. A solution which is tied for best is "optimal".

"x > y > z > w >= 0" has the form of a constraint. It does not provide a way to rank solutions except in the crudest of ways (all solutions which satisfy the inequality are tied for best).

You have said that "it is best to have the highest amount of smaller chips". One possibility is that you want the solution that gives the player the highest possible number of white $0.05 chips. If multiple solutions maximize the number of white chips you want the solution that gives the player the highest possible number of red $0.25 chips. And so on. Is that what you are after?

A related problem is: https://en.wikipedia.org/wiki/Change-making_problem
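If that lexicographic ranking is what you are after, it drops straight onto a finite search like the one sketched earlier (the `feasible` list is assumed from that snippet):

```python
# Lexicographic ranking: most whites first, ties broken by reds, then blues
# (the green count is then forced by the other three).
best = max(feasible, key=lambda t: (t[0], t[1], t[2]))
x, y, z, w = best
print(f"{x} white, {y} red, {z} blue, {w} green")
```

For the $40 / 60-chip example that search turns up 45 white, 7 red, 1 blue and 7 green ($2.25 + $1.75 + $1.00 + $35.00 = $40.00).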
 
jbriggs444 said:
You have said that "it is best to have the highest amount of smaller chips". One possibility is that you want the solution that gives the player the highest possible number of white $0.05 chips. If multiple solutions maximize the number of white chips you want the solution that gives the player the highest possible number of red $0.25 chips. And so on. Is that what you are after?

Yes.
After reading through that wiki page and another on dynamic programming, it seems this is a problem that can be solved with VBA in Excel.
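For reference, the textbook change-making recurrence from that wiki page minimizes the number of chips for a given amount. A rough sketch (Python here rather than VBA, just for illustration) looks like the following; the fixed 60-chip requirement would need an extra state dimension, or simply the plain enumeration shown earlier:

```python
# Minimum number of chips needed to make amount_cents, ignoring the
# 60-chip requirement. best[a] = fewest chips that sum to a cents.
def min_chips(amount_cents, denominations=(5, 25, 100, 500)):
    INF = float("inf")
    best = [0] + [INF] * amount_cents
    for a in range(1, amount_cents + 1):
        for d in denominations:
            if d <= a and best[a - d] + 1 < best[a]:
                best[a] = best[a - d] + 1
    return best[amount_cents]

print(min_chips(4000))  # 8 -- forty dollars is eight green chips
```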
 