I've come across a bug in a program that calculates rankings from the results of a bridge tournament. It should be fixable by increasing the precision, but it's not obvious what the precision needs to be. (I don't have access to the source and anticipate pushback from the coder, so I need it to be an evidently straightforward change.)
Here's how standard scoring works:
Each hand is played at some number of tables (at most 30, say). Suppose at some table pair X plays the NS cards against pair Y as EW. A score is calculated based on the result at that table. This score is compared with the scores for the same hand when it was played by other pairs. If the hand is played N times in all, the NS pair with the worst result scores 0 points and the pair with the best scores 2(N-1), with the rankings in between linearly interpolated (i.e. 0, 2, 4, ..., 2(N-1)). NS pairs that got the same result share their points, so e.g. two equal tops get 2N-3 each. The EW pairs are scored similarly.
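As a sanity check, that scoring rule can be sketched in a few lines of Python (a hypothetical helper, not the tournament program's code): each pair earns 2 matchpoints per pair it beats and 1 per pair it ties, which reproduces the 0, 2, ..., 2(N-1) ladder and the shared-points rule for ties.

```python
def matchpoints(raw_scores):
    """Matchpoint one hand's NS raw scores: 2 points per pair beaten,
    1 per pair tied, so the worst gets 0 and a clear top gets 2*(N-1)."""
    pts = []
    for s in raw_scores:
        beaten = sum(1 for t in raw_scores if t < s)
        tied = sum(1 for t in raw_scores if t == s) - 1  # exclude self
        pts.append(2 * beaten + tied)
    return pts

# Four tables, two equal tops: each top gets 2N-3 = 5, as described above.
print(matchpoints([400, 420, 450, 450]))  # [0, 2, 5, 5]
```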
An awkward complication is that some hands might be played more often than others. So after the points scoring described above, the scores are rescaled. If a hand was only played M times instead of N times, the points are scaled up by (N-1)/(M-1). This can produce rounding errors. The existing code rounds to one decimal place(!), so 8.25 becomes 8.3 etc., giving some pairs a 0.05 points bonus. Pairs that benefit from this more than others may end up with a correspondingly higher overall points score, when perhaps they should be equal.
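To make the rounding problem concrete, here's a small sketch using exact rationals (the table counts M=5 and N=12 are made up for illustration): scaling 3 matchpoints by (12-1)/(5-1) gives exactly 33/4 = 8.25, which one-decimal half-up rounding inflates to 8.3.

```python
import math
from fractions import Fraction

def rescale(points, m, n):
    # A hand played m times is scaled up to the n-table standard by (n-1)/(m-1).
    return points * Fraction(n - 1, m - 1)

def round_half_up_1dp(x):
    # One-decimal half-up rounding, as the existing program appears to do.
    return math.floor(10 * x + Fraction(1, 2)) / 10

exact = rescale(3, 5, 12)               # Fraction(33, 4), i.e. exactly 8.25
print(exact, round_half_up_1dp(exact))  # 33/4 8.3 -- a 0.05-point bonus
```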
So here's the challenge: how precise do the calculations need to be such that the true overall placings can be deduced from the total points? (The total may be made up of up to 30 scores for individual hands.) Bear in mind that a theoretical exact score may involve a sum of fractions like n/28, n/27, ...
One obvious solution would be to work entirely in integers by scoring each hand out of 2*LCM(29, 28, ..., 2) instead, but that's about 4.6E12. Does this imply the precision would need to be 2E-13?
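For what it's worth, that scale factor can be checked directly: the denominators that can appear after rescaling are M-1 for M up to 30 tables, i.e. 1 through 29, and twice their LCM is indeed about 4.66E12.

```python
import math

# Possible denominators after rescaling are M-1 for M = 2..30, i.e. 1..29.
scale = 2 * math.lcm(*range(1, 30))  # math.lcm needs Python 3.9+
print(scale)  # 4658179125600, about 4.66e12
```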