Ranking the pairs in a bridge match — what precision?
#1
Feb 13, 2013, 12:14 AM

Homework
Sci Advisor
HW Helper
Thanks
P: 9,930

I've come across a bug in a program that calculates rankings from the results of a bridge tournament. It should be fixable by increasing the precision, but it's not obvious what the precision needs to be. (I don't have access to the source and anticipate pushback from the coder, so I need it to be an evidently straightforward change.)
Here's how standard scoring works. Each hand is played at up to some number of tables (at most 30, say). Suppose at some table pair X plays the NS cards against pair Y as EW. A score is calculated based on the result at that table, and this score is compared with the other scores for the same hand when played by other pairs. If the hand is played N times in all, the NS pair that got the worst result scores 0 points and the best gets 2(N−1), with the other rankings linearly interpolated (i.e. 0, 2, 4, ..., 2(N−1)). NS pairs that got the same result share their points, so e.g. two equal tops get 2N−3 each. Similarly for the EW pairs.

An awkward complication is that some hands might be played more often than others, so after the points scoring described above, the scores are rescaled: if a hand was only played M times instead of N times, the points are scaled up by (N−1)/(M−1). This can produce rounding errors. The existing code rounds to one decimal place(!), so 8.25 becomes 8.3 etc., giving some pairs a 0.05-point bonus. Pairs that benefit from this more than others may end up with a correspondingly higher overall points score, when perhaps they should be equal.

So here's the challenge: how precise do the calculations need to be so that the true overall placings can be deduced from the total points? (The total may be made up of up to 30 scores for individual hands.) Bear in mind that a theoretically exact score may involve a sum of fractions like n/28, n/27, ... One obvious solution would be to work entirely in integers by scoring each hand out of 2·LCM(29, 28, ...) instead, but that's 4.6E12. Does this imply the precision would need to be 2E13?
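For concreteness, the scheme above (worst scores 0, best scores 2(N−1), ties sharing points, then rescaling by (N−1)/(M−1)) can be sketched in a few lines of Python. This is purely illustrative; the function names and example numbers are mine, not the tournament program's:

```python
from collections import Counter

def matchpoints(results):
    """Matchpoint a list of NS results for one hand: the worst scores 0,
    the best 2*(n-1), and tied pairs share the points of the positions
    they span (so two equal tops get 2n-3 each)."""
    counts = Counter(results)
    mps = {}
    for r in counts:
        below = sum(c for s, c in counts.items() if s < r)  # pairs beaten
        tied = counts[r]
        # positions below .. below+tied-1 are worth 2*below .. 2*(below+tied-1);
        # the average of that arithmetic run is 2*below + (tied - 1)
        mps[r] = 2 * below + (tied - 1)
    return [mps[r] for r in results]

def rescale(mp, m, n):
    """Scale up a hand played only m times to the n-times scale."""
    return mp * (n - 1) / (m - 1)

# 4 tables, with two equal tops sharing (4 + 6)/2 = 5... no: positions 1 and 2
# from the top here are the 420s, sharing (2 + 4)/2 = 3 each
print(matchpoints([400, 420, 420, 450]))  # -> [0, 3, 3, 6]
print(rescale(4, 12, 30))                 # 4 * 29/11, not a finite decimal
```

The second print shows where the rounding trouble comes from: 4 × 29/11 = 10.5454... has no exact decimal representation, so rounding to one decimal place necessarily perturbs it.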


#2
Feb 13, 2013, 01:39 AM

Sci Advisor
HW Helper
P: 4,300

Can't you just do the calculation with floating-point numbers and only round at the end? There may still be rounding errors, but they will be of the order of the machine precision.



#4
Feb 13, 2013, 08:18 AM

Engineering
Sci Advisor
HW Helper
Thanks
P: 7,288

Ranking the pairs in a bridge match — what precision?
If your analysis that gives 4.6e12 is correct, 64-bit ("double") floating-point precision should be OK, since that is good for about 16 significant figures.
Or you could use a language or a library that supports exact rational arithmetic with arbitrary-precision integers. http://en.wikipedia.org/wiki/Arbitra...ion_arithmetic lists a few.
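As an illustration of the exact-rational route, Python's standard fractions module carries the (N−1)/(M−1) rescaling with no rounding at all; the function name and example numbers here are made up for illustration:

```python
from fractions import Fraction

def rescale_exact(mp, m, n):
    """Exact rescaling: a hand played m times instead of n
    gets its matchpoints multiplied by (n-1)/(m-1)."""
    return Fraction(mp) * Fraction(n - 1, m - 1)

# Two hands from a hypothetical session, summed exactly
total = rescale_exact(8, 5, 30) + rescale_exact(3, 12, 30)
print(total)         # exact rational: 58 + 87/11 = 725/11
print(float(total))  # convert to float only for display
```

Comparisons between totals are then exact, so the tie-versus-not-tie question never depends on rounding.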


#5
Feb 13, 2013, 10:49 AM

Sci Advisor
HW Helper
P: 4,300

Another approach: for how many games does this need to work? Because if the problem is that the 0.05 bonus adds up to, e.g., 1.0 over 20 games, you can fix it by increasing the precision to 0.0005 so that it doesn't influence the end result when rounded to 1 decimal.



#8
Feb 13, 2013, 05:31 PM

HW Helper
P: 6,189

Hmm, how bad can it be...?
The guy may be interested in what the effect is if he replaces all his numbers by doubles. Just adding another digit of precision or introducing a fraction library kind of raises my hackles: the first doesn't really solve the problem, and the second is overkill.


#9
Feb 14, 2013, 04:54 AM

Sci Advisor
HW Helper
P: 4,300

The way I understand it, he doesn't have access to the source code, but somehow there is an external parameter that determines how many decimal places results are rounded to.



#11
Feb1413, 04:13 PM

HW Helper
P: 6,189

From your story I get the impression they have never heard of them! Once they see them at work, they may gain a lot of respect for you! Or are you bent on an integer-type fixed-point solution?

Once upon a time (20 years ago by now) I worked for a company that had to process a lot of measurements. I implemented an algorithm using fixed-point arithmetic, in my naivety thinking that the gain in performance was significant enough to warrant it. (I had only just completed my studies and was still a bit estranged from reality, I'm afraid.) However, my employer complained that the results did not match what was expected well enough. I tried to explain how important it was to use an integer-type algorithm... but he didn't buy it. Just to make a point, I converted the algorithm to floating point to show it would then fit perfectly. Afterward, every calculation was always done with full-length double arithmetic.


#12
Feb 14, 2013, 04:21 PM

Mentor
P: 12,083

Float should make rounding errors (ones that change positions) very unlikely, and double should remove them completely. I agree with AlephZero.


#13
Feb 14, 2013, 05:55 PM

Homework
Sci Advisor
HW Helper
Thanks
P: 9,930

Thanks everyone for your comments.

Yes, I understand that IEEE double gives 53 bits. (The benefit of switching to floating point, of course, is nothing to do with the difference between fixed and floating per se; it's just that double uses a 64-bit word as standard. 64-bit fixed point would be even better.) If my analysis is right then that seems adequate, but is it? Part of the software fix would be applying a threshold to the total scores for considering two values equal. Assuming some error accumulation, we may only have, say, 50-bit accuracy by the end. The total scores could be up to 900, leaving 40 bits for the fractional part. My analysis says real differences could be as small as 2E−12, or about 2^−39. OK, so that doesn't absolutely guarantee no mistakes, but the odds of such a pathological case are so small it won't happen in the life of the universe.

That leaves me with two concerns:
- Is my analysis right? (This is really why I posted on this forum; I thought the problem was mathematically interesting, but nobody has commented on that.)
- What if the code is so archaic that 64-bit formats are not available? I guess I'll just have to ask.
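One way to see how marginal this is: compare the smallest theoretical gap between exact totals, 1/lcm(1, ..., 29), with the spacing of doubles near a total of 900. A quick sketch (Python 3.9+, using math.lcm and math.ulp; the choice of 900 follows the estimate above):

```python
import math
from functools import reduce

lcm = reduce(math.lcm, range(1, 30))  # lcm of 1..29 = 2,329,089,562,800
gap = 1 / lcm                         # smallest theoretical score difference
ulp = math.ulp(900.0)                 # spacing of doubles near a total of 900

print(f"gap = {gap:.3e}, ulp near 900 = {ulp:.3e}, ratio = {gap / ulp:.1f}")
```

The gap works out to about 4.3e-13 against an ULP of about 1.1e-13 near 900, so a single double can still resolve the worst-case difference, but only by a factor of roughly four, which matches the "marginal but astronomically unlikely to matter" conclusion.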


#14
Feb 14, 2013, 05:59 PM

HW Helper
P: 6,189

It really is a realworld problem. 


#15
Feb 14, 2013, 07:14 PM

Engineering
Sci Advisor
HW Helper
Thanks
P: 7,288

But I would guess the system is supposed to be usable without computer software, so it doesn't seem right that it needs extremely accurate calculations. Maybe the theoretical "worst cases" rarely happen in practice?


#16
Feb 14, 2013, 09:13 PM

Homework
Sci Advisor
HW Helper
Thanks
P: 9,930

- each pair gets a set of N scores
- each score is a fraction, p/q, with 0 <= p <= q <= N.
What is the closest two total scores can be without being equal?


#17
Feb 15, 2013, 12:51 PM

Mentor
P: 12,083

This is a trivial lower bound on the deviation between scores, as all scores are a multiple of this value. At the same time, it can be reached, and I think this can be proven by induction. It might require a lot of games, however.


#18
Feb 15, 2013, 03:09 PM

Engineering
Sci Advisor
HW Helper
Thanks
P: 7,288

In practice, I would guess most hands would be played approximately the same number of times, so the realistic resolution is 1/lcm(M, ..., N) where M is fairly close to N.
For example, lcm(1, 2, ..., 30) = 2^4·3^3·5^2·7·11·13·17·19·23·29 = 2.33 × 10^12, but lcm(27, 28, 29, 30) is only 2^2·3^3·5·7·29 = 109620. Also, there will be some complicated constraints on the number of times different hands are played, because (presumably) each player or pair of players can only play their games sequentially, and may have to wait for their next opponents to finish. As an extreme (and completely unrealistic) example, it is probably impossible to have 29 hands each played 30 times and one hand played only once.
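Those factorizations are easy to double-check with Python 3.9+'s math.lcm (which accepts multiple arguments):

```python
import math
from functools import reduce

# lcm of 1..30 versus lcm of just the plausible play counts 27..30
full = reduce(math.lcm, range(1, 31))
narrow = math.lcm(27, 28, 29, 30)

print(full)    # 2329089562800, about 2.33e12
print(narrow)  # 109620
```

The seven-orders-of-magnitude drop is the point: if all hands are played a similar number of times, the denominators stay tiny and single precision would already be comfortable.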

