
Ranking the pairs in a bridge match - what precision?

  1. Feb 13, 2013 #1

    haruspex


    I've come across a bug in a program that calculates rankings from the results of a bridge tournament. It should be fixable by increasing the precision, but it's not obvious what the precision needs to be. (I don't have access to the source and anticipate push back from the coder, so I need it to be an evidently straightforward change.)
    Here's how standard scoring works:
    Each hand is played on up to some number of tables (at most 30, say). Suppose at some table pair X plays the NS cards against pair Y as EW. A score is calculated based on the result at that table. This score is compared with the other scores for the same hand when played by other pairs. If the hand is played N times in all, the NS pair that got the worst result scores 0 points, and the best gets 2(N-1), with other rankings linearly interpolated (i.e. 0, 2, 4... 2(N-1)). NS pairs that got the same result share their points, so e.g. two equal tops get 2N-3 each. Similarly the EW pairs.
    An awkward complication is that some hands might be played more often than others. So after the points scoring described above, the scores are rescaled. If a hand was only played M times instead of N times, the points are scaled up by (N-1)/(M-1). This can produce rounding errors. The existing code rounds to one decimal place(!), so 8.25 becomes 8.3 etc., giving some pairs a 0.05 points bonus. Pairs that benefit from this more than others may end up with a correspondingly higher overall points score, when perhaps they should be equal.
    So here's the challenge: how precise do the calculations need to be such that the true overall placings can be deduced from the total points? (The total may be made up of up to 30 scores for individual hands.) Bear in mind that a theoretical exact score may involve a sum of fractions like n/28, n/27, ...
    One obvious solution would be to work entirely in integers by scoring each hand out of 2*LCM(29, 28...) instead, but that's 4.6E12. Does this imply the precision would need to be 2E-13?
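    To make the scoring concrete, here's a rough sketch in Python (purely illustrative - I don't know what language the real program is in, and all the names are made up), using exact fractions so no rounding enters at all:
    Code:
        from fractions import Fraction

        def matchpoints(raw_scores):
            # 2 points per pair beaten, 1 per pair tied: worst gets 0, best gets 2*(M-1)
            pts = []
            for s in raw_scores:
                beaten = sum(1 for t in raw_scores if t < s)
                tied = sum(1 for t in raw_scores if t == s) - 1
                pts.append(Fraction(2 * beaten + tied))
            return pts

        def rescale(points, M, N):
            # a hand played M times instead of N: scale the points up by (N-1)/(M-1)
            return [p * Fraction(N - 1, M - 1) for p in points]

        raw = [420, 420, -50]                       # NS raw scores at three tables (M=3)
        print(rescale(matchpoints(raw), M=3, N=4))  # [Fraction(9, 2), Fraction(9, 2), Fraction(0, 1)]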
     
  3. Feb 13, 2013 #2

    CompuChip


    Can't you just do the calculation with floating point numbers and only round at the end? There may still be rounding errors, but they will be of the order of the machine precision.
     
  4. Feb 13, 2013 #3

    haruspex


    As I say, I don't have access to the source, and I don't even know what language it is in. Even if floating point is an option, it does not have unlimited precision. So I'd like some basis for saying what precision will always be good enough.
     
  5. Feb 13, 2013 #4

    AlephZero


    If your analysis that gives 4.6e12 is correct, 64-bit floating point ("double" precision) should be OK, since that is good for about 16 significant figures.

    Or you could use a language or a library that supports exact rational arithmetic with arbitrary-precision integers. http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic lists a few.
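    For instance, Python's fractions module does this out of the box (just an illustration of the idea - any exact-rational library would do). The per-hand scores and the totals stay exact, so two totals can be compared without any rounding threshold:
    Code:
        from fractions import Fraction

        # hypothetical rescaled per-hand scores for two pairs
        pair_a = [Fraction(57, 2), Fraction(32, 3), Fraction(81, 7)]
        pair_b = [Fraction(55, 2), Fraction(34, 3), Fraction(80, 7)]

        total_a, total_b = sum(pair_a), sum(pair_b)
        print(total_a, total_b, total_a > total_b)   # exact comparison, no epsilon needed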
     
  6. Feb 13, 2013 #5

    CompuChip


    Another approach: for how many games does this need to work? Because if the problem is that the bonus of 0.05 adds up to e.g. 1.0 over 20 games, you can fix it by increasing the precision to 0.0005 so that it doesn't influence the end result when rounded to 1 decimal.
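    Numerically (a quick worst-case check in Python, assuming 20 hands and the figures above):
    Code:
        hands = 20
        worst_error_1dp = 0.05        # rounding each hand to 1 decimal can add up to 0.05 per hand
        worst_error_4dp = 0.00025     # rounding each hand to the nearest 0.0005 instead
        print(round(hands * worst_error_1dp, 6))   # 1.0   - enough to change a ranking
        print(round(hands * worst_error_4dp, 6))   # 0.005 - invisible once the total is shown to 1 decimal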
     
  7. Feb 13, 2013 #6

    haruspex


    Yes, I realise that might work, depending on the language it's written in. But since it's going to be a battle anyway, I need to be sure of my fix. So, partly, I'm looking for confirmation of my analysis.
    And I quite possibly would have done that had I written the software, but I see no prospect of getting this guy to make such a radical change.
     
  8. Feb 13, 2013 #7

    haruspex


    The .05 was the relatively simple case that made me aware of the bug. But as I explained in the OP, the exact scores can be a sum of fractions with denominators up to 29. This means that a difference like 1/28-1/29 could be a genuine difference between two totals, not just rounding error. Or even 1/26-1/27-1/28+1/29, etc.
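    Just to show how small those genuine differences can get (checked with Python's Fraction, purely as an illustration):
    Code:
        from fractions import Fraction as F

        d1 = F(1, 28) - F(1, 29)                          # 1/812, about 1.2e-3
        d2 = F(1, 26) - F(1, 27) - F(1, 28) + F(1, 29)    # 55/285012, about 1.9e-4
        print(d1, float(d1))
        print(d2, float(d2))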
     
  9. Feb 13, 2013 #8

    I like Serena


    Hmm, how bad can it be...?
    The guy may be interested in what the effect is if he replaces all his numbers by doubles.
    Just adding another digit of precision or introducing a fraction library kind of raises my hackles - the first doesn't really solve the problem and the second is overkill. :wink:
     
  10. Feb 14, 2013 #9

    CompuChip


    The way I understand it he doesn't have access to the source code, but somehow there is an external parameter that determines to how many decimal places results are rounded.
     
  11. Feb 14, 2013 #10

    haruspex


    No, there's no external parameter, but I am assuming that increasing precision to some specified number of bits should be straightforward, if that number of bits is not too great. I'd like to be able to announce to the programmer, and other customers, "just increase it to N bits and the whole problem will be solved". When I sat down to analyse it, I was surprised how tricky it is mathematically.
     
  12. Feb 14, 2013 #11

    I like Serena


    Imagine the great impression you can make when introducing floating point numbers.
    From your story I get the impression they have never heard of them!
    Once they see them at work, they may gain so much respect for you!! :tongue:
    Or are you bent on an integer-type fixed-point solution?

    Once upon a time (20 years ago by now) I worked for a company that had to process a lot of measurements.
    I implemented an algorithm using fixed-point arithmetic, in my naivety thinking that the gain in performance was significant enough to warrant it.
    (I had only just completed my studies and was still a bit estranged from reality, I'm afraid.)
    However, my employer complained that the results did not match what was expected well enough.
    I tried to explain how important it was to use an integer-type algorithm... but he didn't buy it.
    Just to make a point, I converted the algorithm to floating point, to show that it would then match perfectly.
    ... afterward every calculation was always done with full double-precision arithmetic. o:)
     
  13. Feb 14, 2013 #12

    mfb


    Unless the code is very hacky (e.g. storing scores as integer multiples of 0.1), I would expect that the internal numbers are floats or doubles anyway. The coder would just have to move the rounding from the individual hands to the display of the results.
    Float should make rounding errors (changing positions) very unlikely, and a double should remove them completely; I agree with AlephZero.
     
  14. Feb 14, 2013 #13

    haruspex


    Thanks everyone for your comments.
    Yes, I understand that IEEE double gives 53 bits. (The benefit of switching to floating point, of course, is nothing to do with the difference between fixed and floating per se; it's just that double uses a 64-bit word as standard. 64-bit fixed would be even better.)
    If my analysis is right then that seems adequate, but is it? Part of the software fix would be applying a threshold to the total scores for considering two values to be equal. Allowing for some error accumulation, we may only have, say, 50-bit accuracy by the end. The total scores could be up to 900, leaving 40 bits for the fractional part. My analysis says real differences could be as small as 2E-12, or about 2^-39. OK, so that doesn't absolutely guarantee no mistakes, but the odds of such a pathological case are so small it won't happen in the life of the universe.
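    A quick sanity check of that bit budget (Python 3.9+ for math.ulp; the 2E-12 figure is just my estimate above):
    Code:
        import math

        smallest_real_gap = 2e-12    # smallest genuine difference between totals (estimate above)
        max_total = 900.0            # largest possible total score

        print(math.ulp(max_total))                        # spacing of doubles near 900, about 1.1e-13
        print(smallest_real_gap / math.ulp(max_total))    # roughly 18 ulps of headroom
        # even with ~30 additions the accumulated error stays a small multiple of one ulp,
        # below the smallest genuine gap - but not by a huge margin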
    That leaves me with two concerns:
    - is my analysis right? (This is really why I posted on this forum - I thought the problem was mathematically interesting - but nobody has commented on that.)
    - what if the code is so archaic that 64-bit formats are not available?
    I guess I'll just have to ask.
     
  15. Feb 14, 2013 #14

    I like Serena


    You have too many variables for a proper mathematical analysis.
    It really is a real-world problem.
     
  16. Feb 14, 2013 #15

    AlephZero


    I agree it's interesting - but not being familiar with bridge tournament scoring, I found your description of the system hard to follow (i.e. too much effort required to understand it and then try to analyse it).

    But I would guess the system is supposed to be usable without computer software, so it doesn't seem right that it should need extremely accurate calculations. Maybe the theoretical "worst cases" rarely happen in practice?
     
  17. Feb 14, 2013 #16

    haruspex


    Yes, it's a bit of an investment. I'd no idea it was this intricate until I investigated the problem. Let me try to simplify it a bit without, I hope, misrepresenting it:
    - each pair gets a set of N scores
    - each score is a fraction, p/q, 0 <= p <= q <= N.
    What is the closest two total scores can be without being equal?
    I don't think anybody previously appreciated the existence of the problem. Certainly the theoretical worst cases can be assumed never to happen. Calculating how bad it can be with what frequency would be some orders of magnitude harder, though, so I hoped to have a guaranteed solution.
     
  18. Feb 15, 2013 #17

    mfb


    With exact calculations, 1/lcm(1, 2, ..., N).

    This is a trivial lower bound on the deviation between scores, as all scores are a multiple of this value.
    At the same time, it can be reached, and I think this can be proven with induction. It might require a lot of games, however.
     
  19. Feb 15, 2013 #18

    AlephZero


    In practice, I would guess most hands would be played approximately the same number of times, so the realistic range is 1/lcm(M, M+1, ..., N) where M is fairly close to N.

    For example, lcm(1, 2, ..., 30) = 2^4·3^3·5^2·7·11·13·17·19·23·29 ≈ 2.33 x 10^12.
    But lcm(27, 28, 29, 30) is only 2^2·3^3·5·7·29 = 109620.

    Also, there will be some complicated constraints on the number of times different hands are played, because (presumably) each player or pair of players can only play their games sequentially, and may have to wait for their next opponents to finish. As an extreme (and completely unrealistic) example, it is probably impossible to have 29 hands each played 30 times and one hand played only once.
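    Those LCM figures are easy to check, e.g. with Python's math.lcm (3.9+):
    Code:
        import math

        print(math.lcm(*range(1, 31)))     # 2329089562800, about 2.33e12
        print(math.lcm(27, 28, 29, 30))    # 109620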
     
  20. Feb 15, 2013 #19

    haruspex


    Yes, that's not bad. This suggests a prepass which computes the LCM to use based on the actual numbers of times each hand was played.
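    Something like this sketch (Python 3.9+ for math.lcm; the names are made up): compute, from the observed play counts, the one denominator that makes every rescaled score an exact multiple of 1 over it.
    Code:
        import math

        def scoring_denominator(play_counts):
            # the (N-1)/(M-1) rescaling only introduces denominators M-1,
            # so the LCM of (M-1) over all hands is the only denominator ever needed
            return math.lcm(*[m - 1 for m in play_counts if m > 1])

        # example: most hands played 28-30 times, one played only 27 times
        print(scoring_denominator([30]*10 + [29]*10 + [28]*5 + [27]))   # lcm(29, 28, 27, 26) = 285012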
    Thanks.
     