OK, this is probably a bit too easy for you guys, but we're kind of stuck, so help would be much appreciated. My question is about finding an analytical solution for the expected golf handicap of a virtual golfer with a given mean score and standard deviation within the USGA handicap system. We are a group of people working on improving the Swedish handicap system, and it would be very helpful if we could better understand how to solve analytically for the expected value within the US system.

The principal problem can be formulated this way: if you take a sample of two observations from a population with a given mean, M, and standard deviation, SD, and then discard the higher of the two, what is the expected value of the lower? The actual handicap system is based on the mean handicap differential of the best 10 of the 20 most recent rounds, multiplied by 0.96, so the sample size n is larger, but the principle remains the same.

This is as far as we've gotten. In an infinitely large sample, where you discard the higher half of the observations, the handicap problem would have the solution:

Hcp Index = (mean diff - 0.675 x SD) x 0.96

The mean differential is easily calculated from the known mean score of the population, so it is not taken from the sample mean. This solution rests on the fact that, as we discard the higher half of the scores in an infinitely large sample, we only have to calculate the mean of the lower half of the bell curve of the standard normal distribution. The expected value in an infinitely large sample would then be where alpha equals 0.25, which gives us an approximate z of 0.675.

Our question is whether there is a precise analytical solution for how we should modify the standard deviation to find the correct expected value of the mean of the 10 lowest scores in a sample of 20 from a population with a given mean and standard deviation.
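To make the target concrete, here is a small numerical (not closed-form) sketch in Python of the quantity we are after, assuming the differentials are normally distributed. It computes E[Z_(i:n)], the expected i-th smallest of n standard normal draws, by integrating z against the order-statistic density, and then averages the 10 lowest of 20. The function names are just our own illustrative choices, but any analytical answer could be checked against this:

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_order_stat(i, n, steps=4000, lo=-8.0, hi=8.0):
    """E[Z_(i:n)] for a standard normal sample of size n.

    Integrates z * f_(i:n)(z) over [lo, hi] with the trapezoid rule,
    where f_(i:n)(z) = n!/((i-1)!(n-i)!) * Phi^(i-1) * (1-Phi)^(n-i) * phi.
    """
    coeff = math.comb(n, i) * i  # equals n! / ((i-1)! (n-i)!)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        z = lo + k * h
        F = Phi(z)
        f = coeff * F ** (i - 1) * (1.0 - F) ** (n - i) * phi(z)
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end weights
        total += w * z * f * h
    return total

def expected_best_k_mean(mean, sd, n=20, k=10):
    """Expected mean of the k lowest of n scores drawn from N(mean, sd^2)."""
    z_bar = sum(expected_order_stat(i, n) for i in range(1, k + 1)) / k
    return mean + sd * z_bar
```

For n = 20 and k = 10 the averaged z comes out near -0.77, i.e. somewhat less extreme than the -0.798 that the lower half of an infinite sample would give, so the finite sample size does matter.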
So the snag seems to be how to handle the standard deviation for a given sample size. The only thing we've come up with is using SE = SD / sqrt(n), and that cannot be correct, since we don't want the standard error of a sample mean in a case where we already have the actual mean and standard deviation of the population.

In more general terms, I think we could probably manage if we got help with the more fundamental problem: finding the expected value of the lower of two samples taken from a population with a given mean and standard deviation. Many thanks in advance from a sunny spring Stockholm.
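For the two-observation case there does seem to be a closed form, which we would appreciate having confirmed. Writing min(X1, X2) = (X1 + X2)/2 - |X1 - X2|/2 and using the fact that X1 - X2 is N(0, 2 SD^2), so E|X1 - X2| = 2 SD / sqrt(pi), gives E[min] = M - SD / sqrt(pi), roughly M - 0.564 SD. A quick Monte Carlo check in Python (the trial count and seed are arbitrary):

```python
import math
import random

def expected_min_of_two(mean, sd):
    """Candidate closed form: for X1, X2 iid N(mean, sd^2),
    E[min(X1, X2)] = mean - sd / sqrt(pi)."""
    return mean - sd / math.sqrt(math.pi)

def simulate_min_of_two(mean, sd, trials=200_000, seed=1):
    """Monte Carlo estimate of E[min(X1, X2)] to check the formula."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.gauss(mean, sd), rng.gauss(mean, sd))
    return total / trials
```

The simulated value agrees with the formula to within Monte Carlo noise, which makes us fairly confident about the n = 2 case; it is the step up to the mean of the 10 lowest of 20 that we cannot do analytically.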