# Ratio of Dice Rolls

Consider two independent random variables A and B. Is the random variable A/B independent of B?

My real problem stems from a debate between a colleague and me about the arithmetic mean vs. the geometric mean. My colleague used an example of rolling dice to discredit the arithmetic mean and support the geometric mean. I think the example was flawed. It went something like this:

"Suppose you have two 6-sided dice, A and B. Consider the ratio of all possible dice throws A/B. Since both dice are independent, each combination of A and B has the same probability (1/36). If you take the arithmetic mean of the ratios you get approximately 1.43 . Which means on average, A will be larger than B. We know that both dice have the same probability, so we expect this ratio to be 1 on average. If we take the geometric mean of the ratio of A/B, we get the right result, so the geometric mean is the appropriate mean to use."

This sounds great, but it is flawed. Intuitively, I would think you should take the ratio of the averages of A and B, i.e., ##\frac{\overline A}{\overline B}##. Doing this gives you the correct ratio of 1. But is this right?

I know that the "correct" way is to look at expected values. Given a random variable X, the expected value is:

$$E(X) = \sum \limits_{i=1} ^n \left(x_i \cdot P(x_i)\right)$$

where the ##x_i## are the possible values of X, each with probability ##P(x_i)##. For a single 6-sided die we get:

$$E(A) = \sum \limits_{i=1}^6 \left(a_i \cdot P(a_i) \right) = \frac{1}{6} \sum \limits_{i=1}^6 a_i = 3.5$$

Let's apply this to A/B. Let R = A/B:

$$E(R) = \sum \limits_{i=1}^n \left(r_i \cdot P(r_i)\right)$$

If you carry out this calculation you get approximately ##E(R) = 1.33##. Meaning that on average, A will be 33% greater than B. This is supposed to be the "correct" way. But how can this be? A and B were chosen arbitrarily, and each have the same probability.

I know that in general:

$$E(A/B) \neq \frac{E(A)}{E(B)}$$

I feel like the only way for this dice example to make any sense is IF this is true. In fact, equality would hold if B were independent of A/B. How would one prove the dependence between B and A/B?

In terms of two independent dice throws A and B, what is the meaning of E(A)/E(B), and how does this meaning differ from that of E(A/B)? Also, if we had one die, what is the meaning of E(1/A)? I feel that my result above of "1.33" has a totally different meaning from the one I am giving it. This might be the source of the confusion.

Also, if A and B are independent, then this SHOULD be true:

$$E(A/B) = E(A) \cdot E(1/B)$$

I ran some numbers in Excel and found:

E(A/B) = 1.33
E(A)*E(1/B) = 1.43

Can anyone shed some light on this problem? It's driving me crazy! Thanks in advance.

$$E(R) = \sum \limits_{i=1}^n \left(r_i \cdot P(r_i)\right)$$
What exactly are ##r_i## and ##P(r_i)##?

##r_i## is a single value in the set of possible values of R: one of the 23 distinct ratios you can get when rolling two 6-sided dice. ##P(r_i)## is the probability of rolling a combination that results in the ratio ##r_i##. I apologize if my notation is bad.


pbuk
Ignoring some errors and inconsistencies in your terminology, it is true that the expected value of the ratio A/B is approximately 1.43. But this does NOT mean in any sense that "on average, A will be larger than B". The expected value of the ratio B/A is exactly the same value, and clearly the statement "on average, A will be larger than B and B will be larger than A" is nonsense.

If you carry out this calculation you get approximately ##E(R) = 1.33##.
This is incorrect. There are 36 possible results, not 23, and the calculation of the arithmetic mean is identical to the calculation of the expected value, giving approximately 1.43.
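A quick check (a minimal Python sketch, not from the thread) that enumerates all 36 equally likely rolls and averages the ratios exactly:

```python
from fractions import Fraction

# Enumerate all 36 equally likely (a, b) rolls and average the ratio a/b exactly.
ratios = [Fraction(a, b) for a in range(1, 7) for b in range(1, 7)]
mean = sum(ratios) / len(ratios)

print(mean)         # 343/240
print(float(mean))  # 1.4291666...
```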

I know that in general:

$$E(A/B) \neq \frac{E(A)}{E(B)}$$
So why are you surprised that ## E(\frac AB) \approx 1.43 \neq \frac{E(A)}{E(B)} = 1 ##?

If you take the arithmetic mean of the ratios you get approximately 1.43, which means that on average, A will be larger than B.
I don't know why you say this. It does not follow at all that A is, on average, larger than B. The average of the ratio is indeed ##1.43##, and this is indeed what you will get on average: if you throw the dice enough times, the average of the ratios ##A/B## will be around ##1.43##.

I know that in general:

$$E(A/B) \neq \frac{E(A)}{E(B)}$$

I feel like the only way for this dice example to make any sense is IF this is true. In fact, equality would hold if B were independent of A/B. How would one prove the dependence between B and A/B?
B and A/B are indeed dependent. This is not difficult to see. For example, let's say you know that you have thrown B = 1. Then you know that ##A/B \geq 1##. So information about B does give us nontrivial information about A/B.
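One way to make the dependence concrete (a small Python sketch, not from the thread): if A/B were independent of B, the conditional mean of A/B given B = b could not change with b, but it equals 3.5/b:

```python
from fractions import Fraction

# E(A/B | B = b) = E(A)/b = 3.5/b, which changes with b,
# so A/B and B cannot be independent.
for b in range(1, 7):
    cond_mean = sum(Fraction(a, b) for a in range(1, 7)) / 6
    print(b, cond_mean, float(cond_mean))
```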

Also, if A and B are independent, then this SHOULD be true:

$$E(A/B) = E(A) \cdot E(1/B)$$

I ran some numbers in Excel and found:

E(A/B) = 1.33
E(A)*E(1/B) = 1.43

I calculated ##E(A/B)## and I got approximately 1.43. So perhaps you should show how you calculated ##E(A/B)##.

EDIT: MrAnchovy beat me to it.

Ignoring some errors and inconsistencies in your terminology, it is true that the expected value of the ratio A/B is approximately 1.43. But this does NOT mean in any sense that "on average, A will be larger than B". The expected value of the ratio B/A is exactly the same value, and clearly the statement "on average, A will be larger than B and B will be larger than A" is nonsense.
This is exactly what my colleague was trying to say, which makes no sense to me. So again, what is the meaning of E(A/B) in terms of the dice?

This is incorrect. There are 36 possible results, not 23, and the calculation of the arithmetic mean is identical to the calculation of the expected value, giving approximately 1.43.
There are only 23 distinct ratios you could get with two 6-sided dice; keep in mind that 1/1 = 2/2 = 3/3 = ... and so on. Weighting each distinct ratio by its probability is equivalent to assigning equal probability (1/36) to each possible roll. There was an error in my Excel file. I do get 1.43 computing it with both methods.
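A quick check of both methods (a minimal Python sketch, not from the thread), confirming they agree:

```python
from fractions import Fraction
from collections import Counter

# Method 1: average the ratio over all 36 equally likely rolls.
outcomes = [Fraction(a, b) for a in range(1, 7) for b in range(1, 7)]
method_1 = sum(outcomes) / 36

# Method 2: weight each distinct ratio by its probability (count / 36).
counts = Counter(outcomes)
method_2 = sum(r * Fraction(n, 36) for r, n in counts.items())

print(len(counts))           # 23 distinct ratios
print(method_1 == method_2)  # True
print(float(method_1))       # 1.4291666...
```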

The questions still remain: What is the appropriate interpretation for:

E(1/A)
E(A/B)
E(A)/E(B)

This is exactly what my colleague was trying to say, which makes no sense to me. So again, what is the meaning of E(A/B) in terms of the dice?
It means that if you roll the two dice a certain number of times and record A/B each time, and you then take the arithmetic mean of all the recorded values, you will get a number close to 1.43. This is something you can do on a computer or by throwing dice yourself.
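For instance, a minimal Monte Carlo sketch in Python (an illustration, not from the thread):

```python
import random

# Roll both dice many times, record A/B each time, then take the
# arithmetic mean of the recorded ratios.
random.seed(1)
n = 1_000_000
total = sum(random.randint(1, 6) / random.randint(1, 6) for _ in range(n))
print(total / n)  # close to 1.4292
```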

pbuk
This time micromass beat me to it.

The questions still remain: What is the appropriate interpretation for:

E(1/A)
E(A/B)
E(A)/E(B)
There are no "hidden" interpretations. E(1/A) is simply the expected value of the reciprocal of the value shown by the first die. E(A/B) we have dealt with, and E(A)/E(B) is of course 1 because the dice are identically distributed.

The fact that ## E(\frac AB) = E(A)E(\frac 1B) ## is not immediately obvious (to me at least), but it becomes clearer once you start writing out terms (or if you do it on a spreadsheet and think about the numbers you are adding together).
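For the two dice this can be made explicit: because every pair (a, b) has probability 1/36, the double sum factors (a worked version of the "writing out terms" suggestion):

$$E\left(\frac{A}{B}\right) = \sum \limits_{a=1}^{6} \sum \limits_{b=1}^{6} \frac{a}{b} \cdot \frac{1}{36} = \left(\frac{1}{6} \sum \limits_{a=1}^{6} a\right) \left(\frac{1}{6} \sum \limits_{b=1}^{6} \frac{1}{b}\right) = 3.5 \cdot \frac{49}{120} = \frac{343}{240} \approx 1.43$$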

FactChecker
If you carry out this calculation you get approximately ##E(R) = 1.33##. Meaning that on average, A will be 33% greater than B. This is supposed to be the "correct" way. But how can this be? A and B were chosen arbitrarily, and each have the same probability.
There is no mystery here. When B is bigger than A, the range of possible values of A/B is small (between 1/6 and 1). But when B is smaller than A, the range is much greater (between 1 and 6). So the expected value of the ratio is distorted toward the high side.

For an extreme example that should make things obvious, consider two identical independent random variables A, B, each taking the values 1/1,000,000 and 1 with probability 1/2. Then A/B takes the values 1,000,000, 1, and 1/1,000,000 with probabilities 1/4, 1/2, and 1/4, respectively, so E(A/B) = 250,000 + 1/2 + 1/4,000,000.

Ok thanks for the help everyone. It makes more sense now.

FactChecker
One additional note: expected value is integration (over a probability measure), which is linear, so ##E(aX+b) = a \, E(X) + b##. But 1/X is not a linear function of X, so E(1/X) is not 1/E(X).

An extreme example is the random variable X which takes the values 1,000,000 and 1 with probabilities 1/4 and 3/4. Then E(X) = 250,000 + 3/4, while E(1/X) = 1/4,000,000 + 3/4.
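A quick numerical check of this example (a minimal Python sketch, not from the thread):

```python
# X = 1,000,000 with probability 1/4, X = 1 with probability 3/4.
values = [1_000_000, 1]
probs = [0.25, 0.75]

E_X = sum(x * p for x, p in zip(values, probs))
E_inv_X = sum(p / x for x, p in zip(values, probs))

print(E_X)      # 250000.75
print(1 / E_X)  # ~4.0e-06, nothing like E(1/X)
print(E_inv_X)  # 0.75000025
```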

After all of the help from you guys, this does make sense. But intuitively we would like to think "since A and B are identical, in the long run their ratio should be 1". However, for one single set of rolls (one for A and one for B), A and B will be different 5/6 of the time. So looking at the average ratio over the sets of rolls, it definitely shouldn't be 1. Thinking about it like this cleared up my confusion.

Here's a question for you guys. It's part of the same debate (my colleague used the results of the dice problem to support his answer for this question):

Suppose you have two servers. You wish to compare their performance using their average tip percentage. We assume that each server serves the same set of customers, and every customer has an order of $100. We also assume that each customer tips the servers based on their performance (i.e., the customer doesn't tip just based on the total dollar amount of the meal).

What is the appropriate method to determine the average tip percentage for each server? (The current debate is arithmetic mean vs. geometric mean. I believe that the arithmetic mean is correct, and that it can be derived; however, my colleague believes the geometric mean is appropriate.)

What is the appropriate method to compare the tip percentages of the two servers? (Do you compare total average tip to total average tip? Or perhaps find a ratio of the tips each server receives for each set of customers? Or is there a better way?) Thanks in advance.

I need to rephrase that. Assume that each customer tips a specific percentage of the $100 bill, but will adjust it according to the server's performance. For instance, customer 1 might tip 20% on average overall, but 15% on average to server A and 22% on average to server B, whereas customer 2 might tip 30% on average overall, but 33% to server A and 29% to server B. The same two questions still apply. I hope this makes sense.

FactChecker
Gold Member
What is the appropriate method to compare the tip percentages of the two servers? (Do you compare total average tip to total average tip? Or perhaps find a ratio of the tips each server receives for each set of customers? Or is there a better way?)
When studying percentages, it is usually best to take the logarithm of the data and apply statistical methods to that data. The reason is that you are studying a multiplier of the original cost and trying to separate the random multiplier factor from the predictable, non-random, multiplier factor. Taking the logarithms converts that into a linear model where the random part is added. Then linear statistical methods, like regression, can be applied.

When studying percentages, it is usually best to take the logarithm of the data and apply statistical methods to that data. The reason is that you are studying a multiplier of the original cost and trying to separate the random multiplier factor from the predictable, non-random, multiplier factor. Taking the logarithms converts that into a linear model where the random part is added. Then linear statistical methods, like regression, can be applied.
In this particular case we could change the percent signs to dollar signs, and nothing would change (since the total bill is $100). If it were decided that the servers should be compared according to each table, then I would agree. I guess the answer relies on how we answer "who is the better server?".

Suppose we only have 4 different types of customers, and each customer tips a significantly different amount on average. Do we rate each server independently and then compare servers? Or do we compare server to server for each customer type? Assuming 4 customer types, consider the following case:

Tips A receives: 1, 10, 20, 30
Tips B receives: 5, 10, 20, 30

Who is the better server, and by what percent? Obviously B is better.

Comparing individually:
Average tips: A: 15.25, B: 16.25
B/A = 1.07

So server B earns about 7% more in tips.

Comparing against each customer type:
B/A: 5, 1, 1, 1

The appropriate way to average these ratios is using the geometric mean: GM(5, 1, 1, 1) = 1.50. (Note that GM(B/A) = GM(B)/GM(A).) So by this result, server B earns 50% more tips than A on average. (I would assume this means server B earns 50% more than A for each customer type.)

So given these results, by what percent is B better than A? I would say this: overall, B is 7% better than A, and B earns 50% more than A on average for each type of customer. Is this correct?

FactChecker
In this particular case we could change the percent signs to dollar signs, and nothing would change (since the total bill is $100).
If you are talking only about tips added to $100 checks, you can directly compare the amounts added and use standard linear statistical methods. But the statistical results (mean, MLE, standard deviation, confidence intervals, etc.) will not apply to tip percentages on other sized checks. For that, you will need to represent 0%, 5%, 10% tips as the multipliers 1.0, 1.05, 1.1, respectively, and take the logarithms of those numbers.

Tips A receives: 1, 10, 20, 30
Tips B receives: 5, 10, 20, 30
Comparing against each customer type:
B/A: 5, 1, 1, 1
The appropriate way to average these ratios is using the geometric mean: GM(5, 1, 1, 1) = 1.50
I'm starting to think this is wrong based on the results of the dice problem. 1/4 of the time B earns 5 times the tips that A does; the rest of the time they earn the same. Using the arithmetic mean, we expect B to earn twice as much as A for each customer on average: 1/4 · 5 + 3/4 · 1 = 2. This matches how we analyzed the dice throwing. But is this correct?

We use natural logs (or the geometric mean, same thing) when the additive structure of the problem is logarithmic, for instance interest rates: you have some initial amount that grows exponentially over time, so to isolate the interest rate you have to take the natural log. Then we are free to apply statistical analysis. In this case we don't have a logarithmic additive structure, so I would say we should NEVER use the geometric mean to analyze tips in this server problem. Even if we allowed each customer type to have a total bill different from $100, I don't believe it would matter. The arithmetic mean wins regardless.
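A minimal Python sketch (requires Python 3.8+; an illustration, not from the thread) of the comparisons debated above:

```python
from statistics import fmean, geometric_mean

tips_A = [1, 10, 20, 30]
tips_B = [5, 10, 20, 30]

# Comparison 1: ratio of the overall average tips.
print(fmean(tips_B) / fmean(tips_A))  # ~1.0656 (B ~7% better overall)

# Comparison 2: averages of the per-customer ratios B/A.
ratios = [b / a for a, b in zip(tips_A, tips_B)]
print(fmean(ratios))           # 2.0     (arithmetic mean of the ratios)
print(geometric_mean(ratios))  # ~1.4953 (geometric mean of the ratios)
```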

Can someone confirm or tell me where my thinking is going wrong? I think I'm on the right track, but I don't feel 100% confident.

WWGD
It seems you could also use a standard difference-of-means test, at your chosen level of significance.
