Problem with marbles

  1. May 24, 2005 #1
    Hello all,

    I have a problem that hopefully belongs in this forum.

    Suppose that I have a bag full of M marbles, each marble being one of N colors, N <= M. The number of marbles of color n is p_n, so the sum over n of p_n equals M. Next I want to arrange all of my marbles in order from 1 to M. The question: how many unique ways are there to do this, as a function of the variables p_n?

    I know for example that if each marble is its own separate color, ie M = N, then there are exactly M! different ways to arrange them. Alternatively if there is only one color, ie N = 1, then there is only one way to arrange them. It is the more general case that I have not yet solved.

    David
     
  3. May 24, 2005 #2
    First, permute all the marbles as if they were all distinct. This gives M!. Then divide out what you don't want. What is the overcount factor due to each color? Consider M = 3:
    123
    132
    213
    231
    312
    321

    Now say that 2 and 3 are blue and 1 is red. What do you need to divide by to get to the correct answer? Why? Consider what the duplicates that you discard have in common with the corresponding ones that you accept.
     
  4. May 25, 2005 #3
    Hmm, in this case I divide by 2 because (2,3,*) looks like (3,2,*), etc. I suppose I'm actually dividing by 2!.

    If I had 5 marbles, then I start with 5!. Then if I assume that 3 of them (say #1, #2, #3) are red, with #4 blue and #5 green, I divide by 3! because, eg, (1,2,3,*,*) looks like (3,2,1,*,*), and there are a total of 3! ways to get (r,r,r,*,*). In more general terms, each of the 5! sequences that I start out with belongs to a subset of 3! sequences, all indistinguishable from one another. So yes, I divide in this case by 3!, and in this case the final answer is 5!/3!.

    So now let's say we have 5!/3! sequences. Next let's say that we take the blue and green marbles and set them both to green. We repeat the above line of reasoning: each of my 5!/3! sequences belongs to a subset of 2! sequences, all indistinguishable from one another. So we divide again by 2!.

    Generalizing, I get M! / [p_1! * p_2! * ... * p_N!]. In the specific case that p_i = 1 for all i (so N = M), this reduces to M!. And in the specific case that p_1 = M, all other p_i's = 0 (note 0! = 1), this reduces to M! / M! = 1. Both of these are as expected.
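
    A quick sanity check in Python, if anyone wants to play with it (a minimal sketch, with the color counts made up):

    [code]
    from math import factorial

    def arrangements(counts):
        """Distinct orderings of M marbles, where counts[i] is the
        number of marbles of color i (the multinomial coefficient)."""
        M = sum(counts)
        result = factorial(M)
        for p in counts:
            result //= factorial(p)
        return result

    print(arrangements([1, 1, 1]))  # all distinct: 3! = 6
    print(arrangements([3, 1, 1]))  # 3 red, 1 blue, 1 green: 5!/3! = 20
    print(arrangements([5]))        # one color: 1
    [/code]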

    Is that right? I just reasoned it out as I typed -- not sure if I've made an error.

    David
     
  5. May 25, 2005 #4
    Yep, that's right.
     
  6. May 25, 2005 #5
    OK cool ... on to the next step.

    The above is a prelude to a more difficult (I think) problem that I have in mind. Suppose I roll an N-sided die a total of M times and write down the results as an ordered sequence of M values. In total, there will be N^M different possible sequences. For any individual sequence, suppose we define p_n as the number of times that the die roll comes up as n. (So n is an integer in [1,N], and p_n is an integer in [0,M].) What I am trying to do is calculate what proportion f(p_n) of the N^M sequences have the result n showing up exactly p_n times.

    For example, suppose the die is 6-sided, and we roll it 10 times. We get a total of 6^10 sequences of die rolls. Of these 6^10 sequences, how many get, let's say, a 3 showing up 5 times?

    One way to start would be to take the expression from my previous post and do sums over all possible values of each p_i for i not equal to n. I end up with a complicated expression, though, and I'm thinking there might be an easier way to approach the problem ...

    David
     
  7. May 25, 2005 #6
    First count all the ways you can have the other N - 1 sides come up on the M - p_n rolls. Then count all the ways you can mix the p_n rolls of side n in with the ways to roll the N - 1 sides over M - p_n rolls.

    For example, with 4 rolls and a 3-sided die, and concerned with 2 rolls of side 1, first count the ways to roll 2s and 3s (4 - 2) = 2 times.
    22
    23
    32
    33
    Now find the number of ways to mix 11 with those. For example (mixing 11 with 22),
    2211
    2121
    2112
    1212
    1221
    1122
    Or mixing 11 with 23
    2311
    2131
    2113
    1213
    1231
    1123

    I think that should work.
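
    A brute-force check of this example in Python (a quick sketch, enumerating all 3^4 = 81 sequences):

    [code]
    from itertools import product

    # Every sequence of 4 rolls of a 3-sided die.
    sequences = list(product([1, 2, 3], repeat=4))

    # Count the sequences in which side 1 comes up exactly twice.
    hits = sum(1 for seq in sequences if seq.count(1) == 2)

    print(hits, "of", len(sequences))  # 24 of 81: 6 mixings times 4 sub-sequences
    [/code]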
     
  8. May 26, 2005 #7
    Yes, that is simpler.

    Let's see. There is only one way for the p rolls of side n to come up, since they all show n. Next, the number of ways the other M - p rolls can come up with anything other than n is (N-1)^(M-p). Then we multiply by the number of ways to mix an ordered sequence of p elements with an ordered sequence of M - p elements, ie the number of ways of distributing p elements over M positions, which is, ummm, is it M choose p? Putting these together, we get the expression:

    (N-1)^(M-p) M! / [p!(M-p)!]
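
    Plugging in the earlier example as a check (6-sided die, 10 rolls, a 3 showing up 5 times):

    [tex]
    (6-1)^{10-5} \frac{10!}{5! \, 5!} = 5^5 \cdot 252 = 787500
    [/tex]

    which, out of 6^10 = 60466176 sequences, is about 1.3 percent.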

    This looks a lot like the Poisson distribution:

    http://mathworld.wolfram.com/PoissonDistribution.html

    I need to check all this -- gotta run now though --

    David
     
  9. May 26, 2005 #8
    Yep. Doesn't look like the Poisson to me though.
     
  10. May 27, 2005 #9
    Dude! I'm making progress. (With your help Bicycle, thanks!) Yes, Poisson is a little different.

    I have one final step. Let us once again consider an N-sided die that we roll M times. Let's arbitrarily pick one result of the die roll, say, 3. What I would ultimately like to show is that if we let M go to infinity, then we will find that "most of the time," we get a 3 exactly 1/N of the time.

    In other words, for any individual sequence, let p be the number of times out of M rolls that we get a 3, and count up all of the N^M sequences in which p/M = 1/N. From the above, we know that a given value of p occurs in (N-1)^(M-p) M! / [p!(M-p)!] of the N^M sequences. So the ratio of these two expressions, ie the proportion of sequences in which a 3 shows up 1/N of the time, is (N-1)^(M-p) M! / [p!(M-p)! N^M]. I would like to show that as M goes to infinity, keeping N constant, the assumption that this expression approaches 100 percent (approaches 1) necessarily implies that p/M = 1/N. Let's see if I can make that look nicer. I want to show that:

    [tex]
    \lim_{M \rightarrow \infty} \frac{(N-1)^{M-p} \, M!}{p!\,(M-p)!\,N^{M}} = 1 \;\Rightarrow\; p = \frac{M}{N}
    [/tex]

    But now I'm stuck, possibly because it's been years since I've worked with limits. I've just rediscovered l'Hopital's rule, but haven't been able to get very far with it. I also note that the exponential terms should dominate over the factorial terms, I think. (Well, they usually win out, right?) Any hints for me?

    David
     
  11. May 27, 2005 #10
    I don't understand what you're trying to do--your explanation is unclear to me. The percentage of the time that you have p_3/M = 1/N is the percentage of the time that you have p_3 = M/N, which is (N - 1)^(M-M/N) M! / [(M/N)!(M-M/N)!*(N^M)], assuming M/N is an integer (the probability is 0 if it isn't).
     
  12. May 27, 2005 #11
    I suppose I should add the assumption that M is an integer multiple of N, so that M/N is an integer.

    The way I phrased the question is to assume the limit equals 1 and use that to solve for p in terms of M and N to demonstrate that p = M/N. Although we could turn the problem around like this: assume first that p = M/N, and then demonstrate that the above limit is 1. It seems intuitive to me that it should be, but I haven't been able to prove it yet.

    David
     
  13. May 27, 2005 #12
    Well, I plugged in 100 for M and 10 for N and got 0.131865 (in the formula (N - 1)^(M-M/N) M! / [(M/N)!(M-M/N)!*(N^M)]). Then I tried it with 200 for M and 10 for N and got 0.093636. It seems to be heading for 0 if it's going anywhere.
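
    For the record, those two numbers are easy to reproduce in Python (a quick sketch using math.comb, available in Python 3.8+):

    [code]
    from math import comb

    def fraction_exact(M, N):
        """Fraction of the N^M sequences in which a given side
        comes up exactly M/N times (assumes N divides M)."""
        p = M // N
        return comb(M, p) * (N - 1) ** (M - p) / N ** M

    print(fraction_exact(100, 10))  # ~0.131865
    print(fraction_exact(200, 10))  # ~0.093636
    [/code]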
     
  14. May 28, 2005 #13
    You're right.

    I will have to reformulate the problem once again.

    Let me explain how this came up. In another thread, I am discussing the multiple worlds interpretation (MWI) of quantum mechanics. I won't go into all the esoterics of that discussion. So I'll just say that for the context of this problem with the N-sided die rolled M times, I am imagining that this results in N^M "parallel worlds," with a separate "observer" in each world. Typically in these discussions we consider preparing M identical spin-whatever particles, with spin n being an integer in [1,N] upon measurement, and the predicted probability of measuring spin = n being m_n. For this thread, let's stick with the example of an N-sided die, and define the predicted probability of getting an n on any particular throw as being the probability "measure" m_n. Typically, for a die roll, we assume that each side is equiprobable, ie m_n = 1/N. But in the more general case, we simply require that [tex] \sum_{n = 1}^{N} m_{n} = 1 [/tex].

    Let us suppose that an experimenter rolls the die M times and calculates the percentage of time that the n-th result shows up, p_n/M, and compares this with the predicted probability measure m_n. We will imagine that at the end of the M die rolls, we have N^M different experimenters (or "observers") who are living in N^M "parallel worlds."

    One of the esoterics of this other thread is that we are entertaining the notion that each of these N^M worlds is, ontologically speaking, on an "equal footing." So here's the difficulty. If the predicted probability measure m_n is NOT equal to 1/N, then as M goes to infinity, "most" of the observers in these N^M worlds will come to the conclusion that the predicted probability measure m_n is WRONG. So my objective is to show this, but more rigorously. In other words, I want to show that m_n = 1/N is the ONLY probability measure such that "most" of the observers will conclude that the predicted probability measure, m_n, is correct.

    The way that any individual observer tests the predicted probability measure m_n is to compare it with the observed quantity, p_n/M. The difference between the predicted value m_n and the observed value p_n/M is the "error" [tex] \epsilon_n = |m_{n} - p_{n}/M| [/tex]. So what I am looking for is an expression for m_n such that the following Criterion is true: for any arbitrary cutoff [tex] \delta [/tex], the proportion f_n of these N^M worlds in which the error is less than the cutoff, [tex] \epsilon_n < \delta [/tex], approaches 1 in the limit as M approaches infinity. It seems intuitive that m_n = 1/N will meet the Criterion. But what I am trying to prove is that m_n = 1/N is the ONLY expression for m_n that will meet the Criterion. Thus, my method is to write down the equation [tex] \lim_{M \rightarrow \infty} f_{n} = 1 [/tex] and show that this implies that m_n = 1/N.

    So the issue is how to calculate f_n. Initially, I was thinking of defining [tex] \delta = 1/M [/tex], so that as M approaches infinity, the cutoff approaches zero. The reason I did this was to make the calculation of f_n easier: the denominator is the total number of worlds N^M, and the numerator is the number of worlds in which the predicted value matches exactly with the observed value, m_n = p_n/M. But as we have seen, for f_n defined this way, the limit goes to zero.
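
    In hindsight those numbers make sense: approximating the peak term with Stirling's formula (my back-of-envelope, worth double-checking) gives roughly

    [tex]
    \frac{M!}{(M/N)! \, (M - M/N)!} \frac{(N-1)^{M - M/N}}{N^{M}} \approx \frac{1}{\sqrt{2 \pi M \frac{1}{N} \left( 1 - \frac{1}{N} \right)}}
    [/tex]

    which comes out to about 0.133 for M = 100, N = 10 and 0.094 for M = 200, matching the values above, and tending to 0 like [tex] 1/\sqrt{M} [/tex].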

    So I suppose what I should do is to define the cutoff to be arbitrary but fixed, ie not a function of M. So the numerator in the expression for f_n is the sum over all of the values of p_n that are close to m_n, ie such that [tex] |m_n - p_n/M| < \delta [/tex].
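
    To make that concrete, here is a minimal Python sketch (my own naming, and only a numerical experiment, not a proof) that computes f_n for a fixed cutoff:

    [code]
    from math import comb

    def f_n(M, N, delta, m_n=None):
        """Proportion of the N^M sequences whose observed frequency
        p/M is within delta of the predicted measure m_n."""
        if m_n is None:
            m_n = 1 / N  # default: the equiprobable measure
        total = 0
        for p in range(M + 1):
            if abs(m_n - p / M) < delta:
                total += comb(M, p) * (N - 1) ** (M - p)
        return total / N ** M

    # With m_n = 1/N the proportion climbs toward 1 as M grows:
    for M in (10, 100, 1000):
        print(M, f_n(M, 6, delta=0.05))
    [/code]

    Running the same loop with, say, m_n = 0.5 sends the proportion to 0 instead, which is exactly the dichotomy I'm after.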

    Does my statement of the problem make sense to you?

    David
     