# Probability Distributions (Countably Infinite Domain)

Suppose we have a "particle" which can be at some position x∈N (where N={0,1,2,...}). The probability that the particle is at position x can be written as:
P(x) = 1/2^(x+1)

Now suppose we have two particles p1 and p2. To keep things simple, assume that each particle individually has the same probability distribution as above (that is, the distribution it would have if the other particle were absent).

The main condition is that we don't want to allow both particles to be at the same position (note that we are talking about either (i) at the same time or (ii) assuming both particles to be still).

The question is: what would be the probability that p1 is at position a and p2 is at position b (where a≠b) at some given time?

-----
Here is my own attempt for the answer:
The probability should be:
C*P(a)*P(b)
where C is a constant that we have to determine.

We define a constant c to be:
c = (1/2)^2 + (1/4)^2 + (1/8)^2 + (1/16)^2 + ...
Now we define:
C=1/(1 - c)
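As a concrete check, the constants above can be computed numerically. This is a minimal sketch, assuming the intended distribution is P(x) = 1/2^(x+1) (consistent with the series defining c); the infinite sums are truncated:

```python
# Minimal sketch of the normalisation above, assuming P(x) = 1/2^(x+1)
# so that c = (1/2)^2 + (1/4)^2 + ... . Sums are truncated at N terms;
# the neglected tails are astronomically small here.

def P(x):
    return 1.0 / 2 ** (x + 1)

N = 60  # truncation point for the infinite sums

c = sum(P(x) ** 2 for x in range(N))  # weight of the "same position" cases; analytically 1/3
C = 1.0 / (1.0 - c)                   # normalising constant; analytically 3/2

def joint(a, b):
    """Normalised joint probability C * P(a) * P(b), defined for a != b."""
    assert a != b
    return C * P(a) * P(b)
```

Summing `joint(a, b)` over all pairs with a ≠ b recovers (approximately) 1, which is the point of the normalisation.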

==========

The main idea I had in mind was something along these lines. A few months ago I was reading about "spaceships" in Conway's Game of Life. I was thinking: suppose we have a spaceship that launches from the origin and moves along the positive y-axis. At some point along the way there is a "dust storm" (say, initially composed of a thousand or so cells spread over a strip of finite vertical length but infinite horizontal extent -- in terms of a probability distribution). We want to make something like a computer simulation to determine the probability that the spaceship will not be "destroyed" by the dust storm. It seems we would need a more precise definition of the term "destroyed", though.


StoneTemplePython
You haven't specified the mechanism for handling collisions, which could be important, though I have a feeling that I get what it is.

One very simple interpretation is that, so long as ##a \neq b##, we can say ##P(a,b) = P(a)\frac{P(b)}{1 - P(a)}##. This can be interpreted as meaning that when ##a## occurs, the probability of some different ##b## occurring is the same as it was before, except that the sample space is reduced: ##a## has been removed from your very special, countable, weighted 'deck of cards'. In effect, once ##a## has been 'dealt', all remaining cards keep the same relative probabilities amongst themselves, and these probabilities just need to be grossed up / normalized by a factor of ##\frac{1}{1-P(a)}## so that they once again sum to one.
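The 'dealing' rule above can be sketched in a few lines. This is a minimal illustration, assuming the P(x) = 1/2^(x+1) distribution from the opening post:

```python
# Sketch of the 'sequential dealing' interpretation, using the
# P(x) = 1/2^(x+1) distribution from the opening post as an example.

def P(x):
    return 1.0 / 2 ** (x + 1)

def joint_sequential(a, b):
    """P(a, b) = P(a) * P(b) / (1 - P(a)): a is 'dealt' first, then b is
    drawn from the remaining, renormalised sample space."""
    assert a != b
    return P(a) * P(b) / (1.0 - P(a))
```

Note that for a fixed first draw a, the probabilities over all b ≠ a sum to P(a), as the renormalisation intends; note also that this rule is asymmetric, i.e. `joint_sequential(a, b)` generally differs from `joint_sequential(b, a)`.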

You haven't specified the mechanism for handling collisions, which could be important, though I have a feeling that I get what it is.
Well, I was using the term "particle" as an analogy just to denote that only one slot can be occupied by a given particle. I didn't mention it for the following two reasons:
(a) If you take the analogy of particles more seriously then yes, how to manage collisions could be added (according to some reasonable rule). However, I mentioned the following in the original post:
"note that we are talking about either (i) at the same time or (ii) assume both particles to be still"

So you can think of the distributions (in the original post) as essentially being given at a specific instant of time OR with both particles being stationary. Hence the original post can be seen as a simple version of the more general case.

(b) I was originally thinking about the problem more in the context of Conway's GoL (Game of Life). There the idea is that of a "cell" (and the number of active cells can increase or decrease with time) rather than of particles.

==========

Given your response, it seems you have interpreted the question correctly, and the method you describe looks very reasonable.
I was doing something similar: excluding the weight of the prohibited possibilities and then essentially trying to "normalise". However, since the details are somewhat different, I will have to check whether the result actually turns out to be the same in both cases. Then I will be able to give a more detailed answer.

edit:
To elaborate in some detail, here is what I was trying to do in the original post (assuming the individual probability distribution is the same for both particles and denoted by p:N→[0,1]):
take the sum:
c = ∑ [p(i)]^2 (summation from i=0 to infinity)

By doing this I was trying to remove the weight of all "prohibited" possibilities in the combined "sample space" (I don't know what the rigorous term would be). Then, by multiplying by 1/(1-c), I was trying to normalise the probability of the "remaining" possibilities in the sample space to 1.

further edit:
With your method/interpretation of the question, wouldn't we have to distinguish between whether:
(i) we "place" the particle p1 at position a first and the particle p2 at position b second
(ii) we "place" the particle p2 at position b first and the particle p1 at position a second

StoneTemplePython

Saying "i) at the same time or (ii) assume both particles to be still" just doesn't mean much to me. Neither does mentioning the Game of Life, as I haven't looked at it closely recently. Having some idea of the underlying process that gets you to this joint distribution is quite helpful.

For example, with ##P(a) = 0.25## and ##P(b) = 0.0625##, summing the two dealing orders gives ##P(a)\frac{P(b)}{1 - P(a)} + P(b)\frac{P(a)}{1 - P(b)} = 0.25\frac{0.0625}{1 - 0.25} + 0.0625\frac{0.25}{1 - 0.0625}##.

- - - -
Btw, I found your posts a bit tough to interpret -- using LaTeX would make it easier for others to read and seems to be the custom on this forum.

Saying "i) at the same time or (ii) assume both particles to be still" just doesn't mean much to me. Neither does mentioning the Game of Life, as I haven't looked at it closely recently. Having some idea of the underlying process that gets you to this joint distribution is quite helpful.
Thinking about it a bit more, let's consider all the possible answers:
i) P(a)*P(b) / (1 - P(a))
ii) P(a)*P(b) / (1 - P(b))
iii) P(a)*P(b)*[ 1/(1-P(a)) + 1/(1-P(b)) ]
iv) the method I described in posts#1 and #3

I think that, within a certain context, all of these make sense (though I am doubtful whether they are equivalent in the general case). It seems to me that in (i) to (iii), the choice of which placement is made first, and hence which particle/cell gets its probability "amplified", is also part of the context of the problem.
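A quick numerical comparison suggests the candidates (i)-(iv) above are indeed not equivalent. This sketch assumes P(x) = 1/2^(x+1) and the hypothetical positions a = 1, b = 3; for (iv), the analytic value c = 1/3 for this P is plugged in:

```python
# Comparing candidates (i)-(iv) for P(x) = 1/2^(x+1), a = 1, b = 3
# (so P(a) = 1/4 and P(b) = 1/16). Positions chosen only as an example.

def P(x):
    return 1.0 / 2 ** (x + 1)

a, b = 1, 3
Pa, Pb = P(a), P(b)

cand_i   = Pa * Pb / (1 - Pa)                       # (i):   = 1/48
cand_ii  = Pa * Pb / (1 - Pb)                       # (ii):  = 1/60
cand_iii = Pa * Pb * (1 / (1 - Pa) + 1 / (1 - Pb))  # (iii): = 1/48 + 1/60 = 3/80
cand_iv  = Pa * Pb / (1 - 1.0 / 3)                  # (iv): c = sum P(x)^2 = 1/3 here
```

All four values come out different, so the interpretations genuinely disagree for this distribution.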

However, here is the original context of the problem (I have restricted it to a finite domain to keep things simple; for the infinite domain the description remains the same, with A replaced by N):
Consider the set A={0,1,2,3,4,5,6,7,8,9} and a probability function p:A→[0,1]. The probability that the cell at position x∈A will be marked "1" is given by p(x) (and exactly one cell is marked "1"). Similarly, the probability that the cell at position x will be marked "2" is also given by p(x).

Question: what is the probability that the cells at positions a∈A and b∈A (where a≠b) will be marked "1" and "2" respectively, given the constraint that the same cell can't be marked both "1" and "2"?

Btw, I found your posts a bit tough to interpret -- using LaTeX would make it easier for others to read and seems to be the custom on this forum.
I don't know how to use it.

Stephen Tashi
The question is: what would be the probability that p1 is at position a and p2 is at position b (where a≠b) at some given time?

I think the problem does not have a unique answer and that your method gives one of many possible answers.

Consider a simplified and finite version of such a problem. Suppose there are 5 possible positions and we want the probability of each particle being at the ##k##th position to be 1/5. The joint distribution of the locations of the two particles is a 5x5 table with 25 entries. We want the diagonal entries of the table to be zero so there is no possibility that both particles are in the same position. We want the sum of each row and the sum of each column to be 1/5 since, for example, the sum of the first column gives the probability that one particle is in the 1st position and the other particle is at some other position.

The constraints on the sums of the rows and columns give 10 equations on the entries of the table. The constraint on the diagonal entries leaves (5)(5) - 5 = 20 entries that we must fill in. Considering these 20 entries to be unknowns, we have 10 equations that they must satisfy. So it appears that there could be several different solutions to the 10 equations. (Of course a solution must also obey the inequality constraints that each entry must be in [0,1].) In general, for ##N## positions, there are ##2N## equations and ##N^2 - N## unknowns.

Your method amounts to constructing a solution in a particular manner. First we fill in the table with the joint probability distribution assuming the particles select their positions independently. Then we set the diagonal entries of the table equal to zero. Then we find the appropriate constant to multiply the non-diagonal entries by, so that this restores the sum of entries in the table to 1.0.
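The construction just described can be sketched in a few lines; this assumes the uniform 5-position example:

```python
# Sketch of the table construction described above: N = 5 positions with
# uniform marginals, independent fill, zeroed diagonal, then renormalise.
import numpy as np

N = 5
p = np.full(N, 1.0 / N)        # marginal distribution of each particle

table = np.outer(p, p)         # joint distribution under independence
np.fill_diagonal(table, 0.0)   # forbid both particles at the same position
table /= table.sum()           # renormalise so all entries sum to 1
```

In this uniform case the rescaled table still has row and column sums of 1/5, so the constraints happen to be satisfied; with a non-uniform marginal p, each row sum of the rescaled table becomes p(a)(1 - p(a))/(1 - c) rather than p(a), so this particular construction does not preserve the marginals in general.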

One physical interpretation of your solution is that it describes a process where we select the position of the two particles independently and if two particles land in the same position, we "throw out" that datum and do the selection again.
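That "throw out and select again" process is easy to simulate; here is a minimal Monte Carlo sketch for the uniform 5-position case:

```python
# Monte Carlo sketch of the rejection process described above, for the
# finite example with 5 positions and uniform marginals.
import random

random.seed(0)                 # fixed seed so the run is reproducible
N, trials = 5, 200_000
counts = {}

for _ in range(trials):
    while True:                # redraw until the two positions differ
        a, b = random.randrange(N), random.randrange(N)
        if a != b:
            break
    counts[(a, b)] = counts.get((a, b), 0) + 1

# Each of the 20 ordered off-diagonal pairs should occur with probability
# (1/25) / (1 - 5 * 1/25) = 1/20 = 0.05.
estimate = counts[(0, 1)] / trials
```

The empirical frequency of any fixed pair should hover near 0.05, matching the renormalised-table answer for the uniform case.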

StoneTemplePython
Yes, it does seem that the context of the question needs to be quite clear. Part of the reason I posted the question was to better understand the assumptions involved in arriving at a specific answer.

What's your opinion of the following description (which I wrote in post #5)? Is the question still ambiguous, or is it delineated well enough for a specific answer?
However, here is the original context of the problem (I have restricted it to a finite domain to keep things simple; for the infinite domain the description remains the same, with A replaced by N):
Consider the set A={0,1,2,3,4,5,6,7,8,9} and a probability function p:A→[0,1]. The probability that the cell at position x∈A will be marked "1" is given by p(x) (and exactly one cell is marked "1"). Similarly, the probability that the cell at position x will be marked "2" is also given by p(x).

Question: what is the probability that the cells at positions a∈A and b∈A (where a≠b) will be marked "1" and "2" respectively, given the constraint that the same cell can't be marked both "1" and "2"?

Stephen Tashi
What's your opinion of the following description (which I wrote in post #5)? Is the question still ambiguous, or is it delineated well enough for a specific answer?

The question is still ambiguous. Instead of speaking of a "constraint" you could say that the marks "1" and "2" are placed independently of each other. Then you could ask for the conditional probability that cell x has a "1" in it given that no cell contains both marks. This amounts to "throwing away" cases where a cell contains both marks.
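The conditional probability described here can be computed directly. In this sketch, p is a hypothetical example distribution (a truncated, renormalised version of 1/2^(x+1)) chosen only to make the computation concrete:

```python
# Sketch of the conditional probability above: marks "1" and "2" are placed
# independently according to p over A = {0,...,9}, conditioned on no cell
# receiving both marks. The particular p is a hypothetical example.

weights = [1.0 / 2 ** (x + 1) for x in range(10)]
total = sum(weights)
p = [w / total for w in weights]     # p : A -> [0, 1], sums to 1

c = sum(px ** 2 for px in p)         # P(both marks land on the same cell)

def mark1_given_no_collision(x):
    """P(cell x holds '1' | no cell holds both marks)."""
    return p[x] * (1 - p[x]) / (1 - c)
```

Note that these conditional probabilities sum to 1 over x, since the numerators sum to exactly 1 - c.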

StoneTemplePython
I think the problem does not have a unique answer and that your method gives one of many possible answers.

Consider a simplified and finite version of such a problem. Suppose there are 5 possible positions and we want the probability of each particle being at the k_th position to be 1/5. The joint distribution of the location of the two particles is a 5x5 table with 25 entries...

I think this is the way to go. I was originally going to suggest a 3-location-only problem but shelved that comment. There are special instances, but generally speaking people should tackle finite cases before considering the infinite. I also tend to think knowing how to code helps here --- if you can't figure out a way to even approximately simulate your problem, then you know you aren't thinking about it clearly.

It seems to me there are 3, maybe 4, varieties of approaches here. One is sequential dealing (which I suggested). Another is a simultaneous approach with rejection of batches where there are collisions. The third is a local approach, perhaps Markov chain Monte Carlo -- though frequently this is constructed so that you can get the same results as in the second approach.

The fourth approach: chop up the buckets that the particles can land in into smaller and smaller buckets -- i.e. convert the geometric distribution into an exponential distribution (à la shrinking a Bernoulli process into a Poisson process) so that the probability of collision goes to zero. Technically this is a different problem, but having a way to no longer worry about collisions can be quite nice.

I don't know how to use it.

There is a LaTeX primer here: https://www.physicsforums.com/help/latexhelp/

Keep in mind that there was a point in time for each of us when we did not know LaTeX. (Somewhat recently for me, in fact.) People just decide to put in the work and get better at it, one day at a time.

SSequence
The question is still ambiguous. Instead of speaking of a "constraint" you could say that the marks "1" and "2" are placed independently of each other. Then you could ask for the conditional probability that cell x has a "1" in it given that no cell contains both marks. This amounts to "throwing away" cases where a cell contains both marks.
Yes, that's pretty much what I had in mind. I should perhaps have mentioned more explicitly that the placements of particles 1 and 2 are entirely independent of each other (apart from the fact that we don't want to include cases where their positions are the same).
The word "constraint" can just be read as an informal expression for "conditional probability" (unless there is some definitive subtle difference I am not aware of).

This got me thinking: what if, in a toy model, particles 1 and 2 did "interact" with each other, in the sense that the presence of one particle altered the probability distribution of the other (and vice versa)? In that case, I guess we would just use the modified probability distribution for both particles (and apply the condition of different positions on top of that).
Edit: But the exclusion of the possibility of both particles being at the same position can also be seen as a kind of "interaction", because it too generally alters the probability distribution of a given particle (even if indirectly). So perhaps there is a question of whether there is a genuine distinction between direct and indirect alteration of a particle's probability distribution.

There are special instances, but generally speaking people should tackle finite cases before considering the infinite. I also tend to think knowing how to code helps here --- if you can't figure out a way to even approximately simulate your problem, then you know you aren't thinking about it clearly.
My guess is that it's going to turn out the same way as mentioned before:
One physical interpretation of your solution is that it describes a process where we select the position of the two particles independently and if two particles land in the same position, we "throw out" that datum and do the selection again.

But yes, it could certainly be instructive if one has enough time/interest to do it.
