Probability Distributions (Countably Infinite Domain)

In summary, the conversation discusses the probability of two particles occupying different positions at the same time and a method for calculating this probability. The method involves defining a constant and normalizing the probability distribution. The handling of collisions is also discussed, with the understanding that the particles are either placed at the same time or are stationary. Conway's Game of Life is mentioned as an analogy for the problem.
  • #1
SSequence
Suppose we have a "particle" which can be at some position x∈N (where N={0,1,2,...}). The probability that the particle is at position x can be written as:
P(x) = 1/2^(x+1)

Now suppose we have two particles p1 and p2. To keep things simple, assume that the individual probability distribution for each particle is the same as above (that is, as if the other particle were absent).

The main condition is that we don't want to allow both particles to be at the same position (note that we are talking about either (i) at the same time or (ii) assume both particles to be still).

The question is: what would be the probability that p1 is at position a and p2 is at position b (where a≠b) at some given time?

-----
Here is my own attempt for the answer:
The probability should be:
C*P(a)*P(b)
where C is a constant that we have to determine.

We define a constant c to be:
c = (1/2)^2 + (1/4)^2 + (1/8)^2 + (1/16)^2 + ...
Now we define:
C=1/(1 - c)
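A short Python sketch of this normalization (the truncation at N = 60 terms is my own choice for the check, not part of the post), verifying that the rescaled off-diagonal probabilities sum back to one:

```python
from fractions import Fraction

# Individual distribution P(x) = 1/2^(x+1), truncated at N terms
# (the neglected tail has mass 2**-N, which is negligible here).
N = 60
P = [Fraction(1, 2**(x + 1)) for x in range(N)]

# c = total weight of the prohibited "same position" outcomes under
# independent placement; the series converges to 1/3.
c = sum(p * p for p in P)
C = 1 / (1 - c)                  # normalizing constant, tends to 3/2

def joint(a, b):
    """Probability that p1 is at a and p2 is at b (a != b)."""
    return C * P[a] * P[b]

# The off-diagonal entries sum back to (essentially) 1.
total = sum(joint(a, b) for a in range(N) for b in range(N) if a != b)
```

With this particular P, the geometric series gives c = 1/3 exactly in the limit, so the constant works out to C = 3/2.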

==========

The main idea I had in mind was something along these lines. A few months ago I was reading about "spaceships" in Conway's Game of Life. I was thinking: suppose we have a spaceship that launches from the origin and moves along the positive y-axis. At some point along the way there is a "dust storm" (say, initially composed of a thousand or so cells that stretch over a strip of finite vertical length but infinite horizontal length -- in terms of a probability distribution). We want to make something like a computer simulation to determine the probability that the spaceship will not be "destroyed" by the dust storm. It seems we would need to specify a more precise definition of the term "destroyed", though.
 
Last edited:
  • #2
You haven't specified the mechanism for handling collisions, which could be important, though I have a feeling that I get what it is.

One very simple interpretation is that, so long as ## a \neq b##, we can say ##P(a,b) = P(a)\frac{P(b)}{1 - P(a)}##. This can be interpreted as meaning that when ##P(a)## occurs, the probability of some different ##b## occurring is the same as it was before, except that the sample space is reduced because ##a## has been removed from your very special, countable, weighted 'deck of cards'. In effect, once ##a## has been 'dealt', all remaining cards keep the same relative probabilities amongst themselves, and these probabilities just need to be grossed up / normalized by a factor of ##\frac{1}{1-P(a)}## so that they once again sum to one.
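A quick numerical check of this scheme (a Python sketch using the P(x) = 1/2^(x+1) distribution from post #1; the truncation at N = 60 is an assumption made here for the check):

```python
# Verify that "deal a first, then b from the reduced deck" yields a
# proper conditional distribution: for each fixed a, the probabilities
# P(b)/(1 - P(a)) over b != a sum to 1 (up to the truncated tail).
N = 60
P = [1 / 2**(x + 1) for x in range(N)]

def seq_joint(a, b):
    """Ordered joint probability: a dealt first, then b."""
    return P[a] * P[b] / (1 - P[a])

totals = [sum(P[b] / (1 - P[a]) for b in range(N) if b != a)
          for a in range(5)]
# each entry of totals comes out ~1.0, as the scheme requires
```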
 
  • #3
StoneTemplePython said:
You haven't specified the mechanism for handling collisions, which could be important, though I have a feeling that I get what it is.
Well I was using the term "particle" as an analogy just to denote that only one slot can be occupied by a given particle. I didn't mention it for the following two reasons:
(a) If you take the analogy of particles more seriously then yes, how to manage collisions could be added (according to some reasonable rule). However, I mentioned the following in the original post:
"note that we are talking about either (i) at the same time or (ii) assume both particles to be still"

So you can think of the distributions (in the original post) as essentially being given at a specific instant of time OR both particles being stationary. Hence the original post then can be seen as a simple version of the more general case.

(b) I was originally thinking about the problem more in context of conway's GoL (game of life). There the idea is that of a "cell" (and number of cells that are active can increase or decrease with time) rather than of particles.

==========

Given your response, it seems that you have interpreted the question correctly. The method you are describing seems very reasonable.
I was doing something similar: excluding the weight of prohibited possibilities and then essentially trying to "normalise". However, since the details are somewhat different, I will have to check whether the result actually turns out to be the same in both cases. Then I will be able to give a more detailed answer.

edit:
To elaborate in some detail, here is what I was trying to do in the original post (assuming the individual probability distribution for both particles to be the same, denoted by p:N→[0,1]):
take the sum:
c = ∑ [p(i)]^2 (summation from i=0 to infinity)

By doing this I was trying to remove the weight of all "prohibited" possibilities in the combined "sample space" (don't know what the rigorous term would be). Then by multiplying with 1/(1-c) I was trying to normalise the probability to 1 for the "remaining" possibilities in sample space.

further edit:
With your method/interpretation of the question, wouldn't we have to distinguish between whether:
(i) we "place" the particle p1 at position a first and the particle p2 at position b second
(ii) we "place" the particle p2 at position b first and the particle p1 at position a second
 
Last edited:
  • #4
SSequence said:
Well I was using the term "particle" as an analogy just to denote that only one slot can be occupied by a given particle...
"note that we are talking about either (i) at the same time or (ii) assume both particles to be still"
...
further edit:
With your method/interpretation of the question, wouldn't we have to distinguish between whether:
(i) we "place" the particle p1 at position a first and the particle p2 at position b second
(ii) we "place" the particle p2 at position b first and the particle p1 at position a second

Saying that "(i) at the same time or (ii) assume both particles to be still" just doesn't mean much to me. Neither does mentioning the Game of Life, as I haven't looked at it closely recently. Having some idea of the underlying process that gets you to this joint distribution is quite helpful.

Your questions in your further edit are on point. To be sure (I think I know the answer here but...) we should consider whether (a,b) and (b,a) are different. For example in Poker (Hold 'em), if you receive the Ace of spades and King of spades as hole cards, the ordering doesn't matter -- so you'd be interested in the probability of the king of spades first and the ace of spades as the second card, and also the probability of the ace of spades first and the king of spades second. Extending the analogy to the problem here, if relevant, your interest may be in ##P(a,b) + P(b,a) = P(a)\frac{P(b)}{1 - P(a)} + P(b)\frac{P(a)}{1 - P(b)}##. More concretely, if you see the results ##\{1, 3\}##, your interest may be in the ordered tuples (1,3) and (3,1): ##P(1,3) + P(3,1) = 0.25\frac{0.0625}{1 - 0.25} + 0.0625\frac{0.25}{1 - 0.0625}##
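Evaluating that expression numerically (a one-line Python check; the values 0.25 and 0.0625 are P(1) and P(3) under the P(x) = 1/2^(x+1) distribution from post #1):

```python
# P(1) = 1/4 and P(3) = 1/16 under P(x) = 1/2^(x+1)
P1, P3 = 0.25, 0.0625
prob = P1 * P3 / (1 - P1) + P3 * P1 / (1 - P3)  # = 1/48 + 1/60 = 0.0375
```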

- - - -
Btw, I found your posts a bit tough to interpret -- using LaTeX would make it easier for others to read and seems to be the custom on this forum.
 
  • #5
StoneTemplePython said:
Saying that "(i) at the same time or (ii) assume both particles to be still" just doesn't mean much to me. Neither does mentioning the Game of Life, as I haven't looked at it closely recently. Having some idea of the underlying process that gets you to this joint distribution is quite helpful.
Thinking about it a bit more, let's consider all the possible answers:
i) P(a)*P(b) / (1 - P(a))
ii) P(a)*P(b) / (1 - P(b))
iii) P(a)*P(b)*[ 1/(1-P(a)) + 1/(1-P(b)) ]
iv) the method I described in posts#1 and #3

I think that within a certain context all of these make sense (I am doubtful whether they would be equivalent in the general case). It seems to me that in (i) to (iii), the fact that we make one placement first and then "amplify" the probability for the other particle/cell is also part of the context of the problem. However, here is the original context of the problem (I have restricted it to a finite domain to keep things simple -- for the infinite domain the description remains the same with A replaced by N):
Consider the set A={0,1,2,3,4,5,6,7,8,9}. Also assume a probability function p:A→[0,1]. The probability that a given cell (with position x∈A) in the set A will be marked as "1" is given by p(x) (and only one cell is marked "1"). Also, similarly the probability that a given cell in the set A will be marked as "2" is also given by p(x).

Question: What is the probability that the two cells at positions a∈A and b∈A (where a≠b) will be marked "1" and "2" respectively, given the constraint that a single cell can't carry both the marks "1" and "2"?
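One way to make this concrete is a small Monte Carlo sketch in Python. The post leaves p unspecified for the finite domain, so the uniform distribution p(x) = 1/10 is assumed here purely for illustration, and "constraint" is read as: place the marks independently and discard runs where they collide.

```python
import random

# Simulation of the cell-marking experiment on A = {0,...,9},
# assuming (my choice, not the post's) the uniform p(x) = 1/10.
# Marks "1" and "2" are placed independently; any run where both
# marks land on the same cell is discarded.
random.seed(0)
A = list(range(10))
trials, kept, hits = 200_000, 0, 0
for _ in range(trials):
    one = random.choice(A)       # cell marked "1"
    two = random.choice(A)       # cell marked "2"
    if one == two:
        continue                 # prohibited outcome: thrown away
    kept += 1
    if one == 2 and two == 5:    # the particular pair (a, b) = (2, 5)
        hits += 1

estimate = hits / kept
# Under this rejection scheme the exact answer is
# p(2) * p(5) / (1 - c) = (1/100) / (1 - 10 * (1/10)**2) = 1/90
```

The estimate should land near 1/90 ≈ 0.0111, matching the "exclude prohibited weight, then renormalise" calculation from post #1.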

StoneTemplePython said:
Btw, I found your posts a bit tough to interpret -- using LaTeX would make it easier for others to read and seems to be the custom on this forum.
I don't know how to use it.
 
Last edited:
  • #6
SSequence said:
The question is: what would be the probability that p1 is at position a and p2 is at position b (where a≠b) at some given time?

I think the problem does not have a unique answer and that your method gives one of many possible answers.

Consider a simplified and finite version of such a problem. Suppose there are 5 possible positions and we want the probability of each particle being at the k-th position to be 1/5. The joint distribution of the location of the two particles is a 5x5 table with 25 entries. We want the diagonal entries of the table to be zero so there is no possibility that both particles are in the same position. We want the sum of each row and the sum of each column to be 1/5 since, for example, the sum of the first column gives the probability that one particle is in the 1st position and the other particle is at some other position.

The constraints on the sums of the rows and columns give 10 equations on the entries of the table. The constraint on the diagonal entries leaves (5)(5) - 5 = 20 entries that we must fill in. Considering these 20 entries to be unknowns, we have 10 equations that they must satisfy. So it appears that there could be several different solutions to the 10 equations. (Of course a solution must also obey the inequality constraints that each entry must be in [0,1].) In general, for ##N## positions, there are ##2N## equations and ##N^2 - N## unknowns.
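This constraint counting can be checked directly. A sketch in Python (numpy assumed available) builds the 10-by-20 constraint matrix and computes its rank, confirming the system is underdetermined:

```python
import numpy as np

# Constraint matrix for the 5-position example.  The unknowns are the
# 20 off-diagonal entries t[i][j] (i != j); each of the 5 row sums and
# 5 column sums of the table must equal 1/5.
N = 5
cells = [(i, j) for i in range(N) for j in range(N) if i != j]
M = np.zeros((2 * N, len(cells)))
for k, (i, j) in enumerate(cells):
    M[i, k] = 1.0        # row-sum constraint for row i
    M[N + j, k] = 1.0    # column-sum constraint for column j

rank = np.linalg.matrix_rank(M)   # 9, not 10: the constraints are
# dependent (all row sums and all column sums share the same grand
# total), leaving a 20 - 9 = 11 dimensional family of candidate
# tables before imposing 0 <= t[i][j] <= 1.

# The "independent, then renormalized" table -- every off-diagonal
# entry equal to (1/25) / (1 - 1/5) = 1/20 -- is one valid solution:
t = np.full(len(cells), 1 / 20)
```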

Your method amounts to constructing a solution in a particular manner. First we fill in the table with the joint probability distribution assuming the particles select their positions independently. Then we set the diagonal entries of the table equal to zero. Then we find the appropriate constant to multiply the non-diagonal entries by, so that this restores the sum of entries in the table to 1.0.

One physical interpretation of your solution is that it describes a process where we select the position of the two particles independently and if two particles land in the same position, we "throw out" that datum and do the selection again.
 
  • Like
Likes StoneTemplePython
  • #7
Yes, it does seem that the context of the question needs to be quite clear. Partly the reason I posted the question was to understand better the assumptions involved in arriving at some specific answer.

What's your opinion of the following description (which I wrote in post#5)? Is the question still ambiguous or it is delineated well enough for a specific answer?
However, here is the original context of the problem (I have restricted to finite domain to keep things simple -- for infinite domain the description remains the same with A replaced by N):
Consider the set A={0,1,2,3,4,5,6,7,8,9}. Also assume a probability function p:A→[0,1]. The probability that a given cell (with position x∈A) in the set A will be marked as "1" is given by p(x) (and only one cell is marked "1"). Also, similarly the probability that a given cell in the set A will be marked as "2" is also given by p(x).

Question: What is the probability that the two cells at positions a∈A and b∈A (where a≠b) will be marked "1" and "2" respectively, given the constraint that a single cell can't carry both the marks "1" and "2"?
 
  • #8
SSequence said:
What's your opinion of the following description (which I wrote in post#5)? Is the question still ambiguous or it is delineated well enough for a specific answer?

The question is still ambiguous. Instead of speaking of a "constraint" you could say that the marks "1" and "2" are placed independently of each other. Then you could ask for the conditional probability that cell x has a "1" in it given that no cell contains both marks. This amounts to "throwing away" cases where a cell contains both marks.
 
  • Like
Likes SSequence
  • #9
Stephen Tashi said:
I think the problem does not have a unique answer and that your method gives one of many possible answers.

Consider a simplified and finite version of such a problem. Suppose there are 5 possible positions and we want the probability of each particle being at the k_th position to be 1/5. The joint distribution of the location of the two particles is a 5x5 table with 25 entries...

I think this is the way to go. I was originally going to suggest a 3-location problem but shelved that comment. There are special instances, but generally speaking people should tackle finite cases before considering the infinite. I also tend to think knowing how to code helps here -- if you can't figure out a way to even approximately simulate your problem, then you know you aren't thinking about it clearly.

It seems to me there are 3, maybe 4, varieties of approaches here. One is sequential dealing (which I suggested). Another is a simultaneous approach with rejection of batches where there are collisions. The third is a local approach, perhaps Markov chain Monte Carlo -- though frequently this is constructed so that you get the same results as in the second approach.

The fourth approach is technically a different problem -- chop up the buckets that the particles can land in into smaller and smaller buckets -- i.e. convert the geometric distribution into an exponential distribution (a la shrinking a Bernoulli process into a Poisson process) so that the probability of collision goes to zero. Technically this is a different problem, but having a way to no longer worry about collisions can be quite nice.
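To see that the first two approaches really are different joint distributions, here is a short Python sketch using the example distribution P(x) = 1/2^(x+1) from post #1 (the truncation at N = 60 terms is an assumption made here for the check):

```python
# Sequential dealing vs. batch rejection, for P(x) = 1/2^(x+1).
N = 60                                  # truncation; tail mass ~ 2**-N
P = [1 / 2**(x + 1) for x in range(N)]
c = sum(p * p for p in P)               # collision weight, ~1/3

def sequential(a, b):
    """Deal a first, then b from the renormalized remainder."""
    return P[a] * P[b] / (1 - P[a])

def rejection(a, b):
    """Independent placement; collided batches thrown out and redrawn."""
    return P[a] * P[b] / (1 - c)

# For the ordered pair (1, 3) the two schemes disagree:
#   sequential(1, 3) = (1/4)(1/16) / (3/4) = 1/48  ~ 0.0208
#   rejection(1, 3)  = (1/4)(1/16) / (2/3) = 3/128 ~ 0.0234
```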
SSequence said:
I don't know how to use it.

There is a Latex primer here: https://www.physicsforums.com/help/latexhelp/

Keep in mind that there was a point in time for each of us when we did not know LaTeX. (Somewhat recently for me, in fact.) People just decide to put in the work and get better at it, one day at a time.
 
  • Like
Likes SSequence
  • #10
Stephen Tashi said:
The question is still ambiguous. Instead of speaking of a "constraint" you could say that the marks "1" and "2" are placed independently of each other. Then you could ask for the conditional probability that cell x has a "1" in it given that no cell contains both marks. This amounts to "throwing away" cases where a cell contains both marks.
Yes, that's pretty much what I had in mind. I should perhaps have mentioned more explicitly that the placements of particles 1 and 2 are entirely independent of each other (apart from the fact that we don't want to include cases where their positions are the same).
The word "constraint" can just be read as an informal expression for "conditional probability" (unless there is supposed to be some definitive subtle difference that I am not aware of). This got me thinking: what if, in a toy model, particles 1 and 2 did "interact" with each other, in the sense that the presence of one particle altered the probability distribution of the other (and vice versa)? In that case, I guess we would just use the modified probability distribution for both particles (and apply the condition of different positions on top of that).
Edit: But the exclusion of the possibility of the same position for both particles can also be seen as a kind of "interaction", because it would generally also alter the probability distribution (even if indirectly) of a given particle. Then there is perhaps a question of whether there is a genuine distinction between direct and indirect alteration of the probability distribution of a particle.

StoneTemplePython said:
There are special instances, but generally speaking people should tackle finite cases before considering the infinite. I also tend to think knowing how to code helps here --- if you can't figure out a way to even approximately simulate your problem, then you know you aren't thinking about it clearly.
My guess is that it's going to turn out the same way as mentioned before:
Stephen Tashi said:
One physical interpretation of your solution is that it describes a process where we select the position of the two particles independently and if two particles land in the same position, we "throw out" that datum and do the selection again.

But yes, it could certainly be instructive if one has enough time/interest to do it.
 
Last edited:

What is a probability distribution with a countably infinite domain?

A probability distribution with a countably infinite domain is a mathematical function that assigns probabilities to all possible outcomes of a countably infinite set of events. This means that the set of possible outcomes can be listed in a one-to-one correspondence with the natural numbers.

What is the difference between a discrete and a continuous probability distribution?

A discrete probability distribution has a finite or countably infinite set of possible outcomes, while a continuous probability distribution has an uncountably infinite set of possible outcomes. In other words, a discrete distribution can be represented by a list or table of values, while a continuous distribution is represented by a mathematical function.

What is the expected value of a probability distribution with a countably infinite domain?

The expected value, also known as the mean or average, of a probability distribution with a countably infinite domain is the sum of each possible outcome multiplied by its corresponding probability. This can be thought of as the long-term average outcome if the experiment or event is repeated many times.
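For the example distribution used in this thread, P(x) = 1/2^(x+1), the expected value can be evaluated numerically (the truncation at 200 terms is chosen here; the neglected tail is vanishingly small):

```python
# E[X] = sum over x of x * P(x) for P(x) = 1/2^(x+1); the series
# converges to 1 (the mean of a geometric distribution with p = 1/2,
# counting failures before the first success).
E = sum(x / 2**(x + 1) for x in range(200))
```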

What is the role of the probability density function in a continuous probability distribution?

The probability density function (PDF) is a mathematical function that describes the relative likelihood of different outcomes in a continuous probability distribution. It can be used to calculate probabilities of specific outcomes or to determine the overall shape of the distribution. The total area under the PDF curve is equal to 1, representing the total probability of all possible outcomes.

How do you calculate the variance of a probability distribution with a countably infinite domain?

The variance of a probability distribution with a countably infinite domain is a measure of how spread out the distribution is around the expected value. It is calculated by summing the squared differences between each possible outcome and the expected value, weighted by the corresponding probabilities. The square root of the variance gives the standard deviation.
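Applied to the thread's example distribution P(x) = 1/2^(x+1) (again with a truncation at 200 terms chosen here), this calculation looks like:

```python
# Variance of P(x) = 1/2^(x+1): Var[X] = sum of (x - mean)^2 * P(x).
xs = range(200)                       # truncation; tail is negligible
P = [1 / 2**(x + 1) for x in xs]
mean = sum(x * p for x, p in zip(xs, P))             # converges to 1
var = sum((x - mean)**2 * p for x, p in zip(xs, P))  # converges to 2
std = var ** 0.5                      # standard deviation, sqrt(2)
```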
