# How does one formulate continuous probabilities/pdfs?

Discrete examples are easy enough. Toss a coin, 1/2, toss a die, 1/6.

Continuous examples: the probability of a nucleus decaying during an observation time ##t##, ##1 - e^{-\lambda t}##; the probability that a neutron travels a distance ##x## without interaction, ##e^{-\Sigma x}##, where ##\Sigma## can be taken as the inverse of the mean free path, i.e. the average distance a neutron travels between interactions.

My point is that I don't really have any idea of how these continuous probabilities are derived. Any assistance?

StoneTemplePython
> Discrete examples are easy enough. Toss a coin, 1/2, toss a die, 1/6.

> Continuous examples, Probability of a nucleus decaying during observation, 1-exp(-λt),

There are basically two ways to interpret continuous-time probability distributions. One is that they are 'merely' a limiting form of discrete cases; the other is that they exist as probability models in their own right. Both interpretations give you some insight.

Former interpretation: your probability of a nucleus decaying during observation -- that is the CDF of an exponential distribution. If you wanted to count the number of these occurrences, you'd count these 'arrivals' via a Poisson process. Consider a coin that comes up heads with probability ##p \in (0,1)##, tossed ##n## times. The mean number of heads is ##\lambda := np##. If you take the limit ##n \to \infty## in such a way that ##\lambda## stays constant (or bounded in some desired range), you recover the Poisson distribution. At a high level, the result can be interpreted as tossing an arbitrarily large number of coins, each with an arbitrarily small probability ##p## of heads, while preserving the essence, which is encapsulated in the mean. You may want to look into Le Cam's theorem, which gives a relatively simple setup for something like the above: it not only shows that the limiting distribution is Poisson, but also gives a useful finite-##n## bound on the (total variation) distance between the actual distribution and an idealized one like the Poisson.

Latter interpretation: people may have uncovered these distributions as a limiting form of something discrete, but they stand on their own two legs. Someone, say in physics, may have decided that a continuous-time model is most appropriate classically. There may have been a lot of theoretical or physical insight first, or it may have been, basically, an experimental fit. Earthquakes (the big ones, not the aftershocks) are modeled as a Poisson process, by the way -- if you want a memoryless counting process in continuous time, you really have no other choice.
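One way to see why memorylessness forces the exponential on you: in simulation, the conditional survival probability ##P(T > s + t \mid T > s)## matches the unconditional ##P(T > t)##. A minimal sketch (the rate ##\lambda = 0.5##, the values of ##s## and ##t##, and the sample size are arbitrary choices for illustration):

```python
import random

random.seed(0)
lam = 0.5
# Draw waiting times T ~ Exponential(lam)
samples = [random.expovariate(lam) for _ in range(200_000)]

s, t = 1.0, 2.0
# Unconditional survival P(T > t)
p_t = sum(x > t for x in samples) / len(samples)
# Conditional survival P(T > s + t | T > s)
survived_s = [x for x in samples if x > s]
p_cond = sum(x > s + t for x in survived_s) / len(survived_s)

print(p_t, p_cond)  # both should be close to exp(-lam * t) ≈ 0.368
```

Having already waited ##s## units tells you nothing about the remaining wait, which is exactly the property a decaying nucleus is assumed to have.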

Stephen Tashi
> Discrete examples are easy enough. Toss a coin, 1/2, toss a die, 1/6.
Those are famous examples, but they are not "derived" from any physical theory. They result from assuming that each possibility has the same probability of occurring.

> Continuous examples, Probability of a nucleus decaying during observation, 1-exp(-λt), Probability of a neutron moves x without interaction, exp(-Σx), where Σ can be assumed to be the inverse of the mean free path i.e. the distance a neutron travels without interaction on average.

> My point is that I don't really have an idea as to how these continuous probabilities are derived. Any assistance?

It isn't clear what you mean by "derived". Are you asking whether they can be deduced from some simple mathematical assumption, analogous to "all possibilities have the same probability of occurring"? In the case of radioactive decay, the assumption is that the probability of each individual atom of a given type decaying within time ##t## follows the same continuous probability distribution. That is the assumption analogous to "all possibilities have the same probability". Deducing exactly what that continuous distribution is requires finding the distribution for individual atoms whose predictions fit experimental data for a large number of atoms.
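To sketch the kind of derivation being asked about (assuming, as is standard, a constant decay rate ##\lambda## per unit time, independent of the atom's age):

$$P(\text{decay in } [t, t + dt] \mid \text{survival to } t) = \lambda \, dt.$$

If ##S(t)## denotes the probability that an atom survives to time ##t##, then

$$S(t + dt) = S(t)(1 - \lambda \, dt) \;\Longrightarrow\; \frac{dS}{dt} = -\lambda S, \qquad S(0) = 1,$$

which gives ##S(t) = e^{-\lambda t}##, i.e. the decay probability ##1 - e^{-\lambda t}## from the original question. The neutron formula ##e^{-\Sigma x}## follows identically, with distance ##x## in place of ##t## and ##\Sigma## (the inverse mean free path) in place of ##\lambda##.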
