# Geometric, Exponential and Poisson Distributions - How did they arise?

I'm going through the DeGroot book on probability and statistics for the Nth time and I always have trouble 'getting it'. I guess I would feel much better if I understood how the various distributions arose to begin with, rather than being presented with them in all their dryness, without context.

For instance, the geometric and exponential distributions have extremely convenient properties of being memoryless. Furthermore, the exponential distribution is perfect for modeling the time between events in a Poisson process. The Poisson distribution itself is wonderful for calculating the number of events in a given time period in a stationary process.
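The memorylessness mentioned above can be checked directly from the survival functions (this is my own small illustration, not part of the original question; the rate, success probability, and test points are chosen arbitrarily):

```python
import math

# Memorylessness of the exponential: P(X > s+t | X > s) = P(X > t).
# Survival function of Exp(rate): P(X > x) = exp(-rate * x).
rate, s, t = 2.0, 1.0, 0.5
surv = lambda x: math.exp(-rate * x)
cond = surv(s + t) / surv(s)          # P(X > s+t | X > s)
assert abs(cond - surv(t)) < 1e-12    # equals P(X > t)

# Same property for the geometric: P(N > m+n | N > m) = P(N > n),
# where P(N > n) = (1-p)**n counts n failures before the first success.
p, m, n = 0.3, 4, 3
geo_surv = lambda k: (1 - p) ** k
assert abs(geo_surv(m + n) / geo_surv(m) - geo_surv(n)) < 1e-12
```

The algebra behind both assertions is the same: the survival function is an exponential of its argument, so conditioning just cancels a common factor.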

My question is this: are these properties just conveniences, or were these distributions originally sought out to model such processes? How did they arise historically, so I can better understand the grounding for them in present texts, rather than having them fall from the sky with convenient properties? This is a strange question, but it would settle my 'not getting it' feeling for probability. It seems that in all the other branches of mathematics there is more context for how the collective mind proceeded from step 1 to step 10, whereas in statistics my feeling of uncertainty arises because it feels as though everything just fell from the sky.

mathman
If you google "probability distribution history" you will get a lot of references which may help.

I can relate to your difficulties. If you studied more math, you would undoubtedly run into many more difficulties of the same sort, so statistics is not anywhere near exceptional in this regard. I attribute this to a gross negligence toward motivation on the part of most of the mathematical community. I think the culture of mathematics is so focused on proving new results and getting publications, above all else, that pedagogy and consolidating the results we already have have not been given the attention they deserve. You've only experienced the tip of the iceberg with your statistics. Part of it may also be the difficulty of teaching and writing for whatever general audience of students happens to be rolling in, such that things end up going toward the lowest common denominator. It can be a lot of work to understand things more deeply, and too few people are willing to do that (or have the interest for it). Unfortunately, things have been made MUCH more difficult for such people than they need to be by the tyranny of the unquestioning majority.

The Poisson process is explained pretty well somewhere in this course (you may have to look at more than just the lecture that deals with Poisson processes, though):

http://ocw.mit.edu/courses/electric...ures/lecture-1-probability-models-and-axioms/

The part he doesn't explain quite as well as I would like is the more precise relationship going from Bernoulli processes to Poisson processes. The best way to understand this is to look at the formula for getting the (k+1)st binomial coefficient from the kth and seeing what happens to that ratio in the limit.
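That limit can be sketched numerically (my own illustration, not from the lecture; λ = 3 and the values of n are arbitrary). The ratio of successive Binomial(n, p) pmf terms is p_{k+1}/p_k = ((n-k)/(k+1)) · p/(1-p), and with p = λ/n this tends to λ/(k+1) as n → ∞, which is exactly the Poisson ratio:

```python
import math

# Binomial(n, lam/n) pmf vs. Poisson(lam) pmf: as n grows with lam fixed,
# the Bernoulli process with n trials of success probability lam/n
# converges to a Poisson process with rate lam.
def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam, k = 3.0, 2
for n in (10, 100, 10000):
    print(n, binom_pmf(k, n, lam / n))
print("Poisson limit:", poisson_pmf(k, lam))

# The successive-term ratio p_{k+1}/p_k approaches lam/(k+1),
# the same recursion that generates the Poisson pmf.
n = 10000
ratio = binom_pmf(k + 1, n, lam / n) / binom_pmf(k, n, lam / n)
print("ratio:", ratio, "vs", lam / (k + 1))
```

Watching the printed pmf values settle onto the Poisson value makes the limiting argument concrete.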

From what I've seen, the roots of the normal distribution, starting with the work of De Moivre, seem to be extremely ugly, so there is a good reason people don't talk about it much. However, there are ways to cut through this ugliness that have been neglected in many treatments.

http://stats.stackexchange.com/ques...nation-is-there-for-the-central-limit-theorem

There are other ways of looking at it, too, but that one is the most elementary. Two other ways of looking at it come from understanding characteristic functions or moment-generating functions, and secondly, there's a sort of entropy/information approach. Those could take a lot of work to understand fully--I have the general idea, but I don't know all the details, myself.
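As a quick complement (my own simulation sketch, not from the linked answer; the sample size, trial count, and seed are arbitrary), the elementary statement of the CLT can be checked empirically by standardizing sums of i.i.d. uniforms:

```python
import math
import random

# CLT sketch: standardized sums of n i.i.d. Uniform(0,1) draws
# should be approximately standard normal for moderate n.
random.seed(0)
n, trials = 50, 20000
mean, var = 0.5, 1.0 / 12.0   # mean and variance of Uniform(0,1)

std_sums = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    std_sums.append((s - n * mean) / math.sqrt(n * var))

# Compare the empirical P(Z <= 1) with the normal cdf Phi(1).
emp = sum(z <= 1 for z in std_sums) / trials
phi1 = 0.5 * (1 + math.erf(1 / math.sqrt(2)))
print(round(emp, 3), "vs Phi(1) =", round(phi1, 3))
```

This is only the simulation view, of course; the characteristic-function and entropy arguments mentioned above explain *why* the agreement holds.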
