Geometric, Exponential and Poisson Distributions - How did they arise?

SUMMARY

The discussion focuses on the historical context and understanding of geometric, exponential, and Poisson distributions, emphasizing their memoryless properties and applications in modeling events. The exponential distribution is particularly noted for its role in modeling time between events in a Poisson process, while the Poisson distribution calculates event occurrences in stationary processes. Participants express a desire for deeper insights into the origins of these distributions, suggesting that the mathematical community often overlooks pedagogical approaches in favor of new results. Resources such as the MIT OpenCourseWare on probabilistic systems are recommended for further exploration.

PREREQUISITES
  • Understanding of basic probability concepts
  • Familiarity with Poisson processes
  • Knowledge of Bernoulli processes
  • Basic statistics terminology and distributions
NEXT STEPS
  • Research the historical development of the Poisson distribution
  • Study the relationship between Bernoulli and Poisson processes
  • Explore the Central Limit Theorem and its implications
  • Learn about moment-generating functions and their applications
USEFUL FOR

Students of probability and statistics, educators seeking to enhance their teaching methods, and anyone interested in the historical context of statistical distributions.

TheOldHag
I'm going through the DeGroot book on probability and statistics for the Nth time and I always have trouble 'getting it'. I guess I would feel much better if I understood how the various distributions arose to begin with, rather than being presented with them in all their dryness, without context.

For instance, the geometric and exponential distributions have extremely convenient properties of being memoryless. Furthermore, the exponential distribution is perfect for modeling the time between events in a Poisson process. The Poisson distribution itself is wonderful for calculating the number of events in a given time period in a stationary process.
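The memoryless property mentioned above can be checked numerically. Here is a quick simulation (my own sketch, not from the thread; the rate, thresholds, and sample size are arbitrary choices) that verifies P(X > s + t | X > s) = P(X > t) = e^(-λt) for exponential waiting times:

```python
import math
import random

random.seed(0)

# Draw exponential waiting times with rate lam.
lam, s, t, n = 1.0, 0.5, 1.0, 200_000
samples = [random.expovariate(lam) for _ in range(n)]

# Unconditional survival probability P(X > t).
p_t = sum(x > t for x in samples) / n

# Conditional survival probability P(X > s + t | X > s):
# restrict to samples that already exceeded s, then ask for t more.
survivors = [x for x in samples if x > s]
p_cond = sum(x > s + t for x in survivors) / len(survivors)

# Both should be close to the theoretical value exp(-lam * t).
print(round(p_t, 3), round(p_cond, 3), round(math.exp(-lam * t), 3))
```

Having already waited s units of time tells you nothing about how much longer you will wait, which is exactly why the exponential fits inter-arrival times in a Poisson process.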

My question is this: are these properties just conveniences, or were these distributions originally sought out to model such processes? How did they arise historically, so that I can better understand their grounding in present-day texts, rather than having them fall from the sky with convenient properties? This is a strange question, but answering it would settle my 'not getting it' feeling about probability. In all the other branches of mathematics there seems to be more context for how the collective mind proceeded from step 1 to step 10, whereas in statistics my feeling of uncertainty arises because everything seems to just fall from the sky.
 
If you google "probability distribution history" you will get a lot of references which may help.
 
I can relate to your difficulties. If you studied more math, you would undoubtedly run into many more difficulties of the same sort, so statistics is nowhere near exceptional in this regard. I attribute this to a gross neglect of motivation on the part of much of the mathematical community. The culture of mathematics is so focused on proving new results and getting publications, above all else, that pedagogy, and consolidating the results we already have, have not been given the attention they deserve. You've only experienced the tip of the iceberg with your statistics.

Part of it may also be the difficulty of teaching and writing for a general audience of whatever students happen to be rolling in, so that things drift toward the lowest common denominator. It is a lot of work to understand things more deeply, and too few people are willing to do that (or have the interest for it). Unfortunately, things have been made MUCH harder for such people than they need to be by the tyranny of the unquestioning majority.

The Poisson process is explained pretty well somewhere in this course (you may have to look at more than just the lecture that deals with Poisson processes, though):

http://ocw.mit.edu/courses/electric...ures/lecture-1-probability-models-and-axioms/

The part he doesn't explain quite as well as I would like is the more precise relationship going from Bernoulli processes to Poisson processes. The best way to understand this is to look at the formula for getting the (k+1)st binomial coefficient from the kth and to see what happens to that ratio in the limit.
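That limit can be sketched numerically (my own illustration; the values of λ and n are arbitrary). For a Binomial(n, λ/n) pmf, the ratio of the (k+1)st term to the kth is (n−k)/(k+1) · p/(1−p), which tends to λ/(k+1) as n grows, and λ/(k+1) is exactly the recursion satisfied by the Poisson(λ) pmf:

```python
from math import comb

# Binomial(n, p) with p = lam / n, n large, approximates Poisson(lam).
lam, n = 3.0, 100_000
p = lam / n

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The ratio of successive binomial terms, (n-k)/(k+1) * p/(1-p),
# should be close to the Poisson ratio lam/(k+1) for large n.
for k in range(5):
    binom_ratio = binom_pmf(k + 1) / binom_pmf(k)
    print(k, round(binom_ratio, 4), round(lam / (k + 1), 4))
```

Since both pmfs start near the same value at k = 0 and obey (in the limit) the same term-to-term recursion, the whole binomial pmf converges to the Poisson pmf.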

From what I've seen, the roots of the normal distribution, starting with the work of De Moivre, seem to be extremely ugly, so there is good reason people don't talk about them much. However, there are ways to cut through this ugliness that have been neglected in many treatments.

http://stats.stackexchange.com/ques...nation-is-there-for-the-central-limit-theorem

There are other ways of looking at it, too, but that one is the most elementary. Two others come from understanding characteristic functions or moment-generating functions, and from a sort of entropy/information approach. Those could take a lot of work to understand fully; I have the general idea, but I don't know all the details myself.
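As a rough numerical illustration of De Moivre's original observation (my own sketch, not from the course or the linked answer; n and the sample points are arbitrary): the Binomial(n, 1/2) pmf near its mean closely matches the normal density with the same mean and variance.

```python
from math import comb, exp, pi, sqrt

# Binomial(n, 1/2) has mean n*p and variance n*p*(1-p).
n, p = 400, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))

def normal_pdf(x):
    # Normal density with matching mean and standard deviation.
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Compare the binomial pmf with the normal density near the mean.
for k in (190, 200, 210):
    b = comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, round(b, 5), round(normal_pdf(k), 5))
```

This is the de Moivre-Laplace local limit: the agreement is already good at n = 400 and improves as n grows.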
 
