- #1
Livio Arshavin Leiva
These days I've been reading on the internet about the Poisson distribution, because it is a concept I never managed to understand completely when I studied it, and since then I've been quite curious about Poisson processes and how many natural phenomena (mostly the random ones) can be described very accurately by modeling them as Poisson processes. One of those many examples was the number of goals in a World Cup football match, which was very interesting to me because I love football, and being able to relate it to probability and statistics in a formal way seemed fun.
So I started looking into the facts that justify modeling goals as a Poisson process. For example, one can plot the histogram of frequencies of a given number of events in a fixed time and compare it with a Poisson distribution, and for goals the fit is very good. I also saw, for example, that the time intervals between goals followed an exponential distribution, as expected from a Poisson process. Not exactly exponential: right after a goal is scored there is a period where both teams in some way have "to digest" the goal, so for very short times (less than 5 minutes) the exponential fit was not good, but after that it was very good. On the other hand, the distribution of goals over the course of a match (I mean, dividing the 90 minutes into subintervals of about 10 minutes and counting the total goals in each interval), which should be roughly constant across intervals for a Poisson process, was approximately constant except for the "psychological" fact that at the beginning of the match there were too few goals (both teams have to "settle on the field" first) and there were too many late goals (obviously a result of taking risks at the end when you're losing, for better or worse).
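Just to make those two "indicators" concrete, here is a minimal simulation sketch in Python. The scoring rate (about 2.5 goals per 90-minute match) and the match length are purely illustrative assumptions, not fitted to real data; the point is only that for a homogeneous Poisson process the goals-per-match histogram should match the Poisson pmf and the inter-goal times should look exponential.

```python
import numpy as np
from scipy import stats

# Purely illustrative: simulate goal times as a homogeneous Poisson process
# with an assumed rate of ~2.5 goals per 90-minute match (not real data),
# then run the two informal checks described above.
rng = np.random.default_rng(0)
rate_per_min = 2.5 / 90.0          # assumed scoring rate, goals per minute
match_length = 90.0                # minutes
n_matches = 10_000

counts = []   # total goals per simulated match
gaps = []     # waiting times between consecutive goals within a match

for _ in range(n_matches):
    # In a Poisson process the inter-arrival times are i.i.d. exponential,
    # so each match is built by summing exponential waiting times.
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_per_min)
        if t > match_length:
            break
        times.append(t)
    counts.append(len(times))
    gaps.extend(np.diff(times))

# Check 1: frequency of k goals per match vs. the Poisson pmf with the same mean
lam = rate_per_min * match_length
counts = np.array(counts)
for k in range(7):
    print(f"{k} goals: empirical {np.mean(counts == k):.3f}, "
          f"Poisson pmf {stats.poisson.pmf(k, lam):.3f}")

# Check 2: inter-goal times should be exponential with mean 1/rate
print("mean gap:", np.mean(gaps), " expected:", 1.0 / rate_per_min)
print(stats.kstest(gaps, "expon", args=(0, 1.0 / rate_per_min)))
```

With real match data one would of course replace the simulated times by the recorded goal times and run the same two comparisons.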
What I want to say is, there were some "indicators" that it could be a Poisson process, but the most important one, since it is used to derive the Poisson distribution from the binomial distribution, is the fact that the events have to be "rare". In that derivation, the limit applied to the binomial distribution consists in letting the number of trials ##n## tend to infinity while keeping the average number of successes ##\Lambda = np## constant, with ##p## the probability of success in each trial. That is, ##p## tends to 0. From this one could take the notion of what a rare event is. But there are a lot of cases where there is nothing one can understand as a "trial". In radioactive decay, for example, there are no such trials. One cannot say: "Wait a minute, this atom here may be trying to decay now! Let me check it. Oh no, it missed, it is still uranium." So one cannot compute a probability of success ##p##, and one also cannot design an experiment to produce an estimator of ##p##.
So, finally, and I must apologize for the length, my question is: how can I determine whether certain events really form a Poisson process? I know the main condition is that the events must occur independently, but what is the relation between that and the "rare" issue? For example, in Drude's theory of metal conductivity it works to model the electron collisions as a Poisson process, but millions of them occur every second; they are happening all the time, so they are not rare. It would seem that the most important thing is the "randomness" and not being rare after all. I mean, mathematically a Poisson process is just a set of points in some domain, so whether the events are frequent or rare is just a scale factor. But in reality, maybe there is some kind of causality argument? Expecting that events very distant in time should dissipate the influence of one on another? I mean, "very distant in time" compared with the characteristic times of the mechanisms that generate the process...
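For concreteness, the limiting argument I'm referring to is the standard one: with ##p = \Lambda/n## held so that ##\Lambda = np## stays fixed,
$$P(k) = \binom{n}{k} p^k (1-p)^{n-k}
= \frac{n(n-1)\cdots(n-k+1)}{n^k}\,\frac{\Lambda^k}{k!}\left(1-\frac{\Lambda}{n}\right)^{n-k}
\;\xrightarrow[n\to\infty]{}\; \frac{\Lambda^k}{k!}\,e^{-\Lambda},$$
since ##n(n-1)\cdots(n-k+1)/n^k \to 1## and ##\left(1-\Lambda/n\right)^{n-k} \to e^{-\Lambda}##. This is the step where "##p \to 0##", i.e. the "rareness", seems to enter, and it is exactly the step I don't know how to interpret when there are no identifiable trials.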
Just wondering if somebody knows the real justification for all this...
Thank you very much in advance.