Just what the heck is probability anyway? 
#1
Apr 18, 2014, 03:35 AM

P: 4

Why do I feel like I am about to be fed to a pack of hungry wolves? I take it as a foregone conclusion that anyone who bothers to answer this question is on an entirely different level of math knowledge than me... however, even lay people should be able, and encouraged, to ask questions here, right? I have a fairly competent math background (completed about 75% of my physics degree before switching majors), but to this day I still manage to baffle myself regarding the essential nature of probabilities, especially when I try to explain it to someone else. So please... someone try to explain to me exactly what a probability is.
To try to illustrate just how I get myself into trouble, please forgive the following abysmal attempt at formulating a question. Let's take a coin flip. The probability of landing on heads when a coin is flipped is 1/2. Now, if I say "what does that mean?" someone might say it means the chance of landing on heads is 0.5. Someone might say, less precisely, that it "will land on heads half the time." Or they might say that "on average the coin will land on heads 50% of the time." I say "sure, okay, I can buy that," but then I keep thinking. I may flip a coin 100 times and find that I get heads every single time, right? And there is nothing wrong with this. And someone might say that they can tell me the probability of such a thing happening... and assure me that it is small. And this makes sense to me. But then I start to wonder... wasn't it supposed to be 50%?

I could even ask: what are the chances that if I flip a coin 100 times I will get exactly 50 heads? I don't know the answer, but I know it's obviously not 100%. I'm sure it is the most likely number of heads to observe (whatever that means), but even the most likely number may not be a very likely outcome in an absolute sense. So just what exactly is meant by the statement "the probability of heads is 0.5"? How can one verify such a statement empirically? Or how can that statement lead to predictions about what a set of coin flips will look like? With any given set, no matter how large, how can you know that you are not observing an unlikely series? Aren't all series of coin flips equally likely? HHH is no more or less likely than HTT or HTH, right? So how can HHH... 100 times in a row be considered unlikely? Isn't it just as likely as any other set of outcomes? I know this isn't a well-formulated question, but I hope you can see the difficulty I have, and I hope a bit of discussion will help enlighten me. Thanks for taking the time...


#2
Apr 18, 2014, 03:54 AM

Mentor
P: 18,327

This is a very good question. But I fear that nobody really knows an answer to this. Probability is one of those intuitive notions that we can't really explain.
So indeed, if you throw a coin ##100## times, then our intuition expects around ##50## tails. Of course, this isn't quite correct. We might have ##0## tails, or ##100##! The probability (that word again) that this happens is very small, though. I think probability can only really be understood with infinities, that is, with limits. So let's say you throw a coin ##n## times. Let ##f(n)## be the number of tails. For example, if we throw 100 times and 43 are tails, then ##n = 100## and ##f(100) = 43##. The idea is that if ##n## is large enough, then ##f(n)## is very close to ##n/2##. In mathematical language, we write this as [tex]\lim_{n\rightarrow +\infty} \frac{f(n)}{n} = \frac{1}{2}[/tex] Probability doesn't necessarily say anything about "finite" occurrences. Even if we throw 1000000 times, we might still have 0 tails. But rest assured, if we keep throwing, then eventually we will get close to 50%. The problem is that we don't know when this will happen. Nevertheless, "finite" occurrences can still be well approximated by probability. So if we throw 1000000 times, then we can't be absolutely sure we have around 50% tails, but it will almost always be the case. So we can't give a precise definition of probability. In math we solve this by introducing probability axiomatically, for example through the Kolmogorov axioms. It turns out (and can be proven) that this axiomatic approach is a good model for our intuition.


#3
Apr 18, 2014, 06:13 AM

P: 477

For emphasis ...



#4
Apr 18, 2014, 07:12 AM

P: 963

Just what the heck is probability anyway?
You can find some food for thought at: http://en.wikipedia.org/wiki/Probabi...nterpretations
Sometimes I think of probability as a particular sort of lack of knowledge about a physical system. We have only a general idea about the initial conditions that go into a particular flip of a coin. But we know that the result will be "heads" or "tails", and we know that, in some sense, the set of initial conditions that will lead to an outcome of "heads" and the set of initial conditions that will lead to an outcome of "tails" are equally large. (Disregarding the possibility of coins landing on edge, two-headed coins, loaded coins, etc.) In mathematical treatments, the notion of largeness of sets gets formally fleshed out with measure theory: the probability of an outcome is given by the "measure" of the [possibly infinite] set of conditions that correspond to that outcome.


#5
Apr 18, 2014, 07:36 AM

P: 693

This is a hard question; unfortunately, most probability books seem to sweep this all under the rug. In general I tend to think along the lines of gopher_p above (EDIT: and jbriggs444; the comment about initial conditions is spot on, of course).
One book that discusses some of your questions head-on is "Probability Theory: The Logic of Science" by Jaynes. He was a physicist, so you will likely enjoy some of his writing. A legal free copy of a draft of the book can be found at http://omega.albany.edu:8008/JaynesBook.html You may enjoy chapter 10, which analyzes the physics of the coin-flipping experiment (and other "random" experiments). jason


#6
Apr 18, 2014, 09:48 AM

P: 59

When I was in high school I also used to wonder why we take courses like calculus and probability and what the heck they actually mean. But as I started digging into things after college, what I figured out is that if you haven't been told the practical use of a course, then it's really hard to build interest in it. But if you understand how to actually use it, then believe me, you'll start learning things at a much greater pace. Enough of this lecture; let's talk about your issue with probability.
We can relate probability to chances or expectations. It doesn't mean "exactly"; it just means "expected". Suppose I flip a coin 100 times and say the probability of heads is P(H) = 1/2. That doesn't mean heads will turn up exactly 50 times out of 100 flips. It just means we expect the total number of heads to come up somewhere close to 50. When you try it in practice, you'll find this is right. Just try flipping a coin a hundred times. If you are unlucky, it is even possible that out of 100 flips you don't get heads on any flip. But as you repeat this experiment more and more times, you'll see the frequency of heads get close to 50%. In engineering and scientific studies we come across this situation all the time, and probability is the mathematical way of dealing with it.
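The "expected, not exact" point can be made precise with the binomial distribution. The short Python check below (the helper name `prob_exact_heads` is mine) computes the exact chance of getting ##k## heads in ##n## fair flips, which also answers the OP's question about exactly 50 heads in 100 flips:

```python
from math import comb

# Exact probability of getting exactly k heads in n flips of a fair coin:
# C(n, k) / 2^n, since each of the 2^n sequences is equally likely and
# C(n, k) of them contain exactly k heads.
def prob_exact_heads(n, k):
    return comb(n, k) / 2**n

print(prob_exact_heads(100, 50))   # about 0.0796: the single most likely
                                   # count, yet far from guaranteed
print(prob_exact_heads(100, 100))  # about 7.9e-31: the all-heads sequence
```

So "exactly 50 heads" happens only about 8% of the time, even though it is the most likely single count; the probabilities only concentrate when you ask about a *range* of counts, like "between 40 and 60 heads".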


#7
Apr 18, 2014, 12:05 PM

P: 4

Thanks for all the replies. Definitely gives me something to think about. Thanks for not just pasting the definition of probability as well. I am a health professional, and the way that probability comes up in my profession is generally less in the "axiomatic" way and more in terms of how reliable empirical evidence is. A study on a drug may come out showing that the drug lowers blood pressure and reduces death by some amount. And that study will have a p-value of something like 0.988. When it comes to assigning probability based on evidence, I think I get more confused, and it taints the whole concept of probability in my mind.
Suppose that we don't know the probability of heads, but instead do an experiment by flipping a coin and recording the results. On the one hand, I understand that the experiment is likely (there's that word; it's hard not to be circular) to show a frequency near 0.5 for heads. And the more times I flip the coin, the closer the frequency should generally be to 0.5. But after any arbitrary number of flips, do I really have any information about the probability of future flips? After all, any particular series of flips is equally likely to occur. And the series HHH... a million times in a row can't be said to be any less likely to occur than any other series of flips. And there is no number of observations that I can make which will save me from this doubt. If after a large number of flips I find the frequency of heads to be 0.487, then by statistical methods I can make a statement like "the measured probability of heads is 0.487 +/- something, and the real probability of heads is 99% likely to fall into that range." But that seems circular to me. Because now I am back to the original question: what does it mean to have a 99% confidence interval? And I'm sort of back where I started. Thanks for the conversation, guys.


#8
Apr 18, 2014, 12:17 PM

Mentor
P: 18,327

This is why I brought up infinities in my post. Probability only makes a statement about infinite occurrences, not about a mere finite number of trials. Still, it forms a very reliable approximation to reality, even with very few trials. It's just that you can't claim absolute certainty with a finite number of trials. The more trials you do, the more certainty, but never absolute! What I always like to do is to compare probabilities to real-life statements. For example, let's say I have a friend in NY. Then my chance of winning the lottery is the chance of me knocking on a random door in NY and my friend opening it. That puts a lot of perspective on things. Even though people have an intuitive idea of probability, their intuition is always a little off: http://www.psychologytoday.com/artic...theoddswrong It's a tricky little thing.


#9
Apr 18, 2014, 01:43 PM

P: 1,271




#10
Apr 18, 2014, 02:29 PM

P: 131

For a clear, thorough, opinionated, Bayesian answer to your question, you can peruse the following free textbook:
http://uncertainty.stat.cmu.edu/ I find it helpful to think of probability as a measure of (un)certainty. Others insist it can only be defined in terms of limiting frequencies. The uncertainty view accommodates all those cases, but it also handles perfectly sensible questions which the frequency-only view cannot. (For example: "What is the probability that it will rain tomorrow?") Actually, your best bet would be to read "Understanding Uncertainty", by the late Dennis Lindley. http://www.amazon.com/Understanding.../dp/0470043830 It's a terrific read.


#11
Apr 18, 2014, 10:43 PM

P: 543

Uncertainty seems to be a primary part of it.
Before you flip the coin, you assign heads p = 0.5. After you flip the coin, you have a result... say, heads. How do you characterize the probability figure after the fact? Now that you know, do you think it was actually p = 1 all along (but you just didn't know yet)... because that is what happened? And so p changed from 0.5 to 1? Or do you continue to think that p = 0.5 describes heads for that flip, even after the fact? Or do you say that p is only meaningful for things that have not yet happened? If so, does this make probability different from other fundamental physics, because it is not time-reversible?


#12
Apr 19, 2014, 03:31 PM

P: 327

1) You say all exact sequences are equally likely. As @homeomorphic points out, there is only one exact sequence that gives 100 heads, whereas there are many sequences that give 50 heads. So 50 heads is much more likely. The supposed independence of coin flips also makes all heads much less likely (see point 3).

2) If you saw a coin flipped 100 times giving 100 heads in a row, even though that sequence is just as likely as any other, you should doubt that it is a fair coin. A chi-squared goodness-of-fit test will show that it is very unlikely that this sequence came from a 50/50 fair coin. Another sequence that is more typical for a fair coin will give a much higher probability in the chi-squared test.

3) To say that there were 100 independent flips of a fair coin says a lot more than 50/50 probability. The independence implies a lot about zero correlation. So there are many statistical tests where the series of all heads is much less likely than other random sequences. In those tests, the sequence of alternating heads and tails would also score low. Any of these tests can be used to show either that the all-heads coin was not fair, or that you have witnessed something so rare that no one else has ever seen it.
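The chi-squared test in point 2 is easy to carry out by hand for a coin, since there are only two outcome categories. A minimal sketch (the helper name `chi_squared` is my own) computes the statistic ##\sum (O - E)^2 / E##:

```python
# Chi-squared goodness-of-fit statistic for n flips of a supposedly fair
# coin: sum of (observed - expected)^2 / expected over the two outcomes
# (heads and tails), each with expected count n/2.
def chi_squared(heads, n):
    expected = n / 2
    tails = n - heads
    return (heads - expected) ** 2 / expected + (tails - expected) ** 2 / expected

print(chi_squared(100, 100))  # 100.0 -- far above the ~3.84 critical value
                              # for 1 degree of freedom at the 5% level
print(chi_squared(53, 100))   # 0.36 -- entirely consistent with a fair coin
```

With one degree of freedom, any statistic above about 3.84 rejects fairness at the 5% level, so 100 heads (statistic 100) is an overwhelming rejection, while 53 heads (statistic 0.36) raises no alarm at all.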


#13
Apr 22, 2014, 11:44 AM

P: 4




#14
Apr 22, 2014, 12:35 PM

P: 131

I actually explored precisely this scenario in a recent post. micromass's model is the one I called [itex]\text{fair}[/itex]; yours (if you choose a uniform distribution) is the one I called [itex]\text{biased}[/itex]. You can actually combine them using a mixture model. Then, for any sequence [itex]S[/itex] of coin flips, you will have some posterior probability for each model. In other words, just as you say: the coin flips will influence your beliefs about future coin flips. To be perfectly thorough, you want to compute [itex]P(\text{H}\mid S)[/itex], the probability to see "heads" next, given the sequence [itex]S[/itex] of flips so far. This is given as [tex] P(\text{H}\mid S) = P(\text{fair}\mid S) \frac{1}{2} + P(\text{biased}\mid S) \int\limits_0^1 p \, P(p\mid\text{biased},S)\,\,dp [/tex] (Obviously, [itex]P(\text{fair}\mid S) + P(\text{biased}\mid S) = 1[/itex].) Interestingly, the upshot is the opposite of what we'd see if the Gambler's Fallacy were true. If you've seen a lot of tails, you shouldn't expect you're "due" for heads; you should expect tails slightly more than heads! (Assuming you assign nonzero probability to the coin being biased.) (*) This assumption would fail if you had a skilled, mischievous coin flipper, who could choose the outcome every time.
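This mixture calculation works out in closed form for a uniform prior on the biased coin's ##p##, since the integral is a Beta function: ##\int_0^1 p^h (1-p)^{n-h}\,dp = h!\,(n-h)!/(n+1)!##. A sketch (the helper name `predictive_heads` and the 50/50 prior over the two models are my own assumptions):

```python
from math import factorial

# Mixture of a fair coin (p = 1/2) and a "biased" coin whose bias p has a
# uniform prior on [0, 1], with equal prior weight on each model. After
# seeing h heads in n flips, the predictive probability of heads next is
#   P(fair|S) * 1/2  +  P(biased|S) * (h+1)/(n+2),
# where (h+1)/(n+2) is Laplace's rule of succession for the biased model
# and the biased likelihood uses the Beta integral h!(n-h)!/(n+1)!.
def predictive_heads(h, n, prior_fair=0.5):
    like_fair = 0.5 ** n
    like_biased = factorial(h) * factorial(n - h) / factorial(n + 1)
    post_fair = prior_fair * like_fair / (
        prior_fair * like_fair + (1 - prior_fair) * like_biased
    )
    return post_fair * 0.5 + (1 - post_fair) * (h + 1) / (n + 2)

# Having seen 10 tails in a row (h = 0), the next flip leans toward tails,
# the opposite of the Gambler's Fallacy:
print(predictive_heads(0, 10))
```

With ten tails and no heads observed, the predictive probability of heads comes out well below 1/2: the data have pushed most of the posterior weight onto the biased model, and within that model the evidence favors a tails-heavy coin.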


#15
Apr 22, 2014, 06:20 PM

P: 4

That's a very interesting point you make about the gambler's fallacy! It's true: the more you see heads, the more you ought to expect heads, due to the fact that the coin is more and more likely to be biased. I'm not sure I understand the need for the fair-vs-biased calculation, though. Could you not simply look at the frequency of heads, say 0.45, and then say that the p for H is 0.45 +/- something? Or construct a confidence interval centered about 0.45? Of course, I suppose if you have reason to believe that the coin really is fair, then that ought to modify your confidence in the observed p. Thanks for the replies...


