# Basic probability question

1. Nov 22, 2013

### johnqwertyful

Say you were to pick a number from [0,1] at random.
Probability is a measure, so the probability of picking any particular point is 0, because the measure of a single point is zero. But you do pick a number, and the probability of picking exactly the point you picked was 0. This seems contradictory. Can anyone explain it?

2. Nov 22, 2013

### Simon Bridge

How did you actually go about picking the number?

3. Nov 22, 2013

### R136a1

Probability zero doesn't mean that something is impossible. The reason we think it does is that we are so used to finite experiments: everything in our world that we have experience with is finite. There are finitely many balls in a container. There are finitely many coin tosses we can ever make. We can only ever pick finitely many numbers. Indeed, most numbers in $[0,1]$ cannot even be defined! So how could we pick one? Usually we just pick numbers like $1/2$ or maybe $\pi$, but we can never pick a truly arbitrary number in real life!

The probability measure on $[0,1]$ is just an approximation of the finite situation. But $[0,1]$ is infinite, which is a thing we have no experience with. So naturally, apparent contradictions occur.

4. Nov 22, 2013

### HallsofIvy

Staff Emeritus
When dealing with discrete probability distributions, "probability 0" means "impossible". When dealing with continuous probability distributions, that is not true. (And "probability 1" does not mean "certain".)

(In more advanced probability courses, an event of probability 0 is said to happen "almost never" and an event of probability 1 to happen "almost surely", or "almost certainly".)

5. Nov 22, 2013

### Simon Bridge

The probability of picking exactly one point in an interval is zero. In order to pick exactly one point, say, by stabbing it with a pin, we'd need a pin that is infinitely thin. Such objects do not exist. Anywhere we jab the pin will make a hole over infinitely many points - over a range of values.

HallsofIvy has a point, though, in that the way of thinking about what a continuous probability distribution means is somewhat oversimplified when you start out learning about them. Initially your teachers are keen to get you off the idea that the density p(x) has a meaning by itself in the way that P(X=x) does. It is P(a < X < b) that has a meaning. The behaviour of p at the point x is not important; it is the behaviour about the point x that is important.
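
A minimal sketch of this point, using a made-up density for illustration: a density value is not itself a probability (it can even exceed 1), while interval probabilities, obtained by integrating the density, always stay in [0, 1].

```python
# Sketch: a density value p(x) is not itself a probability.
# For a uniform distribution on [0, 0.5] the density is 2 on that
# interval -- greater than 1 -- yet every interval probability
# P(a < X < b), obtained by integrating the density, stays in [0, 1].

def density(x):
    """Uniform density on [0, 0.5]."""
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

def prob_interval(a, b, steps=100_000):
    """P(a < X < b) via a simple midpoint Riemann sum of the density."""
    width = (b - a) / steps
    return sum(density(a + (i + 0.5) * width) for i in range(steps)) * width

print(density(0.25))            # 2.0 -- a density can exceed 1
print(prob_interval(0.0, 0.25)) # ~0.5 -- but a probability cannot
```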

6. Nov 22, 2013

### Jano L.

There are several issues in your post; I will try to go through them one by one.

As Simon points out, there is some ambiguity in the phrase "at random". But from the rest of your first post, it seems clear that you have in mind some situation or process whose results are characterized by two properties:

1) they can be any real number from $[0,1]$;
2) they have uniform probability density over $[0,1]$.

Situations with a continuous set of results like this are often considered, but only as auxiliary mental constructions. In such a description of the possible results, as R136a1 said, it is all right to assign probability 0 to possible results. This is because on continuous sets the notion of probability is not primarily meant for points, but rather for measurable sets of points; in your example, for sub-intervals of $[0,1]$ of non-zero length. Points get probability only incidentally, as a by-product: the limit as the sub-interval is contracted to a single point.
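
The contraction to a point can be sketched directly. Under the uniform measure described by 1) and 2), the probability of a sub-interval is just its length, and shrinking the interval around a point drives the probability to 0:

```python
# Under the uniform (Lebesgue) measure on [0, 1], the probability of a
# sub-interval is its length.  Contracting an interval around a point
# sends that probability to 0 -- the point gets probability 0 only as
# this limit.

def prob(a, b):
    """P(X in [a, b]) for X uniform on [0, 1]: the clipped length."""
    return max(0.0, min(b, 1.0) - max(a, 0.0))

x = 0.3
for eps in (0.1, 0.01, 0.001, 1e-6):
    print(eps, prob(x - eps, x + eps))  # 2*eps, shrinking toward 0

print(prob(x, x))  # 0.0 -- the degenerate interval [x, x]
```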

In order to obtain a consistent theory, we have to admit assigning probability 1 to events that may fail to happen (and probability 0 to events that may happen). This is simply necessary to make the theory conform to measure theory and common sense. Certainty or impossibility of an event therefore cannot be derived from its numerical probability; it is an independent piece of information.

Here we have a problem. We can use real intervals and probabilities as abstractions to calculate useful probabilities, but in probability *theory* there is no step where we do the "pick" part. That belongs to practice.

Now, in actual experiments we cannot obtain all numbers from a real interval. Real intervals are useful in calculations, but they contain numbers that can never be obtained in a measurement, like irrational numbers, or even numbers that admit no finite definition at all.

When we measure something, we report results as rational numbers with a finite number of significant digits, which means we use a finite set of results. A computer random number generator like the Mersenne Twister produces numbers from $(0,1)$ that are well modelled by 1) and 2) above, but in the end only a finite set of rational numbers from $[0,1]$ can be generated by the underlying algorithm.
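
This is easy to check concretely in Python, whose `random` module is itself a Mersenne Twister: every "uniform" draw is in fact a dyadic rational, a multiple of $2^{-53}$, so only finitely many distinct values can ever appear.

```python
import random
from fractions import Fraction

# Python's random.random() (a Mersenne Twister underneath) returns
# multiples of 2**-53, so every "uniform" draw is a rational number
# with a power-of-two denominator -- only finitely many distinct
# values are possible, as the post notes.

random.seed(0)  # arbitrary seed, for reproducibility
x = random.random()
f = Fraction(x)  # the exact rational value of the float

print(f.denominator)  # a power of two, at most 2**53
print((2**53) % f.denominator == 0)  # True: x = k / 2**53 for integer k
```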

So it seems one should rather rephrase the question for a set of results that can actually be realized, that is, for a finite set of results; otherwise the "pick" part makes no sense.

For example, consider throwing a die. We have the finite set of results 1, 2, 3, 4, 5, 6, all of which can be obtained in an experiment, which suits the "pick" part of your question much better. All six results are assigned the equal non-zero probability 1/6, so the case of probability 0 does not occur.
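
A quick simulation of this finite case, where the "pick" is unproblematic (seed and sample size here are arbitrary):

```python
import random
from collections import Counter

# A finite experiment where picking at random is unproblematic: a fair
# die.  Every outcome has probability 1/6 > 0, and the empirical
# frequencies approach that value.

random.seed(42)
rolls = [random.choice([1, 2, 3, 4, 5, 6]) for _ in range(60_000)]
counts = Counter(rolls)

for face in range(1, 7):
    print(face, counts[face] / len(rolls))  # each close to 1/6 ~ 0.1667
```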

7. Nov 22, 2013

### Jano L.

I agree that impossible results should be assigned probability 0. However, I know of no good reason to believe the converse, that is, that probability 0 (perhaps obtained as the result of a calculation) implies the result is impossible. Can you think of an argument for it?

8. Nov 23, 2013

### Stephen Tashi

That would be a matter of Physics or Metaphysics. Standard mathematical probability theory does not deal with the concepts of "possible" or "impossible" events in a way that can be applied to determining whether events "actually" happen. Terminology in probability theory like "almost sure", "almost certain" is suggestive of a theory that deals with whether events actually happen but these concepts are only defined in terms of probabilities or limits of probabilities.

In elementary probability theory we discuss problems involving taking (exact) random samples from a continuous distribution, but there is no assumption in probability theory that you can actually do this. A practical person would say you can't.

9. Nov 23, 2013

### Simon Bridge

10. Nov 24, 2013

### Jano L.

Yes, I agree with you completely, Stephen. What I meant by that question was whether there is some other reason, independent of the mathematical theory of probability, to think that $p(E)=0$ implies $E$ is impossible. Perhaps applied statistics, or the various distinct interpretations of probability, could provide different views on this.

Of course, if a result with zero probability came out many times, that would be strong evidence against the model, although it is hard to quantify. But one or a few "strange" measurement results are weak evidence.

11. Nov 24, 2013

### Hornbein

Since we are finite beings, it is impossible to pick a point in [0,1] with every point equally probable. So if we assume we can do that, we are in an imaginary world. If we assume we can do this impossible thing, then we get a result that seems impossible.

Here's another way to look at it. Suppose we say that the probability of choosing a point in the set [x,y] is y-x for 1>=y>=x>=0. Seems pretty reasonable, right? Then the probability of choosing a point in the set [x,x] is zero. There is no getting around that.
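
The rule P([x,y]) = y - x can also be checked empirically (seed and sample size below are arbitrary): the fraction of uniform draws landing in [x,y] approaches y - x, while any single fixed point is essentially never hit.

```python
import random

# Empirical check of P([x, y]) = y - x for a uniform pick from [0, 1],
# and of the fact that a single fixed point is essentially never hit
# exactly.

random.seed(1)
samples = [random.random() for _ in range(100_000)]

x, y = 0.2, 0.7
in_interval = sum(x <= s <= y for s in samples) / len(samples)
print(in_interval)  # close to y - x = 0.5

exact_hits = sum(s == 0.5 for s in samples)
print(exact_hits)  # almost surely 0
```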

Note that it works more or less the same way for natural numbers. It is equally impossible to choose a natural number with each such number having equal probability of being chosen. To prove this, think of the most absurdly large natural number that you can. It has to be a specific number -- none of that "bigger than the biggest imaginable number" stuff -- but aside from that it can be anything. Now imagine that you have selected a natural number at random. The chance that your randomly selected natural number is less than or equal to your specific number is zero.
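
The natural-number argument can be sketched as arithmetic: for a uniform pick from {1, ..., N}, P(X ≤ k) = k/N, so fixing k and letting N grow sends the probability of staying below any specific bound to 0.

```python
# Sketch of the natural-number argument: for a uniform pick from
# {1, ..., n}, P(X <= k) = k / n.  Fixing k and letting n grow sends
# this to 0, so a uniform distribution on all the naturals would have
# to give probability 0 to every fixed bound being respected.

def prob_at_most(k, n):
    """P(X <= k) for X uniform on {1, ..., n}."""
    return min(k, n) / n

k = 10**6  # an "absurdly large" but fixed number
for n in (10**7, 10**9, 10**12):
    print(n, prob_at_most(k, n))  # shrinking toward 0 as n grows
```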

12. Nov 24, 2013

### Simon Bridge

No. You are not paying attention. If a result with a modeled zero probability of occurring came out even once, that would mean the model needs to be modified.
If it did not happen, then we have no information.

This is the response to your question about when P(X)=0 means that X is impossible.

The probability distribution is an artifact of the model - you seem to be confusing the map for the territory.

13. Nov 25, 2013

### Jano L.

Do you have some published example where people rejected a model in this way? Or do you think this is obviously the correct way to understand probability?

I would not call the probability distribution that comes out of the model an "artifact". I agree it is just a part of the model, not some really existing thing; it can be viewed as a device describing our expectation of the event.

14. Nov 25, 2013

### johnqwertyful

Whoa, thanks for the help everyone. Very interesting. I've been reading over all this, and it's been very helpful.

So impossible implies probability 0. The converse is not true, unless things are discrete. This is clear, and pretty cool actually.

15. Nov 25, 2013

### Hornbein

Right. Actually, discrete/continuous has nothing to do with it. You get this with any infinite set with a uniform distribution.

16. Nov 25, 2013

### Office_Shredder

Staff Emeritus
Discrete typically means countable, and the point is that a countable set does not admit a uniform distribution.

17. Nov 25, 2013

### Simon Bridge

Probably anything by Popper, I guess.
You'll also see this in beginning texts on quantum mechanics, though not in so many words.
If something happens, then the probability of it happening clearly should not be zero; it follows that something happened that was not accounted for in the model.

Exactly.

If we model a random number generator as a six-sided die, then the probability that it rolls a seven is zero, right? So if the die turned up a 7, or anything not in {1,2,3,4,5,6}, we would have to revisit the model, right?

Maybe there are more than 6 sides? Maybe the sides are marked differently from what the model says?

This would be the same whenever the empirical statistics are sufficiently different from the model statistics.

How else would you do it?
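
The model-checking idea in this post can be sketched as a support check (the names below are illustrative, not from any particular library):

```python
# Sketch of the model-checking idea: if an observation falls outside
# the support of the model (probability 0 under it), the model is
# refuted and must be revised.  DIE_MODEL and model_refuted are
# illustrative names, not an established API.

DIE_MODEL = {1, 2, 3, 4, 5, 6}  # outcomes the six-sided-die model allows

def model_refuted(observations, support=DIE_MODEL):
    """True if any observation has probability 0 under the model."""
    return any(obs not in support for obs in observations)

print(model_refuted([3, 1, 6, 2]))  # False -- consistent with the model
print(model_refuted([3, 1, 7]))     # True  -- a single 7 refutes it
```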

18. Nov 26, 2013

### Jano L.

Why? Because it happened once?

Suppose we answer the last question with "yes". Then what probability, different from zero, should be assigned? I doubt you have a definite number. It is difficult to assign a "correct" probability to A based on one observed instance of A. And even if we somehow did, that would be a probability based on observation, not a probability calculated from the model, which is what we were discussing. It is not clear how the two are related.

I do not agree. The model accounted for that event and assigned it probability 0. Observing the event may catch our interest, but it is not clear how, or whether, to modify the model in response.

If we model a random number generator as a six-sided die, then not only does the result "7" have probability 0, it is even considered impossible. That does not prevent it from happening, though :-)

In practice, not necessarily. There is a story of a king who won an island in a game of dice: he threw two dice and one of them broke in two pieces during the roll, showing, I think, both a 3 and a 4, which the gamblers agreed meant the result was 7. But we have not modified our model of dice since then, because the event is so rare that nobody knows what probability other than 0 it should be assigned.

This is better. If 7 came up systematically, there would be a compelling reason to assign it a positive probability, perhaps based on its relative frequency in a long run of experiments.
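
The relative-frequency suggestion can be sketched in one line (the data below is made up for illustration):

```python
from collections import Counter

# Sketch of the relative-frequency suggestion: if an outcome the model
# excludes shows up systematically, its long-run frequency gives a
# natural positive probability to assign it.  The rolls are hypothetical.

observed = [1, 2, 7, 4, 6, 7, 3, 5, 7, 2]  # made-up rolls containing 7s

counts = Counter(observed)
freq_7 = counts[7] / len(observed)
print(freq_7)  # 0.3 -- a candidate probability for the "impossible" 7
```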

19. Nov 27, 2013

### Simon Bridge

I don't need to know the "correct" probability to know that the model was incorrect, though.
That's a question to be addressed when coming up with a new model.
