Is there such a thing as Objective Probability?

In summary, the conversation discusses the concept of "objective probability" and whether it truly exists. The speaker argues that probability is not a property of events but rather a reflection of our knowledge and information about them. They also mention the use of probability in predicting outcomes in various scenarios, such as in horse races, and the limitations of this approach. The conversation ends with a mention of quantum mechanics and the distinction between probability theory and statistics.
  • #1
murrmac
Is there such a thing as "Objective Probability"?

IMHO, no.

I am no mathematician; I am in awe of the mathematical prowess manifested by all the contributors to this forum. But I do make my living by assessing probabilities (intuitively), and thankfully I am able to make the correct decisions most of the time.

It has always seemed to me that it is fallacious to talk about the "probability of an event happening"; what we should say is "the probability of me being correct if I forecast the outcome of the event to be such-and-such". Which would of course be much too long-winded.

Nonetheless, it is my contention that probability is not a property of the event itself; it is a function of the information which we possess about the event.

For example, if we knew the exact state of our Newtonian universe at any given moment in time, there would be no probability involved in a coin toss; the outcome would be a foregone certainty. Our assessment of 50% probability is a reflection of our lack of information, and is not an intrinsic property of the event itself.

I can see heads shaking all around me at this point, but let us move on to another type of event, where the probabilities are not quite so clear-cut.

In a horse race, probabilities are estimated by bookmakers (and, on Betfair, by both punters and layers). Now, there seems to be a widespread view that there exists a "true" probability for each horse to win.

IMHO that is nonsense.

To talk of "the probability of the horse winning " is fallacious. A professional punter, armed with years of experience, will predict the winner of a 10 horse race maybe once in every 4 races. A once a year racecourse visitor, who knows nothing about racing, but makes their selection at random, will select the winner once in every 10 races.

So, if it is fallacious to talk about the probability of a horse winning, it is equally fallacious to talk about the probability of any other event happening. As I said earlier, the correct approach is to say "What is the probability of me being correct if I forecast the outcome of this event to be such-and-such".

All of which is of no practical use, I agree, but I just thought it might be fruitful to drop a (hopefully) philosophical offering into the mathematical pool ...
 
  • #2


murrmac said:
Nonetheless, it is my contention that probability is not a property of the event itself; it is a function of the information which we possess about the event.
[...]
So, if it is fallacious to talk about the probability of a horse winning, it is equally fallacious to talk about the probability of any other event happening. As I said earlier, the correct approach is to say "What is the probability of me being correct if I forecast the outcome of this event to be such-and-such".

What you are saying is basically correct.

In the physical world probability theory is almost always a model that is used in the face of some sort of ignorance -- often ignorance of the finer details of initial conditions. So the probability quoted is, as you say, a measure of the likelihood that the prediction will be correct.

The one contrary example of which I am aware is quantum mechanics. There it appears that the world really is stochastic. But from a purely operational point of view, the probabilities calculated in quantum mechanics refer to the probability of some measurement lying within a given range.

One further thing for you to think about. Strictly speaking, mathematical probability theory starts out with a probability space. In other words, probability theory is the study of the consequences of probability laws (aka measures, aka distributions) and assumes that the probabilities are known. The actual determination of approximate probability laws from data is the science of statistics, which is a somewhat different thing. Quite often there are approximations used that work quite well in practice but are difficult to justify theoretically.
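
To make the distinction concrete, here is a minimal Python sketch (the coin and its bias are invented purely for illustration): given a known probability law, probability theory tells you the chance of an outcome; statistics goes the other way and estimates the law from data.

[code]
# Probability theory vs. statistics -- an illustrative sketch with made-up numbers.
import random
from math import comb

# Probability theory: the probability law is GIVEN.
# A coin with known P(heads) = 0.5: what is P(exactly 7 heads in 10 tosses)?
p, n, k = 0.5, 10, 7
p_exactly_k = comb(n, k) * p**k * (1 - p)**(n - k)
print(f"P(exactly {k} heads in {n} tosses) = {p_exactly_k:.4f}")

# Statistics: the probability law is UNKNOWN and must be estimated from data.
random.seed(0)
true_p = 0.55                       # hidden "true" bias, pretended unknown
tosses = [random.random() < true_p for _ in range(1000)]
p_hat = sum(tosses) / len(tosses)   # point estimate of the bias from the data
print(f"Estimated P(heads) from 1000 tosses: {p_hat:.3f}")
[/code]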

So sometimes I would expect that your intuitive notion of the probability of some event is just as good as one based on poor assumptions followed by a page or two of confusing formulas.
 
  • #3


murrmac said:
IMHO, no.
It has always seemed to me that it is fallacious to talk about the "probability of an event happening"; what we should say is "the probability of me being correct if I forecast the outcome of the event to be such-and-such". Which would of course be much too long-winded.

Nonetheless, it is my contention that probability is not a property of the event itself; it is a function of the information which we possess about the event.
IMO, these are not contradictory. Probability is a way to assess the likelihood of some event, based on previous observations of the system giving rise to the event.
murrmac said:
For example, if we knew the exact state of our Newtonian universe at any given moment in time, there would be no probability involved in a coin toss; the outcome would be a foregone certainty. Our assessment of 50% probability is a reflection of our lack of information, and is not an intrinsic property of the event itself.
For some events, our understanding of the universe is good enough that we can make predictions that are extremely likely. For example, if I predict that the sun will rise tomorrow morning at 6:53am (or whatever time the almanac says), the probability of this event is pretty close to 1. Other events, even very simple ones such as coin tosses, are much harder to predict. There are a lot of unknowns to think about, such as the angle the coin makes with the horizontal, the force used to flip it, the location at which the force is applied, and so on. Based on a large number of trials, I know that a coin flipped into the air will land in one of two ways (I have never seen one land and stay on its edge). If these events are equally likely, the probability of each is 1/2.

Probability is not as useful in predicting individual events as it is in predicting large numbers of events. For example, if we have some amount of a radioactive element, we can say with near certainty that about half of it will have decayed after one half-life, but we have no way of knowing when a particular radioactive atom will decay.
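
Here is a rough Python sketch of that point (the half-life is made up and the units are arbitrary; this is not a model of any particular element): the fraction decayed after one half-life is very close to 1/2, while individual lifetimes are all over the place.

[code]
# Aggregate decay is predictable, individual atoms are not.
# The half-life here is hypothetical; units are arbitrary.
import random
from math import log

random.seed(0)
half_life = 10.0
lam = log(2) / half_life                      # decay constant

n = 100_000
lifetimes = [random.expovariate(lam) for _ in range(n)]

# Aggregate behaviour: close to half the atoms decay within one half-life.
frac_decayed = sum(t <= half_life for t in lifetimes) / n
print(f"Fraction decayed after one half-life: {frac_decayed:.3f}")   # ~0.5

# Individual behaviour: single lifetimes vary wildly.
print("First five individual lifetimes:", [round(t, 1) for t in lifetimes[:5]])
[/code]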



murrmac said:
I can see heads shaking all around me at this point, but let us move on to another type of event, where the probabilities are not quite so clear-cut.

In a horse race, probabilities are estimated by bookmakers (and, on Betfair, by both punters and layers). Now, there seems to be a widespread view that there exists a "true" probability for each horse to win.

IMHO that is nonsense.

To talk of "the probability of the horse winning " is fallacious. A professional punter, armed with years of experience, will predict the winner of a 10 horse race maybe once in every 4 races. A once a year racecourse visitor, who knows nothing about racing, but makes their selection at random, will select the winner once in every 10 races.
I agree with you here. If the same set of horses raced every time, then we could come up with more accurate probabilities.

Someone can talk about an objective or theoretical probability, but it is not possible to know this probability, since the experiment with the same horses is not performed many times.
murrmac said:
So, if it is fallacious to talk about the probability of a horse winning, it is equally fallacious to talk about the probability of any other event happening.
I disagree. You go from an "experiment" (horse race) in which there really isn't much information available, and extrapolate this to all events.

If you take an unweighted (fair) die, the probability of it landing on any one face is the same as that for any other face, which means the probability of throwing a one, say, is 1/6. If you throw this die hundreds or thousands of times, and record the outcome of each throw, you should see that about 1/6 of the throws resulted in a 1, about 1/6 resulted in a 2, and so on.

Knowing this probability wouldn't help you predict what the next throw would be, though.

In contrast, if you throw two dice, the likelihood of getting a total of 6 is much higher than that of getting a 12, since there are five ways to get a 6 but only one way to get a 12.
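
A quick simulation shows both effects (a plain Python sketch; the exact counts will vary a little from run to run):

[code]
# Empirical frequencies for one fair die and for the sum of two fair dice.
import random
from collections import Counter

random.seed(0)
n = 60_000

# One die: each face turns up in roughly 1/6 of the throws.
one_die = Counter(random.randint(1, 6) for _ in range(n))
for face in range(1, 7):
    print(f"face {face}: {one_die[face] / n:.3f}")        # each ~0.167

# Two dice: 5 of the 36 equally likely pairs sum to 6, only 1 pair sums to 12.
two_dice = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(n))
print(f"P(sum = 6)  ~ {two_dice[6] / n:.3f}   (theory: 5/36 = {5/36:.3f})")
print(f"P(sum = 12) ~ {two_dice[12] / n:.3f}   (theory: 1/36 = {1/36:.3f})")
[/code]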


murrmac said:
As I said earlier, the correct approach is to say "What is the probability of me being correct if I forecast the outcome of this event to be such-and-such".

All of which is of no practical use, I agree, but I just thought it might be fruitful to drop a (hopefully) philosophical offering into the mathematical pool ...
 
  • #4


murrmac said:
As I said earlier, the correct approach is to say "What is the probability of me being correct if I forecast the outcome of this event to be such-and-such".
The major problem with your viewpoint is that it necessarily follows that there is no objective probability for the event of you being correct if you forecast the outcome of some event to be such-and-such. Instead, the correct approach is to say "What is the probability of me being correct if I forecast that you will be correct if you forecast the outcome of some event to be such-and-such". Of course, there is no objective probability for that event either...
 
  • #5


I completely agree with your premises, murrmac, and I'd like to extend your argument a bit further, covering something that I've been thinking about: the whole notion of the probability that some particular thing exists. For example, no one has ever observed a unicorn, but is it correct to say that the probability that unicorns exist is very low? I would say no; a low probability does not equal lack of justification for belief. It is nonsense to say that the probability that some thing exists is high or low, because these are facts that are either true or false, and any information regarding the existence would not affect the justification for any probability value. Even if I say, "I think I saw a unicorn, but I can't be entirely sure", it is nonsense to say that the probability of the unicorn's existence has risen. We can talk about the probability of observing a particular thing, but with no information we have no basis for calculating the probability of observing it. Probability always depends on measurable parameters.
 
  • #6


Jarle said:
For example, no one has ever observed a unicorn, but is it correct to say that the probability that unicorns exist is very low? I would say no [...] It is nonsense to say that the probability that some thing exists is high or low, because these are facts that are either true or false [...] Probability always depends on measurable parameters.

Probability theory is one of the most misused branches of mathematics.

You are correct: without further explanation, the concept of "the probability that a unicorn exists" is completely meaningless.

The theory of probability starts with the assumption that one has a probability space -- which is a set of events and a means of assigning probabilities to those events. Probability theory actually has nothing whatever to do with how one finds that means of assigning probabilities (called a "probability measure").

There is also the notion of "random", a word which is used freely in everyday conversation but which is most difficult to define rigorously. Probability theory neatly sidesteps that problem in the following way: a "random variable" is defined as simply a (measurable) function defined on a probability space. ("Measurable" here is a technical term relating to the abstract theory of measure and integration and has nothing whatever to do with the everyday meaning of the word.) But in everyday language "random" means only something that is somehow described by the mathematics of probability and is not deterministically knowable.
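
Stated compactly, in the usual Kolmogorov formulation a probability space is a triple [tex](\Omega, \mathcal{F}, P)[/tex], where [tex]\Omega[/tex] is the set of outcomes, [tex]\mathcal{F}[/tex] is the collection of events, and the measure [tex]P[/tex] satisfies

[tex]P(A) \geq 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_{i} A_i\Big) = \sum_{i} P(A_i) \ \text{ for pairwise disjoint } A_i \in \mathcal{F}.[/tex]

A random variable is then nothing more than a measurable function [tex]X : \Omega \to \mathbb{R}[/tex].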

So the underlying assumption when the probability of something is discussed is that the "something" is an event in a probability space, and in ordinary language that it is random. Now the existence of unicorns is not random in any sense of the word. Either unicorns exist or they do not. Period. If you see a unicorn then unicorns must exist. That does not make the probability that unicorns exist 1. It means that unicorns darn well exist, no probability to it.

It might surprise you to learn that having probability 1 does NOT mean that the associated event always happens. To understand this, let me describe the reason non-rigorously. Basically, the probability of an event is the number of times it is expected to occur in N independent trials divided by N, in the limit as N tends to infinity. So if something fails to occur only very sporadically, then in the limit the ratio can still be 1. Similarly, something can occur only very sporadically, but still occur sometimes, and have probability 0.

Bottom line -- not everything can be reasonably described as a probability. The existence of unicorns is one of those things.
 
  • #7


DrRocket said:
It might surprise you to learn that having probabilty 1 does NOT mean that the associated event always happens.

I am familiar with this, and this is one reason why I think a mere probability value in [0,1] is insufficient for the task of describing chance for all purposes. The probability that a dart thrown at the interval [0,1] in R hits exactly 0.5 is 0 (assuming a uniform distribution), but this is also the probability that the same dart hits -0.5. In the first case the event is nonetheless possible; in the latter it is not. There is thus a categorical difference between the chances of the two events, which leads me to think that we ought to find some other way of representing chance in cases such as these (i.e. when it seems necessary). The traditional notion of a probability in [0,1] is just not satisfactory when the set of outcomes is infinite. Perhaps an ordered extension of [0,1] would do. Algebraic infinitesimals are a plausible candidate for my example.
 
Last edited:
  • #8


Jarle said:
I am familiar with this, and this is one reason why I think a mere probability value in [0,1] is insufficient for the task of describing chance. The probability that a dart hits 0.5 on [0,1] is 0, but this is also the probability that the same dart hits -0.5. In the first case it is however possible, in the latter it is not. [...] Perhaps an ordered extension of [0,1] would do. Algebraic infinitesimals are a plausible candidate for my example.

I don't think so.

Probability does a good job of describing the phenomena that it is designed to describe, if properly understood and applied.

As a discipline of pure mathematics it is quite well developed, and has really been a branch of the abstract theory of measure and integration since it was rigorously formulated in that way by Kolmogorov.

But one must understand that some of the imagery of probability -- "certain event", "impossible event", etc. -- is not completely consistent with the everyday meaning of those words.

In the case that you describe, you are correct that the probability of the dart hitting any specific number is zero, and that there is something qualitatively different between 0.5 and -0.5. However, what a reasonable probabilistic model would predict is not the likelihood of hitting any specific number, but rather the probability of hitting a number in some small interval [tex][x_0 - \epsilon, \, x_0 + \epsilon][/tex], which it would do quite well if an appropriate probability distribution were used.
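
To take the simplest concrete case -- a uniform distribution on [0,1] with the interval contained in [0,1] --

[tex]P\big(X \in [x_0 - \epsilon, \, x_0 + \epsilon]\big) = 2\epsilon,[/tex]

which is positive for every [tex]\epsilon > 0[/tex] and tends to 0 as [tex]\epsilon \to 0[/tex], whereas for [tex]x_0 = -0.5[/tex] every sufficiently small interval around [tex]x_0[/tex] misses [0,1] entirely and so has probability exactly 0.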

There are other somewhat practical limitations of probability theory as well. One in particular is the assumption that the probability measures are countably additive. I know of no particularly good justification for this assumption from the point of view of what one intuitively thinks probability should mean, but if one merely assumes finite additivity then one leaves behind some powerful machinery from the general theory of measure and integration.
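
For concreteness, countable additivity is the requirement that for every sequence of pairwise disjoint events

[tex]P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i),[/tex]

while finite additivity demands this only for finitely many disjoint events.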

So, while you are certainly free to formulate some other model for probability, I think you will find it difficult to improve on what Kolmogorov did many years ago. It is the basis for the modern theory, and it is a very powerful and coherent theory.

But if you feel differently, then go ahead and develop an alternative. It might be interesting to see how deep a theory you are able to put together. You have your work cut out for you.
 

1. What is objective probability?

Objective probability is the likelihood of an event occurring in the real world based on factual evidence and data. It is not influenced by personal beliefs or opinions.

2. How is objective probability different from subjective probability?

Subjective probability is based on an individual's personal beliefs and opinions, while objective probability is based on factual evidence and data. Subjective probability is often influenced by biases and emotions, while objective probability is more reliable and consistent.

3. Can objective probability be calculated?

Yes, objective probability can be calculated using mathematical formulas and statistical analysis. However, the accuracy of the calculation is dependent on the quality and quantity of the data used.

4. Is objective probability always 100% accurate?

No, objective probability is not always 100% accurate. It is an estimation based on available data, and there is always a margin of error. However, with a larger sample size and more reliable data, the accuracy of objective probability increases.
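
As a rough illustration (a Python sketch with an arbitrary, made-up "true" probability of 0.3), the estimate typically settles nearer the true value as the number of observations grows:

[code]
# The estimate of a probability tightens as the sample size grows.
import random

random.seed(0)
true_p = 0.3    # hypothetical "objective" probability, unknown to the estimator

for n in (10, 100, 1_000, 10_000, 100_000):
    hits = sum(random.random() < true_p for _ in range(n))
    print(f"n = {n:>6}: estimate = {hits / n:.3f}   (true value 0.3)")
[/code]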

5. How is objective probability used in scientific research?

Objective probability is an essential tool in scientific research. It is used to make predictions and test hypotheses, as well as to analyze and interpret data. It allows scientists to draw evidence-based, reliable conclusions about the natural world.
