Defining Probability: Beyond Relative Frequency

  • Context: Graduate
  • Thread starter: RobtO
  • Tags: Probability

Discussion Overview

The discussion revolves around the definition of probability, particularly the distinction between probability as relative frequency and other interpretations, such as Bayesian probability. Participants explore theoretical frameworks, practical implications, and the nuances of different probability interpretations.

Discussion Character

  • Debate/contested
  • Conceptual clarification
  • Technical explanation

Main Points Raised

  • Some participants highlight that relative frequencies are not synonymous with probabilities, referencing a statement from "The Quantum Theory of Measurement" that emphasizes this distinction.
  • One participant suggests that while relative frequencies can be predicted from theory, experimental data only provides estimates, using coin flipping as an example.
  • Another participant introduces the frequentist and Bayesian interpretations of probability, noting that both are valid and that the formalism of probability is not sensitive to the interpretation used.
  • A participant mentions the axiomatic definition of probability developed by Kolmogorov, asserting that both frequentist and Bayesian concepts fit within this framework.
  • Examples of the power of Bayesian methods, such as Bayesian estimation and inferencing, are provided, along with links to resources.
  • Some participants assert that probability is a measure, and that conditional probability is distinct from relative frequency probability.
  • There is a discussion about the implications of defining relative frequency as the limit of measured values and whether this necessitates a distinction from probability itself.
  • One participant questions whether there is a barrier to applying a frequency interpretation to probabilities, while another clarifies that the mathematical definition of measure differs from experimental measurement.
  • Concerns are raised about whether conditional probabilities can be reduced to relative frequencies, with participants arguing that they represent different measures.
  • An example is provided illustrating that events with very small probabilities may not occur in practice, emphasizing the difference between theoretical probability and observed frequency.

Areas of Agreement / Disagreement

Participants express differing views on the relationship between probability and relative frequency, with no consensus reached on the necessity of distinguishing between the two. The discussion remains unresolved regarding the implications of these interpretations.

Contextual Notes

Participants reference various interpretations and definitions of probability, including the frequentist and Bayesian perspectives, and the axiomatic approach by Kolmogorov. There are unresolved questions about the implications of these definitions and their practical applications.

RobtO
I am reading "The Quantum Theory of Measurement," by Busch, Lahti, and Mittelstaedt, and I came across this statement (p. 44, 1996 ed.):

"The difficulties encountered in giving a precise formulation of this idea are due to the facts that relative frequencies are not probabilities, and probabilities need not be relative frequencies."

Again, on p. 47, they mention that "... the concept of probability cannot be reduced to that of relative frequency."

Now, I was taught at my mother's knee (well, my physics professor's) that probability was defined in terms of relative frequency. Can anyone help me understand what probability means, if not relative frequency, and under what conditions "probabilities need not be relative frequencies"?
 
If relative frequencies can be predicted from theory, then these would be probabilities. However, experimental data can give only estimates. A simple example: coin flipping. Theoretically (assuming a fair coin), heads and tails each have probability 1/2. However, if you flip a coin twice, you will get one head and one tail only half the time. A large number of flips gives you approximately 50% heads and 50% tails, but the chances of getting exactly those results are small.
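The coin-flip numbers above can be checked exactly. A minimal Python sketch (my own, not from the thread) computing the binomial probability of getting exactly half heads:

```python
from math import comb

def prob_exactly_half_heads(n):
    """Probability of exactly n/2 heads in n fair-coin flips (n even)."""
    return comb(n, n // 2) / 2 ** n

# Two flips: exactly one head and one tail happens only half the time.
print(prob_exactly_half_heads(2))                 # 0.5
# Many flips: the relative frequency concentrates near 1/2, yet the
# probability of landing on *exactly* 50% heads keeps shrinking.
print(round(prob_exactly_half_heads(100), 4))     # 0.0796
```

This is the point of the example: the theoretical probability is 1/2, but the observed relative frequency in any finite experiment only estimates it, and hitting it exactly becomes rare as the number of flips grows.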
 
RobtO said:
Now, I was taught at my mother's knee (well, my physics professor's) that probability was defined in terms of relative frequency.

That's the frequentist interpretation of probability, which is a common one (particularly amongst physicists), but by no means the only widely accepted definition. The other big one is the Bayesian interpretation, which views probabilities as "degrees of belief" (or some other subjective entity).

In terms of the gory details of how this stuff is really defined, axiomatically, it does not matter which interpretation you employ. The formalism isn't sensitive to it.
 
There are useful aspects of both the frequentist and Bayesian interpretations. I go by the duck probability model: If it looks like a duck and quacks like a duck ...

... and in this case the duck is embodied in measure theory. Suppose [itex](\Omega,{\mathcal F}, \nu)[/itex] is a measure space -- i.e., [itex]\mathcal F[/itex] is a σ-algebra on the set [itex]\Omega[/itex] and [itex]\nu[/itex] is a measure function on [itex]\mathcal F[/itex]. If [itex]\nu(\Omega)=1[/itex] then [itex](\Omega,{\mathcal F}, \nu)[/itex] is a probability space. In this case, the set [itex]\Omega[/itex] is typically called the sample space and the measure function [itex]\nu[/itex] is typically replaced by [itex]P[/itex] to denote probability.

Both the frequentist and Bayesian concepts of probability fall within this axiomatic definition of probability, which was developed by Kolmogorov.
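The axiomatic setup above can be made concrete on a finite sample space. A small sketch (my own illustration, assuming a fair six-sided die) of a probability space [itex](\Omega,{\mathcal F}, P)[/itex] where the Kolmogorov axioms can be checked directly:

```python
# A finite probability space for one roll of a fair die.
# Omega is the sample space; events are subsets of Omega; P is the
# measure assigning each outcome weight 1/6.
omega = frozenset(range(1, 7))

def P(event):
    """The probability measure: uniform weight over outcomes."""
    assert event <= omega, "events must be subsets of the sample space"
    return len(event) / len(omega)

# Kolmogorov's axioms, checked on this space:
assert P(omega) == 1                                  # normalization
assert all(P(frozenset({w})) >= 0 for w in omega)     # non-negativity
evens, odds = frozenset({2, 4, 6}), frozenset({1, 3, 5})
assert P(evens | odds) == P(evens) + P(odds)          # additivity (disjoint events)
```

Nothing in this definition mentions repeated trials; relative frequencies enter only when you interpret what the measure means.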
 
A couple of good examples of where the Bayesian view is extremely powerful: Bayesian estimation (e.g., Kalman filters) and Bayesian inferencing (e.g., causal networks). A tutorial on Bayesian estimation and Kalman filters: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.7026. A course on Bayesian inferencing: http://ite.gmu.edu/~klaskey/SYST664/SYST664.html .
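As a toy illustration of the Bayesian "degree of belief" view (my own sketch, not taken from the linked tutorials): treat a coin's unknown bias as a quantity with a probability distribution, and update that belief with data via Bayes' rule on a discrete grid.

```python
# Bayesian update of a coin's unknown bias on a discrete grid.
# Prior: uniform belief over candidate bias values in [0, 1].
grid = [i / 100 for i in range(101)]
prior = [1 / len(grid)] * len(grid)

def update(prior, heads, tails):
    """Posterior belief over the grid after observing the given flip counts."""
    likelihood = [p ** heads * (1 - p) ** tails for p in grid]
    unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

posterior = update(prior, heads=7, tails=3)
mean = sum(p * w for p, w in zip(grid, posterior))
# The posterior mean moves toward the observed data; with a uniform
# prior and 7 heads / 3 tails it sits near 8/12 (the Beta(8,4) mean).
print(round(mean, 3))
```

Here the probability is a belief about a fixed but unknown parameter, not a long-run frequency of anything, yet the same Kolmogorov formalism applies throughout.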
 
Probability is a measure. Think in terms of Kolmogorov's axiomatic definition. Conditional probability is a different measure than relative-frequency-type probability.
 
Thanks for the responses. It seems that what folks are saying is: probabilities are what you predict, but relative frequencies are what you actually measure. If this is the case, I'm not sure why the authors make such a big deal about the distinction. In physics, of course, we constantly have to deal with the distinction between predictions (theory) and experiment.

They actually define the relative frequency as the limit N -> infinity of the measured values, then prove as a theorem that the relative frequency is equal to the probability under appropriate conditions. I'm still not sure why you would need to do this, if it's just a matter of the interpretation you put on probability.

But anyway, would you all agree that there's no barrier to putting a frequency interpretation on a given set of probabilities?
 
ssd said:
Probability is a measure. Think in terms of Kolmogorov's axiomatic definition. Conditional probability is a different measure than relative-frequency-type probability.

But doesn't the formula
[itex]P(A|B) = \frac{P(A \cap B)}{P(B)}[/itex]
reduce the conditional probability to relative frequencies?
 
RobtO said:
Thanks for the responses. It seems that what folks are saying is: probabilities are what you predict, but relative frequencies are what you actually measure.
No. You misinterpreted what I and others wrote. We used "measure" in its mathematical sense, which is not at all the same as an experimental measurement. The mathematical concept of measure is essentially a generalization of the concept of length. You can google "measure theory" to get a taste of the concept.

A primer on measure theory: http://www.math.uconn.edu/~bass/meas.pdf .
How it relates to probability: http://www.math.uconn.edu/~bass/prob.pdf .
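The "generalization of length" remark can be made concrete. A small sketch of my own (assuming pairwise-disjoint half-open intervals) of a length measure on finite unions of intervals:

```python
def length_measure(intervals):
    """Total length of a finite union of pairwise-disjoint intervals [a, b)."""
    assert all(a <= b for a, b in intervals)
    return sum(b - a for a, b in intervals)

# Additivity: the measure of a disjoint union is the sum of the
# measures of the pieces -- exactly the behavior probability axiomatizes.
pieces = [(0, 1), (2, 3.5)]
assert length_measure(pieces) == sum(length_measure([p]) for p in pieces)
# Restricting to [0, 1) with length as the measure gives a probability
# space: the uniform distribution on [0, 1).
assert length_measure([(0, 1)]) == 1
```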
 
RobtO said:
But doesn't the formula
[itex]P(A|B) = \frac{P(A \cap B)}{P(B)}[/itex]
reduce the conditional probability to relative frequencies?

Not really. P(A|B) is another measure; it does not require the concept of frequency.

http://en.wikipedia.org/wiki/Probability_space
http://www.probabilityandfinance.com/articles/06.pdf

I will give an example. Events with very, very small (but nonzero) probabilities do not occur in practice.
Suppose a person is trying to insert an envelope into a letter box from a distance of 10 meters by throwing the envelope.
The slit of the box is just 1 mm wider and 1 mm longer than the thickness and width, respectively, of the envelope. Theoretically his chance [probability measure (by "measure", I loosely mean "a basis for comparison")] of success is not zero. But practically, however large a number of trials he performs, his relative frequency of success will be exactly zero.
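The claim that P(·|B) is itself a measure, defined purely from P with no appeal to repeated trials, can be shown on a finite space. A minimal sketch of my own (assuming a fair die, not an example from the thread):

```python
# Conditional probability as a new measure, built only from the
# original measure P -- no relative frequencies involved.
omega = frozenset(range(1, 7))          # one roll of a fair die

def P(event):
    """Uniform probability measure on subsets of omega."""
    return len(event & omega) / len(omega)

def P_given(B):
    """Return the map A -> P(A & B) / P(B), itself a probability measure."""
    def measure(A):
        return P(A & B) / P(B)
    return measure

B = frozenset({4, 5, 6})                # condition: roll is at least 4
A = frozenset({2, 4, 6})                # event: roll is even
P_B = P_given(B)
print(P_B(A))                           # 2/3: two of the three outcomes in B are even
# P(.|B) satisfies the same axioms as P -- it is just another measure.
assert abs(P_B(omega) - 1) < 1e-12
```

Note that the ratio formula manipulates measures of sets; whether you then *interpret* those measures as long-run frequencies is a separate step.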
 
