# Defining Probability: Beyond Relative Frequency

In summary, the thread discusses the relationship between probability and relative frequency: the two are not the same thing, and conditional probability is a measure in its own right rather than a relative frequency. The thread then works through examples of how the concepts are related.
RobtO
I am reading "The Quantum Theory of Measurement," by Busch, Lahti, and Mittelstaedt, and I came across this statement (p. 44, 1996 ed.):

"The difficulties encountered in giving a precise formulation of this idea are due to the facts that relative frequencies are not probabilities, and probabilities need not be relative frequencies."

Again, on p. 47, they mention that "... the concept of probability cannot be reduced to that of relative frequency."

Now, I was taught at my mother's knee (well, my physics professor's) that probability was defined in terms of relative frequency. Can anyone help me understand what probability means, if not relative frequency, and under what conditions "probabilities need not be relative frequencies"?

If relative frequencies can be predicted from theory, then these would be probabilities. However, experimental data can give only estimates. A simple example is coin flipping. Theoretically (assuming a fair coin), heads and tails each have probability 1/2. However, if you flip a coin twice, you will get one head and one tail only half the time. A large number of flips gives you approximately 50% heads and 50% tails, but the chance of getting exactly those proportions is small.
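The coin-flipping point above can be checked numerically. The sketch below (a simulation under assumed fair-coin probabilities, not a derivation) estimates how often a run of flips lands on *exactly* half heads: for two flips it happens about half the time, while for 1000 flips "exactly 500 heads" is rare even though the relative frequency is almost always close to 0.5.

```python
import random

random.seed(0)

def fraction_exactly_half(n_flips, n_trials=10_000):
    """Estimate how often n_flips of a fair coin give exactly half heads."""
    exact = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads * 2 == n_flips:
            exact += 1
    return exact / n_trials

# Two flips: exactly one head occurs with probability 1/2.
p2 = fraction_exactly_half(2)

# 1000 flips: exactly 500 heads is rare (roughly 2.5%), yet the
# relative frequency of heads is almost always near 0.5.
p1000 = fraction_exactly_half(1000, n_trials=2_000)
```

The contrast between `p2` and `p1000` is the point of the example: the relative frequency concentrates near the probability, but rarely equals it exactly.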

RobtO said:
Now, I was taught at my mother's knee (well, my physics professor's) that probability was defined in terms of relative frequency.

That's the frequentist interpretation of probability, which is a common one (particularly amongst physicists), but by no means the only widely accepted definition. The other big one is the Bayesian interpretation, which views probabilities as "degrees of belief" (or some other subjective entity).

In terms of the gory details of how this stuff is really defined, axiomatically, it does not matter which interpretation you employ. The formalism isn't sensitive to it.

There are useful aspects of both the frequentist and Bayesian interpretations. I go by the duck probability model: If it looks like a duck and quacks like a duck ...

... and in this case the duck is embodied in measure theory. Suppose $(\Omega,{\mathcal F}, \nu)$ is a measure space -- i.e., $\mathcal F$ is a σ-algebra on the set $\Omega$ and $\nu$ is a measure function on $\mathcal F$. If $\nu(\Omega)=1$ then $(\Omega,{\mathcal F}, \nu)$ is a probability space. In this case, the set $\Omega$ is typically called the sample space and the measure function $\nu$ is typically replaced by $P$ to denote probability.

Both the frequentist and Bayesian concepts of probability fall within this axiomatic definition of probability, which was developed by Kolmogorov.
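The axiomatic definition above can be made concrete on a tiny example. The sketch below builds a finite probability space (one roll of a fair die, with the power set serving as the σ-algebra, which is legitimate in the finite case) and checks Kolmogorov's axioms exhaustively; the space and the uniform measure are my own illustrative choices, not anything from the book under discussion.

```python
from itertools import chain, combinations

# Sample space Omega for one roll of a fair die.
omega = frozenset(range(1, 7))

def P(event):
    """Uniform probability measure: |A| / |Omega|."""
    return len(event & omega) / len(omega)

def powerset(s):
    """All subsets of s -- the sigma-algebra F in the finite case."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

events = powerset(omega)

# Kolmogorov's axioms, checked on every event:
# 1. Non-negativity: P(A) >= 0 for all A in F.
assert all(P(a) >= 0 for a in events)
# 2. Normalization: P(Omega) = 1.
assert P(omega) == 1.0
# 3. Additivity: P(A ∪ B) = P(A) + P(B) for disjoint A, B.
for a in events:
    for b in events:
        if not (a & b):
            assert abs(P(a | b) - (P(a) + P(b))) < 1e-12
```

Nothing in this check refers to frequencies or beliefs, which is exactly the sense in which the formalism "isn't sensitive" to the interpretation.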

A couple of good examples of where the Bayesian view is extremely powerful: Bayesian estimation (e.g., Kalman filters) and Bayesian inference (e.g., causal networks). A tutorial on Bayesian estimation and Kalman filters: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.7026. A course on Bayesian inference: http://ite.gmu.edu/~klaskey/SYST664/SYST664.html .

Probability is a measure. Think in terms of Kolmogorov's axiomatic definition. Conditional probability is a different measure from relative-frequency-type probability.

Thanks for the responses. It seems that what folks are saying is: probabilities are what you predict, but relative frequencies are what you actually measure. If this is the case, I'm not sure why the authors make such a big deal about the distinction. In physics, of course, we constantly have to deal with the distinction between predictions (theory) and experiment.

They actually define the relative frequency as the limit as $N \to \infty$ of the measured values, then prove as a theorem that the relative frequency equals the probability under appropriate conditions. I'm still not sure why you would need to do this, if it's just a matter of the interpretation you put on probability.
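The limiting behaviour being described can be watched directly. The sketch below (a simulation I am adding for illustration, not the book's construction) records the relative frequency of heads at increasing $N$; by the law of large numbers the deviation from 1/2 typically shrinks like $1/\sqrt{N}$, but for any finite $N$ the frequency is an estimate, not the probability itself.

```python
import random

random.seed(42)

def relative_frequency(n):
    """Observed fraction of heads in n flips of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# Relative frequency at increasing sample sizes: the frequentist
# "limit" definition in miniature. Deviations from 0.5 typically
# shrink like 1/sqrt(N), but never vanish at finite N.
freqs = {n: relative_frequency(n) for n in (10, 1_000, 100_000)}
```

This is the gap the theorem has to bridge: the probability is defined axiomatically, and equality with the limiting frequency is something to be *proved* (and only holds almost surely), not assumed.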

But anyway, would you all agree that there's no barrier to putting a frequency interpretation on a given set of probabilities?

ssd said:
Probability is a measure. Think in terms of Kolmogorov's axiomatic definition. Conditional probability is a different measure from relative-frequency-type probability.

But doesn't the formula
$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$
reduce the conditional probability to relative frequencies?

RobtO said:
Thanks for the responses. It seems that what folks are saying is: probabilities are what you predict, but relative frequencies are what you actually measure.
No. You misinterpreted what I and others wrote. We used "measure" in its mathematical sense, which is not at all the same as an experimental measurement. The mathematical concept of a measure is essentially a generalization of the concept of length. You can google "measure theory" to get a taste of the concept.

A primer on measure theory: http://www.math.uconn.edu/~bass/meas.pdf .
How it relates to probability: http://www.math.uconn.edu/~bass/prob.pdf .

RobtO said:
But doesn't the formula
$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$
reduce the conditional probability to relative frequencies?

Not really. $P(A \mid B)$ is another measure; this does not require the concept of frequency.
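That claim can be verified numerically on a small example of my own choosing (a fair die, with $B$ = "the roll is even"): $P(\cdot \mid B)$ defined by the ratio formula satisfies Kolmogorov's axioms itself, so it is a new probability measure on the same σ-algebra, with no appeal to frequencies.

```python
from fractions import Fraction
from itertools import chain, combinations

# Sample space: one roll of a fair die. Conditioning event B = "even".
omega = frozenset(range(1, 7))
B = frozenset({2, 4, 6})

def P(event):
    """Uniform probability measure on the die, as an exact fraction."""
    return Fraction(len(event & omega), len(omega))

def P_given_B(event):
    """Conditional probability P(A|B) = P(A ∩ B) / P(B)."""
    return P(event & B) / P(B)

subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

# P(.|B) is itself a probability measure:
assert P_given_B(omega) == 1                     # normalization
assert all(P_given_B(a) >= 0 for a in subsets)   # non-negativity
for a in subsets:                                # additivity
    for b in subsets:
        if not (a & b):
            assert P_given_B(a | b) == P_given_B(a) + P_given_B(b)
```

So the ratio formula does not "reduce" conditional probability to frequencies; it builds one measure out of another, entirely within the axiomatic framework.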

http://en.wikipedia.org/wiki/Probability_space
http://www.probabilityandfinance.com/articles/06.pdf

I will give an example. Events with very, very small (but nonzero) probabilities do not occur in practice.
Suppose a person is trying to insert an envelope into a letter box from a distance of 10 meters by throwing it.
The slit of the box is just 1 mm wider and 1 mm longer than the thickness and width, respectively, of the envelope. Theoretically his chance [probability measure (by "measure", I loosely mean "a basis for comparison")] of success is not zero. But practically, however large a number of trials he performs, his relative frequency of success will be exactly zero.
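The letter-box point can be simulated. The sketch below stands in a hypothetical success probability of $10^{-9}$ for the envelope throw (my number, chosen only to be tiny but nonzero) and runs a million trials: the measure is strictly positive, yet the observed relative frequency is, with overwhelming likelihood, exactly zero.

```python
import random

random.seed(1)

# Hypothetical stand-in for the envelope-through-the-slit throw:
# a tiny but strictly positive success probability.
p_success = 1e-9

# A "large" but feasible experiment: one million attempts.
n_trials = 1_000_000
successes = sum(random.random() < p_success for _ in range(n_trials))
relative_freq = successes / n_trials

# The probability measure is positive, but the relative frequency
# recorded by any practical experiment is almost surely 0.
```

The expected number of successes here is only $10^{-3}$, so zero observed successes is the typical outcome: a clean case where the probability and the measured relative frequency disagree.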


## 1. What is the definition of probability?

The definition of probability is the likelihood or chance of a specific event occurring. It is a measure of how likely it is for an event to happen, expressed as a number between 0 and 1, where 0 indicates impossibility and 1 indicates certainty.

## 2. What is the difference between theoretical and experimental probability?

Theoretical probability is based on mathematical calculations and assumes that all outcomes are equally likely. Experimental probability, on the other hand, is based on the results of actual experiments or observations. It is an estimation of the theoretical probability based on real-world data.
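The distinction can be shown in a few lines. The sketch below (a simulated "experiment" with parameters of my own choosing) compares the theoretical probability of rolling a six with a fair die against the relative frequency observed over many simulated rolls.

```python
import random

random.seed(7)

# Theoretical probability of rolling a six with a fair die:
theoretical = 1 / 6

# Experimental probability: the relative frequency observed
# in a simulated experiment of 60,000 rolls.
rolls = [random.randint(1, 6) for _ in range(60_000)]
experimental = rolls.count(6) / len(rolls)

# The experimental value estimates the theoretical one; it is
# close, but rarely exactly equal.
```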

## 3. What is the difference between independent and dependent events?

Independent events are events where the outcome of one event does not affect the outcome of another event. Dependent events, on the other hand, are events where the outcome of one event is influenced by the outcome of another event.
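A standard pair of illustrations (my own, using exact fractions): two coin flips are independent, while drawing two cards without replacement is dependent, since removing the first card changes the odds for the second.

```python
from fractions import Fraction

# Independent events: two fair coin flips. The chance the second
# flip is heads does not depend on the first flip's outcome.
p_second_heads = Fraction(1, 2)
p_second_heads_given_first_heads = Fraction(1, 2)
assert p_second_heads == p_second_heads_given_first_heads

# Dependent events: drawing two aces without replacement from a
# 52-card deck. The second draw's odds depend on the first draw.
p_first_ace = Fraction(4, 52)
p_second_ace_given_first_ace = Fraction(3, 51)   # one ace removed
p_both_aces = p_first_ace * p_second_ace_given_first_ace
```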

## 4. How is probability used in real-life situations?

Probability is used in a wide range of fields, including statistics, economics, science, and engineering. It can be used to predict the likelihood of an event occurring, make informed decisions, and assess risk in various situations.

## 5. What are some common misconceptions about probability?

One common misconception about probability is that it can predict the outcome of a single event. In reality, probability describes the likelihood of an event over a large number of trials. Another misconception is that past events can affect the probability of future events, when in fact, for independent trials such as coin flips, each outcome is unaffected by previous outcomes.
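The second misconception (the gambler's fallacy) can be tested by simulation. The sketch below, using parameters of my own choosing, conditions on a streak of five heads and checks whether the next flip is any more likely to be tails; since flips are independent, the conditional frequency stays near 0.5.

```python
import random

random.seed(3)

# Simulate a long run of fair coin flips (True = heads).
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Collect the flip that immediately follows each run of 5 heads.
next_after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                     if all(flips[i:i + 5])]

# Gambler's fallacy says this should lean towards tails;
# independence says it stays near 0.5.
freq_heads_after_streak = sum(next_after_streak) / len(next_after_streak)
```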
