Understanding the Uniform Probability Distribution in Statistical Ensembles

  • #51
A. Neumaier said:
Then it is the logarithm of the probability density with respect to a prior measure. This is surely less fundamental than the notion of probability, which is independent of a prior measure.
I cannot (literally) argue against that. I was struck by a similarity to the path integral but that's probably spurious.

Ordinary folk, interestingly, have no real notion of probability. One person I knew, after hearing there was a 40% chance of rain, asked '40% of what?'
What people experience is 'confidence', and they can express it as likelihood ratios or 'odds'.
 
  • #52
Mentz114 said:
Because physics is about phase and configuration space. Most of what you've been saying is off topic. You're moving the goalposts around wildly so I don't know what you are trying to say.

I'm sorry you feel that way. I'm just saying that volume in phase space is not the definition of likelihood. In certain circumstances, it's reasonable to assume that equal volumes in phase space imply equal likelihood, but that's an assumption--it's not the definition of likelihood.


I know what phase space is.
 
  • #53
The reason for using the phase space probability density is ergodicity. Ergodicity is supposed to single out the microcanonical ensemble and the other ensembles can be derived from it. Unfortunately, it's too hard to prove ergodicity for even the simplest physical systems. Nevertheless, it's a reasonable assumption in most situations. So at least for ergodic systems, the microcanonical ensemble is a hard, objective prediction of the theory.
 
  • Like
Likes stevendaryl
  • #54
rubi said:
The reason for using the phase space probability density is ergodicity. Ergodicity is supposed to single out the microcanonical ensemble and the other ensembles can be derived from it. Unfortunately, it's too hard to prove ergodicity for even the simplest physical systems. Nevertheless, it's a reasonable assumption in most situations. So at least for ergodic systems, the microcanonical ensemble is a hard, objective prediction of the theory.

Related to the ergodicity assumption is the assumption that the ensemble average of a quantity is equal to the time average.
 
  • #55
stevendaryl said:
Related to the ergodicity assumption is the assumption that the ensemble average of a quantity is equal to the time average.
Right, this is the more physical way of stating the ergodic hypothesis. In modern mathematical language, one usually defines ergodicity as a requirement on the probability measure. The equality of time averages and ensemble averages then follows from the so-called ergodic theorems, for instance the Birkhoff ergodic theorem.
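For concreteness, the equality of time and ensemble averages can be watched numerically in a toy ergodic system. This is a minimal sketch only: the irrational circle rotation stands in for a physical system, and all names in it are illustrative.

```python
import math

def time_average(f, x0, alpha, n):
    """Average f along the orbit of the circle rotation x -> (x + alpha) mod 1."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# An irrational rotation angle makes the map ergodic w.r.t. Lebesgue measure.
alpha = (math.sqrt(5) - 1) / 2
avg = time_average(lambda x: x, x0=0.123, alpha=alpha, n=100_000)

# The ensemble (phase-space) average of f(x) = x is 1/2; the time average
# along a single orbit converges to it, as Birkhoff's theorem guarantees.
print(avg)  # ~0.5
```

For a non-ergodic system (e.g. a rational rotation angle), the orbit closes on finitely many points and the two averages generally differ, which is the content of the preceding posts.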
 
  • #56
rubi said:
Ergodicity is [...] a reasonable assumption in most situations.
... though it is in fact known to be wrong in many physically relevant cases. It thus only has heuristic value.
 
  • #57
A. Neumaier said:
... though it is in fact known to be wrong in many physically relevant cases. It thus only has heuristic value.
Well, I agree that this issue hasn't been addressed in a fully satisfactory way yet. But at least ergodic theory gives some confidence in the validity of the microcanonical ensemble.

(Here's a side question that interests me: Do you know whether such systems that are known not to be ergodic are usually well described by the microcanonical ensemble in experiments nevertheless?)
 
  • #58
rubi said:
Do you know whether such systems that are known not to be ergodic are usually well described by the microcanonical ensemble in experiments nevertheless?
Probably yes (if they are large and simple enough), since in the thermodynamic limit the ensemble is equivalent to the grand canonical ensemble. Working with the latter is much simpler, closer to the formulas used in the applications, needs much weaker assumptions, and works identically in the classical and in the quantum case.
 
  • Like
Likes rubi and Mentz114
  • #59
Demystifier said:
Then let me use an example. Suppose that you flip a coin, but only ONCE. How would you justify that the probability of getting heads is ##p=1/2##? Would you use an ensemble for that?
Edited 'ones' to 'ONCE'.

Interesting question! In the context of your opening reply to the OP, see this interesting answer: http://arnold-neumaier.at/physfaq/topics/singleEvents
 
  • #61
A. Neumaier said:
Bayesian statistics is not intrinsically related to a subjective view of probability.

Because "Bayesian statistics" as used by many is not Bayesian. It just means one uses Bayes's rule, which is common to both Bayesian and Frequentist views.

But couldn't one say that the subjective view is more general, since from the subjective view, the frequentist view can be derived with an additional assumption (via https://en.wikipedia.org/wiki/Exchangeable_random_variables), but the subjective view cannot (I think) be derived from the frequentist view?
 
  • #62
atyy said:
couldn't one say that the subjective view is more general, since from the subjective view, the frequentist view can be derived
Only if, like stevendaryl, one calls everything subjective, including the choice of an operational criterion to give a concept an objective meaning. I find such a usage of the terms unacceptable.
 
  • #63
A. Neumaier said:
Only if, like stevendaryl, one calls everything subjective, including the choice of an operational criterion to give a concept an objective meaning. I find such a usage of the terms unacceptable.

Hmmm, I'm not sure I would go that far (in fact, I'm personally a frequentist). But would you call de Finetti's view subjective or objective?
 
  • #64
atyy said:
Hmmm, I'm not sure I would go that far (in fact, I'm personally a frequentist). But would you call de Finetti's view subjective or objective?
It is a long time since I looked at de Finetti. I read his work during the stage when I formed my own view but have since lost interest in keeping in mind all possible views. Could you please summarize the essence of his view, insofar as it differs from the frequentist view?

The point is that once objective is taken to mean something definite and relevant for science (in the philosophical sense, irrespective of the fact that one can question everything), then probability, a key scientific concept, needs an operational definition, and this defines an objective meaning, hence objective probability. Objective not in the sense that it can always be specified to arbitrarily many digits, but in the sense that one can communicate its meaning without ambiguity to others, within the uncertainty that is inherent in any concept. (Unlike stevendaryl I strictly distinguish between uncertainty, probability, and subjectivity. Uncertainty is often not probabilistic, yet still objective.)
 
  • #65
A. Neumaier said:
It is a long time since I looked at de Finetti. I read his work during the stage when I formed my own view but have since lost interest in keeping in mind all possible views. Could you please summarize the essence of his view, insofar as it differs from the frequentist view?

The point is that once objective is taken to mean something definite and relevant for science (in the philosophical sense, irrespective of the fact that one can question everything), then probability, a key scientific concept, needs an operational definition, and this defines an objective meaning, hence objective probability. Objective not in the sense that it can always be specified to arbitrarily many digits, but in the sense that one can communicate its meaning without ambiguity to others, within the uncertainty that is inherent in any concept. (Unlike stevendaryl I strictly distinguish between uncertainty, probability, and subjectivity. Uncertainty is often not probabilistic, yet still objective.)

https://faculty.fuqua.duke.edu/~rnau/definettiwasright.pdf

"In the conception we follow and sustain here, only subjective probabilities exist – i.e., the degree of belief in the occurrence of an event attributed by a given person at a given instant and with a given set of information." [de Finetti]

"All three authors proposed essentially the same behavioristic definition of probability, namely that it is a rate at which an individual is willing to bet on the occurrence of an event. Betting rates are the primitive measurements that reveal your probabilities or someone else’s probabilities, which are the only probabilities that really exist." [Nau's commentary on de Finetti]
 
  • #66
atyy said:
it is a rate at which an individual is willing to bet on the occurrence of an event.
This makes it truly subjective, and very restrictive, too. Most people never bet, hence couldn't use de Finetti probabilities.

In any case such a definition is meaningless for the scientific concept of probability. The decay probabilities of nuclear species are constants of nature and had objective values long before people with the ability to bet existed.
 
  • Like
Likes N88
  • #67
A. Neumaier said:
This makes it truly subjective, and very restrictive, too. Most people never bet, hence couldn't use de Finetti probabilities.

In any case such a definition is meaningless for the scientific concept of probability. The decay probabilities of nuclear species are constants of nature and had objective values long before people with the ability to bet existed.

Hmmmm, very different from my reasons for being a Frequentist. I think it is impractical to be coherent :)
 
  • #68
atyy said:
I think it is impractical to be coherent :)
For many years I have spent most of my spare time making my view of physics coherent. It may be impractical initially and may seem like a waste of time and effort, but in the end it is very rewarding.
 
  • Like
Likes N88, bhobba and atyy
  • #69
I personally do not think that frequentism is completely coherent. I would actually call it incoherent. But I don't think that Bayesianism is completely coherent, either. It seems to me that in a lot of applications of Bayesianism, there seems to be a role for objective (though unknown) probabilities. So not everything seems to be subjective.

For the simplest example, with coin flips, you assume that the coin is governed by some unknown parameter q that is between 0 and 1, with all values equally likely. Then your subjective probability of "heads" is given by:

##P(\text{heads}) = \int_0^1 dq\, P(q)\, P(\text{heads}|q) = \int_0^1 dq\, 1 \cdot q = \left.\frac{q^2}{2}\right|_0^1 = \frac{1}{2}##

If you flip the coin once, and get "heads", then you update the probability distribution on q using Bayes' rule, so instead of the flat distribution P(q) = 1, you have a weighted distribution: P'(q) = 2q, giving P'(heads) = 2/3. It works out nicely: the probability of heads starts out 1/2, and gradually increases or decreases depending on the history of past coin flips. But it seems to me that the parameter q in this analysis is an unknown objective probability. So this analysis isn't actually treating probability as completely subjective. Similarly, if you apply Bayesianism to quantum mechanics, it seems that you have to treat some probabilities, such as the probability of getting spin-up in the x direction given that the particle was prepared to have spin-up in the z-direction, as objective. So I don't see how Bayesianism really eliminates objective probability, and if it doesn't, then it doesn't give an interpretation of probability, in general.

In the example above, the probabilities that are subjective are in some sense "meta" probabilities--a subjective probability distribution on objective probabilities.
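The updating described above is conjugate Beta-Bernoulli updating, and the resulting predictive probabilities follow Laplace's rule of succession. A minimal sketch (the function name is made up for illustration):

```python
from fractions import Fraction

# A flat prior on q is Beta(1, 1); after h heads and t tails the posterior
# is Beta(1 + h, 1 + t), and its mean is the predictive probability of heads
# (Laplace's rule of succession).
def predictive_heads(h, t):
    return Fraction(1 + h, 2 + h + t)

print(predictive_heads(0, 0))  # 1/2 : the prior predictive computed above
print(predictive_heads(1, 0))  # 2/3 : after observing a single head
```

The exact-arithmetic fractions reproduce the integrals in the post without any numerical error.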
 
  • #70
A. Neumaier said:
This makes it truly subjective, and very restrictive, too. Most people never bet, hence couldn't use de Finetti probabilities.

I disagree. Any time you make a choice to do X or Y based on probability, you're betting in a sense. There is a cost for making the wrong choice. I suppose it's an oversimplification to assume that "costs" can be linearly compared (which is what measuring costs in terms of money assumes).

To say that, because there is no one around to place a bet, then a gambling-based definition of probability is meaningless is being a little bit literalist. There are lots of cases where the closest thing to a "definition" of a physical quantity is operational: the quantity describes what would happen if you were to perform a particular operation. But the quantity exists even if there is nobody around to perform that operation. I suppose in all such cases, you can just let the property be undefined, or primitive, and turn the "definition" into an axiom, rather than a definition, but that's just aesthetics.
 
  • #71
stevendaryl said:
your subjective probability of "heads" is given by:
Why? Only if you subjectively believe that the coin is fair. If your subjective belief is that the coin is loaded, the subjective probability can take any value between 0 and 1 depending on your belief, independent of whether this belief is correct or incorrect.
stevendaryl said:
Any time you make a choice to do X or Y, based on probability, you're betting in a sense.
In a scientific discussion you should use the words in the common normative way. You are making a decision, not a bet. A bet means wagering money at particular odds.

Moreover, most of the decisions you were discussing earlier were not based on probability but on an unspecified kind of uncertainty. We rarely have perfect information, hence our decisions are also less than perfect, but in general this has nothing to do with probability. Only if the uncertainty is of an aleatoric nature (rather than epistemic) is a probabilistic model adequate. To be reliable, aleatoric uncertainty must be quantified by objective statistics, not by subjective assignment of probabilities. And epistemic uncertainty must be treated completely differently. At least if one doesn't want to make more regrettable decisions than unavoidable! (I published a number of papers on uncertainty modeling in real-life situations, so I know.)

stevendaryl said:
a "definition" of a physical quantity is operational: the quantity describes what would happen if you were to perform a particular operation.
But there you ask Nature, which is objective, rather than a person, which is subjective. Precisely this makes the difference.

You cannot in principle ask Nature how much it bets, since betting and money are social conventions. The only way to ask Nature (i.e., to be objective) is to make statistics, and this is the frequentist approach. While asking for betting odds means extracting subjective probabilities of the particular person asked.

Maybe you are motivated to read Chapter 3 of my FAQ before continuing the discussion here...
 
  • Like
Likes N88 and Mentz114
  • #72
Bayesian probability gives strict rules for determining probability when certain knowledge is given. These rules are perfectly objective, in the sense that you can program a computer to follow these rules and give Bayesian probability as the output. If anything is "subjective" about Bayesian probability, then it is knowledge itself. But all science (probabilistic or not) involves knowledge (e.g. knowledge obtained as a result of measurement), so Bayesian probability is not more subjective than science in general. Such a view of probability is defended in much more detail in
Jaynes - Probability Theory: The Logic of Science
https://www.amazon.com/dp/0521592712/?tag=pfamazon01-20
 
  • #73
Jaynes was not a true Bayesian. He was an objective Bayesian.
 
  • #74
atyy said:
Jaynes was not a true Bayesian. He was an objective Bayesian.
Only an objective Bayesian is a good Bayesian. :smile:
 
  • #76
Demystifier said:
Bayesian probability gives strict rules for determining probability when certain knowledge is given. These rules are perfectly objective, in the sense that you can program a computer to follow these rules and give Bayesian probability as the output. If anything is "subjective" about Bayesian probability, then it is knowledge itself. But all science (probabilistic or not) involves knowledge (e.g. knowledge obtained as a result of measurement), so Bayesian probability is not more subjective than science in general. Such a view of probability is defended in much more detail in
Jaynes - Probability Theory: The Logic of Science
https://www.amazon.com/dp/0521592712/?tag=pfamazon01-20

Well, there is a distinction between (some) Bayesians and (most) frequentists when it comes to what sorts of uncertainty can be described by probability. (Some) Bayesians believe that any time you are uncertain about what is true, it is appropriate to express your degree of uncertainty using probability. Frequentists believe that probability should only be applied to repeatable events (like coin tosses). Applying probability to events that only happen once is perfectly fine if probability is interpreted subjectively, but doesn't make sense if probability is interpreted as relative frequency. (Although, I suppose you could turn any uncertainty into relative frequency if you consider the right type of ensemble.)
 
  • #77
stevendaryl said:
Well, there is a distinction between (some) Bayesians and (most) frequentists when it comes to what sorts of uncertainty can be described by probability. (Some) Bayesians believe that any time you are uncertain about what is true, it is appropriate to express your degree of uncertainty using probability. Frequentists believe that probability should only be applied to repeatable events (like coin tosses). Applying probability to events that only happen once is perfectly fine if probability is interpreted subjectively, but doesn't make sense if probability is interpreted as relative frequency. (Although, I suppose you could turn any uncertainty into relative frequency if you consider the right type of ensemble.)
I agree with this, except with the word "subjectively". Let me give you an example. If I give you one guzilamba with two possible states called gutu and baka, what is the probability that it will be in the state gutu?

Now if you are rational, your reasoning will be something like that:
- I have no idea what a guzilamba is, let alone gutu and baka. But I do know that there are two possible states, one of which is called gutu, and I have no rational reason to prefer one state over the other. Therefore, from what I know, it is rational to assign probability ##p=1/2## to gutu. Therefore the answer is 1/2.

Here the crucial word is rational. You can even program a computer to arrive at this rational answer, in which sense it is not subjective.
 
  • #78
Demystifier said:
I have no rational reason to prefer one state over the other. Therefore, from what I know, it is rational to assign probability p=1/2 to gutu. Therefore the answer is 1/2.
No. There is no rational reason to treat both states as equally likely unless you know what gutu and baka mean. Thus it is irrational to assign a probability of 1/2.

This is a case of epistemic uncertainty, and it is regarded as a mistake in modern uncertainty modeling to model it by equal probabilities.
 
  • #79
Demystifier said:
I agree with this, except with the word "subjectively". Let me give you an example. If I give you one guzilamba with two possible states called gutu and baka, what is the probability that it will be in the state gutu?

Now if you are rational, your reasoning will be something like that:
- I have no idea what a guzilamba is, let alone gutu and baka. But I do know that there are two possible states, one of which is called gutu, and I have no rational reason to prefer one state over the other. Therefore, from what I know, it is rational to assign probability ##p=1/2## to gutu. Therefore the answer is 1/2.

Here the crucial word is rational. You can even program a computer to arrive at this rational answer, in which sense it is not subjective.

I'm not sure that these priors are unique. Suppose I tell you further that there are two types of baka states: baka-A and baka-B. Then would you say that:
  1. There is probability 1/3 of being in state gutu, baka-A, or baka-B.
  2. There is probability 1/2 of being in state gutu or baka, and if you are in state baka, there is probability 1/2 of being in baka-A or baka-B.
One way of modeling gives probabilities ##\frac{1}{3}, \frac{1}{3}, \frac{1}{3}## for (gutu, baka-A, baka-B). The other way of modeling gives probabilities ##\frac{1}{2}, \frac{1}{4}, \frac{1}{4}##.

It becomes even more ambiguous if I said "A guzilamba has an associated property, called its butu-value, which can take on any real value between 0 and 1". Now what's the probability that a random guzilamba has a butu-value of less than 1/2?

You could model it using a flat distribution, which might be rational, since you don't know which values are more likely than which other values. In which case you would conclude that the answer is "1/2". But alternatively, you could define (for example) ##\theta = \sin^{-1}(\beta)##, where ##\beta## is the butu-value. Isn't it just as rational to assume that ##\theta## has a flat distribution in the range ##[0, \frac{\pi}{2}]##? But that's a different prior.
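This reparametrization ambiguity can be checked numerically. A toy Monte Carlo sketch (the variable names echo the made-up terms of the example):

```python
import math
import random

random.seed(0)
N = 200_000

# Flat prior on the butu-value beta itself: P(beta < 1/2) -> 1/2.
flat_beta = sum(random.random() < 0.5 for _ in range(N)) / N

# Flat prior on theta = arcsin(beta) over [0, pi/2]; since beta < 1/2
# exactly when theta < pi/6, this prior gives (pi/6)/(pi/2) = 1/3 instead.
flat_theta = sum(math.sin(random.uniform(0.0, math.pi / 2)) < 0.5
                 for _ in range(N)) / N

print(round(flat_beta, 2), round(flat_theta, 2))  # ~0.5 vs ~0.33
```

The same state of ignorance, pushed through two equally "uninformative" priors, yields two different probabilities, which is the point of the objection.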
 
  • #80
stevendaryl said:
I'm not sure that these priors are unique. Suppose I tell you further that there are two types of baka states: baka-A and baka-B. Then would you say that:
  1. There is probability 1/3 of being in state gutu, baka-A, or baka-B.
  2. There is probability 1/2 of being in state gutu or baka, and if you are in state baka, there is probability 1/2 of being in baka-A or baka-B.
One way of modeling gives probabilities ##\frac{1}{3}, \frac{1}{3}, \frac{1}{3}## for (gutu, baka-A, baka-B). The other way of modeling gives probabilities ##\frac{1}{2}, \frac{1}{4}, \frac{1}{4}##.
Well, I am a human and as such I am subjective and not always rational, so I could not decide easily between ##\frac{1}{3}, \frac{1}{3}, \frac{1}{3}## and ##\frac{1}{2}, \frac{1}{4}, \frac{1}{4}##. But if you program your computer to decide, it will decide without any problems. What will its decision be? It depends on the program, but if the programmed algorithm says that things with different names have equal probabilities, then the result is ##\frac{1}{3}, \frac{1}{3}, \frac{1}{3}##. Now I myself have some additional information (from experience I know that things called something-A and something-B are often subtypes of the same thing, so it might or might not mean that...), but the computer does not possess such additional vague information, so it's an easy task for the computer. More to the point, whatever information a computer possesses, that information is never vague, so for the computer the task is never ambiguous.
 
  • #81
Demystifier said:
Well, I am a human and as such I am subjective and not always rational, so I could not decide easily between ##\frac{1}{3}, \frac{1}{3}, \frac{1}{3}## and ##\frac{1}{2}, \frac{1}{4}, \frac{1}{4}##. But if you program your computer to decide, it will decide without any problems. What will its decision be? It depends on the program, but if the programmed algorithm says that things with different names have equal probabilities, then the result is ##\frac{1}{3}, \frac{1}{3}, \frac{1}{3}##. Now I myself have some additional information (from experience I know that things called something-A and something-B are often subtypes of the same thing, so it might or might not mean that...), but the computer does not possess such additional vague information, so it's an easy task for the computer. More to the point, whatever information a computer possesses, that information is never vague, so for the computer the task is never ambiguous.

Okay, but by that definition, nothing is subjective. For any subjective question, I can write a program that attempts to answer it, and call that answer objective. We can decide once and for all whether the Beatles were better than The Rolling Stones.
 
  • Like
Likes Demystifier
  • #82
A. Neumaier said:
No. There is no rational reason to treat both states as equally likely unless you know what gutu and baka mean. Thus it is irrational to assign a probability of 1/2.
You are imprisoned by savages who tell you that one of them means "they will kill you" and the other means "they will release you", but they don't tell you which is which. Now they press you to choose: should they gutu you, or should they baka you? If you don't choose anything, they will kill you with certainty. What will you decide, gutu or baka? What is the rational choice? Is it rational to say "I choose nothing because I don't have sufficient data"?
 
  • #83
Demystifier said:
Only an objective Bayesian is a good Bayesian. :smile:

No Bayesian would say such a thing, unless he had an irrational prior :)
 
  • Like
Likes Demystifier
  • #84
stevendaryl said:
Okay, but by that definition, nothing is subjective. For any subjective question, I can write a program that attempts to answer it, and call that answer objective. We can decide once and for all whether the Beatles were better than The Rolling Stones.
Of course. The only subjective thing here is the choice of the program itself.
 
  • #85
Demystifier said:
Of course. The only subjective thing here is the choice of the program itself.

Well, that's the sense in which anything is subjective. The choice of how to think about (or model) a problem is subjective, and if any result depends on that choice, then I would call the result subjective.
 
  • Like
Likes Demystifier
  • #86
stevendaryl said:
Well, that's the sense in which anything is subjective. The choice of how to think about (or model) a problem is subjective, and if any result depends on that choice, then I would call the result subjective.
Yes, but the point is that it is a general feature of scientific modeling, not an exclusive property of modeling Bayesian probability. Bayesian probability is not more subjective than any other method in theoretical science.
 
  • #87
Demystifier said:
What is the rational choice?
Each choice is rational. There is no associated probability.
 
  • #88
A. Neumaier said:
Each choice is rational. There is no associated probability.
But is one of them more rational than the other? No? Then what does it tell us about Bayesian probability? Or do you claim that Bayesian probability is not probability at all?
 
  • #89
Demystifier said:
But is one of them more rational than the other? No?
In the absence of further knowledge both choices are rational. There is no way to compare the quality of the choices except by waiting for the consequences. To make choices, no concept of probability is needed.
 
  • #90
A. Neumaier said:
In the absence of further knowledge both choices are rational. There is no way to compare the quality of the choices except by waiting for the consequences. To make choices, no concept of probability is needed.
OK, then let me try something completely different. I flip the coin twice, and I obtain the result:
heads, heads
What is the probability of getting heads? Can probability be assigned in that case?
 
  • #91
Demystifier said:
OK, then let me try something completely different. I flip the coin twice, and I obtain the result:
heads, heads
What is the probability of getting heads? Can probability be assigned in that case?
It is in [0,1].
 
  • #92
A. Neumaier said:
It is in [0,1].
So is there any case in science where one can assign definite probabilities, without performing an infinite number of experiments?
 
  • #93
A. Neumaier said:
In the absence of further knowledge both choices are rational. There is no way to compare the quality of the choices except by waiting for the consequences. To make choices, no concept of probability is needed.

It's not needed, but probabilities provide a coherent way to reason about uncertainties. That's one of the arguments in favor of the axioms of probability: If you express your uncertainties in terms of probability, then you have a principled way to combine uncertainties. If you don't, then you can become the victim of a "Dutch book" scam:

https://en.wikipedia.org/wiki/Dutch_book
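A toy numeric version of the Dutch-book argument (the belief numbers are invented for illustration): an agent whose two "probabilities" sum to more than 1 can be sold a guaranteed loss.

```python
from fractions import Fraction

# Incoherent degrees of belief: P(rain) and P(not rain) sum to 6/5 > 1,
# so the agent will pay 3/5 for a $1 ticket on each of the two events.
p_rain = Fraction(3, 5)
p_not_rain = Fraction(3, 5)

def agent_net(rain_happens):
    # Exactly one of the two tickets pays out $1, whatever the weather does;
    # the result does not depend on rain_happens, which is the whole point.
    payout = Fraction(1)
    return payout - (p_rain + p_not_rain)

print(agent_net(True), agent_net(False))  # -1/5 -1/5 : a sure loss either way
```

Degrees of belief that satisfy the probability axioms are exactly the ones that cannot be exploited in this way.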
 
  • #94
Demystifier said:
So is there any case in science where one can assign definite probabilities, without performing an infinite number of experiments?
Assuming that probabilities have definite (infinitely accurate) values is as fictitious as assuming the length of a stick to have a definite (infinitely accurate) value. Science is the art of valid approximation, not the magic of assigning definite values.

One uses statistics to assign uncertain probabilities according to the standard rules, and one can turn these probabilities into simple numbers by ignoring the uncertainty. That's the scientific practice, and that's what theory, and standards such as those of NIST, say one should do.
 
  • #95
stevendaryl said:
probabilities provide a coherent way to reason about uncertainties.
Only about aleatoric uncertainty. This is the consensus of modern researchers in uncertainty quantification. See the links given earlier.
 
  • #96
A. Neumaier said:
Assuming that probabilities have definite (infinitely accurate) values is as fictitious as assuming the length of a stick to have a definite (infinitely accurate) value. Science is the art of valid approximation, not the magic of assigning definite values.

One uses statistics to assign uncertain probabilities according to the standard rules, and one can turn these probabilities into simple numbers by ignoring the uncertainty. That's the scientific practice, and that's what theory, and standards such as those of NIST, say one should do.
OK, then please use this scientific practice to determine probability in my post #90.
 
  • #97
stevendaryl said:
If you don't, then you can become the victim
You don't need to teach me how to reason successfully about uncertainty. Our company http://www.dagopt.com/en/home sells customized software that allows our industrial customers to save lots of money by making best use of the information available. They wouldn't pay us if they weren't satisfied with our service.

It is a big mistake to use probabilities as a substitute for ignorance, simply because with probabilities ''you have a principled way to combine uncertainties''.
 
  • #98
Demystifier said:
OK, then please use this scientific practice to determine probability in my post #90.
Respectable scientists are not fools; they would not determine probabilities from the information you gave.
 
  • #99
A. Neumaier said:
Respectable scientists are not fools; they would not determine probabilities from the information you gave.
OK, what is the minimal amount of information that would trigger you to determine probabilities? How many coin flips is the minimum?
 
  • #100
A. Neumaier said:
Only about aleatoric uncertainty. This is the consensus of modern researchers in uncertainty quantification. See the links given earlier.

I think there are times when the different types of uncertainty have to be combined. For example, if you're taking some action that's never been done before, such as a new particle experiment, or sending someone to Mars, or whatever, some of the uncertainties are statistical, and some of the uncertainties are epistemic--we may not know all the relevant laws of physics or conditions.

So suppose that there are two competing physical theories, theory A and theory B. If A is correct, then our mission has only a 1% chance of disaster, and a 99% chance of success. If B is correct, then our mission has a 99% chance of disaster and a 1% chance of success. But we don't know whether A or B is correct. What do we do? You could say that we should postpone making any decision until we know which theory is correct, but we may not have that luxury. It seems to me that in making a decision, you have to take into account both types of uncertainty. But how to combine them, if you don't use probability? I guess you could say that you're just screwed in that case, but surely there are extreme cases where we know what to do: if A is an accepted, mainstream, well-tested theory and B is just somebody's hunch, then we would go with A.
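The combination described here can be sketched as a simple mixture. The disaster numbers are from the post; the credence w_A assigned to theory A is a made-up parameter standing in for the epistemic uncertainty.

```python
# Disaster probability is 0.01 under theory A and 0.99 under theory B.
# Assigning a subjective credence w_A to theory A and averaging combines
# the epistemic (which theory) and aleatoric (outcome) uncertainties:
def disaster_probability(w_A):
    return w_A * 0.01 + (1 - w_A) * 0.99

print(disaster_probability(0.99))  # ~0.02: A well tested, B just a hunch
print(disaster_probability(0.5))   # ~0.5 : genuine ignorance about the theories
```

Whether such a mixture is legitimate is exactly the point under dispute: it treats the epistemic uncertainty about the theories as if it were probabilistic.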
 