Understanding the Uniform Probability Distribution in Statistical Ensembles

  • #151
Demystifier said:
If you said that in the context of quantum foundations, I might not agree. But here, in the context of the foundations of statistical mechanics, I agree. Even though I like to think that physics, in general, might be something more than a human tool for the description of nature, I have no problem admitting that statistical physics is not much more than that.
What makes quantum theory different from any other physical theory (in fact there's only one alternative, namely classical physics)? It's a quantitative description of (the objective aspects of) nature, nothing more but also nothing less. It has a wide range of validity with limitations as yet unknown (except that there's no satisfactory quantum description of gravity). In this sense we consider it the fundamental theory underlying all physics, but it's based on empirical evidence like any theory in physics and is thus subject to change whenever a reproducible contradiction between theory and experiment occurs!
 
  • #152
A. Neumaier said:
Measurements are human artifacts used to check and perhaps to arrive at physical theories. But they are nothing fundamental - they don't figure in Newton's laws, Einstein's general relativity, or the standard model. Moreover, how things are measured changes considerably with time, while the fundamental physics is supposed to be time-invariant (though less completely known at earlier times). Otherwise we couldn't apply physics to the past or to the far distance (where we only see radiation from the far past).
What else figures into any physical theory if not empirical experience? I don't know any physical theory that is successful in describing nature which has no solid foundation in empirical evidence in the form of quantitative observations/measurements. Of course, technology has made, and hopefully will continue to make, tremendous progress in just a few decades, and this also makes more phenomena observable and quantifiable (the most recent example is gravitational waves) and/or improves accuracy more and more. This in turn might force us to refine or even completely modify our contemporary theories and models. That's the progress of science. Of course, a lot seems to be known already, and our theories are quite comprehensive (concerning about 4% of the energy-momentum content of our Universe ;-)), and the extrapolation of the locally discovered laws even to the evolution of the entire universe is pretty successful, but this doesn't mean that this is the end of physics. Who knows what will be discovered with even better and more sensitive instruments in the future?
 
  • #153
vanhees71 said:
What makes quantum theory different from any other physical theory (in fact there's only one alternative, namely classical physics)? It's a quantitative description of (the objective aspects of) nature, nothing more but also nothing less. It has a wide range of validity with limitations as yet unknown (except that there's no satisfactory quantum description of gravity). In this sense we consider it the fundamental theory underlying all physics, but it's based on empirical evidence like any theory in physics and is thus subject to change whenever a reproducible contradiction between theory and experiment occurs!
Suppose that you live at the beginning of the 20th century, knowing nothing about modern QM. But you know very well pure classical mechanics (Newton, Lagrange, Hamilton), as well as the works of Boltzmann and Gibbs on classical statistical mechanics. And suppose somebody tells you that pure classical mechanics tells us what really happens in Nature, while classical statistical mechanics only tells us what we can know about Nature in some circumstances involving many particles. What would you tell him?
 
  • #154
It depends on the application. You cannot describe a gas of ##10^{23}## particles in all microscopic detail and thus you look at the relevant coarse-grained macroscopic observables, employing probability theory for what I ignore. The answer is no different from that given to a 21st-century physicist "knowing" that QT describes what "really happens in nature".
 
  • #155
vanhees71 said:
It depends on the application. You cannot describe a gas of ##10^{23}## particles in all microscopic detail and thus you look at the relevant coarse-grained macroscopic observables, employing probability theory for what I ignore. The answer is no different from that given to a 21st-century physicist "knowing" that QT describes what "really happens in nature".
Yes but - the hypothetical person from the beginning of the 20th century further argues - we don't use any probability in pure classical mechanics, so nothing is ignored. Does it mean that pure classical mechanics, when it is applicable, tells us what really happens?

What's your answer? (You still know nothing about modern QM.)
 
  • #156
I guess I'd have argued that indeed within classical physics the probabilities are just an effective description of our ignorance due to the complexity of the many-body system. In fact that is what Boltzmann, Gibbs et al. argued at the time when they tried to establish the very idea of statistical physics. Of course, they had a very hard time convincing many of their colleagues of the effectiveness of their approach. The very existence of "atoms" was even highly suspicious to most of their contemporary physicists, while they were naturally pretty much accepted by chemists. E.g., Planck, as an expert on thermodynamics, didn't like the statistical approach at all, but got convinced later. The most important step for the acceptance of the atomistic structure of matter within physics was perhaps Einstein's work on fluctuations, including his famous work on Brownian motion and, perhaps even more convincingly, on critical opalescence and the quantitative determination of the Avogadro constant.

Now comes again some philosophical mumbo-jumbo, if you ask whether physical theories tell us "what really happens". What does it mean when you say something really happens? This can only be an opinion of an individual physicist and is not a subject of science itself. I strongly believe that Nature doesn't care very much about us and our knowledge of what's going on. So I think it exists independently of us, and we never know "what really happens", but we know not too badly what happens in a given situation thanks to the natural laws, which are descriptions of empirical quantitative observations of the objective part of our experience of nature, including a tremendous extension of our senses by technological aids.
 
  • #157
Demystifier said:
Yes but - the hypothetical person from the beginning of the 20th century further argues - we don't use any probability in pure classical mechanics, so nothing is ignored. Does it mean that pure classical mechanics, when it is applicable, tells us what really happens?

What's your answer? (You still know nothing about modern QM.)

I'm not exactly sure what point you are making, but it does seem to me that there is a difference between classical and quantum physics: classical physics was supposed to describe the way the world works even if there are no scientists, observers, or measurement devices around, while the usual interpretation of quantum mechanics, namely that it describes the probabilities of outcomes of measurements, is hard to make sense of in the absence of measurement devices.
 
  • #158
stevendaryl said:
the usual interpretation of quantum mechanics, which is that it describes the probabilities of outcomes of measurements, is hard to make sense of in the absence of measurement devices.
This just implies that the orthodox interpretations are much more limited than the true scope of quantum mechanics.
Quantum mechanics is known to apply to things everywhere and anytime in the world, including many situations where one can observe only very indirect consequences.
 
  • #159
A. Neumaier said:
This just implies that the orthodox interpretations are much more limited than the true scope of quantum mechanics.
Quantum mechanics is known to apply to things everywhere and anytime in the world, including many situations where one can observe only very indirect consequences.

I believe that, but there is a mismatch between that universal applicability and the way it is (usually) presented, which is in terms of probabilities for observables (or expectations for observables, in the density matrix formulation).
 
  • #160
stevendaryl said:
there is a mismatch between that universal applicability and the way it is (usually) presented, which is in terms of probabilities for observables (or expectations for observables, in the density matrix formulation).
The probability interpretation is questionable as a foundation, as it is always associated with the idea of frequent measurement (or even more anthropocentric ideas). But measurements are a comparatively rare event in Nature (especially if we average over the duration of the existence of the universe).

The shut-up-and-calculate version of quantum mechanics is universally applied, always making use of the notion of expectation - typically without reference to measurements, and only sometimes using their interpretation in terms of probabilities (needed only for interpreting scattering experiments, where it has a rational basis in abundant statistics). Thus a good interpretation should only be based on expectation, not on probabilities.

Chapters 8 and 10 of my online book on quantum mechanics were designed explicitly to take this into account, resulting in a presentation without the mismatch that you mention. The basics were also discussed here on PF.

I got the idea from a book on classical probability by Peter Whittle, Probability via expectation (4th edition, 2000). From the preface to the third edition (starting with a reference to the first edition from 1970):
Peter Whittle said:
The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. [...] In re-examining the approach after this lapse of time I find it more persuasive than ever. [...] I would briefly list the advantages of the expectation approach as follows.
  • (i) It permits a more economic and natural treatment at the elementary level.
  • (ii) It opens an immediate door to applications, because the quantity of interest in many applications is just an expectation.
  • (iii) Precisely for this last reason, one can discuss applications of genuine interest with very little preliminary development of theory. On the other hand, one also finds that a natural unrolling of ideas leads to the development of theory almost of itself.
  • (iv) The approach is an intuitive one, in that people have a well-developed intuition for the concept of an average. Of course, what is found 'intuitive' depends on one's experience, but people with a background in the physical sciences have certainly taken readily to the approach. [...]
  • (v) The treatment is the natural one at an advanced level. [...] The accepted concepts and techniques of weak convergence and of generalized processes are characterized wholly in terms of expectation.
  • (vi) Much conventional presentation of probability theory is distorted by a preoccupation with measure-theoretic concepts which is in a sense premature and irrelevant. These concepts (or some equivalent of them) cannot be avoided indefinitely. However, in the expectation approach, they find their place at the natural stage.
  • (vii) On the other hand, a concept which is notably and remarkably absent from conventional treatments is that of convexity. (Remarkable, because convexity is a probabilistic concept, and, in optimization theory, the necessary invocations of convexity and of probabilistic ideas are intimately related.) In the expectation approach convexity indeed emerges as an inevitable central concept.
  • (viii) Finally, in the expectation approach, classical probability and the probability of quantum theory are seen to differ only in a modification of the axioms - a modification rich in consequences, but succinctly expressible.
The 4th edition treats quantum mechanics in the final Chapter 20. In particular, in Theorem 20.1.5, Whittle derives the Born rule as a conditional probability, thus removing all weirdness from its interpretation. (Later, he characterizes the Schroedinger equation, unfortunately placing the ##i## systematically on the wrong side of the equation, so getting the dynamics backwards.) But in spite of this small lapse, I can highly recommend the book!
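Whittle's point (viii), that classical and quantum probability differ only in a modification of the axioms, can be illustrated with a minimal sketch (my own illustration, not from the book): in both cases the expectation is a linear, positive, normalized functional on observables; only the algebra of observables changes.

```python
import math

# Classical expectation: a fair die, E[A] = sum_i p_i * a_i.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
E_classical = sum(p * v for p, v in zip(probs, values))  # 3.5

# Quantum expectation: <A> = <psi|A|psi>, here for the spin state |+x>
# and the observable sigma_z (real 2x2 matrix, real state vector).
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]
sigma_z = [[1, 0], [0, -1]]

def expect(op, state):
    """<state|op|state> for a real 2x2 matrix and a real 2-vector."""
    op_state = [sum(op[i][j] * state[j] for j in range(2)) for i in range(2)]
    return sum(state[i] * op_state[i] for i in range(2))

E_quantum = expect(sigma_z, psi)  # 0.0: |+x> has no sigma_z bias
print(E_classical, E_quantum)
```

Both functionals are linear and assign 1 to the identity observable; the quantum one differs only in acting on non-commuting matrices rather than random variables.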
 
  • #161
A. Neumaier said:
This just implies that the orthodox interpretations are much more limited than the true scope of quantum mechanics.
Quantum mechanics is known to apply to things everywhere and anytime in the world, including many situations where one can observe only very indirect consequences.
Apply in which sense? We always look at ensembles or otherwise coarse-grained observables (expectation values) and compare them with the predictions of quantum theory. So what else is there within QT than the probabilities predicted by the formalism and their experimental tests via the usual statistical methods?
 
  • #162
vanhees71 said:
Apply in which sense? We always look at ensembles or otherwise coarse-grained observables (expectation values) and compare them with the predictions of quantum theory. So what else is there within QT than the probabilities predicted by the formalism and their experimental tests via the usual statistical methods?
Apply in the sense that statistical mechanics applies to a single glass of water. One uses ensemble expectation values for the single quantum system [and, according to Gibbs, nonphysical, imagined repetitions to justify the ensemble language for the single use case] to assign a temperature and other things that can be measured.

Single, nonrepeated measurements of temperature, pressure and volume can be used to check the predictions of quantum mechanics in equilibrium. These measurements have nothing to do with any of the mock measurements of identically prepared systems discussed in the traditional interpretations of quantum mechanics.
 
  • #163
A. Neumaier said:
Thus a good interpretation should only be based on expectation, not on probabilities.

I don't see a big difference, in principle, between basing it on expectation and basing it on probabilities. What is the difference in principle between saying that observable ##A## has values ##a_i## with probability ##p_i## and saying observable ##A## has expectation value ##\langle A \rangle##?

In classical statistical mechanics, one would say either that ##A## fluctuates unpredictably, but the average value is ##\langle A \rangle##, or that ##A## has a definite, though unknown, value, and ##\langle A \rangle## represents the average over many systems that are macroscopically identical to the one of interest.
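The two classical readings above, a time average of a fluctuating quantity versus an average over macroscopically identical systems, can be sketched with a toy simulation (my own illustration; the values and probabilities are made up):

```python
import random

random.seed(0)

# Toy observable: A takes value a[i] with probability p[i] (made-up numbers).
a = [0.0, 1.0, 4.0]
p = [0.5, 0.3, 0.2]
E_A = sum(pi * ai for pi, ai in zip(p, a))  # 1.1

N = 200_000

# Reading 1: A fluctuates unpredictably in time; its long-time average
# approaches the expectation.
time_avg = sum(random.choices(a, weights=p)[0] for _ in range(N)) / N

# Reading 2: an ensemble of macroscopically identical systems, each with a
# definite (but unknown) value of A; the ensemble average also approaches it.
ensemble = [random.choices(a, weights=p)[0] for _ in range(N)]
ens_avg = sum(ensemble) / N

print(E_A, round(time_avg, 3), round(ens_avg, 3))
```

Both empirical averages converge to the same ##\langle A \rangle##, which is why the two readings are interchangeable in the classical setting.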
 
  • #164
stevendaryl said:
What is the difference in principle?
The difference is that in the second (expectation) case you don't need (and actually don't want!) a probability interpretation.

Nobody using statistical mechanics for applications employs the probability interpretation you propose. Instead, what is always (except when the subject matter is introduced) used is the interpretation given in an earlier PF discussion.
 
  • #165
stevendaryl said:
one would say either that ##A## fluctuates unpredictably, but the average value is ##\langle A \rangle##
The Hamiltonian ##H## is invariant in time, hence does not fluctuate at all. So which meaning do you ascribe to the internal energy ##\langle H \rangle## of a particular glass of water?

Note that this internal energy can be measured in the traditional sense of the notion - by computing it from single measurements of ##P,V,T## together with the equation of state of water (which can be derived in some approximation from classical statistical mechanics).
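As a hedged illustration of such a single-shot determination, here is the same computation for one mole of a monatomic ideal gas rather than water (water's equation of state is far less tractable); the pressure and volume readings are assumed values:

```python
# Single-shot thermodynamics for 1 mol of a monatomic ideal gas (assumed
# readings; water would need its empirical equation of state instead).
R = 8.314        # J/(mol K), gas constant
n_mol = 1.0      # amount of gas

P = 101325.0     # Pa, ONE pressure reading
V = 0.0248       # m^3, ONE volume reading

T = P * V / (n_mol * R)   # temperature from the equation of state PV = nRT
U = 1.5 * n_mol * R * T   # internal energy <H> = (3/2) n R T = (3/2) P V

print(f"T = {T:.1f} K, U = {U:.1f} J")  # no repeated trials involved
```

A single pair of readings fixes ##\langle H \rangle## through the equation of state; no statistics over repeated measurements enters.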
 
  • #166
If you think that an abstract mathematical concept (such as that of expectation) must necessarily be interpreted in the way it arose in the application it was abstracted from, then you would also have to interpret every wave function (a vector in a Hilbert space) as a little arrow in ordinary space, since that is what the concept of a vector originally meant.
 
  • #167
A. Neumaier said:
The Hamiltonian ##H## is invariant in time, hence does not fluctuate at all. So which meaning do you ascribe to the internal energy ##\langle H \rangle## of a particular glass of water?

If the system of interest is coupled to a reservoir at a constant temperature, then the total energy of the system is not constant, since it can exchange energy with the reservoir. In the case of a glass of water, there is the possibility of an exchange of energy with the environment.
 
  • #168
stevendaryl said:
If the system of interest is coupled to a reservoir at a constant temperature, then the total energy of the system is not constant, since it can exchange energy with the reservoir. In the case of a glass of water, there is the possibility of an exchange of energy with the environment.
Put the water in a thermally isolated flask; then no energy is exchanged.
 
  • #169
A. Neumaier said:
If you think that an abstract mathematical concept (such as that of expectation) must necessarily be interpreted in the way it arose in the application it was abstracted from, then you would also have to interpret every wave function (a vector in a Hilbert space) as a little arrow in ordinary space, since that is what the concept of a vector originally meant.

I think you're fooling yourself if you think that going from probabilities to expectation values means that you understand things better. You can certainly work with things abstractly, in which case, you don't actually need to know what you're talking about. That's the beauty of the "shut up and calculate" interpretation of QM. But if you think that you are doing anything more than shut up and calculate, I think you're fooling yourself.
 
  • #170
A. Neumaier said:
Put the water in a thermally isolated flask; then no energy is exchanged.

That's why I had an either/or. Either the expectation value represents fluctuation in time, or it represents microscopic differences between macroscopically identical systems. (Or both)
 
  • #171
stevendaryl said:
I think you're fooling yourself.
I think you simply want to make a fool of me because you don't understand the nature of abstraction.

Calling something an expectation is simply a choice of words like calling something a vector. It conveys no other information than what is given in the definition of the concept. For expectation, the definition (given by Whittle) requires linearity, positivity, and continuity. Nothing else.

In applications where one has sufficiently many repetitions one may interpret the expectation as an average, just as in applications where a vector has three position coordinates you may interpret the vector as an arrow in ordinary space.

In applications where one has no repetitions one cannot interpret the expectation as an average, just as in applications where a vector represents a wave function ##\psi(x)## one cannot interpret the vector as an arrow in ordinary space.

It is as simple as that. Only the application determines the way how an abstract concept is to be interpreted in a concrete situation.
 
  • #172
stevendaryl said:
That's why I had an either/or. Either the expectation value represents fluctuation in time, or it represents microscopic differences between macroscopically identical systems. (Or both)
But neither applies in the case of a single bottle of water when the bottle is thermally isolated.
 
  • #173
A. Neumaier said:
But neither applies in the case of a single bottle of water when the bottle is thermally isolated.

Yes, it does. Given the macroscopic description of the bottle of water, in terms of total energy, mass, etc., there are many different microscopic states that are consistent with that macroscopic description.
 
  • #174
A. Neumaier said:
I think you simply want to make a fool of me because you don't understand the nature of abstraction

I don't want to make a fool of you, but I think that you are claiming insights that you don't actually have. You aren't doing anything different than "shut up and calculate".
 
  • #175
stevendaryl said:
Yes, it does. Given the macroscopic description of the bottle of water, in terms of total energy, mass, etc., there are many different microscopic states that are consistent with that macroscopic description.
But the measurement is done on the single system only. The others are just fictitious copies (as Gibbs told us) without any influence on the measured system. Your expectation would be an average over fictitious measurements, which makes no sense.
 
  • #176
To make physical sense of an expectation value of an observable, you have to say what that expectation value means for an observation. And what is that? It isn't that a measurement of quantity ##A## will always produce value ##\langle A \rangle##. It isn't that it will always produce something in the range ##\langle A \rangle \pm \mathrm{std}(A)##, where ##\mathrm{std}(A)## means the standard deviation. It seems to me that to connect expectation values with observations, you have to get into probabilities. So expectation values have all the same conceptual problems that probabilities do.
 
  • #177
stevendaryl said:
I don't want to make a fool of you, but I think that you are claiming insights that you don't actually have. You aren't doing anything different than "shut up and calculate".
I am just claiming that the meaning of an abstract concept is determined by its use, not by its historical origin. I know very well how expectations are used in statistical mechanics, and nowhere does one make the slightest use of probabilities. These probabilities are as fictitious as the ensembles Gibbs introduced to justify the expectation calculus (because in his time abstract algebra was still far in the future). Instead, one makes frequent use of the meaning discussed in the post referred to in post #164, which stands on its own without any reference to probabilities.
 
  • #178
A. Neumaier said:
But the measurement is done on the single system only. The others are just fictitious copies (as Gibbs told us) without any influence on the measured system. Your expectation would be an average over fictitious measurements, which makes no sense.

I don't know why it doesn't make sense to you, but everybody has his own limitations.
 
  • #179
stevendaryl said:
It seems to me that to connect expectation values with observations, you have to get into probabilities.
Only into uncertainty.

But to connect classical observables with observation you also have to get into uncertainty. Measuring the side and the diagonal of a square posed this basic conflict already 25 centuries ago, when the diagonal of the unit square turned out to be the irrational ##\sqrt{2}##.

It is illegitimate to equate uncertainty with probability, as you constantly do. Uncertainty had a meaning many centuries before probabilities were even conceived as a concept. And today it still has a different, far more encompassing meaning, as the link to Wikipedia shows.
 
  • #180
A. Neumaier said:
I am just claiming that the meaning of an abstract concept is determined by its use, not by its historical origin. I know very well how expectations are used in statistical mechanics, and nowhere does one make the slightest use of probabilities.

Okay, what does it mean, in practice, to say that a thermodynamic quantity ##A## has expectation ##\langle A \rangle##?
 
  • #181
stevendaryl said:
I don't know why it doesn't make sense to you, but everybody has his own limitations.
Because fictitious systems cannot be measured! The measurement result on a single system must be a property of the single system, and cannot depend on properties of imagined copies.
 
  • #182
A. Neumaier said:
Because fictitious systems cannot be measured! The measurement result on a single system must be a property of the single system, and cannot depend on properties of imagined copies.

You're getting confused. The measurement result is not an expectation. I'm talking about the relationship between the measurement result and the theoretically computed expectation value.
 
  • #183
stevendaryl said:
Okay, what does it mean, in practice, to say that a thermodynamic quantity ##A## has expectation ##\langle A \rangle##?
I gave the link stating the precise meaning repeatedly in this discussion, last in post #164.

In a slightly fuzzy (but still fully correct) version, one can say that one can measure (in principle) ##\langle A \rangle## with a negligible uncertainty if the system is large enough. There is no uncertainty at all in the thermodynamic limit that is usually invoked when deriving thermodynamics from statistical mechanics.
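The shrinking of the uncertainty with system size can be sketched with a toy model (my own illustration, not from the thread): the relative fluctuation of an extensive quantity built from ##N## weakly correlated parts scales like ##1/\sqrt{N}##, hence becomes negligible for macroscopic ##N##.

```python
import random
import statistics

random.seed(1)

# Total "energy" of N independent parts: the relative fluctuation
# (standard deviation over mean) shrinks like 1/sqrt(N).
def relative_fluctuation(N, trials=2000):
    totals = [sum(random.random() for _ in range(N)) for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for N in (10, 100, 1000):
    print(N, round(relative_fluctuation(N), 4))
```

For ##N \sim 10^{23}## the relative fluctuation is of order ##10^{-11}##, which is why a single macroscopic measurement suffices.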
 
  • #184
A. Neumaier said:
I gave the link stating the precise meaning repeatedly in this discussion, last in post #164.

The question isn't how to CALCULATE an expectation value; the question is, what is the physical significance of saying that the expectation value of ##A## is ##\langle A \rangle##? A physical theory has two parts: one is mathematical, which tells you how to compute various quantities, and the second is observational, which is how those quantities relate to our observations. I'm asking about the second.
 
  • #185
stevendaryl said:
You're getting confused. The measurement result is not an expectation. I'm talking about the relationship between the measurement result and the theoretically computed expectation value.
What was it exactly that you claimed? Did you mean to say no more than that the theoretically computed value is the average over an ensemble of similar systems? This does not give any relation to a measurement result, it only relates a theoretical value to other theoretical values. Moreover, the theoretical value depends on which ensemble you use to define which systems are similar. Each time I get a different result. Which one is the one related to the measurement?

Hence your proposed relationship amounts to nothing. (One has the same problem with classical probability: The probability to get lung cancer depends a lot on whether you choose the ensemble of all people or the ensemble of all heavy smokers. Which one is the correct theoretical probability? And how do you check it on a particular person who didn't get lung cancer?)
 
  • #186
A. Neumaier said:
What was it exactly that you claimed? Did you mean to say no more than that the theoretically computed value is the average over an ensemble of similar systems? This does not give any relation to a measurement result, it only relates a theoretical value to other theoretical values.

That's what your definition of "expectation value" does.
 
  • #187
stevendaryl said:
The question isn't how to CALCULATE an expectation value; the question is, what is the physical significance of saying that the expectation value of ##A## is ##\langle A \rangle##? A physical theory has two parts: one is mathematical, which tells you how to compute various quantities, and the second is observational, which is how those quantities relate to our observations. I'm asking about the second.
I was answering the second. The observation gives approximately the expectation, with an uncertainty given by the standard deviation. No probabilities are involved in either asserting or checking this. Why do we have all the error bars in scientific reports on measurements?
 
  • #188
stevendaryl said:
That's what your definition of "expectation value" does.
I was asking two questions. Your comment is answering neither.
 
  • #189
A. Neumaier said:
I was answering the second. The observation gives approximately the expectation

But it doesn't. You're not going to get the expectation.

A. Neumaier said:
with an uncertainty given by the standard deviation.

Then what does "You will get ##\langle A \rangle## with uncertainty ##\delta A##" mean? What does it mean that the uncertainty is ##\delta A##?

It doesn't mean that you will get a value between ##\langle A \rangle - \delta A## and ##\langle A \rangle + \delta A##. So you haven't actually connected the theoretical result with observations.
 
  • #190
stevendaryl said:
You're not going to get the expectation.
I didn't claim I would. If you are measuring the diagonal of a square of side 1 you are also not getting ##\sqrt{2}##.
stevendaryl said:
Then what does "You will get ##\langle A \rangle## with uncertainty ##\delta A##" mean? What does it mean that the uncertainty is ##\delta A##?
It means that with high quality measurement equipment, the difference is bounded by a small multiple (typically less than 3, but 5 in case you want to have very high confidence) of the uncertainty. If this is not the case you expect to have an error in either the prediction procedure, or the experimental setting, or the numerical evaluation of the measurement protocol. (Or you try to publish your result as a failure of the laws of quantum mechanics. But it is unlikely your paper will be accepted unless others can reproduce your result.)
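This "small multiple of the uncertainty" rule has a distribution-free backing in Chebyshev's inequality, ##P(|X-\mu|\ge k\sigma)\le 1/k^2##; for roughly Gaussian measurement errors the tails are far thinner still. A quick numerical sketch (my own illustration):

```python
import random

random.seed(2)

mu, sigma, n = 10.0, 0.5, 100_000
xs = [random.gauss(mu, sigma) for _ in range(n)]   # simulated measurements

def fraction_outside(k):
    """Fraction of outcomes more than k standard deviations from the mean."""
    return sum(abs(x - mu) > k * sigma for x in xs) / n

for k in (3, 5):
    # Chebyshev guarantees at most 1/k^2 for ANY distribution;
    # Gaussian tails are far thinner (~0.27% outside 3 sigma).
    print(k, fraction_outside(k), "<=", 1 / k**2)
```

This is why a deviation beyond a few multiples of the stated uncertainty points to an error in the prediction, the setup, or the evaluation rather than to bad luck.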
 
  • #191
A. Neumaier said:
I didn't claim I would. If you are measuring the diagonal of a square of side 1 you are also not getting ##\sqrt{2}##.

It means that the difference is bounded by a small multiple (typically less than 3, but 5 in case you want to have very high confidence) of the uncertainty.

But that's not actually true. The fact that the expectation value of ##A## is ##\langle A \rangle## and that the standard deviation is ##\delta A## doesn't actually imply that my measurement will be between ##\langle A \rangle - \delta A## and ##\langle A \rangle + \delta A##. So what does it imply?
 
  • #192
stevendaryl said:
But that's not actually true. The fact that the expectation value of ##A## is ##\langle A \rangle## and that the standard deviation is ##\delta A## doesn't actually imply that my measurement will be between ##\langle A \rangle - \delta A## and ##\langle A \rangle + \delta A##. So what does it imply?
Again I did not claim that. Why do you object to things I didn't say?
 
  • #193
A. Neumaier said:
Again I did not claim that. Why do you object to things I didn't say?

What you said was "The observation gives approximately the expectation, with an uncertainty given by the standard deviation". But that has two additional undefined terms in it: "approximately" and "uncertainty". How do you make sense of those two words, in a non-circular way?

Your claim that expectation is less problematic than probability is just false.
 
  • #194
stevendaryl said:
What you said was "The observation gives approximately the expectation, with an uncertainty given by the standard deviation". But that has two additional undefined terms in it: "approximately" and "uncertainty". How do you make sense of those two words, in a non-circular way?
By assuming that my readers understand English.

It is impossible to give definitions in which every word used is defined as well. You can't define anything at all in this way. I place the residual uncertainty in my definition in the location where it actually is when people are doing experiments.
 
  • #195
A. Neumaier said:
By assuming that my readers understand English.

But the usual interpretations of "uncertainty" and "approximately" are subjective. So your move from "probabilities" to "expectations" doesn't actually accomplish anything, as far as making the subject less problematic.
 
  • #196
stevendaryl said:
But the usual interpretations of "uncertainty" and "approximately" are subjective.
Not more than language in general. In spite of this subjectivity, people have a good (though also subjective) sense of what objectivity means.

The purpose of objectivity is to enable a group of cooperative people called scientists to arrive at a reliable, objective, and hence predictive consensus. Not to make everything look unambiguous and logically 100.0000000000000...% correct to nitpickers like you.
 
  • #197
A. Neumaier said:
Not more than language in general. In spite of this subjectivity, people have a good (though also subjective) sense of what objectivity means.

It seems to me that you are just hiding problems under the rug. You don't like probability, because it's so subjective, so you replace it by expectation, which is subjective in the exact same sense.
 
  • #198
I really have to get out of this thread...
 
  • #199
stevendaryl said:
But the usual interpretations of "uncertainty" and "approximately" are subjective.
There are standardization efforts to reduce even this amount of subjectiveness. See, e.g., the "Guide to the Expression of Uncertainty in Measurement" (GUM) published by ISO, the National Institute of Standards and Technology (NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/CITAC publication "Quantifying Uncertainty in Analytical Measurement".

But as you can see from these documents, every attempt to define something accurately results only in much more voluminous explanations using even more undefined words.

Language, and hence science, is therefore intrinsically circular. But this benign form of circularity doesn't matter.

The standard practice is to state your assumptions in as clear terms as possible (using standard language without defining it) and start from there. Expectation (using "approximate" and "small multiple" as self-explanatory words, in terms of which uncertainty is definable precisely) is a far better starting point than betting, which in science is completely hypothetical.
 
  • #200
stevendaryl said:
You don't like probability, because it's so subjective, so you replace it by expectation, which is subjective in the exact same sense.
It is not the same sense. Every child can interpret "a small multiple, typically 3 or 5", which is used in my explication of approximate, uncertain, and expectation, while probability is a fairly confusing concept even for adults, as the story of the three doors (the Monty Hall problem) demonstrates.
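The three-doors story is the Monty Hall problem; a short simulation (my own sketch) confirms the notoriously counterintuitive answer that switching wins about 2/3 of the time:

```python
import random

random.seed(3)

def play(switch, trials=100_000):
    """Monty Hall: return the empirical win rate for the given strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that hides a goat and was not picked.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

p_switch = play(switch=True)
p_stay = play(switch=False)
print(p_switch, p_stay)  # switching wins ~2/3, staying ~1/3
```

Switching wins exactly when the first pick missed the car, which happens with probability 2/3; the simulation merely makes that reasoning hard to argue with.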
 