When Quantum Mechanics is thrashed by non-physicists #1

  • #151
TrickyDicky said:
This doesn't follow. First, the ensemble interpretation not only has no ontology for the wave function, it has no ontology for classical reality either, as is the case in the Copenhagen interpretation.
There is no objective classical world in the ensemble interpretation, so no classical-quantum cut. Remember that classical physics is an approximation. If it works so well in the macro world, it is because it is a good approximation on that scale, so reality is not classical.

But does common sense reality exist in the ensemble interpretation? In the ensemble interpretation, does nature exist after all physicists have died? Does nature have a law-like description, at least approximately?
 
  • #152
atyy said:
But does common sense reality exist in the ensemble interpretation? In the ensemble interpretation, does nature exist after all physicists have died? Does nature have a law-like description, at least approximately?
I'd say yes, it exists, and being an observer-independent interpretation, nature doesn't care about physicists; but it is agnostic about the specific ontology beyond quantum statistical mechanics.
 
  • #153
TrickyDicky said:
I'd say yes, it exists, and being an observer-independent interpretation, nature doesn't care about physicists; but it is agnostic about the specific ontology beyond quantum statistical mechanics.

Then there is still a classical/quantum cut. One shouldn't take the "classical" too seriously in that term; it can be substituted by "common sense reality". So the wave function still does not cover the whole universe, and one has to choose which part of common sense reality is assigned a wave function.
 
  • #154
atyy said:
Then there is still a classical/quantum cut. One shouldn't take the "classical" too seriously in that term; it can be substituted by "common sense reality". So the wave function still does not cover the whole universe, and one has to choose which part of common sense reality is assigned a wave function.
I don't think the wave function is assigned to any part, as it is purely epistemic, just an instrument for obtaining statistical predictions to compare with nature (that's why I say it is compatible with the objective existence of nature), and the interpretation is agnostic wrt hidden variables, so it clearly admits the wave function may not be all.
 
  • #155
TrickyDicky said:
I don't think the wave function is assigned to any part, as it is purely epistemic, just an instrument for obtaining statistical predictions to compare with nature (that's why I say it is compatible with the objective existence of nature), and the interpretation is agnostic wrt hidden variables, so it clearly admits the wave function may not be all.
Do you view the wave function as representing our knowledge of some underlying reality?
 
  • #156
TrickyDicky said:
I don't think the wave function is assigned to any part, as it is purely epistemic, just an instrument for obtaining statistical predictions to compare with nature (that's why I say it is compatible with the objective existence of nature), and the interpretation is agnostic wrt hidden variables, so it clearly admits the wave function may not be all.

Well, let's say there's a cat in a box. The Schroedinger's cat scenario is the assignment of a wave function to the cat, which is part of commonsense reality. Or suppose you have a superconducting chunk in the lab: we assign the chunk a wavefunction, and since the chunk is part of commonsense reality, we are assigning a wavefunction to part of it.
 
  • #157
bohm2 said:
Would that, then, not make the ensemble interpretation just a "shut up and calculate" interpretation in disguise?

It's not in disguise - it's explicit.

For example, if somehow you proved BM correct, that would not disprove the ensemble interpretation. And that is the precise reason it doesn't require collapse - it's totally compatible with interpretations like BM that explicitly do not have collapse.

There are many variants of shut up and calculate - most having to do with different takes on probability. You can interpret probability via Kolmogorov's axioms and leave probability abstract. You can use a frequentist take and get something like the ensemble. You can use a Bayesian take and get something like Copenhagen (most versions - some have the quantum state as very real) or Quantum Bayesianism - not that I can see much of a difference between the two, except Quantum Bayesianism states its interpretation explicitly.

I also want to emphasise, regarding this issue, that there seems to be a bit of confusion about the Bayesian and frequentist views promulgated in Jaynes' otherwise excellent book on probability. There is no difference between those interpretations mathematically - as must be the case, since both are equivalent to the Kolmogorov axioms. But they can lead to different ways of viewing the same problems, which sometimes can give different answers:
http://stats.stackexchange.com/ques...frequentist-approach-giving-different-answers

I want to be clear about this from the outset, because there have been threads where wild claims about the two approaches are made and it is claimed the frequentists are incorrect - I think Jaynes makes that claim. It's balderdash.

Thanks
Bill
 
  • #158
Well, to me BM is ugly, because it introduces trajectories, which are ultimately not observable, right? So what are they good for?
 
  • #159
TrickyDicky said:
If quantum theory does not, in fact, predict the result of individual measurements, but only their statistical mean, then why should one expect a syntax describing individual preparations?
Because quantum theory may not be the final theory of everything.
 
  • #160
vanhees71 said:
Well, to me BM is ugly, because it introduces trajectories, which are ultimately not observable, right? So what are they good for?
The wave function is also not observable, yet it is very useful. From the practical point of view, numerical calculations with particle trajectories are sometimes simpler than more conventional numerical methods of solving the Schrodinger equation.
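A minimal sketch of what such a trajectory calculation can look like, assuming the textbook analytic form of a free 1D Gaussian packet (units with ħ = m = 1) and the standard guidance law v = Im(∂xψ/ψ); all parameter values are illustrative:

```python
import numpy as np

# Illustrative parameters, in units with hbar = m = 1.
sigma0, x0, k0 = 1.0, 0.0, 1.0   # initial width, centre, mean momentum

def psi(x, t):
    # Analytic free-particle Gaussian packet. Normalization and global
    # phase are omitted: they cancel in psi'/psi, which is all we need.
    tau = t / (2.0 * sigma0**2)
    return np.exp(-(x - x0 - k0 * t)**2 / (4.0 * sigma0**2 * (1 + 1j * tau))
                  + 1j * k0 * x)

def velocity(x, t, h=1e-5):
    # Bohmian guidance law v = Im(psi'/psi), via a central finite difference.
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)
    return (dpsi / psi(x, t)).imag

# Euler-integrate a fan of trajectories starting across the packet.
x = np.linspace(-2.0, 2.0, 9)
dt, steps = 0.01, 500
for n in range(steps):
    x = x + dt * velocity(x, n * dt)

print(x)  # drifted at ~k0 and fanned out with the spreading packet
```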

More generally, as BM is ugly to you, in most cases it is probably not very useful to you. But it is beautiful and intuitive to me, which makes it helpful to me as a thinking tool. For instance, it seems that I was the first on this forum to understand the meaning of the main paper we discussed in this thread, and the Bohmian way of thinking helped me a lot to gain this understanding (even though I did not mention it in my first explanation of the paper, because I adjusted my explanation to the majority, who are not fluent in the Bohmian way of thinking). A more famous example is Bell, who discovered his celebrated theorem with the help of the Bohmian way of thinking.

I am not saying that any of this makes the use of BM necessary, but like many other tools, it may be useful if you know how to use it.
 
  • #161
vanhees71 said:
Well, to me BM is ugly, because it introduces trajectories, which are ultimately not observable, right? So what are they good for?
Sorry, but the trajectories of BM are the classical trajectories of the "classical part" of Copenhagen, and thus are very well observable. They are good for having a unified picture of the "quantum" and the "classical" domains of Copenhagen.
 
  • #162
bhobba said:
... about the Bayesian and frequentist views promulgated in Jaynes' otherwise excellent book on probability. There is no difference between those interpretations mathematically - as must be the case, since both are equivalent to the Kolmogorov axioms.
...
I want to be clear about this from the outset, because there have been threads where wild claims about the two approaches are made and it is claimed the frequentists are incorrect - I think Jaynes makes that claim. It's balderdash.

What I remember in this direction from Jaynes (from long ago, and my own attempt to understand, so without any warranty - don't blame Jaynes for my errors) is something along the following lines: the frequentists have no concept for assigning probabilities to theories - a theory can be true or not; it cannot be true with probability 0.743. But, of course, they have to do science, and that means they have to use outcomes with some probabilities to decide between theories.

Since these are not frequentist probabilities, what they have done is develop an independent science, stochastics. What they use in this domain is simply intuition - because, unlike the Bayesians, they have no nice axiomatic foundation for it. Sometimes the intuition works fine, sometimes it errs, and in the latter case Bayesian probability and this intuitive "stochastics" give different answers. But in such cases it would, of course, be wrong to blame the frequentist approach, because this approach, taken alone, simply tells us nothing.
 
  • #163
Ilja said:
The frequentists have no concept for assigning probabilities to theories - a theory can be true or not; it cannot be true with probability 0.743. But, of course, they have to do science, and that means they have to use outcomes with some probabilities to decide between theories.

That's not correct.

The modern frequentist view, as found in standard textbooks like Feller, is based on assigning an abstract thing called probability, obeying the Kolmogorov axioms, to events. It's meaningless until one applies the strong law of large numbers, and then, and only then, does the frequentist view emerge. Since, via the Cox axioms, the Bayesian view is equivalent to the Kolmogorov axioms, there can obviously be no difference mathematically. The only difference is how you view a problem.
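A minimal numerical illustration of that point (the numbers are arbitrary): the abstract probability 1/6 acquires its frequentist meaning only in the long run, via the law of large numbers.

```python
import random

random.seed(1)  # for reproducibility

# Relative frequency of rolling a six, for a growing number of trials.
# The strong law of large numbers says this converges to 1/6 ~ 0.1667
# almost surely; for small n it can be far off.
for n in (10, 100, 10_000, 1_000_000):
    sixes = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n = {n:>9}: relative frequency = {sixes / n:.4f}")
```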

Thanks
Bill
 
  • #164
Ilja said:
Sorry, but the trajectories of BM are the classical trajectories of the "classical part" of Copenhagen, and thus are very well observable. They are good for having a unified picture of the "quantum" and the "classical" domains of Copenhagen.
I thought the Bohm trajectories are not the classical ones, because there's the pilot wave concept, and the whole theory becomes non-local. I have to reread about Bohmian mechanics, I guess.
 
  • #165
vanhees71 said:
I thought the Bohm trajectories are not the classical ones, because there's the pilot wave concept, and the whole theory becomes non-local. I have to reread about Bohmian mechanics, I guess.
What Ilja meant is the following: Even though Bohmian trajectories of individual microscopic particles are not directly observable, a large collection of such trajectories may constitute a macroscopic trajectory of a macroscopic body, which obeys approximately classical non-local laws and is observable.
 
  • #166
bhobba said:
The modern frequentist view, as found in standard textbooks like Feller, is based on assigning an abstract thing called probability, obeying the Kolmogorov axioms, to events. It's meaningless until one applies the strong law of large numbers, and then, and only then, does the frequentist view emerge. Since, via the Cox axioms, the Bayesian view is equivalent to the Kolmogorov axioms, there can obviously be no difference mathematically. The only difference is how you view a problem.
There can be a difference.

The point is that, first, an essential part of the objective Bayesian approach is the justification of prior probabilities - the probabilities you have to assign if you have no information at all. The Kolmogorovian axioms simply tell us nothing about such prior probabilities. The basic axiom here is that if you have no information which distinguishes two situations, then you should assign them the same probabilities. Nothing in Kolmogorovian probability theory gives such a rule.

Then, the point is that there is the problem of theory choice based on the statistics of experiments, which is inherently non-frequentist, because theories have no frequencies. Orthodox, non-Bayesian statistics is doing something in this domain, because it has to. But what it is doing is nothing that could be derived from the Kolmogorovian axioms.
 
  • #167
Ilja said:
The Kolmogorovian axioms simply tell us nothing about such prior probabilities.

The Kolmogorovian axioms define probability abstractly. Bayesian probability (as defined by the Cox axioms) is logically equivalent to the Kolmogorov axioms, except it's not abstract - it represents a degree of confidence.

There is nothing stopping you assigning an abstract prior probability.

Thanks
Bill
 
  • #168
bhobba said:
The Kolmogorovian axioms define probability abstractly. Bayesian probability (as defined by the Cox axioms) is logically equivalent to the Kolmogorov axioms, except it's not abstract - it represents a degree of confidence.

But in Jaynes' variant there is more than just the axioms which define probability. There are also rules for the choice of prior probabilities.

If we have no information which makes a difference between the six possible outcomes of throwing a die, we have to assign equal probability to them, that is, 1/6. This is a fundamental rule which is different from the Kolmogorovian axioms, and is also not part of some subjectivist variants of Bayesian probability theory (de Finetti), but is an essential and important part of Jaynes' concept of probability as defined by the available information.

With Kolmogorov or de Finetti you can assign whatever prior probability you want. Following Jaynes, you do not have this freedom - the same information means the same probability.
 
  • #169
Ilja said:
But in Jaynes' variant there is more than just the axioms which define probability. There are also rules for the choice of prior probabilities.

If we have no information which makes a difference between the six possible outcomes of throwing a die, we have to assign equal probability to them, that is, 1/6. This is a fundamental rule which is different from the Kolmogorovian axioms, and is also not part of some subjectivist variants of Bayesian probability theory (de Finetti), but is an essential and important part of Jaynes' concept of probability as defined by the available information.

With Kolmogorov or de Finetti you can assign whatever prior probability you want. Following Jaynes, you do not have this freedom - the same information means the same probability.

I haven't read Jaynes, but I don't see how the choice 1/6 is essential to a Bayesian account of probability. The choice of 1/6 is the "maximal entropy" choice, where the entropy of a probability distribution is defined by S = \sum_j P_j \log(1/P_j), where P_j is the (unknown) probability of outcome number j. The purely subjective Bayesian approach doesn't require such a choice. However, to the extent that the entropy measures your lack of knowledge, maximal entropy priors better reflect your lack of knowledge.
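A minimal numerical sketch of that claim (the scipy-based setup is just one illustrative way to do it): maximizing the Shannon entropy over six outcomes, with no constraint beyond normalization, lands on the uniform 1/6.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize S = sum_j P_j log(1/P_j) over the 6-outcome probability simplex.
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)            # guard against log(0)
    return float(np.sum(p * np.log(p)))    # minimizing -S maximizes S

constraints = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
bounds = [(0.0, 1.0)] * 6
p0 = np.array([0.4, 0.2, 0.1, 0.1, 0.1, 0.1])  # arbitrary starting guess

result = minimize(neg_entropy, p0, bounds=bounds, constraints=constraints)
print(np.round(result.x, 4))  # -> approximately [1/6] * 6
```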

The beauty of Bayesian probability is that, given enough data, we converge to the same posterior probabilities even if we start with different prior probabilities. To me, that's an important feature.
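A minimal sketch of that convergence, using standard conjugate Beta-Binomial updating (the priors and data are made up): two very different priors on a coin's bias end up with nearly the same posterior mean after 1000 flips.

```python
# Conjugate Beta-Binomial updating: posterior is Beta(a + heads, b + tails).
priors = {"skeptic (expects bias ~0.9)": (20.0, 2.0),
          "agnostic (flat prior)":       (1.0, 1.0)}
heads, tails = 700, 300   # hypothetical data: 1000 flips

for name, (a, b) in priors.items():
    post_mean = (a + heads) / (a + b + heads + tails)
    print(f"{name}: posterior mean = {post_mean:.3f}")
# Both land near 0.70 despite starting from very different priors.
```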
 
  • #170
stevendaryl said:
I haven't read Jaynes, but I don't see how the choice 1/6 is essential to a Bayesian account of probability.

I think Ilja was referring specifically to Jaynes. Jaynes considered the prior to be objective, i.e. in any situation there is not a free subjective choice of prior. So there are subjective (de Finetti) and objective (Jaynes) Bayesians. Of course, most practical people use something like semi-empirical priors and a mixture of frequentism (practical, but incoherent at some point) and Bayesianism (coherent, but impractical).

stevendaryl said:
The purely subjective Bayesian approach doesn't require such a choice. However, to the extent that the entropy measures your lack of knowledge, maximal entropy priors better reflect your lack of knowledge.

I think Jaynes here advocated the Shannon entropy, but it isn't clear why one of the Renyi entropies shouldn't be preferred.
 
  • #171
Why do you think that Bayesianism is impractical? AFAIU there is no problem for Bayesians to obtain the results of frequentists if there are frequencies to be observed.
 
  • #172
Ilja said:
Why do you think that Bayesianism is impractical? AFAIU there is no problem for Bayesians to obtain the results of frequentists if there are frequencies to be observed.

I think Bayesianism is impractical because, to remain coherent and have the data lead one to the correct conclusion (in the Bayesian sense), the prior must be nonzero over all possibilities, including the true one. So as long as we can state all the possibilities, Bayesianism is practical. But what happens if I am looking for a quantum theory of gravity? I don't know all the possibilities, so I can't write down my prior. At this point I am forced to be incoherent, and rely on genius or guesswork.
 
  • #173
atyy said:
But what happens if I am looking for a quantum theory of gravity? I don't know all the possibilities, so I can't write down my prior. At this point I am forced to be incoherent, and rely on genius or guesswork.
Of course, but in this case frequentism does not help you at all. It does not work on theories, because theories have no frequencies.

And from a pragmatic point of view there is no problem at all - the only theories you have to consider are those that are known. The very point of Bayesianism is, anyway, that you don't have to know everything, but that you have to use plausible reasoning based on the information you have.
 
  • #174
Ilja said:
Of course, but in this case frequentism does not help you at all. It does not work on theories, because theories have no frequencies.

I think there is a sense in which Popperian falsifiability can be seen as a way to manage the complexity of a full-blown Bayesian analysis. If there are a number of possible theories, you just pick one, work out the consequences, and compare with experiment. If it's contradicted by experiment, you discard that theory and pick a different one. So you're only reasoning about one theory at a time.
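A cartoon of that procedure (all numbers invented, and the likelihood cut-off is an arbitrary illustrative choice, not a principled significance test): the candidate "theories" are coin biases, and a theory is discarded when the observed data are too improbable under it.

```python
from math import comb

def likelihood(p, heads, n):
    # Binomial probability of the observed data under bias p.
    return comb(n, heads) * p**heads * (1.0 - p)**(n - heads)

theories = [0.3, 0.5, 0.7]   # candidate coin biases
heads, n = 62, 100           # hypothetical observation

for p in theories:
    L = likelihood(p, heads, n)
    verdict = "discard" if L < 1e-4 else "keep for now"
    print(f"p = {p}: likelihood = {L:.2e} -> {verdict}")
# Only p = 0.3 is discarded here; the survivors are tested one at a time
# against further data.
```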
 
  • #175
Ilja said:
Of course, but in this case frequentism does not help you at all. It does not work on theories, because theories have no frequencies.

And from a pragmatic point of view there is no problem at all - the only theories you have to consider are those that are known. The very point of Bayesianism is, anyway, that you don't have to know everything, but that you have to use plausible reasoning based on the information you have.

Yes. I guess what I should say is that the Bayesian dream of never breaking coherence is impractical.
 
  • #176
stevendaryl said:
I think there is a sense in which Popperian falsifiability can be seen as a way to manage the complexity of a full-blown Bayesian analysis. If there are a number of possible theories, you just pick one, work out the consequences, and compare with experiment. If it's contradicted by experiment, you discard that theory and pick a different one.
Yes. But one problem of the Popperian approach was how to handle statistical theories, and statistical experiments, appropriately.
When does a statistical observation falsify a theory? This is where one needs Bayesian reasoning, where one can have a few theories and some statistical observations with unclear outcomes.
 
  • #177
I don't know how this ended up discussing Jaynes' view, but here it is, unedited (from his book, Probability Theory: The Logic of Science):

The “new” perception amounts to the recognition that the mathematical rules of probability theory are not merely rules for calculating frequencies of “random variables”; they are also the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind, and we shall apply them in full generality to that end.

It is true that all “Bayesian” calculations are included automatically as particular cases of our rules; but so are all “frequentist” calculations. Nevertheless, our basic rules are broader than either of these, and in many applications our calculations do not fit into either category. To explain the situation as we see it presently: The traditional “frequentist” methods which use only sampling distributions are usable and useful in many particularly simple, idealized problems; but they represent the most proscribed special cases of probability theory, because they presuppose conditions (independent repetitions of a “random experiment” but no relevant prior information) that are hardly ever met in real problems. This approach is quite inadequate for the current needs of science.

In addition, frequentist methods provide no technical means to eliminate nuisance parameters or to take prior information into account, no way even to use all the information in the data when sufficient or ancillary statistics do not exist. Lacking the necessary theoretical principles, they force one to “choose a statistic” from intuition rather than from probability theory, and then to invent ad hoc devices (such as unbiased estimators, confidence intervals, tail-area significance tests) not contained in the rules of probability theory. Each of these is usable within a small domain for which it was invented but, as Cox’s theorems guarantee, such arbitrary devices always generate inconsistencies or absurd results when applied to extreme cases; we shall see dozens of examples.
 
  • #178
Ilja said:
But in Jaynes' variant there is more than just the axioms which define probability. There are also rules for the choice of prior probabilities.

And that's part of how a particular view of something affects how you solve a problem - which is what I said right from the start.

It in no way changes the fact that the two are mathematically exactly the same.

Ilja said:
If we have no information which makes a difference between the six possible outcomes of throwing a die, we have to assign equal probability to them, that is, 1/6.

It's simply confirming what I said - how you view a problem affects how you approach it. It's adding something beyond the Kolmogorov axioms, which are exactly equivalent to the Cox axioms Bayesians use.

Thanks
Bill
 
  • #179
billschnieder said:
I don't know how this ended up discussing Jaynes' view, but here it is, unedited (from his book, Probability Theory: The Logic of Science):

And, as the link I gave detailed, his views are not universally accepted. I certainly do not accept them. It is just a particular philosophical view that is useful in some circumstances. So is the frequentist view. As one poster in the link, IMHO correctly, said:
'Whether frequentist or Bayesian methods are appropriate depends on the question you want to pose, and at the end of the day it is the difference in philosophies that decides the answer (provided that the computational and analytic effort required is not a consideration).'

The Bayesian view of probability vs the frequentist one is not going to be resolved here.

Thanks
Bill
 
  • #180
stevendaryl said:
I haven't read Jaynes, but I don't see how the choice 1/6 is essential to a Bayesian account of probability.

It isn't. It's simply a reasonable assumption that you, as a rational agent, wouldn't, without evidence one way or the other, prefer one face over another, so you assign an initial confidence level of 1/6. The frequentist view has no reason to do that, but in practice a frequentist would do the same based on the symmetry of the situation - in a long run of trials you wouldn't expect any face to occur more often than another.

Thanks
Bill
 
  • #181
bhobba said:
It isn't. It's simply a reasonable assumption that you, as a rational agent, wouldn't, without evidence one way or the other, prefer one face over another, so you assign an initial confidence level of 1/6. The frequentist view has no reason to do that, but in practice a frequentist would do the same based on the symmetry of the situation - in a long run of trials you wouldn't expect any face to occur more often than another.

Well, the thing that's interesting to me about a symmetry argument for probability is that, unlike subjective Bayesian probability, and unlike frequentist probability, which is really a property of an ensemble rather than of an individual event, symmetry-based probability seems to be an intrinsic property of the entities involved in the random event. So it seems like a candidate for an "objective" notion of probability for a single event.
 
  • #182
stevendaryl said:
Well, the thing that's interesting to me about a symmetry argument for probability is that, unlike subjective Bayesian probability, and unlike frequentist probability, which is really a property of an ensemble rather than of an individual event, symmetry-based probability seems to be an intrinsic property of the entities involved in the random event. So it seems like a candidate for an "objective" notion of probability for a single event.

It's really the same thing in disguise - since if you relabel the faces differently it shouldn't make any difference, unless there is some intrinsic difference between the faces - which is basically what the symmetry-type arguments used to make physical problems easier to solve involve.

Like I said - it's simply a different philosophy suggesting a different approach.

Thanks
Bill
 
  • #183
Ilja said:
But in Jaynes' variant there is more than just the axioms which define probability.
Regarding probability, many people confuse the axiomatics (mathematics only, saying nothing about semantics; an axiomatic system independent of any application, like all pure maths), a methodology of statistical analysis (like http://en.wikipedia.org/wiki/Bayesian_inference, or more generally a methodology for reasoning on uncertain, incomplete, ... data, as in E.T. Jaynes), and the philosophy of the interpretation of probability.

Patrick
 
  • #184
bhobba said:
It's really the same thing in disguise - since if you relabel the faces differently it shouldn't make any difference, unless there is some intrinsic difference between the faces - which is basically what the symmetry-type arguments used to make physical problems easier to solve involve.

Like I said - it's simply a different philosophy suggesting a different approach.

Karl Popper suggested a "propensity" interpretation of probability, where the fact that a coin has a 50/50 chance of landing heads or tails is an objective fact about the coin. I couldn't really see how that made much sense, except possibly as a symmetry argument.
 
  • #185
stevendaryl said:
Karl Popper suggested a "propensity" interpretation of probability, where the fact that a coin has a 50/50 chance of landing heads or tails is an objective fact about the coin. I couldn't really see how that made much sense, except possibly as a symmetry argument.

There are all sorts of different attitudes, philosophies, views, etc. - call them what you will - towards probability.

As you have probably guessed, for me the 'truth' lies in the Kolmogorov axioms - one chooses the view best suited to the problem. For me that's the frequentist one. That doesn't make it right, or better than other views; it's simply what I prefer.

Thanks
Bill
 
  • #186
microsansfil said:
Regarding probability, many people confuse the axiomatics (mathematics only, saying nothing about semantics; an axiomatic system independent of any application, like all pure maths),

See page 2 of Feller - An Introduction to Probability Theory and Its Applications:

In applications the abstract mathematical models serve as tools and different models can describe the same empirical situation. The manner in which mathematical theories are applied does not depend on pre-conceived ideas, it is a purposeful technique depending on and changing with experience. A philosophical analysis of such techniques is a legitimate study, but is not in the realm of mathematics, physics or statistics. The philosophy of the foundations of probability must be divorced from mathematics and statistics exactly as the discussion of our intuitive space concept is now divorced from geometry.

The axioms, in this case the Kolmogorov axioms, and how they are applied, are what applied math and physics are concerned with. Philosophy, experience, etc. guide us in how to apply the axioms - but it is the axioms themselves that are the essential thing.

Thanks
Bill
 
  • #187
bhobba said:
The axioms, in this case the Kolmogorov axioms, and how they are applied, are what applied math and physics are concerned with.
The axioms do not tell you how to determine the probability of an event.

Bayesian inference and frequentist inference are useful methodologies for doing this job in many scientific domains.

Up to here I don't need to speak about philosophy to use statistical methodology.

Patrick
 
  • #188
microsansfil said:
The axioms do not tell you how to determine the probability of an event.

I think you need to become acquainted with the strong law of large numbers.
https://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/

microsansfil said:
Bayesian inference and frequentist inference are useful methodologies for doing this job in many scientific domains.

That I definitely agree with.

Thanks
Bill
 
  • #189
bhobba said:
It's simply confirming what I said - how you view a problem affects how you approach it. It's adding something beyond the Kolmogorov axioms, which are exactly equivalent to the Cox axioms Bayesians use.

How can there be an equivalence if the domain of applicability is completely different, and the meaning is completely different?

Bayesian probability is about the logic of reasoning - what we can conclude given some information. Frequentist probability is about some physical laws of nature, which define how often, in repeated experiments, the outcome x will be observed, given the preparation procedure.

So if, for example, we do not have all the information about the preparation procedure, frequentist probability tells us nothing (given our information). Bayesian probability would give me something - which would be different from what it would give me if I had the full information.

And frequentism gives simply nothing for deciding which of two theories I should prefer given the data. OK, what to do in this case you can call "how to view a problem". But, following Bayesian probability, you have rules of logical consistency which you have to follow. The orthodox statistician is, instead, free to violate these rules and call this "his view of the problem". But essentially we can only hope that his "view of the problem" is consistent, or, if inconsistent, that his "view" does not give a different result from the consistent one.

This is the very problem you don't seem to see: the Bayesian is required to apply the Kolmogorov axioms in his plausible reasoning. The orthodox statistician is not, because for him plausible reasoning is not about frequencies, thus no probabilities are involved, and it makes no sense even to say "GR is false with probability 0.07549"; thus it makes no sense to apply the Kolmogorovian axioms to plausible reasoning, just as it makes no sense to apply them to electromagnetic field strengths.

"There is no place in our system for speculations concerning the probability that the sun will rise tomorrow." writes Feller. But this is what the statistics has to do, in its everyday applications. They have to tell us what is the probability that a theory is wrong given the experimental evidence, this is their job. So, in fact they have to apply plausible reasoning and apply it, intuitively. But without the educated information that they have to apply the rules of Kolmogorovian probability theory to their plausible reasoning, which is what they reject as meaningless.
 
  • #190
Ilja said:
How can there be an equivalence if the domain of applicability is completely different, and the meaning is completely different?

Axioms can model different things. So? In the Bayesian view, probability models a confidence level. In the modern version of the frequentist view it's simply abstract, and you show via the strong law of large numbers that, over a large number of trials, the relative frequency is FAPP in proportion to the probability. In a sense it's more fundamental than the Bayesian view - but that doesn't make it better or worse.

Thanks
Bill
 
  • #191
Ilja said:
How can there be an equivalence if the domain of applicability is completely different, and the meaning is completely different?
You use a method according to the problem you have to analyse.

For example, for the following problem, how can an axiomatics help to solve it?

Every morning I park my car around 8 am in a place where parking must be paid for from 9 am; several times a week I forget to move my car to a car park (which opens at 8:30) until 10 am.

I would like to calculate the probability of getting a ticket when I wake up at 10 am to move my car.

Patrick
 
  • #192
microsansfil said:
I would like to calculate the probability of getting a ticket when I wake up at 10 am to move my car.

There is not enough information to calculate the probability. You need to know, for example, the hours parking inspectors work in your area - or at least the probability of their working at that time. Is it a Sunday? Do they work Sundays - etc.

Added Later
In practice, to solve problems like that, an applied mathematician would model it on a computer using something like Simula, incorporating and tuning factors obtained from observation until it is in reasonable agreement with the level of accuracy required - if that level of accuracy is possible; it may not be.
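A toy Monte Carlo along those lines (every number below is a made-up modelling assumption, not observed data): at most one inspector pass per morning, at a uniformly random time, with the car in violation from 9:00 until it is moved at around 10:00.

```python
import random

def ticket_probability(p_pass=0.6, trials=100_000):
    # p_pass: assumed chance that an inspector passes at all on a morning.
    random.seed(42)
    tickets = 0
    for _ in range(trials):
        move_time = random.gauss(10.0, 0.1)        # when the car is moved
        if random.random() < p_pass:               # an inspector shows up
            pass_time = random.uniform(9.0, 11.0)  # when they pass the street
            if 9.0 <= pass_time <= move_time:      # caught in violation
                tickets += 1
    return tickets / trials

print(ticket_probability())  # ~0.30 under these toy assumptions
```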

Thanks
Bill
 
  • #193
bhobba said:
Axioms can model different things. So? In the Bayesian view, probability models a confidence level. In the modern version of the frequentist view it's simply abstract, and you show via the strong law of large numbers that, over a large number of trials, the relative frequency is FAPP in proportion to the probability. In a sense it's more fundamental than the Bayesian view - but that doesn't make it better or worse.
If the domain of applicability of approach 1 is much greater than that of approach 2, this makes approach 1 not only different but better.

Whenever you have real physical frequencies, you can also apply plausible reasoning to them. Thus, you can apply Bayesian probability wherever you have frequencies. But you cannot apply frequentism in plausible reasoning about things which do not have frequencies. That makes no sense.

This is like applying the Maxwell equations only to static electric fields. That would be stupid, and not simply a "different thing".
 
  • #194
Ilja said:
If the domain of applicability of approach 1 is much greater than that of approach 2, this makes approach 1 not only different but better.

To cut to the chase, the claim is that the Bayesian domain is better. This is the precise claim the people in the link, as well as myself, doubt. It is not better, for example, in calculating the distribution of offspring in a survival model. Nor is a frequentist view the best way to model confidence levels in decision theory. You mentioned the probability of GR being true. Obviously, probability in that instance is modelling a confidence level.

We seem to be losing sight, however, of the fact that this is a thread on QM - not Bayesian vs frequentist probability. We already have a section in this forum for that.

The point I was making is that shut up and calculate is compatible with either view.

Thanks
Bill
 
  • #195
bhobba said:
There is not enough information to calculate the probability. You need to know,
you need to have a methodology, which is not given by the axiomatics:

1/ I look at the statistics (number of cars in default of payment, number of cars actually penalized in 1 hour, etc.)
or
2/ I look at the instructions of the police (length of sidewalks inspected in 1 hour, number of personnel assigned to tickets, the tolerance, etc.) to build a prior.

Patrick
 
  • #196
microsansfil said:
you need to have a methodology, which is not given by the axiomatics:

1/ I look at the statistics (number of cars in default of payment, number of cars actually penalized in 1 hour, etc.)
or
2/ I look at the instructions of the police (length of sidewalks inspected in 1 hour, number of personnel assigned to tickets, the tolerance, etc.) to build a prior.

That would be a start. Whether it would be a good enough model depends purely on how accurate you want its predictions to be.

But I can't follow your point - in such a case it wouldn't matter one bit which view of probability you took; it's finding a good model that's relevant.

Thanks
Bill
 
  • #197
bhobba said:
it's finding a good model that's relevant.
What do you call a model in this context? What is a good, relevant model?

The formulation of a statistical model using Bayesian statistics has the feature of requiring the specification of prior distributions for any unknown parameters. Statistical models are also part of the foundation of Bayesian inference (starting with a prior distribution, getting data, and moving to the posterior distribution).

Posterior ∝ Likelihood × Prior
P(θ|y) ∝ P(y|θ) P(θ)

The most we can hope to do is to make the best inference based on the experimental data and any prior knowledge that we have available, reserving the right to revise our position if new information comes to light.
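To make the formula concrete, a minimal sketch applied to the parking example above (the prior and the data counts are invented): the posterior over the daily ticket rate θ is computed numerically as likelihood × prior on a grid, then normalized.

```python
import numpy as np

# P(theta|y) ∝ P(y|theta) P(theta), evaluated on a grid of rates.
theta = np.linspace(0.001, 0.999, 999)   # candidate daily ticket rates
prior = (1 - theta)**4                    # assumed prior favouring low rates
ticketed, safe = 6, 24                    # hypothetical data: 30 mornings
likelihood = theta**ticketed * (1 - theta)**safe

posterior = prior * likelihood
posterior /= posterior.sum()              # normalize over the grid

print("posterior mode:", theta[posterior.argmax()])  # ~0.18
print("posterior mean:", (theta * posterior).sum())  # ~0.19
# New mornings can be folded in the same way, revising the estimate
# as new information comes to light.
```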

Patrick
 
  • #198
I think the original issue has been addressed. Time to close this thread.

Thanks everyone!
 