QBism - Is it an extension of "The Monty Hall Problem"?

hankaaron
I just watched a video discussion on the modern interpretations of the wave function. In it I was introduced to QBism, i.e. Quantum Bayesianism. To me it sounded a lot like the famous Monty Hall problem. Is QBism's notion of probability similar to that?
 
No, it has nothing to do with the Monty Hall problem, apart from both having something to do with probability.
 
Last edited:
It's simply a different view of probability.

The basis of probability is the Kolmogorov axioms - not subjective knowledge. In those axioms probability is an abstract thing.

The Kolmogorov axioms are equivalent to the so-called Cox axioms, where probability is simply a state of knowledge. The Kolmogorov axioms, via the law of large numbers (which is derivable from those axioms), also give the frequentist view.

The Monty Hall problem, and QM, can be viewed via either interpretation of probability. The frequentist view leads to something along the lines of the ensemble interpretation, the subjectivist view to something along the lines of Copenhagen or Quantum Bayesianism.

I have to say, however, my background is applied math, and most applied mathematicians view it the frequentist way because a trial is a very concrete thing. Those into Bayesian statistics are an exception.
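
As an aside, the frequentist reading of the thread's title problem is literally just running trials. A minimal Python sketch (assuming the standard rules, where the host always opens an unchosen losing door):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One trial of the standard Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(n))
    print(f"switch={switch}: win rate ~ {wins / n:.3f}")  # ~1/3 stay, ~2/3 switch
```

The 2/3 answer emerges as a proportion over many trials - no subjective prior in sight, which is exactly the frequentist picture.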

Thanks
Bill
 
Last edited by a moderator:
atyy said:
Sure, in the sense that the Monty Hall problem uses Bayes Rule. However, the probabilities in Bayes Rule do not have to be subjective.

Good point - Bayes rule is NOT the Bayesian view of probabilities - they are two different things.

Bayes rule can be viewed under any interpretation of probability - it follows from the Kolmogorov axioms.
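
To make that concrete, here is a toy Python sketch (the sample space is invented for illustration): probability is treated as nothing but a normalized measure on a finite set, and Bayes rule falls out as an identity.

```python
from fractions import Fraction

# A toy finite probability space: Kolmogorov-style weights summing to 1.
# No interpretation is attached; probability is just a normalized measure.
space = {
    "HH": Fraction(1, 4), "HT": Fraction(1, 4),
    "TH": Fraction(1, 4), "TT": Fraction(1, 4),
}

def prob(event):
    return sum(space[w] for w in event)

A = {"HH", "HT"}   # first toss heads
B = {"HH", "TH"}   # second toss heads

# With conditional probability defined as P(A∩B)/P(B), Bayes rule
# P(A|B) = P(B|A) P(A) / P(B) is an identity of the measure.
lhs = prob(A & B) / prob(B)
rhs = (prob(A & B) / prob(A)) * prob(A) / prob(B)
assert lhs == rhs
print(lhs)  # 1/2
```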

Thanks
Bill
 
hankaaron said:
I just watched a video discussion on the modern interpretations of the wave function. In it I was introduced to QBism, i.e. Quantum Bayesianism.

Tossing a “fair” coin, following The Logic of Science by E. T. Jaynes:

prob = 1/2 is not a property of the coin.
prob = 1/2 is not a joint property of coin and tossing mechanism.
Any probability assignment starts from a prior probability.

http://www.nmsr.org/qbism.pdf
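
A minimal Python sketch of that last point (the Beta prior here is my own choice for convenience, since it is conjugate to the binomial likelihood - it is not taken from Jaynes's text):

```python
# prob = 1/2 as a state of knowledge: start from a prior over the coin's
# bias and update on observed tosses. With a Beta(alpha, beta) prior, the
# posterior after `heads` and `tails` observations is
# Beta(alpha + heads, beta + tails), so the predictive probability of
# heads is the posterior mean.
def predictive_heads(alpha, beta, heads, tails):
    return (alpha + heads) / (alpha + beta + heads + tails)

# Uniform prior Beta(1, 1): before any data, the symmetry of our
# knowledge - not any property of the coin - gives probability 1/2.
print(predictive_heads(1, 1, 0, 0))   # 0.5
print(predictive_heads(1, 1, 8, 2))   # 0.75 after seeing 8 heads, 2 tails
```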

Patrick
 
A point of view that I read:

Kolmogorov's mathematical theory is to probability as differential geometry is to general relativity.

It gives a formal, deductive framework, but it says nothing about its interpretation when applied to observations, bets, decisions, etc.

The difference between frequentist and Bayesian does not concern the formal deductive framework for how to conduct the calculations, but the interpretation of the data and results when we speak of "probabilities".

Jaynes gives examples in classical physics where, depending on what one calls "probability", we seem to get different reasoning, calculations, and results.

Patrick
 
microsansfil said:
It gives a formal, deductive framework, but it says nothing about its interpretation when applied to observations, bets, decisions, etc.

That's what you have an interpretation for.

There are two main ones:
1. Bayesian - probability is a state of knowledge.
2. Frequentist - you simply associate this abstract thing called probability with objects and apply the law of large numbers to show that, for a large number of trials, the probability is the proportion of the outcome in the trials.

Just out of interest Terry Tao has posted some nice proofs of this:
http://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/

This is absolutely fundamental to the modern view of probability.
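
A quick illustrative Python sketch of that picture (the value 0.3 is arbitrary):

```python
import random

# Frequentist picture via the law of large numbers: the relative frequency
# of an outcome approaches the abstract probability assigned to it.
p = 0.3  # the abstract probability assigned to "success"
for n in (100, 10_000, 1_000_000):
    freq = sum(random.random() < p for _ in range(n)) / n
    print(n, freq)  # the frequencies settle toward 0.3 as n grows
```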

From the well-known standard text - Feller, page 3:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Thanks
Bill
 
Last edited:
  • #10
microsansfil said:
Jaynes gives examples in classical physics where, depending on what one calls "probability", we seem to get different reasoning, calculations, and results.

It's obvious that, if that is the case, the differing views are leading to errors. That's why the truth lies in the axioms.

Thanks
Bill
 
Last edited:
  • #11
bhobba said:
It's obvious that, if that is the case, the differing views are leading to errors. That's why the truth lies in the axioms.

It's not a question of truth or falsity. The results are different. I will search for the article.

Patrick
 
  • #12
microsansfil said:
It's not a question of truth or falsity. The results are different. I will search for the article.

Then it's almost certainly wrong - simple as that.

If such a thing were true it would be big news, earning the discoverer an instant Fields Medal, and it would be one of the great seminal discoveries of mathematics, along the lines of Gödel's theorem.

I had a look at the reference:
'For many years there has been controversy over “frequentist” versus “Bayesian” methods of inference, in which the writer has been an outspoken partisan on the Bayesian side. The record of this up to 1981 is given in an earlier book (Jaynes, 1983). In these old works there was a strong tendency, on both sides, to argue on the level of philosophy or ideology. We can now hold ourselves somewhat aloof from this because, thanks to recent work, there is no longer any need to appeal to such arguments. We are now in possession of proven theorems and masses of worked-out numerical examples. As a result, the superiority of Bayesian methods is now a thoroughly demonstrated fact in a hundred different areas. One can argue with a philosophy; it is not so easy to argue with a computer printout, which says to us: “Independently of all your philosophy, here are the facts of actual performance.” We point this out in some detail whenever there is a substantial difference in the final results. Thus we continue to argue vigorously for the Bayesian methods; but we ask the reader to note that our arguments now proceed by citing facts rather than proclaiming a philosophical or ideological position.'

If true, like I said, that would be BIG news.

We have a professor of statistics and probability that posts here:
https://www.physicsforums.com/member.php?u=401042

I suggest you get his opinion before accepting such startling news.

Thanks
Bill
 
Last edited:
  • #13
bhobba said:
Then it's wrong - simple as that.

If such a thing were true it would be big news, earning the discoverer an instant Fields Medal, and it would be one of the great seminal discoveries of mathematics, along the lines of Gödel's theorem.

The Fields Medal is reserved for people who can demonstrate that a mathematical axiomatic system speaks about semantics (falsity/truth) :smile:

Any mathematical axiom is purely syntactic. To do physics we need semantics (observation, reasoning, decisions, ...).

Example of Diffusion


Patrick
 
Last edited:
  • #14
microsansfil said:
The Fields Medal is reserved for people who can demonstrate that a mathematical axiomatic system speaks about semantics (falsity/truth) :smile:

It's for any great mathematical discovery, such as those by Terry Tao, Witten and Nash - entirely different things not related to axiomatics.

Showing that two different interpretations that are equivalent to the same axioms give different results would be a mind-blowing discovery of seminal importance.

That's why I suggest you get the view of a professional in the field, because to me it's obvious it's incorrect.

Thanks
Bill
 
Last edited:
  • #15
microsansfil said:
Any mathematical axiom is purely syntactic. To do physics we need semantics (observation, reasoning, decisions, ...).

Of course it is.

But that in no way changes logic. The same axioms cannot lead to different results.

Both the frequentist and Bayesian views use exactly the same axioms. If you get two different results from the same axioms, you have discovered they are inconsistent. For probability that would be, well, mind-blowing news.

I also did a search on the frequentist view of probability being proven wrong.

Nothing came up.

I think the conclusion is pretty obvious - but contact the professor if you like.

Thanks
Bill
 
  • #16
microsansfil said:
prob = 1/2 is not a property of the coin.

Your proof of that claim, rather than philosophical waffling, would prove most interesting.

Could you post it please?

Thanks
Bill
 
  • #17
bhobba said:
Then it's almost certainly wrong - simple as that.

I think it may depend on whether we are talking about the philosophy of probability or about methodology. The way that Bayesians and frequentists analyze data is slightly different, even though in the limit of infinitely many trials the differences become negligible (because of the law of large numbers).

If you only have a finite number of trials (which, of course, you always do), then the frequentist has to make some judgments about the significance of results. Were there enough trials to get good statistics? At some point, such a judgment requires an ad hoc parameter (a confidence level).

In contrast, the methods of Bayesian statistics are indifferent as to the number of trials. You can get information from a single trial. You can get more information from 1000 trials, but there is no magic number of trials.

There could definitely be some situation where the Bayesian and Frequentist methodologies lead to different conclusions about a study.
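
As a sketch of that point (assuming, for illustration only, a uniform Beta(1,1) prior on a success probability):

```python
from math import sqrt

# With a Beta(1,1) prior, the posterior after h successes in n trials is
# Beta(1 + h, 1 + n - h). One trial already updates our knowledge; more
# trials just narrow the posterior. There is no magic sample size.
def posterior_mean_sd(h, n):
    a, b = 1 + h, 1 + n - h
    mean = a / (a + b)
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

print(posterior_mean_sd(1, 1))       # one success in one trial: mean 2/3, wide
print(posterior_mean_sd(600, 1000))  # 600/1000: mean ~0.6, much narrower
```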
 
  • #18
stevendaryl said:
There could definitely be some situation where the Bayesian and Frequentist methodologies lead to different conclusions about a study.

I have never heard of any.

To apply the frequentist view you need some probability that for all practical purposes is zero. But that cutoff can be anything. Do you think a probability of 1/googolplex^googolplex taken as zero would ever have the slightest practical consequence? And even if you can think of one, why would simply moving the cutoff below whatever the sensitivity of the situation is not fix it?

Thanks
Bill
 
  • #19
Here's something that mathematical physicist John Baez wrote years ago, and when I read it, I was so convinced that I assumed that frequentism was just one of those relics of the past. Since I don't really hang out with statisticians much, I didn't realize that there was still a debate about it.

It's not at all easy to define the concept of probability. If you ask most people, a coin has probability 1/2 to land heads up if when you flip it a large number of times, it lands heads up close to half the time. But this is fatally vague!

After all what counts as a "large number" of times? And what does "close to half" mean? If we don't define these concepts precisely, the above definition is useless for actually deciding when a coin has probability 1/2 to land heads up!

Say we start flipping a coin and it keeps landing heads up, as in the play Rosencrantz and Guildenstern are Dead by Tom Stoppard. How many times does it need to land heads up before we decide that this is not happening with probability 1/2? Five? Ten? A thousand? A million?

This question has no good answer. There's no definite point at which we become sure the probability is something other than 1/2. Instead, we gradually become convinced that the probability is higher. It seems ever more likely that something is amiss. But, at any point we could turn out to be wrong. We could have been the victims of an improbable fluke.

Note the words "likely" and "improbable". We're starting to use concepts from probability theory - and yet we are in the middle of trying to define probability! Very odd. Suspiciously circular.

Some people try to get around this as follows. They say the coin has probability 1/2 of landing heads up if over an infinite number of flips it lands heads up half the time. There's one big problem, though: this criterion is useless in practice, because we can never flip a coin an infinite number of times!

Ultimately, one has to face the fact that probability cannot be usefully defined in terms of the frequency of occurrence of some event over a large (or infinite) number of trials. In the jargon of probability theory, the frequentist interpretation of probability is wrong.

http://math.ucr.edu/home/baez/bayes.html
 
  • #20
stevendaryl said:
Here's something that mathematical physicist John Baez wrote years ago, and when I read it, I was so convinced that I assumed that frequentism was just one of those relics of the past. Since I don't really hang out with statisticians much, I didn't realize that there was still a debate about it.

http://math.ucr.edu/home/baez/bayes.html

I know that nobody is going to believe this, but when I posted that link, I had completely forgotten that John Baez wrote that article in response to conversations with me.
 
  • #21
stevendaryl said:
Here's something that mathematical physicist John Baez wrote years ago, and when I read it,

As usual he is correct.

The issue, though, is whether there is any practical situation for which there is no very small probability below which it makes no difference.

It's the same issue in applying calculus. You need some Δt not equal to zero to actually use it - but since it's not zero it can't be exactly correct in measuring things. But in practice there are intervals whose square can, for all practical purposes, be taken as zero - which is the intuitive approach to it.

Thanks
Bill
 
  • #22
bhobba said:
As usual he is correct.

The issue, though, is whether there is any practical situation for which there is no very small probability below which it makes no difference.

It's the same issue in applying calculus. You need some Δt not equal to zero to actually use it - but since it's not zero it can't be exactly correct in measuring things. But in practice there are intervals whose square can, for all practical purposes, be taken as zero - which is the intuitive approach to it.

Thanks
Bill

Right. Frequentism could be considered a pragmatic methodology for dealing with statistics, without making any claims about the philosophy of probability.

The thing that is annoying about Bayesianism is that none of its conclusions are ever exciting or revolutionary. The Bayesian can never make a definitive announcement of the form: "Our statistics show that cigarettes cause cancer" or "Our experiments show that parity is violated by weak decays." For the Bayesian, data never proves or disproves a claim; it just adjusts the posterior probability of its being true. In contrast, scientists schooled in Karl Popper's falsifiability think in terms of theories being thrown out by experiment.

When it comes to figuring out what course of action to take in response to some crisis, Bayesianism vs. Falsifiability seems to me to make a difference.

Suppose there are two competing theories about the cause of some disease afflicting a patient: Theory A, and Theory B. Suppose there are three treatment options: Option 1, Option 2, Option 3.

Theory A says that Option 1 is the best treatment, and Option 2 is not nearly as good, and Option 3 is so bad, it will likely kill the patient.
Theory B says that Option 3 is the best treatment, and Option 2 is worse, and Option 1 will kill the patient.

The Bayesian analysis would proceed as follows:

Let ##P(\alpha)## be the subjective probability of theory ##\alpha##.
Let ##P(j \mid \alpha)## be the probability of survival of the patient, given that theory ##\alpha## is true and option ##j## is chosen.

Then we compute ##P(j)##, the probability of survival given option ##j##, as follows:

$$P(j) = \sum_\alpha P(\alpha) P(j \mid \alpha)$$

So we pick the option that maximizes the probability of survival.

I would think that justifying that choice would be very difficult for the frequentist. The frequentist would say that there is no probability of theory A versus theory B. Either one or the other is correct, even if we don't know which. So either

$$P(j) = P(j \mid A)$$

or

$$P(j) = P(j \mid B)$$

but we don't know which. Combining different theories to get an overall probability makes no sense from a frequentist point of view.
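
For concreteness, a minimal Python sketch of the Bayesian calculation above (all the numbers are invented for illustration):

```python
# Average the survival probability under each theory, weighted by the
# subjective probability of that theory, then pick the best option.
P_theory = {"A": 0.6, "B": 0.4}           # P(alpha): assumed subjective priors
P_survive = {                             # P(j | alpha): assumed survival rates
    "A": {1: 0.9, 2: 0.6, 3: 0.1},
    "B": {1: 0.1, 2: 0.5, 3: 0.9},
}

def p_survival(option):
    return sum(P_theory[t] * P_survive[t][option] for t in P_theory)

for j in (1, 2, 3):
    print(j, round(p_survival(j), 3))     # 0.58, 0.56, 0.42
print("choose option", max((1, 2, 3), key=p_survival))  # option 1
```

The mixing step ##P(j) = \sum_\alpha P(\alpha) P(j \mid \alpha)## is exactly the line a frequentist cannot write down, since it treats the theories themselves as random.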
 
  • #23
bhobba said:
But that in no way changes logic. The same axioms cannot lead to different results.
This is absurd.

This demonstrates that you do not know what you are talking about where mathematics is concerned.

I have already given you a very simple example based on the axioms of distance:
http://en.wikipedia.org/wiki/Taxicab_geometry
Under one possible interpretation, the resulting "circle" looks nothing like the Euclidean one.

Physics is neither mathematics nor bhobba's philosophy.

Patrick
 
  • #24
bhobba said:
There are two main ones:
1. Bayesian - probability is a state of knowledge.
2. Frequentist - you simply associate this abstract thing called probability with objects and apply the law of large numbers to show that, for a large number of trials, the probability is the proportion of the outcome in the trials.

The two main ones are:
1. Epistemic
2. Ontic

Within the ontic camp there is also Popper's propensity interpretation, which is different from the frequentist one.

And the epistemic camp cannot be reduced to the Bayesian interpretation.

Patrick
 
  • #25
microsansfil said:
This is absurd.

I simply don't know what to say.

If you believe it's OK for two implementations of exactly the same axioms to give differing results, then your math teachers were different to mine - and I had quite a few highly qualified ones, for all sorts of subjects from Statistical Modelling to Hilbert Spaces.

It would also contradict standard texts like the Feller I quoted from.

Thanks
Bill
 
  • #26
bhobba said:
If you believe it's

I believe nothing; it is simply the mathematics. In mathematics, proof theory (axioms, syntax only) is different from model theory (semantics). The link between the two, in one direction, is Gödel's completeness theorem.

One wonders, then, who is the one speaking about metaphysics?

Patrick
 
  • #27
bhobba said:
I simply don't know what to say.

If you believe it's OK for two implementations of exactly the same axioms to give differing results, then your math teachers were different to mine - and I had quite a few highly qualified ones, for all sorts of subjects from Statistical Modelling to Hilbert Spaces.

I think it is absurd to call anything you said absurd. However, I think that the issue might be about how the axioms are used in practice. It is certainly not enough to say: Here are the axioms for probability. Here are the results of a study. Compute the probability that cigarettes cause cancer (or whatever). To apply a theory, you have to have some kind of rules for connecting the formulas on paper to something you do in a laboratory. The axioms do not tell you what those rules are.

Two people could agree on the axioms and disagree about how the axioms should be applied in a real-world case.
 
  • #28
microsansfil said:
1. Epistemic
2. Ontic

When I did my degree I did six compulsory subjects - Mathematical Statistics 1A, 1B, 2A, 2B, 3A and 3B.

It was also used in a number of other subjects I did, e.g. Operations Research, Mathematical Economics and Stochastic Modelling.

The view in every one of those subjects was Feller's. If you wanted a picture, you applied the law of large numbers and thought of the proportion in a large number of trials.

I have read books on things like credibility theory and Bayesian statistics that introduced the Bayesian view - in certain situations, like updating estimates, the Bayesian view was used because it led to more direct understanding.

I even studied books like the following to see the proof of existence theorems:
https://www.amazon.com/dp/9812703713/?tag=pfamazon01-20

But never in all my studies have those terms been used.

The first thing I need to ask - how do they diverge from the Kolmogorov axioms?

Thanks
Bill
 
Last edited by a moderator:
  • #29
stevendaryl said:
However, I think that the issue might be about how the axioms are used in practice.

I am starting to get that feeling as well.

But much more detail needs to be forthcoming to sort it out.

I could go through that book to try and nut it out.

But gee - I really don't feel like doing that for claims of this nature - the onus should really be on the person making the claims.

Thanks
Bill
 
  • #30
microsansfil said:
I believe nothing; it is simply the mathematics.

Of course nothing applied is simply mathematics.

I gave a quote from Feller - exactly what is your issue with it?

I will repeat it for ease of reference.

'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Thanks
Bill
 
Last edited:
  • #31
stevendaryl said:
In contrast, the methods of Bayesian statistics are indifferent as to the number of trials. You can get information from a single trial. You can get more information from 1000 trials, but there is no magic number of trials.

The results of Bayesian statistics do depend on the number of trials. Regardless of the prior, as long as it places non-zero (even infinitesimally small) probability on the true hypothesis, the Bayesian will converge to the true probability.

Bayesian statistics is guaranteed to work if one knows in advance all possible hypotheses. That is why it is beautiful, and also impractical - because if we did, we would already have a candidate non-perturbative definition of all of string theory.
http://en.wikipedia.org/wiki/Bernstein–von_Mises_theorem
http://www.encyclopediaofmath.org/index.php/Bernstein-von_Mises_theorem

The other important theorem is the de Finetti representation theorem that allows Bayesians to be "effectively frequentist".
http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
http://www.stats.ox.ac.uk/~steffen/teaching/grad/definetti.pdf
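
A quick Python sketch of the Bernstein–von Mises flavor of this (the true bias 0.7 and the two priors are invented for illustration):

```python
import random

# Two very different Beta priors, updated on the same coin-flip data,
# end up with nearly the same posterior mean once there is enough data:
# the prior washes out, and the Bayesian looks "effectively frequentist".
random.seed(0)
true_p = 0.7
n = 5000
h = sum(random.random() < true_p for _ in range(n))

for a, b in [(1, 1), (50, 1)]:  # uniform prior vs prior skewed toward p = 1
    post_mean = (a + h) / (a + b + n)
    print(f"prior Beta({a},{b}): posterior mean {post_mean:.3f}")  # both ~0.7
```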
 
Last edited:
  • #32
stevendaryl said:
I would think that justifying that choice would be very difficult for the frequentist. The frequentist would say that there is no probability of theory A versus theory B. Either one or the other is correct, even if we don't know which.

A Bayesian could say the same thing as well - it's just that our knowledge of which is correct is subjective.

A frequentist would say that in a large number of similar situations a certain proportion would be wrong. But in that application it seems a strange way to view it - for hypothesis testing I think the Bayesian view is better.

But take stochastic modelling and, say, a queue at a bank. It's quite natural to think that if we ran a large number of trials we would find a certain proportion with this or that queue length. Of course one could view it the Bayesian way and think of each number as simply a subjective likelihood - but for me that isn't as visual.

As far as QM goes, for exactly the same reason I find the ensemble view more appealing - it's concrete, thinking of repetitions of the same observation - exactly as Vanhees says.

You can view it the Bayesian way and get something like Copenhagen, but like the queue it sort of seems unnatural.

One thing is for sure: Bayesian is the obvious correct way to view Many Worlds. We know we must be in some world, but which one? However, I don't want to revive that long thread we had about it.

Thanks
Bill
 
Last edited:
  • #33
bhobba said:
When I did my degree
But never in all my studies have those terms been used.
I will not give you my CV.

There are different languages and cultures

Ontic := objective
Epistemic := subjective

Many different interpretations:
http://en.wikipedia.org/wiki/Frequentist_probability
http://en.wikipedia.org/wiki/Probabilistic_logic
http://en.wikipedia.org/wiki/Propensity_probability
http://en.wikipedia.org/wiki/Bayesian_probability
...

bhobba said:
The first thing I need to ask - how do they diverge from the Kolmogorov axioms?

Proof theory
Model theory
Gödel's completeness theorem

Kolmogorov's theory is included in the theoretical framework of measure theory.

In QM there is, inter alia, quantum probability theory:

http://arxiv.org/abs/quant-ph/0601158

That isn't the Kolmogorov axioms (the mathematics of classical probability theory was subsumed into classical measure theory by Kolmogorov in 1933). As in general relativity there is differential geometry. However, physics is not mathematics.

Patrick
 
Last edited:
  • #34
microsansfil said:
There are different languages and cultures

I think that's obvious.

microsansfil said:
Ontic := objective
Epistemic := subjective

It's good to know what you mean.

I did a scan of the book you linked to and it did not mention either of those terms.

microsansfil said:
Many different interpretations

I am aware there are a number of different views. That's not my issue. My issue is that, since they are all based on the Kolmogorov axioms, they must all give the same results.

My background is math, mate - I am well aware of what constitutes a valid proof.

I am well aware of Gödel's theorem, but its relevance here has me beat.

I am well aware of model theory. Its application to non-standard analysis is one of the most beautiful pieces of math I have ever seen - and one of the most difficult - it's decidedly non-trivial. Again, its relevance here has me beat.

microsansfil said:
Kolmogorov's theory is included in the theoretical framework of measure theory.

Mate - didn't I just post about a book on rigorous probability theory, similar to one I studied? Exactly what do you think it's about?

microsansfil said:
That isn't the Kolmogorov axioms (the mathematics of classical probability theory was subsumed into classical measure theory by Kolmogorov in 1933)

By definition the Kolmogorov axioms describe a measure space with total measure one, and conversely a measure space of total measure one obeys the Kolmogorov axioms.

I am now starting to suspect your knowledge of rigorous probability theory is rather rudimentary.

microsansfil said:
As in general relativity there is differential geometry.

I have studied GR. The situation is exactly the same as what Feller wrote for probability.

microsansfil said:
However, physics is not mathematics.

Nobody ever said it was. Again I repeat what Feller said and highlight the key point:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Now exactly what is your issue?

Thanks
Bill
 
Last edited:
  • #35
atyy said:
The results of Bayesian statistics do depend on the number of trials. Regardless of the prior, as long as it places non-zero (even infinitesimally small) probability on the true hypothesis, the Bayesian will converge to the true probability.

Bayesian statistics is guaranteed to work if one knows in advance all possible hypotheses. That is why it is beautiful, and also impractical - because if we did, we would already have a candidate non-perturbative definition of all of string theory.
http://en.wikipedia.org/wiki/Bernstein–von_Mises_theorem
http://www.encyclopediaofmath.org/index.php/Bernstein-von_Mises_theorem

The other important theorem is the de Finetti representation theorem that allows Bayesians to be "effectively frequentist".
http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
http://www.stats.ox.ac.uk/~steffen/teaching/grad/definetti.pdf

I once sketched out a Bayesian "theory of everything". Theoretically (not in practice, because it's computationally intractable, or maybe even noncomputable), you would never need any other theory.

Let ##T_1, T_2, \ldots## be an enumeration of all possible theories. Let ##H_1, H_2, \ldots## be an enumeration of all possible histories of observations. (It might be necessary to do some coarse-graining to make a discrete set of possibilities.)

Let ##P(T_i)## be the a priori probability that theory ##T_i## is true.
Let ##P(H_j \mid T_i)## be the (pretend it's computable) probability of getting history ##H_j## if theory ##T_i## were true. Then we compute the probability of ##T_i##, given that ##H_j## has been observed, via Bayes' rule:

$$P(H_j) = \sum_i P(T_i) P(H_j \mid T_i)$$
$$P(T_i \mid H_j) = \frac{P(H_j \mid T_i) P(T_i)}{P(H_j)}$$

So this gives us an a posteriori probability that any theory ##T_i## is true.

How can we enumerate all possible theories? Well, we can just think of a theory as an algorithm for computing probabilities of future histories given past histories. Computability theory shows us a way to enumerate all such algorithms.
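
Here is a toy Python sketch of the idea (the two "theories" and the prior are invented, and of course a real enumeration of all algorithms is intractable):

```python
# Each "theory" is an algorithm assigning a probability to an observed
# history; Bayes' rule then scores the theories against the data.
theories = {
    "fair":   lambda hist: 0.5 ** len(hist),
    "biased": lambda hist: 0.8 ** sum(hist) * 0.2 ** (len(hist) - sum(hist)),
}
prior = {"fair": 0.5, "biased": 0.5}      # P(T_i), assumed

history = [1, 1, 1, 0, 1, 1]              # an observed history H_j
likelihood = {t: f(history) for t, f in theories.items()}   # P(H_j | T_i)
evidence = sum(prior[t] * likelihood[t] for t in theories)  # P(H_j)
posterior = {t: prior[t] * likelihood[t] / evidence for t in theories}
print(posterior)  # the "biased" theory gains weight on a heads-heavy history
```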
 
  • #36
bhobba said:
Nobody ever said it was. Again I repeat what Feller said and highlight the key point:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied'

Now exactly what is your issue?

Your rhetoric does not interest me.
Physics is not limited to citations.

Once again: with the same mathematical axiomatics we can build different models/semantics.

Patrick
 
  • #37
stevendaryl said:
I once sketched out a Bayesian "theory of everything". Theoretically (not in practice, because it's computationally intractable, or maybe even noncomputable), you would never need any other theory.

Let ##T_1, T_2, \ldots## be an enumeration of all possible theories. Let ##H_1, H_2, \ldots## be an enumeration of all possible histories of observations. (It might be necessary to do some coarse-graining to make a discrete set of possibilities.)

Let ##P(T_i)## be the a priori probability that theory ##T_i## is true.
Let ##P(H_j \mid T_i)## be the (pretend it's computable) probability of getting history ##H_j## if theory ##T_i## were true. Then we compute the probability of ##T_i##, given that ##H_j## has been observed, via Bayes' rule:

$$P(H_j) = \sum_i P(T_i) P(H_j \mid T_i)$$
$$P(T_i \mid H_j) = \frac{P(H_j \mid T_i) P(T_i)}{P(H_j)}$$

So this gives us an a posteriori probability that any theory ##T_i## is true.

How can we enumerate all possible theories? Well, we can just think of a theory as an algorithm for computing probabilities of future histories given past histories. Computability theory shows us a way to enumerate all such algorithms.

But does this include the possibility that the same formal theory can correspond to different semantics (models - both mathematically and physically)?
 
  • #38
atyy said:
But does this include the possibility that the same formal theory can correspond to different semantics (models - both mathematically and physically)?

Well, for the practical purposes of science (trying to build rockets and lasers and computers and so forth), the semantics aren't important. The only thing of importance is how to relate past observations to future observations.

Of course, that viewpoint completely ignores the reason people are drawn to science--not for practical purposes, but to understand. It also ignores the fact that the connection between past observations and future observations is enormously complex, and from a purely computational standpoint, having a semantic understanding of what's going on is tremendously powerful in creating the connection. If you're just tinkering with algorithms without being guided by physical insight, it's hopeless, in practice. But in principle...
 
  • #39
microsansfil said:
Your rhetoric does not interest me. Physics is not limited to citations.

No, it's not. But precisely what makes you think that when someone 'shows how they are applied' that does not include a mapping from the abstract things in the axioms to what you apply them to? For example, in probability you map this abstract thing called probability to outcomes. One then applies the law of large numbers, and a few reasonableness assumptions, to show that abstract thing is the proportion of outcomes in a large number of trials. My suspicion is you don't have much experience in applying math. How it's done is usually so obvious it's not even spelt out - simply assumed.

microsansfil said:
Once again: with the same mathematical axiomatics we can build different models/semantics.

Translation: the same axioms can be applied to different situations. Why you want to make a point out of something utterly trivial has me beat.

But it's obvious you come from an entirely different background to me - and I suspect it's philosophy, not applied math or physics.

I have discussed this sort of thing with philosophy types before - we talk past each other.

Thanks
Bill
 
  • #40
microsansfil said:
Your rhetoric does not interest me.
Physics is not limited to citations.

Once again: with the same mathematical axiomatics we can build different models/semantics.

Patrick

I think you're barking up the wrong tree in arguing with Bill. There's no disagreement between you two about the fact that the same mathematical theory can have different, non-isomorphic models. If you disagree with Bill, it would be helpful to try to pinpoint what the disagreement really is. I can assure you that it is not about model theory or Gödel's theorem.
 
  • #41
bhobba said:
But it's obvious you come from an entirely different background to me - and I suspect it's philosophy, not applied math or physics.

bhobba said:
That's why the truth lies in the axioms.

No comment.

I have had the full range of your fallacies. You do not interest me; I move on.

Patrick
 
Last edited:
  • #42
stevendaryl said:
Well, for the practical purposes of science (trying to build rockets and lasers and computers and so forth), the semantics aren't important. The only thing of importance is how to relate past observations to future observations.

Of course, that viewpoint completely ignores the reason people are drawn to science--not for practical purposes, but to understand. It also ignores the fact that the connection between past observations and future observations is enormously complex, and from a purely computational standpoint, having a semantic understanding of what's going on is tremendously powerful in creating the connection. If you're just tinkering with algorithms without being guided by physical insight, it's hopeless, in practice. But in principle...

That's true, but I don't mean mathematical semantics as much as physical semantics. For example, Euclid's points can model either physical lines or physical points, so the formal object can have more than one valid physical correspondence, and I'm not sure you can list all conceivable physical correspondences to a given formal theory.
 
  • #43
microsansfil said:
No comment.

I have had the full range of your fallacies. You do not interest me; I move on.

Patrick

Sorry, not all French people are like that.
 
  • #44
naima said:
Sorry, not all French people are like that.

It's not a French thing, I am sure.

I think, for reasons best known to him, he was simply being contrary.

I looked at his background. He is evidently a research engineer and should have understood many of the fundamental issues he brought up and how they are resolved in practice. I thought his background was philosophy, because some (fortunately very few) philosophers can carry on like that - but evidently it isn't - which leads me to believe he was simply being contrary.

Thanks
Bill
 
Last edited:
  • #45
atyy said:
That's true, but I don't mean mathematical semantics as much as physical semantics. For example, Euclid's points can model either physical lines or physical points, so the formal object can have more than one valid physical correspondence, and I'm not sure you can list all conceivable physical correspondences to a given formal theory.

Of course that is true.

But when one invokes axioms in an applied context it's usually utterly obvious from the context what you are mapping to what.

When it is said the truth lies in the axioms, and people like Feller say we don't attempt to define what the basic objects are, what is meant is the modern mathematical method. We prove theorems without referencing the meaning of the objects the axioms apply to; so when they are applied we have all those consequences without any further ado. You can apply the same axioms to many different areas with great economy of thought.

In relation to frequentist vs Bayesian, it's simply a matter of how you interpret that undefined thing called probability in the Kolmogorov axioms. Interpret it as plausibility, i.e. a belief we as human beings have, and you get the Bayesian view - although that's usually done via the so-called Cox axioms, which are logically equivalent to Kolmogorov's. Or you can leave it undefined and simply show via the law of large numbers (and yes, some other assumptions are required as well, such as taking a very small probability to be FAPP zero - but as is usual in applied math they are not explicitly stated; you glean them with a bit of experience) that the undefined thing is the proportion in a large number of trials. You can also assume probability is a propensity and arrive at exactly the same place. Kolmogorov's axioms guarantee it.

The only caveat in all of this is if you decide to get really tricky and map the same axioms to the same physical situation in different ways, as mentioned above for Euclidean geometry. Then you are in for a whole world of hurt in saying what's true and what isn't - of course you can do it - but great care would be required in keeping each mapping separate.

It goes without saying that's not what is going on here, so regardless of what view you take of probability you must get the same results.

Thanks
Bill
 
  • #46
naima said:
Sorry, not all French people are like that.

This is the only argument you have. It is rather pathetic.

It is not a question of nationality, because these are E. T. Jaynes's points of view. E. T. Jaynes is not French, is he?

Within a purely mathematical framework (the axiomatics), Bayesian and frequentist are interpretations. They don't change the mathematical framework. That is self-evident, since interpretation lies outside the mathematical framework.

But mathematics isn't physics, and neither is the inverse. We cannot reduce QM to a purely syntactic axiomatic system.

I can only advise you to read E. T. Jaynes. He provides concrete examples of the different physical results obtained.

I have nothing to sell, no proselytizing in my case. I am just an amanuensis.

Patrick
 
Last edited:
  • #47
"For all practical purposes"

E. T. Jaynes:
http://bayes.wustl.edu/etj/articles/confidence.pdf

Confidence Interval (Frequentist) vs. Credible Interval (Bayesian)
AEC Graduate Course - Statistics:
http://www.lhep.unibe.ch/schumann/docs/nirkko_tufanli_intervals.pdf

To express uncertainty in our knowledge after an experiment:

– the frequentist approach uses a "confidence interval"
– the Bayesian approach uses a "credible interval"

Example - Cookie jar

See the results in "Confidence vs. credible interval", slide 20.
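
Since I can't reproduce the slides here, a rough Python sketch of the same contrast (7 successes in 10 draws, a normal-approximation interval and a uniform prior both assumed for illustration - these are not the cookie-jar numbers):

```python
from math import sqrt

h, n = 7, 10  # observed successes and trials

# Frequentist 95% confidence interval (crude normal approximation):
p_hat = h / n
half = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
print(f"confidence interval: ({p_hat - half:.2f}, {p_hat + half:.2f})")

# Bayesian 95% credible interval from a uniform prior, via a simple grid
# over the unknown proportion p (posterior proportional to the likelihood):
grid = [i / 1000 for i in range(1001)]
weights = [p**h * (1 - p)**(n - h) for p in grid]
total = sum(weights)
cdf, lo, hi = 0.0, None, None
for p, w in zip(grid, weights):
    cdf += w / total
    if lo is None and cdf >= 0.025:
        lo = p
    if hi is None and cdf >= 0.975:
        hi = p
print(f"credible interval:  ({lo:.2f}, {hi:.2f})")
# The two intervals answer different questions and need not coincide.
```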

My position: just an amanuensis.

Patrick
 
Last edited by a moderator:
  • #48
microsansfil said:
But mathematics isn't physics, and neither is the inverse. We cannot reduce QM to a purely syntactic axiomatic system.

This is the issue that leaves me scratching my head.

I can't find anything in this thread that says otherwise. My quote from Feller states that we determine the meaning of axiomatic systems by seeing how they are applied. When someone says the truth is in the axioms, it obviously doesn't mean the axioms are the truth - they are neither true nor false - what it means is that, when applied, the consequences of the axioms are so well worked out that the whole becomes a testable theory.

Maybe it's because English is not your first language - I simply do not know.

So let's go back to what you said right at the beginning:

microsansfil said:
prob = 1/2 is not a property of the coin.
prob = 1/2 is not a joint property of coin and tossing mechanism.
Any probability assignment starts from a prior probability.

As far as I can see, that is a philosophical position you are taking. There is zero reason you can't map probability, as per the Kolmogorov axioms, to the coin. In fact that's exactly what is done in basic courses on probability.

So can you please explain what's wrong with that? Why have textbooks like Feller got it wrong?

Thanks
Bill
 
Last edited:
  • #49
microsansfil said:
Just an amanuensis

Yes - but the issue is that you are the 'amanuensis' for a controversial position that is far from accepted, e.g.:
http://stats.stackexchange.com/ques...ian-credible-intervals-are-obviously-inferior

'So essentially, it is a matter of correctly specifying the question and properly interpreting the answer. If you want to ask question (a) then use a Bayesian credible interval, if you want to ask question (b) then use a frequentist confidence interval.'

While I haven't read Jaynes's book, from my knowledge of Bayesian inference the above looks a lot closer to the truth of the matter than the claim that the frequentist view is wrong.

It's simply a matter of which is the most natural way to view things, making problems easier. That's nothing new, and as I have posted previously in this thread, for hypothesis testing the frequentist view is rather unnatural - but wrong? That is another matter.

If you argue a non-standard, controversial position, you really should be able to justify it - not just fall back on 'all I am doing is repeating'.

Thanks
Bill
 
Last edited:
  • #50
Thread closed for the moment, pending possible moderation.
 