Evaluate this paper on the derivation of the Born rule

Summary
The discussion revolves around the evaluation of the paper "Curie-Weiss model of the quantum measurement process" and its implications for understanding the Born rule in quantum mechanics. Participants express interest in the authors' careful analysis of measurement processes, though some raise concerns about potential circular reasoning in deriving the Born rule from the ensemble interpretation of quantum mechanics. The conversation highlights the relationship between state vectors, probability, and the scalar product in Hilbert space, emphasizing the need for a clear understanding of measurement interactions. There is also skepticism regarding the applicability of the model to real experimental setups, with calls for more precise definitions and clarifications of the concepts involved. Overall, the discourse reflects a deep engagement with the complexities of quantum measurement theory.
  • #271
vanhees71 said:
Well, here probability theory and statistics as you describe it were failing simply, because the assumptions of a certain model were wrong. It's not a failure of the application of probability theory per se. Hopefully, the economists learned from their mistakes and refine their models to better describe the real world. That's how empirical sciences work! If a model turns out to be wrong, you try to substitute it by a better one.
Not entirely. My point is that although you can predict the moments from a theoretical distribution function, the reverse is not true: you cannot obtain the distribution function from the empirical moments. That is why probability is fundamental.
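mikeyork's point that moments underdetermine the distribution function can be made concrete with a small sketch. The two distributions below are invented purely for illustration: they agree on their first three moments yet are plainly different, as the fourth moment shows.

```python
from fractions import Fraction as F

# Two made-up distributions with the same first three moments:
# A puts mass 1/2 on each of -1 and +1;
# B puts mass 3/4 on 0 and 1/8 on each of -2 and +2.
A = {-1: F(1, 2), 1: F(1, 2)}
B = {-2: F(1, 8), 0: F(3, 4), 2: F(1, 8)}

def moment(dist, k):
    # k-th raw moment: E[X^k] = sum over x of p(x) * x^k
    return sum(p * x**k for x, p in dist.items())

for k in (1, 2, 3):
    assert moment(A, k) == moment(B, k)   # mean 0, variance 1, third moment 0

# The distributions are nevertheless different:
print(moment(A, 4), moment(B, 4))         # fourth moments disagree: 1 vs 4
```

So any finite list of empirical moments is compatible with more than one distribution function, which is the asymmetry being claimed.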
 
  • #272
mikeyork said:
More generally, a finite number of events can only tell you a finite number of moments
In particular, a single measurement is completely unrelated to the distribution.
 
Likes vanhees71
  • #273
vanhees71 said:
your claim that Born's rule doesn't apply. If this was the case that would imply that you can clearly disprove QT by a reproducible experiment.
No. Failure of Born's rule is completely unrelated to failure of quantum mechanics. The latter is applied in a much more flexible way than the Born rule demands. It seems that we'll never agree on this.
 
  • #274
A. Neumaier said:
In particular, a single measurement is completely unrelated to the distribution.
As I have repeatedly emphasized, but you have repeatedly evaded, it is not the distribution (statistics) that matters but the distribution function (probability). This enables you to predict which results are more likely. To claim that the distribution function is "completely unrelated" to a single measurement is ridiculous.
 
  • #275
mikeyork said:
Look at my post #3.

Which says:

mikeyork said:
any mathematical encoding that tells us how to compute the relative frequency can serve as a theoretical probability.

In other words, the "fundamental concept" appears to be relative frequency--i.e., statistics. So I still don't understand your statement that probability is a "fundamental concept" while statistics is "derived".
 
  • #276
mikeyork said:
So let's give up on theory? All that stuff about Hilbert spaces is useless guff?

Hilbert spaces don't require any "fundamental concept" of probability. They are just vector spaces with some additional properties.

mikeyork said:
Professional poker players should retire?

Are you claiming that professional poker players routinely compute, for example, expectation values for, say, another player bluffing?

Obviously there are known probabilities for various poker hands, but those are based on, um, relative frequencies, i.e., statistics. So to the extent that there are quantitative probabilities in poker, they are based on statistics. Everything else you mention is just on-the-spot subjective judgments that are, at best, qualitative, which means they're irrelevant to this discussion.
 
  • #277
PeterDonis said:
Which says: In other words, the "fundamental concept" appears to be relative frequency--i.e., statistics.
No, no, no! Frequency counting is just a way to test a probability theory -- in the same way that scattering experiments are how you test a theory of interaction.
 
  • #278
mikeyork said:
Frequency counting is just a way to test a probability theory -- in the same way that scattering experiments are how you test a theory of interaction.

In which case you still haven't answered my question: what is the "fundamental concept" of probability? All you've said so far is that, whatever it is, we can test it using statistics. (Your statement in post #3 amounts to the same thing--it's "some thingie I can use to calculate something I can test by statistics".)
 
  • #279
mikeyork said:
in the same way that scattering experiments are how you test a theory of interaction.

Ok, but if I told you that my "theory of interaction" was "I have some thingie I use to compute scattering cross sections which I then test against measured data", would you be satisfied?
 
  • #280
PeterDonis said:
Obviously there are known probabilities for various poker hands, but those are based on, um, relative frequencies, i.e., statistics. So to the extent that there are quantitative probabilities in poker, they are based on statistics. Everything else you mention is just on the spot subjective judgments that are, at best, qualitative, which means they're irrelevant to this discussion.
Actually the relative frequencies are based on a probability assumption -- that each card is equally probable. As regards the rest, it's just your subjective judgment and I've already refuted it several times.
 
  • #281
PeterDonis said:
Ok, but if I told you that my "theory of interaction" was "I have some thingie I use to compute scattering cross sections which I then test against measured data", would you be satisfied?
No. Because "a thingie" doesn't allow you to calculate anything. QM gives you a probability theory that does.
 
  • #282
mikeyork said:
the relative frequencies are based on a probability assumption -- that each card is equally probable.

We don't have a common definition of "probability" so I can't accept this statement as it stands. I would state the assumption as: we assume that each hand is generated by choosing at random 5 cards from a deck containing the standard 52 cards and no others.

mikeyork said:
I've already refuted it several times.

Where? Where have you given the quantitative probabilities that, e.g., professional poker players calculate for other players bluffing?
 
  • #283
mikeyork said:
QM gives you a probability theory that does.

QM gives you a mathematical framework that does. But you have not explained why you think this mathematical framework is a "probability theory", except to say that "it lets me calculate stuff I can test against measured statistics". If I told you that my theory of interaction was "I have this mathematical framework that lets me calculate scattering cross sections which I then test against measured data", with no other information at all, would you be satisfied?
 
  • #284
PeterDonis said:
We don't have a common definition of "probability" so I can't accept this statement as it stands. I would state the assumption as: we assume that each hand is generated by choosing at random 5 cards from a deck containing the standard 52 cards and no others.
What does random mean if not equiprobable?
PeterDonis said:
Where? Where have you given the quantitative probabilities that, e.g., professional poker players calculate for other players bluffing?
They are not that numerically precise. In a game of poker, most factors that affect probability depend on judgment and experience. However, their interpretation of their experience is based on the probabilistic concept.
 
  • #285
PeterDonis said:
QM gives you a mathematical framework that does. But you have not explained why you think this mathematical framework is a "probability theory", except to say that "it lets me calculate stuff I can test against measured statistics".
I explained a lot more than that. I have now twice explained in this thread why scalar products offer a probability theory. The Born rule is the icing on the cake.
 
  • #286
mikeyork said:
What does random mean if not equiprobable?

It means a certain procedure for picking the cards: for example, you fan out the cards in front of me, I close my eyes and pick 5 of them. Or we have a computer program that numbers the cards from 1 to 52 and then uses one of the built-in functions in whatever programming language we are using to pick 5 "random" numbers from that list (where "random" here means "using the pseudorandom number generator built into the operating system"). Or...

In other words, "random" here is operationalized. If you ask what justifies a particular operationalization, it will come down to some argument about relative frequencies of objects chosen by that operational method, i.e., statistics. So if we even use the term "equiprobable", we mean it in a way that is ultimately justified by statistics. So still no "fundamental concept" of probability independent of statistics.
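A minimal sketch of the operationalization just described, with Python's built-in pseudorandom generator standing in for the "computer program" (seed and trial count are arbitrary): number the cards 1 to 52, sample 5 repeatedly, and tally each card's relative frequency, which is exactly the statistical check the justification comes down to.

```python
import random
from collections import Counter

# Number the cards 1..52 and let the built-in PRNG pick 5 of them.
rng = random.Random(42)
counts = Counter()
trials = 50_000
for _ in range(trials):
    counts.update(rng.sample(range(1, 53), 5))

# If the picking treats the cards symmetrically, each card should land
# in a 5-card hand with relative frequency near 5/52 ~ 0.096.
freqs = [counts[card] / trials for card in range(1, 53)]
print(min(freqs), max(freqs), 5 / 52)
```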

mikeyork said:
They are not that numerically precise.

They aren't numerical, period, as far as I can tell.

mikeyork said:
In a game of poker, most factors that affect probability depend on judgment and experience. However, their interpretation of their experience is based on the probabilistic concept.

What "probabilistic concept"? You still haven't told me what it is. All you've done is wave your hands about "factors" and "judgment" and "experience".
 
  • #287
mikeyork said:
I have now twice explained in this thread why scalar products offer a probability theory.

Your "explanation" amounts, as I said before, to saying that "scalar products let me calculate things that I can test against statistics". So, once more, I don't see how this gives a "fundamental concept" of probability that is independent of statistics.
 
Likes Mentz114
  • #288
PeterDonis said:
So if we even use the term "equiprobable", we mean it in a way that is ultimately justified by statistics.
No, your "random" picking verifies the probability theory that all cards are equally probable.
 
  • #289
PeterDonis said:
Your "explanation" amounts, as I said before, to saying that "scalar products let me calculate things that I can test against statistics". So, once more, I don't see how this gives a "fundamental concept" of probability that is independent of statistics.
I wrote a lot more than that. I'm not going to repeat it. I can't force you to read.
 
  • #290
mikeyork said:
your "random" picking verifies the probability theory that all cards are equally probable.

I did not use your formulation of "probability" in my scenario. In my scenario, "random" has nothing whatever to do with "probability". It's a reference to a particular kind of experimental procedure, and that's it. I did so precisely to illustrate how the "probabilities" for poker hands could be operationalized in terms of a procedure that makes no reference at all to any "fundamental concept" of probability.

You can't make such a "fundamental concept" appear just by saying so. You have to show me what it is, and why it has to appear in any scenario such as the "probabilities" of poker hands. So far your only answer has been "scalar products", but I didn't calculate any scalar products and my operationalized procedure doesn't require any.
 
  • #291
PeterDonis said:
I did not use your formulation of "probability" in my scenario. In my scenario, "random" has nothing whatever to do with "probability". It's a reference to a particular kind of experimental procedure, and that's it. I did so precisely to illustrate how the "probabilities" for poker hands could be operationalized in terms of a procedure that makes no reference at all to any "fundamental concept" of probability.
It doesn't matter how many cards you pull; you don't know that they are random. You don't even know that they are equally probable until you have pulled an infinite number of them. Equal probability is always a theoretical assumption to be tested (and never proven).
 
  • #292
mikeyork said:
It doesn't matter how many cards you pull; you don't know that they are random.

Sure I do; I defined "random", for my purposes in the scenario, to mean "pulled according to the procedure I gave". If you object to my using the word "random" in this way, I'll change the word, not the procedure.

mikeyork said:
Equal probability is always a theoretical assumption

For you, perhaps; but I made no such assumption at all, so I don't have to care whether it is "theoretical" or "requires an infinite number of cards pulled to verify", or anything like that.
 
  • #293
mikeyork said:
the probabilistic concept.

Let me restate the question I've asked repeatedly in a different way: presumably this "probabilistic concept" you refer to is not something you just made up, but is something that appears in some standard reference on probability theory. What reference?
 
  • #294
PeterDonis said:
Sure I do; I defined "random", for my purposes in the scenario, to mean "pulled according to the procedure I gave". If you object to my using the word "random" in this way, I'll change the word, not the procedure.
For you, perhaps; but I made no such assumption at all, so I don't have to care whether it is "theoretical" or "requires an infinite number of cards pulled to verify", or anything like that.
Then you have no theory with which to predict the frequencies. But I have, because equal probability gives me that theory.
 
  • #295
PeterDonis said:
Let me restate the question I've asked repeatedly in a different way: presumably this "probabilistic concept" you refer to is not something you just made up, but is something that appears in some standard reference on probability theory. What reference?
There are masses of textbooks on probability theory. Their objective is to predict frequencies, not count them.

As regards scalar products in QM like I said it is a very simple argument and I've already described it twice in this thread. I'm not going to do it again.
 
  • #296
mikeyork said:
Then you have no theory with which to predict the frequencies. But I have because equal probability gives me that theory.

In other words, now you're using "equal probability" to mean an assumption about frequencies? Basically, in the case of the cards, it would be "each of the 52 cards in a standard deck will appear with the same frequency". Calling this a "theory that predicts frequencies" doesn't change the fact that the assumption I just described is logically equivalent to the assumption "each of the 52 cards in a standard deck is equally probable". See below.

mikeyork said:
There are masses of textbooks on probability theory. Their objective is to predict frequencies, not count them.

On the frequentist interpretation of probability, which is AFAIK the one that the majority of the "masses of textbooks" use, probabilities are relative frequencies. Some relative frequencies are predicted (e.g., the relative frequency of four of a kind in poker), but those predictions are based on other relative frequencies of more elementary events (e.g., the relative frequency of each individual card in a standard 52 card deck).

Evidently you are not using this interpretation. The other standard interpretation is Bayesian. Is that the one you're using? Under the Bayesian interpretation, the "equally probable" assumption about, e.g., each card in a standard 52 card deck is just a uniform prior over a finite set with 52 elements. This would be consistent with your saying that probability theory is for predicting frequencies, but I don't see the connection with scalar products.
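For concreteness, the uniform-prior reading can be sketched as follows. The conditioning step (Bayes updating on "the card is a face card") is invented here purely for illustration, as are the encodings, and is not something either poster proposed.

```python
from fractions import Fraction as F

# Uniform prior over a 52-card deck: (rank, suit) with ranks 0..12
# (10, 11, 12 standing for J, Q, K) and suits 0..3.
cards = [(rank, suit) for rank in range(13) for suit in range(4)]
prior = {c: F(1, 52) for c in cards}

def condition(dist, event):
    # Bayes updating: restrict the prior to the event and renormalize.
    z = sum(p for c, p in dist.items() if event(c))
    return {c: (p / z if event(c) else F(0)) for c, p in dist.items()}

is_face = lambda c: c[0] >= 10                       # J, Q, or K
post = condition(prior, is_face)
p_king_given_face = sum(p for c, p in post.items() if c[0] == 12)
print(p_king_given_face)   # 1/3: four kings among twelve face cards
```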
 
  • #297
mikeyork said:
There are masses of textbooks on probability theory.

Are there masses of textbooks explaining how QM scalar products are probabilities? How, for example, they obey the Kolmogorov axioms?
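For a finite-dimensional case, at least, the check is straightforward; the amplitudes below are made up, and the point is only that the Born-rule numbers |⟨e_i|ψ⟩|² satisfy non-negativity, normalization, and finite additivity over the outcome set.

```python
import math

# Born rule in C^4: for a normalized state psi and an orthonormal basis,
# the probability of outcome i is |<e_i|psi>|^2. Arbitrary amplitudes:
psi = [1 + 2j, -1j, 3.0, 0.5 + 0.5j]
norm = math.sqrt(sum(abs(c) ** 2 for c in psi))
psi = [c / norm for c in psi]

p = [abs(c) ** 2 for c in psi]          # Born probabilities

def P(event):
    # Probability of a set of outcomes, summed over the basis.
    return sum(p[i] for i in event)

assert all(x >= 0 for x in p)                    # non-negativity
assert abs(P({0, 1, 2, 3}) - 1) < 1e-12          # normalization
A, B = {0, 1}, {2}                               # disjoint events
assert abs(P(A | B) - (P(A) + P(B))) < 1e-12     # finite additivity
print(p)
```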
 
  • #298
PeterDonis said:
In other words, now you're using "equal probability" to mean an assumption about frequencies?
No.
PeterDonis said:
Basically, in the case of the cards, it would be "each of the 52 cards in a standard deck will appear with the same frequency". Calling this a "theory that predicts frequencies" doesn't change the fact that the assumption I just described is logically equivalent to the assumption "each of the 52 cards in a standard deck is equally probable".
No, it's not. It may be empirically similar, but it will only be equivalent if you happen to get equal frequencies over an infinite number of pulled cards.
PeterDonis said:
On the frequentist interpretation of probability, which is AFAIK the one that the majority of the "masses of textbooks" use, probabilities are relative frequencies.
The distinction, as I have repeatedly said, is between measuring/counting and predicting. Just like everything else in physics. Either you have a theory or you don't.
PeterDonis said:
but I don't see the connection with scalar products.
As I said, you have to go back and read it. It's a really simple argument but I don't care in the least if you don't agree with it and I'm not going to argue about it any more.
 
  • #299
mikeyork said:
you have to go back and read it

I have read your posts in this thread repeatedly and I still don't see it. So I guess we'll have to leave it there.
 
  • #300
PeterDonis said:
Are there masses of textbooks explaining how QM scalar products are probabilities? How, for example, they obey the Kolmogorov axioms?
No. You asked me about the concept of probability theory. QM is a special case and, like I said, I don't care if you don't like my argument.
 
