I Quantum physics vs Probability theory

  • #51
vanhees71 said:
Well, in my field we haven't ever needed POVMs at all.
Because there one doesn't care about the foundations. One would need them if one were to give a statistical interpretation to all the q-expectations of non-Hermitian field products occurring in the derivation of quantum kinetic equations. These cannot be interpreted statistically in the pre-1970 measurement formalism.
vanhees71 said:
Where do you need them to understand the double-slit or Stern-Gerlach experiments?
The original Stern-Gerlach experiment (in contrast to its textbook caricature) did not produce two well-separated spots on the screen but two overlapping lips of silver.
This outcome cannot be described in terms of a projective measurement but needs POVMs.

Similarly, joint measurements of position and momentum, which are ubiquitous in engineering practice, cannot be described in terms of a projective measurement.
Born's rule in the pre-1970 form does not even have idealized terms for these.

For the double slit without the common idealizations, which also needs a POVM treatment, see the book mentioned in the POVM thread.
vanhees71 said:
Again, I'm not against the POVM formalism, but it's overcomplicating things if you start on a level where even the simpler and straightforward case has been understood.
To motivate and understand Born's rule for POVMs is much easier (one just needs simple linear algebra) than to motivate and understand Born's rule in its original form, where all the fancy stuff about wave functions, probability amplitudes and spectral representations must be swallowed by the beginner.

Thus it is overcomplicating things if you start with probability amplitudes and spectral resolutions!
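To illustrate how little machinery is needed, here is a minimal numerical sketch of Born's rule in POVM form, ##p_k = \mathrm{Tr}(\rho E_k)##; the unsharp spin-##z## measurement used here is a made-up example, not anything from the discussion above:

```python
import numpy as np

# Born's rule in POVM form: p_k = Tr(rho E_k), where the effects E_k are
# positive semidefinite and sum to the identity. Hypothetical example: an
# "unsharp" spin-z measurement with effects E_pm = (I +- eta * sigma_z)/2.
I2 = np.eye(2)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

eta = 0.8                                  # detector sharpness; eta = 1 is the projective case
E_plus = 0.5 * (I2 + eta * sigma_z)
E_minus = 0.5 * (I2 - eta * sigma_z)
assert np.allclose(E_plus + E_minus, I2)   # completeness of the POVM

# State: spin-x up, rho = |+x><+x|
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

p_plus = np.trace(rho @ E_plus).real
p_minus = np.trace(rho @ E_minus).real
print(p_plus, p_minus)                     # 0.5 0.5 for this state, for any eta
```

Nothing beyond matrix multiplication and a trace is involved.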
 
Last edited:
  • #52
vanhees71 said:
If you insist on going beyond this, i.e., asking for a framework considering all possible measurements, i.e., of all observables, then you need to extend the logic and the probability theory beyond the standard features, as explained by @atyy in #33.
No, there is no need to do this. All you have to do is to care about accurate formulations of the propositions you make about some quantum systems.

Of course, if you, say, define the negation of "measuring X always gives result x" as "measuring X never gives result x", this operator "not" does not follow classical logic; thus it defines some "quantum logic". This is essentially all that has to be said about such "generalizations" of classical logic: care about what you say, follow the rules of classical logic, and you will not need any quantum logic in quantum theory either.

Same for probability theory. You can use the space of elementary events proposed by Kochen and Specker (in their paper about the impossibility of hidden variables, where they gave that construction but rejected it as not giving what is usually meant by "hidden variables"). This, together with taking care not to violate classical logic in one's own reasoning, is sufficient to live with classical probability theory there too.
 
  • Like
Likes Killtech and vanhees71
  • #53
Elias1960 said:
Same for probability theory. You can use the space of elementary events proposed by Kochen and Specker (in their paper about the impossibility of hidden variables, where they gave that construction but rejected it as not giving what is usually meant by "hidden variables"). This, together with taking care not to violate classical logic in one's own reasoning, is sufficient to live with classical probability theory there too.
Thank you! I must admit I didn't know any details about Kochen-Specker other than its being a no-go theorem before, but their article answers my original question quite well. It was a very interesting read, in particular the formulation of the framework they used to approach the problem.

That said, I found that simple attempts to model any quantum experiment become highly contextual. Though I think that in attempting to model a much more general setup or set of experiments this could be remedied.

But what I can't see any PT model doing is fully describing all states in terms of observable results one way or another (assuming any such model correctly describes the experiment it models, i.e., yields correct predictions). So this kind of approach should not come into conflict with this theorem.
 
  • #54
That construction on p. 63 of the Kochen-Specker paper was intended only as an illustration of why one needs a more serious restriction for an adequate definition of "hidden variables" than the mere existence of some space ##\Lambda##, which yields a quite trivial Kolmogorovian probability space for quantum theory too. It has been essentially ignored, so it is known only to those who have read the paper.
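To see how trivial such a space is, here is a sketch of the idea as I read it (my own illustration, with one qubit and three spin observables standing in for the general construction): take ##\Lambda## as the product of the outcome sets of all observables, with the product of the Born distributions as measure.

```python
import numpy as np
from itertools import product

# Trivial Kolmogorov space for a qubit in state psi: Lambda is the product of
# the outcome sets {+1, -1} of sigma_x, sigma_y, sigma_z, with the product of
# the individual Born distributions as measure. Every single-observable
# marginal is reproduced, yet nothing "hidden" has been explained.
psi = np.array([1, 0], dtype=complex)    # spin-z up, as an example

paulis = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def born(op, psi):
    vals, vecs = np.linalg.eigh(op)
    return {int(round(v)): float(np.abs(vecs[:, i].conj() @ psi) ** 2)
            for i, v in enumerate(vals)}

marginals = {name: born(op, psi) for name, op in paulis.items()}

# Product measure on Lambda = {-1, +1}^3: P(lx, ly, lz) = P(lx) P(ly) P(lz)
measure = {pt: np.prod([marginals[n][o] for n, o in zip("xyz", pt)])
           for pt in product([1, -1], repeat=3)}

print(sum(measure.values()))   # 1.0: a perfectly good classical probability space
print(marginals["z"])          # {-1: 0.0, 1: 1.0}, matching QM's prediction
```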

The QM pure states are defined in terms of observables: you observe a preparation measurement, and the observed result defines the eigenstate of the measured operator.

The interpretations, on the other hand, are not restricted to describing the state in observable-only terms. That would be an unreasonable restriction, motivated by nothing but positivism.

On the other hand, there is the objective Bayesian interpretation of probability (following Jaynes, it is the "logic of plausible reasoning"). In some sense, it is not about anything hidden - it is about probabilities of things that make sense to us. But these things may be wrong (say, a particular hypothesis about what has happened) or may be unrelated to anything real (like statements about what is true if a particular theory is true). So this is not only about observables, and should not be, because we want to reason about how probable such things are, and want to do this in a consistent way, and the rules of probability theory are what is appropriate for this.
 
Last edited:
  • Like
Likes Auto-Didact
  • #55
Elias1960 said:
On the other hand, there is the objective Bayesian interpretation of probability (following Jaynes, it is the "logic of plausible reasoning"). In some sense, it is not about anything hidden - it is about probabilities of things that make sense to us. But these things may be wrong (say, a particular hypothesis about what has happened) or may be unrelated to anything real (like statements about what is true if a particular theory is true). So this is not only about observables, and should not be, because we want to reason about how probable such things are, and want to do this in a consistent way, and the rules of probability theory are what is appropriate for this.

Just to point out that this is controversial. Even among Bayesians, many object to the objective Bayesian view. There are alternatives such as the subjective Bayesian view of de Finetti.
 
  • #56
atyy said:
Just to point out that this is controversial. Even among Bayesians, many object to the objective Bayesian view. There are alternatives such as the subjective Bayesian view of de Finetti.
They both have their applications. If one wants to speculate beyond the information one objectively has, one may also want to do this in a logically consistent way. In this case, the subjective Bayesian point of view can be used.

The difference is roughly this: If you have no information about a die which distinguishes between the numbers, then the objective Bayesian interpretation prescribes that one has to use ##\frac{1}{6}## for all of them. The subjective Bayesian interpretation makes no such prescription. You are free to speculate that 5 may be favored based on your subjective feeling. Whatever - there is not really a contradiction between them.

So I think the difference between objective and subjective Bayesians is irrelevant for this question. For physics, the objective view is clearly preferable, already because it gives, for free, a basis for the null hypothesis: if we have no information which suggests any causal connection between A and B, we have to assume P(AB) = P(A)P(B). But also all that is named entropic inference (say, the Bayesian variant of thermodynamics) depends on what can be said in the case where we have no information. In this case, objective Bayesians prescribe the probability distribution with the largest entropy, while subjective Bayesians tell us nothing at all.
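A toy numerical check of these two prescriptions (my own sketch; the die and the numbers are assumed, not from any source above): among candidate assignments for a six-sided die, the uniform one maximizes the Shannon entropy, and independence is what the null hypothesis prescribes.

```python
import numpy as np

# Maximum-entropy prescription: with no information distinguishing the
# faces of a die, the uniform distribution has the largest Shannon entropy.
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

uniform = np.full(6, 1 / 6)
favored = np.array([0.1, 0.1, 0.1, 0.1, 0.5, 0.1])   # a "5 is favored" guess

print(entropy(uniform))   # log(6) ~ 1.792, the maximum over all assignments
print(entropy(favored))   # ~ 1.498, strictly smaller

# Null hypothesis with no information connecting A and B: independence.
P_A, P_B = 0.3, 0.5
print(P_A * P_B)          # P(AB) = P(A) P(B) = 0.15
```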
 
  • #57
Elias1960 said:
They both have their applications. If one wants to speculate beyond the information one objectively has, one may also want to do this in a logically consistent way. In this case, the subjective Bayesian point of view can be used.
Well, as for the original question, I was actually looking merely to model the information we objectively have, and wanted to know whether PT is in general always suitable for that. If it weren't, that would be a very interesting thing to understand - but never mind that now.

My problem with QM is that for me all its interpretations add more confusion than they help to understand what we are actually modelling. Having a solid framework to view problems from that is free of all the confusing assumptions of QM interpretations (the actual math isn't the confusing part) might be a good way to reflect on and understand which particular aspects are causing the trouble.

For example, QM interpretations stick to the idea of point-like particles even though the entire math framework does everything it can to move as far from that intuition as possible. And never mind that the idea of a point-like charged particle is already incomprehensible and paradoxical on a classical level. It feels like QM gets this to work only because its math framework cheats its interpretations and secretly gives up all such assumptions.

I hoped to maybe use PT to get this sorted out in my head; to understand what kind of information is there to be modeled - on the very abstract level of a minimalist PT approach, since it is very lightweight on axioms, which makes it extremely general. In that context I would like to stay minimalist and not add any assumptions I don't absolutely need to get correct predictions. That makes me go with the most basic interpretations of probability.
 
  • #58
Auto-Didact said:
Having said that, I agree that a B level thread might not be the correct avenue for raising such an issue.
I am terribly sorry to have misunderstood this classification. Is there a way I can remedy this mistake?
 
  • #59
Killtech said:
I am terribly sorry to have misunderstood this classification. Is there a way I can remedy this mistake?
Ask a moderator to change it to A
 
  • #60
Killtech said:
I am terribly sorry to have misunderstood this classification. Is there a way I can remedy this mistake?

The moderators would have done so if they thought it was important enough.

The more important issue is that you don't understand QM:

Killtech said:
My problem with QM is that for me all its interpretations add more confusion than they help to understand
Killtech said:
For example, QM interpretations stick to the idea of point-like particles even though the entire math framework does everything it can to move as far from that intuition as possible.

This second quotation is, quite simply, nonsense. It does not reflect a failure of 100 years of QM development by the leading physicists of the 20th century. It reflects your failure, hitherto, to understand what QM is saying.

My concern is that we've indulged you in a fairly pointless exercise in analysing the foundations of QM vis-à-vis classical PT, whereas all along your issue is simply that of someone trying to learn QM for the first time and being confused by it.
 
  • #61
PeroK said:
This second quotation is, quite simply, nonsense. It does not reflect a failure of 100 years of QM development by the leading physicists of the 20th century. It reflects your failure, hitherto, to understand what QM is saying.
I don't have an issue understanding QM in the sense of knowing how to use the formalism and how to apply it to correctly calculate results, for which you don't need to resolve those kinds of issues. So in that sense I understand QM quite well.

But whenever physics textbooks tried to "intuitively" explain QM aspects, it left me more confused than before. Heisenberg's uncertainty principle is a prime example of this; but when I learned the theoretical proof, it was rather easy to understand what it meant. From that point on I learned that, at least for me, it is far better to derive my intuition from the behavior of the mathematical apparatus rather than rely on any attempt of physicists to explain it in "classical" terms that usually also contradict the math of QM.

PeroK said:
My concern is that we've indulged you in a fairly pointless exercise in analysing the foundations of QM vis-a-vis classical PT. Whereas, all along your issue is simply that of someone trying to learn QM for the first time and being confused by it.
I found the answer I asked for, albeit it took 3 pages.

Indeed, I hadn't realized that what I am looking for is a far more general framework to analyse possible constructions of theories capable of describing quantum experiments - just as the no-go theorems need one to discuss the possibility of hidden-variable theories. The underlying premise for the required framework is the same.

My concern is that this point of view is just so different from that of most physicists that it gets difficult to express the questions I have in terms they understand. Then again, this was an issue I might have better posted in the probability theory forums, since it needed only basic information about the several physical experiments in question, while a deeper understanding of PT was needed for all the rest. The idea that I explicitly did not want a model in terms of classic QM may also be problematic for people who are too familiar with it to even understand why anyone would want that - given that QM works well enough. Then again, given the Kochen-Specker article where such a formalism was developed, I wonder why it initially appeared unthinkable to so many here.
 
  • #62
Killtech said:
I don't have an issue understanding QM in the sense of knowing how to use the formalism and how to apply it to correctly calculate results, for which you don't need to resolve those kinds of issues. So in that sense I understand QM quite well.

But whenever physics textbooks tried to "intuitively" explain QM aspects, it left me more confused than before. Heisenberg's uncertainty principle is a prime example of this; but when I learned the theoretical proof, it was rather easy to understand what it meant. From that point on I learned that, at least for me, it is far better to derive my intuition from the behavior of the mathematical apparatus rather than rely on any attempt of physicists to explain it in "classical" terms that usually also contradict the math of QM.

What book are you using?

One problem I can see with your approach is how you would map your mathematics to experiment, especially as experiments involve macroscopic, classical apparatus. Can you explain the double-slit experiment, for example, purely in terms of the mathematical formalism?
 
  • Like
Likes Killtech
  • #63
Killtech said:
But whenever physics textbooks tried to "intuitively" explain QM aspects, it left me more confused than before. Heisenberg's uncertainty principle is a prime example of this; but when I learned the theoretical proof, it was rather easy to understand what it meant. From that point on I learned that, at least for me, it is far better to derive my intuition from the behavior of the mathematical apparatus rather than rely on any attempt of physicists to explain it in "classical" terms that usually also contradict the math of QM.

That's it! You have to build your intuition from the math. There's no other way. It's good advice to stay away from any text that claims otherwise. In terms of the writings of the founding fathers, for me that meant reading Schrödinger, Dirac, Born, Pauli, and particularly Sommerfeld rather than Bohr or Heisenberg.

Concerning foundations, stay away from philosophy books, where even ordinary words lose any clear meaning, leaving you in the dark and fog of utmost confusion ;-)).

Concerning foundational physical questions like EPR, entanglement, and the like, it's also good to look at the real-lab experiments by quantum opticians and read their papers (with a good theoretical textbook as background, like Garrison and Chiao, Quantum Optics, to get the full QFT description, which is the only true thing).
 
  • #64
Killtech said:
The idea that i explicitly did not want for a model it terms of classic QM may also be problematic for people that are too familiar with it to even understand why anyone would want that - given that QM works well enough. Then again the article of Kochen-Specker where such formalism was developed i wonder why for so many here it appeared initially unthinkable.
This is the double-edged sword of specialization into camps: the breeding of researchers into large silos who vehemently overreact to everyone who speaks against the accepted wisdom of the group; this is a widely documented phenomenon within the social sciences which has been studied from a variety of viewpoints (pedagogic, sociological, economic, political, doxastic, etc.), but I digress.

The direct downside of specialization is that specialists in different fields are unable to converse with each other, even when talking about the same topic, for a multitude of reasons. To quote Feynman: "In this age of specialization men who thoroughly know one field are often incompetent to discuss another. The great problems of the relations between one and another aspect of human activity have for this reason been discussed less and less in public. When we look at the past great debates on these subjects we feel jealous of those times, for we should have liked the excitement of such argument."
 
  • Like
Likes Killtech
  • #65
PeroK said:
What book are you using?
Over some 15 years I have looked through quite a few different books and also scripts I could find on the internet - far from all of them. Quite a few left me with a brain hemorrhage :) - more often those with an experimental focus. But there were also those that stuck to axiomatic approaches... I liked those the most.

PeroK said:
One problem I can see with your approach is how you would map your mathematics to experiment, especially as experiments involve macroscopic, classical apparatus. Can you explain the double-slit experiment, for example, purely in terms of the mathematical formalism?
Now you are starting to understand where I am coming from, because this is exactly the fundamental problem I am running into. For a person initially at home in pure mathematics and on the autistic spectrum, this is the most difficult part to sort out. I just can't handle the constructions physicists have made here (to me they appear as far from canonical as they can get), and the lack of a proper definition for even something simple, like a well-defined mapping algorithm between an experimental setup and the corresponding observable operator it measures, leaves me hanging with something that prohibits a well-defined interpretation mapping.

Then again, PT offers a framework to model experiments in a very clear and reasonable way I can fully understand, so it is a natural tool to come back to in order to see how I can fix my experiment-to-math gap.
 
  • Like
Likes PeroK
  • #66
PeroK said:
Can you explain the double-slit experiment, for example, purely in terms of the mathematical formalism?
I am not sure what your question means exactly. "Explain" is a wide term. A counter-question: does any modelling of a random experiment in PT explain anything? Reading through Kochen-Specker, I wonder whether you are asking if this can be done in a non-contextual way - in that sense it would have a canonical way of applying to a wide array of other instances. If that is the point of your question, I think it might be possible.
 
  • #67
Killtech said:
I am not sure what your question means exactly. "Explain" is a wide term. A counter-question: does any modelling of a random experiment in PT explain anything? Reading through Kochen-Specker, I wonder whether you are asking if this can be done in a non-contextual way - in that sense it would have a canonical way of applying to a wide array of other instances. If that is the point of your question, I think it might be possible.

You might want to check that your mathematical formalism predicts the results obtained by experiment. Somehow you have to map the mathematical model to a specific experimental set-up.
 
  • #68
PeroK said:
You might want to check that your mathematical formalism predicts the results obtained by experiment. Somehow you have to map the mathematical model to a specific experimental set-up.
The PT toolbox provides you with both - albeit the mapping is at first trivial. For example, the way you distinguish outcomes implicitly defines what can be observed directly: observables. You could in general define a mapping from all types of possible detectors to these observables, by which you identify outcomes; but of course this stays entirely on a macroscopic level. Once this is established and you have a correct model for your experiment, you can start comparing the QM model to yours, since both yield the same results. Now you try to find a mapping between each piece of information stored in your quantum state model (e.g., the wave function in the sense of a decomposition into some basis, with each coefficient holding one real number of abstract information) and your macroscopic outcome space, such that varying each such piece of information (within its allowed frame) yields the same change in results for both models.

Now the problem is that a wave function stores a lot of information, so you need a very general experimental setup to make each piece of information produce a distinguishable difference in the results (taken over many realizations with the same starting conditions). The simple double-slit setup doesn't even have parameters to play with, so it isn't suited for this. But one could think of each slit width as a parameter, and so on, until one gets enough degrees of freedom for this kind of mapping.

In the end you should have a mapping from parameters to outcome distributions (events in PT terminology), and those again are associated with the QM mathematical framework via its interpretations. The final stage is then to rearrange the original state space of the PT model in terms of the mathematical objects of QM via that mapping function - which therefore functions as an interpretation.
 
Last edited:
  • #69
Killtech said:
The PT toolbox provides you with both - albeit the mapping is at first trivial. For example, the way you distinguish outcomes implicitly defines what can be observed directly: observables. You could in general define a mapping from all types of possible detectors to these observables, by which you identify outcomes; but of course this stays entirely on a macroscopic level. Once this is established and you have a correct model for your experiment, you can start comparing the QM model to yours, since both yield the same results. Now you try to find a mapping between each piece of information stored in your quantum state model (e.g., the wave function in the sense of a decomposition into some basis, with each coefficient holding one real number of abstract information) and your macroscopic outcome space, such that varying each such piece of information (within its allowed frame) yields the same change in results for both models.

Now the problem is that a wave function stores a lot of information, so you need a very general experimental setup to make each piece of information produce a distinguishable difference in the results (taken over many realizations with the same starting conditions).

Hmm. You're not saying anything specific here. Let's say I'm an experimenter and I have results from a double-slit experiment using electrons. When either slit is open I get a single-slit pattern. But, when both slits are open I do not get the sum of two single-slit patterns, I get a different pattern; an "interference" pattern.

How does the mathematical formalism of QM explain that result? It has to be specific to that experiment.

If you can't do that, then you are studying pure mathematics; but not physics. Not that there is anything wrong with pure mathematics!
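(For reference, the textbook formalism answers this by adding amplitudes, not probabilities; here is a schematic numerical sketch of mine - the envelope, wave number, and slit separation below are assumed toy values, not a real Fresnel calculation:)

```python
import numpy as np

# Toy far-field model of the double slit: each open slit contributes a
# complex amplitude at screen position x; detection probabilities are
# |amplitude|^2, so two open slits give |psi1 + psi2|^2, not |psi1|^2 + |psi2|^2.
x = np.linspace(-10, 10, 2001)        # screen coordinate, arbitrary units
k, d = 2.0, 1.5                       # assumed wave number and slit separation

def slit_amplitude(x, offset):
    # schematic single-slit envelope times a propagation phase
    envelope = np.sinc(0.3 * (x - offset))
    return envelope * np.exp(1j * k * np.sqrt(1.0 + (x - offset) ** 2))

psi1 = slit_amplitude(x, -d / 2)      # slit 1 open
psi2 = slit_amplitude(x, +d / 2)      # slit 2 open

single_sum = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # sum of single-slit patterns
both_open = np.abs(psi1 + psi2) ** 2                 # actual two-slit pattern

cross_term = both_open - single_sum   # the interference term 2 Re(psi1* psi2)
print(np.max(np.abs(cross_term)))     # nonzero: the patterns differ
```

The cross term is the part that classical addition of probabilities misses, and it is specific to the experiment through the assumed slit geometry in the amplitudes.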
 
  • Like
Likes vanhees71 and Auto-Didact
  • #70
PeroK said:
Hmm. You're not saying anything specific here. Let's say I'm an experimenter and I have results from a double-slit experiment using electrons. When either slit is open I get a single-slit pattern. But, when both slits are open I do not get the sum of two single-slit patterns, I get a different pattern; an "interference" pattern.

How does the mathematical formalism of QM explain that result? It has to be specific to that experiment.
Sorry, I edited my prior post after posting, with a little more elaboration. But I think you misunderstand my goal a little. I do not aim to explain anything. I am rather looking for a clear construction principle for how to associate elements of the QM formalism to macroscopic observations made in the experiments - other than using the standard interpretations I am struggling with.
 
  • #71
Killtech said:
Sorry, I edited my prior post after posting, with a little more elaboration. But I think you misunderstand my goal a little. I do not aim to explain anything. I am rather looking for a clear construction principle for how to associate elements of the QM formalism to macroscopic observations made in the experiments - other than using the standard interpretations I am struggling with.

I don't believe you can, if we exclude highly specialized macroscopic objects that have been experimentally created to display QM phenomena and look at "ordinary" macroscopic objects, like a particle detector.

You can't account for every particle in the detector and environment explicitly. Schrödinger's cat might be a good example. Just how, in QM terms, are you going to define a "live" cat and a "dead" cat? How do you define a cat, for that matter! You can do it in veterinary terms. But there is no QM definition of a cat.

You have to accept that mathematical and physical reasoning from QM does not extend to a cat. Roughly you need at least:

QM
Molecular chemistry
Organic chemistry
Cell biology
Biology

QM underpins the whole edifice, but you can't understand a cat using only QM.

Theoretically, let's assume, we could do it. But it's practically impossible.
 
  • #72
PeroK said:
I don't believe you can, if we exclude highly specialized macroscopic objects that have been experimentally created to display QM phenomena and look at "ordinary" macroscopic objects, like a particle detector.

Theoretically, let's assume, we could do it. But it's practically impossible.
Well, I have to disagree here. Initially I was simply looking for a way to express Schrödinger's equation in pictures to better understand what it does - because I found that whenever dealing with differential equations it is extremely helpful to depict them visually to get a good intuition of how solutions should look and why certain theorems hold - and the fact that it is complex-valued was a bit of an obstacle. So using the polar representation ##\Psi = \sqrt{\rho}\, e^{iu}## I got around it and checked the time evolution equations for both quantities. Since it always helps to find similar, already well-understood equations as a shortcut to picturing these, I found that classical physics offers a lot. For example, the time evolution of ##\rho## is the simple continuity equation, while the one for the probability density current can be written in terms of the Navier-Stokes equations of a superfluid with a peculiar non-linear self-interaction ##\hbar^2 \frac{\nabla^2 \sqrt{\rho}}{2m\sqrt{\rho}}## (a pressure term?). I then looked at how this object interacts with its environment, just to find that it does so again in a quite familiar fashion - along the lines of Schrödinger's original attempt to interpret the wave function as the charge density (albeit here ##\rho## is encoded in it via Born's rule), and motivated by the interpretation adopted when the Dirac equation's continuity equation goes negative. Most convincing, however, is how intuitive this makes the H-atom solutions: a problem with Bohr's model was that a charged particle with angular momentum would emit an EM wave and lose energy - but a charged fluid can take the form of a disk with nonzero angular momentum whose density does not change over time, thus emitting no energy. Only if you combine two different energy eigenstates do you immediately find an oscillating charge distribution, ##\partial_t \rho = \partial_t \langle E_1+E_2|E_1+E_2\rangle = \partial_t\, 2\,\mathrm{Re}\langle E_1|E_2\rangle \propto \sin((E_2-E_1)\,t/\hbar)##, so such a solution should (classically) rapidly lose energy by emitting an EM wave of the proper frequency and collapse to the lower state. This behavior, which would make only a discrete set of energy eigenstates classically stable, is however only possible for a non-linear system.
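For reference, here is my sketch of that decomposition in the standard Madelung form (assuming a single particle in an external potential ##V##, with phase ##S = \hbar u## and velocity field ##v = \nabla S/m##):
$$\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad m\,\partial_t v + m(v \cdot \nabla)v = -\nabla\left(V - \frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}\right).$$
The first equation is the continuity equation; the last term of the second, Euler-like equation is the quantum potential - the non-linear self-interaction mentioned above.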

But generally, what is stirring me up is that the time evolution of the probability density contains a non-linear self-interaction term according to Schrödinger's equation. From a PT point of view this is a no-go for a true probability density, because it would allow different realizations/outcomes of an experiment to interact with each other (this seems to be the root of all QM non-classicality). That said, interpreting this as an interaction with an alternate universe makes a lot of sense to me here. But the easiest way to remedy the problem is if ##\rho## were simply a physical density instead (to which a probability density is merely proportional), since those obviously do have such interactions. At this crossroads I view the latter approach as the more canonical, while most QM interpretations take the other road - and I must understand why.

But yeah, I know superfluids are not exactly common on the macroscopic level. Neither are non-linear systems of that kind easy to find. However, there are a lot of macroscopic non-linear examples that show interesting behavior, for example solitons: solutions to non-linear wave equations which exhibit particle behavior. So there may not be a perfect macroscopic match for the wave-function behavior, but you can get quite close. And I find it sufficient to follow that visualization of QM rather than sticking to abstract point particles.

But without a connection to experiments it has limited usability.
 
Last edited:
  • #73
Killtech said:
I am terribly sorry to have misunderstood this classification. Is there a way I can remedy this mistake?

After reviewing the thread, I have changed the level to "I". Some aspects of the discussion probably can't be addressed fully except at the "A" level, but your posts indicate that you do not have the background needed for an "A" level discussion. Since the discussion is clearly beyond the "B" level at this point, "I" level seems like the best compromise.
 
  • #74
Stephen Tashi said:
If that refers to my questions, the problem is to show that "quantum logic" or "quantum probability" or "probability amplitudes" are organized mathematical topics that generalize ordinary probability theory. The alternative to that possibility is that these terms are not defined in some unified mathematical context, but are informal descriptions of aspects of calculations in QM.
What do you think about Streater's Classical and Quantum Probability?

There's also Hardy's Quantum Theory From Five Reasonable Axioms which tries to reconstruct both classical and quantum probabilistic theories.
 
  • Like
Likes PeroK
  • #75
kith said:
What do you think about Streater's Classical and Quantum Probability?

There's also Hardy's Quantum Theory From Five Reasonable Axioms which tries to reconstruct both classical and quantum probabilistic theories.

I haven't looked at Streater's paper yet. Hardy's approach uses "physical probability". It's what I'd call the Axiom Of Average Luck. It modifies the Law Of Large Numbers to say that a probability can be physically approximated to any given desired accuracy by independent experiments - as opposed to the mathematical statement of the law, which only deals with the probability of getting a good approximation.
 
  • #76
Stephen Tashi said:
Hardy's approach uses "physical probability". It's what I'd call the Axiom Of Average Luck. It modifies the Law Of Large Numbers to say that a probability can be physically approximated to any given desired accuracy by independent experiments - as opposed to the mathematical statement of the law, which only deals with the probability of getting a good approximation.
Do you think this is enough to do physics or is there something missing? If it is enough, classical and quantum probabilities in physics are on equal footing (because rigorous probability theory itself isn't needed for physics if we take this point of view).

But try Streater, I think his treatment is much more aligned with what you are looking for.
 
  • #77
kith said:
If it is enough, classical and quantum probabilities in physics are on equal footing (because rigorous probability theory itself isn't needed for physics if we take this point of view).

Connecting probability theory with applications of probability theory is (yet another) problem of interpretation. Probability theory doesn't say that random variables have realizations, it doesn't say that we can do random sampling, and it doesn't comment on whether events are "possible" or "impossible". Probability theory is circular. It only talks about probabilities.

Attempts to connect the law of large numbers to physical reality seem to work well. However, attempts to use martingale methods of gambling may also seem to work well. Suppose the probability of an event can always(!) be approximated to two decimal places by 10,000 independent trials. How many times will Nature perform sets of 10,000 trials? Will there be a physical consequence if one set of these trials fails to achieve two-decimal accuracy? I don't know how to reconcile the concept of "physical probability" (results of repeated experiments) with a scheme for how many times Nature conducts such series of experiments. There is also the problem that if I look for places where Nature has repeated an experiment, it is me that is grouping things into batches of independent experiments. The frequency of successes will depend on how I group them.
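(A quick simulation of mine illustrates the gap; the fair coin, batch size, and tolerance here are assumed values. The mathematical law only makes a bad batch improbable, not impossible:)

```python
import numpy as np

# The mathematical law of large numbers: a batch of n independent trials
# gives an estimate that is only *probably* within the desired tolerance.
rng = np.random.default_rng(0)
p_true, n = 0.5, 10_000           # assumed fair coin, batches of 10,000 trials
batches, tol = 1_000, 0.005       # "two decimal places" accuracy claim

estimates = rng.binomial(n, p_true, size=batches) / n
misses = np.sum(np.abs(estimates - p_true) > tol)

print(misses / batches)   # fraction of batches failing the accuracy claim;
                          # roughly 0.3 here, since tol is one standard deviation
# "Physical probability" as a *guaranteed* approximation is therefore
# strictly stronger than what the theorem itself asserts.
```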
 
  • Like
Likes bhobba
  • #78
From comments on this thread, I take away (among other things) that classical probability is fine in its domain, but there are some instances where it won't work (Bell, two-slit, etc.). But outside of the evident counter-examples, I am not always sure where the boundary lies. For example, if I google "Schrödinger equation and brownian motion", I get a number of articles, such as those attempting to derive the equation using classical statistical methods, or to apply it to non-quantum phenomena, such as
https://www.researchgate.net/publication/237152270_Quantum_equations_from_Brownian_motion
https://www.springer.com/gp/book/9783540570301
https://onlinelibrary.wiley.com/doi...978(199811)46:6/8<889::AID-PROP889>3.0.CO;2-Z
But could such an endeavor (either deriving the S. equation by applying classical statistics to stochastic processes, or conversely, applying the S. equation to a macro phenomenon) even make sense?
 
  • #79
nomadreid said:
From comments on this thread, I take away (among other things) that classical probability is fine in its domain, but there are some instances where it won't work (Bell, two-slit, etc.)

Are there actually instances where classical probability theory "won't work"? Or are such failures the failure of the assumptions made in modeling phenomena with classical probability theory - for example, assuming events are independent when they (empirically) are not.

Griffiths uses the term "pre-probabilities" to describe mathematical structures that are used to derive probabilities, but which are not themselves probabilities. ( section 3.5 https://plato.stanford.edu/entries/qm-consistent-histories/ ). The manipulations of "pre-probabilities" can resemble the manipulations used for probabilities. Because the pre-probabilities of QM use complex numbers, one might call them "complex" or "quantum" probabilities. But the success of pre-probabilities does not imply that classical probability theory won't work. It does imply that modeling certain physical phenomena is best done by thinking in terms of pre-probabilities instead of making simplifying assumptions of independence and applying classical probability theory directly.
 
  • Like
Likes nomadreid, vanhees71, *now* and 1 other person
  • #80
Stephen Tashi said:
Are there actually instances where classical probability theory "won't work"?
No. In the objective Bayesian interpretation, probability is simply the logic of plausible reasoning. Logic always works. If logic seems to fail, the error is somewhere else.
nomadreid said:
But could such an endeavor (either deriving the S. equation by applying classical statistics to stochastic processes, or conversely, applying the S. equation to a macro phenomenon) even make sense?
It makes sense.

The classical derivation comes from
Nelson, E. (1966). Derivation of the Schrödinger Equation from Newtonian Mechanics, Phys Rev 150(4), 1079-1085
and is known as Nelsonian stochastics.

A conceptually IMHO much better variant comes from Caticha and is named "entropic dynamics":
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A 44 , 225303, arxiv:1005.2357

Both have a problem known as the "Wallstrom objection": the Schrödinger equation is derived only for wave functions which have no zeros in the configuration space.
 
  • Like
Likes Stephen Tashi and nomadreid
  • #81
Wasn't Nelson himself quite critical of his own baby recently? I'd have to search for the source where I read about it ;-)).
 
  • #82
Elias1960, thanks very much for the very informative answer.
Elias1960 said:
A conceptually IMHO much better variant comes from Caticha and is named "entropic dynamics":
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A 44 , 225303, arxiv:1005.2357

Both have a problem known as the "Wallstrom objection": the Schrödinger equation is derived only for wave functions which have no zeros in the configuration space.

The Caticha variant has the added advantage that it is more accessible. :woot: Anyway, when I looked up the "Wallstrom objection", I got a lot of attempts to get around it, such as https://arxiv.org/abs/1101.5774, https://arxiv.org/abs/1905.03075, and others. Have any of them successfully served as a complement to either the classic derivation or to entropic dynamics?
 
  • #83
nomadreid said:
The Caticha variant has the added advantage that it is more accessible. :woot: Anyway, when I looked up the "Wallstrom objection", I got a lot of attempts to get around it, such as https://arxiv.org/abs/1101.5774, https://arxiv.org/abs/1905.03075, and others. Have any of them successfully served as a complement to either the classic derivation or to entropic dynamics?
It seems the first of your quoted approaches, https://arxiv.org/abs/1101.5774, would fail to save entropic dynamics. There would be no potential ##v^i(q) = \partial_i \phi(q)## at all, but entropic dynamics requires that such a function exists globally.
Instead, if one simply excludes explicitly all ##\psi(q)## with zeros somewhere, as in https://arxiv.org/abs/1905.03075, then a potential exists as a global function ##v^i(q) = \partial_i \phi(q)##, and Caticha's interpretation makes sense.
 
  • Like
Likes nomadreid
  • #84
Many thanks for that, Elias1960!
 
  • #85
Greetings all!

Stephen Tashi said:
Are there actually instances where classical probability theory "won't work"?
It depends. Ultimately, quantum probabilities can be seen as classical probabilities that are implicitly conditional. See the works of Andrei Khrennikov for nice expositions (https://arxiv.org/abs/1406.4886); this would be related to the "pre-probability" view above. In essence, every quantum probability is like ##P(E_{i}|Q)##, i.e., the chance of outcome ##E_{i}## given that variable ##Q## has been selected, whereas classical probability can have unconditional probabilities ##P(E_{i})## (a small numerical sketch follows below).

However, constantly treating quantum probability this way is underdeveloped and probably more difficult than the standard way of folding all variable selections into a single non-commutative von Neumann algebra. It would be very difficult to treat quantum stochastic processes, such as those of Belavkin, this way.

Another way of phrasing the difference is that in quantum theory we can have fundamentally incompatible but non-contradictory events.
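The promised sketch (my own toy example, not taken from Khrennikov's paper): the same qubit state yields a different classical distribution depending on which observable ##Q## is selected, so each distribution exists only conditionally on the context.

```python
import numpy as np

# Quantum probabilities as context-conditional classical probabilities:
# P(E_i | Q) = |<q_i|psi>|^2, where the choice of Q fixes the outcome basis.
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)   # some pure qubit state

def probs_given(Q, psi):
    # eigenvectors of the selected observable define the outcome events E_i
    eigvals, eigvecs = np.linalg.eigh(Q)
    return dict(zip(eigvals.round(6), np.abs(eigvecs.conj().T @ psi) ** 2))

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

print(probs_given(sigma_z, psi))   # P(. | Q = sigma_z)
print(probs_given(sigma_x, psi))   # P(. | Q = sigma_x): a different distribution
# No single unconditional assignment P(E_i) reproduces both tables at once;
# only the conditional P(E_i | Q) is defined.
```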
 
Last edited:
  • #86
vanhees71 said:
It's also obvious that with the SGE measurement of this spin component you change the state of the particle. Say, you have prepared the particle to have a certain spin-z component ##\sigma_z = +\hbar/2##, and now you measure the spin-x component. Then your particle is randomly deflected up or down (with 50% probability each)
Oh, you invoked the collapse! I had thought this was a no-no for you!
 
Last edited:
  • Like
Likes Auto-Didact
  • #87
No, I did not invoke the collapse. The time evolution of the wave function is entirely described by unitary time evolution, and the probabilities for finding the particle in one or the other partial beam after the ##\sigma_x## measurement are entirely determined by Born's rule using the time-evolved wave function. There's no need for collapse, particularly not in this simple example, where you can solve the time-dependent SGE (almost) exactly analytically.
 
  • #88
Why would one "avoid" collapse? Isn't state updating a normal part of QM?
 
  • #89
The collapse is an ad-hoc prescription which works well as such, but it has very fundamental problems in connection with relativistic QFT. It contradicts the very construction of relativistic QFTs, where the only known (and very successful) models are those where interactions are strictly local, i.e., local observable operators commute at spacelike separation of their arguments. This implies that a local measurement cannot have instant effects at far distant parts of entangled systems, while a collapse would mean an effect across space-like separated measurement events.
 
  • #90
That's not true though. State-updating can be easily generalised to QFT without any problems with special relativity. See Hamhalter, J. "Quantum Measure Theory". It's a tough book, but Gleason's theorem and Lüders rule are generalised.

State updating doesn't lead to any problems, just like it doesn't cause signalling in entanglement in non-relativistic QM.

How do you update states in QFT if not via the usual rule? I know we don't do it normally in S-matrix calculations.
 
  • Informative
Likes Auto-Didact
  • #91
What else do you need? I also don't need a collapse to understand the fascinating Bell measurements with entangled (multi-)photon states either. The correlation is simply not caused by local interactions of the photons with the measurement devices but is already there due to the preparation in the initial entangled state (though the single-photon properties like polarization states are maximally uncertain, i.e., maximum-entropy mixed states).
 
  • #92
vanhees71 said:
What else do you need?
Beyond the S-matrix? This is basically equivalent to asking why one would need to be able to handle finite-time processes in QFT. I think our fundamental theory should be able to handle finite-time processes; otherwise how could finite-time events dealt with in non-relativistic QM be considered limiting cases of QFT?

vanhees71 said:
I also don't need a collapse to understand the fascinating Bell measurements with entangled (multi-)photon states either.
Of course you don't need collapse to understand Bell correlations and of course the correlations are not caused by local interactions. The point is that you claimed state-reduction has problems with relativistic QFT. I'm saying that it has been mathematically proven that it does not.

However, take a complicated multiparticle entangled state, not just the pairs in a simple Bell experiment. Something like the more general correlations considered by Gisin, with ##L## particles, ##M## observables and ##N## outcomes, also known as ##(L,M,N)## Bell scenarios in the literature.

You have an initial entangled state, then measurements are performed on ##R## of the particles. How do you model the correlations conditioned on these observations for the remaining ##L - R## particles without the state update rule?
 
  • Like
Likes Auto-Didact
  • #93
How can it not have problems with relativistic causality, if you claim that, instantaneously upon measuring a photon's polarization at point A, the state of another photon, which is most likely to be registered at a far distant place B, collapses from the entangled state before the measurement, which is something like
$$[\hat{a}^{\dagger}(\vec{p},H) \hat{a}^{\dagger}(\vec{p}',V)-\hat{a}^{\dagger}(\vec{p}',H) \hat{a}^{\dagger}(\vec{p},V)]|\Omega \rangle,$$
to a "product state", which is something like
$$\hat{a}^{\dagger}(\vec{p},H) \hat{a}^{\dagger}(\vec{p}',V) |\Omega \rangle$$
(modulo normalization factors and integration over the momenta so as to have proper wave packets rather than generalized momentum eigenstates, of course)?
 
  • Like
Likes weirdoguy and QLogic
  • #94
vanhees71 said:
How can it not have problems with relativistic causality...rest of post
It's just the standard no-signalling results extended to QFT where it's a bit harder to prove. ##B## is not able to infer ##A##'s choice of observable from the statistics of observables local to ##B##. Thus there is no violation of causality.

That updating of the state is just standard QM where conditioned on one of ##A##'s results the global state is a product state. It doesn't mean that ##B## over multiple runs of the experiment can ever learn anything about ##A##'s results.

Of course in QFT we never have product states locally anyway, but that's a separate point.

Also how do you model the statistics of the ##L - R## particles conditioned on results on ##R## of them that I mentioned above?
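A compact numerical check of the no-signalling statement (a sketch under the usual Lüders update rule, with a two-qubit Bell state standing in for the field-theoretic case; all choices below are assumed toy inputs):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell state (|00> + |11>)/sqrt(2) as a density matrix on A x B.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell.conj())

def reduced_B(rho):
    # trace out A's factor: reshaped indices are (a, b, a', b')
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

def B_state_after_A_measures(rho, observable):
    # Luders rule, averaged over A's (unread) outcomes:
    # rho -> sum_k (P_k x I) rho (P_k x I)
    _, vecs = np.linalg.eigh(observable)
    post = np.zeros_like(rho)
    for k in range(2):
        P_k = np.outer(vecs[:, k], vecs[:, k].conj())
        PI = np.kron(P_k, I2)
        post += PI @ rho @ PI
    return reduced_B(post)

rho_B = reduced_B(rho)                              # no measurement on A
rho_B_x = B_state_after_A_measures(rho, sigma_x)    # A measures sigma_x
rho_B_z = B_state_after_A_measures(rho, sigma_z)    # A measures sigma_z

# B's local statistics are identical whatever A chooses to measure (or not):
print(np.allclose(rho_B_x, rho_B), np.allclose(rho_B_z, rho_B))   # True True
```

Conditioning on a particular outcome of A (a single term in the sum) does change the description of B, but that is just conditioning, and it carries no signal.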
 
  • Like
Likes Auto-Didact
  • #95
Stephen Tashi said:
Or are such failures the failure of the assumptions made in modeling phenomena with classical probability theory - for example, assuming events are independent when they (empirically) are not.
Stephen Tashi said:
But the success of pre-probabilities does not imply that classical probability theory won't work. It does imply that modeling certain physical phenomena is best done by thinking in terms of pre-probabilities instead of making simplifying assumptions of independence and applying classical probability theory directly.
What kind of dependence did you have in mind, the ignoring of which leads to errors?
 
  • #96
QLogic said:
It's just the standard no-signalling results extended to QFT where it's a bit harder to prove. ##B## is not able to infer ##A##'s choice of observable from the statistics of observables local to ##B##. Thus there is no violation of causality.

That updating of the state is just standard QM where conditioned on one of ##A##'s results the global state is a product state. It doesn't mean that ##B## over multiple runs of the experiment can ever learn anything about ##A##'s results.

Of course in QFT we never have product states locally anyway, but that's a separate point.

Also how do you model the statistics of the ##L - R## particles conditioned on results on ##R## of them that I mentioned above?
Yes sure, from the very foundations of local (microcausal) QFT it's clear that no problems can occur, but just putting the assumption of "state collapse" on top (and it's completely unnecessary too!) destroys this consistency of the formalism.

Look at the concrete experiments: You make a measurement protocol at each of the places and then evaluate them afterwards and postselect the different events. There's no collapse necessary to explain these outcomes but just Born's rule to calculate the probabilities for the outcomes of measurements and compare it with the result of the experiment.
 
  • Like
Likes bhobba and weirdoguy
  • #97
vanhees71 said:
Yes sure, from the very foundations of local (microcausal) QFT it's clear that no problems can occur, but just putting the assumption of "state collapse" on top (and it's completely unnecessary too!) destroys this consistency of the formalism.
It's a theorem found in Hamhalter's book that it doesn't.

vanhees71 said:
Look at the concrete experiments: You make a measurement protocol at each of the places and then evaluate them afterwards and postselect the different events. There's no collapse necessary to explain these outcomes but just Born's rule to calculate the probabilities for the outcomes of measurements and compare it with the result of the experiment.
Of course. However, what if one has observed only a subset of the particles and you wish to model the statistics of future experiments? Again, as I said, if only ##R## have been measured and you wish to model the statistics of the remaining ##L - R##, how is this done without conditioning/state reduction?

The classical analogue of what you're arguing is that in probability theory one doesn't need conditioning. I can't see how that is valid.
 
  • Like
Likes Auto-Didact
  • #98
Obviously, again, I don't understand what you are asking. Of course one needs conditioning in both classical and quantum statistics. Is this about philosophical quibbles about the meaning of probabilities in general? If so, it doesn't belong in the quantum physics forum at all (not even in the interpretation subforum).
 
  • Like
Likes weirdoguy and *now*
  • #99
vanhees71 said:
Yes sure, from the very foundations of local (microcausal) QFT it's clear that no problems can occur, but just putting the assumption of "state collapse" on top (and it's completely unnecessary too!) destroys this consistency of the formalism.

Wrong.

vanhees71 said:
Look at the concrete experiments: You make a measurement protocol at each of the places and then evaluate them afterwards and postselect the different events. There's no collapse necessary to explain these outcomes but just Born's rule to calculate the probabilities for the outcomes of measurements and compare it with the result of the experiment.

Whatever you call it, there is no unitary time evolution of the quantum state.
 
  • Like
Likes Auto-Didact
  • #100
vanhees71 said:
Obviously, again, I don't understand what you are asking. Of course one needs conditioning in both classical and quantum statistics. Is this about philosophical quibbles about the meaning of probabilities in general? If so, it doesn't belong in the quantum physics forum at all (not even in the interpretation subforum).

Rubbish. It is you that rejects standard QM.
 