Is wave function a real physical thing?

In summary, the conversation discusses whether the wave function is a real physical object or simply a mathematical tool. The article referenced in the conversation presents a new theorem that provides evidence for the former, but further research and experimentation are needed to fully understand its implications. The conversation ultimately concludes that the question is of little consequence until the terms are clearly defined and an experiment can be devised to settle the debate.
  • #36
zoki85 said:
No function is a real physical object/thing

Neither is any mathematical object - it can, however, model one.

The EM field is considered real because it carries energy and momentum - without that, the laws of conservation of energy and momentum would not hold.

The same, however, does not automatically apply to a quantum state (the wave function is simply its expansion in terms of position eigenstates) - it may or may not be real.

Thanks
Bill
 
  • #37
zoki85 said:
No function is a real physical object/thing
I agree here with Nugatory https://www.physicsforums.com/threads/is-wave-function-a-real-physical-thing.788665/#post-4952978.
It depends on the definition of "real" that you postulate.

For example Platonism about mathematics (or mathematical platonism) is the metaphysical view that there are abstract mathematical objects whose existence is independent of us and our language, thought, and practices.

Why not?

This is neither more nor less absurd than asserting the reality of the wave function.

Patrick
 
  • #38
bhobba said:
Neither is any mathematical object - it can, however, model one.

The EM field is considered real because it carries energy and momentum - without that, the laws of conservation of energy and momentum would not hold.
Actually, the EM field is considered real because we can measure/detect it.
 
  • #41
That's a very good question! Since the physics content of the wave function is probabilistic, it can only be "measured" by preparing a lot of independent systems always in the same way (state preparation) and measuring the same observable on each (measurement). Then you do the usual statistical analysis to test the hypothesis that the probabilities are correctly described by the squared modulus of the wave function.

No matter what QBists mumble about the meaning of probabilities for a single event, in the physics lab there's no other way than the frequentist interpretation of probabilities, which, by the way, is based on the central-limit theorem, a mathematical fact within the usual axiomatic foundation of probability theory (e.g., the Kolmogorov axioms).
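To make the ensemble idea concrete, here is a minimal sketch in Python/NumPy, assuming a hypothetical qubit state: outcomes are sampled from the Born-rule probabilities for many identically prepared systems, and the observed frequencies are compared with the squared modulus of the amplitude.

```python
import numpy as np

# Assumed example state |psi> = cos(theta/2)|0> + sin(theta/2)|1>
theta = 1.2
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])

born_p0 = abs(psi[0]) ** 2        # Born-rule probability for outcome 0
rng = np.random.default_rng(0)

for n_trials in (10, 1_000, 100_000):
    outcomes = rng.random(n_trials) < born_p0   # simulate individual detections
    freq0 = outcomes.mean()                      # relative frequency of outcome 0
    print(f"N={n_trials:>7}: frequency={freq0:.4f}, Born probability={born_p0:.4f}")
```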
 
  • Like
Likes DrChinese and bhobba
  • #42
zoki85 said:
Actually, the EM field is considered real because we can measure/detect it.

Not quite.

The electric field is defined as the force exerted on a test charge. That doesn't mean there is anything real at that point - it just means there is some other charge around exerting a force on it. The reason it's considered real is that, because of relativity, it takes some time for that force to be felt; hence, for our conservation laws of momentum and energy to hold, there must be somewhere the momentum and energy are stored - and that is the field. This is part of some famous no-go theorems developed by Wigner and explained in Ohanian's book on Relativity:
https://www.amazon.com/dp/B00ADP76ZO/?tag=pfamazon01-20

A quantum state can also be measured - but, as vanhees71 correctly explains, only via a large number of similarly prepared systems, just like the probabilities of a biased coin. However, it is not the same as an EM field, and similar no-go theorems do not exist. Personally, this is part of the reason I do not consider it real - if it were real you could measure it directly rather than having to prepare a large number of systems - and it is also one reason I find the minimal statistical interpretation a more direct view.

Measurement is not the requirement for being real in a physical sense.

Thanks
Bill
 
Last edited:
  • Like
Likes vanhees71
  • #43
vanhees71 said:
which, by the way, is based on the central-limit theorem, a mathematical fact within the usual axiomatic foundation of probability theory (e.g., the Kolmogorov axioms).

That's true - but I suspect you really meant the strong law of large numbers.

One of my favourite mathematicians, Terry Tao, wrote a nice article on it:
http://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/

Just as an aside he also wrote some nice stuff on distribution theory:
http://terrytao.wordpress.com/2009/04/19/245c-notes-3-distributions/
http://www.math.ucla.edu/~tao/preprints/distribution.pdf

Thanks
Bill
 
  • Like
Likes vanhees71
  • #44
I meant the theorem that the frequencies in a probability experiment converge to the probabilities predicted by QT in the limit of very many experiments. That's of course the strong law of large numbers. The central-limit theorem explains why Gaussian distributions occur so often in probability problems.
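As a small illustration of the distinction, here is a sketch assuming a biased coin with a hypothetical bias p = 0.3: the running mean of the outcomes converges to p (strong law of large numbers), while the fluctuations of the sample mean across many independent runs have a spread close to the Gaussian value sqrt(p(1-p))/sqrt(n) (central-limit theorem).

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3                      # assumed probability of "heads" for a biased coin
n = 100_000

# Strong law of large numbers: the sample mean converges to p.
flips = rng.random(n) < p
running_mean = np.cumsum(flips) / np.arange(1, n + 1)
print("running mean after 10, 1000, 100000 flips:", running_mean[[9, 999, 99_999]])

# Central-limit theorem: sqrt(n)*(mean - p) is approximately Gaussian
# with variance p*(1-p) when many independent runs are compared.
means = (rng.random((5_000, 1_000)) < p).mean(axis=1)
z = np.sqrt(1_000) * (means - p)
print("empirical std:", z.std(), " vs  sqrt(p(1-p)):", np.sqrt(p * (1 - p)))
```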
 
  • #45
bhobba said:
Not quite.

The electric field is defined as the force exerted on a test charge. That doesn't mean there is anything real at that point - it just means there is some other charge around exerting a force on it.
Meaning nothing is real and the whole world is an illusion. Oh, I can't sleep now :D
 
  • #46
vanhees71 said:
That's a very good question! Since the physics content of the wave function is probabilistic, it can only be "measured" by preparing a lot of independent systems always in the same way (state preparation) and measuring the same observable on each (measurement). Then you do the usual statistical analysis to test the hypothesis that the probabilities are correctly described by the squared modulus of the wave function.

No matter what QBists mumble about the meaning of probabilities for a single event, in the physics lab there's no other way than the frequentist interpretation of probabilities, which, by the way, is based on the central-limit theorem, a mathematical fact within the usual axiomatic foundation of probability theory (e.g., the Kolmogorov axioms).
Exactly. So if the wave function is just a tool to calculate probabilities - i.e., it is not something measurable, certainly not when used in this manner - it makes no sense to even ask about the "ontic" nature of such a tool. This would lead to similar considerations about QM states in general.
To me, a salient feature of QM is that its explanatory power for things from tunneling and atomic spectra to chemistry, superconductors, and lasers is not directly derived from the probabilistic predictions of wave functions (these applications rely basically on intrinsic quantum properties); rather, such predictions are usually more like self-checks for the theory, or are used for calculating cross sections in particle physics.
 
  • #47
TrickyDicky said:
i.e., it is not something measurable

It is measurable - only from a large ensemble of similarly prepared systems, but it is measurable.

Thanks
Bill
 
  • Like
Likes vanhees71
  • #48
bhobba said:
It is measurable - only from a large ensemble of similarly prepared systems, but it is measurable.

Thanks
Bill
I suspect you are probably referring here to the mathematical concept of probability measure while I'm referring to measurements in physics.
 
  • #49
TrickyDicky said:
I suspect you are probably referring here to the mathematical concept of probability measure while I'm referring to measurements in physics.

No - it's from the properties of a state.

That said, it is indeed different from things like an electric field, etc.

Thanks
Bill
 
  • #50
vanhees71 said:
No matter what QBists mumble about the meaning of probabilities for a single event, in the physics lab there's no other way than the frequentist interpretation of probabilities, which, by the way, is based on the central-limit theorem, a mathematical fact within the usual axiomatic foundation of probability theory (e.g., the Kolmogorov axioms).

I don't know why you would say that in the lab there is no other way than the frequentist interpretation. Bayesian probability works perfectly fine in the lab. The differences between Bayesian and frequentist approaches really only come into play at the margins, when you're trying to figure out whether your statistics are good enough to draw a conclusion. The frequentists use some cutoff for significance, which is ultimately arbitrary. The Bayesian approach is smoother - the more information you have, the stronger a conclusion you can draw, and you can use whatever data you have.

Since frequentist probability only applies in the limit of infinitely many trials, there isn't a hard and fast distinction between a single event and 1000 events. Neither one implies anything about the probability, strictly speaking.
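For concreteness, here is a small sketch of the Bayesian bookkeeping described above, with hypothetical numbers: each new batch of trials updates a Beta posterior over the unknown probability, while the frequentist point estimate is simply the observed frequency; no arbitrary significance cutoff is needed.

```python
import numpy as np

rng = np.random.default_rng(2)
true_p = 0.7                       # unknown "true" probability (assumed for the demo)

alpha, beta = 1.0, 1.0             # flat Beta(1,1) prior over p
heads = tails = 0

for batch in range(4):
    data = rng.random(50) < true_p                       # 50 more trials of data
    heads += int(data.sum())
    tails += int((~data).sum())
    alpha_post, beta_post = alpha + heads, beta + tails
    bayes_mean = alpha_post / (alpha_post + beta_post)   # posterior mean
    freq_est = heads / (heads + tails)                   # frequentist estimate
    print(f"after {(batch + 1) * 50} trials: "
          f"posterior mean={bayes_mean:.3f}, frequency={freq_est:.3f}")
```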
 
  • #51
I don't understand how one reconciles a "real" wave function with its instantaneous collapse. To me this is an obvious violation of causality.
 
  • #52
JPBenowitz said:
I don't understand how one reconciles a "real" wave function with its instantaneous collapse. To me this is an obvious violation of causality.

A simple example is to take the quantum formalism as it is and add that the wave function in a particular frame (the aether) is real. The wave function in other frames will not be real, but predictions made using the quantum formalism in any frame will be the same as predictions made in the aether frame, so one cannot tell which frame is the aether frame. This can be seen in http://arxiv.org/abs/1007.3977.

This is not consistent with classical relativistic causality, but it doesn't matter since relativity does not require classical relativistic causality, only that the probabilities of events are frame-invariant and that classical information should not be transmitted faster than light. The main problem with this way of making the wave function real is not relativity, but that it leaves the measurement problem open.
 
Last edited:
  • #53
stevendaryl said:
I don't know why you would say that in the lab there is no other way than the frequentist interpretation. Bayesian probability works perfectly fine in the lab. The differences between Bayesian and frequentist approaches really only come into play at the margins, when you're trying to figure out whether your statistics are good enough to draw a conclusion. The frequentists use some cutoff for significance, which is ultimately arbitrary. The Bayesian approach is smoother - the more information you have, the stronger a conclusion you can draw, and you can use whatever data you have.

Since frequentist probability only applies in the limit of infinitely many trials, there isn't a hard and fast distinction between a single event and 1000 events. Neither one implies anything about the probability, strictly speaking.

My point is that no matter how you metaphysically interpret the meaning of probabilities, in the lab you have to "get statistics" by preparing ensembles of the system under consideration. The QBists always mumble something about there being some meaning of probabilities for a single event, but in my opinion that doesn't help at all in making sense of the probabilistic content of the quantum mechanical state.

What I dislike most about QBism is the idea that the quantum state is subjective. That's not how it is understood by the practitioners using QT as a description of nature: a state is defined by preparation procedures. A preparation can be more or less accurate, and usually you don't prepare pure states but mixed ones, but nevertheless there's nothing subjective in the meaning of states.

It's also often claimed that the outcome of measurements is observer dependent, or that QT brings the observer back into the game. That is, however, a pretty trivial statement: of course the experimentalist decides which observable(s) to measure and with which accuracy he likes (or can afford with the means/money at hand), and of course the outcome of the measurement depends on what I decide to measure. I get a different result when measuring momentum than when I detect the location of a particle.

The main point that distinguishes QT from classical physics is not so much that I cannot measure observables without disturbing the system; it's the prediction of the QT formalism that a certain preparation procedure (e.g., making particles with a well-defined momentum) necessarily excludes the sharp definition of other, incompatible observables (e.g., the position of the particle) - but it is also an objective decision of the experimentalist what he prepares. Whether the prediction about the incompatibility of observables is correct (and even correctly quantified in terms of the probabilities inherent in the pure or mixed quantum state determined by the chosen preparation procedure) can be checked only by preparing a lot of such experiments and getting the statistics to test this hypothesis, as with any other probabilistic statement. A single event doesn't tell you anything about the correctness or incorrectness of the probabilistic predictions!
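A minimal numerical illustration of this incompatibility (a sketch with an assumed spin-1/2 preparation): in the ##\sigma_z## eigenstate the variance of ##\sigma_z## vanishes, while ##\sigma_x## is maximally undetermined, and the nonzero commutator shows why no preparation can make both sharp.

```python
import numpy as np

# Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

psi = np.array([1, 0], dtype=complex)   # preparation: eigenstate of sigma_z, eigenvalue +1

def mean_and_var(op, state):
    m = np.vdot(state, op @ state).real
    m2 = np.vdot(state, op @ op @ state).real
    return m, m2 - m ** 2

print("sigma_z (mean, var):", mean_and_var(sz, psi))   # (1.0, 0.0) -> sharply determined
print("sigma_x (mean, var):", mean_and_var(sx, psi))   # (0.0, 1.0) -> maximally undetermined
print("commutator is nonzero:\n", sz @ sx - sx @ sz)   # = 2i*sigma_y, so both cannot be sharp
```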
 
  • #54
atyy said:
A simple example is to take the quantum formalism as it is and add that the wave function in a particular frame (the aether) is real. The wave function in other frames will not be real, but predictions made using the quantum formalism in any frame will be the same as predictions made in the aether frame, so one cannot tell which frame is the aether frame. This can be seen in http://arxiv.org/abs/1007.3977.

This is not consistent with classical relativistic causality, but it doesn't matter since relativity does not require classical relativistic causality, only that the probabilities of events are frame-invariant and that classical information should not be transmitted faster than light. The main problem with this way of making the wave function real is not relativity, but that it leaves the measurement problem open.

This is a contradiction in itself: if you assume a relativistic QFT to describe nature, then by construction all measurable (physical) predictions are Poincare covariant, i.e., there's no way to distinguish one inertial frame from another by doing experiments within quantum theory. As Gaasbeek already writes in the abstract, the delayed-choice experiments can be described by standard quantum optics. Quantum optics is just an effective theory describing the behavior of the quantized electromagnetic field in interaction with macroscopic optical apparatus in accordance with QED, the paradigmatic example of a relativistic QFT, and as such it is Poincare covariant in its predictions about observable outcomes; indeed, quantum optics is among the most precisely understood fields of relativistic quantum theory, and all its predictions are confirmed by high-accuracy experiments. So quantum theory cannot reintroduce an "aether", or however you like to call a "preferred reference frame", into physics! By construction, QED and thus also quantum optics fulfills relativistic causality constraints too!
 
  • #55
vanhees71 said:
What I dislike most about QBism is the idea that the quantum state is subjective. That's not how it is understood by the practitioners using QT as a description of nature: a state is defined by preparation procedures. A preparation can be more or less accurate, and usually you don't prepare pure states but mixed ones, but nevertheless there's nothing subjective in the meaning of states.

I don't think QBism makes sense, but many aspects of it seem very standard and nice to me. For example, how can we understand wave function collapse? An analogy in classical probability is that it is like throwing a die, where before the throw the outcome is uncertain, but after the throw the probability collapses to a definite result. Classically, this is very coherently described by the subjective Bayesian interpretation of probability, from which the frequentist algorithms can be derived. It is fine to argue that the state preparation in QM is objective. However, the quantum formalism links measurement and preparation via collapse. If collapse is subjective by the die analogy, then because collapse is a preparation procedure, the preparation procedure is also at least partly subjective.
 
Last edited:
  • #56
vanhees71 said:
This is a contradiction in itself: if you assume a relativistic QFT to describe nature, then by construction all measurable (physical) predictions are Poincare covariant, i.e., there's no way to distinguish one inertial frame from another by doing experiments within quantum theory. As Gaasbeek already writes in the abstract, the delayed-choice experiments can be described by standard quantum optics. Quantum optics is just an effective theory describing the behavior of the quantized electromagnetic field in interaction with macroscopic optical apparatus in accordance with QED, the paradigmatic example of a relativistic QFT, and as such it is Poincare covariant in its predictions about observable outcomes; indeed, quantum optics is among the most precisely understood fields of relativistic quantum theory, and all its predictions are confirmed by high-accuracy experiments. So quantum theory cannot reintroduce an "aether", or however you like to call a "preferred reference frame", into physics! By construction, QED and thus also quantum optics fulfills relativistic causality constraints too!

No, there is no contradiction; it just seems very superfluous to modern sensibilities, where we are used to having done away with the aether since we cannot figure out which frame is the aether frame. But Lorentz Aether Theory and its "invisible aether" make the same predictions as the standard "no aether" formulation of special relativity, and in fact one can derive the standard "no aether" formulation of special relativity from Lorentz Aether Theory, so there cannot be a contradiction, unless special relativity itself is inconsistent.
 
  • #57
vanhees71 said:
What I dislike most about QBism is the idea that the quantum state is subjective.

Well, it seems to me that certain aspects of it are subjective. For example, in an EPR-type experiment, Alice measures the spin of one of a pair of particles. Afterwards, she would describe the state of the two-particle system using a "collapsed" wavefunction. Bob has not yet measured the spin of his particle (and hasn't heard of Alice's result) and so would continue to use the initial entangled wave function to describe the pair. If they are (correctly) using different wavefunctions to describe the same situation, then that seems subjective to me. Or "relative to the subject".

When density matrices are used instead of wave functions, it seems even more subjective, since a density matrix can be interpreted as mixing two different kinds of probability - classical ignorance of the true state and quantum nondeterminism. The first type of probability seems subjective.
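A small sketch of the two descriptions, assuming a singlet state: before Bob learns anything, his reduced density matrix is the maximally mixed state I/2; after Alice reports "up", the collapsed description assigns his particle the pure "down" state. The code is only an illustration of that bookkeeping.

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Singlet state (|ud> - |du>)/sqrt(2); first factor is Alice, second is Bob.
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho = np.outer(singlet, singlet.conj())

# Bob's state if he knows nothing about Alice's result: partial trace over Alice.
rho_bob = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)
print("Bob, no information:\n", rho_bob.round(3))            # I/2, maximally mixed

# Bob's state conditioned on Alice finding "up": project Alice on |up> and renormalize.
P_up_A = np.kron(np.outer(up, up.conj()), np.eye(2))
rho_cond = P_up_A @ rho @ P_up_A
rho_cond /= np.trace(rho_cond)
rho_bob_cond = np.trace(rho_cond.reshape(2, 2, 2, 2), axis1=0, axis2=2)
print("Bob, given Alice saw up:\n", rho_bob_cond.round(3))   # |down><down|
```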
 
  • #58
vanhees71 said:
My point is that no matter how you metaphysically interpret the meaning of probabilities, in the lab you have to "get statistics" by preparing ensembles of the system under consideration. The QBists always mumble something about there being some meaning of probabilities for a single event, but in my opinion that doesn't help at all in making sense of the probabilistic content of the quantum mechanical state.

I don't think QBism adds much (if anything) to the understanding of quantum mechanics, but I was simply discussing Bayesianism (not necessarily quantum). Bayesian probability isn't contrary to getting statistics - there is such a thing as "Bayesian statistics", after all. As I said, the difference is only in how you interpret the resulting statistics.
 
  • #59
atyy said:
No, there is no contradiction; it just seems very superfluous to modern sensibilities, where we are used to having done away with the aether since we cannot figure out which frame is the aether frame. But Lorentz Aether Theory and its "invisible aether" make the same predictions as the standard "no aether" formulation of special relativity, and in fact one can derive the standard "no aether" formulation of special relativity from Lorentz Aether Theory, so there cannot be a contradiction, unless special relativity itself is inconsistent.

I know several physicists who are perfectly competent (as opposed to crackpots) who favor Lorentz Aether Theory over Einstein's relativity, specifically because they think it would allow collapse of the wave function to be a real event (which it can't be, in a completely Lorentz-invariant way).
 
  • #60
stevendaryl said:
I don't think QBism adds much (if anything) to the understanding of quantum mechanics, but I was simply discussing Bayesianism (not necessarily quantum). Bayesian probability isn't contrary to getting statistics - there is such a thing as "Bayesian statistics", after all. As I said, the difference is only in how you interpret the resulting statistics.
OK, then what, in your view, is the difference between Bayesian and frequentist interpretations of probabilities, particularly regarding the statement that probabilities make sense for a single event?

E.g., suppose the weather forecast says there's a 99% probability of snow tomorrow, and tomorrow it doesn't snow. Does that tell you anything about the validity of the probability given by the forecast? I don't think so. It's just a probability based on experience (i.e., the collection of many weather data over a long period) and on weather models based on very fancy hydrodynamics run on big computers. The probabilistic statement can only be checked by evaluating a lot of data from weather observations.

Of course, there's Bayes's theorem on conditional probabilities, which has nothing to do with interpretations or statistics but is a theorem that can be proven within the standard axiom system of Kolmogorov:
$$P(A|B) P(B) = P(B|A) P(A),$$
which is of course not a matter of any debate.
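A quick numerical check of the identity with toy numbers (a fair six-sided die, assumed purely for illustration): both sides equal ##P(A \cap B)##, as the Kolmogorov axioms guarantee.

```python
from fractions import Fraction as F

# Toy sample space: a fair six-sided die (assumed example).
omega = set(range(1, 7))
A = {2, 4, 6}          # "even"
B = {4, 5, 6}          # "greater than 3"

def P(event):
    return F(len(event), len(omega))

P_A, P_B = P(A), P(B)
P_A_and_B = P(A & B)
P_A_given_B = P_A_and_B / P_B
P_B_given_A = P_A_and_B / P_A

# Bayes's theorem: P(A|B) P(B) = P(B|A) P(A), both equal to P(A and B).
print(P_A_given_B * P_B, "==", P_B_given_A * P_A)   # 1/3 == 1/3
```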

I'm really unable to understand why there is such hype about QBism, which I consider a rather poor reinvention of old statements about subjectivism in quantum theory - subjectivism that, in my view, is unfounded in the quantum theoretical formalism and in the way quantum theory is used by theorists and experimentalists in physics labs around the world ;-).
 
Last edited:
  • Like
Likes DrChinese
  • #61
atyy said:
I don't think QBism makes sense, but many aspects of it seem very standard and nice to me. For example, how can we understand wave function collapse? An analogy in classical probability is that it is like throwing a die, where before the throw the outcome is uncertain, but after the throw the probability collapses to a definite result. Classically, this is very coherently described by the subjective Bayesian interpretation of probability, from which the frequentist algorithms can be derived. It is fine to argue that the state preparation in QM is objective. However, the quantum formalism links measurement and preparation via collapse. If collapse is subjective by the die analogy, then because collapse is a preparation procedure, the preparation procedure is also at least partly subjective.
Hm, I don't think that collapse is needed in probabilistic theories. What's the point of it? I throw the die, ignoring the details of the initial conditions, and get some (pseudo-)random result which I read off. Why should there be another physical process called "collapse"? The probabilities for the outcomes are simply the description of my expectation of how often a certain outcome of a random experiment will occur when I perform it under the given conditions. The standard assumption ##P(j)=1/6## is due to the maximum-entropy principle: if I don't know anything about the die, I just take the probability distribution of maximum entropy (i.e., of least prejudice) in the sense of the Shannon entropy. This hypothesis I can test with statistical means in an objective way by throwing the die very often. Then I get some new probability distribution according to the maximum-entropy principle, due to the gained statistical knowledge, which may be more realistic because it turns out that it's not a fair die. Has anything in the physical world then "collapsed", because I change my probabilities (expectations about the frequency of outcomes of a random experiment) according to more (statistical) information about the die? I think not, because I don't know what that physical process called "collapse" should be. Also, my die remains unchanged, etc.

Also, for me there is no difference between the quantum mechanical probabilities and the above example of probabilities applied in a situation where the underlying dynamics is assumed to be deterministic in the sense of Newtonian mechanics. The only difference is that in the quantum case the probabilistic nature of our knowledge is not just due to the ignorance of the observer (in the die example, ignorance of the precise initial conditions of the die as a rigid body, knowledge of which would enable us in principle to predict the outcome of the individual toss with certainty, because it's a deterministic process); rather, it is in principle not possible to have determined values for all observables of the quantum object. In quantum theory only those observables have a determined value (or a value with very high probability) which have been prepared, and then necessarily other observables that are incompatible with those prepared to be (pretty) determined are (pretty) undetermined. Then I do a measurement of such an undetermined observable on an individual system so prepared and get some accurate value. Why should there be any collapse, only because I found a value? For sure there's an interaction of the object with the measurement apparatus, but that's not a "collapse of the state", just an interaction. So also in the quantum case there's no necessity at all to have a strange happening called "collapse of the quantum state".
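To make that concrete, a sketch with assumed numbers: start from the maximum-entropy assignment ##P(j)=1/6##, throw a (secretly loaded) die many times, and compare the fair-die hypothesis with the observed frequencies via a Pearson chi-squared statistic. The probability assignment gets updated, but nothing about the die itself "collapses".

```python
import numpy as np

rng = np.random.default_rng(3)

# Maximum-entropy prior belief about the die: no information, so P(j) = 1/6.
p_maxent = np.full(6, 1 / 6)

# The die is actually loaded (assumed for the demo) - face 6 comes up twice as often.
p_true = np.array([1, 1, 1, 1, 1, 2], dtype=float)
p_true /= p_true.sum()

n = 60_000
throws = rng.choice(6, size=n, p=p_true)
counts = np.bincount(throws, minlength=6)
freqs = counts / n

# Pearson chi-squared statistic against the fair-die hypothesis;
# for a truly fair die it would fluctuate around 5 (the number of degrees of freedom).
chi2 = ((counts - n * p_maxent) ** 2 / (n * p_maxent)).sum()

print("observed frequencies:", freqs.round(3))
print("chi-squared vs fair die:", round(chi2, 1))
```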
 
  • #62
vanhees71 said:
Hm, I don't think that collapse is needed in probabilistic theories. What's the point of it? I throw the die, ignoring the details of the initial conditions, and get some (pseudo-)random result which I read off. Why should there be another physical process called "collapse"? The probabilities for the outcomes are simply the description of my expectation of how often a certain outcome of a random experiment will occur when I perform it under the given conditions. The standard assumption ##P(j)=1/6## is due to the maximum-entropy principle: if I don't know anything about the die, I just take the probability distribution of maximum entropy (i.e., of least prejudice) in the sense of the Shannon entropy. This hypothesis I can test with statistical means in an objective way by throwing the die very often. Then I get some new probability distribution according to the maximum-entropy principle, due to the gained statistical knowledge, which may be more realistic because it turns out that it's not a fair die. Has anything in the physical world then "collapsed", because I change my probabilities (expectations about the frequency of outcomes of a random experiment) according to more (statistical) information about the die? I think not, because I don't know what that physical process called "collapse" should be. Also, my die remains unchanged, etc.

Also, for me there is no difference between the quantum mechanical probabilities and the above example of probabilities applied in a situation where the underlying dynamics is assumed to be deterministic in the sense of Newtonian mechanics. The only difference is that in the quantum case the probabilistic nature of our knowledge is not just due to the ignorance of the observer (in the die example, ignorance of the precise initial conditions of the die as a rigid body, knowledge of which would enable us in principle to predict the outcome of the individual toss with certainty, because it's a deterministic process); rather, it is in principle not possible to have determined values for all observables of the quantum object. In quantum theory only those observables have a determined value (or a value with very high probability) which have been prepared, and then necessarily other observables that are incompatible with those prepared to be (pretty) determined are (pretty) undetermined. Then I do a measurement of such an undetermined observable on an individual system so prepared and get some accurate value. Why should there be any collapse, only because I found a value? For sure there's an interaction of the object with the measurement apparatus, but that's not a "collapse of the state", just an interaction. So also in the quantum case there's no necessity at all to have a strange happening called "collapse of the quantum state".

Collapse is an essential part of the quantum formalism. The only debate should be whether it is a physical process or not. What you seem to be saying is that collapse is epistemic.
 
  • #63
I guess it's what's called epistemic. But again, why do you consider the "collapse" essential? Where do you need it?
 
  • #64
vanhees71 said:
I guess it's what's called epistemic. But again, why do you consider the "collapse" essential? Where do you need it?

Hmmm, are we still disagreeing on this? Collapse is in Landau and Lifshitz, Cohen-Tannoudji, Diu and Laloe, Sakurai, and Weinberg (and every other major text except Ballentine, who I'm sure is wrong), so it really is quantum mechanics. To see that it is essential, take an EPR experiment in which Alice and Bob measure simultaneously. What is simultaneous in one frame will be sequential in another frame. As long as one has sequential measurements in which sub-ensembles are selected based on the measurement outcome, one needs collapse or an equivalent postulate.
 
  • #65
atyy said:
Hmmm, are we still disagreeing on this?

As he should.

It's a logical consequence of the assumption of continuity - but has anyone ever measured a state again an infinitesimal moment later to check whether the assumption holds?

It's not really needed - one simply assumes the filtering-type measurement it applies to is another state preparation. Of course a system's state changes if you prepare it differently - instantaneously, well, that's another matter - observations don't happen instantaneously.

Thanks
Bill
 
  • #66
bhobba said:
It's a logical consequence of the assumption of continuity - but has anyone ever measured a state again an infinitesimal moment later to check whether the assumption holds?

Yes! The measurement has been performed in the Bell tests. If there is a frame in which the measurements are simultaneous, then there will be another frame in which Bob measures an infinitesimal moment after Alice. So far, all predictions are consistent with quantum mechanics (including collapse) and relativity.

bhobba said:
It's not really needed - one simply assumes the filtering-type measurement it applies to is another state preparation. Of course a system's state changes if you prepare it differently - instantaneously, well, that's another matter.

It is needed, because in a filtering experiment, the preparation procedure involves choosing the sub-ensemble based on the outcome of the immediately preceding measurement. So preparation and measurement are linked.
 
  • #67
atyy said:
the preparation procedure involves choosing the sub-ensemble based on the outcome of the immediately preceding measurement. So preparation and measurement are linked.

Exactly how long does that prior measurement take to prepare the system differently? And what's the consequence for instantaneous collapse? Think of the double slit. The electron, say, interacts with the screen and decoheres pretty quickly - but not instantaneously. We do not know how the resultant improper state becomes a proper one - but I doubt that, however it's done, it is instantaneous - although of course one never knows. Either way, until we know for sure, saying it's instantaneous isn't warranted.

Thanks
Bill
 
  • #68
bhobba said:
Exactly how long does that prior measurement take to prepare the system differently? And what's the consequence for instantaneous collapse? Think of the double slit. The electron, say, interacts with the screen and decoheres pretty quickly - but not instantaneously. We do not know how the resultant improper state becomes a proper one - but I doubt that, however it's done, it is instantaneous - although of course one never knows.

You can take the instantaneous part as just a convenient model that has not been falsified yet. What is clear is that unitary evolution alone is insufficient, and there has to be some other postulate for sequential measurements, which in standard quantum mechanics is the non-unitary evolution of wave function collapse (suitably generalized).
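For reference, a minimal sketch of the non-unitary update being discussed - the Lüders projection rule ##\rho \to P\rho P/\mathrm{Tr}(P\rho P)## - applied to an assumed example state after a ##\sigma_z## measurement with outcome +1, contrasted with an ordinary unitary step.

```python
import numpy as np

# State: equal superposition (|0> + |1>)/sqrt(2) as a density matrix.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Unitary evolution (here an arbitrary rotation) is deterministic and invertible.
phi = 0.4
U = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]], dtype=complex)
rho_unitary = U @ rho @ U.conj().T

# Collapse / Lüders rule for outcome "+1" of sigma_z: project and renormalize.
P_plus = np.array([[1, 0], [0, 0]], dtype=complex)
prob_plus = np.trace(P_plus @ rho).real            # Born probability = 1/2
rho_collapsed = P_plus @ rho @ P_plus / prob_plus  # post-measurement state |0><0|

print("P(+1) =", round(prob_plus, 3))
print("after collapse:\n", rho_collapsed.round(3))   # not U rho U† for any unitary U
```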
 
  • #69
atyy said:
non-unitary evolution of wave function collapse (suitably generalized).

Sure - but in modern times I think the problem of outcomes is a better way of stating the issue than collapse, which has connotations I don't think the formalism implies.

Thanks
Bill
 
  • #70
bhobba said:
Sure - but in modern times I think the problem of outcomes is a better way of stating the issue than collapse, which has connotations I don't think the formalism implies.

That's fine if it is just a matter of terminology. I prefer the old-fashioned terminology, since I use Copenhagen as a default interpretation, where the wave function is not necessarily real, and consequently the wave function evolution, including collapse, is also not necessarily real. So "collapse" is just the updating of the wave function after a measurement, without committing to whether it is ontic or epistemic.

OK, but in fact collapse is one of the reasons it seems reasonable to try to think of the wave function as epistemic. Indeed, if I understand vanhees71 correctly, he would like to think of collapse as epistemic. All I'm pointing out is that he argued earlier that the wave function is ontic, and it isn't obviously consistent to say that the wave function is ontic but collapse is epistemic: since collapse is a state preparation procedure, if collapse is epistemic, then the wave function prepared by collapse is presumably at least partly epistemic.
 
