Where does a quantum experiment *begin*?

In summary, the conversation discusses the concept of when a quantum experiment begins and whether it matters. It is noted that the start and the end of the experiment are the same type of thing and that the point of measurement is to make an observation. The possibility of continuous measurements is mentioned and it is suggested that classical mechanics can be used to describe the behavior of electrons in an electron gun tube. The concept of superposition is also discussed and it is emphasized that the property or observable in superposition must be specified.
  • #1
Davor Magdic
Apologies if the question has been asked, I didn't see it in my search but maybe I missed it.

I was wondering if there is a formal definition of when/where a quantum experiment begins (as opposed to where it ends, i.e. with the collapse of the wave function), and whether it matters.

For example, with a double-slit experiment, we talk about firing "one electron at a time" towards the slit, and we say the experiment ends when we measure that the electron has hit a particular area of the screen. But it would seem that not just its path but also the presence of that electron is conditional, i.e. that prior to the measurement, the electron was in a superposition of the states fired-and-flying-towards-the-slit and not-fired-yet. The electron comes from the electron gun, which is being heated. From what I understand, that means the electron in question was in a superposition of having jumped and not having jumped the barrier in the heated filament, due to the flow of electrons that makes up the electrical current in the filament. Those electrons themselves are presumably in a superposition of states of passing and not passing through the filament, and so on.

In other words, it seems like prior to the measurement, it's not just the landing position of the electron on the screen that's in a superposition of states, but that, tracking backwards, everything else that makes up the chain is fuzzy until a certain point (if such a point exists). I.e., if there is such a thing as the "rise" of the wave function, can we say that, assuming you are the experimenter, if your first observation is that your finger is pushing the "on" button, and your next observation is a quick flash on the screen, the theory doesn't tell you what happened in between? That seems somewhat counter-intuitive.

(As to whether this matters, my intuition is that the more of the chain from the end back to the start we include in the observation, the more unique and less reproducible/repeatable that sequence is, and so the less relevant it is for making predictions, but I would appreciate hearing your thoughts.)

Thanks!
 
  • #2
The start and the end are the same type of thing. Remember that the end is a start - that is the whole point of wave function collapse.
 
  • #3
Quantum mechanics is not so different from classical mechanics. A classical measurement ends with observation (which corresponds to quantum collapse), but where does the classical measurement begin? The beginning is not so well defined, but it is not really so important. The point of measurement (either quantum or classical) is to make the observation.
 
  • #4
atyy, Demystifier, thank you for the replies. If the end is a start, and the point of measurement is to make an observation (i.e. sample the state of the system), does it not follow that measurements are continuously happening, in which case when does the collapse happen?

To put it this way, imagine you are pressing the "on" button on the electron gun and you're looking at the screen. There is a very small possibility that an electron will fire right away and you see a flash; say that doesn't happen. Since that was a measurement, which detected no interaction, I suppose it means that the probability wave of the entire system at the time of turning it on was collapsed to "filament-not-yet-heated-electron-not-yet-jumped-the-barrier"?

Let's say you keep looking at the screen and for several seconds nothing happens (as the filament is heating up), and finally you see a quick flash on the screen. It seems to follow that in all the moments when there was no flash, the system (battery + electron gun + screen) was in the collapsed state of "electron not jumped barrier", and then at one moment it was in the state "electrical current in the filament is strong, filament is heated, an electron has jumped the barrier, traveled to the screen and hit a phosphor atom which emitted a photon that reached your eyes" (and so on).

To me this sounds like there was never room for the probability wave to not be collapsed? Or is this a way of looking at things that implies hidden variables?
 
  • #5
This is why I wish the school curriculum would let students play with basic equipment such as an electron gun tube.

First of all, let's get this out of the way. The old TV tubes that we all had (before plasma, LED, and LCD screens) all had electron guns. The electrons that are released were NEVER treated as quantum particles. They were treated as classical particles, and classical mechanics was perfectly valid in describing them. Why? Because the nature of how they are used, and what they are being used for, does not require a quantum mechanical picture (see, for example, the measurement of e/m in a classic Bainbridge tube setup).

What this means is that the electrons that come out of a thermionic gun, after they have been emitted, can easily be treated as classical particles. Heck, if you look at particle accelerators today, even there, they are treated as classical particles (see, for example, beam physics simulation codes such as PARMELA). Also note that I don't have to use a thermionic gun. I can easily use a photoinjector gun. The source here is irrelevant as long as I can get a monochromatic electron beam (within my experimental limits).

What will make this a quantum mechanical experiment (i.e. quantum effects will come into play) is the setup that the electron beam goes through. A double-slit-equivalent experiment requires that the electrons have two equal-probability paths they can take, meaning the superposition occurs once an electron has entered this setup, giving it a superposition of paths.

{BTW, it is one of my pet peeves when someone claims that there is a superposition without specifying the property or observable that is in superposition. You have to indicate WHAT quantity is in superposition. Is it the position, the momentum, the spin, etc.? It is insufficient to simply say "the electrons are in a superposition" and leave it at that.}

So if you really want to be picky and define the "start" of the quantum part of the experiment, then I'd say the entrance to the double paths.

Zz.
 
  • #6
In the standard interpretation, observation is subjective. Some examples:

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.250401
"During the analysis, if the recorded trace crosses a voltage level fixed at around 75% of the expected pulse height from an 810 nm photon, then it is considered to be a detection event. This level was chosen to eliminate with near certainty lower-energy blackbody photons. The time that the trace crosses a level set at around 20% of the expected pulse height is used to timestamp the detection event. We consider the detection event to be complete and the outcome fixed by this point."
 
  • #7
Davor Magdic said:
does it not follow that measurements are continuously happening, in which case when does the collapse happen?
Typically, for larger objects the collapses happen more often. For macroscopic objects, it looks practically continuous.
 
  • #8
If anything is objective, it's observation, at least that's what good experimentalists do their entire life! If an experiment is not objective, it means that nobody else can independently verify its results, and that's to be taken as a sign that it is not a valid experiment in the sense of natural sciences!

I also don't understand, why you think that the quoted text below the link says that observation is subjective. To the contrary it gives clear objective definitions of what's measured in terms of objectively observable "detection events".
 
  • #9
ZapperZ said:
What this means is that the electrons that come out of a thermionic gun, after they have been emitted, can easily be treated as classical particles. [...] So if you really want to be picky and define the "start" of the quantum part of the experiment, then I'd say the entrance to the double paths.
I agree with what you are saying, but one should also emphasize that the classical behavior of electrons (or other charged particles) in setups from good old cathode ray tubes to modern accelerators is very well in agreement with quantum theory. Why we see a "cathode ray" and not a smeared cloud when a charged particle is going through a gas or a cloud chamber was understood as early as 1929 by Mott's famous paper on the subject:

https://en.wikipedia.org/wiki/Mott_problem

Of course, I disagree with the formulation in terms of "collapse", but that's another story...
 
  • #10
ZapperZ said:
You have to indicate WHAT quantity is in superposition. Is it the position, the momentum, the spin, etc.? It is insufficient to simply say "the electrons are in a superposition" and leave it at that. So if you really want to be picky and define the "start" of the quantum part of the experiment, then I'd say the entrance to the double paths.

I assumed that the electron is in the superposition of positions -- one being the filament is heated enough that the electron has jumped the barrier, and the other that the electron is still in the filament. I.e. I assumed that the electron leaving the filament is a quantum phenomenon and there is no deterministic time when that happens -- at one point the electron is in the filament, at another it's outside, unknowable when exactly. (I don't know if that is true but sounded like it from the description of thermionic emission.)

But if we choose to take the electron jumping the barrier as the start of the quantum part of the experiment, can we not go back to the quantum process that precedes it and so on?

Perhaps changing the definition of where the experiment begins changes the definition of the probability wave we are observing (the larger the chain, the more complex the function, and less calculable I assume).
 
  • #11
Davor Magdic said:
I assumed that the electron is in the superposition of positions -- one being the filament is heated enough that the electron has jumped the barrier, and the other that the electron is still in the filament. [...] But if we choose to take the electron jumping the barrier as the start of the quantum part of the experiment, can we not go back to the quantum process that precedes it and so on?

This is extremely puzzling.

1. The thermionic process is described accurately by the Richardson-Dushman model. Can you tell me where this "superposition of position" is here?

2. Why would the electron that never left the material even matter? In other words, why would the process of creating these electrons affect your double-slit experiment? This, you never explain.

If I say that I can give you single electrons, once every 5 seconds on average, with an energy of 2 keV, and an energy spread of 10 eV FWHM, are you telling me that you cannot adequately perform the electron double-slit experiment without knowing how the electrons were created? Are you saying that even if you get electrons with these characteristics, that your experiment will change if the electrons came from thermionic emission, photoemission, field emission, pair production, atomic ionization, etc... etc? Really?

Zz.
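For what it's worth, the Richardson-Dushman law mentioned above treats emission as a deterministic function of temperature and work function. A minimal numerical sketch (Python; the tungsten-like work function and the temperatures are my own illustrative assumptions, not from the thread):

```python
import math

# Richardson-Dushman law for thermionic emission:
#   J = A * T^2 * exp(-W / (k_B * T))
# J: emitted current density (A/m^2), T: cathode temperature (K),
# W: work function (eV), A: Richardson constant (~1.2e6 A m^-2 K^-2).

K_B = 8.617e-5           # Boltzmann constant in eV/K
A_RICHARDSON = 1.2e6     # theoretical Richardson constant, A m^-2 K^-2

def current_density(T, work_function=4.5):
    """Thermionic current density in A/m^2 at temperature T (K),
    for a tungsten-like work function of ~4.5 eV (assumed value)."""
    return A_RICHARDSON * T**2 * math.exp(-work_function / (K_B * T))

for T in (2000.0, 2500.0, 3000.0):
    print(f"T = {T:.0f} K: J ~ {current_density(T):.3g} A/m^2")
```

The strong (exponential) temperature dependence is why nothing happens until the filament heats up, yet the emitted current itself is a smooth classical quantity.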
 
  • #12
ZapperZ said:
This is extremely puzzling.
1. The thermionic process is described accurately by the Richardson-Dushman model. Can you tell me where this "superposition of position" is here?

I was under the impression that if we are considering an electron escaping the metal, we only talk about a probability of that happening under certain conditions, and probability implies a superposition of states, does it not? I do not understand the model in detail, but several articles describe thermionic emission as semi-classical, i.e. needing quantum mechanics for a complete description.

2. Why would the electron that never left the material even matter? In other words, why would the process of creating these electrons affect your double-slit experiment? This, you never explain.

I had mentioned the double slit as an example of a setup, but it seems that even a no-slit experiment -- electrons firing from the gun and hitting the phosphor screen -- is also a quantum experiment, in that we cannot tell precisely what happens and when but can only talk about probabilities, and so it can be used as an illustration of what I'm trying to understand.

If I say that I can give you single electrons, once every 5 seconds on average, with an energy of 2 keV, and an energy spread of 10 eV FWHM, are you telling me that you cannot adequately perform the electron double-slit experiment without knowing how the electrons were created?

My question is essentially: for those single electrons, what happens -- how does it change the equations -- if we try to look into where they came from, and how far back can we go in doing that, rather than assuming their existence and properties as a starting condition? I do not quite know what I'm trying to achieve with it, but I was curious. Logically it follows that by moving the starting conditions further down the chain (towards the "beginning") the probability function gets more and more complex and less usable, but maybe not; naively speaking, maybe some complexities cancel each other out. Again, my question is what happens if we apply the model to "earlier" starting conditions in the chain, and whether there is a "beginning".
 
  • #13
What "equations"? What "model"?

Zz.
 
  • #14
ZapperZ said:
A double-slit-equivalent experiment requires that the electrons have two equal-probability paths they can take, meaning the superposition occurs once an electron has entered this setup, giving it a superposition of paths.

To be clear about the requirement, is it true that we can have a situation of two "equal probability paths" without having a situation that is modeled by a superposition of states? For example, there could be a device that detects an electron and uses a random number generator to decide whether to send it down one path or the other.
 
  • #15
Stephen Tashi said:
To be clear about the requirement, is it true that we can have a situation of two "equal probability paths" without having a situation that is modeled by a superposition of states? For example, there could be a device that detects an electron and uses a random number generator to decide whether to send it down one path or the other.
Again, you have to specify what you mean by "superposition of states" here. You have to specify which states you superimpose. In the case of a pure state you just specify a ray in Hilbert space, and that's it. It doesn't matter in which basis you express it. E.g., if the state is represented by an arbitrary normalized member of the ray ##|\psi \rangle##, you can express it as a superposition of any basis states you like. If you want to calculate the probability to measure a certain value ##a## of the observable ##A##, where ##a## is necessarily in the spectrum of the representing operator ##\hat{A}##, and ##|a,\beta \rangle## is a set of orthonormalized (generalized) eigenvectors of ##\hat{A}## with eigenvalue ##a##, then the probability to find ##a## when measuring ##A## is
$$P(a|\psi)=\sum_{\beta} \left | \langle a,\beta|\psi \rangle \right|^2.$$
That's it. There's no need for superposition. Here the choice of the right basis is dictated by what observable you measure.
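The sum over the degeneracy label ##\beta## in the formula above can be made concrete in a few lines of numpy. A toy sketch (the three-dimensional Hilbert space, the observable, and the state are my own illustrative choices, not from the thread):

```python
import numpy as np

# Born rule for a degenerate eigenvalue:
#   P(a|psi) = sum_beta |<a,beta|psi>|^2
# Toy 3-dimensional Hilbert space: observable A = diag(1, 1, -1),
# so the eigenvalue a = 1 has a two-dimensional eigenspace spanned
# by the first two standard basis vectors.

psi = np.array([0.6, 0.0, 0.8], dtype=complex)   # normalized state
assert np.isclose(np.vdot(psi, psi).real, 1.0)

eigvecs_a = [np.array([1, 0, 0], dtype=complex),  # |a=1, beta=1>
             np.array([0, 1, 0], dtype=complex)]  # |a=1, beta=2>

# Sum |<a,beta|psi>|^2 over the degenerate eigenvectors.
P_a = sum(abs(np.vdot(v, psi))**2 for v in eigvecs_a)
print(f"P(a=1|psi) = {P_a:.2f}")   # 0.36
```

The basis used for the sum is dictated by the measured observable, exactly as stated above; the state itself never had to be written as a superposition in that basis beforehand.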
 
  • #16
vanhees71 said:
Again, you have to specify what you mean by "superposition of states" here. You have to specify which states you superimpose.
In case of a pure state you just specify a ray in Hilbert space, and that's it.

In the case of an experiment, you have to do something practical, so there have to be some physical procedures that implement the process of specifying a ray in Hilbert space etc. It seems to me that the original post is asking when it is valid to analyze the outcome of an experiment by approximating the initial conditions as completely known.

In the case of classical physics, it is clear that if you achieve some initial state (e.g. an object initially at rest at the top of an inclined plane), then the further progress of the experiment is determined by the information in the initial state (e.g. it doesn't matter whether you put the object at the top of the inclined plane with a winch or with a forklift). I suppose a similar principle applies to a QM model of an experiment if we consider quantum states. So the question seems to boil down to asking when we can use macroscopically observable processes (e.g. I set the voltage reading to 1230 volts) to create initial quantum states that are approximately "known".
 
  • #17
Davor Magdic said:
If the end is a start, and the point of measurement is to make an observation (i.e. sample the state of the system), does it not follow that measurements are continuously happening, in which case when does the collapse happen?

Yes, if you keep looking at the system, you will cause the wave function to continually collapse.

Collapsing a wave function is one way of beginning a quantum experiment. I don't have time to describe it in detail, but attaching a time to an observation, and hence a time to the collapse, is often done in Bell tests, in which observations have to be time-stamped. Another place to look is Fig. 3 and 4 of https://arxiv.org/abs/1604.08020 where they use a "heralding event" to mark the time at which an experiment begins.
 
  • #18
vanhees71 said:
Of course, I disagree with the formulation in terms of "collapse", but that's another story...
Yes, you often emphasize this, so this must be important to you. But the formulation with collapse cannot be experimentally distinguished from the formulation without collapse, so it is a matter of interpretation. On the other hand, you also often emphasize that interpretation is not relevant for physics. So if the question of collapse is not relevant for physics, then what is it relevant for?
 
  • #19
Collapse contradicts very basic principles of (relativistic) physics, the locality of interactions and causality, and it's not observable, as you say yourself. So why should I keep this unnecessary assumption as part of any interpretation?
 
  • #20
vanhees71 said:
Collapse contradicts very basic principles of (relativistic) physics, the locality of interactions and causality, and it's not observable, as you say yourself. So why should I keep this unnecessary assumption as part of any interpretation?
You didn't answer my question. :wink:
I will answer yours as soon as you answer mine.
 
  • #21
Do you mean your question: "So if the question of collapse is not relevant for physics, then what is it relevant for?" Then I answered it. I'll try once more: First of all, the collapse assumption is relevant to physics because it violates basic principles of relativistic physics. Second, it's (thus fortunately!) irrelevant to physics, because it's never observed and never needs to be invoked to apply quantum theory to the analysis of real-world observations/experiments, while the very foundation of relativistic QFT (locality of interactions) is a very successful assumption. So in my opinion there is no collapse, and there must be no collapse, since its assumption just makes the physical (!) interpretation inconsistent with its very foundations. So it's a contradiction within the entire framework of (relativistic) QT and thus shouldn't be postulated to begin with.
 
  • #22
vanhees71 said:
First of all, the collapse assumption is relevant to physics because it violates basic principles of relativistic physics. Second, it's (thus fortunately!) irrelevant to physics, because it's never observed and never needs to be invoked to apply quantum theory to the analysis of real-world observations/experiments, while the very foundation of relativistic QFT (locality of interactions) is a very successful assumption.
So basically, you are saying that collapse is both relevant and irrelevant to physics. Don't you see that as a contradiction? Aren't you using double standards for "being relevant to physics"?
 
  • #23
No, I guess I just don't express my opinion clearly enough. Let me try again: I think the postulate of collapse contradicts the fundamental postulates of relativistic QFT (locality of interactions, (micro-)causality). At the same time, it's not needed to apply QT to real-world observations. That's why I just don't postulate it to begin with. In this sense it's irrelevant to physics. At the same time, it's not just a matter of interpretation; it's even worse, because postulating it makes the theory inconsistent in itself. So it must not be postulated (and fortunately it also doesn't need to be postulated).
 
  • #24
vanhees71 said:
Let me try again: I think the postulate of collapse contradicts the fundamental postulates of relativistic QFT (locality of interactions, (micro-)causality). At the same time, it's not needed to apply QT to real-world observations. That's why I just don't postulate it to begin with.
I still don't get it.

Let me compare it with gauge ghosts. They have a wrong connection between spin and statistics, so they contradict relativistic QFT. Fortunately they are not observable, so they do not really contradict relativistic QFT. But everything can also be computed without them, so it is not necessary to introduce them. Yet, many physicists find physics easier when they use them.

Aren't gauge ghosts similar to collapse? By what general criteria are gauge ghosts acceptable and collapse not? After all, they are both just a tool for thinking about nature (neither of them is physically "real"), and the point of theoretical physics is to give us useful tools for thinking about nature.
 
  • #25
In addition, the collapse does not need to be postulated. It can be invented as a thinking tool, compatible with minimal ensemble interpretation and standard rules for conditional probability. In this sense it can be derived from QM, or even from relativistic QFT, in a minimal ensemble interpretation.

The only interpretational aspect of collapse is whether it is real or just a tool. But as long as you merely use it as a tool, and don't talk or think about its possible reality, there is nothing controversial or interpretational about collapse. As a tool, the collapse is neither more nor less physical than gauge ghosts.
 
  • #26
Stephen Tashi said:
It seems to me that the original post is asking when it is valid to analyze the outcome of an experiment by approximating the initial conditions as completely known.

Thank you, this is what I was trying to articulate, the question of initial conditions. I was wondering if there is any difference in the prediction of the outcome (of, say, the double-slit experiment) if the quantum system we observe is modeled starting with pre-existing electrons flying towards the slits vs. starting with the process that heats up the filament and releases the electrons etc.

By prediction I mean things like how soon the interference pattern emerges, by some measure, or some other measurable outcomes. (And if I understand correctly, if we chose the initial conditions to be "electrons past either slit" such model wouldn't predict the interference pattern?)

It feels like the initial conditions are somehow the counterpart to the measurement, I'm trying to understand if it's true and relevant (and how, if it is).
 
  • #27
Davor Magdic said:
Thank you, this is what I was trying to articulate, the question of initial conditions. I was wondering if there is any difference in the prediction of the outcome (of, say, the double-slit experiment) if the quantum system we observe is modeled starting with pre-existing electrons flying towards the slits vs. starting with the process that heats up the filament and releases the electrons etc.

No.

Does that answer your question?

I've asked this previously. If I give you a beam of electrons with specific characteristics, then can you tell from the double-slit experiment how they were created?

Please note that we already know the answer for this for photons. A 5 eV photon coming from an undulator, a synchrotron, an atomic transition, etc.. etc looks the same. And we have done A LOT of interference/diffraction experiments using these various photons.

I feel like this question has been answered, but somehow the answer is not being accepted. Short of going out and doing the experiment yourself, what do you expect to happen here?

Zz.
 
  • #28
Demystifier said:
In addition, the collapse does not need to be postulated. It can be invented as a thinking tool, compatible with minimal ensemble interpretation and standard rules for conditional probability. In this sense it can be derived from QM, or even from relativistic QFT, in a minimal ensemble interpretation.
I don't understand why my opinion is so difficult to understand. My argument is very simple: It contradicts the fundamental and very successful assumptions going into relativistic QFT, leading to the Standard Model. So it must not be postulated to begin with.

Analyzing its use in the literature, it turns out that you don't need it and you can just use the minimal statistical interpretation, which doesn't have the problem with relativistic QFT. So I just don't make the postulate that there is a collapse.
 
  • #29
Demystifier said:
Aren't gauge ghosts similar to collapse? By what general criteria are gauge ghosts acceptable and collapse not? After all, they are both just a tool for thinking about nature (neither of them is physically "real"), and the point of theoretical physics is to give us useful tools for thinking about nature.
No, gauge ghosts are a calculational tool to organize the perturbative series for gauge theories. You start with four degrees of freedom of a massless vector field. Two of these degrees of freedom are unphysical, and local gauge invariance prevents them from interacting. You can make this explicit by using an appropriate gauge, but this leads to complicated Feynman rules that are not manifestly Lorentz covariant. With Lorentz-covariant gauge conditions, you don't eliminate the unphysical degrees of freedom, but their contribution to the S-matrix elements must be cancelled, and this is done by introducing the Faddeev-Popov ghosts. So it's just a mathematical trick to get the gauge-invariant S-matrix elements in a manifestly covariant gauge. It has nothing to do with interpretation whatsoever.
 
  • #30
vanhees71 said:
So it's just a mathematical trick to get the gauge-invariant S-matrix elements in a manifestly covariant gauge. It has nothing to do with interpretation whatsoever.
Yes, gauge ghosts are just a tool, just a formal trick. But my point is that so is the collapse.
 
  • #31
Demystifier said:
Yes, gauge ghosts are just a tool, just a formal trick. But my point is that so is the collapse.
But what good is a trick that only causes the theory to contradict itself?
 
  • Like
Likes Mentz114
  • #32
vanhees71 said:
But what good is a trick that only causes the theory to contradict itself?
Suppose that before measurement the system is described by the state
$$|\psi\rangle = c_1|\psi_1\rangle + c_2|\psi_2\rangle$$
where ##|\psi_1\rangle## and ##|\psi_2\rangle## are eigenstates of some observable ##A## that will be measured, with eigenvalues ##a_1## and ##a_2##, respectively. Suppose that I perform a projective measurement of ##A## and find that the result of the measurement is ##a_1##. Then it is a consequence of the minimal ensemble interpretation that, after the measurement, I can make further statistical predictions by describing the system with a new state
$$|\psi_1\rangle$$
In this sense, my acquisition of new knowledge induced a transition
$$|\psi\rangle \rightarrow |\psi_1\rangle$$
This transition is not a physical process in the measured system. (If it is a physical process at all, then it is only a process in my head, or on a piece of paper or a computer screen, where I note changes in my current knowledge.)
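This state update can be sketched numerically (a minimal illustration of my own, with arbitrary amplitudes, not part of the argument):

```python
import numpy as np

# State |psi> = c1 |psi_1> + c2 |psi_2>, written in the eigenbasis of A.
c1, c2 = 0.6, 0.8                 # illustrative amplitudes, |c1|^2 + |c2|^2 = 1
psi = np.array([c1, c2])

# Born rule: probability of obtaining the eigenvalue a1.
p1 = abs(psi[0]) ** 2

# Projector onto the a1 eigenspace.
P1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])

# Epistemic update after recording the outcome a1:
# project onto the a1 eigenspace and renormalize, giving |psi_1>.
psi_after = P1 @ psi
psi_after = psi_after / np.linalg.norm(psi_after)

print(p1)         # Born probability |c1|^2
print(psi_after)  # the post-measurement state |psi_1>
```

The projection-and-renormalization step is exactly the bookkeeping described above: nothing happens to the system itself, only to the state I use for further predictions.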

For some reason this transition is called "collapse". Yes, the name can be misleading, but so can be the name "ghost".

And I don't see any contradiction. Moreover, the trick is useful because it's simpler to make further predictions using only ##|\psi_1\rangle## instead of the full superposition ##|\psi\rangle##.

Moreover, if you insist on not using a collapse, then making predictions requires more effort, which carries a risk of making wrong predictions. For instance, Ballentine in his book incorrectly predicted that the quantum Zeno effect does not exist. In reality it does exist; it has been measured. The effect can also be explained without collapse, but then it's not so simple, which is why Ballentine made a mistake. The collapse makes it simpler, which is why it's useful.
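As a rough illustration of how the collapse-as-update rule makes the Zeno effect easy to see, here is a toy two-level model of my own (not from Ballentine's book): a state oscillates between ##|\psi_1\rangle## and ##|\psi_2\rangle##, and each projective measurement that finds ##|\psi_1\rangle## restarts the free evolution from ##|\psi_1\rangle##.

```python
import numpy as np

# Two-level system oscillating between |psi_1> and |psi_2> with Rabi
# frequency omega. After free evolution for time t, the probability of
# still finding the system in |psi_1> is cos^2(omega * t / 2).
omega = 1.0
T = np.pi        # free evolution over the full time T would flip the state

def survival(n_measurements):
    """Probability of finding the system in |psi_1> at every one of
    n equally spaced measurements over [0, T]. Each measurement
    (collapse as state update) restarts the evolution from |psi_1>."""
    dt = T / n_measurements
    return np.cos(omega * dt / 2) ** (2 * n_measurements)

for n in (1, 10, 100, 1000):
    print(n, survival(n))
# survival(1) is ~0 (a single measurement at T finds the state flipped),
# but survival(n) grows toward 1 with n: frequent measurement freezes
# the evolution, which is the quantum Zeno effect.
```

Each factor of ##\cos^2(\omega\,\Delta t/2)## is one application of the Born rule followed by the state update; chaining them is trivial with the collapse rule, while a no-collapse account has to track the full correlated system-plus-apparatus state.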
 
  • #33
So, you don't need a collapse either. What you describe is a preparation process by filtering à la von Neumann. Nowhere is there the slightest hint of a nonlocal interaction as invoked implicitly by the collapse interpretation. I don't see why your description should be more complicated than invoking the collapse. It's even simpler!
 
  • #34
vanhees71 said:
So, you don't need a collapse either. What you describe is a preparation process by filtering à la von Neumann. Nowhere is there the slightest hint of a nonlocal interaction as invoked implicitly by the collapse interpretation. I don't see why your description should be more complicated than invoking the collapse. It's even simpler!
I think you still don't get my point. There are two meanings of the word "collapse". One is a controversial interpretation, while the other is a non-controversial tool. One requires a nonlocal interaction, while the other doesn't. One contradicts the minimal ensemble interpretation, while the other can be derived from it. One talks about ontology, while the other talks about epistemology. And yet they are both called "collapse", which is the source of confusion (for instance, Ballentine failed to see the difference). I am invoking the latter type of collapse and rejecting the former.

Another important point: by invoking only the non-controversial epistemic collapse, or even by not invoking any kind of collapse at all, one does not remove the need for non-locality. The Bell theorem proves that some kind of non-locality is unavoidable under much wider conditions.
 
  • #35
There are no non-local interactions, only long-range correlations! The former are incompatible with local relativistic QFTs, the latter are compatible. So you should not simply say "non-locality" but clearly state what you mean (interactions vs. correlations). Same with collapse: if it's an ill-defined notion, don't use it!
 
