B Understanding Measurements / Collapse

1977ub
I'm trying to understand the impact of past measurements, and when measurements occur.

As I understand it, in the simplest case, you've got a particle emitter in the center of a circle, and a measuring plate around the circle. Here in the ideal case the particle is emitted and has equal probability of showing up anywhere on the circle when the wave "collapses". This random outcome can be tested.

But I wondered at some point: when the particle is emitted toward a flat wall, and we know at what time, we can calculate the time at which the wave would reach the point on the wall nearest to the emitter. If it is not measured there, then presumably the wave is still evolving, and with each progressive moment we get more and more information about where the arrival point is NOT, and therefore the wave gets redefined, ad infinitum, until it finally hits some point on the wall. Is that right?

When a photon leaves a distant star and arrives at a photographic plate on earth, has a "measurement" been performed at every moment where it *might* have arrived somewhere else than earth? If we had an emitter in deep space sending photons in random directions, could we tell anything about the layout of the universe in various directions once we got back the times photons were emitted and correlated that with the photons we got on earth?
 
Your post has a number of misconceptions:

1. Collapse is an interpretation thing; it's not really part of QM. Even defining it is problematic - we had a recent discussion about it with the other science advisors, and the best we could come up with, at least as far as I could see, was just a restatement of the Born Rule.

2. QM is a theory about observations - it's silent on what's going on when not observed. Saying things like "might have arrived" and such is not part of QM.

All we can say is a photon interacted with the photographic plate, was destroyed by that interaction and left a mark. We really can't say any more than that.

As to the last part of your question, you need to elaborate further - I don't get it.

Just as an aside, even what a photon is is not that easy a question to answer:
http://www.physics.usu.edu/torre/3700_Spring_2015/What_is_a_photon.pdf

Thanks
Bill
 
Actually, I think there are elements of this question that are quite profound, and can be cast entirely in the language of observables and the analysis techniques scientists use to gain successful expectations about those observables.

First let me offer a slightly modified version of what is being asked. Imagine we send out a slow-moving electron (your question is better framed by this more mundane particle, see the issues raised above) at a fairly well defined moment but a poorly defined direction (and a well-enough defined energy for what follows), and instead of surrounding it with a detecting sphere, we have one detecting half-sphere in one hemisphere that is 1 km away, and another detecting half-sphere in the other hemisphere that is 10 km away. Let's say the time the electron takes to travel is well enough defined by its energy that while it travels from 1 to 10 km, we have plenty of time to analyze whether or not it hit the 1 km half-sphere, and send light signals to any scientists on the scene who are participating in the analysis, particularly those taking data from the 10 km half-sphere.

What it sounds to me like you are wondering is this: before the electron can reach the 1 km half-sphere, all the participating scientists will be inclined to treat the electron wavefunction as spherically expanding. If the electron is later detected at the 1 km half-sphere, all the scientists will conclude its wavefunction has "collapsed." (Even those who take a philosophical perspective on reality that says no collapse ever really happens will still agree that, given our limited ability to observe and correlate information, all the participating scientists must treat the electron as if it arrived at the 1 km half-sphere, and that's all I mean by "collapse" here.)

However, if the 1 km half-sphere reports no detection, then all the participating scientists will readjust the electron wavefunction that they are using to predict the results such that it is now an expanding half-sphere, and no longer spherical. Hence, the wavefunction has been altered, even by a null detection. (And note this is all true whether you regard the wavefunction as something physically real, or just some information in the minds of all the scientists.) So my answer to that is yes, that is indeed a correct description of the situation here, as we can all observe it, independent of any philosophical preferences people may have.

What this means is, if we are talking about slow-moving electrons, the null results are affecting the wavefunction that will be used by anyone who has access to those null results. Here's where the philosophy comes in-- if you think the wavefunction is real, then the null results exert real constraints on the wavefunction that really alter it in ways that are quite hard to experimentally test (since you get tricky questions like when exactly did it change in one place based on something that didn't happen somewhere else). But if you think the wavefunction lives only in information space, so "exists" only in the minds of the scientists using it, then there is no difficulty in answering how and when it changed-- it changed in the minds of those who got access to the null results, and it changed when they became privy to that information, and it did not change in the minds of those who did not get that information. What's quite interesting, and I think profound, to note here is that all of the scientists, whether they are privy to the null result information or not, will get outcomes that are consistent with the information they were using at the time-- none will find their analysis false, or in any way wanting, depending on whether or not they have access to any given information in the problem, as long as they have no false information.

If what to make of all this leaves you puzzled and searching, then all it means is you have joined the club of those who understand the stakes here. All I would say to you is, when you use the term "we" in your question, make sure you are talking about actual scientists who have actual access to the information you are using-- that can make all the difference, as you can run into incorrect conclusions if you imagine that information is allowed to move around in hypothetical or faster-than-light ways.
 
1977ub said:
I'm trying to understand the impact of past measurements, and when measurements occur. [...]

Some of your reasoning reminds me of Renninger-type experiments or Renninger negative-result experiments.
 
Thanks very much Ken! So, to use the waves (real or not) to get the proper results for a hit out away from the nearest point in the wall, would one need to "collapse" the wave multiple times? Would that result in different results and can it be tested? Is this something similar to what I would read about in "Renninger's Thought Experiment"?
 
1977ub said:
Thanks very much Ken! So, to use the waves (real or not) to get the proper results for a hit out away from the nearest point in the wall, would one need to "collapse" the wave multiple times?
Yes, but be sure to remember the way probability works-- there is no unique probability for something to happen, it depends on what you already know and don't know. The more information you use, the more your probability will change, but it's still a probability, so you cannot say it is more right or less right than it was before. For example, if you are playing cards, and you don't know your opponent's hand, you might assess your chances of winning at 50%. If you accidentally get a peek at some of the cards, even if it is cards in other players' hands so you only know the opponent you are worried about doesn't have those cards, perhaps you reassess that you now have a 75% chance of winning. The 75% isn't any more accurate or more sure than the 50% was, it's just different, because you know more. Similarly, the way you assess the probability of the particle hitting the flat wall somewhere will depend on what you already know about where it did not hit, and that knowledge will appear in the way you assess its wavefunction. So even people who think wavefunctions are real things (which always somewhat mystifies me, frankly) must allow that not all the scientists present are privy to that real wavefunction, yet all the scientists can test quantum mechanics just fine.
Would that result in different results and can it be tested? Is this something similar to what I would read about in "Renninger's Thought Experiment"?
I haven't read that yet, it sounds interesting, but at this point I would just say yes, it can be tested, but maybe not the kind of absolute test you'd like. The way we encounter probabilities in quantum mechanics means you can't always test what you'd like to test, you can only test that the probabilities are working out. For example, if I hand you an electron, and say "tell me the spin it had in the upward direction when I gave it to you," there is actually no way you can answer that question. However, if you measure its spin to be "up", you can indeed say "whatever its spin in the upward direction was, if it even had a definite spin in that direction (which it doesn't have to have), I know that it wasn't down." So all you can test are theories that claim the spin was down-- you know they were wrong-- but for other theories you need many trials to begin to see whether the statistical pattern supports or refutes what you are testing. Such is the nature of statistical theories like quantum mechanics.

ETA: Ah yes, I read the Renninger experiment and I see it is just what I mentioned. So no surprises there. Still, it raises challenges to people who like to interpret the wavefunction as something real. I'm sure they have answers to those challenges, they often prove very committed to their realism.

Here's a related scenario that might prove even more informative. If you put a particle in a box, and let it fall to its ground state, its wavefunction is symmetric about the center. If you then shine a short but bright pulse of light on half the box and find no particle there, you must reassess its wavefunction such that it is not symmetric about the center. Thus it can no longer be in its ground state, thus the expectation value of its energy has gone up. So how is it that the energy has gone up if the light did not interact with the particle? Maybe quantum field theory has a good way to handle this question, but I suspect that quantum mechanics must handle it by saying that the pulse of light must have some finite exposure time, and during that time, the particle could have interacted with the light without generating a detection, and that possibility of interaction without detection must be where the energy came from. Others might wish to weigh in with a better answer to that puzzler.
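To put some rough math behind that puzzler (a minimal sketch only, assuming a 1-D infinite square well of width ##L## and an idealized, perfectly sharp projection onto the un-illuminated half):

$$\psi_1(x)=\sqrt{\tfrac{2}{L}}\,\sin\frac{\pi x}{L},\qquad E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}.$$

A null result on the right half projects onto the left half and renormalizes: ##\chi(x)=\sqrt{2}\,\psi_1(x)## for ##0\le x\le L/2## and zero elsewhere, with expansion coefficients ##c_n=\int_0^{L/2}\chi\,\psi_n\,dx##. One finds ##|c_1|^2=1/2##, so the projected state has weight in excited states, and

$$\langle E\rangle=\sum_n |c_n|^2 E_n > E_1 .$$

(For a perfectly sharp cutoff the sum actually diverges, so the finite spatial resolution of any real light pulse matters; the qualitative point is just that the expectation value rises.)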
 
In the scenario of a circular wall, coated with something that emits light that experimenters can see, one particle is first emitted. It then hits one point on the wall, collapsing the "wave", and then our prediction regarding where other particles thus emitted may arrive on other parts of the wall is not uniform, since emission is no longer at the center. Likewise, once the particle has not hit the closest point on the flat wall at time x, in some sense the original wave has "collapsed" and we analyze the outcome of this new wave over multiple tries. Presumably the new prediction is different from the original prediction of scattering outcomes, since now we know the % at the closest region is zero. What do experiments suggest?
 
1977ub said:
other particles thus emitted

Do you mean the light that gets emitted when the one particle hits one point on the wall?
 
Experiments suggest that you will generate the correct expectations if you use quantum mechanics to calculate relative probabilities, by that I mean probabilities that only compare the relative likelihood of the outcomes you are actually able to test, with no attempt to make sure all the possibilities add up to 1. Then you can get them all to add up to 1 at the end if that's what you need, but it's not always what you are actually testing. It is often easiest to do probability calculations in two steps like that-- first calculate the relative probabilities, then when you are sure you have accounted for all the possible outcomes, scale them up to get them to add up to 1 as the very last step. As you gain new knowledge, it may just be that last step you need to modify, as you exclude more and more possibilities based on the null detection information you have access to. But here's an interesting point-- if you just take two regions on the wall and ask the relative likelihood of hitting one or the other, null detection information from other parts of the detector won't have any effect. So we see what is actually being affected: the way you normalize to absolute probabilities based on the possibilities you have ruled out.
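A toy numerical version (the weights are invented purely for illustration): suppose the relative weights for hitting the near circle, the far circle, and everywhere else work out to

$$w_{\text{near}} : w_{\text{far}} : w_{\text{else}} = 0.1 : 0.2 : 0.7 .$$

Before any null information, ##P(\text{far}) = 0.2##. Once the near circle has been ruled out, drop its weight and rescale the rest: ##P(\text{far}\mid\text{no near hit}) = 0.2/0.9 \approx 0.22##. Notice the ratio of the far circle to any other surviving region (here ##0.2:0.7##) is untouched; only the overall normalization has changed.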
 
  • #10
Ken G said:
If you put a particle in a box, and let it fall to its ground state, its wavefunction is symmetric about the center. If you then shine a short but bright pulse of light on half the box and find no particle there, you must reassess its wavefunction such that it is not symmetric about the center. Thus it can no longer be in its ground state, thus the expectation value of its energy has gone up. So how is it that the energy has gone up if the light did not interact with the particle?

You don't know that the energy has gone up, because you didn't measure the energy. You only know that the expectation value of the energy has gone up; but the expectation value is not an actually observed value.

It is true that the wave function you use to describe the particle will change after you shine the pulse of light, but unless you adopt a realist interpretation of the wave function, you cannot say that changing the wave function means the particle itself "actually" changed--which is what you would have to say in order to say that its energy has gone up even though the light didn't interact with it. If the wave function is just a matter of our information about the particle, then it can change without the particle itself changing.

You might have intended this example precisely to illustrate the point I just made--that it poses a problem for a realist interpretation of the wave function--but I'm not sure because a bit later on you say that you suspect "quantum mechanics must handle it" as follows:

Ken G said:
the pulse of light must have some finite exposure time, and during that time, the particle could have interacted with the light without generating a detection, and that possibility of interaction without detection must be where the energy came from

This is the sort of story that someone who adopts a realist interpretation of the wave function would have to tell, yes. But someone who didn't adopt such an interpretation would not. They could just say what I said above: that the wave function changing doesn't mean anything about the particle itself changed. So it's not "quantum mechanics" per se that "must" handle this example as in the quote just above; it's only a particular class of interpretations of QM.

Ken G said:
Maybe quantum field theory has a good way to handle this question

I don't think taking QFT into account would change what I just said above.
 
  • #11
1977ub said:
once the particle has not hit the closest point on the flat wall at time x, in some sense the original wave has "collapsed" and we analyze the outcome of this new wave over multiple tries

It is worth noting that this example (and the related one you posed where a particle hits a point on a sphere and light gets emitted) differs in an important respect from Ken G's example of the two half spheres, one 1 km away and one 10 km away from the point of emission of an electron. In Ken G's case, the "null detection" can be clearly defined--notice that he carefully specified that the electron is moving slowly, and that its energy (and hence its speed) is fairly well defined, but its direction is not. Those qualifications are crucial in order to guarantee that there is a fairly narrow "window of time" during which the electron would hit the 1 km half sphere, if it were going to hit it at all, and the same for the 10 km half sphere, and that the separation between these two windows of time is much larger than the size of either one. That means there is a fairly wide period of time between the two windows where, if the electron has not been observed to hit the 1 km half sphere, we can confidently say that it won't--that we have a "null detection", and we can update the wave function accordingly and use the updated wave function to make future predictions.

In your examples, this is not the case: the particle's speed might not be very well-defined, so its time of flight to a particular point on the wall might not be either; and the points on the wall are continuous, so there is not a discrete set of narrow "windows of time", separated by periods of time much wider than the windows themselves, where detection events would have to occur. So the conditions that would have to be met in order to update the wave function based on "null detection" events, are not met.
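For a rough sense of scale (the numbers are purely illustrative): take an electron speed of about ##10^6\ \text{m/s}## with a 1% spread. Then

$$t_{1\,\text{km}} \approx \frac{10^3\ \text{m}}{10^6\ \text{m/s}} = 1\ \text{ms}\ (\pm 10\ \mu\text{s}),\qquad t_{10\,\text{km}} \approx 10\ \text{ms}\ (\pm 100\ \mu\text{s}),$$

so the roughly 9 ms gap between the two arrival windows is huge compared with either window, which is what makes the "null detection" at the 1 km half sphere well defined.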
 
  • #12
PeterDonis said:
You don't know that the energy has gone up, because you didn't measure the energy. You only know that the expectation value of the energy has gone up; but the expectation value is not an actually observed value.
And yet, energy expectation values should be conserved, should they not? The expectation value has gone up here, so the null result has raised the energy expectation even though no interaction occurred. That requires explanation, so the question does arise. I don't know if my answer is the correct one, but an answer is required.
It is true that the wave function you use to describe the particle will change after you shine the pulse of light, but unless you adopt a realist interpretation of the wave function, you cannot say that changing the wave function means the particle itself "actually" changed--which is what you would have to say in order to say that its energy has gone up even though the light didn't interact with it.
I didn't say its energy went up, I said the expectation value did, and that suffices. But I agree with you that realist interpretations of the wavefunction suffer additional problems that I prefer to avoid by regarding the wavefunction as information and nothing more.
This is the sort of story that someone who adopts a realist interpretation of the wave function would have to tell, yes. But someone who didn't adopt such an interpretation would not. They could just say what I said above: that the wave function changing doesn't mean anything about the particle itself changed. So it's not "quantum mechanics" per se that "must" handle this example as in the quote just above; it's only a particular class of interpretations of QM.
But when I say quantum mechanics must handle it, I mean it must account for the non-conservation of the expectation value of the energy. It must find that energy somewhere else-- that's not a matter of philosophy or interpretation, it is a matter of upholding conservation of energy. It may be true that ensemble interpretations are allowed to smooth out energy nonconservation, but surely those would be the only allowed interpretations if the expectation value of the energy were not conserved in each instance. So every instance must have some way of making it work out. Perhaps the problem is in culling out the null detections, in a kind of "Maxwell's demon" sort of way, which always requires energy to do, but if that's the answer, it means the situations where detections did occur must lower the expectation value! So again I don't know the right resolution of the issue, but in any interpretation this issue must be resolved, even nonrealist ones.
I don't think taking QFT into account would change what I just said above.
Yes, I'm curious if QFT resolves it differently, because right now I can only think of two ways to resolve the issue. Either there is a kind of interaction that can rob energy (in the expectation value) from the photon field even if there is no detection (such that you could tell there is a particle in the box, versus no particle in the box, even if you are not detecting it on the side you shine the light on), or else one cannot both cull out the non-detections and also expect the energy expectation value to be conserved in each instance, one can only get conservation if one allows either a detection or a non-detection (say, by including one of each), and then the energy in the detection case would need to rise by the same amount it is found to drop in the non-detection case. I tend to suspect it is the former of those two.
 
  • #13
Ken G said:
energy expectation values should be conserved, should they not?

Not unless we are talking about a classical approximation for a quantum system with a large number of degrees of freedom, in which we can treat the expectation value as the actual value. For a quantum system with a single degree of freedom (or a small number--3 if we suppose the particle is confined in a three-dimensional box and has zero spin), my understanding is, no.

You would have a stronger argument if you started with the observation that the ground state of the particle in the box is an energy eigenstate, for which the expectation value is just the eigenvalue (the ground state energy), and for which it might make more sense to treat the eigenvalue as an "actual value" even for a system with a small number of degrees of freedom. But even in that case, I think interpreting the energy eigenvalue as the "actual value", and therefore interpreting any change in the wave function (after the pulse of light shines on the box, the resulting state of the particle won't be an energy eigenstate) as requiring some "actual change" in the particle, requires at least some elements of a realist interpretation of the wave function; I don't think it's completely interpretation independent.

Ken G said:
I'm curious if QFT resolves it differently

In QFT, or at least in perturbative QFT, we can apply conservation laws at any vertex of a Feynman diagram. But that's still not the same as applying them to expectation values.
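For reference, the sense in which ##\langle H\rangle## is conserved under ordinary Schrödinger evolution is Ehrenfest's theorem,

$$\frac{d\langle \hat A\rangle}{dt} = \frac{i}{\hbar}\langle[\hat H,\hat A]\rangle + \left\langle\frac{\partial \hat A}{\partial t}\right\rangle \;\;\Longrightarrow\;\; \frac{d\langle \hat H\rangle}{dt}=0 \ \text{ for time-independent } \hat H,$$

whereas a measurement update ##|\psi\rangle \to \hat P|\psi\rangle/\lVert\hat P|\psi\rangle\rVert## is not unitary evolution and need not preserve ##\langle \hat H\rangle##; that is exactly where the two readings above part ways.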
 
  • #14
PeterDonis said:
It is worth noting that this example (and the related one you posed where a particle hits a point on a sphere and light gets emitted) differs in an important respect from Ken G's example of the two half spheres, one 1 km away and one 10 km away from the point of emission of an electron. [...]

Ok - I'm trying to get to where this question is real and can be addressed. We've got a slow-moving electron with a known time of emission and a known velocity, and we have a circle painted on the closest part of the flat surface. There is another circle painted some specific number of feet off to the right.

The "first approximation" of % of hits in the farther region would be based on the initial wave at emission. Once we decide that the particle did not hit the closer region, does the % in the farther region no longer match that initial wave's calculation? I realize that classically there is the question of how often a die comes up 6 vs. how often a die which does not come up 1 comes up 6. Is there anything in this experiment to suggest that the % hits in the far region is anything other than what would be explained by that classical approach to the percentages?
 
  • #15
1977ub said:
We've got a slow moving electron with the known time of emission

You should be aware that there is an inherent tradeoff here between how accurately you specify the electron's speed and how accurately you specify the time of emission. You can't specify both to arbitrary accuracy, because of the uncertainty principle.
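One heuristic way to estimate the tradeoff (order of magnitude only): a wave packet with momentum spread ##\Delta p## has spatial extent ##\Delta x \gtrsim \hbar/(2\Delta p)##, so the moment it clears the source is fuzzy by roughly

$$\Delta t \sim \frac{\Delta x}{v} \gtrsim \frac{\hbar}{2\,v\,\Delta p} .$$

Sharpening the speed (small ##\Delta p##) necessarily blurs the emission time, and vice versa.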

1977ub said:
and the known velocity and we have a circle painted on the closest part of the flat surface. There is another circle painted some specific # of feet along off to the right.

If the circles are separated by enough distance (where "enough" needs to be evaluated taking into account the uncertainty principle limitation I gave above, in how accurately the speed and time of emission can be specified), then yes, there will be a period of time when a non-detection in the closest circle can be interpreted as a "null detection", and the wave function can be updated for the purpose of computing the probability of detection in the other circle.

1977ub said:
I realize that classically there is the question of how often a die comes up 6 vs. how often a die which does not come up 1 comes up 6. Is there anything in this experiment to suggest that the % hits in the far region is anything other than what would be explained by that classical approach to the percentages?

I'm not sure how to answer this, because quantum mechanics involves amplitudes, which are complex numbers that you square to get probabilities. That is a fundamental difference from the classical case, so my initial reaction would be to be extremely cautious about drawing any analogy between the two.
 
  • #16
PeterDonis said:
If the circles are separated by enough distance [...] there will be a period of time when a non-detection in the closest circle can be interpreted as a "null detection", and the wave function can be updated for the purpose of computing the probability of detection in the other circle.

You say that the wave function "can be" updated. I'm wondering whether it need be. Or will calculations based on the initial emission wave still hold?
 
  • #17
PeterDonis said:
Not unless we are talking about a classical approximation for a quantum system with a large number of degrees of freedom, in which we can treat the expectation value as the actual value. For a quantum system with a single degree of freedom (or a small number--3 if we suppose the particle is confined in a three-dimensional box and has zero spin), my understanding is, no.
Well that would explain it, but I had thought the answer was yes. It would be a strange situation even for the ensemble, because we could subject the whole ensemble of detections and non-detections to energy measurements, and we know that all the non-detections would result in a higher energy than they started with in their ground states. If we were to attribute that energy increase to the energy measurement, if we cannot expect to find it missing from the non-detecting photons, then that is certainly a strange kind of energy measurement! It would not be an energy measurement at all on the non-detection subset, because if your measuring device is conveying energy in the net you can't say you are measuring energy. You might imagine that it is only an energy measurement on the complete ensemble, including the detections, but then the energy device would have to take energy from the detected subset for it to exchange no energy in the net. That would be a surprising situation, since the detected and non-detected subsets are all just superposition states, there wouldn't seem to be anything special about them such that one could gain and the other lose energy from the device.
You would have a stronger argument if you started with the observation that the ground state of the particle in the box is an energy eigenstate, for which the expectation value is just the eigenvalue (the ground state energy), and for which it might make more sense to treat the eigenvalue as an "actual value" even for a system with a small number of degrees of freedom.
But that is exactly the situation-- the particle starts out in its ground state. It shouldn't matter if I just wait until I know it will be, or actually do the measurement, that isn't important to me. It starts out in the ground state, so definitely has its lowest energy.
But even in that case, I think interpreting the energy eigenvalue as the "actual value", and therefore interpreting any change in the wave function (after the pulse of light shines on the box, the resulting state of the particle won't be an energy eigenstate) as requiring some "actual change" in the particle, requires at least some elements of a realist interpretation of the wave function; I don't think it's completely interpretation independent.
Regardless of interpretation, the issue can be framed entirely in terms of the subsequent energy observations.
In QFT, or at least in perturbative QFT, we can apply conservation laws at any vertex of a Feynman diagram. But that's still not the same as applying them to expectation values.
But independent of any theory, we can ask if our measuring device is exchanging energy with our ensemble, or any particular subensemble, in the net, simply by tracking how much energy our device requires to operate. I argue we cannot call something an energy measurement simply if it leaves the particle in an energy eigenstate-- it has to do so without exchanging energy with the system in the net.
 
  • #18
1977ub said:
I realize that classically there is the question of how often a die comes up 6 vs. how often a die which does not come up 1 comes up 6. Is there anything in this experiment to suggest that the % hits in the far region is anything other than what would be explained by that classical approach to the percentages?
It's just the same here. Amplitudes work just like probabilities (once you square them) any time there is not interference between multiple amplitudes. Here there is not any interference, because all the amplitudes that go into the different detected locations are all independent of each other, they only interfere with other amplitudes at the same detection location. So all you need to change is the overall normalization when you know about non-detections, which is just like normalizing the probabilities to add up to 1 in your classical die situation.
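In symbols (just a sketch, with ##\psi(x)## the amplitude at wall position ##x## and ##R## the region already ruled out by a null result):

$$P(x) \propto |\psi(x)|^2,\qquad P(x \mid \text{no hit in } R) = \frac{|\psi(x)|^2}{1-\int_R |\psi(x')|^2\,dx'} .$$

Because amplitudes for arriving at different wall points never add together, the null information only rescales the normalization; interference would only matter if several amplitudes fed the same point ##x## (as in a double slit).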
 
  • #19
Ken G said:
It's just the same here. Amplitudes work just like probabilities (once you square them) any time there is not interference between multiple amplitudes. [...]

So truly there is no "re-measurement" implied by the null-result at any points closer to the center than the outer region. I don't quite understand that, unless these re-measurements come out not to make any new or different prediction than the original...
 
  • #20
The weird thing about a measurement is that when it occurs is subjective. A measurement occurs when one gets a definite result or definite information.

The measurements whose results are null (i.e., when you look and see the particle is not there) are measurements and they do collapse the wave function. Some examples of quantum calculations with null results are given in https://arxiv.org/abs/1406.5535 (p5: "Observing nothing is also an observation").
 
  • #21
1977ub said:
will calculations based on the initial emission wave still hold?

The two calculations are calculating different things. The first calculation--the one based on the initial emission wave--is calculating the probability for the particle ending up in some particular area (such as the second circle, the one off to the right). The second calculation--the one using the updated wave function after a "null detection" is confirmed in the first, closest circle--is calculating the probability for the particle ending up in the second circle, given that it didn't end up in the first. These are two different probabilities. Whether the wave function "needs" to be updated depends on which probability you are interested in.

If we don't adopt any particular interpretation of QM, that's basically all you can say. Any further claim, such as that the update of the wave function corresponds to something that "really happened" to the particle, will be interpretation dependent.
 
  • #22
PeterDonis said:
The two calculations are calculating different things. [...] Whether the wave function "needs" to be updated depends on which probability you are interested in.
Isn't it only a normalization factor by which the two probabilities differ? My reasoning is that the wave function in both cases is updated using projection. The difference is that in one case we take the projection as a filtering measurement (the result is the fraction of the ensemble emitted by the source that ends up in the second semicircle), but in the second case the source and the projection together are taken as a preparation procedure (the result is the fraction of the ensemble emitted by the source and filtered by the first semicircle that ends up in the second semicircle).
 
  • #23
zonde said:
Isn't it only a normalization factor by which the two probabilities differ?

You could call the difference a normalization factor, I suppose, but it doesn't change the fact that the two probabilities are different, which was my point. Asking whether you "need" to update the wave function, as the OP did, is asking the wrong question; the right question is, which of the two probabilities do you care about? Answering that tells you which wave function to use.
 
  • #24
PeterDonis said:
You could call the difference a normalization factor, I suppose, but it doesn't change the fact that the two probabilities are different, which was my point. [...]

I'm missing something. The one electron will only land in one of the two circles at most. So the probability it will land in the far circle should be the same as having it land in the far circle and not the near circle.
 
  • #25
1977ub said:
I'm missing something. The one electron will only land in one of the two circles at most. So the probability it will land in the far circle should be the same as having it land in the far circle and not the near circle.
The end result (physically) is the same in both cases. But what you are missing is that the starting point of your reasoning in the two cases is different, and that affects how you calculate the probability.
 
  • #26
1977ub said:
So truly there is no "re-measurement" implied by the null-result at any points closer to the center than the outer region. I don't quite understand that, unless these re-measurements come out not to make any new or different prediction than the original...
The absence of "re-measurement" is merely how this experiment is set up, where all that happens is possibilities are culled out and all other probabilities are scaled up accordingly. But the one I mentioned where light is shined in a box is a null result with a more significant impact on the particle. So I think the overall question you are asking is one with significant implications that you had the insight to notice, but perhaps less so in the example you took.
 
  • #27
zonde said:
The end result (physically) is the same in both cases. But what you are missing is that the starting point of your reasoning in the two cases is different, and that affects how you calculate the probability.

But experimentally we find only one probability upon doing the experiment multiple times - which one? Are they numerically equivalent using the 2 methods?
 
  • #28
1977ub said:
But experimentally we find only one probability upon doing the experiment multiple times - which one? Are they numerically equivalent using the 2 methods?
If you are comparing two different experiments, you can get two different probabilities and it's fine. To check a single probability, you need many trials of exactly the same experiment; you can't, in one case, get information from null results and, in the other case, not. That would be testing two different probabilities using two different experiments.
 
  • #29
Ken G said:
If you are comparing two different experiments, you can get two different probabilities and it's fine. [...]

Somehow I'm not getting this across. Person A uses the apparatus to detect hits in the far circle. He only uses the initial emission model of the wave.

Person B uses the same apparatus to detect hits in the same far circle. He attempts to use new wave models with each new bit of information about near points along the wall where the electron has *not* hit.

The actual % in the far circle, upon releasing enough particles to find a % - is there any deviation from person A's calculation found in the result?
 
  • #30
That's just what I was saying-- person A and person B are involved in two different experiments, because they have access to different information (person B is throwing out the non-null results that person A is including as part of the experiment). Hence they are testing two different probabilities, even though they are happening at the same time for both people. They calculate two different probabilities (differing by the normalization), and both find their probabilities work fine for them, as they are doing two different experiments. This happens all the time, even classically. If you have five people at a table playing a card game, they are all calculating different probabilities, that all test out well, even though they are all playing the same series of hands.

What's profound about this is that the realist asks, which person has all the information, so is calculating the actual probabilities, the "real ones." But the nonrealist doesn't think there is any such thing as "all the information" (as if from some "god perspective"), or any "actual probability," so all probabilities are tools for successfully manipulating whatever information is available, and the question of which is the "real one" never even appears. In my view, the nonrealist perspective gibes better with Bayesian thinking about probability, where to assess a probability you always have to start with some kind of prior expectation, which you use information to modify but not to generate probabilities in some absolute sense. Framed like this, all you really test is the consistency between your prior expectations, your acquired information, and your theory-- so when you get agreement, you don't know that any of those are complete in some absolute sense, you only know they work well together in a relative sense, they "combine well" if you like. The point being, two different scientists can follow a different process of tracking information and checking the consistency of their prior expectations with their theory, and both can get "correct results", even when they are different results.

I'll give you a classic example. Let's say one scientist prepares a million photons to be polarized vertically, and a million to be polarized horizontally, and keeps track of which is which as they do various polarization measurements on them. A second scientist looks at the same experimental outcomes, but is not privy to the "which is which" information, so treats the photons as unpolarized. Each scientist will calculate very different probabilities for the outcomes of the experiments done, yet both will get complete agreement with the probabilities they calculate, because they will sort the data differently. Probabilities require sorting because they are ratios of one subclass to another. The philosophical question to ask is, who makes the subclasses? The realist says nature does, the nonrealist says nature doesn't know how to classify, people do that.
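As a concrete check (a standard Malus-law sketch): suppose each photon is sent through a polarizer oriented at angle ##\theta## from vertical. Then

$$P(\text{pass}\mid V)=\cos^2\theta,\qquad P(\text{pass}\mid H)=\sin^2\theta,\qquad \tfrac12\cos^2\theta+\tfrac12\sin^2\theta=\tfrac12 .$$

The first scientist predicts ##\cos^2\theta## for the photons tagged vertical and ##\sin^2\theta## for those tagged horizontal; the second, lacking the tags, predicts ##1/2## for the whole batch. Both predictions check out against the very same data, sorted differently.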
 
  • #31
1977ub said:
But experimentally we find only one probability upon doing the experiment multiple times - which one? Are they numerically equivalent using the 2 methods?
No, they are not. This can be easily seen if you use the frequentist interpretation of probability. You repeat your experiment many times. In the first case you look at all the emitted electrons and ask what fraction of them ended up in the second detector, while in the second case you throw away the electrons that ended up in the first detector and ask what fraction of those left ended up in the second detector. And of course the numbers are different.
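To see the difference numerically, here is a toy Monte Carlo sketch (the hit probabilities are made up purely for illustration; they stand in for whatever the wave calculation gives):

Code:
import random

# Toy model: each emitted electron independently lands in the near circle
# with probability P_NEAR, in the far circle with probability P_FAR,
# or somewhere else on the wall otherwise.
P_NEAR, P_FAR = 0.10, 0.20
N = 1_000_000

near = far = 0
for _ in range(N):
    r = random.random()
    if r < P_NEAR:
        near += 1
    elif r < P_NEAR + P_FAR:
        far += 1

# Person A's number: fraction of ALL emissions that hit the far circle (~0.20)
print(far / N)
# Person B's number: fraction of the emissions that did NOT hit the near
# circle which hit the far circle (~0.20 / 0.90, i.e. about 0.22)
print(far / (N - near))

Both numbers come out of the same run of the same apparatus; they differ because the second one is conditioned on the null result at the near circle.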

Sorry, edited a bit. Mixed up fraction with actual count of electrons.
 
  • #32
I may have made this too complicated, or thrown in too many questions or scenarios.
I'll start over.

...several sentences later, omitted here...

Oh. Now I'm remembering that the mere presence of detector one changes the pattern of outcomes on the wall. I have just re-made the double slit experiment.
 
  • #33
I don't think the detectors you are talking about change any patterns. Perhaps you would like to specify more clearly what experiments you are thinking about? It seems to me the only changes occurring are the way you are sorting the data, like there is an experiment happening where a bunch of particles hit the wall, and another experiment where some of that data is being thrown out because you are only interested in non-detections. But even those don't change any patterns, only their normalizations.
 
  • #34
Ok here is what I had deleted:

Two observers are watching the experimental apparatus. The electrons are emitted, and there is a circle indicated at the close region and another at the far region.

The wall is coated entirely with a substance which will register visibly when an electron arrives. We don't have "detectors" so much as regions of the wall which are painted with circles simply to define these regions.

Each time a single slow electron is emitted, it can land in one of the 2 circles or it can land elsewhere, but it cannot land in both circles.

Observer A performs a single calculation based only on the initial set up, presuming that there is only one "collapse". He asks "what % of electrons will end up within the far circle?" He ends up with a single prediction of % hits in that circle, finally computing the outcome % based on *all* electrons released.

Observer B is watching closely enough to decide when each electron should have hit within the closer circle and, in those cases where it seems not to have done so, recalculates the probability of hitting the far circle based upon that initial null measurement. No results are ever "thrown out". Any time the close circle is hit, this will be treated as a "miss" of the far circle, but these will be averaged in at the end.

At the end of a sufficiently high number of electrons released to determine a %, will both observers end up with the same prediction for the far circle?
If not, which one will match the % from the runs of the experiment?
 
  • #35
That's exactly what I'm talking about, and results certainly are thrown out: when you get a hit inside the circle the second person simply doesn't count that event. Otherwise the two scenarios are identical, all results and all calculations are precisely the same.
 
  • #36
1977ub said:
Two observers are watching the experimental apparatus. The electrons are emitted, and there is a circle indicated at the close region and another at the far region. [...] At the end of a sufficiently high number of electrons released to determine a %, will both observers end up with the same prediction for the far circle? If not, which one will match the % from the runs of the experiment?

If Observer A includes the disturbance caused by Observer B's inner circle measurement apparatus in his calculations, he will get the same result.

Observer A can do his calculation in 2 ways, both getting the same answer:
(1) Calculate exactly the same as Observer B, collapse the wave function, and throw away the intermediate result, then get the final count at the outer circle.
(2) Calculate with Observer B observing, but with no collapse of the wave function, and just calculate the final result at the outer circle.

The averaging process you talk about is the same as throwing away the intermediate results.
 
  • #37
There is an emitter of slow electrons, and there is a flat wall with two circular regions painted - one close to the emitter, and one farther away. There is no inner circle "apparatus" - unless by using a photographic plate along the whole wall we have created innumerable "detectors" - both inside and outside of the 2 circles.
 
  • #38
We understand the apparatus. What we're saying is, let's say f1(r) is the predicted density of hits as a function of radius in the inner circle, and f2(r) is the same for the second. These are straightforward to compute prior to any trials of the experiment. Now let's say you wanted to test how f2 changes after you wait long enough to know there has not been a hit in the inner circle. The answer is you set f1 to zero and simply scale up f2 so that it remains a normalized probability. But our point was, if you want to test that you got this calculation right, you'll have to do many many trials of the experiment-- but any trial which gets a hit in the inner circle will not be relevant to the probability you are testing, so will have to be thrown out of the dataset.

Alternatively, if you don't throw out any data, you can simply test the original f1 and f2, which is testing something different because here f1 is not zero and f2 is not normalized by itself. That's what I meant by two different experiments being tested, and different data being used, because in only one is part of the dataset being thrown out. This is what is typical-- you can get two different probabilities, and conclude both are working fine, if you are testing something different that requires a different sorting of the dataset. The role of sorting in probability calculations is often overlooked by language that suggests that probabilities "exist" as things in and of themselves, which to me, connects to a similar problem with interpreting wavefunction amplitudes as "things." But that interpretation is possible, and some prefer it.
 
  • #39
No inner circle vs. outer circle. Two circles painted on, both let's say one foot in diameter. One covers the part of the wall closest to the emitter. The second one is the same size but some distance away on the wall.
 
  • #40
No mention of 'time-symmetric' interpretations of QM such as the Transactional Interpretation? TI makes conceptualising this kind of two-hemisphere experiment simpler and less confusing. I'm only just learning about this myself, so I'm no expert, but ...

The absorption of the electron, wherever it ends up, causes the absorber to emit a backwards-in-time 'confirmation wave' which, together with the forwards-in-time 'offer wave' of the emitter, determines the wave function of the emitted electron.

In this view, a null result of the smaller-hemisphere measurement team does not change the wave function. It was already there, fully-formed - having been instantaneously created from the interference of the emitter's 'retarded' offer wave and the absorber's 'advanced' confirmation wave. And non-locality isn't a problem when you can travel back in time! The offer/confirmation occurs instantaneously over any distance.

As I understand it (which I don't, mathematically, at least not yet), time symmetry is there in the equations of Maxwell, Einstein and Schroedinger. The backwards-in-time stuff has just been ignored by physicists. My instinct tells me there's something to this. If you can remove some of the quantum-weirdness of Copenhagen, that's got to be a good thing, and for it to concur with the equations is even better. I just wish I could learn the mathematical side of QM more easily, but it's a slow process for me.
 
  • #41
1977ub said:
No inner circle vs. outer circle. Two circles painted on, both let's say one foot in diameter. One covers the part of the wall closest to the emitter. The second one is the same size but some distance away on the wall.
That makes no difference, the geometry of the regions does not change the argument I just gave, it only modifies the form of the f(r) functions. The key point in all of this is, what f(r) you think you are testing depends on what data you are throwing out. You have to throw data out because you have lots of trials, and some will not be relevant to what you are testing. For example, if you are testing changes in the f(r) that come from the absence of detections somewhere else, you have to throw out the cases where there were detections somewhere else! So this is the reason you have to decide what you are testing before you can calculate the probabilities you expect.
 
  • #42
Jehannum said:
The absorption of the electron, wherever it ends up, causes the absorber to emit a backwards-in-time 'confirmation wave' which, together with the forwards-in-time 'offer wave' of the emitter, determines the wave function of the emitted electron.
Yes, you can always retain realism if you jettison locality. To me, this is a dubious choice, because the only reason we hold to either realism or locality is that they both gibe with our classical experiences. So if you have to let go of one, I see no reason to pick which. I'd rather just dump the whole paradigm that classical experience should be regarded as prejudicial toward our interpretations of non-classical phenomena! But I certainly agree that interpretations are subjective-- the question being asked is framed in an interpretation independent way, and all the interpretations must arrive at the same answer to that question.
As I understand it (which I don't, mathematically, at least not yet), time symmetry is there in the equations of Maxwell, Einstein and Schroedinger. The backwards-in-time stuff has just been ignored by physicists. My instinct tells me there's something to this. If you can remove some of the quantum-weirdness of Copenhagen, that's got to be a good thing, and for it to concur with the equations is even better. I just wish I could learn the mathematical side of QM more easily, but it's a slow process for me.
But remember, time symmetry is in Newton's laws too! Yet we have the second law of thermodynamics, and we have a concept of a difference between a cause and an effect. I grant you that it is not at all obvious that causes lead to effects, rather than effects produce a requirement to have a cause, but why retain realism at all if we are going to allow that our daily experiences are not reliable guides to "what is really happening"? To me, once we've rejected the authority of our intuition, the more obvious next step is to be skeptical of the entire notion of "what is really happening," and just admit that we are scientists trying to form successful expectations about observed phenomena, and any untestable process that we imagine is regulating those phenomena, but cannot show is regulating those phenomena, is essentially pure magic. Useful magic, subjectively preferred magic, but magic all the same.
 
  • #43
Ken G said:
That makes no difference, the geometry of the regions does not change the argument I just gave, it only modifies the form of the f(r) functions. [...]

I promise nobody will throw anything out.

Person A intends to treat a single wave at emission as collapsing only once, when it hits some near or far point of the wall. The wall is entirely coated with photographic material, which will register a hit. They count up to a hundred emissions and then count the hits in the far circle. This way, they get a single % of hits in the far circle.

Person B has realized that, since different parts of the wall are at different distances from the emitter, partial information can be obtained midway through the flight of a particle. They count up a hundred emissions. They count up however many hits in the far circle and get a % just like person A. However, halfway through the expected flight time of each particle, any case where the particle has already hit some part of the wall obviously counts as a miss, but for the other cases, with this new information, a new wave calculation is made regarding a hit in the far circle. Unlike the case where the wall is circular and there is only one instant at which any information is revealed, new information arrives continuously until the far circle is hit (when it actually is). Does this change the computation that is done, or should be done? Doesn't a calculation change whenever new information becomes available?
 
  • #44
1977ub said:
I promise nobody will throw anything out.
Yes, they do! I don't mean they pretend it didn't happen; I mean they sort their data such that part of the dataset simply doesn't appear when they test the probability they have calculated.
Person B has realized that, since different parts of the wall are at different distances from the emitter, partial information can be obtained midway through the flight of a particle. They count up a hundred emissions. They count up however many hits in the far circle and get a % just like person A. However, halfway through the expected flight time of each particle, any case where the particle has already hit some part of the wall obviously counts as a miss, but for the other cases, with this new information, a new wave calculation is made regarding a hit in the far circle.
Bingo, you have just stated where the sorting of the data is occurring. That's what I'm talking about-- the whole reason person B is testing a different percentage is that they are using different data. You have just said so.

There isn't anything quantum mechanical going on here; the exact same principle applies at a table where people are playing cards, and some players are privy to information that others aren't. They calculate different probabilities of winning, yet they are just as correct as the others who get different answers, because their probabilities apply to a different set of hands-- the set that satisfies their own particular information constraints. They are thus "throwing out" of their consideration a different set of hypothetical hands, and hence achieve a different, yet "correct", probability.

However, there are situations in quantum mechanics where null information has a very non-classical effect, such as when a pulse of light finds no particle on the right side of a box. That changes the state of the particle in a way that cannot be treated as classical information, the way this scenario can be.
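(A schematic two-state sketch of that kind of null-result update, assuming for illustration that the particle's state is just a superposition of "left half" and "right half" of the box: if

$$|\psi\rangle = a\,|L\rangle + b\,|R\rangle,\qquad |a|^2+|b|^2=1,$$

then an ideal probe that finds nothing on the right occurs with probability |a|^2 and leaves the state projected onto the left half and renormalized,

$$|\psi\rangle \;\longrightarrow\; \frac{\hat{P}_L|\psi\rangle}{\lVert \hat{P}_L|\psi\rangle\rVert} = |L\rangle,\qquad \hat{P}_L = |L\rangle\langle L|.$$

Any later interference between the two halves is gone after the null result, even though the probe never scattered off the particle-- that is the sense in which this goes beyond the purely classical bookkeeping of the wall example.)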
 
  • #45
1977ub said:
I promise nobody will throw anything out.

Person A intends to treat a single wave at emission as collapsing only once, when it hits some near or far point of the wall. The wall is entirely coated with photographic material, which will register a hit. They count up to a hundred emissions and then count the hits in the far circle. This way, they get a single % of hits in the far circle.

Person B has realized that, since different parts of the wall are at different distances from the emitter, partial information can be obtained midway through the flight of a particle. They count up a hundred emissions. They count up however many hits in the far circle and get a % just like person A. However, halfway through the expected flight time of each particle, any case where the particle has already hit some part of the wall obviously counts as a miss, but for the other cases, with this new information, a new wave calculation is made regarding a hit in the far circle. Unlike the case where the wall is circular and there is only one instant at which any information is revealed, new information arrives continuously until the far circle is hit (when it actually is). Does this change the computation that is done, or should be done? Doesn't a calculation change whenever new information becomes available?

The calculation they perform midway through is different because in those cases they have been given good reason to "throw out" the probability that it is in the closer circle. My basic question: does person A's calculation of a single "collapse" work out to match the results, or does the mere fact that information reveals itself bit by bit over time mean that a more complex calculation must be performed in order to get the right results?
 
  • #46
I never said they didn't have good reason to throw out some of the data; it's all about sorting.

To answer your question, there are many versions of "right results," depending on which experiment is being conducted. Persons A and B are both going to get "right results," and those results are going to be different. Again, that's commonplace; in itself it has nothing to do with collapse, it's purely how information and probability work. Probabilities always require sorting the data to fit the information being used and the test being done.
 
  • #47
Persons A and B are doing the same experiment. The same emitter, the same wall, the same tallies. The same net hit % at the end of it all. One of them, on his notepad, imagines there to be one wave collapse; the other imagines there to be many, or a continuous wave collapse over time. Which one's calculations match the experimental outcome?
 
  • #48
1977ub said:
Persons A and B are doing the same experiment. The same emitter, the same wall, the same tallies.
No, they are certainly not doing the same tallies, as you just explained in your last post. Are they getting the same percentages of hits in the outer circle? No, they are not. That's what is meant by different tallies, and it's also why they are getting different probabilities and why they each think their probabilities are right. I'm not sure what's hard to see about this-- you must agree they are getting different probabilities, right? And they are getting agreement with their probabilities, right? So that logically requires that they get different tallies, which they do.
The same net hit % at the end of it all.
No, it's not the same percentage. They get the same number of hits in the outer circle, but not the same percentages. How could they get the same percentages when you just agreed they are calculating different probabilities, because person B is throwing out all the trials that get a hit in the first circle, while person A is including all of those trials?
One of them, on his notepad, imagines there to be one wave collapse; the other imagines there to be many, or a continuous wave collapse over time. Which one's calculations match the experimental outcome?
Both; that's the whole point. I must be missing something in your question, because this is trivial. Imagine the two circles each get half the hits for the "single wave collapse," i.e., the full experiment (and, for simplicity, no hits occur outside the two circles). Person A uses a "single wave" to conclude that each trial has a 50% chance of hitting the first circle and a 50% chance of hitting the second, and he/she concludes his/her probabilities are perfectly correct. Meanwhile, person B concludes that every time there is not a hit in the first circle, there will be a hit in the second circle 100% of the time-- that's your "continuous collapse" percentage, and it's 100%. And of course, that works out just as well. So you only have to realize why one person expects 50% in the second circle, and it works, while the other expects 100% in the second circle, and that works too. It's simply sorting the data differently; there's nothing quantum mechanical there.
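(If it helps to see the bookkeeping explicitly, here is a minimal Monte Carlo sketch of that 50% vs. 100% accounting; the 50/50 split, the trial count, and all variable names are illustrative assumptions rather than details of any actual experiment.)

Python:
import random

# Illustrative toy model: every emission hits either the near circle or the far circle,
# with an assumed 50/50 split (these numbers are not from any real experiment).
random.seed(0)
N = 100_000       # number of emissions
p_near = 0.5      # assumed probability that a trial hits the near circle
far_hits = 0      # hits in the far circle (the same physical events for A and B)
kept_by_B = 0     # trials person B keeps: those with no near-circle hit at the halfway check

for _ in range(N):
    if random.random() < p_near:
        continue          # near-circle hit: counts as a far-circle miss; person B sorts it out
    kept_by_B += 1
    far_hits += 1         # in this toy model, every trial that misses the near circle hits the far one

print("Person A (far hits / all trials):      ", far_hits / N)          # ~0.5
print("Person B (far hits / surviving trials):", far_hits / kept_by_B)  # exactly 1.0

Person A divides the same far-circle hits by all the trials and confirms 50%; person B divides by only the trials that survive the halfway check and confirms 100%. Same hits, different sorting.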
 
  • #49
atyy said:
The weird thing about a measurement is that when it occurs is subjective. A measurement occurs when one gets a definite result or definite information.
The measurements whose results are null (i.e., when you look and you see the particle is not there) are measurements, and they do collapse the wave function. Some examples of quantum calculations with null results are given in https://arxiv.org/abs/1406.5535 (p5: "Observing nothing is also an observation").

The idea that measurement is subjective is not a necessary inference or conclusion, but arises from the inability to define measurement in the standard theory. This is remedied in the transactional picture, in which the 'measurement transition' is well-defined, as discussed here: https://arxiv.org/abs/1709.09367
 
  • #50
1977ub said:
I'm trying to understand the impact of past measurements, and when measurements occur.

As I understand it, in the simplest case, you've got a particle emitter in the center of a circle, and a measuring plate around the circle. Here in the ideal case the particle is emitted and has equal probability of showing up anywhere on the circle when the wave "collapses". This random outcome can be tested.

But I wondered at some point, when the particle is emitted at a flat wall, and we know what time, we can calculate the time that the wave would reach the nearest point to the emitter. If it is not measured there, then presumably the wave is still happening, and then with each progressive moment, we get more and more information about where the arrival point is NOT, and therefore the wave gets redefined, ad infinitum until it finally hits some point on the wall. Is that right?

When a photon leaves a distant star and arrives at a photographic plate on earth, has a "measurement" been performed at every moment where it *might* have arrived somewhere else than earth? If we had an emitter in deep space sending photons in random directions, could we tell anything about the layout of the universe in various directions once we got back the times photons were emitted and correlated that with the photons we got on earth?
Measurement occurs whenever there is absorber response-- but that is missing in the standard theory. You need the direct-action theory (transactional picture) to be able to define measurement in physical terms. See: https://arxiv.org/abs/1709.09367
 