Exploring the Paradoxical Einstein-Podolsky-Rosen Experiment

  • Thread starter: guguma
  • Tags: Experiment
Summary:
The Einstein-Podolsky-Rosen (EPR) paradox challenges the Copenhagen interpretation of quantum mechanics by suggesting that particles like electrons and positrons have definite properties before measurement, since otherwise the correlations would seem to imply faster-than-light influences. The discussion highlights that the EPR paper raised significant questions but proposed no experimental tests, leading to the later work of J.S. Bell, whose inequalities showed that quantum predictions are inconsistent with local hidden variable theories. The conversation also touches on the nature of wavefunction collapse, emphasizing that it occurs due to interactions, not necessarily requiring conscious observation. Misinterpretations from popular media about quantum mechanics and consciousness are criticized, asserting that measurement involves disturbance rather than observation alone. Overall, the complexities of quantum mechanics and its interpretations remain a topic of ongoing debate and exploration.
  • #61
reilly said:
You managed to avoid dealing with most of my post. Why?
Reilly Atkinson
Because your post is derisive, self-important, pompous, arrogant, and a host of other adjectives that are inappropriate for civilized conversation. (Note: I'm insulting the post, not you.) I made an off-the-cuff remark that was not intended to offend anyone, and so I apologize if it offended you. However, I believe that your viewpoint is absurd. The other points in your post regarding our respective credentials are irrelevant to the discussion.
 
  • #62
JesseM said:
No, it shouldn't. Both beams result in photons being directed at a range of horizontal positions (horizontal relative to the axis of the beams), regardless of focus, as can be seen in the graph of the detection patterns from each detector in Dopfer's thesis which is reproduced on this page (fig. 4). If each detector can only detect a very narrow range of horizontal positions, so that the only way to build up the wider range seen in the graphs is to vary each detector's position over many trials, that means there'd be many instances when photons missed the detector because it was in the wrong position on that trial.
That's true of the detector behind the slits. That is not true of the detector at the focal plane. The whole point of putting the detector at the focal plane is to catch every photon. In that branch of the experiment, that detector stays stationary. See Figure 4.5 (p. 36) of the original paper. http://www.quantum.univie.ac.at/publications/thesis/bddiss.pdf

Now the next point is critical. There will be MORE photons detected behind the lens than behind the slits, regardless of coincidence counting. Why? The slits block a lot of photons. So by introducing coincidence counting, we are taking a subset of the photons behind the lens, but NOT a subset of the photons behind the slits. In other words, even with the coincidence counting, we're counting _every_ photon that goes through the slits. Ergo, the pattern should be exactly the same whether it's a CCD or a scanning detector.

Change the experiment now, and put the detector behind the lens at the _imaging_ plane instead of the focal plane. Now which-slit information is obtainable _if_ we can segregate out those photons that actually passed through _any_ slit. So the coincidence counter is, in fact, necessary in this part of the experiment in order to isolate from those detections behind the lens the corresponding photons that actually passed through the slits, and thereby we can determine "which slit" they went through, and, as QM tells us should happen, the interference pattern disappears.

So, to recap, the coincidence counter is necessary to select out the subset of photons from the post-lens detector corresponding to photons that actually passed through the double slit, and thereby we can see which slit those photons went through. But, when the post-lens detector is at the focal plane, there's no chance of knowing which-slit information anyway, and so there's no need to segregate out those photons that actually passed through the slits. All photons detected behind the slits will be detected behind the lens as well. Therefore, coincidence counting is not necessary to see the interference pattern.
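To make concrete what the coincidence circuit actually does, here is a toy sketch (generic timestamp matching with an invented 1 ns window; this is NOT Dopfer's actual electronics). The dispute above reduces to whether the "kept" set below equals all of D2's hits or only a proper subset of them.

```python
# Toy model of coincidence counting (generic timestamp matching with an
# invented 1 ns window -- NOT Dopfer's actual electronics). A D2 event is
# "kept" only if some D1 event lies within the window.

def coincidences(d1_times, d2_times, window=1e-9):
    """Return the D2 detection times that have a D1 partner within `window` s."""
    d1 = sorted(d1_times)
    kept, i = [], 0
    for t2 in sorted(d2_times):
        while i < len(d1) and d1[i] < t2 - window:
            i += 1  # discard D1 events too early to ever match again
        if i < len(d1) and abs(d1[i] - t2) <= window:
            kept.append(t2)
    return kept

# If D1 catches every twin (the claim above for the focal plane), kept ==
# all of D2's hits and coincidence counting changes nothing; if D1 is narrow
# and misses twins, kept is a strict subset of D2's hits.
```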
 
Last edited by a moderator:
  • #63
guguma said:
So what I think is that consciousness has no effect on anything. Actually, this certainly demands another discussion, "What is Consciousness" and "What is a Conscious Entity," and personally I do not differentiate a bottle of scotch from a human being: both are made up of the same raw material, and only their functionality and their stable integrity differ, which does not put one or the other above in ranking in terms of natural law.

reilly said:
Is a car wreck a measurement? Could the act of peeling a banana be an anti-measurement? Could waves crashing on a beach be a measurement? I keep asking this question, but I have yet to get an answer.

I completely agree with everything that Reilly has said here, but I actually think I can see a sense in which the two of you are talking about different things. So let me see if I can elucidate a position that may represent a kind of common ground. We all agree that human science is a human endeavor (other intelligences no doubt have their own approaches-- we can laugh at the efforts of a dog, but somewhere in the cosmos is one that would blow our minds), and the goal of this endeavor is to achieve understanding and power in our relationship with our universe. As such, it happens in our brains, or in the mechanisms that our brains build or interpret-- the "measuring devices". So we decide what a measurement is, and in that sense they don't exist without us (reilly's point), yet once we have decided what a measurement is, we may find analogous processes happening without our consent (guguma's point). Whether or not we will call the analogous process a "measurement" could potentially create a lot of semantic confusion where there may or may not be any real disagreement. I would say that a measurement is a mechanism set up by an intelligence to couple a natural process (quantum or otherwise) of unknown behavior with a classical system of well-known behavior. The idea is to leverage what is known about the device into an understanding of the unknown natural system, and that leveraging occurs in the mind of an intelligence. But analogous processes can be found in nature, so it really doesn't matter if we call those measurements or not, we just have to be clear what we mean-- the distinction is only something more than semantic in quantum mechanical applications when other misconceptions are in place.

So what are those other misconceptions, in regard to quantum mechanics? They have to do with starting from a pure state of a subsystem, and having interactions with its environment that destroy the coherences that initially allowed us to see it as a pure state of one measurable and a superposition state of another, and instead force us to see it as a mixed state of the measurable connected with the nature of the decoherence that was set up. This final state is a real mess viewed quantum mechanically, because it is a projection of a much larger system that we don't even want to begin to consider, but this is not a loss, it's a win-- the mixed state of the final measurable is something we are fully comfortable with; it is the die that has been rolled but not looked at yet. So we might call it a "measurement" only if the die was rolled by us and looked at by us, or we might use a more general description that can happen naturally and does not have to be looked at; the fundamental issue is that our crucial participation in the act of doing science came when we intentionally destroyed those coherences so that we could fit the outcome into the "boxes" of scientific thought. We sought the mixed state; that was on us. This is what I believe reilly is saying as well.
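To put a minimal formula behind that picture, here is a toy two-level dephasing model (invented purely for illustration, not any specific physical system): suppressing the off-diagonal terms of the density matrix is exactly what turns the pure superposition into the "die rolled but not looked at" mixture.

```python
import numpy as np

# Toy dephasing model (illustration only, not a specific physical system).
# Pure state |psi> = (|0> + |1>)/sqrt(2); its density matrix has coherences
# (off-diagonal 0.5 entries). Decoherence suppresses those by a factor gamma.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # all four entries equal 0.5

def dephase(rho, gamma):
    """Scale the coherences by gamma; the populations (diagonal) are untouched."""
    out = rho.copy()
    out[0, 1] *= gamma
    out[1, 0] *= gamma
    return out

print(dephase(rho, 1.0))  # gamma = 1: still a pure superposition, can interfere
print(dephase(rho, 0.0))  # gamma = 0: diag(0.5, 0.5) -- the die has been rolled
                          # but not looked at: a classical 50/50 mixture
```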

In this view, the role of consciousness comes in when the quantum behavior is long gone-- when a brain checks in on what the die actually rolled. Physics never tells us that; it didn't classically when the level of mixing precluded it, and it won't quantum mechanically when we have intentionally applied suitable mixing, so there has always been and likely always will be an incompleteness to physics that can only be resolved by consciousness. But we dealt with that thousands of years ago when we first started thinking about our environment; quantum mechanics has nothing to add to this. That's what I think guguma is also saying-- there is no fundamental role for consciousness, except, as reilly might add, in the whole process of science itself.

So in terms of the word "measurement", I think guguma's point is that coherences get destroyed naturally too. We can reserve the term "measurement" for the intentional version if we choose, but either way, our goal is to use what happens in our experiments to understand what happens outside of our experiments, so we will always require some concept of what a hypothetical measurement is (if we imagine Maxwell's demon jumping in and doing a measurement, what would the outcomes be, etc.). So the common ground here is that the role of consciousness is quite sophisticated, and hard to express in a scientific theory that can never transcend the intelligence that made it. We need intelligence (and the perception of our own intelligence, which is what I think we mean by consciousness) to do the science that builds the concepts, and then we need it to get out of the way while we apply those concepts to consciousness-free systems.

I see this common ground as what has always been the fundamental paradox of science-- we are like parasites that try as hard as we can not to "kill" our hosts, the natural processes we wish to understand. If we do not interact with our hosts at all, we are too passively observing to be able to achieve much science, and if we interact too much, we can only understand the other hosts that are similarly infected. The best we can do is try to keep track of how we are affecting the host so we can in some sense "subtract" that influence when we need to. I wonder if a behavioral scientist studying gorillas in their natural habitat understands this paradox all too well.
 
Last edited:
  • #64
peter0302 said:
However, I believe that your viewpoint is absurd.
Then I submit you do not understand the viewpoint in the way he expressed it. Maybe one has to already see its validity before one can understand its meaning; that's a tricky problem we all face. I have encountered the same problem voicing something similar, and no doubt I've been on the other end as well. But I found the viewpoint to be convincing to the point of being virtually self-evident.
The other points in your post regarding our respective credentials are irrelevant to the discussion.
Although I agree that arguments here should stand on their own, the credentials are relevant to why you should suspect that you don't understand a viewpoint that you can view as absurd. One must admit, the range between "absurd" and "self-evident" is about as large as it can get-- a remarkable feature for a debate between intelligent and basically well-informed people.
 
Last edited:
  • #65
They are relevant to why you should suspect that you don't understand the viewpoint.
If you read my original post, it was in response to someone saying some philosophers believed an intelligence was required to collapse the wavefunction. I said anyone who thought that was "nuts." I stand by that, if not my choice of words. (By the way, we were talking in the context of the Zen Cultists who made "What the Bleep do You Know?")

Now, in Reilly's subsequent attempts to elaborate his position, he really says nothing of substance. He seems to ask what a measurement is, and suggests that human intellect might be required for measurement. Well, I suppose that's true, but I'm really not interested in rewriting dictionaries. Does any physical process at a subatomic level depend on whether a human consciousness is aware of it or not? That's my question. If a "professional physicist" believes the answer to that is yes, we have a serious problem. On that, I frankly can no longer tell what Reilly's position is, and his response to my statement (which was not even directed to him!) is so full of arrogance that I am not even interested in what he has to say.
 
  • #66
peter0302 said:
That's true of the detector behind the slits. That is not true of the detector at the focal plane. The whole point of putting the detector at the focal plane is to catch every photon. In that branch of the experiment, that detector stays stationary. See Figure 4.5(p.36) of the original paper. http://www.quantum.univie.ac.at/publications/thesis/bddiss.pdf
Figure 4.5 does not show the results, only a picture of the setup. Again, look at fig. 4 from this page, which comes directly from the thesis; the right side is for the "in focus" case, and the top graph on that page shows the upper detector, but you still see photons at a range of horizontal positions-- in the top-right case they're concentrated into two discontinuous peaks. The point of focusing the light is that it tells you the direction of photons at the upper detector based on the position where they hit, which, because of the entanglement, tells you the direction the photons at the bottom detector were traveling. If you vary the upper detector while keeping the bottom one fixed, and you do a coincidence count so that you ignore photon hits at the upper detector when there wasn't a corresponding photon hit at the bottom detector, then the graph for the upper detector will show two discontinuous bands, since you're only measuring upper-detector photons whose momentum was such that the entangled bottom-detector photons went through one of the slits instead of being blocked by the screen.

You can also see this in figure 4.8 on p. 38 of the thesis, showing that if the bottom detector D2 is held at a fixed position while the upper detector D1 has its position varied, if you graph the results for D1 over many trials (presumably only 'counting' hits where the bottom detector D2 also registered a hit) you should get two discontinuous peaks. And fig. 4.26 on p. 68 seems to show the actual experimental results for this case.
peter0302 said:
Now the next point is critical. There will be MORE photons detected behind the lens than behind the slits, regardless of coincidence counting. Why? The slits block a lot of photons. So by introducing coincidence counting, we are taking a subset of the photons behind the lens, but NOT a subset of the photons behind the slits.
I don't think that's correct, because again, the upper detector D1 behind the lens isn't picking up every photon--it's intentionally made narrow so it can only pick up photons at a certain location. This is what "Ben" in the message I quoted, who seems to have some familiarity with the thesis, is saying; and fig. 4.7 and 4.8 on p. 38 seem to confirm this, since the double-headed blue arrow looks like it's indicating that the position of the upper detector has to be varied to produce a graph of photon positions, even for the "in focus" case in fig. 4.8. Likewise, the bottom detector D2 behind the slits isn't picking up every photon that makes it through the slits--the double-headed blue arrow on D2 in fig. 4.5 on p. 36 and in fig. 4.6 on p. 37 seems to show that they have to vary the position of D2 to build up the pattern there, even in the "in focus" case in fig. 4.6. And here they are only "counting" hits at D2 that correspond to hits at D1 where D1 is held fixed at a single position; again, if D1 is narrow it won't pick up all the photons even in the "in focus" case. I imagine if you replaced D1 with a wide CCD that could pick up photons at a large horizontal range of positions, and then graphed all the hits at D2 that corresponded with hits anywhere on the CCD, the pattern at D2 would never show interference, even in the "out of focus" case.
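Just to illustrate the logistics of that with made-up numbers (nothing here is from the thesis): a narrow detector scanned across positions sees only a thin slice of the arrivals at each stop, so it must build the pattern over many trials and discards most photons, whereas a wide CCD records every arrival position in one pass.

```python
import numpy as np

# Made-up numbers, nothing from the thesis: compare a wide CCD (records every
# arrival position at once) with a narrow detector scanned across 50 stops
# (sees only a 0.12 mm slice, and only 1/50th of the running time per stop).
rng = np.random.default_rng(0)
arrivals = rng.normal(0.0, 1.0, 10_000)   # hypothetical arrival positions (mm)

# Wide CCD: one histogram of all arrivals.
ccd_counts, edges = np.histogram(arrivals, bins=50, range=(-3, 3))

# Scanned narrow detector: at stop k it sits at position x and only counts
# photons arriving during that stop's share of the run, within +/- 0.06 mm.
centers = edges[:-1] + 0.06
scan_counts = np.array([
    np.sum(np.abs(arrivals[k::50] - x) < 0.06)
    for k, x in enumerate(centers)
])
# Both histograms have the same shape, but the scanned detector has thrown
# away the vast majority of the photons -- the point being argued about D1.
```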
 
Last edited by a moderator:
  • #67
peter0302 said:
If you read my original post, it was in response to someone saying some philosophers believed an intelligence was required to collapse the wavefunction. I said anyone who thought that was "nuts." I stand by that, if not my choice of words. (By the way, we were talking in the context of the Zen Cultists who made "What the Bleep do You Know?"
I suspected it might be about that movie, and indeed I suspect that this discussion has taken on an adversarial air among people who probably agree on quite a lot of things-- like certain ridiculous elements of that movie. But there is room to disagree on other more technical but equally important issues-- and that is what I think is happening. As to the absurdity of the view that intelligence is needed to collapse a wavefunction, I see that issue as fraught with semantic peril, and that may be a large contributor to apparent disagreement that is not really there. I would say it all depends on what one means by "collapse". I think of the collapse as the destruction of coherences that allow a pure state of one observable to be a superposition state of another, rendering a 'collapsed' mixed state if the decoherence acts in the necessary way. This does not require intelligence, and it is where the quantum mechanics ends. Others say the "collapse" doesn't happen until the mixed state is further reduced by "noting the actual result", which does require an intelligence but has nothing directly to do with quantum mechanics. So an important but ultimately semantic confusion can result, and I'm not sure how much of that is behind what you are saying here.
Now, in Reilly's subsequent attempts to elaborate his position, he really says nothing of substance. He seems to ask what is a measurement, and suggests that human intellect might be required for measurement.
He is talking about the second type of "collapse", and that's where I agree with him. But if you are talking about the first type, then I can agree with both of you-- as long as we recognize that decoherence to result in a measurement is set up intentionally by an intelligence, even though analogous processes can happen naturally.
Does any physical process at a subatomic level depend on whether a human consciousness is aware of it or not? That's my question.
Are you talking about a process or an understanding of a process, and what is the difference?
If a "professional physicist" believes the answer to that is yes, we have a seirous problem.
Yes-- we'll need to work more on the meanings of our words. Rewriting dictionaries is quite essential, I'm afraid-- you can never rely on standard ones to do science.
On that, I frankly can no longer tell what Reilly's position is and his response to my statement (which was not even directed to him!) is so full of arrogance that I am not even interested in what he has to say.
I think he took the "nuts" remark personally. I hope you can both just leave that behind, I don't think you meant it personally, and I don't think he meant to be condescending, only frustrated that his positions were being discarded without sufficient consideration.
 
  • #68
JesseM said:
I don't think that's correct, because again, the upper detector D1 behind the lens isn't picking up every photon--it's intentionally made narrow so it can only pick up photons at a certain location.
No-- don't you see, in Fig. 4.5 of the original paper, D1 is fixed.

This is what "Ben" in the message I quoted, who seems to have some familiarity with the thesis, is saying; and fig. 4.7 and 4.8 on p. 38 seem to confirm this, since the double-headed blue arrow looks like it's indicating that the position of the upper detector has to be varied to produce a graph of photon positions, even for the "in focus" case in fig. 4.8.
Yes, yes, but figure 4.7 and 4.8 are a different variation of the experiment from figure 4.5. In 4.5 and 4.6, D2 is "fix".

Now look at Figures 4.23 through 4.26. For all of them, "D2 ist fix." Watch how the pattern slowly changes from an interference pattern to two Gaussian patterns as D2 is moved from the focal plane (which-path destroyed) to the imaging plane (which-path intact). That whole time, "D2 ist fix."

I imagine if you replaced D1 with a wide CCD that could pick up photons at a large horizontal range of positions, and then graphed all the hits at D2 that corresponded with hits anywhere on the CCD, the pattern at D2 would never show interference, even in the "out of focus" case.

That's the issue. I don't know if that's right.

Why the *BLEEP* hasn't anyone actually tested this?
 
  • #69
Take a look at the photon COUNT as it goes from figure 4.23 to figure 4.26. Goes way, way, way down per unit area, doesn't it?

I will bet anybody here a steak dinner that we've all got it backwards. The interference pattern will ALWAYS show up without coincidence counting. When D2 is moved to the imaging plane, a _subset_ of photons winds up being detected which forms two gaussian patterns corresponding to the known "which-path" information.

Wouldn't that result be perfectly consistent with the HUP and still not allow FTL communication?
 
  • #70
JesseM said:
I don't think that's correct, because again, the upper detector D1 behind the lens isn't picking up every photon--it's intentionally made narrow so it can only pick up photons at a certain location.
peter0302 said:
No, don't you see in Fig 4.5 of the original paper - D1 is fixed.
Of course it is--why do you think that contradicts my statement? I interpret it to mean they are looking at the subset of photons arriving at D2 for which their entangled twin went to that one fixed position that D1 is at (even though there'd be plenty of hits at D2 where there was no hit at D1, but there would have been a hit if D1 was replaced by a wider CCD).
JesseM said:
This is what "Ben" in the message I quoted, who seems to have some familiarity with the thesis, is saying; and fig. 4.7 and 4.8 on p. 38 seem to confirm this, since the double-headed blue arrow looks like it's indicating that the position of the upper detector has to be varied to produce a graph of photon positions, even for the "in focus" case in fig. 4.8.
peter0302 said:
Yes, yes, but figure 4.7 and 4.8 are a different variation of the experiment from figure 4.5. In 4.5 and 4.6, D2 is "fix".
You mean D1 is fixed. But why do you think that proves D1 isn't narrow? Figures 4.7 and 4.8 seem to show that D1 needs to be moved if they want to build up the pattern of photons at that location while keeping D2 fixed, which wouldn't be necessary if D1 was already wide enough to pick up all the photons coming through the lens.
peter0302 said:
Now look at Figures 4.23 through 4.26. For all of the "D2 ist fix." Watch how the pattern slowly changes from an interference pattern to two Gaussian patterns as D2 is moved from the focal plane (which-path destroyed) to the imaging plane (which-path intact). That whole time, "D2 ist fix."
You're confused-- it is D1 which is behind the lens and which is moved from the focal plane to the imaging plane. Look at figures 4.7 and 4.8, which show the upper detector D1 being moved from a distance f from the lens (focal plane) to a distance 2f (imaging plane), while the bottom detector D2 behind the double-slit is held fixed. The schematic graphs there correspond to the actual data in figures 4.23-4.26.
 
  • #71
peter0302 said:
Take a look at the photon COUNT as it goes from figure 4.23 to figure 4.26. Goes way way way down per unit area doesn't it?
Perhaps just because the detector where those hits are being registered is getting farther away?
peter0302 said:
I will bet anybody here a steak dinner that we've all got it backwards. The interference pattern will ALWAYS show up without coincidence counting. When D2 is moved to the imaging plane, a _subset_ of photons winds up being detected which forms two gaussian patterns corresponding to the known "which-path" information.
A subset of an interference pattern can't look like two gaussians! After all, at the minima of an interference pattern no photons are being detected, but there should be photons there in the sum of the two gaussians.
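A quick numerical check, using idealized textbook patterns with made-up parameters, makes the mismatch explicit:

```python
import numpy as np

# Idealized textbook patterns with made-up parameters. At a fringe minimum
# the two-slit intensity is exactly zero, while the sum of two Gaussians is
# not -- so no subset of the interference pattern can add up to two Gaussians.
x = np.linspace(-3, 3, 1201)
envelope = np.exp(-x**2 / 2)                     # single-slit envelope (arb. units)
interference = envelope * np.cos(np.pi * x)**2   # two-slit pattern with fringes
two_gaussians = 0.5 * (np.exp(-(x - 1)**2) + np.exp(-(x + 1)**2))

i_min = np.argmin(np.abs(x - 0.5))               # x = 0.5 is a fringe minimum
print(interference[i_min])    # ~0: essentially no photons arrive here
print(two_gaussians[i_min])   # ~0.44: the two-Gaussian pattern predicts photons
```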
 
  • #72
Sorry, you're right I meant to say "D1 is Fix". So we're looking at, Figs. 4.5, and 4.18-4.21. You're also right that there's no way you can pull 4.21 out of 4.18 (I thought, erroneously, you could pull 4.26 out of 4.23). Can I retract my steak dinner bet? :)

HOWEVER, what I haven't changed my mind on is the original point.

Of course it is--why do you think that contradicts my statement? I interpret it to mean they are looking at the subset of photons arriving at D2 for which their entangled twin went to that one fixed position that D1 is at (even though there'd be plenty of hits at D2 where there was no hit at D1, but there would have been a hit if D1 was replaced by a wider CCD).
The "subset of photons arriving at D2 for which their entangled twin went to that one fixed position that D1 is at" would be ALL of the photons that strike D2. That is why putting a CCD there instead of a narrow-band detector should not change the result.

But why do you think that proves D1 isn't narrow? Figures 4.7 and 4.8 seem to show that D1 needs to be moved if they want to build up the pattern of photons at that location while keeping D2 fixed, which wouldn't be necessary if D1 was already wide enough to pick up all the photons coming through the lens.
D1 is narrow, but the focal point of a lens is a POINT, so it doesn't matter how narrow D1 is. And, again, I am NOT talking about figures 4.7 and 4.8. Those are a different mode of the experiment.

You're confused-- it is D1 which is behind the lens and which is moved from the focal plane to the imaging plane...
Yes, I was citing the wrong figures. Look at figures 4.5, and 4.18-4.21. They all show results at D2 when D1 is fixed in the x direction but moved from the focal point to the imaging plane. This setup is the critical one which I contend does not require the coincidence circuit.
 
  • #73
And I'd also add that this is precisely what Cramer is trying to do...
 
  • #74
peter0302 said:
Sorry, you're right I meant to say "D1 is Fix". So we're looking at, Figs. 4.5, and 4.18-4.21. You're also right that there's no way you can pull 4.21 out of 4.18 (I thought, erroneously, you could pull 4.26 out of 4.23). Can I retract my steak dinner bet? :)
Steak dinner? What steak dinner? ;)
peter0302 said:
The "subset of photons arriving at D2 for which their entangled twin went to that one fixed position that D1 is at" would be ALL of the photons that strike D2. That is why putting a CCD there instead of a narrow-band detector should not change the result.
But do you agree that wouldn't be true if D1 was narrow? After all, even when D1 is on the imaging plane, you can see from the graph that photons can arrive at a number of positions (the two sharp peaks at different locations), so if you fix D1 at one position, there can be cases where a photon is registered at D2 but the corresponding photon misses D1, since it goes to a different position in D1's plane. You seem to be assuming that D1 is wide enough that it will catch all incoming photons, but the post by Ben says that's incorrect, and the fact that they show a double-headed blue arrow in D1's plane when the position of D2 is fixed in fig. 4.7 suggests it's incorrect as well (if D1 was already catching all incoming photons, what need would there be to move it around?)
peter0302 said:
D1 is narrow, but the focal point of a lens is a POINT, so it doesn't matter how narrow D1 is. And, again, I am NOT talking about figures 4.7 and 4.8. Those are a different mode of the experiment.
In what relevant way is it different? Do you deny that in figure 4.7, D1 is exactly at the focal distance, and when its position in the horizontal plane is varied it picks up photons at a range of locations? I don't see why we should expect all the light to be focused at a single point anyway--in classical optics a lens will only focus light perfectly at a point if all the light rays are coming in perfectly parallel, but in the quantum experiment there's some uncertainty in the momenta of the photons.
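For what it's worth, the standard paraxial ABCD-matrix bookkeeping makes this point cleanly (textbook ray optics with an invented focal length, nothing from the thesis): in the back focal plane the arrival position depends only on the incoming angle, x = f*theta, so any spread in photon momenta spreads the hits across that plane.

```python
import numpy as np

# Textbook paraxial ray optics with an invented focal length, nothing from
# the thesis. A ray is (height x, angle theta); a thin lens of focal length
# f and a drift of length d act as 2x2 matrices. Lens followed by a drift of
# f gives x_out = f * theta: focal-plane position depends ONLY on the angle.
f = 0.1                                          # focal length, m (illustrative)
lens  = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
drift = np.array([[1.0, f], [0.0, 1.0]])
to_focal_plane = drift @ lens                    # hit the lens, then travel f

for x0, theta0 in [(0.01, 0.0), (-0.02, 0.0), (0.0, 0.005), (0.01, 0.005)]:
    x_out, _ = to_focal_plane @ np.array([x0, theta0])
    print(f"in: x={x0:+.3f} m, theta={theta0:+.4f} rad -> focal plane x={x_out:+.5f} m")
# Parallel rays (theta=0) all land at x=0; rays with theta != 0 land at
# x = f * theta, so a spread in momenta covers a range of focal-plane positions.
```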
peter0302 said:
Yes, I was citing the wrong figures. Look at figures 4.5, and 4.18-4.21. They all show results at D2 when D1 is fixed in the x direction but moved from the focal point to the imaging plane. This setup is the critical one which I contend does not require the coincidence circuit.
I think you're wrong that in fig. 4.5, every hit at D2 would have a corresponding hit at D1. It's clear from fig. 4.7 that even when D1 is at the focal distance, photons can hit it at a range of horizontal positions.
 
  • #75
What I don't understand is, why is there so much buzz about experiments like this-- do they not always confirm the predictions of quantum mechanics? So we can focus on the predictions of the theory, and look at why they come out as they do in the theory itself, that's where the insights are-- we only need the experiments to tell us we can do that, and I'm pretty much good on that already, frankly. It's not as if we're all expecting quantum mechanics to fail when it "seems too bizarre to be right", but lo and behold, the experiments say it's right. Speaking for myself, I always expect quantum mechanics to be correct in every situation that it makes a prediction! So I hear of experiments like this and just say "yup, right again", and I'm done with it. Indeed, we could come up with more and more bizarre thought experiments with counterintuitive results, and if the experiment is actually realizable (unlike abominations like the cat paradox), then the problem is always going to be with our intuition, of course. What's the big deal?
 
  • #76
reilly said:
You call me nuts, and I thus dismiss your post -- except for this particularly offensive comment. It's clear that you are interested in physics, and know just enough to be dangerous. If you want to learn, ask questions -- like "what is an observation? Can non-humans make observations?" Try to learn enough to read Dirac -- learn about QM in practice, from the hydrogen atom to basic radiation theory. Then revisit interpretations -- and make sure you study the Peierls (Wigner) approach -- both are Nobel prize winners; they postulate that the wave function collapse occurs as the neural networks in the brain provide the single answer from a measurement; that is, the wave function describes your state of knowledge.

Certainly, abrasive language can be offensive and distract from the discussion, so I'll try to keep to the point. What's your opinion on the question you mentioned: "Can non-humans make observations?" If, for example, an experimental setup is fully automatic, and the results are stored by the computer on a hard disk, do you think these results can change when a human reviews them?
By the way, as far as I know, Peierls was not a Nobel prize winner. Certainly, this does not make him a less respected physicist. However, I am not aware of any experimental confirmation of his postulate that "the wave function collapse occurs as the neural networks in the brain provide the single answer from a measurement". However big an authority on quantum mechanics Peierls may be, I don't think I have any moral obligation to agree with him -- not because I disrespect such an authority, but for the simple reason that such people as Einstein, de Broglie, Schroedinger, and others disagreed with such thinking. As for the Born rule, I like the article http://arxiv.org/abs/quant-ph/0702135, where an analysis of an exactly solvable model of spin measurement shows that this rule may emerge from thermodynamic irreversibility.
 
  • #77
Ken G said:
I am maintaining that we have not the least experimental justification to require that "the "axioms of quantum mechanics" must apply to macro systems!

True, but the question is actually this one: is it *thinkable* (can we find a vision, a picture, a consistent toy world in which it can be done)? Or are we SURE now that quantum mechanics doesn't apply to macro systems?

In other words, is there an experiment that proves without any doubt that quantum mechanics CANNOT describe macro objects, or is there still a possibility that it can?

I think (I might historically be wrong, I'm no expert, I only know the common myths :smile:) that Schroedinger's observation tried to show that *evidently* quantum mechanics is not applicable -- gives rise to absurdities, wrong results -- when applied to a cat. I think he was wrong, in that things are more subtle, and decoherence shows a way to arrive at consistent views, all respecting quantum mechanics.

So the question is not: did an experiment show that a macro object DID do something 'non-classical' and purely quantum mechanical, but rather, was there an experiment that FALSIFIED a prediction of quantum mechanics concerning macroscopic objects.

And inasmuch as the first has not been done with things like cats (and probably never will be -- although I have my ideas about that), it hasn't been shown either that quantum theory gives DIFFERENT results from what is observed.

In other words, *we don't know* to what extent quantum mechanics "really applies" to macroscopic objects. It's an open question.

Now, you point out that perhaps we are not requiring this, we are choosing it-- but if that's true, why are we choosing this is if it is not required? Where do we benefit from this choice if it is not forced on us by nature?

Because nature would be SIMPLER if quantum mechanics were just universally valid! We would have a unique, universal set of principles. Now, I will hasten to add that this is probably dreaming out loud, because probably our current theories are still approximations to future theories, which will be approximations to even more future theories, etc...

However, we DON'T KNOW what the scope of quantum theory is as of now. We don't know how universal it is. Gravity is in any case a pain, so this might be an indication of a fundamental problem. But for all we know, we cannot be sure that certain principles of quantum theory, in current or modified form, are NOT valid on macroscopic scales. We have no indication either way.

Right, I see that we are on the same page-- we are playing with "toy worlds" here, but the issue on the table is which one best describes the real world in a given situation. As I have never seen anyone meaningfully apply quantum mechanics to the state of a cat as a whole, I claim that is a clear case of using the wrong "toy world".

But what gives priority to classical physics? What if quantum mechanics (as decoherence seems to show) REDUCES to observable effects which are identical to classical physics? Why should we then say that classical physics is right and quantum physics is wrong? Calculationally, I agree, classical physics is way easier to deal with. But why should classical physics have priority over quantum physics conceptually -- which raises the problem of the transition between the two?

As it turns out, we do not suffer much from this problem. Nevertheless, it is a real problem, and your solution will not handle it any better than mine. Indeed, even the theory of large nuclei does not follow the approach you are suggesting! The first step always looks something like "well we can't really solve quantum mechanics for this system, so here's what we do instead".

Yes, but that is
1) for practical purposes
2) not a contradiction.

Indeed, we know that from the moment the entangled states are complex enough, probably no observation will give any interference effects, and from that moment on, we will get IDENTICAL results between a semi-classical approach and a full quantum approach. As the former is practically much easier to handle than the latter, we of course prefer to do the former. This is what happens in much of quantum chemistry too. From the moment that explicit interference has become "unobservable" (that is, hidden in very high-order correlation functions which are never observed), you can switch to a semi-classical approach with probability distributions.

But again, it is not a proof of the *inapplicability* of quantum mechanics as a principle. On the contrary. It is where quantum theory becomes identical to classical theory.

Certainly you can start somewhere, and see where it gets you, that's an excellent way to do science. But the question is, where does this get you, in regard to a cat, or in regard to wavefunction collapse? Are we trying to motivate actual new observations, or are we trying to satisfy ourselves that we in some sense understand the outcomes of impossible ones? What gets us somewhere is the mindset that says we are coupling quantum systems to macro systems expressly because we can rely on the macro system to act classically, which our brains like and we can actually call it a "measurement". What other kinds of experiments can we do? Given that, where is the gain for us in treating our macro system quantum mechanically, and why did we need a macro system involved in the first place if we were just going to treat it quantum mechanically?

On the practical, applied side, I agree with you. But the point is, if you insist on the inapplicability of quantum mechanics to macrosystems, then you are going to look for a *transition* theory -- the "real" theory that will describe what happens when a system switches from quantum theory to classical theory (which are then "asymptotic" approximations to a more complete framework).
However, if it turns out that quantum mechanics IS actually valid "all the way up", then you will have excluded a whole scope of possibilities, and you will be looking for an entirely wrong theory.

In other words, you have excluded too soon a theory that was not really falsified.

And there's another reason to play with a toy world in which to take your theory totally seriously (far beyond its proven domain of applicability): you get a good feeling for the machinery of the theory. You get a good understanding of what exactly the axioms imply - whether this corresponds to the real world or not.
 
  • #78
JesseM said:
But do you agree that wouldn't be true if D1 was narrow?
No, I don't agree. When D1 is placed at the focal point, all photons incident normal to the focal plane pass through the focal point. So, the detector at the focal point should register _every_ photon that passes through that lens as long as the photons are sufficiently perpendicular to the lens, which the diagrams certainly imply, and which should certainly be possible to do as a practical matter.

After all, even when D1 is on the imaging plane, you can see from the graph that photons can arrive at a number of positions (the two sharp peaks at different locations), so if you fix D1 at one position, there can be cases where a photon is registered at D2 but the corresponding photon misses D1, since it goes to a different position in D1's plane.
That's absolutely true when D1 is at the imaging plane. That is why the coincidence circuit is indeed required to see the gaussian pattern. But that's not true when D1 is at the focal _point_.

You seem to be assuming that D1 is wide enough that it will catch all incoming photons, but the post by Ben says that's incorrect, and the fact that they show a double-headed blue arrow in D1's plane when the position of D2 is fixed in fig. 4.7 suggests it's incorrect as well (if D1 was already catching all incoming photons, what need would there be to move it around?)
In Figure 4.7, they also show photons that are clearly not collimated and normal to the lens. So a detector fixed at the focal point would not pick up every photon that passes through the lens. In Figure 4.5, by contrast, the photons are clearly shown to be collimated, normal to the lens, and all passing through the focal point.

In what relevant way is it different? Do you deny that in figure 4.7, D1 is exactly at the focal distance, and when its position in the horizontal plane is varied it picks up photons at a range of locations? I don't see why we should expect all the light to be focused at a single point anyway--in classical optics a lens will only focus light perfectly at a point if all the light rays are coming in perfectly parallel, but in the quantum experiment there's some uncertainty in the momenta of the photons.
So I think we're starting to hit the issue. Perhaps you're right that the setup in 4.5 and 4.7 is the same, except that in 4.5, where D1 is fixed at the focal point, they're only dealing with a specific subset of photons which happen to be normal to the focal plane. Perhaps they're not actually generating collimated beams of entangled photons. So you're saying in either case the coincidence circuit is required to pick out the subset of photons for which position information is utterly impossible to obtain, thereby generating the interference pattern behind the slits.

However, I still think this could be done without a coincidence circuit. Shouldn't there be a way to collimate both beams of photons without a coincidence circuit? Then collimating the beams would accomplish the same thing, forcing all of the photons incident on the lens to strike D1, have position information destroyed, and therefore have the photons detected by D2 exhibit interference.

Incidentally, the way Cramer destroys the which-path information is the problem in his experiment. He's using half-silvered mirrors, I believe. These change the phase of the photons. Just like in DCQE, what he'll wind up with is two out-of-phase interference patterns that combine to a perfect gaussian pattern. So indeed, he will always see a gaussian pattern unless he uses a coincidence circuit. But I think using a Heisenberg lens like Dopfer's doesn't present the same problem.

I think you're wrong that in fig. 4.5, every hit at D2 would have a corresponding hit at D1. It's clear from fig. 4.7 that even when D1 is at the focal distance, photons can hit it at a range of horizontal positions.
Right, unless the photons are normal to the focal plane. So that seems to be the issue then.
 
  • #79
reilly said:
I wrote what I did only after some serious consideration. Physics is about describing nature. If you do your homework, you will find this idea goes back at least to the Greeks. Newton's Laws are computational recipes, just like the Schrodinger Eq. The only difference between the two is that Newton's ideas have been around for much longer than the Schrodinger eq. The consequence of that is that we have had several centuries to understand the descriptive power of Newton. So we have built a common intuitive consensus that we understand Newton, which is a huge difference from before Newton.

What's wrong with computational recipes?
Ask yourself exactly how it is you understand Newton?

At least some of the time people go on and on about something that was settled 20 years ago -- virtual particles are a good example. Spend a little time checking out your thoughts about something -- do a Google search. As a retired professor I say, as I did to my students: do your homework and more. Those that do learn more, and make informed posts, which generally elicits more good stuff. How much time, guguma, have you spent understanding the concept of measurement in QM -- yours is a view not commonly held? There's 80 years of history to consider. Not for a moment do I think that humans are necessary to keep the universe ticking, as you put it. Perhaps a drunk could consider a bottle of scotch in anthropomorphic terms, and I suspect that only folks going to AA might concur.

Is a car wreck a measurement? Could the act of peeling a banana be an anti-measurement? Could waves crashing on a beach be a measurement? I keep asking this question, but I have yet to get an answer.

Consciousness has big effects on many things. This forum would not exist without consciousness. I'm sure that you can think of other things. And, when was the last time you saw bottles of scotch playing tennis, or going to school?

I'm literally dying to know how my sofa makes measurements, how do my pots and pans, stored so that they are in contact, make measurements, how does my hair make measurements, how does the sun make measurements?

Then there's a second round of questions: what do these various things measure, how do they do it, and how can I know? I've asked this question also, without any answers. I hope that my questions will be answered, not ignored.
Regards,
Reilly Atkinson

Prof. Atkinson,

First of all, I want to state that I am not taking an offensive stance against you, nor trying to mock you. It is easy to seem to be saying offensive things on a forum, but I can tell you that I am not. I am just trying to express my opinion on the matter, and I am very well aware that I have to do much (and much and much) homework; thus I am not questioning your experience, which must be vast compared to mine. I will try to express my whole opinion on this issue now, including consciousness, and all I ask of you is to read it sincerely and discuss it with me rather than questioning my experience, because the best way I can do my homework is to discuss with people who have more experience on the matter. And if my arguments are outright falsifiable, that is wonderful: I will be happy to be falsified, so that I can take one step further and learn to think otherwise.

First of all I want to state my opinion on human consciousness:

I think consciousness is far too overrated in every area of academics, and the only academic endeavor I have come across which does not overrate human consciousness is the biological sciences (especially genetics).

If we look at a human being as a whole, humans are no different from input-output mechanisms. We take a certain input and respond with an output to it. This mechanism evolved through a process of evolution and natural selection, slowly and step by step, and in the end we have this very complex neural network (which both of us should agree is where we think our consciousness comes from) and a bunch of other networks responsible for the continuation of this neural network's functions (simply put, keeping it alive). This neural network is especially efficient at depth and object recognition; it wonderfully organizes a serious amount of EM wave input and separates different objects. We can separate two objects on top of each other by color, shape, size, etc. We should also agree that this neural network must have a memory-like structure to be able to do this analysis. I am very well aware that there is no concrete scientific explanation as to how memory is maintained, but certain disorders show that people are able to lose this memory structure completely, or partially. We can deduce that if this neural network is given a certain input, the first thing it does is to compare it to other inputs in its memory and conclude its output based on this comparison. So when a curved and closed object, especially one whose outer boundary is a different color and of constant radius, is recognized by our eyes, we compare it to our previous input and group it into the circles, balls, spheres section of our memory.

Every other function of this mechanism -- fears, love, hate, happiness, joy, AND thinking and consciousness -- is pretty much the same kind of thing. If nothing in the universe moved, I mean nothing, including ourselves and every tiny bit of our composition too (just imagine it), would we have a consciousness of time? I do not think so, because time is actually a ratio of motions, and we standardize it against a particular reference object to make things easier. So I did not find it very surprising (not that it came to my mind first or anything) when I learned in special relativity that space and time are not separable and that our calculations need to be corrected, especially for fast-moving objects.

Does a naturally blind person have any consciousness of color? Yes, this person can see colorized objects in his dreams (this is proven, I believe), but which color is which? How does his neural network paint images, and what sort of images does this person see? Of course he can see certain images and paint them, but they will not be based on input from the environment; thus this person will have a different consciousness than a person who sees and who can associate and categorize his input.

Let's compare this to a CPU. Today's CPUs are terrible at image and depth recognition, but they are wonderful at calculations. A CPU is designed by humans, and humans are designed by nature; thus both of them are designed by nature (through physical processes -- I do not at all mean "Intelligent Design"). So a CPU takes an input and processes it according to its hardcoded or softcoded functions and gives an output. A CPU is awesome at one function and a human is awesome at another; actually, humans are so awesome that they do not even recognize that they are input-output machines, and start thinking that they have this "consciousness" which is somehow different from a neural network + memory + functions.

There is an even deeper natural selection in the universe than genetic natural selection, on the particle (field, string, whatever) scale. Why fewer antiparticles? The exact answer is unknown, but the general answer is that there is a physical symmetry breaking in nature which favors particle structure over antiparticle structure. Why are H atoms abundant but not uranium? Again, a physical process. Why do certain molecules combine while others do not? A physical process. Why does a certain gene (or RNA, or DNA, or bacterium) survive in the gene pool? Physical processes. Just like the particle-antiparticle selection, nature has a tendency to preserve these stable genes. This goes on a while, and we get a human being, which is extremely complex but still composed of particles; it merely has a certain stability associated with it and an input-output mechanism which makes this stability continue. Consciousness is a complex process of memory + input-output functions, but it is still an input-output process; there is nothing special to it.

Now let's go to QM:

I think we both agree that what QM told us (actually, reminded us) is this: "We cannot talk about or predict the behavior of a single physical process, because you have to have a certain input to talk about a process. You see a moving ball, and what you talk about is the moving ball and the input you got. You shoot photons through a double slit, and you talk about the photons + the double slit + screen, or the photons + the double slit + detectors behind the slits + the screen, and it is no surprise that the two have different outputs."

So it is the detector that changes the output, and thus the input you got. If you were absent from these experiments but made a computer draw and print the interference images for you, would you, on looking at the two different printouts, say "It must be the computer's consciousness which made this happen; if I had been present it would be different due to my consciousness"? I do not think so. THIS IS THE IMPORTANT PART.

What I am advocating is that measurement is an interaction in itself, whether it produces an input for you to interpret or not. We say measurement collapses the wave function, but the measurement is the interaction between the detector and the detected; in some cases that may be your hair or your eyes, and in other cases it is a Geiger counter and a photon. Or a bottle of scotch and sunlight.

You prepare a quantum system and say it can provide only two eigenvalues; when you do something to it, it provides only one of them. What happened? You did not know exactly how your system was (and this is not incompleteness of QM -- it is simply impossible for you to know, because to know you have to measure it, thus make it interact with something else, thus disturb it and see the disturbed output). You made it interact with something else, and that interaction made it collapse, telling you that after this interaction its energy is X.
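Here is a minimal numerical sketch of that point (textbook Born-rule sampling; the state and eigenvalues are arbitrary illustrations, not any real system): before the interaction all you can list is the spectrum of possible outcomes and their statistics, and any single interaction hands you just one of them.

```python
import numpy as np

# Textbook Born-rule sampling; the state and eigenvalues are arbitrary
# illustrations, not any real system. Before the interaction, all you can
# list is the possible eigenvalues; each "measurement" returns one of them.
rng = np.random.default_rng(42)
state = np.array([0.6, 0.8])            # prepared state in the measurement basis
eigenvalues = np.array([-0.5, +0.5])    # the only values an interaction can yield
probs = np.abs(state) ** 2              # Born rule: p = |amplitude|^2

outcomes = rng.choice(eigenvalues, size=10_000, p=probs)
print(np.mean(outcomes == -0.5))        # ~0.36, i.e. |0.6|^2
print(np.mean(outcomes == +0.5))        # ~0.64, i.e. |0.8|^2
```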

Conclusion:

1. Consciousness is nothing special
2. One can only deduce conclusions about physical systems that have interacted.
3. No one knows the exact state of a prepared system; one can only provide the set of possible outcomes which can come out after an interaction (the eigenvalues), and every interaction disturbs the system, so you only ever talk about the interacting systems.
 
Last edited:
  • #80
peter0302 said:
No, I don't agree. When D1 is placed at the focal point, all photons incident normal to the focal plane pass through the focal point.
This would be true for light rays in classical optics, but I'm not so sure it would be true in QM--maybe if you had measured/filtered all the photons to make sure they had parallel momentum beforehand. But in this setup I don't think there was anything done to them to ensure that they'd have parallel momentum, and if you look at fig. 4.3 on page 30 (which seems to show a setup using a lens to measure non-entangled particles going through a double-slit, but the idea of what the lens is supposed to do seems pretty similar to the actual experiment), the rays going to the imaging plane (red lines) are actually coming in at very different angles; in the actual experiment, the subset of photons going to the upper detector D1 that had the right momentum so their twins went through the double slit and registered at D2 would have to have hit the upper lens at the same sort of angle as seen in fig. 4.3, I think. Therefore there's no reason to think these photons would be focused in the focal plane, they aren't coming in parallel like the blue lines in fig. 4.3.

By the way, the link to the paper again is http://www.quantum.univie.ac.at/publications/thesis/bddiss.pdf , if anyone is trying to follow this discussion but lost track.
peter0302 said:
So, the detector at the focal point should register _every_ photon that passes through that lens as long as the photons are sufficiently perpendicular to the lens, which the diagrams certainly imply, and which should certainly be possible to do as a practical matter.
I don't think the diagrams imply that at all; fig. 4.3 seems to imply something quite different about the point of what the lens is supposed to do.
JesseM said:
After all, even when D1 is on the imaging plane, you can see from the graph that photons can arrive at a number of positions (the two sharp peaks at different locations), so if you fix D1 at one position, there can be cases where a photon is registered at D2 but the corresponding photon misses D1, since it goes to a different position in D1's plane.
peter0302 said:
That's absolutely true when D1 is at the imaging plane. That is why the coincidence circuit is indeed required to see the gaussian pattern. But that's not true when D1 is at the focal _point_.
Sorry, I meant to say "even when D1 is on the focal plane". And my mistake carried over to the graphs: the case where D1 is on the focal plane shows an interference pattern rather than two sharp peaks (as depicted in the diagram of fig. 4.7 on page 38), so this shows that photons are arriving at a range of horizontal positions in this plane.
peter0302 said:
In Figure 4.7, they also show photons that are clearly not collimated and normal to the lens. So a detector fixed at the focal point would not pick up every photon that passes through the lens. In Figure 4.5, by contrast, the photons are clearly shown to be collimated, normal to the lens, and all passing through the focal point.
Ah, I see what you mean. But what is different in the setup that ensures the photons are collimated? I would guess that the only way they ensure this is by placing D1 at the focal point and then ignoring all the hits at D2 that don't correspond to a hit at D1--coincidence counting, in other words. They didn't do anything different to the beam of photons beforehand to make sure they were all collimated; it's just that they only paid attention to the ones that ended up at the focal point, which they can retroactively say must have been coming in parallel to the plane. If I'm right about that, then this would mean you're wrong that all hits at D2 will also have a corresponding hit at D1--there'd be plenty of cases where D2 registered a hit but it was thrown out because the twin didn't hit D1, due to it not coming in parallel to the focal plane.
peter0302 said:
So I think we're starting to hit the issue. Perhaps you're right that the setup in 4.5 and 4.7 is the same, except that in 4.5, where D1 is fixed at the focal point, they're only dealing with a specific subset of photons which happen to be normal to the focal plane. Perhaps they're not actually generating collimated beams of entangled photons. So you're saying in either case the coincidence circuit is required to pick out the subset of photons for which position information is utterly impossible to obtain, thereby generating the interference pattern behind the slits.
Yes! I wasn't saying that originally, but it's the conclusion I came to after seeing your point about the photons shown coming in parallel in fig. 4.5, before reading the paragraph above...great minds think alike!
peter0302 said:
However, I still think this could be done without a coincidence circuit. Shouldn't there be a way to collimate both beams of photons without a coincidence circuit? Then collimating the beams would accomplish the same thing, forcing all of the photons incident on the lens to strike D1, have position information destroyed, and therefore have the photons detected by D2 exhibit interference.
I don't know if there'd be a way to collimate them except by some kind of filter which blocks photons that are coming in at the wrong angle--but blocking photons in one beam wouldn't block the corresponding photons in the other beam, so you'd still need coincidence-counting. I suppose if you filtered both beams, then since they are entangled by momentum, ideally any time one photon made it through the filter its entangled twin would as well? If that was possible then you'd have a point, in a setup like 4.5 it would seem like every time you had a hit at D2 you'd also get one at D1. But I'm not sure if there's any way to do this sort of filtering.

If it is possible, then it does seem like every hit at D2 should correspond to a hit at D1, and that therefore the total pattern of photons at D2 would show interference if the beam going to D2 is filtered in this way. I'm not sure this is actually a problematic conclusion, though. Even if you then move D1 to the imaging plane and remove the filter from the beam at D1, I don't think there'd be any way to use the hits at D1 to determine the which-path info for the hits at D2--as you can see from the way the red curves representing the beam are depicted in fig. 4.8, and the red lines representing particles focused into two distinct points are shown in fig. 4.3, using the lens to determine the which-path info crucially depends on looking at a subset of photons that do not come in parallel to the axis of the lens. So if you take the subset of hits at D1 which also correspond to hits at D2, in the case where the D2 beam was filtered so the photons were collimated, I don't know if you'd still get those two distinct peaks in this subset at D1 which allow you to determine which slit the photons at D2 went through. Perhaps it's a position-momentum uncertainty issue--collimating the beam at D2 means you are confining the photons to a narrow range of momenta, which means the corresponding subset of hits at D1 must also be confined to the same narrow range, and that may destroy the possibility of measuring the photons at D1 in such a way as to gain precise position information about which slits the photons at D2 went through.
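That momentum/position trade-off can at least be sanity-checked numerically for a single transverse wave-packet: squeeze the spread in k (collimate) and the spread in x blows up, with the product pinned at the Gaussian minimum. A rough sketch in arbitrary units, with grid parameters I picked purely for illustration:

```python
import numpy as np

# Position grid and the matching FFT momentum (k) grid.
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = abs(k[1] - k[0])

for sx in (1.0, 5.0, 25.0):                  # increasing position spread
    psi = np.exp(-x**2 / (4 * sx**2))        # Gaussian amplitude, rms width sx
    prob_x = np.abs(psi)**2
    prob_x /= prob_x.sum() * dx              # normalize as a density
    prob_k = np.abs(np.fft.fft(psi))**2
    prob_k /= prob_k.sum() * dk
    sigma_x = np.sqrt((prob_x * x**2).sum() * dx)
    sigma_k = np.sqrt((prob_k * k**2).sum() * dk)
    print(f"sigma_x = {sigma_x:6.2f}  sigma_k = {sigma_k:.4f}  "
          f"product = {sigma_x * sigma_k:.3f}")
# product = 0.500 every time: narrowing the momentum spread (collimation)
# necessarily widens the position spread, and vice versa.
```

Of course this says nothing by itself about the two-photon entanglement; it's only the single-particle Fourier fact the argument leans on.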
 
  • #81
JesseM said:
Perhaps it's a position-momentum uncertainty issue--collimating the beam at D2 means you are confining the photons to a narrow range of momenta, which means the corresponding subset of hits at D1 must also be confined to the same narrow range, and that may destroy the possibility of measuring the photons at D1 in such a way as to gain precise position information about which slits the photons at D2 went through.
Incidentally, after doing a little searching on the subject, it seems the setup shown in fig. 4.3 of the thesis is known as a "Heisenberg microscope", and it's understood that such a lens can allow you to retroactively determine either the position or the momentum that incoming photons had prior to hitting the lens, depending on whether you measure them in the image plane or the focal plane--see p. 49-50 of the book. A thought-experiment involving such a microscope actually played an important role in the conceptual development of Heisenberg's uncertainty principle, helping to answer the question which introduces that article: "Are the uncertainty relations that Heisenberg discovered in 1927 just the result of the equations used, or are they really built into every measurement?"
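To round out the earlier ray sketch: move the screen from the focal plane to the image plane and the same toy lens records incoming position instead of incoming angle. Again a minimal paraxial sketch with my own numbers, not the thesis's (helper functions repeated so it runs standalone):

```python
# Toy paraxial ray tracing again: a ray is (height x, angle theta).
def propagate(x, theta, d):
    return x + d * theta, theta

def thin_lens(x, theta, f):
    return x, theta - x / f

f, s = 0.1, 0.2                 # focal length; object-to-lens distance
s_img = 1 / (1 / f - 1 / s)     # image distance from 1/s + 1/s' = 1/f (= 0.2 m)

# Two rays leaving the SAME object point x0 = +0.01 m at DIFFERENT angles:
x0 = 0.01
for th0 in (-0.05, 0.03):
    x, th = propagate(x0, th0, s)      # object plane -> lens
    x, th = thin_lens(x, th, f)
    x, th = propagate(x, th, s_img)    # lens -> image plane
    print(f"launch angle {th0:+.2f} rad -> image-plane position {x:+.4f} m")

# Both rays land at x = -0.0100 m (unit magnification, inverted image):
# the image plane records the incoming POSITION and erases the angle,
# which is why a detector there can in principle reveal "which slit".
```

So in this idealized picture the lens really does offer the retroactive either/or: angle (momentum) in the focal plane, position in the image plane, never both at once.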
 
  • #82
That could be the answer. A _collimated_ beam will _always_ produce an interference pattern because momentum will be (relatively) certain, so position will be uncertain.

A non-collimated, scattered beam, on the other hand, will be a totally random mix of position-certain (two nice gaussian patterns), momentum-certain (interference pattern), and neither-certain (a blob in between) photons, which together create one single gaussian pattern.

Have we resolved the question finally?
 
  • #83
peter0302 said:
That could be the answer. A _collimated_ beam will _always_ produce an interference pattern because momentum will be (relatively) certain, so position will be uncertain.

A non-collimated, scattered beam, on the other hand, will be a totally random mix of position-certain (two nice gaussian patterns), momentum-certain (interference pattern), and neither-certain (a blob in between) photons, which together create one single gaussian pattern.

Have we resolved the question finally?
It seems plausible that the total pattern of photons going through the double slit will form an interference pattern if the beam is collimated, since in this case we'd expect that if the upper detector D1 was placed at the focal point of the lens, all the photons in the upper beam corresponding to photons that made it through the collimation filter of the lower beam would have the right momentum to be focused onto the focal point--although I'm still not totally confident. The interesting case, and the one where I'm even less confident, is when the lower beam going through the double-slit and ending up at D2 is collimated, but the upper beam is not. If the total pattern of hits at D2 does show interference, what happens when you move the upper detector to the image plane at D1 and look at the subset of hits there that correspond to hits at D2? If there is indeed interference at D2, then it seems the corresponding hits at D1 can't show two distinct peaks without violating complementarity, but I don't have a good mental picture of how the rays would avoid being focused into two distinct peaks at the image plane (maybe trying to think in terms of neat rays, as in classical optics, is not a good idea here).
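For what it's worth, the first half of that (collimated beam at the slits gives interference; a wide spread of incidence angles washes it out) can be sketched with the idealized far-field two-slit formula. Illustrative numbers only, not the actual parameters of the experiment:

```python
import numpy as np

lam, d, a = 702e-9, 75e-6, 25e-6         # wavelength, slit spacing, slit width (made up)
theta = np.linspace(-0.02, 0.02, 4001)   # observation angles

def intensity(alpha):
    """Far-field two-slit pattern for a plane wave incident at angle alpha."""
    u = np.sin(theta) - np.sin(alpha)
    return np.cos(np.pi * d * u / lam)**2 * np.sinc(a * u / lam)**2

collimated = intensity(0.0)              # one well-defined incidence angle
alphas = np.linspace(-0.02, 0.02, 401)   # non-collimated: spread of angles
scattered = np.mean([intensity(al) for al in alphas], axis=0)  # incoherent sum

window = np.abs(theta) < 0.006           # look at the central fringes
def visibility(I):
    Ic = I[window]
    return (Ic.max() - Ic.min()) / (Ic.max() + Ic.min())

print(f"fringe visibility, collimated:     {visibility(collimated):.2f}")  # ~1.00
print(f"fringe visibility, angle-averaged: {visibility(scattered):.2f}")   # near 0
```

Each incidence angle just slides the fringe pattern sideways, so incoherently summing over a spread of angles much wider than one fringe period leaves only the smooth single-slit envelope--the "blob" version of the pattern.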

Maybe the issue is that by collimating the lower beam, you are in effect measuring the momentum of all the photons that continue on to the double slit, and due to the position/momentum uncertainty relation this destroys the position entanglement of these photons with the photons on the upper beam. In that case you can no longer be sure that if a photon on the lower beam went through a slit, the corresponding photon on the upper beam also went through one of the positions corresponding to a slit, and thus you could also no longer be sure these upper photons would be focused by the lens onto one of two spots on the image plane. I don't know what the pattern of this subset of photons on the upper beam would look like with the detector in the image plane--maybe it would just look identical to whatever the total pattern of photons at D1 is in the normal version of the experiment without collimation shown in fig. 4.8 (in the 'normal' version only the subset of hits at D1 corresponding to detections at D2 looks like two discontinuous peaks; the total pattern of hits at D1 would presumably look different, perhaps a gaussian).
 
  • #84
I wish I had access to a source of entangled photons to try these things out. :) More importantly, I wish Cramer would get on the ball and tell us how his experiment failed (which I'm sure it did).
 
  • #85
akhmeteli said:
So what's your opinion on the question you mentioned: "Can non-humans make observations?" If, for example, an experimental setup is fully automatic, and the results are stored by the computer on a hard disk, do you think these results can change when a human reviews them?
I know your question is to reilly, but I can give you an answer that I'll bet is close to his as well, because it just involves keeping track of what is actually happening. First of all, the "experimental setup" you refer to did not spring up spontaneously; it was put together for a purpose. That purpose is not incidentally connected to the way we do science--it is the way we do science, and so is integrally related to the equations that we used science to establish. Even the "results" you mention are conditioned to be results by us; the universe gets its "results" all the time, and it needs no "experimental setup". So why did you attach your computer to such a setup? A computer that is hooked up to a random noise source is also getting "results" from the universe's point of view, if you will.

The results will not "change" when a human reviews them, because a "change" is a comparison between two things, and here we only have one. I do not think reilly is saying that the role of an intelligence is to change anything, but rather to have the one thing in the first place, whatever it is that we have that makes it something we would call a "result" and include it in our conception of reality.
However, I am not aware of any experimental confirmation of his postulate that "the wave function collapse occurs as the neural networks in the brain provide the single answer from a measurement".
My interpretation of this remark is that the "collapse" being referred to is not just the destruction of the superposition state of the subsystem, which is a physical effect that occurs any time our way of conceiving the state of the subsystem chooses to "average over" interactions with noise from a larger system we are not analyzing, but it also includes the determination of which result "actually occurred". It is only a conscious mind that requires that final step, an "unlooked at" universe is perfectly content to function forever as an accumulation of mixed states, like dice that are rolled but never looked at. No experiment can tell the difference, until that experiment also involves the connection to an intelligence. Still, in my view, the quantum mechanics is over before this final stage of collapse is completed, it's a perfectly classical step. Indeed, the classical nature of this step is the whole reason for including it in our science-- the final stage of all science is classical, it's in the guts of science.
 
  • #86
vanesch said:
True, but the question is actually this one: is it *thinkable* (can we find a vision, a picture, a consistent toy world) in which it can be done ? Or are we SURE now that quantum mechanics doesn't apply to macro systems ?
This must be the divergence in our views right here. You are essentially taking an "assumed true until proven false" approach to building toy worlds that are intended to mimic the real one, whereas I take an "assumed false until proved true" approach. I would say we have many examples in the history of physics where my approach would have saved some embarrassment, and a lot of philosophical hand-wringing as well. On the other hand, your approach has led to new physics like neutrinos and positrons. So I think this shows the advantages of each-- when looking for new physics, by all means go ahead and assume your axioms extend across the frontier of what is known. But when building a philosophical world view, do the opposite, or you fall victim to the very same type of mysticism that science was essentially invented to replace.

The issue is in what is testable. It was "harmless" to postulate neutrinos, positrons, and now supersymmetry, based on "good" axioms to date. But concepts that you know are not testable by their very nature, like the wave function of a cat, lead you down the primrose path. Knowing going in that you can never test these notions, to hold them as true anyway makes one guilty of not looking for new science, but rather looking for that "warm fuzzy feeling" that is recognized as the illusion of control and understanding when the truth is pure mystery. The scientifically honest thing to do is recognize mystery when we encounter it, and not pretend that an approach that yields testable results in one area can be extended to an untestable realm with the pure intention of extinguishing that sense of mystery. So I say, if a cat has a wave function, prove it-- make a testable prediction. Failing that, the requirement is to say "I have no idea if the concept of wave function has any connection with a cat, and I will not build a philosophy around the idea that it does, simply to assuage my sense of order". There are more accessible ways to assuage our desire for order that do not even require an education in physics.
I think (I might historically be wrong, I'm no expert, I only know the common myths :smile:) that Schroedinger's observation tried to show that *evidently* quantum mechanics is not applicable - gives rise to absurdities, wrong results - when applied to a cat. I think he was wrong, in that things are more subtle and that decoherence shows a way to get out consistent views, all respecting quantum mechanics.
I'm not sure why he did it either, but it's my sense that he was trying to expose flaws in the Copenhagen interpretation by using it to argue to an absurd result, the result that a cat could be in a superposition state. In other words, he took it as given that a cat could not-- which is why it is so ironic that his "paradox" is often expressed as saying that quantum mechanics shows a cat can be in a superposition state! It sounds like you and I can agree the paradox is irrelevant, because the superposition state (whether it can exist or not) cannot be created that way, due to the problem of decoherence. But what you are saying is that if a closed system containing a cat starts out in a pure state, quantum mechanics says it will remain in a pure state. I'm not denying that, I don't need quantum mechanics to be wrong-- I'm saying that even if you can get a closed system including a cat to be in a pure state (and I don't say you can), the cat, as a subsystem, will not be in a pure state. When you project a system onto a subsystem, you lose the pure state unless you can track all the coherences that connect the subsystem to the larger system-- and the fact that you can't do that is exactly what makes a cat a classical system.

So my bottom line is, a cat is a classical system, and the reason we couple quantum systems to classical systems is that we know we can count on the classical systems to respond classically. The logic of the cat paradox is exactly backward-- we should be asking how the quantum system got turned into a mixed state by its interaction with the cat, not how the cat got turned into a superposition by the quantum system.
So the question is not: did an experiment show that a macro object DID do something 'non-classical' and purely quantum mechanical, but rather, was there an experiment that FALSIFIED a prediction of quantum mechanics concerning macroscopic objects.
That's just the "correspondence principle" requirement that quantum mechanics is already held to. It doesn't show that quantum mechanics works on classical systems, only that it doesn't demonstrably fail on classical systems. I would say this means that quantum mechanics is "not even wrong" when applied to macro systems-- it simply isn't usable.
In other words, *we don't know* in how much quantum mechanics "really applies" to macroscopic objects. It's an open question.
It is only an open question by virtue of being untestable. That's not a strength of a scientific theory.
Because nature would be SIMPLER if quantum mechanics was just universally valid!
I don't agree there, and I'll express my disagreement with an analogy. Imagine you are an ornithologist studying the migration of the birds from some remote island. There are two species of birds on the island, and every Winter they disappear, and return in the Spring. You use radio tracking devices to track one of the species, but you find the other species rejects the trackers and pecks them off every time. So you track the one species, and see where they go. Now, does Occam's razor say it is simpler to assume the other species does the same thing, or does it say the simplest result is simply to not ask the question where the other ones go because it would be a pointless question to ask? I say the latter, if a question cannot be answered, the simplest thing is not to ask it-- not to assume the answer is something that cannot be falsified.
But for all we know, we cannot be sure that certain principles of quantum theory, in current or modified form, are NOT valid on macroscopic scales. We have no indication either way.
True-- but we also expect that we never will. That's the problem-- such axioms are only helpful if they lead to testable new physics. If they don't, they become philosophical baggage that the "razor" should trim away.
But what gives priority to classical physics ?
The tutor of our brains does that. Classical physics defines the guts of science. If you look at the structure of quantum physics, you see that it is designed as a theory to reduce quantum behavior in a predictable way into classical behavior. That's why we "measure" quantum systems, rather than just leaving them alone. Classical physics, on the other hand, is not a description of how classical systems can be made to act quantum mechanically. So it is we who give the priority to classical physics.
What if quantum mechanics (as decoherence seems to show) REDUCES to observable effects which are identical to classical physics ?
Decoherence is cherry-picked from all the things that can happen physically to a quantum system, and it is picked expressly because it is the subset of actions that leads to classical behavior. We choose that, we focus on decoherence, and set up our experiments to achieve it-- all to get the unknown to behave like the known, all to get a quantum system to leave a footprint on a classical one-- the latter being what we can use science on. So quantum mechanics doesn't "reduce" to classical mechanics; we project it onto classical physics on purpose, and formulate all our equations to describe the result of that projection. So classical behavior was always built into what quantum mechanics is, right from the start. There is no such thing as quantum mechanics without classical physics--that's what operators are. As a purely formal theory, a mathematician would likely say that quantum mechanics is just one arbitrary mathematical structure, and a fairly trivial one at that.

I may have said this before, but I think this is really the crucial point. There is not a physical place where quantum physics gives way to classical physics, we decide where that transition occurs when we change our approach to tackling a problem. The transition occurs the moment we feel compelled to average over some aspects of the state of the system that we do not wish to track explicitly. We know from experience that we can do that with our measuring devices, so that's why we feel comfortable coupling them to quantum systems to learn about the latter. So quantum mechanics cannot "reduce" to classical physics, because the averaging process goes outside the quantum system, it is a super-theory if you will, not part of the unitary transformations of quantum mechanics. This is precisely why, in my opinion, wavefunction "collapse" causes such hand-wringing within the confines of quantum mechanics-- it is expressly a process that leaves those confines. We set it up to do that, and then somehow forgot we did it, like a detective mistaking his own fingerprints at the scene of a crime for the culprit's.
Calculationally, I agree, classical physics is way easier to deal with. But why should classical physics have priority over quantum physics conceptually - which raises the problem of the transition between both ?
It's not just ease, it's the entire structure of scientific thinking. It was all built by classical brains-- electrons might have a very different approach to science.

Indeed, we know that from the moment that the entangled states are complex enough, that probably no observation will give any interference effects, and that from that moment on, we will get IDENTICAL results between a semi-classical approach and a full quantum approach.
But there is no full quantum approach at this stage--the instant you decide to average over what you can't know, you are not doing quantum mechanics any more (in the formal sense of the mathematical structure of the unitary operators, etc.). That's my point, the quantum mechanics becomes classical when we say it does, when we lose patience with following its axioms and resort to a semi-classical picture. If we always do that before we come to macro systems (and it seems to me that's true), then we simply have no quantum mechanics to test at the macro level, and cannot be impressed it hasn't been falsified.
From the moment that explicit interference has become "unobservable" (that means, hidden in very high order correlation functions which are never observed), you can switch to a semi-classical approach with probability distributions.
This is the crucial point we agree on-- but my interpretation of this is that it proves why quantum mechanics doesn't work for macro systems. To "work" doesn't just mean "doesn't make wrong predictions", it has to mean "is useful".
But again, it is not a proof of the *unapplicability* of quantum mechanics as a principle. On the contrary. It is where quantum theory becomes identical to classical theory.
Does it retain its axiomatic structure there? I don't think so, it seems to me it has to lose its soul, and become a mechanized simulation of that very classical theory it is becoming identical to. The kind of reduction you refer to happens when we add mass-energy to a particle by accelerating it until it behaves as though it had a trajectory, but that's different from what I'm talking about-- I'm talking about adding mass to the particle in the form of lots of other particles, like a baseball, and then treating its trajectory. That's a very different animal, for a quantum mechanical treatment that could make correct predictions in some situations would be wrong in others, since a baseball is not a quantum.

But the point is, if you insist on the inapplicability of quantum mechanics to macrosystems, then you are going to look for a *transition* theory.
Exactly, that is a good way to establish my point-- we would indeed require a transition theory, and I claim we do require a transition theory-- a theory in the realm where you are unable to use quantum mechanics for practical reasons, but the classical treatment of stochastically averaging over the unknowns fails to achieve sufficient accuracy. I maintain that we have a "blind spot" in our science of real systems because we can't treat that domain, but it rarely comes up.
And there's another reason to play with a toy world in which to take your theory totally seriously (far beyond its proven domain of applicability): you get a good feeling for the machinery of the theory. You get a good understanding of what exactly the axioms imply - whether this corresponds to the real world or not.
That I have no objection to-- if anyone can start their analysis with "the following is not intended to be taken seriously as a macroscopic theory, it is merely a macroscopic analog used to better picture our quantum axioms" then I'm fine. I've seen some use the Schroedinger cat that way. But inevitably, people mistake the analogy for the "real thing", and that opens the philosophical floodgates.
 
  • #87
JesseM said:
The interesting case, and the one where I'm even less confident, is when the lower beam going through the double-slit and ending up at D2 is collimated, but the upper beam is not. If the total pattern of hits at D2 does show interference, what happens when you move the upper detector to the image plane at D1 and look at the subset of hits there that correspond to hits at D2? If there is indeed interference at D2, then it seems the corresponding hits at D1 can't show two distinct peaks without violating complementarity, but I don't have a good mental picture of how the rays would avoid being focused into two distinct peaks at the image plane (maybe trying to think in terms of neat rays, as in classical optics, is not a good idea here).
If you look at the subset of hits at D1 (where those photons are not collimated) that correspond to hits at D2 (where they are collimated), what you should get is exactly the same result as in the Afshar experiment. There should be an interference pattern at D2, but peaks at D1. However, then the debate will be, as it is with the Afshar experiment, whether we can be certain that we know which photons at D1 corresponded to which slit.

Maybe the issue is that by collimating the lower beam, you are in effect measuring the momentum of all the photons that continue on to the double slit, and due to the position/momentum uncertainty relation this destroys the position entanglement of these photons with the photons on the upper beam. In that case you can no longer be sure that if a photon on the lower beam went through a slit, the corresponding photon on the upper beam also went through one of the positions corresponding to a slit, and thus you could also no longer be sure these upper photons would be focused by the lens onto one of two spots on the image plane. I don't know what the pattern of this subset of photons on the upper beam would look like with the detector in the image plane--maybe it would just look identical to whatever the total pattern of photons at D1 is in the normal version of the experiment without collimation shown in fig. 4.8 (in the 'normal' version only the subset of hits at D1 corresponding to detections at D2 looks like two discontinuous peaks; the total pattern of hits at D1 would presumably look different, perhaps a gaussian).
I think that's exactly right. The HUP is pretty hard to defeat!
 
  • #88
Ken G said:
I know your question is to reilly, but I can give you an answer that I'll bet is close to his as well, because it just involves keeping track of what is actually happening.

I do appreciate your answer.

Ken G said:
First of all, the "experimental setup" you refer to did not spring up spontaneously; it was put together for a purpose. That purpose is not incidentally connected to the way we do science--it is the way we do science, and so is integrally related to the equations that we used science to establish. Even the "results" you mention are conditioned to be results by us; the universe gets its "results" all the time, and it needs no "experimental setup". So why did you attach your computer to such a setup? A computer that is hooked up to a random noise source is also getting "results" from the universe's point of view, if you will.

No doubt, the experimental setup is prepared by humans. However, even if all humans die out after that, the setup can still work for some time. Depending on the way the results are stored, they can be perceived by animals or by intelligent life emerging a million years after that. I don't think this example is too far-fetched: indeed, scientists do study remains of dinosaurs that died millions of years ago. So do you really think those remains were much different before humans bothered to look at them?

Ken G said:
The results will not "change" when a human reviews them, because a "change" is a comparison between two things, and here we only have one.

I am not sure about that, unless you insist that "the Moon is not there when nobody's looking at it". I think the results are there, whether somebody looks at them or not. If, however, you do insist, your position is unshakeable, but I cannot agree with it, so further discussion would be more appropriate in a forum on philosophy. So I think we can reasonably talk about two things: the results stored by the computer before and after somebody reviews them.

Ken G said:
I do not think reilly is saying that the role of an intelligence is to change anything, but rather to have the one thing in the first place, whatever it is that we have that makes it something we would call a "result" and include it in our conception of reality. My interpretation of this remark is that the "collapse" being referred to is not just the destruction of the superposition state of the subsystem, which is a physical effect that occurs any time our way of conceiving the state of the subsystem chooses to "average over" interactions with noise from a larger system we are not analyzing, but it also includes the determination of which result "actually occurred".

Could you explain more clearly the meaning of the phrase "the determination of which result 'actually occurred'"?

Ken G said:
It is only a conscious mind that requires that final step, an "unlooked at" universe is perfectly content to function forever as an accumulation of mixed states, like dice that are rolled but never looked at.

I thought dice rolling is, to all intents and purposes, a classical process, so if you ensure that the initial position and velocity are the same to great accuracy for several tosses (I heard the position must be accurate to about one micron), the die will always land on the same face.

Ken G said:
No experiment can tell the difference, until that experiment also involves the connection to an intelligence.

I don't mind if an "experiment also involves the connection to an intelligence". However, I have not heard about any such experiments confirming Peierls' postulate. On the other hand, if you mean that no experiment can confirm or falsify the postulate, then I am not sure such a postulate has anything to do with science.

Ken G said:
Still, in my view, the quantum mechanics is over before this final stage of collapse is completed, it's a perfectly classical step. Indeed, the classical nature of this step is the whole reason for including it in our science-- the final stage of all science is classical, it's in the guts of science.

No offence, but this sounds like a mantra. I don't see why I should agree with that. I believe that is what Bohr and Heisenberg taught (or preached :-) ), but equally great physicists did not buy it.

By the way, I'd like to mention the article http://arxiv.org/abs/quant-ph/0702135 again. It is extremely relevant, and its conclusions seem fascinating to me. It clearly suggests a connection between measurements and thermodynamical irreversibility. Therefore, for finite systems, the results of measurements are reversible in principle, but this has no more practical importance than any other process forbidden by the second law of thermodynamics. The conclusions of the article seem to suggest that we do not need consciousness-related mysticism to understand quantum measurements.
 
  • #89
akhmeteli said:
No doubt, the experimental setup is prepared by humans. However, even if all humans die out after that, the setup can still work for some time. Depending on the way the results are stored, they can be perceived by animals or by intelligent life emerging a million years after that. I don't think this example is too far-fetched: indeed, scientists do study remains of dinosaurs that died millions of years ago. So do you really think those remains were much different before humans bothered to look at them?
That's very much the issue, and note this is a purely classical issue. It is sometimes classified as a quantum mechanical issue, but I agree with you that paleontologists face the exact same issue all the time, as do poker players. It has to do with something we take very much for granted but is really terribly subtle: probability theory. That is a model of how we treat what we don't know, and there's never any reason to think that once we know it, that information wasn't "there" all the time-- but there's also no reason to think that it was! Poker players understand this quite well-- if you don't call the hand, it makes no difference at all what the cards "really were", and no theory of reality requires that they be anything but an unactualized probability. The same for every dinosaur bone that is not dug up. You can choose to believe they are there, because it seems silly not to, but the real point is it makes no difference whatever what you believe, all testable aspects of a theory of reality function exactly the same either way. The two are indistinguishable, such is the nature of probability-- it is a theory about what you don't know as much as about what you do.

What brings this into better focus is when you ruminate on the question "what is the probability of X?" We tend to ask this question as if it had a definite meaning (what is the probability I will die tomorrow, what is the probability an electron shows up in a given place in a given trial, etc.), but it does not. The answer depends entirely on what information I put in, my "fingerprints" are all over the answer (ask the first question to an actuary to see that, or the second to a physicist studying entanglement). There usually is no absolute answer to that question, obviously, so why do we pretend that the probabilities that result from our scientific theories are any different? They aren't-- our fingerprints are all over those too. So that's the sense to which science does not exist without us-- we choose the parameters we are considering. All probabilities are a comparison, made by us-- we choose both the numerator and the denominator. That choice only ends when our brains perceive an answer-- when the cards are shown. And we use the results to test if the comparisons we were making were appropriately constrained by our science (or our poker strategy).
I am not sure about that, unless you insist that "the Moon is not there when nobody's looking at it".
It is not necessary to assume it goes away; it is merely necessary to point out that it makes no difference at all to science if it is there or not. No scientific theory can test that assertion; it is a question that cannot even be asked. This is indeed the problem with most of what I read about quantum entanglement: people endlessly debating what science tells us the answers to certain questions are, when what science is really telling us is that we can't put those questions to science.

So it is with the Moon-- it is pure philosophy if the Moon is there or not. Now I realize that you might think it is more natural to assume the Moon is there, even if you don't know it, but that's not the point I'm making. I'm saying that if you have two things that can happen to the Moon, maybe it is or is not hit by an asteroid, then all science can ever tell you is your estimate of the chances that the Moon is still there. It makes no difference at all if it is or not, scientifically speaking, until it affects you in some way. You can't use it to test your prediction, you can't perceive the result, it can't make you a happy or miserable person, it just doesn't matter. That we think something "really happened", even though we don't know what, is just a handy picture for thinking about all this-- it cannot be tested in any way, so it isn't science. I would say it is scientifically in a "mixed state" and leave it at that, science has no more to say on the matter.

So I think we can reasonably talk about two things: the results stored by the computer before and after somebody reviews them.
What this means to me is, we will agree to enter into the "interpretation", or "picture", that the computer stores "real" results even if we do not know what they are. That is fine with me-- I use that picture of reality myself, as a matter of fact.

Could you explain more clearly the meaning of the phrase "the determination of which result 'actually occurred'"?
The registering in an intelligence of what happened. Isn't that what you would mean by that phrase too?

I thought dice rolling is, to all intents and purposes, a classical process, so if you ensure that the initial position and velocity are the same to great accuracy for several tosses (I heard the position must be accurate to about one micron), the die will always land on the same face.
That was a bad assumption made by the post-Newtonians. In point of fact there was never any way to do that, long before quantum mechanics, for a "suitably shaken die". If the die is not suitably shaken, it is not functioning like a die. The point is, even classical systems involve probability concepts in their analysis-- always have and always will (consider the crucial role of "ergodicity" in thermodynamics, for example). Are we in a position to replace thermodynamics?

However, I have not heard about any such experiments confirming Peierls' postulate. On the other hand, if you mean that no experiment can confirm or falsify the postulate, then I am not sure such a postulate has anything to do with science.
The point is, and perhaps reilly can corroborate, one does not seek an experiment to "falsify" Peierls' postulate, for the postulate is built right into how we do science. How will one set up an experiment to falsify that postulate, when the postulate is central to what we mean by an experiment? It is really an axiom, that is the point-- it is an inseparable part of science itself, and that's what it has to do with science.

No offence, but this sounds like a mantra. I don't see why I should agree with that. I believe that is what Bohr and Heisenberg taught (or preached :-) ), but equally great physicists did not buy it.
Then I would like to see you, or them, describe a means for doing science that does not include the "mantra": the final stage of all science is classical, it's in the guts of science. Will anyone please cite for me an example of an experimental result whose final stage was not classical? How can anyone claim this is something they "don't need to agree with"?
By the way, I'd like to mention the article http://arxiv.org/abs/quant-ph/0702135 again. It is extremely relevant, and its conclusions seem fascinating to me. It clearly suggests a connection between measurements and thermodynamical irreversibility.
I'll give it a look, but I expect it to provide complete verification of my position. You see, thermodynamics is the quintessential example of a classical theory of probability, where nothing is ever actualized beyond what the intelligence can discern! All thermodynamic concepts (temperature, pressure, etc.) are based on the idea that states never distinguished by any intelligence are to be treated as if they were indistinguishable elements of reality.
Therefore, for finite systems, the results of measurements are reversible in principle, but this has no more practical importance than any other process forbidden by the second law of thermodynamics. The conclusions of the article seem to suggest that we do not need consciousness-related mysticism to understand quantum measurements.
I completely agree with all of that-- the irreversibility comes from our analysis technique. The instant we "average over" what we cannot know, we obtain a probabilistic treatment, and probabilistic treatments are also quintessentially irreversible. None of that refutes the importance of consciousness in deciding "what counts as indistinguishable", i.e., what is the very meaning of "the probability of X".
 
  • #90
akhmeteli said:
Certainly, abrasive language can be offensive and distract from the discussion. So I'll try to keep to the point. So what's your opinion on the question you mentioned: "Can non-humans make observations?" If, for example, an experimental setup is fully automatic, and the results are stored by the computer on a hard disk, do you think these results can change when a human reviews them?
By the way, as far as I know, Peierls was not a Nobel prize winner. Certainly, this does not make him a less respected physicist. However, I am not aware of any experimental confirmation of his postulate that "the wave function collapse occurs as the neural networks in the brain provide the single answer from a measurement". However big an authority on quantum mechanics Peierls may be, I don't think I have any moral obligation to agree with him, not because I disrespect such an authority, but for the simple reason that such people as Einstein, de Broglie, Schroedinger, and others disagreed with such thinking. As for the Born rule, I like the article http://arxiv.org/abs/quant-ph/0702135 , where an analysis of an exactly solvable model of spin measurement shows that this rule may emerge from thermodynamic irreversibility.

First, you are on the money; Sir Rudolf Peierls did not win the Nobel Prize. As you suggest, he was a very key figure in the early days of modern QM.

Quite a few years ago, so-called artificial neural networks became one of the tools many of us used in market research and business statistics. Thus many started following the research in neurophysiology, which I did for ten to fifteen years or so. The notion that most of what happens in the brain is the result of pulses traveling through neural networks is a central tenet of the field -- this is elegantly discussed by Sir Francis Crick in his book The Astonishing Hypothesis: the Scientific Search for the Soul.

One day it struck me that there is a physically understandable mechanism behind probability collapse. As in: right now, the best we can say is that one of three candidates will become the next US president; add a dark horse if you want. This knowledge is stored in your memory. Then, once the election is over and you hear about it, your knowledge changes, and your brain has to do some readjusting. Among the things it will have to change is the probability structure of the election; that structure can clearly be said to collapse from (p1,p2,p3,p4) to (0,1,0,0). In fact, it's pretty unlikely that you will consciously be aware of such a collapse, but there is no doubt that it happens.
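In code, that "collapse" is nothing more exotic than conditioning a probability vector on what you just learned. A bare-bones sketch of the knowledge-update picture (my own toy framing, with made-up numbers):

```python
# Collapse as knowledge update: condition a probability vector on news.
def update(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior * likelihood."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

election = [0.40, 0.35, 0.20, 0.05]     # (p1, p2, p3, p4) before the news

# Hearing the final result is certain evidence for candidate 2:
print(update(election, [0, 1, 0, 0]))   # -> [0.0, 1.0, 0.0, 0.0]

# Partial news ("candidate 1 just conceded") merely reweights the rest:
print(update(election, [0, 1, 1, 1]))   # -> [0.0, 0.583, 0.333, 0.083]
```

Nothing physical need happen to the candidates when the vector collapses; only the state of knowledge changes.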

You decide to do an interesting double slit experiment with photons or electrons. Randomly change the width of the slits; randomly change the distance between slits; use polaroid or mylar to slow down the particles, randomly with one or two slits as you wish. Do the experiment for a long time, and do the random thing however you want. You probably won't have a clue about the pattern you'll see on the detector screen, so the probability structure in your brain will very likely be (?). If you do not look at the screen until the experiment is done, and then wait ten minutes before you open your eyes, you still have that (?) structure; after you look, you have ("pattern").

If you watched the screen for the entire experiment, your notions of the pattern will clearly converge stochastically to the final pattern. The probability structure -- not a great name here -- changes gradually, but is still consistent with collapse as a change in knowledge. I think that this knowledge approach makes repeated and continuous measurements -- everyday vision, for example -- easier to handle in QM.

So you can see that there's no way a human can change what's on a disc without 1. programming and executing some program or routine, or 2. trashing the disc. Once you read the disc, you know. Once you read a mystery novel you know, at the end for sure, whodunit. When neither of us is participating in the forum, it's still there. The best game in town is to assume there is an objective reality. That seems, generally, to be a good working assumption.

I was delighted to discover Peierls' work on QM interpretation promoting the idea that the wave function and consequent probabilities refer to our knowledge. For me, at least, many issues I had with QM were solved by the knowledge interpretation. And let's be clear: my state and actions have, generally, little effect on the world, so whether my eyes are open or closed makes no difference to anyone or anything but me.

As far as I know, he did not discuss neural networks -- they were yet to become important when he was writing. Also, it's consistent with standard statistical practice of many years and in many disciplines. There's collapse in any practical probabilistic system; once you know, things change in your head, often as a consequence of what's outside your head.

Regards,
Reilly Atkinson
 
