Double slit experiment, expensive gear?

In summary: it looks like you first need to unlearn what you think you know about QM (perhaps from pop-science articles) and start fresh.
  • #36
vanhees71 said:
A photon is a photon, and it never behaves like a particle. It does not even have a position observable. The naive photon picture, introduced by Einstein in 1905, is long outdated (in 1926 Born and Jordan, and somewhat later Dirac, gave the correct description in terms of field quantization).

A photon is an asymptotically free single-quantum Fock state of the electromagnetic field. So the first thing you need is a single-photon source. Today that's usually provided by parametric down-conversion, i.e., by shining a laser onto a birefringent crystal, where you can get entangled photon pairs and use one photon to "herald" the other, which you can then send through a double slit. Finally you need a single-photon detector, which I guess is the most expensive part.

What you will observe is that each single photon going through the slits is registered (with some probability) at one spot on the photodetector. The location of this spot cannot be predicted in any way; only its probability can. Now it turns out that quantum electrodynamics predicts that this probability distribution is given by the properly normalized classical energy density, i.e., the interference pattern you expect from classical diffraction theory.
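For anyone who wants to play with this numerically: here is a small sketch (toy numbers of my own choosing, not from any particular experiment) that treats the normalized classical double-slit intensity as the single-photon detection-probability density and samples individual "registration events" from it:

```python
import numpy as np

# Hypothetical parameters (illustrative only): slit separation d,
# slit width a, wavelength lam, screen distance L -- all in metres.
d, a, lam, L = 0.25e-3, 0.05e-3, 650e-9, 1.0

x = np.linspace(-0.02, 0.02, 2001)        # screen positions
theta = x / L                             # small-angle approximation

# Classical Fraunhofer intensity: cos^2 interference fringes times
# the single-slit diffraction envelope.
beta = np.pi * a * theta / lam
envelope = np.sinc(beta / np.pi) ** 2     # np.sinc(t) = sin(pi t)/(pi t)
intensity = envelope * np.cos(np.pi * d * theta / lam) ** 2

# QED: the normalized intensity is the detection-probability density,
# so single-photon hits are random draws from it.
p = intensity / intensity.sum()
rng = np.random.default_rng(0)
hits = rng.choice(x, size=10_000, p=p)    # 10k "registration events"

counts, _ = np.histogram(hits, bins=80)
print(counts.max(), counts.min())         # bright fringes vs dark bins
```

Each photon lands at a single, unpredictable spot, yet the histogram of many hits reproduces the classical fringe pattern.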

In modern terms, "wave-particle duality" just means that on the one hand a single-photon Fock state has a kind of "particle property": it is registered either as a whole or not at all. It is, however, not some kind of localizable massless particle following a trajectory of any kind. You cannot even define a position observable for a photon (as you can for massive "particles", i.e., field quanta of massive fields)! All you can know is the probability distribution for the "registration events" at any place on the photodetector (using a pixel detector, you get an interference pattern by registering many equally prepared photons). On the other hand, the probability distribution is given in terms of field theory and thus shows interference effects.

The next thing is that you can also try to figure out through which slit of the double slit each photon came. This is only possible if you somehow mark each photon behind the slits in such a way that you can say with certainty through which slit it came. One clever way to do this is to use the polarization observable of the photon as the mark, i.e., you aim for perfect "entanglement" between the photon's polarization state and the "which-way information".

This can be achieved by using linearly polarized incoming photons (say, polarized in ##x##-direction, easily achieved with a polarization filter) and mounting a quarter-wave plate into each slit, one oriented at ##+\pi/4## and the other at ##-\pi/4##. Then any photon going through one slit will be left-circularly polarized and any photon going through the other slit will be right-circularly polarized, i.e., by measuring the polarization state of the photons behind the slits you could figure out exactly through which slit each photon came (though you cannot predict it beforehand: with 50% probability the photon comes from one slit and with 50% from the other). Since the photons going through one slit are now polarized perpendicularly to those going through the other slit, the interference effect is completely gone, i.e., the partial intensities for photons going through the one or the other slit simply add, and you completely lose the double-slit interference pattern (the single-slit envelope is still visible, though).
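The effect just described can be sketched in a few lines. This is a toy Jones-vector model with made-up numbers, not a simulation of any real setup: the two slit amplitudes add coherently when they share the same polarization, while tagging them with orthogonal circular polarizations kills the cross term:

```python
import numpy as np

# Toy far-field amplitudes for the two slits: a relative phase
# exp(+-i*pi*d*sin(theta)/lam) between the two paths (made-up numbers).
d, lam, L = 0.25e-3, 650e-9, 1.0
x = np.linspace(-0.02, 0.02, 1001)
phase = np.pi * d * (x / L) / lam

psi1 = np.exp(+1j * phase)                 # path through slit 1
psi2 = np.exp(-1j * phase)                 # path through slit 2

# No which-way marking: both paths carry the same polarization,
# so the amplitudes add coherently and interfere.
I_coherent = np.abs(psi1 + psi2) ** 2

# Quarter-wave plates at +-pi/4: slit 1 -> left-circular, slit 2 ->
# right-circular (orthogonal Jones vectors), so the paths are tagged
# and the interference cross term vanishes.
pol1 = np.array([1, +1j]) / np.sqrt(2)     # left-circular
pol2 = np.array([1, -1j]) / np.sqrt(2)     # right-circular
field = np.outer(psi1, pol1) + np.outer(psi2, pol2)
I_marked = np.sum(np.abs(field) ** 2, axis=1)

print(I_coherent.max(), I_coherent.min())  # fringes: max 4, min near 0
print(I_marked.max(), I_marked.min())      # flat: 2 everywhere
```

With the marking in place the partial intensities simply add, exactly as described above (the single-slit envelope is omitted in this toy model for brevity).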

This example shows that it is easy to understand why gaining 100% "which-way information" (or even just the possibility of gaining it by measuring the polarization state of the outgoing photons) destroys the double-slit interference pattern completely. Again, this is not understandable within the naive photon picture of 1905; you need quantum field theory!
Thanks, I learn a lot. You write something about the photon being asymptotic; does that mean it has no real boundary? You can't really say where its "volume" starts or ends? I understand that when I, with my limited knowledge, react strongly against this popular-science idea that consciousness is necessary to collapse the wavefunction, you guys must really puke at what is all over the web regarding a lot of quantum mechanics theory...
 
  • #37
rolnor said:
Thanks, I learn a lot. You write something about the photon being asymptotic; does that mean it has no real boundary? You can't really say where its "volume" starts or ends? I understand that when I, with my limited knowledge, react strongly against this popular-science idea that consciousness is necessary to collapse the wavefunction, you guys must really puke at what is all over the web regarding a lot of quantum mechanics theory...
So it's the detector that acts as an "environment" and causes decoherence?
 
  • #38
rolnor said:
Thanks, I learn a lot. You write something about the photon being asymptotic; does that mean it has no real boundary? You can't really say where its "volume" starts or ends? I understand that when I, with my limited knowledge, react strongly against this popular-science idea that consciousness is necessary to collapse the wavefunction, you guys must really puke at what is all over the web regarding a lot of quantum mechanics theory...
"Asymptotically free" means you look in regions far away from things the photon interacts with, i.e., where you can describe the em. field as not experiencing any interactions with charges. Only in this "asymptically free" limit there is a clear interpretation in terms of photons, and that's what we usually observe in high-energy physics, i.e., scattering processes, where we consider that initially we have two particles, which are far away from each other and thus can be considered as not interacting, i.e., we prepare the two particles in "asymptotically free" "in-states". Then they come closer to each other and the interaction between them becomes non-negligible. We don't bother with interpreting this state in any way in terms of particles but we look after this interaction when all the particles going out from this collisions are again far distant from each other and thus can be considered as "asymptotically free" again ("out-states"). What we calculate is the transition probability from the asymptotically free in-state to the asymptotically free out-state, which is encoded in the "scattering matrix" (S-matrix). That's what's usually done using perturbation theory and nicely encoded in Feynman diagrams, and that's what's measured in the high-energy-particle experiments (e.g., at the LHC at CERN). That's the physics, i.e., that's what we can say on a scientific basis, what's done in physics research: You have a theory ("Standard Model of elementary particles"), with which you can predict the outcome of what can be really measured (scattering cross sections) and can be objectively compared to the theory.

Now, unfortunately, QT has been plagued from the very beginning by a strong influence of "philosophy" or "metaphysics". That's quite understandable, because QT really is a revolution in our most fundamental understanding of the physical world around us. That's because QT says that Nature behaves in an inevitably indeterministic way: "observables" (things that can be measured) do not necessarily take well-defined ("determined") values; they do so only if the system under investigation is prepared in a "state" (something that describes the specific properties of the system before measurement) for which the observable in question takes determined values. The outcome of a precise measurement of an observable on a system prepared in a state where this observable does not take a determined value is inevitably random, and this randomness is not due to our lack of knowledge about the state of the system (as in classical statistical mechanics): it is a property of Nature that the observable doesn't take a determined value when the system is prepared in such a state.

That's why there was a big debate among physicists from the very beginning about whether "quantum mechanics can be considered complete", i.e., whether there might be a more comprehensive theory which is in accordance with the observations but still deterministic, i.e., a theory in which all observables always take determined values, which we merely don't know for some reason and thus describe probabilistically, as in classical statistical physics. This culminated in 1935 in the famous (I would say infamous) paper by Einstein, Podolsky, and Rosen, and the even more cryptic answer to it by Bohr. The problem with both papers, in my opinion, is that the EPR paper just states a philosophical prejudice about how "Nature should be", namely deterministic, and on that basis claims that quantum mechanics is incomplete. Bohr answered in his usual nebulous style, without making a scientific statement about how the problem could be resolved by scientific means either.
That only happened about 30 years later, when Bell turned the vague philosophical EPR definition of "reality" into a solid mathematical description of a "realistic" (i.e., deterministic) and "local" theory (i.e., one strictly excluding influences of measurements at far-distant places via faster-than-light signals). He figured out that any such theory gives probabilistic predictions about the outcomes of measurements, and about the far-distant correlations described by "entanglement" (the only thing that might be called "non-local", though this is an unfortunate choice of terminology, as I'll explain below), which differ from the predictions of QT (the famous Bell inequalities). Since then the question has been attacked by experimentalists, and that led to the clear conclusion that the world does not behave according to any "local realistic theory" but according to the predictions of QT.
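The quantitative core of Bell's argument fits in a few lines. As a sketch (standard textbook CHSH angles, nothing specific to this thread), one can compare the singlet-state correlation ##E(a,b)=-\cos(a-b)## with a simple deterministic local hidden-variable model:

```python
import numpy as np

# Singlet-state correlation between polarization/spin measurements
# along angles a and b: E(a, b) = -cos(a - b) (standard QM result).
def E(a, b):
    return -np.cos(a - b)

# CHSH combination; any local realistic theory obeys |S| <= 2.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))          # 2*sqrt(2) ~ 2.83 > 2: QM violates the bound

# A simple local hidden-variable model for comparison: a shared random
# angle lam fixes both outcomes deterministically before measurement.
rng = np.random.default_rng(1)
lam = rng.uniform(0, 2 * np.pi, 200_000)

def E_lhv(a, b):
    A = np.sign(np.cos(a - lam))       # Alice's deterministic outcome
    B = -np.sign(np.cos(b - lam))      # Bob's deterministic outcome
    return np.mean(A * B)

S_lhv = E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2)
print(abs(S_lhv))      # stays at the bound 2 (up to sampling noise)
```

The local model can at best saturate the bound of 2, while the quantum prediction reaches ##2\sqrt{2}##, which is what the experiments confirm.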

It's unfortunate to call the correlations of measurement results at far-distant places, which are described by entanglement, "non-local", because ironically it's relativistic QFT which is strictly local by construction, i.e., the demand of locality (that there cannot be any causal influences between space-like separated events) is built into the theory from the very beginning. This is why we describe relativistic QT exclusively in terms of quantum field theory: only in a QFT can we impose this "microcausality constraint". One consequence is that particle numbers are not conserved when scattering particles at relativistic energies, and indeed that's what's observed: in such collisions we may destroy the incoming particles completely and get out entirely different particles (and not only two, as in usual scattering experiments, but a whole "spray" of new particles), i.e., we can "annihilate and create" particles (subject to conservation laws like charge conservation, which are also built into the theory from the very beginning, based on corresponding empirical knowledge). So the description of relativistic QT in terms of a local (i.e., microcausal) QFT is well founded both in theoretical demands (causality in relativistic spacetime models) and in observations (all predictions of the Standard Model still hold true, despite the vigorous search for "physics beyond the Standard Model" over decades now!). So there is no "non-locality", only correlations between properties of far-distant parts of quantum systems, described by entanglement.

For sure, there's no need for consciousness anywhere in the theory. Nowadays the measurement results are simply stored on some electronic storage device and only explored in detail by humans long after the experiment is done. There's no need for human interaction with the investigated quantum systems to get well-defined measurement results. It's all due to the interaction of the particles with the detectors and the corresponding storage of the information about these interactions.
 
  • #39
Thanks, I have seen that "entanglement" means that two particles share the same wavefunction; is that the reason why they "interact" over long distances? Somehow this wavefunction covers the whole universe and therefore we get "non-locality". Is it really one particle, not two? Sorry if I use bad language.
 
  • #40
No, that's again an entirely wrong picture. A two-particle wave function is always a function ##\psi(t,\vec{x}_1,\vec{x}_2)##, and you have entanglement if this wave function does not factorize as ##\phi_1(t,\vec{x}_1) \phi_2(t,\vec{x}_2)##.

Entanglement also doesn't mean that there are instantaneous interactions, but that there are long-range correlations, i.e., it's a feature of the state, and thus of the preparation of the two particles, not of the measurements on these particles.
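The factorization criterion above is easy to check numerically. In the following sketch (toy Gaussian amplitudes of my own choosing, purely illustrative) a discretized two-particle amplitude ##\psi(x_1,x_2)## is treated as a matrix; it factorizes exactly when its Schmidt rank, read off from an SVD, is 1:

```python
import numpy as np

# Discretize psi(x1, x2) on a grid; as a matrix it factorizes into
# phi1(x1)*phi2(x2) exactly when its (Schmidt) rank is 1.
x = np.linspace(-5, 5, 200)
X1, X2 = np.meshgrid(x, x, indexing="ij")

def schmidt_number(psi):
    """Effective number of Schmidt terms (participation ratio)."""
    s = np.linalg.svd(psi, compute_uv=False)
    p = (s / np.linalg.norm(s)) ** 2
    return 1.0 / np.sum(p ** 2)

# Product state: Gaussian in x1 times Gaussian in x2 (not entangled).
product = np.exp(-X1**2 / 2) * np.exp(-X2**2 / 2)

# Entangled state: a correlated Gaussian, narrow in (x1 - x2).
entangled = np.exp(-(X1 + X2)**2 / 2) * np.exp(-(X1 - X2)**2 / 0.1)

print(schmidt_number(product))      # ~ 1.0: factorizes
print(schmidt_number(entangled))    # > 1: does not factorize
```

A Schmidt number of 1 means the two-particle wave function is just a product of two single-particle wave functions; anything larger signals entanglement.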
 
  • #41
rolnor said:
In chemistry we have many people who believe that life started from a "primordial soup": if you have water and some chemicals and subject this to a little lightning, you eventually get life, cells. This is complete BS; it will not happen. Nobody has observed any of this in any experiment; no molecules large enough and ordered enough are formed. Just because you can get some traces of racemic amino acids this way, it can never form life as we see it now on earth, even if you wait a quadrillion years. I believe in evolution theory, but life must have some special origin; it does not form by chance under the conditions on the early earth.
This is not the forum to discuss this, but your statements about the origin of life show that you base your science on your beliefs, not on scientific facts.
There are plenty of scientific papers that describe successful experiments covering different parts of a possible path from inorganic molecules to something that can be called life. And that something definitely does not need to be "life as we see it now on earth".
For popular-science coverage and a summary of that, I suggest you follow "Professor Dave Explains" on YouTube.
 
  • Like
Likes weirdoguy
  • #42
rolnor said:
Thanks, I have seen that "entanglement" means that two particles share the same wavefunction; is that the reason why they "interact" over long distances? Somehow this wavefunction covers the whole universe and therefore we get "non-locality". Is it really one particle, not two?
Two entangled photons are essentially a single system of two particles, sometimes called a biphoton. Such a system has spatial (technically spatiotemporal) extent, which is to say that it cannot be localized to a single point in spacetime. Most physicists consider this an example of quantum nonlocality, but some (like @vanhees71) do not like that term. (He does agree that a biphoton has spatial extent.)

You could say that the wavefunction of just about any particle is spatially large, but with only an infinitesimal chance of being found in an unexpected place. That is not usually called quantum nonlocality though.
 
  • Like
Likes gentzen
  • #43
DrChinese said:
Two entangled photons are essentially a single system of two particles, sometimes called a biphoton. Such a system has spatial (technically spatiotemporal) extent, which is to say that it cannot be localized to a single point in spacetime. Most physicists consider this an example of quantum nonlocality, but some (like @vanhees71) do not like that term. (He does agree that a biphoton has spatial extent.)

You could say that the wavefunction of just about any particle is spatially large, but with only an infinitesimal chance of being found in an unexpected place. That is not usually called quantum nonlocality though.
It's ok to use terms like nonlocality which might have different meanings, so long as these different meanings don't paper over important distinctions. This new paper explores the distinction between nonlocality in the "biphoton" sense and in the relativistic causality/superluminal influence sense from a histories perspective, and argues that a system can be nonlocal in the former sense and also local in the latter sense.

https://arxiv.org/pdf/2305.16828.pdf
 
  • #44
DrChinese said:
Two entangled photons are essentially a single system of two particles, sometimes called a biphoton. Such a system has spatial (technically spatiotemporal) extent, which is to say that it cannot be localized to a single point in spacetime. Most physicists consider this an example of quantum nonlocality, but some (like @vanhees71) do not like that term. (He does agree that a biphoton has spatial extent.)
It has spatial extent in the sense that the probability for detectors to register the photons peaks (as a function of time) at far-distant places. Photons have no position in the usual sense; all you can tell about them is the probability to detect them. This detection is due to local interactions with the material making up the detector (usually the photoelectric effect). That's because, by construction, interactions are described by local quantum field theories, which is formally implemented by the microcausality constraint on local observables.

If two photons are in an entangled state, there are strong correlations between single-photon observables of the two photons in the pair, which can be observed no matter how far the detectors are from each other. The important point is that these correlations are due to the entangled state, and not due to the measurements on the single photons, even though the measured single-photon observables are maximally uncertain before measurement. It's the unnecessary collapse assumption which leads to the misconception that there must be non-local interactions due to the measurements. This is simply a misconception. All the quantum state describes are the statistical properties of the outcomes of measurements on an ensemble of equally prepared systems (in this case entangled "biphotons").
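Both points can be made concrete with a tiny calculation (a generic polarization-entangled state, not any specific experiment): the reduced state of each photon is maximally mixed, so each single-photon outcome is 50/50, while the joint outcomes are perfectly correlated:

```python
import numpy as np

# Polarization-entangled biphoton (|HH> + |VV>)/sqrt(2) as a 4-vector
# in the basis {HH, HV, VH, VV}.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())              # two-photon density matrix

# Reduced state of photon 1 (partial trace over photon 2): maximally
# mixed, i.e. each single-photon outcome is 50/50 before measurement.
rho1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho1.real, 3))                # [[0.5, 0], [0, 0.5]]

# Yet the joint outcomes are perfectly correlated: sampling in the
# H/V basis never produces HV or VH.
p_joint = np.abs(psi) ** 2
rng = np.random.default_rng(0)
draws = rng.choice(4, size=10_000, p=p_joint)
print(sorted(set(draws.tolist())))           # [0, 3]: only HH and VV occur
```

The single-photon statistics alone carry no trace of the correlation; only the joint record of both detectors reveals it.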
DrChinese said:
You could say that the wavefunction of just about any particle is spatially large, but with only an infinitesimal chance of being found in an unexpected place. That is not usually called quantum nonlocality though.
Photons cannot be described by wave functions, but only within relativistic, local (!!!) quantum field theory. Only for massive particles is there, in the non-relativistic limit, an approximate description by wave functions.
 
  • #45
vanhees71 said:
It's the unnecessary collapse assumption which leads to the misconception that there must be non-local interactions due to the measurements. This is simply a misconception. All the quantum state describes are the statistical properties of the outcomes of measurements on an ensemble of equally prepared systems (in this case entangled "biphotons").
Bell's inequality clearly shows that this interpretation cannot be correct. The measurement of a particle must necessarily influence the state of the partner particle. Equally prepared systems on their own, without non-local correlation, cannot explain the violation of Bell's inequality. This is the essence of what is meant by "non-locality" and the EPR argument.
 
  • Skeptical
Likes weirdoguy
  • #46
Rene Dekker said:
Bell's inequality clearly shows that this interpretation cannot be correct. The measurement of a particle must necessarily influence the state of the partner particle. Equally prepared systems on their own, without non-local correlation, cannot explain the violation of Bell's inequality. This is the essence of what is meant by "non-locality" and the EPR argument.
No, Bell's inequality doesn't show this. You seem to assume that the partner particle has a (mostly classical) state of its own that could be influenced to begin with. But if "state" only refers to the entire system under investigation, and mostly describes an equivalence class of equally prepared systems, then your conclusion that "this interpretation cannot be correct" is not clear at all.
 
  • Like
Likes Morbert, Lord Jestocost and vanhees71
  • #47
Exactly! To put it another way: entanglement means that the two photons are not separated in any way. Einstein thus coined the very apt term "inseparability", which is much better than "non-locality", because, as stated above, ironically it's QT that is strictly local, while a local description in terms of classical (deterministic) theories is ruled out by the observed violation of Bell's inequalities.
 
  • Like
Likes Lord Jestocost
  • #48
gentzen said:
No, Bell's inequality doesn't show this. You seem to assume that the partner particle has a (mostly classical) state of its own that could be influenced to begin with. But if "state" only refers to the entire system under investigation, and mostly describes an equivalence class of equally prepared systems, then your conclusion that "this interpretation cannot be correct" is not clear at all.
If "state" refers to the entire system, then the state of that system necessarily needs to change in a non-local way when one of the particles is measured. That is, the probabilities for the other particle change because of the measurement of its partner, and that happens in a non-local way. That's what Bell's inequality tells us.
Trying to hide that behind "the state refers to the entire system" does not change the fact that the system necessarily contains non-local correlations.
That is, the relation defining your equivalence class is a non-local relation.

vanhees71 said:
Exactly! To put it in another way: Entanglement means that the two photons are not separated in any way.
They are separated. The probabilities for their momenta and other properties are separate. They are non-locally correlated, but still separate.
 
  • #49
If they are "non-locally correlated", then they are not separate, by the precise meaning of the word "inseparability" as coined by Einstein.
 
  • #50
Rene Dekker said:
If "state" refers to the entire system, then the state of that system necessarily needs to change in a non-local way when one of the particles is measured. That is, the probabilities for the other particle change because of the measurement of its partner, and that happens in a non-local way. That's what Bell's inequality tells us.
Trying to hide that behind "the state refers to the entire system" does not change the fact that the system necessarily contains non-local correlations.
That is, the relation defining your equivalence class is a non-local relation.
The state indeed changes in a nonlocal way. But the state is not real under some interpretations. As @gentzen implied, the state can represent any preparation that reproduces it (there are many ways to prepare a biphoton system in a specific state, hence the equivalence class). It is the preparation that is real, and the causal relation is between the preparation and the instrument setting that reveals the correlation. This causal relation is consistent with relativity.
They are separated. The probabilities for their momenta and other properties are separate. They are non-locally correlated, but still separate.
You can take a partial trace to model measurements on one part of the system, but the whole system cannot be two ontologically separate systems [edit], at least not without a hidden-variable framework.
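To illustrate the partial-trace point, and why the correlations cannot be used for signalling, here is a small sketch (a singlet-like polarization state and hypothetical measurement angles): Alice's marginal statistics, obtained by summing over Bob's outcomes, are independent of Bob's measurement setting:

```python
import numpy as np

# Singlet-like biphoton (|HV> - |VH>)/sqrt(2) in the basis {HH, HV, VH, VV}.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def measurement(angle):
    """Projectors for a linear-polarization measurement at `angle`."""
    v = np.array([np.cos(angle), np.sin(angle)])
    P = np.outer(v, v)
    return P, np.eye(2) - P

def alice_prob(a, b):
    """P(Alice gets '+') when Alice measures at angle a and Bob at b."""
    Pa, _ = measurement(a)
    p = 0.0
    for Pb in measurement(b):                # sum over Bob's outcomes
        M = np.kron(Pa, Pb)                  # joint projector
        p += np.real(psi.conj() @ M @ psi)
    return p

a = 0.3                                      # hypothetical angles
print(round(alice_prob(a, 0.0), 6))          # 0.5
print(round(alice_prob(a, 1.1), 6))          # 0.5: Bob's setting invisible
```

Summing over Bob's outcomes is exactly the partial trace: Alice's reduced state is maximally mixed whatever Bob does, so the nonlocal correlations show up only when the two measurement records are compared.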
 
