The double slit in the thermal interpretation

  • #1
A. Neumaier
TL;DR Summary
This thread is for the discussion of issues specific to the thermal interpretation of the double slit experiment.
vanhees71 said:
Take an electron in the double-slit experiment: A single electron's measured position on the screen is usually not found at the place given by the position expectation value. It is of course quite probable that it lands in the main maximum of the distribution, which in this case coincides with the expectation value, but it is also found elsewhere. Thus this case, which is the paradigmatic example for the probabilistic interpretation of QT (resolving the infamous "wave-particle dualism self-contradiction" of the "old quantum theory"), shows that there is nothing that precisely determines at which position the electron will hit the screen. All that can be known are the probabilities for where it hits the screen; it may hit the screen anywhere, and in a single measurement it is not, in general, found at the average position given by the quantum state!

I prefer to discuss the double-slit experiment with light in place of electrons, since this makes the underlying principle clearer. Consider the quantum system consisting of the screen and an external (classical) electromagnetic field. This is a very good approximation to many experiments, in particular to those where the light is coherent. The standard analysis of the response of the electrons in the screen to the field (see, e.g., Chapter 9 in the quantum optics book by Mandel and Wolf) gives - according to the standard interpretation - a Poisson process for the electron emission, at a rate proportional to the intensity of the incident field. This is consistent with what is observed when doing the experiment with coherent light. A local measurement of the parameters of the Poisson process therefore provides a measurement of the intensity of the field.

There is nothing probabilistic or discrete about the field; it is just a term in the Hamiltonian of the system. Thus, according to the standard interpretation, the probabilistic response is in this case solely due to the measurement apparatus - the screen, the only quantum system figuring in the analysis. At very low intensity, the electron emission pattern appears event by event, and the interference pattern emerges only gradually. Effectively, the screen begins to stutter like a motor when fed with gas at an insufficient rate. But nobody ever suggested that the stuttering of a motor is due to discrete eigenvalues of the gas. Therefore there is no reason to assume that the stuttering of the screen is due to discrete eigenvalues of the intensity - which in the analysis given is not even an operator but just a coefficient in the Hamiltonian!

In the thermal interpretation, one assumes a similar stuttering effect at low intensity of a quantum field (whether the photon field or the electron field or a silver field or a water field), illustrated by the quantum bucket introduced in post #272 and post #6 of a companion thread.
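
As a rough sketch of this stuttering picture, one can model the screen numerically: assume a fixed, classical fringe-shaped intensity profile and let each screen cell emit electrons as an independent Poisson process with rate proportional to the local intensity (the standard detector analysis described above). The profile, cell count, and exposures below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative classical fringe-shaped intensity profile on a 1-D screen
# (arbitrary units; the cos^2 * Gaussian shape is an idealization).
x = np.linspace(-1.0, 1.0, 200)                 # screen coordinate
intensity = np.cos(6 * np.pi * x) ** 2 * np.exp(-2 * x**2)

def detection_events(intensity, exposure):
    """Poisson-distributed emission counts per screen cell,
    with rate proportional to the local field intensity."""
    return rng.poisson(intensity * exposure)

# Very low exposure: only a few isolated events - the screen "stutters".
few = detection_events(intensity, exposure=0.05)
print("events at low exposure:", int(few.sum()))

# Long exposure: the accumulated counts trace out the interference fringes.
many = detection_events(intensity, exposure=500.0)
print("events at long exposure:", int(many.sum()))
print("counts at brightest vs. darkest cell:", int(many.max()), int(many.min()))
```

At a tiny exposure only a handful of scattered events appear, while a long exposure accumulates counts that trace out the interference pattern; the pattern is encoded in the emission rates, not in any discreteness of the field.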
 
  • #2
charters said:
The first reason is the need to explain interference. If the uncertainty is just epistemic uncertainty due to measurement device error/resolution limits, then we would expect the 20-80 split between 6.57 and 6.58 to be insensitive to whether the device is placed in the near field or far field of the (beam) source. But in QM, it is possible for the eigenstates to interfere, and so the probability of measuring 6.57 or 6.58 can vary based on this path length (and varies with a high regularity that is very well predicted by summing over histories).

I don't question you already know this, but I think the reason you are untroubled by this relative to others is due to the superdeterminist underpinnings of the Thermal Interpretation, which have not been fully articulated. I think without being more explicit on this, you and others will just continue to talk past each other.
What counts is only the field intensity at any photosensitive spot. Accepting the standard detector analysis, nothing depends on the deterministic nature of the thermal interpretation. But the latter is sufficient to explain why neglecting the environment results in probabilistic features at all.
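
For reference, the path-length dependence mentioned in the quote above can be made explicit with the usual idealized two-path formula, where $r_1(x)$ and $r_2(x)$ denote the path lengths from the two slits to a point $x$ on the screen and $k$ is the wave number (an idealization added here for illustration, not tied to any particular interpretation):

$$I(x) \;\propto\; \bigl|e^{ikr_1(x)}+e^{ikr_2(x)}\bigr|^2 \;=\; 2\bigl[1+\cos\bigl(k\,[r_1(x)-r_2(x)]\bigr)\bigr],$$

so the relative detection rates at two nearby screen positions shift with the path-length difference, i.e. with whether the screen sits in the near or far field of the source.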
 
  • #3
A. Neumaier said:
What counts is only the field intensity at any photosensitive spot

A. Neumaier said:
Effectively, the screen begins to stutter like a motor when fed with gas at an insufficient rate.

But why does the screen stutter in such a specific and predictable way that, for any N=1 field/beam, only and exactly 1 of its, say, 100,000 spacelike separated silver halide cells is triggered? If "what counts is only the field intensity at any photosensitive spot" I don't see how these correlations are guaranteed. And it is especially difficult for me to reconcile such precise correlations with the claim that these are low accuracy measurements. However, this guarantee is part of what you get for free with the normal Born rule measurement story.

Put another way, I don't see what in the TI does the work of the orthogonality of eigenstates in other well known interpretations of QM.
 
  • #4
charters said:
But why does the screen stutter in such a specific and predictable way that, for any N=1 field/beam, only and exactly 1 of its, say, 100,000 spacelike separated silver halide cells is triggered? If "what counts is only the field intensity at any photosensitive spot" I don't see how these correlations are guaranteed. And it is especially difficult for me to reconcile such precise correlations with the claim that these are low accuracy measurements. However, this guarantee is part of what you get for free with the normal Born rule measurement story.

Put another way, I don't see what in the TI does the work of the orthogonality of eigenstates in other well known interpretations of QM.
The double slit experiment is not about correlations, but about the interference pattern. For this, only the field intensities matter.

The low accuracies refer to accuracies of the implied field intensity - namely one unit at the responding position and zero units elsewhere, while the true intensity is low but nonzero everywhere the high-intensity interference pattern would show up.

Of course each single spot is measurable to much higher accuracy, but this is a high accuracy measurement of the screen, not of the field (or its particle content)!

The correlations are guaranteed as in the Stern-Gerlach experiment for a silver field in an N=1 state and two possible spots; see the discussion starting here.
 
  • #5
A. Neumaier said:
The low accuracies refer to accuracies of the implied field intensity - namely one unit at the responding position and zero units elsewhere, while the true intensity is low but nonzero everywhere the high-intensity interference pattern shows up.

Ok, this is what I am trying to focus on, and I don't see an on-point response in the SG thread. If it's ok, let me first just set up my understanding and see if you agree that I understand your claim.

So, let's consider a simplified "screen" consisting of 5 cells (labelled A through E), and an N=1 beam/field incident on the screen which has traversed a double slit. Let's say the field intensity goes from 0 to 100. As the field is incident on the screen, let's say the intensities at each screen cell are

A = 25, B = 5, C = 40, D = 5, E = 25

This is a rough fringe pattern.

Normally, these sorts of numbers are interpreted as the relative probabilities of the outcomes of a projective measurement, i.e., the squared magnitude of the wavefunction amplitude. In Copenhagen, upon interaction, the field values non-unitarily collapse to, say,

A = 0 , B = 0, C = 100, D = 0, E = 0

But I believe you are saying instead that in TI, the original field intensities per cell actually survive the measurement interaction (and without branching into many worlds). The only reason the 0, 0, 100, 0, 0 result appears is that the screen cells each make an inaccurate measurement of the field, where cell A misreads 25 as 0, C misreads 40 as 100, etc.

Is this the idea or do I not get it?
 
  • #6
charters said:
Ok, this is what I am trying to focus on, and I don't see an on-point response in the SG thread. If it's ok, let me first just set up my understanding and see if you agree that I understand your claim.

So, let's consider a simplified "screen" consisting of 5 cells (labelled A through E), and an N=1 beam/field incident on the screen which has traversed a double slit. Let's say the field intensity goes from 0 to 100. As the field is incident on the screen, let's say the intensities at each screen cell are

A = 25, B = 5, C = 40, D = 5, E = 25

This is a rough fringe pattern.

Normally, these sorts of numbers are interpreted as the relative probabilities of the outcomes of a projective measurement, i.e., the squared magnitude of the wavefunction amplitude. In Copenhagen, upon interaction, the field values non-unitarily collapse to, say,

A = 0 , B = 0, C = 100, D = 0, E = 0

But I believe you are saying instead that in TI, the original field intensities per cell actually survive the measurement interaction (and without branching into many worlds). The only reason the 0, 0, 100, 0, 0 result appears is that the screen cells each make an inaccurate measurement of the field, where cell A misreads 25 as 0, C misreads 40 as 100, etc.

Is this the idea or do I not get it?
Yes, that's it!
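
A toy simulation may make the agreed-upon picture concrete. The response model below (in each N=1 run exactly one cell reports 100 and the others report 0, with the responding cell chosen with probability proportional to the local field intensity) is an illustrative assumption standing in for the detector dynamics, not something specified in this thread:

```python
import numpy as np

rng = np.random.default_rng(1)

cells = ["A", "B", "C", "D", "E"]
true_intensity = np.array([25.0, 5.0, 40.0, 5.0, 25.0])   # fixed field values at the cells

def single_run(intensity):
    """One N=1 exposure: the screen reports 100 at exactly one cell and 0 elsewhere.
    The responding cell is chosen with probability proportional to the local
    intensity (an assumed response model standing in for the detector dynamics)."""
    p = intensity / intensity.sum()
    hit = rng.choice(len(intensity), p=p)
    reading = np.zeros_like(intensity)
    reading[hit] = 100.0
    return reading

# A single run: a crude, low-accuracy reading of the field, e.g. 0, 0, 100, 0, 0.
print(dict(zip(cells, single_run(true_intensity))))

# Averaging many runs recovers the fringe pattern encoded in the true intensities.
average = np.mean([single_run(true_intensity) for _ in range(100_000)], axis=0)
print(dict(zip(cells, average.round(1))))   # approximately 25, 5, 40, 5, 25
```

Any single run is then a crude, low-accuracy reading of the field, such as 0, 0, 100, 0, 0, while the average over many runs recovers the true intensities 25, 5, 40, 5, 25.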
 
  • #7
A. Neumaier said:
Yes, that's it!

Ok wonderful.

So, then my question is: how do you guarantee that whenever cell C misreads 40 as 100, cell A misreads 25 as 0? What in the TI prevents the case where (again, still assuming an N=1 beam) C misreads 40 as 100 and A simultaneously misreads 25 as 100 as well?
 
  • #8
charters said:
Ok wonderful.

So, then my question is: how do you guarantee that whenever cell C misreads 40 as 100, cell A misreads 25 as 0? What in the TI prevents the case where (again, still assuming an N=1 beam) C misreads 40 as 100 and A simultaneously misreads 25 as 100 as well?

The correlations are guaranteed as in the Stern-Gerlach experiment for a silver field in an N=1 state and two possible spots; see the discussion starting here, and posts #8, #30. Please discuss questions about correlations first in the SG setting in that thread until it is resolved there to your satisfaction; then come back here - your problem will be solved in the same way.
 

1. What is the double slit experiment in the thermal interpretation?

The double slit experiment is a famous experiment in quantum mechanics that involves firing particles, such as photons or electrons, through two parallel slits onto a screen. The experiment shows that particles can behave like waves and exhibit interference patterns, which is a fundamental principle of quantum mechanics.

2. How does the thermal interpretation explain the results of the double slit experiment?

In the thermal interpretation, what arrives at the screen is a quantum field, and only the field intensity at each photosensitive spot matters. The discrete detection events are produced by the screen itself: at low intensity its response "stutters" event by event, much like a motor fed with fuel at an insufficient rate, rather than reflecting discrete properties of the incident field. The interference pattern reflects the intensity of the field and emerges gradually as the detection events accumulate.

3. What are the implications of the thermal interpretation for our understanding of quantum mechanics?

The thermal interpretation challenges the traditional understanding of quantum mechanics, in which individual measurement outcomes are irreducibly random. Instead, it treats quantum fields as having definite, deterministically evolving properties; the apparent randomness of single detection events arises from the coarse, low-accuracy response of the measuring device and from neglecting the environment. This has significant implications for the concept of wave-particle duality and for the nature of measurement at the quantum level.

4. Is the thermal interpretation widely accepted among scientists?

The thermal interpretation is one of several interpretations of quantum mechanics and is not universally accepted among scientists. Some scientists argue that it does not fully explain the results of the double slit experiment and that other interpretations, such as the Copenhagen interpretation, may provide a more comprehensive understanding of quantum mechanics. However, the thermal interpretation has gained some support and continues to be studied and debated by scientists.

5. How does the thermal interpretation impact our daily lives?

The thermal interpretation may not have direct implications for our daily lives, as it deals with the behavior of particles at the quantum level. However, the principles of quantum mechanics have led to technological advancements, such as the development of transistors and lasers, which have greatly impacted our daily lives. The thermal interpretation may also have broader implications for our understanding of reality and the universe.
