How does the thermal interpretation explain Stern-Gerlach?

Summary
The discussion centers on how the thermal interpretation of quantum mechanics explains the results of the Stern-Gerlach experiment, particularly when a beam of electrons with spin-z up is passed through a device oriented in the x direction. The thermal interpretation suggests that the observed splitting of the beam into two distinct paths corresponds to an uncertain measurement of the q-expectation of the spin-x operator, which is calculated to be zero. However, this interpretation raises questions, as it implies a normal distribution around zero, contradicting the experimental outcome of two distinct beams. The conversation also touches on the treatment of measurement devices in quantum mechanics, emphasizing that they should be viewed as quantum entities rather than classical ones. Ultimately, the thermal interpretation posits a deterministic framework that accounts for the observed phenomena, although its specifics remain a point of contention among participants.
  • #31
A. Neumaier said:
For a nonisolated quantum spin, the reduced density matrix is again represented on the Bloch sphere but has a nonlinear, dissipative dynamics obtained by coarse-graining.

This would appear to be the fundamental difference between the treatment you are giving and the "standard" QM treatment, which uses the linear dynamics of the Schrodinger equation combined with the obvious assumption about how the interaction Hamiltonian between the SG apparatus, with its inhomogeneous magnetic field, and the spin-1/2 passing through it acts on spin states that are parallel to the field inhomogeneity (e.g., +x and -x spins for an apparatus oriented in the x direction). In the standard treatment, the action of the Hamiltonian on a +z spin is given by simple superposition (which is allowed due to the linearity of the Schrodinger equation) of the actions on the +x and -x spins.

But the dynamics you are attributing to the system + apparatus is nonlinear, so the simple standard linear superposition picture does not work and there is no solution to the nonlinear dynamics that describes a superposition of a spot at the +x position and a spot at the -x position. Instead, there are two solutions, one describing a +x spot and one describing a -x spot, and which solution gets realized in a particular case depends on random and unmeasurable fluctuations.

Is this a fair description of what you are saying?
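The linear superposition picture invoked here can be verified directly. A minimal sketch (illustrative only, not from the discussion; state vectors written in the standard σ_z basis using numpy):

```python
import numpy as np

# Spin states in the sigma_z basis (illustrative sketch).
plus_z = np.array([1, 0], dtype=complex)
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus_x = np.array([1, -1], dtype=complex) / np.sqrt(2)

# A +z spin is the equal-weight superposition of +x and -x spins.
superposition = (plus_x + minus_x) / np.sqrt(2)
assert np.allclose(superposition, plus_z)

# Linearity of the Schrodinger dynamics: any unitary evolution U acts
# term by term on the superposition. U here is an arbitrary example,
# exp(-i*theta*sigma_x), chosen for illustration.
theta = 0.7
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sigma_x
assert np.allclose(U @ plus_z, (U @ plus_x + U @ minus_x) / np.sqrt(2))
```

It is exactly this term-by-term decomposition that has no counterpart in a nonlinear dynamics, which is the point at issue.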
 
  • #32
PeterDonis said:
This would appear to be the fundamental difference between the treatment you are giving and the "standard" QM treatment, which uses the linear dynamics of the Schrodinger equation [...]

But the dynamics you are attributing to the system + apparatus is nonlinear, so the simple standard linear superposition picture does not work.
Standard quantum mechanics also has lots of nonlinear approximations. Prime examples are the Hartree-Fock equations and the quantum-classical dynamics discussed in Part III of my series of papers. Nothing is really new in the thermal interpretation except for the reinterpretation of what a measurement means.

The instantaneous collapse in the standard interpretation mimics the nonlinearities by ignoring details of what happens during the measurement, which in fact takes a finite time. It is like mimicking the classical continuous dynamics of an electric switch by treating it as an instantaneous discontinuity.

Objective-collapse theories introduce such nonlinearities explicitly into the dynamics of the universe. But they are not needed, since they arise automatically through the well-known coarse-graining procedures.

PeterDonis said:
there is no solution to the nonlinear dynamics that describes a superposition of a spot at the +x position and a spot at the -x position.
The solutions are time-dependent. The trajectories started anywhere (e.g., in a superposition) move within a very short time (the duration of the measurement) towards one of the distinguished up and down states.
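A toy model makes this behavior concrete. The sketch below (a hypothetical illustration, not the actual coarse-grained equations of the thermal interpretation) uses the one-dimensional bistable flow dx/dt = x - x^3, whose trajectories started anywhere off the unstable point 0 settle on one of the two stable fixed points at ±1:

```python
import numpy as np

def evolve(x0, dt=0.01, steps=5000):
    """Euler-integrate the toy bistable flow dx/dt = x - x**3."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Trajectories starting anywhere off the unstable point x = 0 are
# attracted to one of the two "distinguished" states +1 or -1.
assert abs(evolve(0.3) - 1.0) < 1e-6
assert abs(evolve(-0.01) + 1.0) < 1e-6
```

Which fixed point is reached depends on the (possibly tiny) initial displacement, mirroring the role of the unmeasurable fluctuations mentioned earlier in the thread.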
 
  • #33
A. Neumaier said:
Standard quantum mechanics also has lots of nonlinear approximations.

Yes, but the standard description of what happens if you make a SG measurement oriented in the x-direction on a single +z spin-1/2 does not, as I understand it, use them. It uses the linear Schrodinger equation and superposition. As I understand it, you are saying that that is wrong. In fact, it would seem that the thermal interpretation implies that it is always wrong--that as soon as you include any kind of macroscopic apparatus to make a measurement (which in practice you always will), the dynamics are no longer linear and so the Schrodinger Equation is not correct.
 
  • #34
A. Neumaier said:
The solutions are time-dependent.

If this just means that the starting state is not preserved by the dynamics, of course this is true; but it's just as true for the linear Schrodinger Equation as for the nonlinear dynamics you appear to be using.

A. Neumaier said:
The trajectories started anywhere (e.g., in a superposition) move within a very short time (the duration of the measurement) towards one of the distinguished up and down states.

But that's not what happens with the linear Schrodinger Equation. The linear Schrodinger Equation says that if the spin-1/2 starts in the +z spin state, which is a 50-50 superposition of +x and -x spin (for a SG apparatus oriented in the x direction), the solution is a superposition of a +x spot and a -x spot. There is no such solution of the nonlinear dynamics, as you say; every solution ends up with either a +x spot or a -x spot, not a superposition of both.
 
  • #35
PeterDonis said:
If this just means that the starting state is not preserved by the dynamics, of course this is true; but it's just as true for the linear Schrodinger Equation as for the nonlinear dynamics you appear to be using.
Yes.
PeterDonis said:
But that's not what happens with the linear Schrodinger Equation. The linear Schrodinger Equation says that if the spin-1/2 starts in the +z spin state, which is a 50-50 superposition of +x and -x spin (for a SG apparatus oriented in the x direction), the solution is a superposition of a +x spot and a -x spot. There is no such solution of the nonlinear dynamics, as you say; every solution ends up with either a +x spot or a -x spot, not a superposition of both.
Yes, but traditional interpretations claim the validity of the linear Schrodinger Equation only for isolated systems. A detector is never isolated, hence the linear Schrodinger Equation does not apply.

Traditional interpretations claim nothing about what happens during the measurement, which is usually thought of as instantaneous, and only claim that the state of the system has collapsed after the measurement. This is obviously a nonlinear stochastic process.

The thermal interpretation explains this nonlinear stochastic process as an effect of the neglected environment.
 
  • #36
A. Neumaier said:
If the environment is such that it corresponds to a spin measurement with collapse to an up or down state, this dynamics is expected to have just two stable fixed points.

Hmm...I think I might get it. Let me try my own words again and see if you agree. The appearance of Copenhagen-style collapse is inextricably bound up in the physical construction of the device itself. We should think of the incident field as causing the device to transition from its (sort of metastable) "ready" configuration to one of its two or more possible (sort of "ground state") "clicked" configurations, which represent inaccurate TI measurements (as opposed to Copenhagen projections).

But the key is that not all transitions to all arbitrary clicked configurations can be induced by an N=1 field. In particular, such a field cannot induce a transition to a clicked configuration where the device has clicked 2+ times at different cells of the device.

Is that the idea?
 
  • #37
A. Neumaier said:
traditional interpretations claim the validity of the linear Schrodinger Equation only for isolated systems. A detector is never isolated, hence the linear Schrodinger Equation does not apply.

Just to be clear: you are saying that the linear Schrodinger Equation does not apply to the detector and the spots that appear on it, correct? The linear Schrodinger Equation seems to work fine for explaining how the interaction of the spin-1/2 with the inhomogeneous magnetic field splits one trajectory into two. But the spin-1/2 is not an isolated system in this interaction: the system also includes the magnetic field.
 
  • #38
charters said:
Hmm...I think I might get it. Let me try my own words again and see if you agree. The appearance of Copenhagen-style collapse is inextricably bound up in the physical construction of the device itself. We should think of the incident field as causing the device to transition from its (sort of metastable) "ready" configuration to one of its two or more possible (sort of "ground state") "clicked" configurations, which represent inaccurate TI measurements (as opposed to Copenhagen projections).

But the key is that not all transitions to all arbitrary clicked configurations can be induced by an N=1 field. In particular, such a field cannot induce a transition to a clicked configuration where the device has clicked 2+ times at different cells of the device.

Is that the idea?
Yes. Which transitions are possible is constrained by conservation laws and by selection rules.
 
  • #39
A. Neumaier said:
Yes. Which transitions are possible is constrained by conservation laws and by selection rules.

So, it seems these detector transitions are restricted at the holistic level, requiring a top down definition of the overall detector, which can be a highly nonlocal object in space and time.

Consider a detector made of multiple, widely separated components, such as arbitrarily many quad-cell photodetectors, each at the end of a different arm of a Mach-Zehnder interferometer. How do photodetectors A and B (which can be kilometers or light-years apart) know if and when they are part of the same overall MZI detector, such that their transitions have to be constrained by each other? How do they know if/when they are meant to act as one nonlocal detector?
 
  • #40
charters said:
So, it seems these detector transitions are restricted at the holistic level, requiring a top down definition of the overall detector, which can be a highly nonlocal object in space and time.
Yes. In the thermal interpretation, the whole is more than its parts. In mathematical terms, a composite system has more independent beables (q-expectations) than the beables of its parts. This is a consequence of the formal apparatus of quantum mechanics, which the thermal interpretation does not change.
charters said:
Consider a detector made of multiple, widely separated components, such as arbitrarily many quad-cell photodetectors, each at the end of a different arm of a Mach-Zehnder interferometer. How do photodetectors A and B (which can be kilometers or light-years apart) know if and when they are part of the same overall MZI detector, such that their transitions have to be constrained by each other? How do they know if/when they are meant to act as one nonlocal detector?
I don't know how they know. This seems to be a secret of the creator of the universe.

But other interpretations of quantum mechanics also have no explanation for long-distance correlations violating Bell-type inequalities. One can only say that the dynamics assumed predicts these phenomena, and that Nature conforms to these predictions.
 
  • #41
A. Neumaier said:
But other interpretations of quantum mechanics also have no explanation for long-distance correlations

In Bohm, you have the pilot wave making the necessary trajectory corrections to the particle HVs. In MWI you have local decoherence and branching. In GRW you have the stochastic collapse mechanism. In superdeterminism, you have it all baked into the initial conditions. Retrocausal interpretations have the backwards-evolving state vector. I don't know what equivalent story you want to tell here, especially if the TI is meant to be non-random.

I also would note most of these other interpretations require the above stories specifically to deal with Bell violations. You appear to need this for even a basic MZI. It is sort of like saying that even when the quanta are unentangled, the entire macroscopic world of all detectors is still highly entangled (and at long distances, to a stronger degree than implied by something like Reeh-Schlieder).
 
  • #42
charters said:
In Bohm, you have the pilot wave making the necessary trajectory corrections to the particle HVs. In MWI you have local decoherence and branching. In GRW you have the stochastic collapse mechanism. In superdeterminism, you have it all baked into the initial conditions. Retrocausal interpretations have the backwards evolving state vector.
They have stories, not explanations. They all assume an unexplained nonlocal dynamics from the start.
charters said:
I don't know what equivalent story you want to tell here, esp if the TI is meant to be non-random.
The existence of multilocal q-expectations, which provide the potentially nonlocal correlations and evolve deterministically.
charters said:
I also would note most of these other interpretations require the above stories specifically to deal with Bell violations.
So does the thermal interpretation, but without artificial baggage (no additional micropositions, no multiple worlds, no postulated collapse, no causality violations). Though it is superdeterministic in the sense that every deterministic theory of the universe has its complete fate encoded in the initial condition. (I don't really know what the extra 'super-' is about.)
charters said:
You appear to need this for even a basic MZI.
No. If one does MZI with coherent light, there are no correlations between the different detector results; each one fires independently according to its own locally incident field intensity, and the observed coincidence statistics (no bunching or antibunching) comes out. The apparent nonlocality is due to looking at the five detectors only at the random times where one of the detectors fires, and observing that at this exact time no other detector fires. In fact, the collection of all five detectors responds with an all-zero result almost all the time and occasionally with a single 100% click (in your setting); an exact coincidence has zero probability. Thus nothing nonlocal happens.
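The claimed independence can be mimicked with a toy simulation (an illustrative sketch with made-up rates, not a model of any actual experiment): detectors firing independently in each time bin, as for coherent light, show pair coincidences only at the accidental level and essentially never an exact multi-fold coincidence:

```python
import numpy as np

rng = np.random.default_rng(0)
p_fire = 0.001        # per-time-bin firing probability (made-up rate)
n_bins, n_det = 1_000_000, 5

# Each detector fires independently in each time bin, as for coherent light.
clicks = rng.random((n_bins, n_det)) < p_fire

single_rate = clicks.mean()                       # ~ p_fire
pair_rate = (clicks[:, 0] & clicks[:, 1]).mean()  # ~ p_fire**2 (accidentals)

assert abs(single_rate - p_fire) < 1e-4
assert pair_rate < 1e-5                # no bunching beyond accidentals
assert not clicks.all(axis=1).any()    # exact 5-fold coincidence: never seen
```

Conditioning on the rare bins where one detector fires, the others are almost always silent, which is the apparent "nonlocality" described above.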
 
  • #43
A. Neumaier said:
(I don't really know what the extra 'super-' is about.)

The super in superdeterminism means that the interpretation is set up such that, for a generic choice of initial conditions, the standard equations/laws we use to make predictions will not work.

So, in this case, you explain the TI as relying on:

A. Neumaier said:
The existence of multilocal q-expectations, which provide the potentially nonlocal correlations and evolve deterministically.

But I can imagine different initial conditions with arbitrary/different multilocal q-expectations and therefore different nonlocal correlations between detector components. These detector correlations won't reproduce the correct experimental outcomes, e.g., in EPR experiments. It's only a special subclass of conceivable multilocal q-expectations which have to be assumed/baked into the initial conditions in order to reproduce QM.

This is not really anything to do with the TI in particular. It is just a consequence of Bell's theorem that any single world, deterministic interpretation will feature either a pilot wave+preferred foliation, retrocausality, or superdeterminism. Based on what you've written (and since you don't seem to adopt the former two concepts) superdeterminism seems like the choice the TI makes here.
 
  • #44
charters said:
The super in superdeterminism means that the interpretation is set up such that, for a generic choice of initial conditions, the standard equations/laws we use to make predictions will not work.
But this is the case for any deterministic dynamics of a specific system. For a generic choice of initial conditions, Newton's law for our Solar system is not predictive. Would you therefore call Newton's mechanics superdeterministic?

On the other hand, the universe is a single system, so it has to be treated on par with our Solar system.
charters said:
Based on what you've written (and since you don't seem to adopt the former two concepts) superdeterminism seems like the choice the TI makes here.
Sure, the TI is deterministic, and applies only for our single universe.

By the preceding it is superdeterministic in your sense, just like Newton's mechanics for our Solar system.
charters said:
But I can imagine different initial conditions with arbitrary/different multilocal q-expectations and therefore different non-local correlations between detector components. These detector correlations won't reproduce the correct experimental outcomes, eg in EPR experiments
But these detector and environment preparations would also not reproduce the actual detector and environment preparations needed to guarantee the correct performance of these experiments.

Thus TI is predictive without the need for assuming more about the initial conditions than is assumed in the analysis of the experiment.
 
  • #45
A. Neumaier said:
Would you therefore call Newton's mechanics superdeterministic?

Yes, in a limited sense. Newtonian mechanics does have to assume restriction to the set of initial conditions where nonrelativistic physics is valid. But this is not really something to worry about for emergent theories only valid in some restricted regime. In contrast, QM is claimed to be universal and fundamental, so if the validity of its equations/laws is claimed to be contingent on initial conditions in this way, a lot of people experience some heartburn and doubt.

A. Neumaier said:
But these detector and environment preparations would also not reproduce the actual detector and environment preparations needed to guarantee the correct performance of these experiments.

This is begging the question/assuming the superdeterminist methodology. The anti-superdeterminism worldview is that you can't look to outcomes to decide which initial conditions are valid.

I'm not really trying to say superdeterminism is an unacceptable philosophy. It doesn't work for me, but it does for many people smarter than me, most prominently 't Hooft. I guess I just wanted to highlight what I see as *the* major philosophical wedge issue/commitment in the TI, which doesn't get much attention in the papers.
 
  • #46
charters said:
In contrast, QM is claimed to be universal and fundamental, so if the validity of its equations/laws is claimed to be contingent on initial conditions in this way, a lot of people experience some heartburn and doubt.
Well, in the TI, the universal laws approximately follow from the law for the full universe, for all small subsystems of the universe that physicists find (by Nature or by special equipment, which is just human-manipulated Nature) prepared in the initial states they use to make successful predictions. To produce these approximations, the initial state of the universe is irrelevant; only the initial state of the subsystem and some general features of the universe known to be valid at the time of performing the experiment matter.

Thus no fine-tuning of the universe is needed beyond perhaps a low entropy state of the early universe. And even that might perhaps come about through coarse-graining.
charters said:
This is begging the question/assuming the superdeterminist methodology. The anti-superdeterminism worldview is that you can't look to outcomes to decide which initial conditions are valid.
I don't see the problem.

It is obvious that one can predict states of a subsystem of a big deterministic system only when the initial conditions of this subsystem actually have the values assumed for the prediction! One does not have to look at the outcomes but at the preparation!
 
  • #47
A. Neumaier said:
But this is the case for any deterministic dynamics of a specific system. For a generic choice of initial conditions, Newton's law for our Solar system is not predictive. Would you therefore call Newton's mechanics superdeterministic?

The distinction between deterministic and superdeterministic theories is basically in what can be considered "free variables". For example, in the EPR experiment, we have two experimenters, Alice and Bob, who choose what measurements to perform (so that's one source of variability) and then we have the experimental results themselves, which is another source of variability. In Bell's analysis of EPR, he treats Alice's and Bob's choices as "free variables", and considers the measurement results to be functions of those choices (plus the "hidden variable", which is another free variable). In contrast, if you consider Alice's and Bob's choices to be constrained so that there is a hidden relationship between the three variables--(1) Alice's choice, (2) Bob's choice, and (3) the hidden variable value--then Bell's analysis doesn't apply. You can certainly match the predictions of EPR with local hidden variables if you assume that Alice's and Bob's choices are predictable (or are determined by the hidden variable ##\lambda##). That loophole is the superdeterminism loophole.

It might seem at first that determinism implies superdeterminism. If Alice and Bob are described by deterministic laws, then their choices should be predictable, right? But they're not really the same. Alice might decide to make her choice based on some external event, such as whether she sees a supernova explosion in a certain region of the sky right before her measurement. Bob might decide to make his choice based on whether a basketball player makes his shot. Their choices can depend on absolutely anything. So in order for Alice's and Bob's choices to be reliably correlated, it's not enough that things be deterministic, but that the whole universe (or at least the part that is observable by Alice and Bob) be set up precisely in order to make that correlation. Such superdeterminism is not just a matter of having the future determined by current conditions (ordinary determinism), but would require that current conditions of the entire universe be fine-tuned.
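For reference, the Bell-type correlations at issue can be computed directly from the singlet state: the standard CHSH combination reaches 2√2, beyond the bound of 2 that Bell's analysis derives when Alice's and Bob's choices are treated as free variables. A short sketch (a textbook computation, written out for illustration):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
SINGLET = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01>-|10>)/sqrt(2)

def corr(a, b):
    """E(a,b) = <singlet| (a.sigma) x (b.sigma) |singlet> = -a.b."""
    A = a[0]*SX + a[1]*SY + a[2]*SZ
    B = b[0]*SX + b[1]*SY + b[2]*SZ
    return np.real(SINGLET.conj() @ np.kron(A, B) @ SINGLET)

def direction(t):
    """Unit measurement direction at angle t from the z axis, in the x-z plane."""
    return np.array([np.sin(t), 0.0, np.cos(t)])

# Standard CHSH settings for Alice (a, ap) and Bob (b, bp).
a, ap = direction(0.0), direction(np.pi/2)
b, bp = direction(np.pi/4), direction(3*np.pi/4)
S = corr(a, b) - corr(a, bp) + corr(ap, b) + corr(ap, bp)
assert abs(abs(S) - 2*np.sqrt(2)) < 1e-12  # Tsirelson value, exceeding 2
```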
 
  • #48
stevendaryl said:
The distinction between deterministic and superdeterministic theories is basically in what can be considered "free variables".
Would a Laplacian classical multiparticle universe in which observers (taken to be machines to avoid problems with consciousness) are also multiparticle systems be superdeterministic in this sense?
 
  • #49
A. Neumaier said:
Would a Laplacian classical multiparticle universe in which observers (taken to be machines to avoid problems with consciousness) are also multiparticle systems be superdeterministic in this sense?
Consider such a world where the gravitational potential is ##r^{-1+a}##, for some constant ##a > 0## let's say.

Then imagine that the initial state of the universe is such that your machines are "destined" to never obtain the accuracy or sufficient statistical certainty to confirm the ##a## correction and are thus "doomed" to believe gravity has a ##r^{-1}## potential. That would be superdeterminism.

In essence a deterministic world where the initial conditions never evolve into states corresponding to observers obtaining an accurate determination of the physical laws.
 
  • #50
A. Neumaier said:
Would a Laplacian classical multiparticle universe in which observers (taken to be machines to avoid problems with consciousness) are also multiparticle systems be superdeterministic in this sense?

As I said in my previous post, being deterministic does not imply being superdeterministic. Classical mechanics is not superdeterministic.
 
  • #51
A. Neumaier said:
I don't see the problem

Ok, let me try a different route. Consider a basic SG experiment with an N=1 beam. You claim the TI is deterministic. Accordingly, to encode this hidden determinism we should be able to write the state of the experiment *prior* to the detector click as

(|UP> + |DOWN>) ⊗ {up}

where the Dirac notation is the normal quantum state and {n} is the state of the hidden variable which deterministically predicts the click. In the TI, I believe {up} and {down} would represent different fine grained distinctions in the configuration of the detector itself (as opposed to BM, where it represents different configurations of the beam).

Do you agree with this description being faithful to the TI so far?
 
  • #52
charters said:
Ok, let me try a different route. Consider a basic SG experiment with an N=1 beam. You claim the TI is deterministic. Accordingly, to encode this hidden determinism we should be able to write the state of the experiment *prior* to the detector click as

(|UP> + |DOWN>) ⊗ {up}

where the Dirac notation is the normal quantum state and {n} is the state of the hidden variable. In the TI, I believe {up} and {down} would represent different fine grained distinctions in the configuration of the detector itself (as opposed to BM, where it represents different configurations of the beam).

Do you agree with this description being faithful to the TI so far?
No. The beables (hidden variables) are the collection of all q-expectations of the universe. Given a single spin prepared in a pure state ##\psi##, we know at preparation time that for any 3-vector ##p## the quantity ##S(p):=p\cdot\sigma## of the spin satisfies ##\langle S(p)\otimes 1\rangle= \psi^*S(p)\psi##. In your case, this is the sum of the four entries of ##S(p)##. Your curly up and down correspond to pointer readings, i.e., functions of q-expectations (beables, hidden variables) of the detector, not to a state of the detector. Many states of the detector lead to identical pointer readings.

This is completely independent of the deterministic dynamics, which is the Ehrenfest dynamics of the universe.

In the most general case we know nothing more, unless we make assumptions of a similar kind about the environment, i.e., the state and the dynamics of the remainder of the universe and its interactions with the spin. These assumptions define a model for what it means that this environment contains a detector with a pointer or screen that responds to the prepared spin in the way required to count as a measurement.

Thus you need to specify a complete model for the measurement process (including a Hamiltonian for the dynamics of the model universe) to conclude something definite. This is the reason why the arguments for analyzing measurement are either very lengthy (as in the AB&N paper) or only qualitative (as in my Part III).
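The q-expectation in the quoted formula is a two-line computation. For a spin prepared along +z (the case from the start of the thread), ⟨S(p)⟩ = ψ*S(p)ψ with S(p) = p·σ gives 1 along z and 0 along x, even though individual pointer readings show ±1 (an illustrative sketch):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def q_expectation(psi, p):
    """<S(p)> = psi* (p . sigma) psi -- the beable attached to direction p."""
    S = p[0]*SX + p[1]*SY + p[2]*SZ
    return float(np.real(psi.conj() @ S @ psi))

psi_z = np.array([1, 0], dtype=complex)                  # spin prepared along +z
assert q_expectation(psi_z, np.array([0, 0, 1])) == 1.0  # <S_z>
assert q_expectation(psi_z, np.array([1, 0, 0])) == 0.0  # <S_x>
```

In the thermal interpretation these q-expectations, not the ±1 eigenvalues, are the beables; the detector measures them with an intrinsic uncertainty.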
 
  • #53
stevendaryl said:
As I said in my previous post, being deterministic does not imply being superdeterministic. Classical mechanics is not superdeterministic.
DarMM said:
In essence a deterministic world where the initial conditions never evolve into states corresponding to observers obtaining an accurate determination of the physical laws.
Do you have different notions of the meaning of superdeterministic?
 
  • #54
A. Neumaier said:
Your curly up and down correspond to pointer readings, i.e., functions of q-expectations (beables, hidden variables) of the detector, not to a state of the detector. Many states of the detector lead to identical pointer readings

Ok, it is possible I just don't get it, or you are talking about hidden variables in a way very different from what I am used to. But this may all be semantics around the use of the word "state", so I want to rephrase:

All I am trying to pin down is whether or not the hidden variable descriptions are such that, just before the measurement, all HV descriptions of the detector that will lead to an observable "up" reading (for a particular choice of axis) are completely disjoint/distinct from all the HV descriptions that will lead to an observable "down" reading?

In essence, would knowledge of the hidden variable description of the detector at t<1 allow me to perfectly predict the observed click at t=1?
 
  • #55
A. Neumaier said:
Do you have different notions of the meaning of superdeterministic?
I'm saying:
A superdeterministic world is a deterministic world where the initial conditions never evolve into states corresponding to observers obtaining an accurate determination of the physical laws.

A world can be deterministic without being superdeterministic if the initial conditions permit the development of observers who obtain accurate enough measurements to determine the laws of the world.

So for example, in 't Hooft's model, quantum mechanics is literally completely wrong. It is not merely approximately right but inaccurate in some remote regime like the early universe; it is literally completely wrong even in its predictions of, say, the Stern-Gerlach experiment. However, the initial conditions of the world are such that experimental errors occur that make it look correct.
 
  • #56
charters said:
would knowledge of the hidden variable description of the detector at t<1 allow me to perfectly predict the observed click at t=1?
I don't think so, because in reading a discrete pointer there is a fuzzy decision boundary. This is like race conditions in computer science, which may delay decisions indefinitely. Thus there is a partition into three sets: one deciding for spin up, one deciding for spin down, and one for indecision, the third one having positive measure that goes to zero only as the duration of the measurement goes to infinity.

In experimental practice, this accounts for the limited efficiency of detectors.
 
  • #57
stevendaryl said:
As I said in my previous post, being deterministic does not imply being superdeterministic. Classical mechanics is not superdeterministic.
DarMM said:
A world can be deterministic without being superdeterministic if the initial conditions permit the development of observers who obtain accurate enough measurements to determine the laws of the world.
In a classical Laplacian universe, a Laplacian detector of finite size perfectly knowing its own state can never get an arbitrarily accurate estimate of a single particle state external to it. Thus a classical Laplacian universe would be superdeterministic. Do you mean that, @DarMM, thereby contradicting @stevendaryl?

If so, the thermal interpretation is also superdeterministic, for essentially the same reason.
 
  • #58
A. Neumaier said:
I don't think so, because in reading a discrete pointer there is a fuzzy decision boundary. This is like race conditions in computer science, which may delay decisions indefinitely. Thus there is a partition into three sets: one deciding for spin up, one deciding for spin down, and one for indecision, the third one having positive measure that goes to zero only as the duration of the measurement goes to infinity.

In experimental practice, this accounts for the limited efficiency of detectors.

I don't understand how this answer is consistent with what you wrote in III.4.2, specifically:

"These other variables therefore become hidden variables that would determine the stochastic elements in the reduced stochastic description, or the prediction errors in the reduced deterministic description. The hidden variables describe the unmodeled environment associated with the reduced description. Note that the same situation in the reduced description corresponds to a multitude of situations of the detailed description, hence each of its realizations belongs to different values of the hidden variables (the q-expectations in the environment), slightly causing the realizations to differ."

Would your answer be different had I phrased my question as:

would knowledge of the hidden variable description of the detector plus its local environment (e.g., the detector casing or surrounding air in the lab) at t<1 allow me to perfectly predict the observed click at t=1?
 
  • #59
charters said:
I don't understand how this answer is consistent with what you wrote in III.4.2, specifically:

"These other variables therefore become hidden variables that would determine the stochastic elements in the reduced stochastic description, or the prediction errors in the reduced deterministic description. The hidden variables describe the unmodeled environment associated with the reduced description. Note that the same situation in the reduced description corresponds to a multitude of situations of the detailed description, hence each of its realizations belongs to different values of the hidden variables (the q-expectations in the environment), slightly causing the realizations to differ."

Would your answer be different had I phrased my question as:

would knowledge of the hidden variable description of the detector plus its local environment (e.g., the detector casing or surrounding air in the lab) at t<1 allow me to perfectly predict the observed click at t=1?
No. You can take the detector to be the whole orthogonal complement of the measured system, and my answer is still the same. You can also take it to be just the pointer variable; all other beables of the universe are effectively hidden variables, no matter whether they are actually hidden. My first response was less focussed and ignored the race conditions since your question was less clear.

This is because of the nature of a real detection process (which is what is modeled in the thermal interpretation). There is a continuous pointer variable ##x## (a function of the beables = hidden variables = q-expectations, all of them continuous) of the detector that is initially at zero. Suppose that the pointer readings for a decision up are close to ##1##, that for down are close to ##-1##, and a reading counts as definite only if the sign and one bit of accuracy have persisted for more than a minimal duration ##\Delta t##. This defines the three response classes up, down, and undecided. At short times after the preparation, the detector didn't have sufficient time to respond, and the third (undecided) set of conditions has measure essentially 1; the up and down measures are essentially zero. These measures are a continuous function of the observation time and gradually move to ##0,p,1-p##, but achieve these values only in the limit of infinite time.
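The gradual growth of the up/down measures at the expense of the undecided class can be seen in a toy stochastic pointer (a hypothetical sketch, not the actual detection model): a noisy bistable variable started at 0, counted as decided once its magnitude exceeds 0.5:

```python
import numpy as np

rng = np.random.default_rng(1)

def pointer_runs(n_traj, n_steps, dt=0.01, noise=0.3):
    """Simulate noisy bistable pointers: dx = (x - x**3) dt + noise dW."""
    x = np.zeros(n_traj)
    for _ in range(n_steps):
        x += (x - x**3) * dt + noise * np.sqrt(dt) * rng.standard_normal(n_traj)
    return x

def undecided_fraction(n_steps):
    x = pointer_runs(2000, n_steps)
    return np.mean(np.abs(x) < 0.5)   # neither "up" (~ +1) nor "down" (~ -1)

early, late = undecided_fraction(50), undecided_fraction(5000)
assert early > late   # the undecided measure shrinks with observation time
assert late < 0.05    # after a long time almost every run has decided
```

In this symmetric setting the decided fractions approach roughly one half each, the p and 1-p of the text; the undecided measure vanishes only in the infinite-time limit.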
 
  • #60
Ok, I appreciate the details, but I don't think this is necessary for the heart of my question. Some finite time after the N=1 beam has become incident on the detector, the pointer will visibly have settled towards 1 or -1. I am not concerned with how quickly this happens.

All I want to know is: would a full hidden variable/beable description of the detector/environment at some time before the beam is incident be sufficient to predict whether the detector eventually reads 1 or -1 (for any given beam)?

I take this to be the minimal definition of hidden variable determinism in quantum foundations, so if you say no to this, I don't understand how you claim the TI is deterministic (except in the classical limit where all interpretations are effectively deterministic) or has meaningful hidden variables. Hidden variables that don't make this sort of prediction are not really fulfilling their defined purpose.
 
