Is Free Will a Foundational Assumption in Quantum Theory?

  • #91
A. Neumaier said:
One has that also in a sequence of quantum measurements...
Yes, but not in between the two measurements. In between, the system is in an undefined state (according to the Copenhagen interpretation), which cannot be said of stochastic processes in the usual sense.
 
  • #92
To be completely clear, this is part of the formalism. Another system constitutes the POVM selected for the system under study. Only a POVM provides a well-defined statistical model; the full algebra of projectors does not.
 
  • #93
Demystifier said:
Yes, but not in between the two measurements. In between, the system is in an undefined state (according to the Copenhagen interpretation), which cannot be said of stochastic processes in the usual sense.
There are also discrete stochastic processes. They are much used in practical time series analysis, where one only has access to a discrete series of measurements. Continuous stochastic processes arise in the quantum mechanics of continuous measurements.
 
  • #94
DarMM said:
To be completely clear this is part of the formalism. Another system constitutes the POVM selected for the system under study. Only a POVM provides a well defined statistical model, the full algebra of projectors does not.
Repeated application of Born's rule with collapse, as usually stated since Heisenberg and Dirac, already provides a well defined (though too idealized) statistical model involving a discrete stochastic process.
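As a minimal illustration of such a discrete stochastic process (my own toy sketch, not from any particular paper): alternating projective qubit measurements, sampled via the Born rule with collapse, generate a discrete random record of outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Projectors for a sigma_z measurement and a sigma_x measurement on a qubit
P_z = [np.array([[1, 0], [0, 0]], complex), np.array([[0, 0], [0, 1]], complex)]
P_x = [0.5 * np.array([[1, 1], [1, 1]], complex), 0.5 * np.array([[1, -1], [-1, 1]], complex)]

def measure(state, projectors):
    """Sample an outcome via the Born rule, then collapse the state."""
    probs = [np.real(state.conj() @ P @ state) for P in projectors]
    k = rng.choice(len(projectors), p=np.array(probs) / sum(probs))
    new_state = projectors[k] @ state
    return k, new_state / np.linalg.norm(new_state)

state = np.array([1, 0], complex)            # start in |0>
record = []
for step in range(10):                       # alternate z- and x-measurements
    k, state = measure(state, P_z if step % 2 == 0 else P_x)
    record.append(int(k))
print(record)                                # one realization of a discrete stochastic process
```

Each run yields a different outcome sequence; the collapse step is what makes the sequence a stochastic process rather than a single deterministic trajectory.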
 
  • #95
A. Neumaier said:
Repeated application of Born's rule with collapse, as usually stated since Heisenberg and Dirac, already provides a well defined (though too idealized) statistical model involving a discrete stochastic process.
Of course, that's (in general) a sequence of POVMs. What distinction are you pointing out?
 
  • #96
DarMM said:
Of course, that's (in general) a sequence of POVMs.
Not quite. A POVM neither specifies the observed value nor the posterior state. To have a well-defined and realistic discrete stochastic process, one needs more than a POVM, namely a quantum instrument.
DarMM said:
What distinction are you pointing out?
The main point was that no POVMs are needed to have stochastic processes in a quantum setting. Note that POVMs for quantum measurement were introduced in 1968, long after Born obtained his Nobel prize.
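To illustrate the distinction numerically (a toy example of my own; the operators are chosen only for illustration): the same POVM can be realized by different instruments, which agree on all outcome probabilities but disagree on the posterior states.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], complex)

# One POVM (a fair "quantum coin": E0 = E1 = I/2) ...
E = [I2 / 2, I2 / 2]

# ... realized by two different instruments (different Kraus operators)
instrument_A = [I2 / np.sqrt(2), I2 / np.sqrt(2)]   # leaves the state alone
instrument_B = [I2 / np.sqrt(2), X / np.sqrt(2)]    # flips the state on outcome 1

# Both decompose the same POVM: M^dagger M = E_k for each outcome k
for inst in (instrument_A, instrument_B):
    for M, Ek in zip(inst, E):
        assert np.allclose(M.conj().T @ M, Ek)

psi = np.array([1, 0], complex)                     # |0>
# Outcome probabilities agree for every state (1/2 each here) ...
pA = [np.real(psi.conj() @ M.conj().T @ M @ psi) for M in instrument_A]
pB = [np.real(psi.conj() @ M.conj().T @ M @ psi) for M in instrument_B]

# ... but the posterior states on outcome 1 differ: |0> versus |1>
postA = instrument_A[1] @ psi; postA /= np.linalg.norm(postA)
postB = instrument_B[1] @ psi; postB /= np.linalg.norm(postB)
print(pA, pB, postA, postB)
```

So the POVM alone fixes the statistics of outcomes, while the instrument additionally fixes the state update, which is what a stochastic process of repeated measurements needs.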
 
  • #97
A. Neumaier said:
Not quite. A POVM neither specifies the observed value nor the posterior state. To have a well-defined and realistic discrete stochastic process, one needs more than a POVM, namely a quantum instrument.
Of course true. I was only referring to the need for an external system to define an outcome space, that system being represented by a choice of POVM. I wasn't saying you only need a POVM; of course you need the state as well, etc. Rather, it is that only after the selection of a POVM is the statistical model defined, unlike the classical case, where no such selection is needed on the algebra of random variables.

A. Neumaier said:
The main point was that no POVMs are needed to have stochastic processes in a quantum setting. Note that POVMs for quantum measurement were introduced in 1968, long after Born obtained his Nobel prize.
Again, of course. My point was more that one needs something to select an outcome space for the system in order to have a well-defined statistical model, unlike in the classical probabilistic case, where events are defined without such a choice of an auxiliary system.
It's only that in general the auxiliary system is represented by a POVM; I wasn't claiming that nobody had a statistical understanding of QM prior to 1968. PVMs, being a special case, also represent a certain idealized such auxiliary system.
 
  • #98
DarMM said:
My point was more that one needs something to select an outcome space for the system in order to have a well-defined statistical model, unlike in the classical probabilistic case, where events are defined without such a choice of an auxiliary system.
Unlike only in simplistic classical models.

Events are also not defined in a Laplacian classical universe without such a choice of an auxiliary system.

To get a proper statistical system, one needs something to select parts of the universe to serve as observed systems and detectors, respectively, and then do some coarse-graining of the detector dynamics.

This is the Heisenberg cut! It is also necessary in classical physics if you model the detector in a microscopic way.
 
  • #99
Smarter guys have more free will. The higher the organization, the higher the emergent new properties; lower-level organisms have little or no free will (e.g. worms, mollusks, etc.). Self-conscious thought must play a big role in free will. Low-intelligence individuals are usually bound by their animal instincts and thus often end up in jail, unable to explain why they do the stuff they do; probably little free will to speak of. Higher consciousness is roughly equal to 'free will'.
 
  • #100
A. Neumaier said:
Unlike only in simplistic classical models.

Events are also not defined in a Laplacian classical universe without such a choice of an auxiliary system.
I don't think so. Since one simply has a Boolean algebra of propositions, the events can be considered to occur independently of the device, with the device simply recording them with some small disturbance to both. All observables "mesh" correctly to be considered random variables on one sample space of outcomes.

Do you have a reference for this?
 
  • #101
A. Neumaier said:
Do you really think that a child is not free in its decisions just because we can predict that it will say yes when it is asked whether it likes to have ice cream?

We should distinguish between "free will" as a sensation of a conscious being along the lines of "I decide to..." versus the properties of the underlying physical processes that implement this sensation. Do the physical processes that implement the sensation of free will differ in some fundamental way from the physical processes that implement the weather or the behavior of insects?
 
  • #102
For me, it would be a huge surprise if it is ever shown that biological systems, and concretely neural systems, follow laws independent of the laws of physics.

As far as I know, there is currently not a single hint that points in that direction.
 
  • #103
DarMM said:
I don't think so. Since one simply has a Boolean algebra of propositions, the events can be considered to occur independently of the device, with the device simply recording them with some small disturbance to both. All observables "mesh" correctly to be considered random variables on one sample space of outcomes.

Do you have a reference for this?
A classical universe has no probability - everything is deterministic. It just has particles with time-dependent positions and momenta - no observers or detectors, unless these are introduced through a Heisenberg cut.
Only the latter introduces probability - quite in the spirit of Heisenberg.

I don't know a single paper dealing with the classical measurement problem - the question how a detector subsystem of a large classical chaotic system can acquire information about a disjoint subsystem to be measured.

This would be the classical analogue of the quantum measurement situation, and it has a lot of the features of the latter.

A classical event would be something happening to the measured system that is approximated by something happening in the detector. This introduces probability in an otherwise deterministic classical universe.
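As a toy illustration of this (my own sketch, with an arbitrary choice of map and seed): a fully deterministic chaotic system, observed only through a coarse-grained "detector", already produces an effectively random record.

```python
# A deterministic chaotic system (the logistic map at r = 4) observed through
# a coarse-grained detector that only records left/right half of the interval.
# The microdynamics is exactly deterministic, yet the coarse record behaves
# statistically like a fair coin: a toy picture of how a cut plus
# coarse-graining introduces probability into a deterministic universe.
x = 0.123456789                       # arbitrary microstate (seed)
bits = []
for _ in range(100000):
    x = 4.0 * x * (1.0 - x)           # exact deterministic evolution
    bits.append(1 if x > 0.5 else 0)  # what the coarse detector records
print(sum(bits) / len(bits))          # close to 0.5
```

The detector's record is unpredictable in practice even though nothing indeterministic happens at the micro level; all the "randomness" comes from the coarse-graining.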
 
  • #104
mattt said:
For me, it would be a huge surprise if it is ever shown that biological systems, and concretely neural systems, follow laws independent of the laws of physics.

As far as I know, there is currently not a single hint that points in that direction.
The Emperor's New Mind? Not sure - I read it some 10 years back and didn't even finish it, IIRC.
 
  • #105
A. Neumaier said:
A classical universe has no probability - everything is deterministic.

This can't be the case.

This would mean there's some mechanism that knows the experimenter's choice prior to measurement, and this would go against all of the evidence, like the Free Will Theorem, which states:

Given the axioms, if the two experimenters in question are free to make choices about what measurements to take, then the results of the measurements cannot be determined by anything previous to the experiments.

The Axioms are:

  1. Fin: There is a maximal speed for propagation of information (not necessarily the speed of light). This assumption rests upon causality.
  2. Spin: The squared spin component of certain elementary particles of spin one, taken in three orthogonal directions, will be a permutation of (1,1,0).
  3. Twin: It is possible to "entangle" two elementary particles and separate them by a significant distance, so that they have the same squared spin results if measured in parallel directions. This is a consequence of quantum entanglement, but full entanglement is not necessary for the twin axiom to hold (entanglement is sufficient but not necessary).
Free Will Theorem

There would have to be some hidden-variable mechanism that transmits information faster than light to the quantum system being measured, and that system would be determined by this hidden-variable mechanism prior to measurement - and that goes against everything that has been observed. Here are some experiments.

The Big Bell Test, which closed the freedom-of-choice loophole:

Challenging local realism with human choices

A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism [1], in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings [2,3]. Although technology can satisfy the first two of these requirements [4-7], the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human ‘free will’ could be used rigorously to ensure unpredictability in Bell tests [8]. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology [9]. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons [5,6], single atoms [7], atomic ensembles [10] and superconducting devices [11]. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite [12] scenarios.
Project outcomes include closing the ‘freedom-of-choice loophole’ (the possibility that the setting choices are influenced by ‘hidden variables’ to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.

https://www.nature.com/articles/s41586-018-0085-3

Here's 2 more:

Experimental rejection of observer-independence in the quantum world

The scientific method relies on facts, established through repeated measurements and agreed upon universally, independently of who observed them. In quantum mechanics, the objectivity of observations is not so clear, most dramatically exposed in Eugene Wigner's eponymous thought experiment where two observers can experience fundamentally different realities. While observer-independence has long remained inaccessible to empirical investigation, recent no-go-theorems construct an extended Wigner's friend scenario with four entangled observers that allows us to put it to the test. In a state-of-the-art 6-photon experiment, we here realize this extended Wigner's friend scenario, experimentally violating the associated Bell-type inequality by 5 standard deviations. This result lends considerable strength to interpretations of quantum theory already set in an observer-dependent framework and demands for revision of those which are not.
https://arxiv.org/abs/1902.05080

Wheeler's delayed-choice gedanken experiment with a single atom

The wave–particle dual nature of light and matter and the fact that the choice of measurement determines which one of these two seemingly incompatible behaviours we observe are examples of the counterintuitive features of quantum mechanics. They are illustrated by Wheeler’s famous ‘delayed-choice’ experiment [1], recently demonstrated in a single-photon experiment [2]. Here, we use a single ultracold metastable helium atom in a Mach–Zehnder interferometer to create an atomic analogue of Wheeler’s original proposal. Our experiment confirms Bohr’s view that it does not make sense to ascribe the wave or particle behaviour to a massive particle before the measurement takes place [1]. This result is encouraging for current work towards entanglement and Bell’s theorem tests in macroscopic systems of massive particles.

https://arxiv.org/abs/1902.05080

So everything can't be deterministic. The experimenter's choice has to be totally free unless there's some faster-than-light hidden variable that determines the outcomes of quantum systems prior to a measurement occurring.
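For reference, the CHSH arithmetic behind these Bell tests can be sketched in a few lines (standard textbook settings for the singlet state; the function name is just illustrative):

```python
import numpy as np

# CHSH correlations for the singlet state: E(a, b) = -cos(a - b).
# Local realism bounds |S| <= 2; quantum mechanics reaches 2*sqrt(2)
# (Tsirelson's bound) at the standard angle choices below.
def E(a, b):
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2             # Alice's two measurement settings
b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two measurement settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))                      # 2*sqrt(2) ≈ 2.828, exceeding the local bound of 2
```

The experiments cited above are, in essence, measuring this quantity with the setting choices supplied by humans or fast random number generators.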
 
  • #106
Quantum Alchemy said:
This can't be the case.

Note that @A. Neumaier said a classical universe. The Spin and Twin axioms of the Free Will Theorem would not hold in a classical universe; they are quantum assumptions.
 
  • #107
PeterDonis said:
Note that @A. Neumaier said a classical universe. The Spin and Twin axioms of the Free Will Theorem would not hold in a classical universe; they are quantum assumptions.

Yes, they would hold, and this is stated by Kochen and Conway in their lectures. This has to be the case, or there would need to be some mechanism that transmits information faster than light and determines the experimenter's choice. Here's an example:

Say you have an entangled particle pair: one particle goes to Alice in lab A and the other goes to Bob in lab B. There's no information in the brain or anywhere else that can determine the choices of Alice and Bob.

They can get together and agree that Alice will carry out her measurement before Bob, or vice versa. They can also let a random number generator determine who measures first: if it outputs a 1, Alice will carry out her measurement at 1:00 and Bob at 1:01; if it outputs a 0, Bob will carry out his measurement at 1:00.

The Big Bell Test, which I listed earlier, shows that these would have to be free choices carried out by Bob and Alice. It says this at the end of the abstract:

Project outcomes include closing the ‘freedom-of-choice loophole’ (the possibility that the setting choices are influenced by ‘hidden variables’ to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science.

As far as I'm concerned, determinism isn't scientific; it's a philosophy. There's no evidence of any mechanism in the brain, or anywhere else, that can determine the experimenter's choice prior to carrying out a measurement.
 
  • #108
Quantum Alchemy said:
Yes they would hold and this is stated by Kochen and Conway in their lectures.

Where do they say the Spin and Twin axioms hold in classical physics? Please give a specific paper, section, and page number.

Quantum Alchemy said:
Here's an example

You don't need to explain how EPR experiments work; we all know that. You need to back up your claim about the Spin and Twin axioms holding in classical physics.
 
  • #109
The physics we know today is too rudimentary to derive anything of significance for this topic. Hierarchy and the emergence of new features (top-down causation) play a big role in how quantum superpositions can manifest as atoms, molecules, chemicals, materials, biological systems, conscious entities able to process information, social phenomena and self-awareness. Other planets may be simpler, but this one is home to molecules as big as 200 billion atoms. And complexity comes with new useful features. Somehow this universe is too survivable to be an accident. Due to parsimony one would hardly expect such an extraordinary myriad of new properties that can combine and lead to such a long chain of hierarchical structures and ultimately conscious thought and free will, but probably 1 or 2 different mundane basic ingredients. Why is this possible? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3262299/
 
  • #110
PeterDonis said:
Where do they say the Spin and Twin axioms hold in classical physics? Please give a specific paper, section, and page number.

Here you go:

These are excerpts from the paper The Strong Free Will Theorem by Conway and Kochen that refute determinism.

The TWIN Axiom: For twinned spin 1 particles, suppose experimenter A performs a triple experiment of measuring the squared spin component of particle a in three orthogonal directions x, y, z, while experimenter B measures the twinned particle b in one direction, w. Then if w happens to be in the same direction as one of x, y, z, experimenter B’s measurement will necessarily yield the same answer as the corresponding measurement by A.

The MIN Axiom: Assume that the experiments performed by A and B are space-like separated. Then experimenter B can freely choose any one of the 33 particular directions w, and a’s response is independent of this choice. Similarly and independently, A can freely choose any one of the 40 triples x, y, z, and b’s response is independent of that choice. It is the experimenters’ free will that allows the free and independent choices of x, y, z and w. But in one inertial frame – call it the “A-first” frame – B’s experiment will only happen some time later than A’s, and so a’s response cannot, by temporal causality, be affected by B’s later choice of w. In a B-first frame, the situation is reversed, justifying the final part of MIN. (We shall discuss the meaning of the term “independent” more fully in the Appendix.)

Here's the coup de grace:

Some readers may object to our use of the term “free will” to describe the indeterminism of particle responses. Our provocative ascription of free will to elementary particles is deliberate, since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom. Indeed, it is natural to suppose that this latter freedom is the ultimate explanation of our own.

The tension between human free will and physical determinism has a long history. Long ago, Lucretius made his otherwise deterministic particles “swerve” unpredictably to allow for free will. It was largely the great success of deterministic classical physics that led to the adoption of determinism by so many philosophers and scientists, particularly those in fields remote from current physics. (This remark also applies to “compatibilism,” a now unnecessary attempt to allow for human free will in a deterministic world.)

Although, as we show in [1], determinism may formally be shown to be consistent, there is no longer any evidence that supports it, in view of the fact that classical physics has been superseded by quantum mechanics, a non-deterministic theory. The import of the free will theorem is that it is not only current quantum theory, but the world itself that is nondeterministic, so that no future theory can return us to a clockwork universe.


https://arxiv.org/pdf/0807.3286.pdf

I also listed other experiments that support what I'm saying, including the Big Bell Test. There's no evidence to support determinism; it's not scientific in any way, as shown by Conway and Kochen in the Strong Free Will Theorem.
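The SPIN axiom from earlier in the thread (squared spin components of a spin-1 particle along three orthogonal axes form a permutation of (1,1,0)) can also be checked numerically with the standard spin-1 matrices. This is my own small sketch, not from the paper:

```python
import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex)
Sz = np.diag([1, 0, -1]).astype(complex)

Sq = [S @ S for S in (Sx, Sy, Sz)]   # squared spin components

# The squared components commute pairwise, so they are jointly measurable ...
for A in Sq:
    for B in Sq:
        assert np.allclose(A @ B, B @ A)

# ... each has eigenvalues in {0, 1}, and they sum to s(s+1) I = 2 I,
# so the joint outcome along any orthogonal triple must be a permutation
# of (1, 1, 0), exactly as the SPIN axiom states.
assert np.allclose(sum(Sq), 2 * np.eye(3))
print("squared spin components sum to 2I: SPIN arithmetic checks out")
```

This only verifies the quantum-mechanical arithmetic behind the axiom; whether it would hold in a classical world is the point under dispute here.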
 
  • #111
Quantum Alchemy said:
These are excerpts from the paper The Strong Free Will Theorem by Kochen and Conway that refute Determinism.

There is currently nothing that can refute determinism, given that our best physics theories of today admit deterministic interpretations.

...classical physics has been superseded by quantum mechanics, a non-deterministic theory. The import of the free will theorem is that it is not only current quantum theory, but the world itself that is nondeterministic...


That is not correct. Quantum mechanics admits several deterministic interpretations. Quantum field theory does too.
 
  • #112
By the way, I have never felt that I have free will in the sense that other people say they feel they have free will. I can try to formulate an a posteriori explanation of my thoughts and my feelings, but these are only guesses, because we don't have access to the exact neural processes that give rise to thoughts and feelings in consciousness.
 
  • #113
mattt said:
There is currently nothing that can refute Determinism, given that our best Physics Theories of today, admit deterministic interpretations.
That is not correct. Quantum Mechanics admits several deterministic interpretations. Quantum field Theory too.

This isn't the case. There's nothing that reduces the probabilities to 1 outside of a measurement. QM is inherently indeterministic.

When you look at QFT, the standard formulation is in terms of the S-matrix. So when you apply the S-matrix to the state, you can only ask questions like: what's the probability that this vector describes...

So the probability distribution, i.e. the set of outcomes that can occur, is deterministic, but the outcomes that do occur are not.

This would be like saying the outcomes for a pair of dice are deterministic. You can only roll a 2-12 no matter how many times you roll the dice. Each roll is random, and we can only talk about the outcomes of each roll in terms of probability. For instance, you're likely to see a 7 more than a 2 as you keep rolling the dice, because there are more ways for a 7 to be rolled.

There's nothing in QFT or any formulation of QM that reduces the probabilities that can occur to 1. So it's either 2 outcomes or maybe 10^500, but never just 1. This is the inherent indeterminism you can't avoid in QM. This is based on things like the Strong Free Will Theorem, the Big Bell Test and more. I've listed experiments to support what I'm saying, and I haven't seen anything that refutes it.

Again, you can only get a probability distribution. So 52 cards are deterministic. The outcomes 2-12 on a pair of dice are deterministic. There are 2,598,960 poker hands that can occur, and that's deterministic. Knowing this, we can calculate what probabilities can occur with certainty. We can't calculate who will get dealt what hands in a poker game or what number you will roll in a dice game, any more than we can calculate whether you're going to measure spin up or spin down.

So it's inherently indeterministic, and you can never reduce the outcomes to 1.
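The counting in the dice and poker examples checks out directly (a small sketch; nothing here is specific to QM):

```python
from math import comb
from collections import Counter
from itertools import product

# The *space* of outcomes is fixed and known in advance ...
assert comb(52, 5) == 2598960           # number of 5-card poker hands

# ... and so is the probability distribution over dice sums:
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(counts[7], counts[2])             # 6 ways to roll a 7, 1 way to roll a 2
```

The distribution is fixed; which particular outcome occurs on a given roll is not.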
 
  • #114
Quantum Alchemy said:
This isn't the case. There's nothing that reduces the probabilities to 1 outside of a measurement. QM is inherently indeterministic.

When you look at QFT, the standard formulation is in terms of the S-matrix. So when you apply the S-matrix to the state, you can only ask questions like: what's the probability that this vector describes...

So the probability distribution, i.e. the set of outcomes that can occur, is deterministic, but the outcomes that do occur are not.

This would be like saying the outcomes for a pair of dice are deterministic. You can only roll a 2-12 no matter how many times you roll the dice. Each roll is random, and we can only talk about the outcomes of each roll in terms of probability. For instance, you're likely to see a 7 more than a 2 as you keep rolling the dice, because there are more ways for a 7 to be rolled.

There's nothing in QFT or any formulation of QM that reduces the probabilities that can occur to 1. So it's either 2 outcomes or maybe 10^500, but never just 1. This is the inherent indeterminism you can't avoid in QM. This is based on things like the Strong Free Will Theorem, the Big Bell Test and more. I've listed experiments to support what I'm saying, and I haven't seen anything that refutes it.

Again, you can only get a probability distribution. So 52 cards are deterministic. The outcomes 2-12 on a pair of dice are deterministic. There are 2,598,960 poker hands that can occur, and that's deterministic. Knowing this, we can calculate what probabilities can occur with certainty. We can't calculate who will get dealt what hands in a poker game or what number you will roll in a dice game, any more than we can calculate whether you're going to measure spin up or spin down.

So it's inherently indeterministic, and you can never reduce the outcomes to 1.

What is deterministic or not is the model we use, and quantum field theory, as a model of reality (the best one we have currently), in the Thermal Interpretation (for example), is completely deterministic.
 
  • #115
But is emergent determinism the same as the determinism observed on the macro scale? Without making unwarranted assumptions, the micro world is fundamentally indeterministic and non-realistic. All these so-called interpretations rely on assumptions that fail in experiments (see the Wigner's friend experiment from March 2019). Einstein was a proponent of this line of reasoning and was forced to concede defeat.
 
  • #116
EPR said:
Without making unwarranted assumptions, the micro world is fundamentally indeterministic

That's not correct. We have both deterministic and non-deterministic equally valid models of fundamental physics.

EPR said:
All these so-called interpretations rely on assumptions that fail in experiments

No experiment can favor one interpretation over another (if they are actually interpretations).
 
  • #117
mattt said:
That's not correct. We have both deterministic and non-deterministic equally valid models of fundamental physics.
Are the deterministic interpretations known to fully work with QFT? Genuine question. My understanding was that Bohmian Mechanics hasn't been successfully generalized, that due to the Reeh-Schlieder theorem there are issues with understanding MWI branching, and that the Thermal Interpretation relies on certain as-yet-unproven conjectures.
 
  • #118
A. Neumaier said:
This would be the classical analogy of the quantum measurement situation, and has a lot of the features of the latter.

A classical event would be something happening to the measured system that is approximated by something happening in the detector. This introduces probability in an otherwise deterministic classical universe.
That's the point, though. In the classical case we can consider the imprint in the device as some kind of approximation of an event that occurred in the system. This is because all random variables in the classical case can be considered as functions on a space of outcomes. Thus we have some notion of the events of the microsystem even when no external system is present to register them.

In quantum theory viewed as a probability theory, due to the non-Boolean structure, we do not. Some device must be present to define the outcome space. The external system is a crucial aspect of defining the events unlike in the classical case where we can consider there to be events independent of the external system, with the external system simply recording them with some hopefully small error.

A recent paper by Janas, Cuffaro, Janssen (https://arxiv.org/abs/1910.10688) puts it well:
Quantum mechanics is about probabilities. The kinematical framework of the theory is probabilistic in the sense that the state specification of a given system yields, in general, only the probability that a selected observable will take on a particular value when we query the system concerning it. Quantum mechanics’ kinematical framework is also non-Boolean: The Boolean algebras corresponding to the individual observables associated with a given system cannot be embedded into a global Boolean algebra comprising them all, and thus the values of these observables cannot (at least not straightforwardly) be taken to represent the properties possessed by that system in advance of their determination through measurement. It is in this latter—non-Boolean—aspect of the probabilistic quantum-kinematical framework that its departure from classicality can most essentially be located.
The profound problem of measurement stems, rather, from the fact that of the many classical probability distributions that are implicit in the quantum state description, the one that emerges in a given scenario is always conditional upon the choice that we make from among the many possible measurements performable on the system. In other words it is the—in part physical and in large part philosophical—problem to account for the fact that, owing to the nature of the non-Boolean kinematical structure of quantum mechanics, only some of the classical possibility distributions implicit in the quantum state are actualized in the context of a given measurement, and moreover which of them are actualized is always conditional upon that measurement context.
Given a particular measurement context, quantum mechanics provides us with all of the resources we need in order to account for the dynamics of the measurement interaction between the system of interest and measurement device, and through this account we explain why a particular classical probability distribution is applicable given that measurement context, despite the non-classical nature of the quantum state description. Quantum mechanics does not tell you, however, which of the many possible measurements on a system you should apply in a given case. From the point of view of the theory the choices you make or do not make are up to you.
So quantum theory provides a stochastic description of a system-external system interaction when supplied with a choice of external system, but it is intrinsically incapable of modelling that choice of external system. Moreover this is a feature of any non-Kolmogorovian probability theory.
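A minimal numerical illustration of the non-Boolean point (my own sketch; the state and angle are arbitrary): two qubit projectors that do not commute, so no single sample space carries both propositions, and which classical distribution applies depends on the chosen measurement context.

```python
import numpy as np

# Projectors for the propositions "spin-z up" and "spin-x up" on a qubit
Pz = np.array([[1, 0], [0, 0]], complex)           # |0><0|
Px = 0.5 * np.array([[1, 1], [1, 1]], complex)     # |+><+|

# They do not commute, so there is no single Boolean algebra (no one
# sample space) containing both propositions ...
print(np.allclose(Pz @ Px, Px @ Pz))               # False

# ... and the classical distribution that applies depends on the context:
psi = np.array([np.cos(0.3), np.sin(0.3)], complex)
p_z = np.real(psi.conj() @ Pz @ psi)               # Born probability, z-context
p_x = np.real(psi.conj() @ Px @ psi)               # Born probability, x-context
print(p_z, p_x)
```

Each context yields a perfectly ordinary Kolmogorovian distribution; it is the choice between contexts that the theory itself does not model.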
 
  • #119
DarMM said:
Are the deterministic interpretations known to fully work with QFT? Genuine question. My understanding was that Bohmian Mechanics hasn't been successfully generalized, due to the Reeh-Schlieder theorem there are issues with understanding MWI branching and the Thermal Interpretation relies on certain as of yet unproven conjectures.
The standard source for Bohmian bosonic field theory is:

Bohm, D., Hiley, B.J., Kaloyerou, P.N. (1987). An ontological basis for the quantum theory, Phys. Reports 144(6), 321-375.

For bosons, it considers mainly the scalar field, but this is the key part. For the gauge fields, I would simply reuse the scheme for scalar fields and throw away gauge symmetry as unimportant. Is it necessary for renormalizability? So what; anyway, I have to care only about effective field theory, and what remains at large distances of a general vector field are the renormalizable parts, thus the gauge-invariant parts. So this seems to be nothing one would have to care about if one defines the theory resp. its Bohmian version.

For fermions, the Dürr group favors the particle ontology.

I would prefer to use a construction that gives fermions out of bosonic fields. To such a construction you can then apply the scheme above.

I know one such construction, given in arXiv:0908.0591. It starts with a scalar field with a degenerate vacuum, regularized on a 3D lattice. At low energies this gives a ##\mathbb{Z}_2##-valued field together with a much more massive scalar field. The ##\mathbb{Z}_2##-valued field already has a fermionic character at each point, but the operators at different points do not commute. To transform this into a fermionic operator algebra is also quite standard: all one needs is to define some order. A quite strange order is constructed on the lattice, and the resulting lattice equations become, via a doubling effect, in the large-distance limit, the equations of two Dirac fermions.
 
  • #120
mattt said:
That's not correct. We have both deterministic and non-deterministic equally valid models of fundamental physics.
No experiment can favor one interpretation over another (if they are actually interpretations).
These local hidden-variable theories, aka determinism, have gone out of favour since the ever-growing efficiency of experimental setups has eliminated more than 99% of the loopholes of Bell tests (if memory serves me right). All in favour of quantum mechanics in its orthodox form. With non-local deterministic models it's impossible to do physics, as results start to precede causes. There aren't many proponents of this line of reasoning.
 
