# 't Hooft Cellular Automata Quantum Mechanics

1. Jun 22, 2014

### twistor

2. Jun 22, 2014

### atyy

Something I have long wanted to know: is his model completely discrete? There is an argument that a hidden-variables model cannot be discrete if the dynamics is Markovian: http://arxiv.org/abs/0711.4770

4. Jul 7, 2014

### stevendaryl

Staff Emeritus
't Hooft says something intriguing about hidden variables and Bell's theorem, and I haven't yet figured out whether it is profound or not.

I'm stripping out the cellular-automata details and getting to what I think is the heart of 't Hooft's explanation of how his models evade Bell's theorem.

If you're trying to mimic the predictions of quantum mechanics for the EPR experiment using classical means, you need to have a setup for a "game" of the following type:

There are three "players", call them Alice, Bob and Charlie. Play consists of a number of rounds, and each round has a number of turns:

1. Turn 1: Charlie creates two messages and puts them in sealed envelopes. Call them $M_A$ and $M_B$. Meanwhile, Alice chooses a direction $\vec{a}$ and Bob chooses a direction $\vec{b}$. Nobody gets to see what anyone else is doing during this turn. (So that we can get good statistics, let's assume that $\vec{a}$ and $\vec{b}$ are chosen from finite sets of possibilities.)
2. Turn 2: Charlie sends $M_A$ to Alice and $M_B$ to Bob.
3. Turn 3: Alice uses some fixed algorithm $F_A(\vec{a}, M_A)$ to compute a result $R_A$ which is either $+1$ or $-1$. Bob uses another fixed algorithm $F_B(\vec{b}, M_B)$ to compute $R_B$ (again either $+1$ or $-1$).
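
The three turns above can be sketched directly in code. This is a minimal illustration, assuming the simplest possible local hidden-variable model: Charlie seals the same random angle $\lambda$ into both envelopes, and Alice and Bob apply fixed sign rules to it. All function names here are mine, chosen to mirror the notation of the post, not anything from 't Hooft's papers.

```python
import math
import random

def charlie():
    """Turn 1: create the two messages M_A and M_B.
    In this toy strategy both envelopes hold the same random angle."""
    lam = random.uniform(0.0, 2.0 * math.pi)
    return lam, lam  # M_A, M_B

def F_A(a, M_A):
    """Turn 3: Alice's fixed algorithm F_A(a, M_A).
    Output +1 iff her setting is within 90 degrees of the hidden angle."""
    return 1 if math.cos(a - M_A) >= 0 else -1

def F_B(b, M_B):
    """Bob's fixed algorithm F_B(b, M_B): the same rule on his side."""
    return 1 if math.cos(b - M_B) >= 0 else -1

def play_round(a, b):
    """One full round: Charlie fills the envelopes (turn 1), sends them
    (turn 2), and both parties compute their results (turn 3)."""
    M_A, M_B = charlie()
    return F_A(a, M_A), F_B(b, M_B)

# With identical settings, this particular strategy always gives
# identical results, since both sides apply the same rule to the same angle.
R_A, R_B = play_round(0.0, 0.0)
```

Any other local strategy fits the same skeleton: only `charlie`, `F_A`, and `F_B` change; the round structure is fixed by the rules of the game.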

After playing many, many rounds, for different choices of $M_A$, $M_B$, $\vec{a}$ and $\vec{b}$, we compute probability functions:

$P_1(\vec{a}) =$ the fraction of the rounds in which Alice chooses $\vec{a}$ for which $R_A = +1$.

$P_2(\vec{b}) =$ the fraction of the rounds in which Bob chooses $\vec{b}$ for which $R_B = +1$.

$P_3(\vec{a}, \vec{b}) =$ the fraction of the rounds in which Alice chooses $\vec{a}$ and Bob chooses $\vec{b}$ for which $R_A = R_B = +1$.

The challenge for such a classical explanation of EPR is to make the predictions match those of quantum mechanics:

$P_1(\vec{a}) = P_2(\vec{b}) = 1/2$
$P_3(\vec{a}, \vec{b}) = \frac{1}{2}\cos^2(\theta/2)$, where $\theta$ is the angle between $\vec{a}$ and $\vec{b}$.
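
As a sanity check, one can estimate $P_3$ for a concrete classical strategy and compare it with the quantum target. A minimal sketch, where the shared-angle strategy (same random angle in both envelopes, each side outputs $+1$ iff its setting lies within 90 degrees of it) is an arbitrary illustration of a local model, not a claim about any specific theory:

```python
import math
import random

def qm_p3(theta):
    """The quantum-mechanical target: P3 = (1/2) cos^2(theta/2)."""
    return 0.5 * math.cos(theta / 2.0) ** 2

def local_p3(theta, rounds=200_000):
    """Monte-Carlo estimate of P3 for the shared-angle local strategy.
    theta is the angle between Alice's and Bob's settings."""
    a, b = 0.0, theta
    both_plus = 0
    for _ in range(rounds):
        lam = random.uniform(0.0, 2.0 * math.pi)
        r_a = math.cos(a - lam) >= 0   # Alice outputs +1?
        r_b = math.cos(b - lam) >= 0   # Bob outputs +1?
        both_plus += r_a and r_b
    return both_plus / rounds

theta = math.pi / 3
# For this strategy the two +1 arcs (each of length pi) overlap in an arc
# of length pi - theta, so P3 tends to (pi - theta)/(2*pi) ~ 0.333,
# falling short of the quantum value 0.375.
print(local_p3(theta), qm_p3(theta))
```

The gap at intermediate angles is exactly the sort of discrepancy Bell's theorem says no strategy of this form (with $\vec{a}$, $\vec{b}$, $M_A$, $M_B$ chosen independently) can close.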

The loophole that 't Hooft suggests is that the choices made by the players, $\vec{a}, \vec{b}, M_A, M_B$, are NOT independent. They are correlated.

They certainly can be correlated. Even though people may think that they are "freely choosing" something, in fact, they could be basing their choice on some fact about the world. Even though they may think of themselves as making unpredictable choices, there's actually a deterministic algorithm at work. Since the three players all (presumably) share a common past history (they had to have gotten together at some point in the past in order to agree to play the game), it's certain that the states of their brains are correlated by past interactions. It's then possible that this correlation would give rise to a correlation in their seemingly random choices.

't Hooft's idea is certainly a logical possibility, but it seems wildly implausible to me. Even if the players' choices are determined by their pasts, the exact algorithm for making a choice may be incredibly complex. Maybe Alice is using the digits of pi. Maybe Bob is basing his choice on the latest soccer scores from the World Cup. I can see how correlated initial states of the players might cause correlations in the outcomes, so there is no necessary reason for Bell's inequality to hold. But the weird thing is that the statistical predictions of QM for EPR are completely insensitive to the details of how the choices are made. It's hard for me to see how it is possible to make a "superdeterministic" theory that explains EPR in any way short of Charlie precisely simulating the decision-making processes of Alice and Bob, and then making his choice with their eventual choices for $\vec{a}$ and $\vec{b}$ in mind.

5. Jul 8, 2014

### DrChinese

I agree with you: incredibly implausible. Given the experiments of Weihs et al, superdeterminism would require that even independent random number generators be in collusion.

And more specifically, how is this superdeterministic explanation expected to apply to Bell tests but not other experiments? In other words: if the "true" value is X but we measure Y because of superdeterminism... why does that only apply in this one narrow area? Perhaps c is really a different value (such as 1.2c), but superdeterminism makes us think it is c. And so on.

6. Jun 6, 2015

Of course they would be, in any deterministic interpretation of QM, i.e. one where RNGs aren't actually random. So what?

No more complex than the idea that each quantum event generates a set of parallel worlds; it's just working backwards instead of forwards. That's probably why it's mind-bending, since everything is necessarily correlated with everything else right back to the Big Bang.

I'm kind of thinking of it like a decision tree, where each interaction that particle X undergoes adds a constraint leaf to that tree which must be satisfied by all future interactions. This tree has certain invariants that are maintained no matter which path we follow, and these yield the equations with which we are familiar.

It's definitely a novel way of looking at things, though it's unclear if it will provide new insights. It's also the first actual superdeterministic theory I'm aware of, so it was worth the effort for that reason alone.

7. Jun 6, 2015

### Jimster41

I've only looked at the first one...

"highly entangled vacuum" states all the way back (and down, and forward, and up) that only become an ontological structure, with time? (Seems a stretch to call it "classical")

What was it the oracle said to Neo there in the kitchen, with the cookies, "We can never see past the choices we don't understand."

The connection to holography seems really tangible. Information on the boundary encodes what happens in the bulk. So the boundary has everything except the sequence? Once we tell it that (by making some choice, any choice) its algorithm spits out a structure we experience as past and present?

Gets even more mind-bendy (for me) to associate expansion with the negative curvature of AdS, and gravitational clumping with the geometry of events and things. In that the results of events under expansion represent the "shape" of contact with the boundary: the evolution of ontology, or maybe the ontology of evolution.

Last edited: Jun 6, 2015
8. Jun 6, 2015

### Jimster41

That paper is way way...way

But I kind of got the idea that the "highly entangled vacuum" and superdeterminism (whatever entanglement-compatibility algorithm the CAs follow) suggest at least that the "classical" process (in the bulk) isn't Markovian? That the boundary entanglement of un-emerged states involves "future" and "past" in the bulk.

Also, does the problem proved in that paper bear on the information-loss problem, that black hole entropy scales with area even when you throw stuff into it from some huge volume? Seems related.

9. Jun 25, 2015

### Berlin

't Hooft is writing about CA and, after that, about conformal symmetry for space-time. It seems logical that he is currently working on discrete space-time versions of conformal symmetry (DCS). When you look at DCS you stumble upon Ising models. Does anyone have a good reference for discrete conformal symmetry? Could 't Hooft be looking at an Ising model for his CA? This reminded me of a paper by Wetterich (Nov 2011) where he formulated a connection between a classical Ising model and QM fermions. Could space-time itself be a lattice of Ising spins?

Berlin
NB: I would love to see a Peierls phase transition of space-time as a basis for the CKM matrix, but that is surely crackpot (for now)!

10. Jun 25, 2015

### Jimster41

"Lattice of Ising spins": is that related to "quantum spin foam" formally, actually, or just in similarity of stage name?