# Non-local uncertainty - does it make sense?

1. Mar 18, 2015

### Feodor

Guys,

I have been watching and reading some introductory courses in QM for fun, and there's a thing I'd like to discuss. As you might expect, I was displeased by the seeming non-intuitiveness of QM and its interpretations, but one thing appeared to be more intuitive than people imagine. If you look at an entangled pair of, say, spin orientations, there is a way to think of it that doesn't require the "spooky action at a distance": you could think that the choice was made at the moment the pair originally interacted. It's as if you have a black box with two balls, one white and one black; the two of you each pick out a ball without looking, walk away, and then look into your hands: whenever one of you has the white ball, the other one must always have the black one. This idea was expressed by Susskind in his online lectures, but he said it was a simplification, because Bell's inequalities show that the state can't be fixed until measured. In particular, Susskind said that you can't simulate this behavior correctly with an ordinary computer if you move the simulation of one of the spins to another computer after the original interaction, without having the other computer notify the first one at the time of the "measurement". But the funny thing is that you can. In computer programming, there is really no uncertainty - it is emulated by a random-number generator: a sequence of numbers in which each element fully depends on the previous one, making the sequence nearly impossible to extrapolate, so it appears to be just a set of random numbers while in fact being completely deterministic. Any randomness in a computer is emulated by picking the next number in the sequence and projecting it onto the domain of whatever requires a random input.
So, in a computer, it is possible to synchronize two random-number generators: if the algorithms are the same, the seed is the same, and the number of iterations is the same, the two quasi-random numbers will match even on disconnected systems, so it is in fact not necessary to notify the other computer at the time of the measurement. Could the same idea apply to real life? As far as I understand, everybody thinks of uncertainty as a local process, so that the two measurements are independently uncertain, while they could be jointly uncertain. Basically, the states of the two spins may indeed be undefined until the measurement takes place, but the measurement reflects not the local uncertainty of the state of the spin, but the global uncertainty of the evolution of the universe. What I mean sounds like the many-worlds interpretation, but without the other worlds existing objectively - they are just opportunities that never materialized. This naturally raises the question of how frequently the world branches, since a random-number generator is a sequence, not a smooth function; were it a smooth function, there should be more determinism at the smaller scale. It could be a fractal, though. Or time could be discrete.
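The synchronized-generator idea is easy to demonstrate in code. Here is a minimal Python sketch (the seed value and the ±1 "spin" mapping are just illustrative choices):

```python
import random

# Two independent generator objects standing in for two disconnected
# computers. Because the algorithm (Mersenne Twister here) and the
# seed are the same, the streams stay in lockstep with no communication.
alice = random.Random(42)
bob = random.Random(42)

# Each "measurement" just draws the next number in the sequence and
# projects it onto the outcome domain, e.g. a spin result of +1 or -1.
alice_results = [1 if alice.random() < 0.5 else -1 for _ in range(10)]
bob_results = [1 if bob.random() < 0.5 else -1 for _ in range(10)]

print(alice_results == bob_results)  # True: identical outcomes, no signalling
```

No message ever passes between the two `Random` objects; the correlation is fixed at "creation time", which is exactly the classical ball-in-a-box story.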

So the questions are: (1) does this idea make any sense at all? (2) is there something that rules this idea out immediately? (3) are there any articles that develop similar ideas?

Thanks!

2. Mar 18, 2015

### Doug Huffman

E. T. Jaynes' 'Probability Theory' addresses Bernoulli's Urn at some length. IIRC, he concludes that cause and effect are constrained to the arrow of time, but logic is not.

3. Mar 18, 2015

### Staff: Mentor

Sorry, but I can't make any sense of it.

It is vaguely reminiscent of Consistent Histories:
http://quantum.phys.cmu.edu/CHS/histories.html

Thanks
Bill

4. Mar 18, 2015

### stevendaryl

Staff Emeritus
Let's get to a more concrete example, motivated by the spin-1/2 EPR experiment. Suppose we have a set-up like this:

1. Alice and Bob have identical devices.
2. Each device has a knob with 3 settings: A, B, and C.
3. Each device has two lights: a red light and a green light.
4. Each round of the experiment, Alice and Bob choose a setting and notice which light turns on.
The experiment has the following statistics:
1. No matter what Alice's setting, she gets a red light or a green light with 50% probability each.
2. No matter what Bob's setting, he also gets 50/50 red or green.
3. If Alice and Bob choose the same setting, they always get the opposite result.
4. If Alice and Bob choose different settings, they get the same result 75% of the time and the opposite result 25% of the time.
It's not obvious from these numbers, but if you tried to simulate this using a deterministic computer program, then you would have to do one of the following:
1. Make Bob's result depend on Alice's setting.
2. Make Alice's result depend on Bob's setting.
3. Make Bob's setting dependent on the program.
4. Make Alice's setting dependent on the program.
1&2 involve faster-than-light interactions. 3&4 don't, but they are kind of weird. If I hand a device to Alice and say: pick any setting, A, B, or C, then it would be weird if her choice were influenced by the device itself. The normal assumption of these kinds of experiments is that the setting is a "free variable". Of course, it's not REALLY free, since Alice is making her choice through the actions of her brain, which presumably is governed by physical laws that are in principle predictable. So it's theoretically possible that Alice's setting is predictable. But since Alice can base her choice on anything at all--a random-number generator, the lyrics of a song on the radio, the weather, etc.--to be able to reliably predict Alice's setting would seem to require either precognition or a detailed knowledge of the entire universe (or that portion of the universe close enough to affect Alice).
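The claim above can be checked by brute force. In a local deterministic model, each device must carry a predetermined answer for each of the three settings, and statistic 3 (perfect anti-correlation at equal settings) forces Bob's answer table to be the complement of Alice's. A sketch enumerating all eight possible tables (the "R"/"G" encoding for the two lights is my own illustrative choice):

```python
from itertools import product
from fractions import Fraction

SETTINGS = "ABC"

best = Fraction(0)
for table in product(["R", "G"], repeat=3):
    # Alice's predetermined answers for settings A, B, C.
    alice = dict(zip(SETTINGS, table))
    # Statistic 3 (always opposite at equal settings) forces Bob's
    # table to be the exact complement of Alice's.
    bob = {s: ("G" if c == "R" else "R") for s, c in alice.items()}
    # Fraction of unequal-setting pairs that yield the SAME colour.
    pairs = [(a, b) for a in SETTINGS for b in SETTINGS if a != b]
    same = sum(alice[a] == bob[b] for a, b in pairs)
    best = max(best, Fraction(same, len(pairs)))

print(best)  # 2/3
```

The best any predetermined table achieves for unequal settings is 2/3 agreement, short of the 3/4 the experiment shows - which is exactly the gap Bell's theorem formalizes.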

5. Mar 18, 2015

### Jimster41

If I'm reading your post right, I think you capture the picture I took away from Nicolas Gisin's book https://www.amazon.com/Quantum-Chance-Nonlocality-Teleportation-Marvels/dp/3319054724, and I think stevendaryl is saying this too when he says options 3 and 4 are "weird".

It definitely feels pretty weird to me to take two spatio-temporally distinct particles and then say their random outcomes must be identical because they are generated by the same random number generator in "the computer" of the universe. That's exactly what is so mind-blowing about entanglement... at least in the latest iteration of my struggle to understand and remember it.

Honestly I gotta say I kind of like this metaphor: one random number generator, the universe, connecting spatio-temporally distinct points... It's easy to remember. I think it's accurate (for a metaphor), and it captures the bizarre implication - where's the computer, where is the rand() function located? Since things in this universe (per GR, apparently) can't connect distinct points in space at speeds faster than light, and the rand() algorithm does connect the two numbers, the implication is that this simultaneously present computer algorithm generating the identical random numbers in both places... can't be in here.

I hope in my unqualified contribution I'm not just screwing this up for you... My understanding is that it is a fundamental puzzle, and I'm always trying to improve on my cartoon of it.

Last edited: Mar 18, 2015
6. Mar 18, 2015

### lukesfn

Actually, the funny thing is that you can't. Perhaps for Susskind's oversimplified example you can, but a deterministic pseudo-random number generator can't reproduce Bell's inequalities by local operations on two computers in every possible case - only in special cases, such as the example, or in a limited number of cases at a time.

You might need to take a little more time to carefully understand Bell's inequalities and why they can't be reproduced this way.

I am a foolish coder who likes to take on apparently impossible problems. Sometimes I win, but I was surprised by how many creative tactics I could come up with to attempt to reproduce Bell's inequalities, and how they would always find some way to fail. I guess it isn't that surprising, because the proofs of Bell's inequalities are not all that complicated.

I once tried a deterministic Brownian-motion simulator. A deterministic random-number approach usually leads to a linear relationship between the angular settings and the probability of correlation, while Bell's inequalities require a cosine relationship, where the cosine correlation is higher than the linear correlation. Brownian motion disperses with distance squared proportional to time, which sounded close to the cosine relationship, so it seemed promising, but in the end the time factor proved irrelevant, and the correlation turned out to depend only linearly on distance.
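The linear-versus-cosine gap described above can be made concrete with a toy model. The sketch below uses a standard textbook-style hidden-variable model (a shared random angle with sign-of-cosine outputs - my own illustrative choice of model, not necessarily the one tried above) and compares its opposite-outcome probability with the quantum cos²(θ/2) prediction:

```python
import math
import random

def lhv_opposite_prob(theta, trials=100_000, rng=random.Random(0)):
    # Hidden-variable model: each pair carries a shared random angle lam.
    # Alice outputs the sign of cos(a - lam); Bob outputs the opposite
    # sign of cos(b - lam), which gives perfect anti-correlation when
    # the settings are equal. Alice's setting is fixed at 0 here.
    opposite = 0
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)
        a_out = math.copysign(1, math.cos(0 - lam))
        b_out = -math.copysign(1, math.cos(theta - lam))
        opposite += a_out != b_out
    return opposite / trials

def quantum_opposite_prob(theta):
    # Singlet-state prediction: opposite results with prob cos^2(theta/2).
    return math.cos(theta / 2) ** 2

for deg in (0, 60, 120, 180):
    theta = math.radians(deg)
    print(f"{deg:3d} deg: LHV~{lhv_opposite_prob(theta):.3f}  "
          f"QM={quantum_opposite_prob(theta):.3f}  "
          f"linear={1 - theta / math.pi:.3f}")
```

The hidden-variable model lands on the linear law 1 - θ/π: at a 60° separation it gives 2/3 while quantum mechanics predicts 3/4, the same 2/3-versus-3/4 gap as in stevendaryl's three-setting example earlier in the thread.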

I did once, subsequently, come across a paper where somebody else used a time-squared dispersal method to closely match Bell's inequalities, but that required use of the fair-sampling loophole, which I believe has been closed in at least one experiment.

7. Mar 18, 2015

### Khashishi

The experimenter is part of the system, so the experimenter's choices are entangled with the results. The results that you see will agree with the experimenter's choices. You could say the results depend on the choice, or you could say the choice depends on the results - there isn't really a meaningful distinction. Basically, what I'm saying is that if you allow faster-than-light effects (which I do), cause and effect are indistinguishable, and stevendaryl's options 1, 2, 3, and 4 are all essentially the same thing.

8. Mar 18, 2015

### Staff: Mentor

That's wrong.

Even if the experiment were done by a computer and the results recorded to computer memory, the results would be the same.

The observational apparatus is part of the system, and what that apparatus is obviously makes a difference - but it's got nothing to do with the experimenter being part of the system - they aren't.

Thanks
Bill

Last edited: Mar 18, 2015