B Is Randomness in Quantum Mechanics Truly Non-Deterministic?

  • B
  • Thread starter: Aufbauwerk 2045
  • Tags: QM, Randomness
  • #51
stevendaryl said:
Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.

Or if it were possible to determine ##\lambda## without disturbing the photon (or its partner), then you could communicate FTL.
 
  • #52
Nugatory said:
@Aufbauwerk 2045's original question may be more relevant to the cryptographers than the physicists; it is a very big deal if the bad guys can figure out the PRNG you're using for key generation.
I'd like to chime in on this... :smile:
Aufbauwerk 2045 said:
I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
I like this question! :smile:

Without venturing into quantum mechanics, I think the two can be distinguished if the mathematical/statistical capabilities of the machine analyzing the data from the two processes are good enough.

How could the analysis be done? By using information theory.

Let's say we have a sufficiently long message, preferably very long, for statistical reasons.
And let's say we encode this message two times, using two different random generators.
The first encoding is made by modifying the message using a "true" random quantum process.
The second encoding is made by modifying the message using a pseudorandom1 process.

Then we can perform analysis of the so-called information entropy2 of the two encodings with respect to the original message.
If our hypothesis is that the quantum mechanical process is truly random and the pseudorandom generator "less random", this should show up in the values of the information entropies of the two encoded messages. The QM message entropy should be at the maximum, and the pseudorandom message entropy should be less than the QM value.
(see e.g. Entropy as information content)

Edit:
On a second thought I may have been a little too quick here, it was a long time ago since I used information theory. Maybe we could use mutual information as well... I have to think about it... :smile:

Edit 2:
Aufbauwerk 2045 said:
how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
I just remembered a thing from my time studying cryptography... a pseudo-random process can be identified by analyzing sufficiently long output sequences of the process. A pseudo-random process will, at one point or another, repeat itself, that is, start over.
So the quality of the pseudo-random process, the "randomness", if you like, can be judged by how long it takes for the process to start repeating itself.
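To make this concrete, here is a minimal sketch (Python; my own illustration, with an arbitrary toy sequence) of an output-only check for exact periodicity, i.e. for the sequence "starting over":

```python
def smallest_period(bits):
    """Smallest p such that bits[i] == bits[i + p] for every valid i,
    i.e. the whole sequence is one repeating block of length p; None otherwise."""
    n = len(bits)
    for p in range(1, n // 2 + 1):
        if all(bits[i] == bits[i + p] for i in range(n - p)):
            return p
    return None

# A sequence that "starts over" every 4 symbols is caught immediately:
print(smallest_period([0, 1, 1, 0] * 25))  # -> 4
```

The practical weakness, of course, is that a generator with a very long period looks aperiodic at any feasible sample length.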

Edit 3: Footnotes:

1 By pseudorandom I mean a sequence generated by a digital machine only, like a computer.
2 This is not physical entropy, it is a purely information theoretical concept.
 
Last edited:
  • #53
stevendaryl said:
Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.

Could you describe the communication method explicitly? (I'm losing track of whether we are or are not assuming instantaneous communication between entangled electrons and whether we are using "pseudo-random" as a synonym for "predictable".)
 
  • #54
Stephen Tashi said:
Could you describe the communication method explicitly? (I'm losing track of whether we are or are not assuming instantaneous communication between entangled electrons and whether we are using "pseudo-random" as a synonym for "predictable".)

The idea for a model that matches the statistics of EPR:
  1. The source of twin pairs randomly picks a polarization ##\lambda##. It can be any number between 0 (Vertical polarization in the x-y plane) and ##\pi/2## (Horizontal polarization). Two photons with these polarizations are created, one going to Alice and the other going to Bob.
  2. When Alice (or whoever measures the polarization first) measures her photon's polarization, she picks an orientation ##\alpha## for her filter. The way her filter works is: If the photon's polarization is aligned with the filter, the photon passes through. If its polarization is at right angles to that, it definitely does not pass through. For intermediate cases, it passes through with probability ##\cos^2(\lambda - \alpha)## (Malus' law).
  3. Immediately after Alice's measurement, Bob's photon's polarization switches to be ##\alpha## (if Alice's photon passed through her filter) or perpendicular to ##\alpha## (otherwise).
  4. Bob chooses an orientation ##\beta## and his photon passes or not following the same rules as Alice's photon in 2, except with the polarization determined in 3.
For clarification, step 3 is FTL. So this is a nonlocal model, not a violation of Bell's theorem.

The issue is whether the random choices made in steps 1, 2 and 4 could be pseudorandom (predictable) without allowing FTL communication.

What I said earlier was that I thought that if step 1 was truly random/unpredictable, then FTL communication would be impossible, even if steps 2 and 4 were predictable. Now I'm not sure about that. I don't immediately see a strategy for Alice to communicate with Bob.
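For concreteness, here is a minimal simulation sketch of the four steps (Python; the angles, sample size and use of Python's default PRNG are arbitrary choices for illustration). It confirms that the coincidence rate comes out as ##\cos^2(\alpha - \beta)## regardless of how ##\lambda## is distributed in step 1:

```python
import math
import random

def run_pair(alpha, beta):
    lam = random.uniform(0, math.pi / 2)                      # step 1: hidden polarization
    a_pass = random.random() < math.cos(lam - alpha) ** 2     # step 2: Malus' law at Alice
    lam_bob = alpha if a_pass else alpha + math.pi / 2        # step 3: the FTL update
    b_pass = random.random() < math.cos(lam_bob - beta) ** 2  # step 4: Malus' law at Bob
    return a_pass, b_pass

alpha, beta, n = 0.0, math.pi / 6, 200_000
same = sum(a == b for a, b in (run_pair(alpha, beta) for _ in range(n)))
print(same / n, math.cos(alpha - beta) ** 2)  # simulated vs predicted coincidence rate
```

Seeding the generator (so that every step is pseudorandom) reproduces exactly the same statistics, which is why it is not obvious whether predictability alone opens a signaling channel.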
 
  • Like
Likes Mentz114
  • #55
stevendaryl said:
For clarification, step 3 is FTL. So this is a nonlocal model, not a violation of Bell's theorem.
I don't know how these EPR posts are relevant to the OP, but in your example there is only one λ and there is no way to describe what FTL/immediate means. Non-locality implies there is a unique value for λ at all times for every possible observer.

stevendaryl said:
I don't immediately see a strategy for Alice to communicate with Bob.
You need at least a triplet of particles sharing λ (which is physically impossible). Alice needs to have two particles on her side, and arrange for Bob to measure between her two measurements.
 
  • #56
Boing3000 said:
I don't know how these EPR posts are relevant to the OP

It's what prompted splitting this thread off from the original.

But in your example there is only one λ and there is no way to describe what FTL/immediate means. Non-locality implies there is a unique value for λ at all times for every possible observer.

I don't know what you mean. Alice's measurement instantaneously changes Bob's photon state, far away. That's FTL.

It's not FTL communication, though. For Alice to communicate FTL, there would have to be two choices that Alice can make on her end that affect the statistics seen by Bob.
 
  • #57
stevendaryl said:
I don't know what you mean. Alice's measurement instantaneously changes Bob's photon state, far away. That's FTL.
I mean there is only one state for the two photons. That's not FTL in the sense that there is no speed of change involved.

stevendaryl said:
It's not FTL communication, though. For Alice to communicate FTL, there would have to be two choices that Alice can make on her end that affect the statistics seen by Bob.
Precisely, if she had two photons on her side she could make two choices of polarization angle.
 
  • #58
stevendaryl said:
The issue is whether the random choices made in steps 1, 2 and 4 could be pseudorandom (predictable) without allowing FTL communication.
On the one hand, the idea that data generated by a pseudorandom process would be predictable is interesting. But if we use "predictable" as a synonym for "pseudorandom" then, in the context of communication, we have to answer the question "Predictable by whom?".

The way I think of predicting a pseudorandom process is that I'd know some history of its outputs and then be able to predict the next output. That's no problem if I play the role of an omniscient observer. But does saying step 1 is a predictable process imply that Alice in step 2 can predict the value of ##\lambda## generated by step 1?
 
  • #59
Boing3000 said:
I mean there is only one state for the two photons. That's not FTL in the sense that there is no speed of change involved.

Sorry, I still don't know what you mean. Something happens to Alice's photon. It's either transmitted or absorbed. Then some time later, something happens to Bob's photon---its polarization changes to be either aligned with Alice's filter orientation, or perpendicular to it. If the change in Bob's photon takes place instantaneously, then that's an FTL interaction.
 
  • #60
Stephen Tashi said:
On the one hand, the idea that data generated by a pseudorandom process would be predictable is interesting. But if we use "predictable" as a synonym for "pseudorandom" then, in the context of communication, we have to answer the question "Predictable by whom?".

I'm talking about predictable by Alice and Bob.
 
  • #61
stevendaryl said:
Then some time later, something happens to Bob's photon---
You keep writing "some time later", as if it were possible for observers to agree on whose time (frame of reference) it is. The simpler situation is to consider that, as far as entangled values are concerned, there is one photon. So whether Bob's photon has already gone through Alice's filter or not depends only on the path length.

stevendaryl said:
If the change in Bob's photon takes place instantaneously, then that's an FTL interaction.
FTL implies an "action at a distance" between two distinct space-time events. This also implies breaking Lorentz covariance, and you'll have to propose some way to compute this FTL speed (because infinite is not a number).
"Instantaneous" is different from FTL in the sense that those are the same event; they just happen to be non-local, that is, spanning space. That way all observers can agree on what instantaneous means.
 
  • #62
Boing3000 said:
the same event; they just happen to be non-local, that is, spanning space

This is a contradiction. An event is a single point in spacetime. It can't be non-local.
 
  • Like
Likes bhobba
  • #63
Boing3000 said:
You keep writing "some time later", as if it were possible for observers to agree on whose time (frame of reference) it is.

I'm assuming that from the point of view of Alice's frame of reference, the change in Bob's photon happens instantaneously (or quicker than light could travel from Alice to Bob).

The simpler situation is to consider that, as far as entangled values are concerned, there is one photon. So whether Bob's photon has already gone through Alice's filter or not depends only on the path length.

I'm not talking about entanglement. I'm talking about a hidden-variables explanation for EPR statistics.

FTL implies an "action at a distance" between two distinct space-time events. This also implies breaking Lorentz covariance

Yes, that's what I'm talking about.
 
  • #64
DennisN said:
How could the analysis be done? By using information theory.
DennisN said:
a pseudo-random process can be identified by analyzing sufficiently long output sequences of the process.

This was fun to think about, so I've thought some more, and this is how I think it could be done:

Setup

We have two processes which will function as number generators, producing long sequences of numbers on which we will later perform statistical and information-theoretic analyses. The first sequence is produced by a quantum mechanical process; the hypothesis Q is that this process is purely random due to quantum mechanics, so I call this sequence R. The second sequence is produced by a digital computer and is thus a pseudorandom process; I call this sequence P.

For simplicity I choose base 2 (binary) for the two sequences, which means we don't have to convert back and forth between bases when we talk about them.

R can be produced by any quantum mechanical process that has two outcomes with equal probability 0.5.
As an example, assume we have a Stern–Gerlach apparatus which detects the spin of an atom in the vertical direction. If spin down is detected, this generates a 0 in the R sequence, and if spin up is detected, this instead generates a 1 in the R sequence. When n atoms have been analyzed we have an R sequence of length n.

P is produced by a reasonably good pseudorandom generator which outputs 0 or 1 with equal probability 0.5, and we generate n bits in the P sequence.

Analysis

1. Repetition detection.

An algorithm which detects repetitive sequences of length r << n analyzes the two sequences R and P. If our hypothesis Q is correct, the algorithm will eventually detect a repetition of length r in P and never detect one in R.
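One naive implementation sketch (Python; my own illustration). A caveat worth keeping in mind: by the pigeonhole principle, any binary sequence longer than ##2^r + r - 1## must contain some repeated window of length r, truly random or not, so r has to be large compared with ##\log_2 n## before a detected repeat is evidence of anything:

```python
def first_repeated_window(bits, r):
    """Positions (i, j) of the first window of length r that occurs twice, else None."""
    seen = {}
    for i in range(len(bits) - r + 1):
        window = tuple(bits[i:i + r])
        if window in seen:
            return seen[window], i
        seen[window] = i
    return None

print(first_repeated_window([0, 1, 0, 0, 1, 0, 1, 1], 3))  # -> (0, 3): "010" repeats
```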

2. Information entropy

The maximum information entropy H of a random (or pseudorandom) variable of base 2 is 1, so

0 < H(R) ≤ 1

and

0 < H(P) ≤ 1

(see e.g. Information entropy - example)

Furthermore, if our hypothesis Q is correct, the information entropy of sequence P will be smaller than the information entropy of sequence R, so

H(P) < H(R)

and together we get

0 < H(P) < H(R) ≤ 1

Edit:

Also, if our hypothesis Q is correct, the information entropy of a sufficiently long sequence R should be very close to 1. More accurately, H(R) should approach 1 when longer and longer sequences are analyzed.
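A sketch of how the entropy estimate could be implemented (Python). Here the standard-library `secrets` module (OS entropy) stands in for the quantum source R, and a seeded Mersenne Twister for P; both stand-ins are assumptions for illustration only, since a real test of hypothesis Q would need an actual quantum device producing R:

```python
import math
import random
import secrets
from collections import Counter

def block_entropy(bits, k):
    """Empirical Shannon entropy per bit, estimated from k-bit block frequencies."""
    blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum(c / total * math.log2(c / total) for c in counts.values()) / k

n, k = 400_000, 8
R = [secrets.randbits(1) for _ in range(n)]      # stand-in for the "quantum" sequence
prng = random.Random(12345)
P = [prng.getrandbits(1) for _ in range(n)]      # the pseudorandom sequence
print(block_entropy(R, k), block_entropy(P, k))  # both come out very close to 1
```

Note that a decent PRNG also scores essentially 1 on such an estimator, so in practice the strict inequality H(P) < H(R) is not something this simple test can resolve.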
 
Last edited:
  • #65
PeterDonis said:
This is a contradiction. An event is a single point in spacetime. It can't be non-local.
There is no contradiction. A local event is different from a non-local event. All quantities are perfectly defined and Lorentz covariant.
Furthermore, that simple logic is vindicated by experiment and does not need fuzzy/bizarre logic.
On the other hand, FTL and the need to connect different events does need new (unnecessary in Occam's sense) physics.

Don't you agree that non-local is by definition something that happens once (thus an event) but across a wide range of places?
 
  • #66
Boing3000 said:
There is no contradiction. A local event is different from a non-local event. All quantities are perfectly defined and Lorentz covariant.
Not a contradiction, it just makes no sense.
Boing3000 said:
Don't you agree that non-local is by definition something that happens once (thus an event) but across a wide range of places
This doesn't make sense either.
 
  • #67
Boing3000 said:
Don't you agree that non-local is by definition something that happens once (thus an event) but across a wide range of places?

It's not the way I would explain it.

In QM it's the cluster decomposition property:
https://www.physicsforums.com/threads/cluster-decomposition-in-qft.547574/

But what has this got to do with randomness - that's another issue.

I gave a subtle view before; I now think that may not have been the best approach in a B-level thread. We have had discussions on this before and they tended to just meander, and were eventually shut down by the mentors. But the outcome is really the same - we do not know. It passes all the current tests we have for randomness. But deterministic processes exist that pass all those tests, and other more subtle views (e.g. BM) also explain it. So the answer is again: we do not know. If you wish to discuss EPR, locality etc., it should be in another thread. Now I do not like wielding a big stick, but I would like contributors to consider whether a different thread is more appropriate for discussing such things.

One thing non-mentors may not know is that mentors generally do not take action without a discussion among the other mentors; however, a thread that goes off track is something that mentors generally agree is at least best split.

Thanks
Bill
 
Last edited:
  • #68
I am still thinking about this because it's fun, and I want to say I wrote my two previous posts before reading the entire thread in detail. I've seen that others have previously been thinking along the same lines as me: @Boing3000 (posts 5 and 32), @Stephen Tashi (posts 8 and 10) and of course @Nugatory in post 26. I'm sorry if I missed mentioning anyone.

Even though I think Bell tests and entanglement experiments are very interesting, I've bypassed commenting on them since as far as I know they are designed specifically to test locality and realism, and not the inherent randomness in quantum mechanical processes.
Let me be more specific what I mean:

In Bell tests, streams of photon pairs are analyzed two by two, and when detected the photons are destroyed.
This means Bell tests are not measuring any random variable of one system/pair of particles only; they are measuring the statistics of many pairs of particles, where one pair of particles is not dependent upon any other pair. I would love to be corrected if I am wrong here. :smile:

And this means that my initial setup:
DennisN said:
As an example, assume we have a Stern–Gerlach apparatus which detects the spin of an atom in the vertical direction. If spin down is detected, this generates a 0 in the R sequence, and if spin up is detected, this instead generates a 1 in the R sequence. When n atoms have been analyzed we have an R sequence of length n.

is also flawed, since it involves measuring the spins of different atoms. But the spins of different atoms should ideally be completely independent of each other, so this will not be a good randomness test.

To do the randomness test, that is, test the hypothesis that a quantum mechanical process is random, we need to make repeated measurements on the same particle/system. The generated sequence would then be the variation of a single random variable, which could then be statistically and information-theoretically analyzed.

One suggestion could be repeated measurements of the spin of one particle in various directions:
  1. Measure the spin in the vertical direction. Spin down will generate a 0 in the R sequence and spin up will generate a 1 in the R sequence. This will also leave the horizontal spin undetermined.
  2. Measure the spin in the horizontal direction. Spin left will generate a 0 in the R sequence and spin right will generate a 1 in the R sequence. This will also leave the vertical spin undetermined.
  3. Go to 1 and repeat this many times.
  4. Then do the analyses as I described in my previous post.

Maybe there's a better way to experimentally do what I am thinking of, and if so, I would love to hear it. :smile:
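As a toy illustration of steps 1-3 (a sketch simulating an idealized two-state system rather than any real apparatus), alternating measurements between two mutually unbiased bases gives a fresh 50/50 outcome at every step after the first:

```python
import math
import random

UP_Z, DOWN_Z = (1.0, 0.0), (0.0, 1.0)            # vertical ("z") basis
UP_X = (1 / math.sqrt(2), 1 / math.sqrt(2))      # horizontal ("x") basis
DOWN_X = (1 / math.sqrt(2), -1 / math.sqrt(2))

def measure(state, basis):
    """Born rule: collapse onto one of two orthonormal (real) basis states."""
    up, down = basis
    p_up = (state[0] * up[0] + state[1] * up[1]) ** 2
    return (1, up) if random.random() < p_up else (0, down)

state, bits = UP_Z, []
for i in range(16):
    basis = (UP_Z, DOWN_Z) if i % 2 == 0 else (UP_X, DOWN_X)
    bit, state = measure(state, basis)
    bits.append(bit)
print(bits)  # after the first bit, each outcome is 50/50, independent of the last
```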
 
Last edited:
  • #69
bhobba said:
But the outcome is really the same - we do not know. It passes all the current tests we have for randomness. But deterministic processes exist that pass all those tests, and other more subtle views (eg BM) also explain it.

A basic problem with discussing how to detect the non-randomness of a pseudorandom process is that "pseudorandom" does not define a particular type of process. For example, if we assume a source of pseudorandom numbers is the output of a linear congruential random number algorithm, then "pseudorandom" is something specific. By contrast, if the meaning of "pseudorandom process" only says it is a process whose outputs obey all the statistical properties of the outputs of a random process, then we have defined the possibility of distinguishing it by statistics out of existence.
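To make the "something specific" concrete, here is a textbook linear congruential generator (a sketch; the multiplier and increment are the well-known Numerical Recipes constants, and the seed is arbitrary). Anyone who knows the parameters and sees a single output can reproduce every subsequent output exactly:

```python
A, C, M = 1664525, 1013904223, 2**32    # Numerical Recipes LCG parameters

def lcg_next(x):
    """One step of the recurrence x -> (A*x + C) mod M."""
    return (A * x + C) % M

outputs, x = [], 42                     # 42: an arbitrary seed
for _ in range(5):
    x = lcg_next(x)
    outputs.append(x)

# An observer who sees outputs[0] reproduces the rest deterministically:
y, predicted = outputs[0], []
for _ in range(4):
    y = lcg_next(y)
    predicted.append(y)
print(predicted == outputs[1:])         # True
```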

If @bhobba's above remark, "deterministic processes exist that pass all those tests", is taken as the definition of a pseudorandom process, then his conclusion, "But the outcome is really the same - we do not know", is correct, as far as statistical analysis goes.

So far in this thread, attempts to make progress involve attributing specific properties to a pseudorandom process.

1. Assume a pseudorandom process is predictable. The exact nature of this predictability has not been defined. Presumably, the general idea is that we reject the assumption that pseudorandom process will pass all statistical tests for randomness - or perhaps some are using the strong assumption that the output of a pseudorandom process can be exactly predicted by someone who knows its history - or perhaps the definition of a pseudorandom process is that there exists an observer of the process who knows what its output will be, even if this knowledge is not shared with the rest of the world.

2. An embellishment of the above idea is considering thought experiments where random and pseudorandom processes are hooked up together. We try to conclude that the results imply specific physical consequences (e.g. faster-than-light communication). The paper linked by @atyy (Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265) has a title suggesting it accomplishes this. (I don't see how - can someone summarize?) The discussions in this thread of Bell-type experiments seem to have similar goals.

3. We can consider the physics of implementing deterministic versus random processes ( as hinted in post #32). A completely naive question is "Which takes less energy to implement, a random process or a deterministic pseudorandom process that produces outputs in the same format?". If we were to straighten out someone who asked this question, what would we say?

The notion of a physically implemented pseudorandom process suggests the existence of hidden variables that describe its state. Discussions of hidden variables tend to focus on whether they exist, but there is also a question of where we find the machinery that generates or responds to them - and how much energy it takes to run the machinery.
 
Last edited:
  • Like
Likes DennisN
  • #70
DennisN said:
[]
Even though I think Bell tests and entanglement experiments are very interesting, I've bypassed commenting on them since as far as I know they are designed specifically to test locality and realism, and not the inherent randomness in quantum mechanical processes.
Let me be more specific what I mean:

I have to agree with that. Some people find the following fact interesting and significant: the raw data from a two-channel EPR experiment (i.e. using PBSs) is a list of Alice's and Bob's detector readings, each of which will be either (0,1) or (1,0). These may be grouped by the settings ##\alpha=(a,a')## and ##\beta=(b,b')##. Any counting of relative frequencies of detector clicks will give apparently random results, 50/50 odds.
But if we count the number of times Alice and Bob both got (1,0) or (0,1), there are huge deviations from 50/50 if we divide into the four settings categories. The deviations are large enough to be impossible without assuming entanglement.
So from one point of view the results contain no information - from another we find non-random results.
The reason is that when we do the coincidence counting we are extracting the only information there is in the singlet state wave function and averaging out non-existent dof.
For me this is the only connection between EPR and randomness - it's all random except that ##P(\text{coincidence}) = \cos^2(\alpha-\beta)##.
 
  • #71
Boing3000 said:
A local event is different from a non-local event.

There is no such thing as a "non-local event". An event is a point in spacetime. Go look at a relativity textbook.

Boing3000 said:
All quantities are perfectly defined and Lorenz covariant.

You can define Lorentz covariant quantities that involve multiple events (multiple points in spacetime): for example, the invariant arc length along a particular spacelike curve. But these quantities do not describe "non-local events". They describe multiple events.

Boing3000 said:
that simple logic is vindicated by experiment

What experiments are you talking about?

Boing3000 said:
Don't you agree that non-local is per definition something that happens once (thus an event) but across a wide range of place ?

No. See above and go look at a relativity textbook.
 
  • Like
Likes bhobba
  • #72
Boing3000 said:
Don't you agree that non-local is...

Wikipedia is not a valid source, and in any case that Wikipedia page is not talking about what the term "non-local" refers to in this discussion.
 
  • #73
Mentz114 said:
I have to agree with that. Some people find the following fact interesting and significant [..]
Thanks for posting! I now also saw that you hinted at this already in post 12,
Mentz114 said:
I don't think Bells theorem has anything to say about randomness.

Stephen Tashi said:
The paper linked by @atyy (Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265) has a title suggesting it accomplishes this. (I don't see how - can someone summarize?) The discussions in this thread of Bell-type experiments seem to have similar goals.
Thanks for posting this! I'm reading the paper now, and I may have some comments about it, so I will likely return to this thread later. I think this topic is very fascinating, so I pray that the thread stays open... :smile:
 
  • #74
I've read the paper (http://arxiv.org/abs/1708.00265) through once, quickly, and I have comments on it, which I will post later when I have thought it through more.

But I wanted to post a link to another paper, which I found linked in the paper @atyy posted.
It's pretty funny, since it seems this paper examines what I was talking about in post 68, that is, sequences of measurements on the same system:

Unbounded randomness certification using sequences of measurements (F. J. Curchod et al.)
http://arxiv.org/abs/1510.03394
Abstract:
Unpredictability, or randomness, of the outcomes of measurements made on an entangled state can be certified provided that the statistics violate a Bell inequality. In the standard Bell scenario where each party performs a single measurement on its share of the system, only a finite amount of randomness, of at most ##4\log_2 d## bits, can be certified from a pair of entangled particles of dimension d. Our work shows that this fundamental limitation can be overcome using sequences of (nonprojective) measurements on the same system. More precisely, we prove that one can certify any amount of random bits from a pair of qubits in a pure state as the resource, even if it is arbitrarily weakly entangled. In addition, this certification is achieved by near-maximal violation of a particular Bell inequality for each measurement in the sequence.

I think I will have very interesting stuff to read and think about this weekend. :smile:
 
Last edited:
  • #75
PeterDonis said:
There is no such thing as a "non-local event". An event is a point in spacetime. Go look at a relativity textbook.
Sadly, you keep confusing "event" with "non-local event". I don't think I am going to find a definition of non-locality in a relativity textbook (**). But maybe you have a precise reference in mind?

PeterDonis said:
You can define Lorentz covariant quantities that involve multiple events (multiple points in spacetime): for example, the invariant arc length along a particular spacelike curve. But these quantities do not describe "non-local events". They describe multiple events.
I am more interested in basic worldlines, although I know how you hate Wikipedia references, even at a B thread level. Upon entanglement, two perfectly defined worldlines fork, and every event pair with the same proper time value can also be named a non-local event. Because ...

PeterDonis said:
What experiments are you talking about?
... all experiments based on Bell's inequalities do show that the correlations behave practically as if there is only one value at any one time.

PeterDonis said:
No. See above and go look at a relativity textbook.
See above (**). I won't follow that red herring.
Although, again, I would welcome a precise reference for the relativity definition of "non-locality".
 
  • #76
Boing3000 said:
Sadly, you keep confusing "event" with "non-local event".

No, you keep using the term "event" incorrectly. There is no such thing as a "non-local event" in relativity. An event is a point in spacetime.

Boing3000 said:
Upon entanglement, two perfectly defined worldlines fork, and every event pair with the same proper time value can also be named a non-local event.

I don't know where you are getting this from, but it is not correct. To the extent you can model a particle that is entangled with another particle using a worldline from classical relativity, there is no "forking" of worldlines as a result of entanglement.

Boing3000 said:
all experiments based on Bell's inequalities do show that the correlations behave practically as if there is only one value at any one time

Standard QM says that when you make a measurement, you observe one value, yes. But you can't predict in advance which value it will be; you can only predict probabilities.

Boing3000 said:
I would welcome a precise reference on the relativity definition on "non-locality"

There isn't one. I told you to look in a relativity textbook for the correct definition of "event", not "non-locality". Go do it.

A general note: if you make another post along the lines of your previous ones, you will receive a misinformation warning. You really, really, really need to stop posting on this topic until you have taken the time to learn the correct physics. You do not have a good understanding of it now.
 
  • #77
Boing3000 said:
[]
... all experiments based on Bell's inequalities do show that the correlations behave practically as if there is only one value at any one time.
This is the case because the entangled pair share the same wave function. But there are TWO particles, so if something happens to both, that is TWO events. If you assume that there is instantaneous communication between the pair (as a working assumption), then that is the non-locality.

Also, be aware that photons have null worldlines, so proper time is undefined.
 
  • #78
From my point of view, I see nothing mysterious about probability, or about probability in QM. For a start, to get a clearer picture, take geometric probability; see, for instance, Wolfram MathWorld. You can see that probability is nothing but an expression of relations between objects; the easiest example is line-line picking. Probability in that sense is just choosing ALL POSSIBLE line lengths (with some weight distribution, usually normal) and relating them to the main line, from which many properties and relations can be deduced. So, just as in such mathematics, NO REASON is given for the probability other than the possibilities based on the problem at hand. Even when we describe the problem we say, for example, "pick two points at random on it"; NO REASON is given, and what we mean is what I have described earlier.

Now for QM, the situation is pretty much the same: the solution for psi is fixed by the constraints in the equation, and THAT IS THE REASON FOR THE RANDOMNESS, in case you choose to interpret psi squared as a probability. I see that as nothing fundamentally different from our mathematical setup. Especially so since we all seem to agree that QM is fundamental and there is no "deeper" underlying theory.
 
  • #79
ftr said:
Especially so since we all seem to agree that QM is fundamental and there is no "deeper" underlying theory.
Hmm, I agree and don't agree. So you could say my agreement is in a superposition :smile:. For the purpose of this thread and the general policy of this forum, I agree that QM is fundamental, since everything else would be hypothetical at best and fringe at worst. But I cannot say that there is no deeper underlying theory, since I would consider such a statement unscientific.

But back to the OP: my interest in this thread and in exploring the inherent randomness in quantum mechanical processes is not because of the idea of a deeper underlying theory, nor the idea of hidden variables. I am simply very curious: how random are these processes? Can we quantify the randomness and compare it to that of pseudorandom processes in order to demonstrate the hypothetically superior randomness of QM? I've never thought about this before, and I got fascinated by the OP question. But I haven't had the time or energy to read the posted papers again, because of the heat wave we have at the moment over here.
 
Last edited:
  • #80
DennisN said:
Hmm, I agree and don't agree. So you could say my agreement is in a superposition :smile:. For the purpose of this thread and the general policy of this forum, I agree that QM is fundamental, since everything else would be hypothetical at best and fringe at worst. But I cannot say that there is no deeper underlying theory, since I would consider such a statement unscientific.

But back to the OP: my interest in this thread and in exploring the inherent randomness in quantum mechanical processes is not because of the idea of a deeper underlying theory, nor the idea of hidden variables. I am simply very curious: how random are these processes? Can we quantify the randomness and compare it to that of pseudorandom processes in order to demonstrate the hypothetically superior randomness of QM? I've never thought about this before, and I got fascinated by the OP question. But I haven't had the time or energy to read the posted papers again, because of the heat wave we have at the moment over here.
Since the last bunch of posts I did some reading, but mainly to see if there is such a thing as a maximally random string of bits.
I came up with this. Suppose we write the n'th-order autocorrelation function ##\rho_n## in terms of the probability ##p_n=(1+\rho_n)/2##; then the Shannon entropy is ##S=-\sum_{n=1}^{N}\left[p_n\log_2 p_n+(1-p_n)\log_2(1-p_n)\right]##. This is maximised when all the ##\rho_n## are zero, giving ##S=N##. This is another way of saying that every outcome is independent, giving N degrees of freedom. Is it possible to have a less predictable sequence?
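A small numerical sketch of this estimate (Python, using the convention ##p_n=(1+\rho_n)/2##; the sequence length and lag range are arbitrary choices):

```python
import math
import random

def autocorr(bits, lag):
    """Normalized lag-n autocorrelation of the sequence mapped to +/-1 values."""
    s = [2 * b - 1 for b in bits]
    pairs = list(zip(s, s[lag:]))
    return sum(a * b for a, b in pairs) / len(pairs)

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

bits = [random.getrandbits(1) for _ in range(100_000)]
S = sum(binary_entropy((1 + autocorr(bits, n)) / 2) for n in range(1, 51))
print(S)  # close to 50: each lag contributes ~1 bit when rho_n is ~0
```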
 
  • #81
Aufbauwerk 2045 said:
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.
...

My understanding is that if there is no such mathematical test that can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic"?

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

See the book by Li and Vitányi on Kolmogorov complexity. I think most everything you asked about is addressed there.
 
  • #82
stevendaryl said:
...Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
Implying likewise that Copenhagen is empirically equivalent to Bohmian Mechanics?
 
  • #83
Lish Lash said:
Implying likewise that Copenhagen is empirically equivalent to Bohmian Mechanics?
I don't think 'randomness' is necessary for any interpretation of QT. QT only gives us probabilities, so if we have a two-outcome situation and we calculate the probabilities to be 1/2, we cannot predict an individual outcome, but we have an exact probability. There's nothing random about that. When we do the experiment to test this we get a binary string, and now we can apply one or more of many tests of 'randomness'. But those tests do not come from QT and are independent of it, as they must be. If the last condition were not true we would be using a theory to test itself.

It is probably safe to say that there is no 'randomness' in QT but a lot in Nature.
 
Last edited:
  • #84
I don't think that a "truly deterministic operator" and a "truly nondeterministic operator" could co-exist for long in one system, because the "truly nondeterministic operator" will influence the "truly deterministic operator". And then the "truly deterministic operator" won't be "truly deterministic" anymore. That's the reason why I think this question is highly hypothetical and has no "true" answer: either both are "truly deterministic operators" or neither is.
 
  • #85
Bell tests have nothing to do with the original question in this thread; the original question is how to distinguish between a deterministic random sequence and a non-deterministic random sequence.

The correct answer is that you can't tell without:
A- Looking inside the boxes
or
B- Wait long enough to see if the sequence repeats, because random number generators have finite sequence length.

Regarding the main question:
What is randomness in QM?
QM does not say whether randomness is deterministic or non-deterministic; only the different QM interpretations say that.
The mainstream interpretation of QM (the Copenhagen interpretation) is non-deterministic. However, there are other, deterministic QM interpretations, such as the Many Worlds Interpretation and Pilot Wave Theory (Bohmian mechanics).
 
  • Like
Likes DrChinese
  • #86
Deepblu said:
Wait long enough to see if the sequence repeats, because random number generators have finite sequence length.

Well, the digits of pi are computable, generally considered random (though this is not proven - we do not know, for example, whether all digits occur with the same frequency, the so-called normal property - but they pass all current tests of randomness, as far as they have been tested), and because pi is irrational they can't repeat.
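For anyone who wants to poke at this, here is a sketch (assuming the third-party mpmath package) that pulls 10,000 decimal digits of pi and applies a simple chi-square frequency test, one of the many randomness tests the digits are known to pass:

```python
from collections import Counter
from mpmath import mp

mp.dps = 10_010                             # working precision in decimal digits
digits = mp.nstr(mp.pi, 10_005)[2:10_002]   # skip the "3." and a few guard digits
counts = Counter(digits)
expected = len(digits) / 10
chi2 = sum((counts[str(d)] - expected) ** 2 / expected for d in range(10))
print(chi2)
```

A value well below ~16.9 (the 5% threshold for a chi-square with 9 degrees of freedom) means the digit frequencies look uniform, even though the sequence is completely deterministic.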

So basically the whole thing is filled with unanswered questions. A Fields Medal is probably up for grabs for anyone who solves it.

Thanks
Bill
 
  • #87
bhobba said:
Well, the digits of pi are computable, generally considered random (though this is not proven - we do not know, for example, whether all digits occur with the same frequency, the so-called normal property - but they pass all current tests of randomness, as far as they have been tested), and because pi is irrational they can't repeat.

However, RNGs do not use irrational numbers, because the point of an RNG is to generate a different random sequence when the initial conditions (the seed) are changed.
 
  • Like
Likes DennisN
  • #88
Deepblu said:
B- Wait long enough to see if the sequence repeats, because random number generators have finite sequence length.
As @bhobba pointed out, a mathematically well-defined pseudorandom number algorithm need not be periodic.

However, a practical pseudorandom number algorithm implemented by humans with finite resources can be modeled by a "finite state machine", which necessarily has a periodic output.
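A sketch of the pigeonhole argument behind that (the 8-bit update rule is an arbitrary toy choice): a deterministic generator whose entire state fits in k bits must revisit a state within ##2^k## steps, after which its output repeats forever:

```python
def step(state):
    """An arbitrary toy state machine on 8 bits of state."""
    return (197 * state + 33) % 256

seen, state, t = {}, 7, 0
while state not in seen:    # must terminate within 2**8 steps by pigeonhole
    seen[state] = t
    state, t = step(state), t + 1
print("cycle length:", t - seen[state], "<= 2**8 =", 256)
```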

A- Looking inside the boxes

If it takes more than finite resources for humans to implement a non-periodic random number generator, then is it "expensive" for Nature to implement her work-alike genuine random number generators? Do those black boxes have properties that can be measured externally?

For example if we set out to construct a "white box" random number generator implemented by known Natural processes, then is there a lower limit on the energy required to run it? (That may involve the issue of how fast we want it to run.)
 
Last edited:
  • Like
Likes bhobba
  • #89
Deepblu said:
However, RNGs do not use irrational numbers, because the point of an RNG is to generate a different random sequence when the initial conditions (the seed) are changed.

They don't? I gave an example of one that did. What is done in practice and what can be done in principle are not the same thing.

Thanks
Bill
 