Clarifying the Meaning of "Random" in Quantum Physics

ScientificMind
This might be a silly question, but when people say that something on the quantum level is completely "random" (except for general probability), does that mean, according to theory at least, that if you were to go back in time and repeat an experiment exactly, the results could just as easily be different as the same? Or that the results of a given experiment are unpredictable beforehand, aside from the general likelihood of many different possible events, but the results would still be the same in the aforementioned scenario? Or, I suppose, do currently accepted theories not have an answer for this?
 
I don't think anyone is thinking about time travel.

What they mean is that if you have N identically prepared systems, the number x showing a particular outcome approaches Np for large N, where p is the probability of that outcome.
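As a concrete illustration, here is a minimal sketch in Python (assuming a generic two-outcome experiment; the probability p = 0.3, the fixed seed, and the sample sizes are arbitrary choices, not anything specific from the thread):

```python
import random

def outcome_fraction(p, n, seed=0):
    """Simulate n identically prepared two-outcome systems, each giving
    the outcome of interest with probability p; return the observed
    fraction x/n of that outcome."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    x = sum(1 for _ in range(n) if rng.random() < p)
    return x / n

# The observed fraction x/N approaches p as N grows:
for n in (100, 10_000, 1_000_000):
    print(n, outcome_fraction(0.3, n))
```

Nothing quantum happens in this snippet, of course; it only illustrates the frequency reading of the probability p that the theory supplies.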
 
The result of experiment X at time T was x.
In a hypothetical parallel universe, doing the same experiment at time T might result in y.
However, this does not alter the fact that in the original universe the result was x.
 
Last edited:
ScientificMind said:
when people say that something on the quantum level is completely "random" (except for general probability), does that mean, according to theory at least, that if you were to go back in time and repeat an experiment exactly, the results could just as easily be different as the same
People sometimes say that quantum randomness is fundamental, and then I believe they mean what you said. But you have to leave out the "according to theory" phrase, because theory is silent about it.
 
Clarifying the meaning of randomness in quantum theory, as opposed to randomness in classical physics, is tricky. Traditionally it has been considered fundamental, and thus quantum theory is considered an indeterministic theory; but if you read the answer by V50, for instance, he is describing classical probability, since by the Born rule postulate that is the way we must think of predictions in quantum physics. Classical probability is obviously compatible with classical physics, and from that point of view it is not the defining mark of a random theory.

It is often argued that the difference lies in probabilities being the only result obtainable in quantum physics, but that is not exactly true, as many results, like those using the time-independent Schrodinger equation or many in QFT, are not in the form of probabilities. Others center on the lack of trajectories for particles, but that is just a side effect of their not being "classical particles", so it is somewhat tautological to give it as a reason for the fundamental randomness of the theory.

More paradoxical features of quantum physics with respect to the meaning of randomness and whether or not it is fundamental: as is well known, QM is often split, with respect to time evolution, into a purely deterministic evolution between measurements (the Schrodinger equation), which is reversible (unitary), and a random one related to observation and measurement, which is irreversible (non-unitary), with people giving more weight to one or the other depending on their interpretation. But ironically, for the interpretations that admit this cut, the random part is the one corresponding to classical (and therefore classically deterministic) macroscopic observation. And interpretations like many worlds, which deny the cut, are purely deterministic, like the SE.

So it is important to realize that the presence of randomness per se does not mean all kinds of determinism are ruled out, although I tend to think that the specific form of classical determinism is. Causal determinism, I would say, is not.

On the other hand, relativistic quantum field theory, insofar as it follows the SR ontology, is local and classically deterministic, since it is set in Minkowski spacetime. How that is compatible with the quantum part, in view of the outcomes of Bell-type quantum experiments, is not clearly explained or even addressed in general.
Also, the basic tenets of particle physics, when it talks about matter constituents or ultimate building blocks, or about the distinction between elementary and composite particles according to their internal structure in the fashion of Democritean atomism, follow classical determinism.
 
Last edited:
TrickyDicky said:
On the other hand, relativistic quantum field theory, insofar as it follows the SR ontology, is local and classically deterministic, since it is set in Minkowski spacetime. How that is compatible with the quantum part, in view of the outcomes of Bell-type quantum experiments, is not clearly explained or even addressed in general.

The latter is explained in the discussion here (and its context, starting at #153 there).
 
ScientificMind said:
This might be a silly question, but when people say that something on the quantum level is completely "random" (except for general probability), does that mean, according to theory at least, that if you were to go back in time and repeat an experiment exactly, the results could just as easily be different as the same? Or that the results of a given experiment are unpredictable beforehand, aside from the general likelihood of many different possible events, but the results would still be the same in the aforementioned scenario? Or, I suppose, do currently accepted theories not have an answer for this?

The "going back in time" is conceptually more or less right, but a better way to put it is "identically prepared" (as Vanadium 50 says above).

Within quantum theory, the theory itself says that even if we prepare systems "identically", the result will usually be different for each "identical" preparation. In the Copenhagen interpretation, this means that a pure state is the complete specification of everything we can know about a single system, but the theory only makes statistical predictions even if the pure state of a single system is completely specified (as bhobba mentions above).
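For the simplest case, a single qubit, this statistical character can be sketched in a few lines (an illustrative example only; the equal superposition state is my arbitrary choice, not anything specific from the thread):

```python
import math

def born_probs(alpha, beta):
    """Born rule for a normalized qubit state alpha|0> + beta|1> measured
    in the computational basis: P(0) = |alpha|^2, P(1) = |beta|^2."""
    return abs(alpha) ** 2, abs(beta) ** 2

# Identically prepared equal superpositions: the pure state is completely
# specified, yet each individual outcome is predicted only as 50/50.
p0, p1 = born_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
print(p0, p1)
```

The point of the sketch is that the probabilities are all the theory returns, even though nothing about the preparation is left unspecified.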

It may be that we will discover that quantum theory is not the most fundamental theory, and there could be a more fundamental theory in which identical preparations do give identical results. Relative to such a more fundamental theory, the "identical" preparations of quantum theory would correspond to non-identical preparations.
 
Last edited:
atyy said:
Relative to such a more fundamental theory, the "identical" preparations of quantum theory would correspond to non-identical preparations.
Not really. ''Identically prepared'' means no more and no less than ''prepared in the same pure state'', and hence is relative to the model of the physical system. Of course, only very small and discrete quantum systems can be truly identically prepared. Thus in most cases there is, in addition to the randomness according to Born's rule, another source of unrepeatability, due to our inability to reproduce a state exactly.
 
  • #10
A. Neumaier said:
Not really. ''Identically prepared'' means no more and no less than ''prepared in the same pure state'', and hence is relative to the model of the physical system. Of course, only very small and discrete quantum systems can be truly identically prepared. Thus in most cases there is, in addition to the randomness according to Born's rule, another source of unrepeatability, due to our inability to reproduce a state exactly.

I don't think we disagree. You are talking about "identically prepared" within the Copenhagen interpretation, which is what I am referring to by quantum mechanics. By "relative to a more fundamental theory", I mean a theory in which the pure state is not the most complete specification of the state of an individual system, for example Bohmian mechanics. In Bohmian mechanics, quantum theory is not fundamental, and the "identical preparations" of quantum theory correspond to a distribution over different initial conditions.
 
  • #11
atyy said:
You are talking about "identically prepared" within the Copenhagen interpretation
I am not talking about the Copenhagen interpretation.

The term "identically prepared'' is not specific to an interpretation, since the Born rule, of which it is part, must hold in any interpretation of quantum mechanics that deserves this name. Even in Bohmian mechanics, one can derive the Born rule only if one first gives an explanation what it means in the Bohmian setting to prepare a system in a pure quantum state ##\psi## (in the sense of an operational Born rule). Otherwise we do not have an interpretation of quantum mechanics (with its notion of pure state) but a completely different theory.
 
  • #12
A. Neumaier said:
I am not talking about the Copenhagen interpretation.

The term "identically prepared'' is not specific to an interpretation, since the Born rule, of which it is part, must hold in any interpretation of quantum mechanics that deserves this name. Even in Bohmian mechanics, one can derive the Born rule only if one first gives an explanation what it means in the Bohmian setting to prepare a system in a pure quantum state ##\psi## (in the sense of an operational Born rule). Otherwise we do not have an interpretation of quantum mechanics (with its notion of pure state) but a completely different theory.

Sure. I mean Bohmian mechanics as a completely different theory.
 
  • #13
A. Neumaier said:
The latter is explained in the discussion here (and its context, starting at #153 there).

A. Neumaier said:
Almost every nonlinear deterministic system is chaotic, in a precise mathematical sense of ''almost'' and ''chaotic''. It ultimately comes from the fact that already for the simplest differential equation ##\dot x = ax## with ##a>0##, the result at time ##t\gg 0## depends very sensitively on the initial value at time zero, together with the fact that nonlinearities scramble up things. Look up the baker's map if this is new to you.
So I also find it useful to distinguish between a linear classical determinism and a nonlinear determinism. Can you expand on how the nonlinearity enters in quantum microscopic systems?
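The sensitivity A. Neumaier mentions can be made concrete with the exact solution of that differential equation (a sketch; the values of a, t, and the initial gap are arbitrary illustrative choices):

```python
import math

def x(t, x0, a=1.0):
    """Exact solution x(t) = x0 * exp(a*t) of dx/dt = a*x with x(0) = x0."""
    return x0 * math.exp(a * t)

# Two initial conditions differing by one part in a billion:
t, gap0 = 30.0, 1e-9
gap = abs(x(t, 1.0 + gap0) - x(t, 1.0))
print(gap)  # the tiny initial gap has grown by the factor e^(a*t) ~ 1e13
```

Any uncertainty in the initial value is amplified by the factor e^(a*t), which is the sensitive dependence being described.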
A. Neumaier said:
All arguments I have seen against hidden variable theories - without exception - assume a particle picture; they become vacuous for fields.

The problems of few particle detection arise because their traditional treatment idealizes the detector (a complex quantum field system) to a simple classical object with a discrete random response to very low intensity field input. It is like measuring the volume of a hydrodynamic system (a little pond) in terms of the number of buckets you need to empty the pond - it will invariably result in integral results unless you model the measuring device (the bucket) in sufficient detail to get a continuously defined response.
I agree with this.
It follows that quantum field theory is not affected by the extended literature on hidden variables.
The problem is that currently QFT, as applied to high energy physics and the standard model of particle physics, relies on a classical atomistic particle ontology (even to define elementary particles, which is its fundamental goal: the search for the universe's ultimate constituents or building blocks, associated with ever higher energies), and in that sense it is indeed quite affected by the hidden-variables literature, since atomism is deterministic in the classical sense.
 
  • #14
A. Neumaier said:
The latter is explained in the discussion here (and its context, starting at #153 there).

A. Neumaier said:
All arguments I have seen against hidden variable theories - without exception - assume a particle picture; they become vacuous for fields.
You can replace particles with clicks in detectors and the argument remains the same. So the Bell inequality applies to fields just as well.
This can be easily argued based on this model:
https://www.physicsforums.com/showthread.php?p=2817138#post2817138
 
  • #15
zonde said:
You can replace particles with clicks in detectors and the argument remains the same.
Clicks in detectors ARE what's normally interpreted as the particle picture, so it replaces nothing. It is an assumption of the inequalities that goes by the name of local hidden variables, a.k.a. classical determinism, which is local and linear. And that is what is violated in the experiments.
 
  • #16
TrickyDicky said:
The problem is that currently QFT, as applied to high energy physics and the standard model of particle physics, relies on a classical atomistic particle ontology
Not really. One can interpret everything in QFT in terms of densities and currents only; indeed, this is how much of QFT is related to experimental results. The particle terminology is to a large extent historical baggage. It is not needed for the interpretation. In typical high-energy experiments, one measures tracks of energy deposits; their interpretation as particle tracks is optional though common.

zonde said:
So Bell inequality applies to fields just as well.
[old & incorrect quick answer - I confused satisfied and violated] Of course. Nobody disputes that. But there is not the slightest argument suggesting that a hidden-variable field theory would have to satisfy the Bell inequalities, while a hidden-variable particle theory provably does so, unless one allows for all sorts of weird behavior that is inconsistent with an intuitive notion of a particle.

[new and valid answer] A hidden-variable particle theory provably satisfies Bell inequalities known to be violated by quantum mechanics, unless one allows for all sorts of weird behavior that is inconsistent with an intuitive notion of a particle. On the other hand, a hidden-variable field theory is so nonlocal from the start that none of the assumptions used to justify Bell type inequalities are satisfied, hence the Bell inequalities cannot be derived.
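For reference, the quantum violation being contrasted here can be checked numerically with the textbook CHSH setup, where the singlet-state correlation is E(a,b) = -cos(a-b) (a standard illustration, not specific to the field-versus-particle point being argued):

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements at angles a and b."""
    return -math.cos(a - b)

# The standard CHSH angle choices:
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local hidden variable bound of 2
```

Any theory satisfying the assumptions behind the CHSH derivation is bounded by |S| <= 2, which is exactly what the quantum prediction exceeds.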
 
Last edited:
  • Like
Likes bhobba
  • #17
Box apparatus, Color/Hardness 50/50s
- Operational result

Summary: +22:00 - non-correlation
+24:38 - Bell's inequality; unpredictable, non-deterministic, random. Probability is forced upon us by observations.
+29:00 - Uncertainty principle
+43:38 - Empirical vs. principle argument
+50:19 - Head scratch / unsettling
+1:01:08 - Test/operational conclusion on superposition
Moral: Deal with it...

 
Last edited:
  • #18
TrickyDicky said:
Clicks in detectors ARE what's normally interpreted as the particle picture, so it replaces nothing. It is an assumption of the inequalities that goes by the name of local hidden variables, a.k.a. classical determinism, which is local and linear. And that is what is violated in the experiments.
Clicks in detectors are a physical fact (direct observation) and not subject to interpretation.
So if we base the inequality argument on clicks of detectors, we simply bypass any interpretation about what is causing them.
 
  • #19
A. Neumaier said:
Of course. Nobody disputes that. But there is not the slightest argument suggesting that a hidden-variable field theory would have to violate the Bell inequalities, while a hidden-variable particle theory provably does so, unless one allows for all sorts of weird behavior that is inconsistent with an intuitive notion of a particle.
You lost me here.
What do you mean by "hidden-variable particle theory"?
Do you mean that QM predicts violation of Bell inequalities because it clings to particle idea?
 
  • #20
zonde said:
Clicks in detectors are a physical fact (direct observation) and not subject to interpretation.
So if we base the inequality argument on clicks of detectors, we simply bypass any interpretation about what is causing them.
Not subject to interpretation? Are you serious? Apples falling down are also physical direct observation facts, interpreting this has produced two different theories by Newton and Einstein, but hey, they are not subject to interpretation according to you.
 
  • #21
TrickyDicky said:
Not subject to interpretation? Are you serious? Apples falling down are also physical direct observation facts, interpreting this has produced two different theories by Newton and Einstein, but hey, they are not subject to interpretation according to you.
Neither Newton's nor Einstein's theory of gravity dispute the physical direct observation fact of apples falling down. They give slightly different reasons why they are falling down.
 
  • #22
A. Neumaier said:
Not really. One can interpret everything in QFT in terms of densities and currents only; indeed, this is how much of QFT is related to experimental results. The particle terminology is to a large extent historicial baggage. It is not needed for the interpretation. In typical high-energy experiments, one measures tracks of energy deposits; their interpretation as particle tracks is optional though common.
This is true to a certain extent, but the LHC search for the fundamental building blocks, and the fact that they are defined depending on internal structure (which only admits an interpretation in terms of the classical atomistic particle picture), doesn't seem to be just historical baggage, judging by the cost of the enterprise, and doesn't seem to allow an optional interpretation.
 
  • #23
zonde said:
Neither Newton's nor Einstein's theory of gravity dispute the physical direct observation fact of apples falling down. They give slightly different reasons why they are falling down.
Nothing I said disputes that clicks are produced, only the physical interpretation of those clicks, which you claim doesn't exist.
 
  • #24
TrickyDicky said:
Nothing I said disputes that clicks are produced, only the physical interpretation of those clicks, which you claim doesn't exist.
Hmm maybe my statement was sloppy.
But now I do not understand your point. My argument was that we can base a Bell-type inequality on the physical fact that clicks are produced. So we do not have to interpret clicks in any way to produce the inequality.
 
  • #25
TrickyDicky said:
the LHC search for the fundamental building blocks and the fact that they are defined depending on internal structure (which only admits an interpretation in terms of the classical atomistic particle picture)
Your belief that the internal structure only admits a particle interpretation is unfounded.

What is searched for are indicators for a pole in a field correlation that cannot be explained without the presence of a Higgs field and gives the Higgs mass. The latter is just the value of the position of the pole - interpreting it as the mass of a particle is purely historical baggage, not a necessary property of the underlying quantum field theory.

Most of the subatomic ''particles'' live far too briefly to ever be seen as tracks that would, in a semiclassical approximation, justify talking about particles. Quarks don't even admit a particle picture in any observational sense, since due to confinement they cannot be observed at all.
 
  • Like
Likes bhobba
  • #26
zonde said:
You lost me here.
What do you mean by "hidden-variable particle theory"?
Do you mean that QM predicts violation of Bell inequalities because it clings to particle idea?

I misstated in #16 what I had meant to say; see the updated formulation there for the intended version. To derive Bell's inequality you need to make assumptions that are never satisfied when you start with Maxwell's equations. Thus Bell's inequality doesn't hold. But the quantum result is just based on properties of the Maxwell equations, hence the quantum field approach gives results identical to the experimental findings.
 
  • #27
A. Neumaier said:
Your belief that the internal structure only admits a particle interpretation is unfounded.

What is searched for are indicators for a pole in a field correlation that cannot be explained without the presence of a Higgs field and gives the Higgs mass. The latter is just the value of the position of the pole - interpreting it as the mass of a particle is purely historical baggage, not a necessary property of the underlying quantum field theory.

Most of the subatomic ''particles'' live far too briefly to ever be seen as tracks that would, in a semiclassical approximation, justify talking about particles. Quarks don't even admit a particle picture in any observational sense, since due to confinement they cannot be observed at all.
Well, the idea of a hierarchy of substructures in a "Russian dolls" scheme only seems to admit a local deterministic ontology, which is what is violated in Bell-type experiments.

I know the ontology doesn't always coincide with the underlying math derived from experiments, which could be compatible with only nonlinear fields, but the explicit ontology based on free fields (the only ones well defined rigorously, as you know) is the local deterministic one I commented on above (and the one that drives collider research).
 
  • #28
zonde said:
Hmm maybe my statement was sloppy.
But now I do not understand your point. My argument was that we can base a Bell-type inequality on the physical fact that clicks are produced. So we do not have to interpret clicks in any way to produce the inequality.
Yes, you have. It is in the mathematical assumptions of the inequalities, by stating that causality can only act in a linear way, i.e. the outcomes A and B are in linear combination with vectors a and b.
 
  • #29
TrickyDicky said:
the idea of a hierarchy of substructures in a "russian dolls" scheme
is not part of the LHC's toolkit. Nobody knows how many levels there are below the standard model - most physicists seem to think that there are at most two - one where gravity is included, and perhaps one more where supersymmetry appears. Nothing looks like a Russian doll picture.

By the way, the quantum particle ontology is by no means better defined than the quantum field ontology. On the nonrelativistic level they are both well-defined, but including relativity has never been done satisfactorily - neither on the particle level, nor on the field level.

Colliders collide sharply bundled rays of matter fields kept in focus by electromagnetic fields. The debris of the collision is another field whose splashes are recorded by the detectors and then analyzed (in a quite complex way) for signals matching or deviating from the predictions of quantum field theory. That's reality. Particles are at best interpreted into this.
 
  • Like
Likes bhobba
  • #30
A. Neumaier said:
[old & incorrect quick answer - I confused satisfied and violated] Of course. Nobody disputes that. But there is not the slightest argument suggesting that a hidden-variable field theory would have to satisfy the Bell inequalities, while a hidden-variable particle theory provably does so, unless one allows for all sorts of weird behavior that is inconsistent with an intuitive notion of a particle.

[new and valid answer] A hidden-variable particle theory provably satisfies Bell inequalities known to be violated by quantum mechanics, unless one allows for all sorts of weird behavior that is inconsistent with an intuitive notion of a particle. On the other hand, a hidden-variable field theory is so nonlocal from the start that none of the assumptions used to justify Bell type inequalities are satisfied, hence the Bell inequalities cannot be derived.

A. Neumaier said:
I misstated in #16 what I had meant to say; see the updated formulation there for the intended version. To derive Bell's inequality you need to make assumptions that are never satisfied when you start with Maxwell's equations. Thus Bell's inequality doesn't hold. But the quantum result is just based on properties of the Maxwell equations, hence the quantum field approach gives results identical to the experimental findings.

The old answer is indeed usually argued informally, but that does not mean that a relativistic classical field can violate the Bell inequalities.

I am skeptical that you can show that the classical Maxwell equations violate a Bell inequality.

What do you think of the argument given in http://arxiv.org/abs/1407.3610v1 (J. Math. Phys. 56, 032303 2015)?
 
Last edited:
  • #31
atyy said:
I am skeptical that you can show that the classical Maxwell equations violate a Bell inequality.
I argued it in a link already given, A simple hidden variable experiment. The abstract says, among other things: ''The analysis is very simple and transparent. In particular, it demonstrates that a classical wave model for quantum mechanics is not ruled out by experiments demonstrating the violation of the traditional hidden variable assumptions.''

The paper you mentioned is very technical, so it is not easy to figure out what it actually claims about a hidden variable theory based on fields. In any case if its mathematics is sound it cannot contradict the findings in my paper.
 
  • #32
A. Neumaier said:
is not part of the LHC's toolkit. Nobody knows how many levels there are below the standard model - most physicists seem to think that there are at most two - one where gravity is included, and perhaps one more where supersymmetry appears. Nothing looks like a Russian doll picture.
I wasn't implying there had to be a certain definitive last Russian doll, but by talking about "how many levels below" you are highlighting precisely what I mean by the Russian doll hierarchical picture, and you seem to be endorsing my point. If you really are discarding the classical idea of particle, and are rigorous about it, you cannot think in terms of atoms carrying a substructure of electrons and hadrons that themselves may or may not carry within them further local deterministic entities.
 
Last edited:
  • #33
TrickyDicky said:
If you really are discarding the classical idea of particle and are rigorous about it you cannot think in terms of atoms carrying a substructure of electrons and hadrons
Instead I can think in terms of fields that get a bit more detailed substructure as one looks at them at shorter distances. This doesn't need particles, an indefinitely deep version of nested substructure is a familiar feature of anything fractal.

But physics does not even seem to be fractal in the conceptual structure of its fields, hence it does not really have a Russian doll structure. There are just five fields - the electron field, the hadronic field, the gravitational field, the gauge field, and the Higgs field.

From a phenomenological point of view, the electron field and the hadronic field give matter volume and mass. The electron field describes a kind of fluid responsible for filling the space between the nuclei, which are concentrations of the hadronic field in which most of the mass is concentrated. The gravitational field keeps us bound to the earth. The gauge field has a few components that we perceive as electromagnetism. It is visible in thunderstorms and in compasses; moreover, it keeps matter together on everyday scales. It has a few more components that we can notice only at small scale, and that keep nuclei, protons and neutrons (tiny lumps in the hadronic field) together. For the other fields (except for gravitation) some additional components become visible at tiny scales. The Higgs field was found last, as it makes its impact only at distances we can now just barely resolve.

That's it - nothing more is expected. Supersymmetry would only increase a bit the number of components of the fields, and string theory would change a bit their mathematical description. As you can see, everything apart from the Higgs is already there on the macroscopic scale - the outermost shell of your Russian doll; it just develops some fine structure when you look at it more closely. And as you can also see, we have fields from the largest to the smallest scale - always essentially the same fields.
 
  • Like
Likes julcab12 and TrickyDicky
  • #34
atyy said:
I am skeptical that you can show that the classical Maxwell equations violate a Bell inequality.
After reading Neumaier's paper, I don't think that's what the experiment shows. Not without further qualifying what one means by the classical Maxwell equations, since the paper suggests a strong laser as a way to implement the experiment, which would demand nonlinear Maxwell equations in inhomogeneous media, and that is enough to violate the inequalities.
 
  • #35
A. Neumaier said:
On the other hand, a hidden-variable field theory is so nonlocal from the start that none of the assumptions used to justify Bell type inequalities are satisfied, hence the Bell inequalities cannot be derived.
It seems that the term "nonlocal" has different meanings for different people. Can you be more specific and explain what you mean by the nonlocality of a hidden-variable field theory?
A. Neumaier said:
To derive Bell's inequality you need to make assumptions that are never satisfied when you start with Maxwell's equations.
What are these assumptions?
 
  • #36
TrickyDicky said:
Yes, you have. It is in the mathematical assumptions of the inequalities, by stating that causality can only act in a linear way, i.e. the outcomes A and B are in linear combination with vectors a and b.
I am not talking about the original Bell inequality but instead about the very simple model to which I gave a link in my post #14. In that model there are no such assumptions.
 
  • #37
In mathematics there is no such thing as pure randomness. In mathematics we can only approximate such an idea, with some help from statistics. For example, we can write an algorithm for a random number generator. But we'll call the results of such an algorithm "pseudo-random" in order to maintain the idea of pure randomness (which otherwise escapes us).
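A classic example of such an algorithm is a linear congruential generator (a sketch; the multiplier and increment below are the widely used Numerical Recipes constants):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding floats in [0, 1).
    Fully deterministic: the same seed always reproduces the same
    sequence, which is why its output is only 'pseudo-random'."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

g1, g2 = lcg(42), lcg(42)
print([next(g1) for _ in range(3)])
print([next(g2) for _ in range(3)])  # identical: no "pure" randomness here
```

The output passes simple statistical tests of randomness, yet anyone holding the seed can predict every value, which is exactly the gap between "pseudo-random" and the pure randomness the post describes.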

We might say that pure randomness expresses something similar to what mathematics otherwise expresses by the concept of zero. Or even better (though not quite mathematical): the concept of null.

In terms of signal theory, pure randomness might be called noise. But "noise" is ambiguous, because noise (so called) will often have some identifiable cause. If we call it "noise", it is often to distinguish it from some other signal in which we're interested. Such "noise" would really be a signal: one that, if we understood its cause, we might filter out in order to concentrate on the other signal in which we're interested.

More interesting is the idea of a kind of fundamental noise. And by this we can suggest that its meaning might be defined negatively: as "not a signal" - not just a signal in which we're not interested, but not a signal at all. The absence of a signal. A zero signal, or better: a null signal.

C
 
  • #38
ScientificMind said:
This might be a silly question, but when people say that something on the quantum level is completely "random" (except for general probability), does that mean, according to theory at least, that if you were to go back in time and repeat an experiment exactly, the results could just as easily be different as the same? Or that the results of a given experiment are unpredictable beforehand, aside from the general likelihood of many different possible events, but the results would still be the same in the aforementioned scenario? Or, I suppose, do currently accepted theories not have an answer for this?

Let me know if this is a reasonable answer for a layperson:

There is an epistemological difference from classical physics: in quantum mechanics, there is no way for anyone to precisely predict the outcome of any particular measurement.

Normally when people say that a coin has a 50/50 chance of coming up heads, they take that to mean "for our purposes" or "as far as we can tell." The common intuition** is that a much smarter, better informed entity could know in advance how the coin or dice will fall. Whether we call them demons, psychics, or aliens, lots of discussions about determinism and randomness involve someone or something that does know, before a coin is flipped, whether it will land face up. Even we mortal humans can expect that, as physicists get better theories and tools, dice will seem less random, and supercomputers using superfast cameras will someday predict how a coin spinning in the air will land.

Quantum randomness does not have that caveat. When we know as much as there is to know, we will still end up saying 50/50 for "quantum coins." There is (some) disagreement about whether there is a specific predetermined outcome waiting for us to observe it, which is another way of discussing what would happen if we rewound the clock. But there is general agreement that no demon, alien, or supercomputer can know the outcome in advance, even if there is in principle a right answer. When the theory tells you that something is 50/50, e.g. spin along an axis orthogonal to your last measurement, the idea is that nobody could give you a better guess than that.***
** Folk theory, pre-1950 understanding of classical physics, etc.--meaning, in particular, ignoring chaos.

***The theory does include exact values, but only for things we can never directly measure. That is analogous to the fairness of a coin: you can't see that a given coin has an exactly 50/50 chance of coming up heads using a microscope; you can only test for it statistically by flipping the coin repeatedly.
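To illustrate that last point, here is a minimal classical sketch (it simulates only the Born-rule statistics with a coin flip, not any underlying mechanism; the function names are mine): the "exactly 50/50" prediction for, say, spin measured along an axis orthogonal to the preparation axis only shows up as a frequency that settles down over many repeated trials.

```python
import random

def measure_z_after_x_preparation():
    # A spin prepared along +x and measured along z gives, by the Born
    # rule, probability 1/2 for each outcome. We simulate only those
    # statistics with a classical coin flip.
    return random.choice([+1, -1])

def frequency_up(n):
    # Fraction of +1 ("up") outcomes over n repeated preparations.
    results = [measure_z_after_x_preparation() for _ in range(n)]
    return results.count(+1) / n

for n in (100, 10_000, 1_000_000):
    print(n, frequency_up(n))
```

A single run tells you nothing about fairness; only the trend of the frequency toward 0.5 as n grows does.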
 
  • #39
MacroMeasurer said:
But there is general agreement that no demon, alien, or supercomputer can know the outcome in advance, even if there is in principle a right answer. When the theory tells you that something is 50/50, e.g., spin along an axis orthogonal to your last measurement, the idea is that nobody could give you a better guess than that.

This part is not agreed upon. If nonlocal hidden variables exist, and a demon or alien has control over them, then quantum randomness is converted to classical randomness. At present, no human (we believe) has such control, so quantum randomness is believed, for practical purposes, to create "true" randomness that is useful for making a secure code.

Here is an example of using quantum mechanics to certify that the random number generator is "truly" random.

http://arxiv.org/abs/1009.1567
Secure device-independent quantum key distribution with causally independent measurement devices
Lluis Masanes, Stefano Pironio, Antonio Acin
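For a rough feel of what such certification rests on (this is just the CHSH quantity that device-independent protocols build on, not the protocol in the paper), one can check that the singlet-state correlation ##E(a,b) = -\cos(a-b)## exceeds the bound of 2 that any local hidden-variable model must obey:

```python
import math

def E(a, b):
    # Singlet-state correlation for spin measurements along angles a, b
    return -math.cos(a - b)

# Standard CHSH measurement settings (radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, above the local bound of 2
```

An observed violation like this is what certifies that the outcomes could not have been preprogrammed locally in advance.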
 
  • #40
Yes, the issue pivots on whether or not there are, regardless of whether we can know it, hidden variables (the "demons" above) that generate what we would otherwise call true randomness. In that case true randomness would be a fiction; it would instead be what we'd otherwise call "pseudo-random" (or classically random), in the sense that it would have some underlying structure or formula to which only the demon holds the keys.

But we can also entertain the equally valid idea of no such demon.

And we can formalise this non-demonic randomness to some extent, in the sense that we can characterise what its statistical properties would be like. For example, we can say that the average of uniformly distributed random numbers between -1 and 1 converges to exactly 0. Strictly that would require an infinite sequence, but we can assume such, i.e., we don't have to physically carry out the sum. We can play with the idea of the purely random and not find ourselves completely bereft of intellectual ground on which to express it.
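That statement can be made precise via the law of large numbers: the sample mean of uniform draws on (-1, 1) converges to the expected value, 0. A quick Monte Carlo sketch (the function name is mine):

```python
import random

def mean_uniform(n):
    # Sample mean of n draws uniform on (-1, 1); by the law of large
    # numbers this converges to the expected value, 0, as n grows.
    return sum(random.uniform(-1, 1) for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, mean_uniform(n))
```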

C
 
  • #41
TrickyDicky said:
the paper suggests a strong laser as a way to implement the experiment which would demand nonlinear Maxwell equations in inhomogeneous media
The laser doesn't need to be so strong that one would need these, just strong enough that the classical approximation is adequate. At that strength the Maxwell equations (which without qualification always mean the linear ones) are satisfied quite accurately in air, typically much more accurately than one can perform Bell-type experiments. And typical beam splitters do this very efficiently as modeled, too; otherwise it would hardly be possible to analyze optical quantum experiments.
 
  • #42
A. Neumaier said:
[..] a hidden-variable field theory is so nonlocal from the start that none of the assumptions used to justify Bell type inequalities are satisfied, hence the Bell inequalities cannot be derived.
"none of the assumptions" is pretty strong, only a single one suffices and I find it difficult to put my finger on that, so to say. ..
Could you please give an example of one assumption that clearly isn't valid for a hidden-variable field theory?
 
  • #43
harrylin said:
"none of the assumptions" is pretty strong, only a single one suffices and I find it difficult to put my finger on that, so to say. ..
Could you please give an example of one assumption that clearly isn't valid for a hidden-variable field theory?
I was exaggerating; I meant the assumption that a particle travels along exactly one path. This is clearly impossible for a solution of Maxwell's equation. A single solution can contain several rays whose neighborhoods have a significant energy density, and indeed, according to classical optics, a beam splitter produces such a solution.
 
  • #44
A. Neumaier said:
I misstated in #16 what I had meant to say; see the updated formulation there for the intended version. To derive Bell's inequality you need to make assumptions that are never satisfied when you start with Maxwell's equations. Thus Bell's inequality doesn't hold. But the quantum result is just based on properties of the Maxwell equation, hence the quantum field approach gives identical results with the experimental findings.

I'm not sure what you're saying here. It seems to me that the assumptions behind Bell's inequality can be formulated in a way that is independent of the distinctions between particles and fields.

Roughly speaking, let's assume SR (GR causes complications that seem irrelevant). Pick a rest frame. Pick a time interval ##\delta t##. Now, relative to that rest frame, partition space into cubes of size ##c\,\delta t##.

Let ##\mathcal{R}## be the set of all possible records, or histories, of a single cube during a single time interval. So classically, a record might consist of an identification of what particles were in the cube during that interval, what their positions and momenta were at times during the interval, what the values of various fields at different points within the cube were at times within that interval.

From classical probability and special relativity, we would expect that if ##\mathcal{R}## were a complete description of local conditions within a cube during an interval, then the behavior of the universe could be described by a grand transition function ##\mathcal{T} : \mathcal{R}^{27} \times \mathcal{R} \rightarrow [0,1]##. The meaning of this is that ##\mathcal{T}(\vec{r}, r')## is the probability that a cube will have record ##r'## for the next interval, given that it and its neighboring cubes have record ##\vec{r}## during the current interval. So this is just saying that the behavior of a cube during the next interval depends on the behavior of its neighboring cubes during this interval. Note, I'm allowing the behavior to be nondeterministic, but I'm requiring the probabilities to depend only on local conditions. Also, I'm being sloppy in talking about probabilities, since in general there could be (and would be) uncountably many possible records, so I should be talking about probability distributions instead of probabilities.

The two assumptions that (1) ##\mathcal{R}## is a complete description of the conditions within a cube during an interval, and (2) Einstein causality, imply that knowledge of conditions in distant cubes can give you no more information about future possibilities for one cube than knowing about conditions within neighboring cubes. Putting that more coherently: I pick a particular cube. If I know about conditions in that cube and all its neighboring cubes during one interval, then I can make a probabilistic prediction about conditions in that cube during the next interval. Those probabilistic predictions cannot be changed by more knowledge of conditions in non-neighboring cubes. This basically makes the universe into a gigantic 3D cellular automaton (except for the fact that the records are pulled from an uncountably infinite set, while the theory of cellular automata require the records to be taken from a finite set; I'm not sure how important that distinction is for the purposes of this discussion).
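As a toy illustration of such a locally stochastic update rule (1D rather than 3D, binary cell states, and an arbitrary majority-vote rule of my own choosing; nothing physical is claimed), each cell's next state is drawn from a distribution that depends only on its immediate neighborhood:

```python
import random

def step(cells, flip_prob=0.1):
    # One update of a 1D stochastic cellular automaton: each cell's next
    # state depends only on itself and its two nearest neighbours
    # (majority vote, flipped with a small probability). This is the 1D
    # analogue of the "grand transition function" restricted to local input.
    n = len(cells)
    nxt = []
    for i in range(n):
        neighbourhood = (cells[i - 1], cells[i], cells[(i + 1) % n])
        majority = 1 if sum(neighbourhood) >= 2 else 0
        flip = random.random() < flip_prob
        nxt.append(majority ^ flip)
    return nxt

state = [random.randint(0, 1) for _ in range(20)]
for _ in range(5):
    state = step(state)
print(state)
```

The point of the sketch is purely structural: the probability distribution over a cell's next state is a function of the neighborhood alone, so distant cells can add no further information.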

Completeness is an important assumption here. Obviously, if my records are incomplete---that is, there are microscopic details that I failed to take into account--then it is possible that additional correlations between distant cubes could be implemented by those microscopic details, which would give the erroneous appearance of nonlocality. If I took those microscopic details into account, then local information would be sufficient to predict the future of a cube.

Bell's theorem applied to the EPR experiment shows that it can't be described this way, and that it can't be fixed by assuming microscopic details that we failed to take into account. In the EPR experiment, no matter how fine-grained your descriptions of the cubes, information about conditions in distant cubes can tell you something about the future behavior of this cube.
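The local bound itself is easy to verify for the CHSH form of Bell's inequality, independently of the cube model: over all deterministic local assignments of outcomes ±1 (shared randomness is just a probabilistic mixture of these, so it cannot raise the maximum), the CHSH combination never exceeds 2, while quantum mechanics predicts up to ##2\sqrt{2}##.

```python
from itertools import product

# Exhaustive check over deterministic local strategies: Alice fixes
# outcomes (A, A2) for her two settings, Bob fixes (B, B2). The CHSH
# combination is bounded by 2 for every such strategy.
best = max(A * B - A * B2 + A2 * B + A2 * B2
           for A, A2, B, B2 in product((-1, 1), repeat=4))
print(best)  # 2
```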

But getting back to your point about Maxwell's equations, it seems to me that there is nothing about my "cellular automaton" model of the universe that would prevent the use of the electromagnetic field as part of the record for a cube.
 
Last edited:
  • #45
stevendaryl said:
getting back to your point about Maxwell's equations, it seems to me that there is nothing about my "cellular automaton" model of the universe that would prevent the use of the electromagnetic field as part of the record for a cube.
A single solution of the classical Maxwell equations may after some time (e.g., after passing a beam splitter) have significant energy in many of your cells simultaneously, even if it is localized at time ##t=0## in a single one. This is contrary to your assumed particle behavior: you must assume that upon a split the particle remains in a single cell, though only with a certain probability, while the wave remains a single object divided into two beams.
 
  • #46
A. Neumaier said:
I meant the assumption that a particle travels along exactly one path. This is clearly impossible for a solution of Maxwell's equation. A single solution can contain several rays whose neighborhoods have a significant energy density, and indeed, according to classical optics, a beam splitter produces such a solution.
Multiple paths from the source does not add anything new to the argument. You can check this using the model I gave; this simple model does not use any specific assumptions:
https://www.physicsforums.com/showthread.php?p=2817138#post2817138
If you say that you can get around action at a distance please point out how one should modify this model to get expected correlations without action at a distance.
 
  • #48
A. Neumaier said:
The specific assumption the setting you linked to assumes is locality. But Maxwell's equations are not local in Bell's (or your) sense. They are exactly as nonlocal as quantum mechanics itself.
You mean that two measurements happen at the same place? Basically that distance is an illusion? And you derived that philosophical crap from Maxwell's equations?
I probably misunderstood you. Right?
 
  • #49
zonde said:
You mean that two measurements happen at the same place? Basically that distance is an illusion? And you derived that philosophical crap from Maxwell's equations?
Where did you get all that? Do you understand how a simple plane wave doesn't fulfill locality as defined in the model you linked (and that you claimed made no assumptions)?
 
  • #50
TrickyDicky said:
Do you understand how a simple plane wave doesn't fulfill locality as defined in the model you linked
No, I do not understand. This is not about simply correlated outcomes; it's about changes caused over a distance. From that post: "In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone."
Pay attention to the word "changes". If by rotating her detector Miss A can cause changes in the coded message that comes out of Mr B's detector, then it's not a simple correlation between two places but clearly action at a distance. And I do not understand how you can get "action at a distance" type nonlocality out of a simple plane wave (not given any time for it to propagate to the distant place).
 
