Quantum mechanics is random in nature?

  • #101
I'm glad you made that point, as the physical laws are in fact also uncaused or truly unexplainable phenomena... the speed of light, for instance. Another argument for God, in this light (no pun intended), would be the "fine-tuned" or "Goldilocks" character of many universal constants and phenomena.
 
  • #102
DrChinese said:
Clearly, there are tremendous differences between the classical examples you give and the quantum ones. Classical systems do not feature non-commuting observables. Non-commuting observables not only have specific limits in their precision, those limits can be seen in experiments on entangled pairs. So if you don't see the conceptual difference between these, you need to consider more experiments.
This is an example of what I consider the blurring of lines between theory and reality. There is no question that classical and quantum theories are different, as you and David Lewis (post #96) point out. The experiments on entangled pairs show that classical determinism (= realism) is wrong (assuming locality). But in reality, if I flip a coin in a wind tunnel I'll get different, random-looking results. The classicist says the initial conditions changed, but is incapable of measuring or controlling them.

Delta Kilo says that "in reality" one can't tell the difference between quantum randomness and the hypothetical classical (lack of knowledge) randomness.
 
  • #103
DrChinese said:
Clearly, there are tremendous differences between the classical examples you give and the quantum ones.
Of course they are hugely different. But they are both examples of spontaneous symmetry breaking, and I was only referring to the conceptual source of randomness in both cases. I just don't see the need to look any further than the unknown state of the environment. So the fact that some measurements are inherently random does not surprise me at all. It is actually the other way around: it is surprising that some measurements are less random than they should be according to the classical view.
 
  • #104
mikeyork said:
Vanhees, you say "An observable can never have a determined value that's in the continuous part of the corresponding operator." in response to me. But I said deterministic (meaning in a definite eigenstate), not determined (i.e., observed).
Quantum theory is not deterministic. Some observables may be determined by preparing the system in a corresponding state. This is possible only for true eigenvalues of the self-adjoint operator, i.e., eigenvalues for which normalizable eigenvectors exist; these eigenvectors are in the discrete part of the spectrum.
 
  • #105
drschools said:
I'm glad you made that point as the physical laws are in fact also uncaused or truly unexplainable phenomenon... the speed of light for instance .

Hmmmm.

See the following:
http://www2.physics.umd.edu/~yakovenk/teaching/Lorentz.pdf

My view is symmetry. It's almost, but not quite, magic.

If this is your first exposure I highly recommend Landau:
https://www.amazon.com/dp/0750628960/?tag=pfamazon01-20
'If physicists could weep, they would weep over this book. The book is devastatingly brief whilst deriving, in its few pages, all the great results of classical mechanics. Results that in other books take up many more pages. I first came across Landau's mechanics many years ago as a brash undergrad. My prof at the time had given me this book but warned me that it's the kind of book that ages like wine. I've read this book several times since and I have found that indeed, each time is more rewarding than the last. The reason for the brevity is that, as pointed out by previous reviewers, Landau derives mechanics from symmetry. Historically, it was long after the main bulk of mechanics was developed that Emmy Noether proved that symmetries underlie every important quantity in physics. So instead of starting from concrete mechanical case-studies and generalising to the formal machinery of the Hamilton equations, Landau starts out from the most generic symmetry and derives the mechanics. The second law of mechanics, for example, is derived as a consequence of the uniqueness of trajectories in the Lagrangian. For some, this may seem too "mathematical" but in reality, it is a sign of sophistication in physics if one can identify the underlying symmetries in a mechanical system. Thus this book represents the height of theoretical sophistication in that symmetries are used to derive so many physical results.'

Magic? No; like I said, it seems like magic but isn't. However, sorting out the real physical assumptions is both rewarding and illuminating.

Start a new thread if interested.

Thanks
Bill
 
  • #106
Quantum theory is not deterministic. Some observables may be determined by preparing the system in a corresponding state.

You should know that a state vector that is a superposition in one basis can be an eigenstate in another. In the case of momentum and position the relationship between bases is a Fourier transform.
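A minimal numerical sketch of this point, using a spin-1/2 system in place of position/momentum (all names here are illustrative, not from any library): the state ##|+\rangle## is an equal superposition in the Pauli-##Z## eigenbasis, yet an exact eigenstate of Pauli-##X##.

```python
import numpy as np

# Pauli matrices in the Z eigenbasis
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# |+> = (|0> + |1>) / sqrt(2): an equal superposition in the Z basis
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Variance of Z in |+>: maximal (the outcome is a fair +/-1 coin) ...
var_Z = plus @ Z @ Z @ plus - (plus @ Z @ plus) ** 2
# ... yet |+> is an exact eigenstate of X (eigenvalue +1), so variance 0
var_X = plus @ X @ X @ plus - (plus @ X @ plus) ** 2

print(var_Z)  # ~ 1 (maximally random in this basis)
print(var_X)  # ~ 0 (deterministic outcome in the other basis)
```

The same state is "random" or "deterministic" depending only on which observable is measured.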
 
  • #107
vanhees71 said:
Quantum theory is not determinstic. Some observables may be determined by preparation the system in a corresponding state. This is possible only for true eigenvalues of the self-adjoint operator, i.e., such eigenvalues for which normalizable eigenvectors exist, and these eigenvectors are in the discrete part of the spectrum.
mikeyork said:
You should know that a state vector that is a superposition in one basis can be an eigenstate in another
It is a <understatement>safe bet</understatement> that vanhees knows this. He's stressing the "discrete part of the spectrum" because applying the same principle to the continuous spectrum, as in
In the case of momentum and position the relationship between bases is a Fourier transform.
is a bit trickier because the "eigenstates" are not physically realizable. First-year QM texts oversimplify the mathematical subtleties here, but if you google for "rigged Hilbert space" you'll get more of the story.
 
  • #108

Quantum theory is not deterministic. Some observables may be determined by preparing the system in a corresponding state. This is possible only for true eigenvalues of the self-adjoint operator, i.e., eigenvalues for which normalizable eigenvectors exist; these eigenvectors are in the discrete part of the spectrum.

You should know that a state vector that is a superposition in one basis can be an eigenstate in another
It is a <understatement>safe bet</understatement> that vanhees knows this. He's stressing the "discrete part of the spectrum" because applying the same principle to the continuous spectrum, as in
In the case of momentum and position the relationship between bases is a Fourier transform.
is a bit trickier because the "eigenstates" are not physically realizable.

Ok. I get that. But when you write "physically realizable", are you not confounding an underlying fundamental reality with observability? That is, the fundamental reality may be a definite eigenstate, but the information of an "observer" (either in preparing or detecting a state) is realizable only to a specific precision.
 
  • #109
Delta Kilo said:
it is surprising that some measurements are less random than they should have been according to classical view.
Could you give a simple example of what you are talking about here?
 
  • #110
mikeyork said:
Ok. I get that. But when you write "physically realizable" are you not confounding an underlying fundamental reality with observability? That is,the fundamental reality may be a definite eigenstate, but the information of an "observer" (either in preparing or detecting a state) is realizable only to a specific precision.
Within quantum theory, generalized eigenstates, which are not in the Hilbert space (but in the dual of the nuclear space, where the unbounded self-adjoint operators are defined), do not represent physical states. This is immediately clear from the usual heuristic point of view as well, since such states are not normalizable. Take the momentum eigenstates. In the position representation they are the plane waves,
$$u_{\vec{p}}(\vec{x})=\frac{1}{(2 \pi)^{3/2}} \exp(\mathrm{i} \vec{x} \cdot \vec{p}).$$
They are obviously not normalizable since the integral over their modulus squared is infinite. They are rather "normalized to a ##\delta## distribution":
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} u_{\vec{p}'}^*(\vec{x}) u_{\vec{p}}(\vec{x})=\delta^{(3)}(\vec{p}-\vec{p}'),$$
which clearly underlines the fact that these generalized eigenfunctions are to be interpreted as distributions (in the sense of generalized functions) rather than functions.
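A quick numerical sketch of the non-normalizability, restricting to one dimension and a finite box (the `overlap` helper is purely illustrative): the "norm" of a plane wave grows without bound as the box grows, while the overlap of distinct momenta stays bounded.

```python
import numpy as np

def overlap(p1, p2, L, n=200_000):
    """1D analogue of the plane-wave overlap, over a finite box [-L/2, L/2]."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    integrand = np.exp(1j * (p2 - p1) * x) / (2 * np.pi)
    return integrand.sum() * dx  # simple Riemann sum

# Equal momenta: the "norm" grows linearly with the box size L, i.e. diverges
print(abs(overlap(1.0, 1.0, 100)))   # ~ 100 / (2 pi) ~ 15.9
print(abs(overlap(1.0, 1.0, 1000)))  # ~ 1000 / (2 pi) ~ 159
# Distinct momenta: the oscillatory integral stays bounded as L grows
print(abs(overlap(1.0, 1.2, 1000)))  # stays O(1)
```

In the limit ##L \to \infty## this is exactly the ##\delta##-distribution behavior: infinite on the diagonal, finite (and, after smearing, vanishing) off it.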
 
  • #111
If my understanding of the issue is up to date, this basically comes down to proving a negative. If you could find a (mathematically) deterministic framework that predicted QM experiments, you could rule out randomness (for which I'm assuming non-deterministic is the operating definition in this context).

Hidden variables were one attempt at demonstrating determinism (and seem to have failed); I don't know whether that rules out all possibility of determinism; my intuition is to doubt that it does.
 
  • #112
Things that are more or less probable, such as the decay of a fissile atom around its measured half-life, are not the same as 'random';
there is in fact a non-randomness that makes the half-life what it is measured to be.
If events were completely random then no meaningful measurement of anything would be possible.
 
  • #113
I prefer the term "probabilistic" to random.

The probabilities are well defined. The outcome of a single event is not.
 
  • #115
Pythagorean said:
If you could find a (mathematically) deterministic framework that predicted QM experiments, you could rule out randomness (for which I'm assuming non-determinisic is the operating definition in this context).

Its more subtle than that.

Bohmian Mechanics (BM) is deterministic. Randomness comes from lack of knowledge - not because its inherently random.

One of the big advantages of studying interpretations is you learn exactly what the formalism says which often is not what is at first thought.

Again it must be emphasized no interpretation is better than any other. This does not mean I am a proponent of BM (I am not - my view is pretty much the same as Vanhees - but that means 4/5ths of bugger all ie precisely nothing) it simply means what appeals to my sense of 'beauty'.

Thanks
Bill
 
  • #116
They are obviously not normalizable since the integral over their modulus squared is infinite. They are rather "normalized to a ##\delta## distribution",
which clearly underlines the fact that these generalized eigenfunctions are to be interpreted as distributions (in the sense of generalized functions) rather than functions.

A limiting distribution with a unique value for which it is non-zero and a vanishing standard deviation. This would not normally be considered "random" although I see your mathematical point. Why do you consider it important to a physicist (rather than a mathematician)?

Also, my original point regarding superpositions being eigenstates in another basis still stands for discrete variables even if you consider the momentum/position example to be a bad one.
 
  • #117
mikeyork said:
Why do you consider it important to a physicist (rather than a mathematician)?

I am pretty sure Vanhees doesn't.

Rigged Hilbert Spaces are just as important to applied mathematicians as to physicists (without delving into the difference; that requires another thread), e.g.:
http://society.math.ntu.edu.tw/~journal/tjm/V7N4/0312_2.pdf

And that is just applied math; in pure math it has involved some of the greatest mathematicians of all time, e.g. Grothendieck.

Thanks
Bill
 
  • #120
Pythagorean said:
If my understanding of the issue is up to date, this basically comes down to proving a negative. If you could find a (mathematically) deterministic framework that predicted QM experiments, you could rule out randomness (for which I'm assuming non-deterministic is the operating definition in this context).

Hidden variables were one attempt at demonstrating determinism (and seem to have failed); I don't know whether that rules out all possibility of determinism; my intuition is to doubt that it does.
Of course, it doesn't rule out all deterministic models, but it does rule out all the ones that are local in the interactions (in the sense of relativistic QFT). Since there is today no consistent non-local relativistic quantum theory, and also no convincing no-go theorem, it's totally open whether one might one day find a non-local deterministic theory in accordance with all the observations currently described by QT.
 
  • #121
I repeat: "random" is not mathematically defined. People are bandying the term about in different ways. Is the outcome of a coin flip in a wind tunnel random?
If I hand you an extremely long sequence of 0s and 1s, how do you tell whether it is random? Is Champernowne's sequence random?
Measurements in QM are random variables (google it). The variance of a measurement is 0 iff the state being measured is an eigenvector of the measurement operator.
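A small sketch of the "variance 0 iff eigenvector" statement for a qubit measured with Pauli ##Z##, computed directly from the Born rule (the function name is just illustrative):

```python
import numpy as np

def z_measurement_stats(psi):
    """Born-rule mean and variance of a Pauli-Z measurement on a qubit state."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)     # normalize the state
    p_plus = abs(psi[0]) ** 2           # P(outcome +1)
    mean = p_plus - (1 - p_plus)        # E[Z]
    var = 1.0 - mean ** 2               # outcomes are +/-1, so E[Z^2] = 1
    return mean, var

print(z_measurement_stats([1, 0]))           # Z eigenstate: variance 0
print(z_measurement_stats([1, 1]))           # equal superposition: variance 1
print(z_measurement_stats([np.sqrt(3), 1]))  # mean 0.5, variance 0.75
```

The variance vanishes exactly when ##|\psi\rangle## is ##|0\rangle## or ##|1\rangle## (up to phase), i.e. an eigenvector of ##Z##; for anything else the measurement is a nondegenerate random variable.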
 
  • #122
Well, not necessarily. Take the energy of an excited (nonrelativistically approximated) hydrogen atom, whose level is ##n^2##-fold degenerate. So you have, for the general energy-dispersion-free state,
$$\hat{\rho}_n=\sum_{l,m} P_{lm} |nlm \rangle \langle nlm|.$$
For such a state the energy of the atom is determined to be ##E_n##, and the energy's standard deviation is ##0##. Note that ##E_n## is a true eigenvalue of the Hamiltonian and thus it can be determined, but the state is not necessarily a pure state represented by an eigenstate.

Anyway, this is not the main point of your criticism but the question about randomness. Of course, you cannot say whether a given sequence is "random". All "random numbers" produced by computers are only pseudo-random numbers since they are somehow calculated with an algorithm that produces sequences which look random according to some given probability distribution.

To our understanding the probabilities in quantum theory are truly "random" in the sense that the corresponding values of observables, for which the prepared state is not dispersion free, are "really" undetermined and "irreducibly" random with the probabilities for a specific outcome given according to Born's rule. Of course, also this you can only verify on sufficiently large ensembles with a given significance (say 5 standard deviations for a discovery in the HEP community).

The same is true for the "randomness" in classical statistical physics. Assuming that flipping a coin in a wind tunnel is in principle deterministic, because the motion of the coin is accurately described by deterministic laws (the mechanics of a rigid body and aerodynamics, including their mutual interaction), then if the motion of the entire system were completely known (exact knowledge of the initial state is enough), you would be able to predict the outcome of the experiment. Nevertheless, we cannot control the state of the entire system so precisely that we can predict with certainty the outcome of a specific coin flip in the wind tunnel, and thus we get a "random" sequence due to the uncertainty in setting up the initial conditions of macroscopic systems. In my view there is not so much difference between the "irreducible randomness" of quantum mechanics and the "classical randomness" due to the uncontrollability of the initial states of macroscopically deterministic systems.
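The energy-dispersion-free but mixed state from the hydrogen example above can be checked numerically with a toy Hamiltonian (the numbers are arbitrary, chosen only to exhibit a degenerate level):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian: a 3-fold degenerate level E_n = -1.0 plus one other level
E_n = -1.0
H = np.diag([E_n, E_n, E_n, 2.0])

# Mixed state supported only on the degenerate subspace (random weights P_lm)
w = rng.random(3)
w /= w.sum()
rho = np.diag([w[0], w[1], w[2], 0.0])

mean_E = np.trace(rho @ H).real
var_E = np.trace(rho @ H @ H).real - mean_E ** 2
purity = np.trace(rho @ rho).real

print(mean_E)  # -1.0: the energy is sharply determined ...
print(var_E)   # 0.0: ... with zero dispersion ...
print(purity)  # < 1: ... yet the state is mixed, not a pure eigenstate
```

Any mixture over the degenerate subspace gives the same sharp energy ##E_n## even though the state is not pure.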
 
  • #123
vanhees71 said:
Well, not necessarily. Take the energy of an excited (nonrelativistically approximated) hydrogen atom, which is ##n^2##-fold degenerate. So you have for the general energy-dispersion free state
$$\hat{\rho}_n=\sum_{l,m} P_{lm} |nlm \rangle \langle nlm|.$$
For such a state the energy of the atom is determined to be ##E_n##, and the energy's standard deviation is ##0##. Note that ##E_n## is a true eigenvalue of the Hamiltonian and thus it can be determined, but the state is not necessarily a pure state represented by an eigenstate.

Anyway, this is not the main point of your criticism but the question about randomness. Of course, you cannot say whether a given sequence is "random". All "random numbers" produced by computers are only pseudo-random numbers since they are somehow calculated with an algorithm that produces sequences which look random according to some given probability distribution.

To our understanding the probabilities in quantum theory are truly "random" in the sense that the corresponding values of observables, for which the prepared state is not dispersion free, are "really" undetermined and "irreducibly" random with the probabilities for a specific outcome given according to Born's rule. Of course, also this you can only verify on sufficiently large ensembles with a given significance (say 5 standard deviations for a discovery in the HEP community).

The same is true for the "randomness" in classical statistical physics. Assuming that flipping a coin in a wind tunnel is in principle deterministic, because the motion of the coin is accurately described by deterministic laws (the mechanics of a rigid body and aerodynamics, including their mutual interaction), then if the motion of the entire system were completely known (exact knowledge of the initial state is enough), you would be able to predict the outcome of the experiment. Nevertheless, we cannot control the state of the entire system so precisely that we can predict with certainty the outcome of a specific coin flip in the wind tunnel, and thus we get a "random" sequence due to the uncertainty in setting up the initial conditions of macroscopic systems. In my view there is not so much difference between the "irreducible randomness" of quantum mechanics and the "classical randomness" due to the uncontrollability of the initial states of macroscopically deterministic systems.
Thanks for your response.
In your 1st paragraph I was indeed referring to pure states, but that is not necessary, since your density operator ##\hat{\rho}_n## is an "eigenvector" of the Hamiltonian.

In paragraph 2 I'm glad to see you put quotation marks around random.

In paragraph 3, the statement that probabilities are random is nonsense. The random variable W = 1 with probability 1/2 and -1 with probability 1/2 is exactly the same as the random variable one gets by measuring √½|0⟩ + √½|1⟩ with the Pauli operator Z. There is nothing random (whatever that means) about the probability 1/2. Now if we leave theory and step into a quantum-optics lab and measure photons polarized at 45º with a polarization analyzer set at 0º, then we'll get a sequence of 1s and -1s that will look like the flips of a fair coin with 1 on one side and -1 on the other. Running statistical tests on the sequence will seem to indicate an i.i.d. sequence of Ws, justifying once again the validity of the theory of QM. The word "random" need not appear anywhere ("random variable" should be thought of as a single term: a function from a probability space to ##\mathbb{R}##). If you wish to use it, be my guest, but realize that it is an undefined, intuitive, vague, and often misleading concept.

In paragraph 4 you refer to "irreducible randomness" and "classical randomness". The latter usually means "randomness" due to "lack of knowledge" as you say (your age is random to me, but not to you). Would you say that "irreducible randomness" is "randomness" with no cause, a disproof of an omniscient God, or what? What if you knew the initial conditions of the big bang?
I like your use of the word "Assuming" in the 2nd sentence.
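A sketch of the point that W and the quantum measurement define the same random variable: one can sample both distributions and compare (simulation only; an ideal analyzer is assumed).

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# The random variable W: +1 or -1, each with probability 1/2
coin = rng.choice([1, -1], size=N)

# Born rule for measuring sqrt(1/2)|0> + sqrt(1/2)|1> with Pauli Z:
# P(+1) = |sqrt(1/2)|^2 = 1/2 = P(-1) -- exactly the distribution of W
psi = np.array([np.sqrt(0.5), np.sqrt(0.5)])
p_plus = abs(psi[0]) ** 2
quantum = np.where(rng.random(N) < p_plus, 1, -1)

# Both sample means drift to 0 at the usual ~ 1/sqrt(N) rate; no statistical
# test on the sequences alone can tell their sources apart.
print(coin.mean(), quantum.mean())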
 
  • #124
In my naive opinion, Quantum Mechanics says that randomness is intrinsic to nature in the small. First, it says that quantum-mechanical measurements of states are random variables: a repeated measurement of exactly the same state will generally not give the same answer but will have a probability distribution. Second, it says that states evolve in time according to a stochastic process.

This randomness of measurement is not because of slight uncertainties in initial conditions. Quantum mechanics says that exactly the same state when measured will produce a random variable.

Whether this theory is true or not is a metaphysical question, in my opinion. The theory works famously and will be questioned when there are experiments that it cannot explain. While I have no idea how Bohmian mechanics works, it seems that it is a different theory which may or may not explain things better than Quantum Mechanics.

To me the question of randomness is not the core question. Rather it is whether one can describe Nature causally and whether this causal explanation gives a clue to the true workings of the world. But the idea of causality may be different than the ideas that arose in Classical Physics.
 
  • #125
The usual concept of an experiment to test a probabilistic theory is to (repeatedly) make preparations and then observe an outcome, so there is a concept of time involved - at least to the extent that the concept of time involves a "before" and "after". We do the preparations before observing the result.

I'm curious whether the theories of relativity introduce any complications into this picture. If observer A thinks he made the preparations before the outcome happened, does observer B always agree ?
 
  • #126
Zafa Pi said:
In paragraph 4 you refer to "irreducible randomness" and "classical randomness". The latter usually means "randomness" due to "lack of knowledge" as you say (your age is random to me, but not to you). Would you say that "irreducible randomness" is "randomness" with no cause, a disproof of an omniscient God, or what? What if you knew the initial conditions of the big bang?
I like your use of the word "Assuming" in the 2nd sentence.
I don't discuss semantics. I call things "random" in the usual common sense, as it is understood by everybody.

I also argue, within quantum theory, that the "randomness" of measurement outcomes is "irreducible" in the usual sense in which quantum theory is understood in the minimal statistical interpretation, which is the only interpretation one needs in physics and which does not contradict the other fundamentals of physics, particularly the relativistic spacetime structure and its implied causality structure. Knowledge of the exact initial state of the entire universe is a contradiction in itself, since to the best of our knowledge only a tiny part of the universe is observable to us even in principle. Also, the quantum theory of gravity is not yet understood. So I won't pursue this admittedly weird philosophical discussion, since it's hopeless to get a clear idea of what we are talking about when we discuss things that aren't even understood on a scientific level. Then an understanding in a philosophical sense is impossible and also completely useless.

Knowing the "initial state" of a quantum system, i.e., preparing the quantum system in this state at a time ##t##, does not imply that its observables are all determined. QT tells you that this is impossible. Complete knowledge of the state, i.e., preparation of the system in a pure state, implies that you know the statistical properties of the outcomes of precise measurements of its observables, no more, no less. So what I mean by "irreducible randomness" according to QT is exactly this notion of state within QT: the system's observables really have no determined values, but with a certain probability you find certain possible values (in the spectrum of the representing self-adjoint operator) when measuring them. This is in accordance with all observations of nature so far, and that's why we take QT as the best theory describing nature that we have today.
 
  • #127
Zafa Pi said:
... you refer to "irreducible randomness" and "classical randomness". The latter usually means "randomness" due to "lack of knowledge" as you say (your age is random to me, but not to you). Would you say that "irreducible randomness" is "randomness" with no cause, a disproof of an omniscient God, or what? What if you knew the initial conditions of the big bang? ...

The question of whether there is "irreducible randomness" in QM - as I think has been pointed out already - is one of interpretation. There are nonlocal interpretations - such as Bohmian Mechanics - that assert that suitable knowledge of initial conditions (the big bang in your example) would allow one to predict the future state of any observables to any precision. So that means quantum randomness is due to lack of knowledge of initial conditions, much like the penny in the wind tunnel.

But most would say that there is no amount of knowledge of initial conditions that would allow you to know the value of non-commuting observables. As far as anyone knows, it is randomness without a cause.

So it seems as if the answer to these questions is a matter of personal choice or preference. If you then tie defining "true randomness" to the situation, then you could equate that to the "uncaused" interpretation. Then you are left with answering whether randomness due to lack of initial condition is "true randomness" - or not.
 
  • #128
DrChinese said:
So it seems as if the answer to these questions is a matter of personal choice or preference. If you then tie defining "true randomness" to the situation, then you could equate that to the "uncaused" interpretation. Then you are left with answering whether randomness due to lack of initial condition is "true randomness" - or not.

Are you saying that even though one can model Quantum Mechanical systems deterministically, the uncertainty principle prevents any measurement that would allow predicting the future of a path?
 
  • #129
I think part of the problem is that the entire verbal language of QM was historically set up to try to sweep the elephant in the room under the carpet. I am talking about the measuring apparatus. Every time an "observable" is mentioned, there must be a corresponding measuring apparatus involved; otherwise the observable is not defined. Saying "the particle does not have a defined position between measurements" basically amounts to "there is no outcome reported by the measuring apparatus when no measurement has taken place": a tautology.

It is nothing short of a miracle that the entire effect of all these complicated measuring apparatuses (apparatii?) can be described by a few simple operators. But, as far as I understand it, the operator is not "hard-coded" into the system. Instead it emerges statistically from the complex interaction of countless internal states, much like the normal distribution comes out of nowhere in the central limit theorem.

Now, the initial state of the measuring apparatus is necessarily unknown. I would call it random, and I don't care whether it is "true" randomness or only "apparent" randomness due to our lack of knowledge; the result is the same FAPP. Funnily enough, as soon as we try to control this initial state, the device ceases to be a measuring apparatus and becomes yet another quantum system, which then starts behaving weirdly and requires yet another measuring device to tell us what is going on with the first one (micromirrors getting momentum-entangled with photons, fullerenes going through both slits, etc.). So the randomness is unavoidable; it is inherent in the nature of a measuring apparatus.

What I'm trying to say is that there is enough randomness in our measurement devices, and in the environment in general, to explain the randomness of quantum measurement results. And, by invoking Occam's razor, there is no need to postulate inherent randomness "built in" to the foundations of QM. It should just come out by itself from unitary evolution coupled with the assumption of an environment having a large number of interacting degrees of freedom in an unknown initial state; in other words, from decoherence. Basically, the Born rule should be derived rather than postulated.
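The central-limit analogy above can be illustrated numerically (only as a loose analogy, not a derivation of the Born rule; the microstate model here is entirely made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "pointer reading" aggregates M internal degrees of freedom whose
# microstates are unknown (modelled here, arbitrarily, as uniform on [-1, 1]).
M, trials = 256, 10_000
aggregate = rng.uniform(-1, 1, size=(trials, M)).sum(axis=1) / np.sqrt(M)

# Central limit theorem: the aggregate is approximately Gaussian with
# variance 1/3 (the variance of a single uniform microstate), regardless
# of the microstates' detailed distribution.
print(aggregate.mean())  # ~ 0
print(aggregate.var())   # ~ 1/3
```

The simple statistical law at the pointer level emerges from many uncontrolled internal states; whether the Born rule emerges analogously from decoherence is, of course, exactly the open question being discussed.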
 
  • #130
lavinia said:
Are you saying that even though one can model Quantum Mechanical systems deterministically, the uncertainty principle prevents any measurement that would allow predicting the future of a path?

I'm not a Bohmian, so I don't really accept that interpretation. Channeling others who accept Bohmian Mechanics (and I beg forgiveness if I explain poorly):

In principle, it would be possible to simultaneously predict the value of 2 non-commuting observables. However, they would be quick to say that practical considerations prevent one from placing an actual system in a state in which that could be done. As a result, the uncertainty principle emerges and there is no practical difference between theirs and non-deterministic interpretations.
 
  • #131
Delta Kilo said:
...Now the initial state of measuring apparatus is necessarily unknown. I would call it random, and I don't care if it is "true" randomness or only "apparent" due to our lack of knowledge, the result is the same FAPP. Funnily enough, as soon as we try to control this initial state, the device ceases to be measuring apparatus and becomes yet another quantum system which then starts behaving weirdly and requires yet another measuring device to tell us what is going on with the first one (micromirrors getting momentum-entangled with photons, fullerens going through both slits etc). So the randomness is unavoidable, it is inherent in the nature of a measuring apparatus.

What I'm trying to say, there is enough randomness in our measurement devices and in the environment in general to explain randomness of quantum measurement results.

Ah, but your premise is demonstrably false! :smile:

You cannot place 2 different quantum systems in identical states such that non-commuting observables will have identical outcomes. But you can place 2 different observers in an ("unknown") state in which they WILL yield (see) the same outcome to identical quantum measurements. Let's get specific:

We have a system consisting of 2 separated but entangled photons such that their polarization is unknown but identical (Type I PDC, for example). Observing the photons' individual polarizations by the 2 *different* observers, at the same angle, always yields the same results! Therefore none, and I mean none, of the outcome can be attributed to the state of the observer unless there is something mysterious being communicated from observer to observer. Obviously, it is not the interaction between the observed and the observer (as you hypothesize), else the results would differ in some trials.

If the observers contributed to the uncertainty (to the randomness), then that would show up in experiments such as the above. It doesn't. Put another way: your premise seems superficially reasonable, but fails when you look closer. Randomness is not due to "noise" (or anything like that) that is part of, or local to, the observer.
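This can be sketched with a toy simulation (a hypothetical shared-polarization model; it covers only the equal-angle case discussed here, and per Bell no such local model survives unequal angles): perfect agreement needs the shared source value, and any observer-local randomness immediately shows up as disagreement.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000

# Hypothetical shared value: each pair carries one polarization angle lambda
lam = rng.uniform(0, np.pi, N)
theta = 0.0  # both analyzers at the SAME angle

# Both observers evaluate the same function of the SAME shared value,
# so they agree on every single trial, with no communication needed.
alice = np.sign(np.cos(2 * (lam - theta)))
bob = np.sign(np.cos(2 * (lam - theta)))
print((alice == bob).mean())  # 1.0: perfect correlation

# If observer-local randomness contributed to the outcomes (say 5% of
# trials flipped independently on each side), perfect agreement is lost:
alice_noisy = alice * rng.choice([1, -1], N, p=[0.95, 0.05])
bob_noisy = bob * rng.choice([1, -1], N, p=[0.95, 0.05])
print((alice_noisy == bob_noisy).mean())  # ~ 0.905, no longer 1
```

Observed perfect correlations therefore bound any observer-local contribution to the randomness at essentially zero.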
 
  • #132
Delta Kilo said:
What I'm trying to say, there is enough randomness in our measurement devices and in the environment in general to explain randomness of quantum measurement results.

At the risk of making a statement I have no real qualifications to be making, I don't agree that this can explain Bell's experiment.
 
  • #133
DrChinese said:
I'm not a Bohmian, so I don't really accept that interpretation. Channeling others who accept Bohmian Mechanics (and I beg forgiveness if I explain poorly):

In principle, it would be possible to simultaneously predict the value of 2 non-commuting observables. However, they would be quick to say that practical considerations prevent one from placing an actual system in a state in which that could be done. As a result, the uncertainty principle emerges and there is no practical difference between theirs and non-deterministic interpretations.

OK. So just to understand better: would the practical considerations be similar to Heisenberg's argument that any attempt to determine a particle's position would perturb its momentum? If so, it would seem that theoretically one could never measure both.
 
  • #134
lavinia said:
OK. So just to understand better: would the practical considerations be similar to Heisenberg's argument that any attempt to determine a particle's position would perturb its momentum? If so, it would seem that theoretically one could never measure both.

I don't think I can answer this to someone like Demystifier's satisfaction. Hopefully he or someone more qualified than I can answer this.

But I think the concept is: IF you knew the starting p and q of every particle in a closed system (i.e. the entire universe), THEN you could predict future p and q for all with no uncertainty. It is then the inability to know all starting p's and q's which ultimately leads to the uncertainty relations in the Bohmian view.
 
  • #135
rootone said:
Things that are more or less probable, such as the decay of a fissile atom around its measured half life, are not the same as 'random'

In principle, no amount of knowledge, measurements or computational resources can predict with certainty whether or when a fission will occur.
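The statistical side of that claim is easy to sketch (a Python toy; `sample_decay_times` is a made-up name): each decay time is a fresh draw from an exponential distribution, so ensemble quantities like the half-life are sharply predictable even though no single decay is.

```python
import math
import random

def sample_decay_times(half_life, n, rng):
    """Draw n individual decay times from the exponential distribution
    with the given half-life. Only the distribution is fixed; each
    individual draw is unpredictable."""
    lam = math.log(2) / half_life       # decay constant
    return [rng.expovariate(lam) for _ in range(n)]

rng = random.Random(42)
times = sample_decay_times(half_life=10.0, n=100_000, rng=rng)
# The median of many decay times converges to the half-life,
# even though nothing could have predicted any single entry.
median = sorted(times)[len(times) // 2]
```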
 
  • #136
DrChinese said:
Ah, but your premise is demonstrably false! :smile:
Grinkle said:
At the risk of making a statement I have no real qualifications to be making, I don't agree that this can explain Bell's experiment.
Sorry, either I was not clear enough (which is quite likely) or you are trying to read way too much into what I wrote. I'm not trying to explain the weirdness of QM as some statistical process in the measuring apparatus. Yes, I'm well aware that two spacelike separated particles sometimes can only be described by a single non-separable wavefunction. And yes, when one of those particles interact with the measuring apparatus we must treat it as the whole system of two entangled particles interacting with it, even though the other particle might be light years away. It is simply impossible to write an interaction hamiltonian involving one particle but not the other.
Exactly how this happens I'm not prepared to discuss; my gut feeling is that non-relativistic QM is ill-equipped to answer it, and I'm not yet at the level to talk about QFT, where (there is no spoon) everything is different yet again.

Anyway, all I'm saying is that every time there is random output in QM there just happens to be a thermal bath conveniently located nearby, and therefore randomness in QM is an emergent phenomenon which does not need to be hardwired into the theory at the fundamental level.
 
  • #137
Delta Kilo said:
Anyway, all I'm saying is that every time there is random output in QM there just happens to be a thermal bath conveniently located nearby, and therefore randomness in QM is an emergent phenomenon which does not need to be hardwired into the theory at the fundamental level.

OK, but it is hardwired into the mathematical formalism of QM. That fact seems to me enough to answer the question "Is quantum mechanics random in nature?" (which is the thread title - just sayin'). Clearly that fact does not preclude the possibility that some more fundamental theory with some other mathematical formalism but without the baked-in randomness could also exist. It will necessarily be either non-local or non-EPR-realistic, but it need not have baked-in randomness.

So far, so good... But until we have a candidate theory to consider, "so far" isn't very far at all.
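Just to pin down what "hardwired into the mathematical formalism" means in practice, here is a minimal sketch of the Born rule (Python; the classical pseudo-random generator stands in for whatever nature actually does, which is precisely the part the postulate leaves unexplained):

```python
import random

def born_sample(amplitudes, rng):
    """Born rule as postulated: outcome i occurs with probability
    |amplitude_i|^2. The random draw itself is taken as primitive."""
    probs = [abs(a) ** 2 for a in amplitudes]
    assert abs(sum(probs) - 1.0) < 1e-9, "state must be normalized"
    r = rng.random()
    cumulative = 0.0
    for outcome, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return outcome
    return len(probs) - 1   # guard against floating-point round-off

rng = random.Random(1)
state = [2 ** -0.5, 2 ** -0.5]   # (|0> + |1>)/sqrt(2)
counts = [0, 0]
for _ in range(10_000):
    counts[born_sample(state, rng)] += 1
# counts splits roughly 50/50, since |1/sqrt(2)|^2 = 1/2 per outcome
```

A program of the kind Delta Kilo describes would replace the bare `rng.random()` call with something derived from the environment's state; the formalism as it stands simply postulates the draw.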
 
  • #138
DrChinese said:
The question of whether there is "irreducible randomness" in QM - as I think has been pointed out already - is one of interpretation. There are nonlocal interpretations - such as Bohmian Mechanics - that assert that suitable knowledge of initial conditions (the big bang in your example) would allow one to predict the future state of any observables to any precision. So that means quantum randomness is due to lack of knowledge of initial conditions, much like the penny in the wind tunnel.

But most would say that there is no amount of knowledge of initial conditions that would allow you to know the value of non-commuting observables. As far as anyone knows, it is randomness without a cause.

So it seems as if the answer to these questions is a matter of personal choice or preference. If you then tie defining "true randomness" to the situation, then you could equate that to the "uncaused" interpretation. Then you are left with answering whether randomness due to lack of initial condition is "true randomness" - or not.
Perusing (in the skimming sense) the posts of this thread, it appears your 2nd paragraph is valid, while your 1st sentence is rarely borne out. Here's a little dialogue:
A: I'm flipping this coin in this wind tunnel and getting random results.
B: They're not really random, it's just that you don't know the initial conditions.
A: There are no initial conditions.
B: Of course there are you just don't know them.
A: Nobody knows them or can find a way of knowing them because they don't exist.
B: Yes they do. It's your lack of knowledge.
A: No they don't. It's pure "Random Without a Cause" (not to be confused with the James Dean movie).
C: God knows the initial conditions.
D: Hold on, there is no God.
C: Yes there is
D: No there ain't ...

DrChinese I'll bet you 2 bucks that my coin in the wind tunnel is just as random as the measurement of a pound of identically (tee hee) prepared photons exiting a polarization analyzer.
People in this thread bandy the term "random" about like the Jesuits did "God": no one defines it, everyone thinks they know what it is, and yet they disagree. The term random is no more necessary to QM than God was to Laplace. But, by God, don't let me rain on the random parade.
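In the spirit of the wind-tunnel bet, here is a toy illustration of classical "lack-of-knowledge" randomness (Python; the logistic map is a standard chaotic stand-in, not a model of an actual coin): the dynamics are fully deterministic, yet a 10^-9 error in the initial condition eventually scrambles the output completely.

```python
def chaotic_bits(x0, n):
    """Iterate the logistic map x -> 4x(1-x) and record one bit per step.
    Completely deterministic: exact knowledge of x0 predicts every bit."""
    x, bits = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

a = chaotic_bits(0.123456789, 200)
b = chaotic_bits(0.123456790, 200)   # initial condition off by 1e-9
# The first few bits agree; the tiny error roughly doubles each step,
# so the later bits are effectively uncorrelated coin flips.
```

The classicist's claim is exactly this: the coin's outcomes look random only because nobody can measure `x0` to unlimited precision.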
 
  • #139
DrChinese said:
The question of whether there is "irreducible randomness" in QM - as I think has been pointed out already - is one of interpretation.

:smile::smile::smile::smile::smile::smile::smile::smile::smile:

Again, I mentioned it before, but will repeat for emphasis: one of the main reasons for studying QM interpretations is to disentangle what the formalism is really saying - it's sometimes not obvious at first brush.

Thanks
Bill
 
  • #140
The problem with quantum theory is that there is a physics part, used to explain objective observations in nature, and a plethora of so-called "interpretations" which try to extend the philosophical ideas about it beyond natural science. There's no point for a physicist to get involved in this, because it's beyond the methodology of physics. Which of these additional elements of interpretation you follow is a question of personal belief (for some even a religion). It is irrelevant for physics.

The pure physical part of QT, together with very fundamental assumptions about the relativistic space-time structure and locality of interactions, which is the basis of an unprecedented success in explaining comprehensively all known facts about nature, tells us that there is an irreducible randomness in nature. The complete determination of the state does not determine the values of all observables characterizing the described system. Any additions to the minimal statistical interpretation are just philosophical speculation with no scientific empirical basis so far.

As was stressed before, that doesn't rule out that one day one finds an even more comprehensive scientific theory of nature and discovers limits of applicability of QT, but that will very probably come not from philosophical speculation but from observations reproducibly contradicting the present theory, or from an ingenious mathematical (not philosophical!) development that solves one of the problems with "the Standard Model", like a consistent description of gravity and/or dark matter.
 
  • #141
Nugatory said:
OK, but it is hardwired into the mathematical formalism of QM.
It is hardwired into the Born rule only. This is the only place where apparent randomness is generated; everything else follows from that. At the moment it is simply postulated, and the way it is usually done does not allow the internal state of the measuring apparatus to enter into the picture, thus creating an impression that randomness is conjured out of nothing.

Nugatory said:
Clearly that fact does not preclude the possibility that some more fundamental theory with some other mathematical formalism but without the baked-in randomness could also exist.
I think the formalism is fine as it is. All it takes is to demote the Born rule from a postulate to a theorem and show that input from the environment/apparatus is necessary for the measurement to take place. That's it. This will banish randomness from the rules and move it to the initial conditions instead, just like with the explanation of the 2nd law of thermodynamics.

Nugatory said:
So far, so good... But until we have a candidate theory to consider, "so far" isn't very far at all.
Well, attempts have been made to derive the Born rule. I understand there is no consensus, but there has been progress in studying decoherence, mesoscopic states, etc.
Like this Nobel Prize winning work of 20 years ago:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.4887
S. Haroche et al said:
A mesoscopic superposition of quantum states involving radiation fields with classically distinct phases was created and its progressive decoherence observed. The experiment involved Rydberg atoms interacting one at a time with a few photon coherent field trapped in a high Q microwave cavity. The mesoscopic superposition was the equivalent of an "atom + measuring apparatus" system in which the "meter" was pointing simultaneously towards two different directions—a "Schrödinger cat." The decoherence phenomenon transforming this superposition into a statistical mixture was observed while it unfolded, providing a direct insight into a process at the heart of quantum measurement.
(emphasis mine) Here "superposition" = deterministic (not random) input, "statistical mixture" = random output, and the "decoherence phenomenon" is responsible for creating randomness during measurement. Clearly there are rules governing this evolution, and the Born rule has to be a consequence of these rules.
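That reading can be sketched schematically (a generic textbook dephasing model in Python, not the actual cavity-QED dynamics of the cited paper; `gamma` is an invented decoherence rate): decoherence damps the off-diagonal coherences of the density matrix while leaving the diagonal, i.e. the Born-rule weights, untouched, turning the superposition into a statistical mixture.

```python
import math

# Density matrix of the "cat" superposition (|0> + |1>)/sqrt(2):
# the off-diagonal entries (coherences) distinguish it from a mixture.
amp = 1.0 / math.sqrt(2)
rho = [[amp * amp, amp * amp],
       [amp * amp, amp * amp]]

def dephase(rho, gamma, t):
    """Toy dephasing channel: coherences decay as exp(-gamma*t);
    the diagonal (the Born-rule weights) is untouched."""
    damp = math.exp(-gamma * t)
    return [[rho[0][0],        rho[0][1] * damp],
            [rho[1][0] * damp, rho[1][1]]]

mixed = dephase(rho, gamma=1.0, t=50.0)
# Diagonal still (1/2, 1/2): the mixture reproduces the Born weights,
# while the coherences are damped by exp(-50), i.e. numerically to zero.
```

Note what the toy model does not do: it says nothing about which diagonal entry is realized in a single run, which is exactly the step where the Born rule would have to come out as a theorem.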
 
  • #142
Delta Kilo said:
Well, attempts have been made to derive the Born rule. I understand there is no consensus, but there has been progress in studying decoherence, mesoscopic states, etc.
Like this Nobel Prize winning work of 20 years ago:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.4887

(emphasis mine) Here "superposition" = deterministic (not random) input, "statistical mixture" = random output, and the "decoherence phenomenon" is responsible for creating randomness during measurement. Clearly there are rules governing this evolution, and the Born rule has to be a consequence of these rules.
Concerning the status of ideas how one could derive the Born rule from the other postulates of QT, see (it's another Nobel laureate, by the way :-)):

S. Weinberg, Lectures on Quantum Theory, Cambridge University Press (2012)

BTW, it is one of the best QT textbooks of recent years. As always, Weinberg follows the "no-nonsense approach" to physics, which is most important when it comes to the discussion of interpretations, although I don't share Weinberg's opinion that the issue of interpretation is undecided today. As I emphasized in this thread (and already many times before in this forum), I think there is nothing unsolved: the physically relevant interpretation is given by how the theory is used in scientific work to analyze and describe the outcomes of experiments or, more generally, any kind of observations in nature. Philosophical speculations are irrelevant for physics!
 
  • #143
Delta Kilo said:
Anyway, all I'm saying is that every time there is random output in QM there just happens to be a thermal bath conveniently located nearby, and therefore randomness in QM is an emergent phenomenon which does not need to be hardwired into the theory at the fundamental level.

And all I am saying is that we can rule out the thermal bath as the source of quantum randomness, unless the spacelike-separated thermal baths possess global (non-local) attributes. But yes, there is a thermal bath nearby.
 
  • #144
DrChinese said:
And all I am saying is that we can rule out the thermal bath as the source of quantum randomness, unless the spacelike-separated thermal baths possess global (non-local) attributes. But yes, there is a thermal bath nearby.
I looked at your argument again and didn't find it convincing.

DrChinese said:
We have a system consisting of 2 separated but entangled photons such that their polarization is unknown but identical (Type I PDC for example). Observing the photons' individual polarizations by the 2 *different* observers - at the same angle - always yields the same results!
The results are the same for both observers but different from one run to the next. This is an example of randomness induced (I presume) by the unknown internal state of the measuring apparatus.

DrChinese said:
Therefore, none - and I mean none - of the outcome can be attributed to the state of the observer unless there is something mysterious being communicated from observer to observer.
Well, we know from Bell's theorem that it cannot be attributed to the state of the individual photon either. In other words, there must be something mysterious being communicated from one photon to another. In which case one observer, being presented with a superposition, can randomly choose one outcome, and it will be mysteriously communicated from one photon to the other. The other observer will then be presented with a resulting pure state and won't have any choice but to agree with the first one.

DrChinese said:
If the observers contributed to the uncertainty - to the randomness - then that would show up in experiments such as above. It doesn't.
I don't see why it should.
Uncertainty and randomness are two different notions. Uncertainty tells whether the system object + measuring apparatus is initially in a pure pointer state or in a superposition. Randomness then chooses which particular pointer state, of those present in the superposition, it is going to evolve to. The observer contributes to the latter but not to the former.
 
  • #145
Delta Kilo said:
Well, we know from Bell's theorem that it cannot be attributed to the state of the individual photon either. In other words, there must be something mysterious being communicated from one photon to another. In which case one observer, being presented with a superposition, can randomly choose one outcome, and it will be mysteriously communicated from one photon to the other. The other observer will then be presented with a resulting pure state and won't have any choice but to agree with the first one.

We don't know whether something is communicated from photon to photon, especially considering that the photons need not ever have co-existed or shared a common light cone. You are accurate that the total measurement context could be:

SpatiallySeparatedAlice+EntangledPhotonA+SpatiallySeparatedBob+EntangledPhotonB+[who knows what else]

In this case, the observers must be in some kind of nonlocal contact. So you are actually stating the Bohmian interpretation. They call "randomness induced by the unknown state of the observers/universe" a pilot wave.
 
  • #146
Well, there's always a common cause of photons being entangled. I guess what you are after is entanglement swapping, but for this too you first need entangled photons, which are produced in their entangled state by some local process (nowadays usually via parametric downconversion).
 
  • #147
Delta Kilo said:
All it takes is to demote Born rule from postulate into a theorem and show that input from the environment/apparatus is necessary for the measurement to take place. That's it. This will banish randomness from the rules and move it to initial conditions instead.

Does a Geiger counter placed near a radioactive atom register both decayed and not decayed at the same time, or does the presence of the Geiger counter cause the state to be either one or the other?
 
  • #148
A Geiger counter counts decays and not "non-decays". The Geiger counter doesn't cause the decay but registers it. Maybe I don't understand the question right, but isn't this obvious?
 
  • #149
vanhees71 said:
A Geiger counter counts decays and not "non-decays". The Geiger counter doesn't cause the decay but registers it. Maybe I don't understand the question right, but isn't this obvious?

To my reading, this was David Lewis' point. If it wasn't his point, then I prefer my reading of his post. :-P
 
  • #150
David Lewis said:
Does a Geiger counter placed near a radioactive atom register both decayed and not decayed at the same time, or does the presence of the Geiger counter cause the state to be either one or the other?
Well, the first part of your question is a matter of interpretation: the answer would be yes in MWI and no in BM.
The second part I guess is also yes, except of course a brick will serve just as well.
If an atom decays somewhere in interstellar space far away from everything else, then, until the pieces actually hit something else (and it may take a while), it exists in a superposition of decayed and non-decayed. When the atom or a decay product collides with another particle, it gets entangled with it. It still remains in a superposition, but now the superposition involves the other particle as well. Eventually the number of other particles involved becomes sufficiently large for the process to be irreversible, but that will take much longer. (Well, that is how I understand it.)
 