# How do we know it's random?

## Main Question or Discussion Point

Is there a reason other than statistics that forces randomness into quantum mechanics? Have people just run test after test and found the positions of things, etc., to be random? Is it still possible that there is some sort of particle or process or thing that is small or insignificant enough that, with trillions of them around doing what they do, they could produce a result that looks very close to what randomness should look like? It's like saying planets are always spherical: really we see this as coming from quintillions of atoms with forces acting in such a way as to CAUSE the spherical shape. Could something be causing this "randomness", which would then really be pseudo-randomness?
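(A short illustration of the worry in the question: simple statistical tests cannot, by themselves, distinguish "true" randomness from a deterministic process that merely looks random. The sketch below is editorial and uses Python's built-in Mersenne Twister, which is completely deterministic given its seed, yet passes a basic chi-square frequency test.)

```python
import random
from collections import Counter

# A fully deterministic generator: the Mersenne Twister with a fixed seed.
rng = random.Random(42)
draws = [rng.randint(0, 9) for _ in range(100_000)]

# Simple chi-square frequency test against a uniform distribution.
counts = Counter(draws)
expected = len(draws) / 10
chi2 = sum((counts[d] - expected) ** 2 / expected for d in range(10))

# With 9 degrees of freedom, chi2 below ~16.9 is "consistent with random"
# at the 5% level -- even though every draw was fixed by the seed.
print(chi2)
```

So statistics alone can only fail to reject randomness; the interesting question, taken up below, is whether quantum mechanics gives a stronger reason than this.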

Hi,

The answer is no. What you are talking about are hidden variables. Bell showed that no local hidden variable theory can reproduce all quantum phenomena (Bell particularly had entanglement in mind here).

Jurgen

Without knowing what Bell did to show this, I don't see how any experiment could establish it for certain...

Plus, why does entanglement imply that the outcome is random?

Randomness basically explains a lot of the phenomena we see in experiments, and it is a big part of the Copenhagen interpretation. There are other, more deterministic interpretations of quantum mechanics, but if I'm not wrong, very few of them have gained much popularity.

But if you ask me, I think questions about whether randomness is really a fact of nature, and disputes over QM interpretation, should be in the philosophy forum or something.

Well, I think it is impossible to do a real "one by one" experiment with QM systems; if it were possible, you could do an individual experiment on a single electron.
But there are lots of statistical experiments that show the correctness of QM, for example the diffraction of electrons. We deal with lots of electrons, but we can still speak about "one electron" in the extended theory.

Fredrik
Staff Emeritus
Gold Member
jvangael said:
The answer is no. What you are talking about are hidden variables. Bell showed that no local hidden variable theory can reproduce all quantum phenomena (Bell particularly had entanglement in mind here).
Gerard 't Hooft doesn't think that Bell inequalities and EPR-type experiments are a good enough reason to dismiss the possibility of an underlying deterministic theory.

This is a quote from an article he wrote (http://www.arxiv.org/PS_cache/hep-th/pdf/0104/0104219.pdf):

Of course I am aware of the numerous studies regarding the difficulties in devising hidden variable theories for quantum mechanics. Deterministic theories appear to lead to the famous Bell inequalities [2], and the Einstein-Rosen-Podolsky paradox [3]. There are various possible reasons nevertheless to continue along this avenue. Generally speaking, we could take the attitude that every “no-go theorem” comes with some small-print, that is, certain conditions and assumptions that are considered totally natural and reasonable by the authors, but which may be violated in the real world. Certainly, physics at the Planck scale will be quite alien to us, and therefore, expecting some or several of the “natural” looking conditions to be violated is not so objectionable. More specifically, one might try to identify some of such conditions. One example might be the following: turning some apparatus at will, in order to measure either the x- or the y-component of an electron’s spin, requires invariance of the device under rotations, allowing the detector to rotate independently of the rest of the scenery. This is unlikely to be totally admissible in terms of Planck scale variables.

misogynisticfeminist said:
Randomness basically explains a lot of the phenomena we see in experiments, and it is a big part of the Copenhagen interpretation.
Shouldn't there be some experimentally demonstrated "accuracy" of the randomness? I mean, couldn't you quantify, with some sort of measurement, how sure we are that the randomness is really randomness?

misogynisticfeminist said:
But if you ask me, I think questions about whether randomness is really a fact of nature, and disputes over QM interpretation, should be in the philosophy forum or something.
I agree that a lot of discussions about randomness in QM should be in the philosophy forums, but I'm only asking about the experiments that were done that make people think things are random.

jcsd
Gold Member
From a purely formal point of view quantum mechanics is inherently random (i.e. the postulates of quantum mechanics talk specifically about probability), but as has been mentioned earlier this doesn't necessarily mean it cannot be given a deterministic explanation, i.e. a so-called hidden variable theory (HVT).

What Bell did was to show that for any HVT to consistently reproduce the quantum formalism it must be a non-local hidden variable theory (NLHVT). There are NLHVTs, such as Bohm's pilot wave theory, that do reproduce the quantum formalism in a consistent manner, but NLHVTs are problematic in that they are very hard to square with special relativity, even though quantum mechanics itself is relatively (excuse the pun) easy to square with special relativity. In addition, any hidden variable theory must postulate the existence of artifacts that can never be experimentally tested for.
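(An editorial sketch of what Bell's result looks like numerically. The quantum prediction for spin correlations of a singlet pair measured along directions separated by an angle is E(a,b) = -cos(a-b); the CHSH combination of four such correlations is bounded by 2 for any local hidden variable theory, but quantum mechanics exceeds that bound. The angle choices below are the standard ones.)

```python
import math

# Quantum prediction for singlet-pair spin correlations along
# measurement directions a and b (angles in radians):
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local hidden-variable theory gives S <= 2
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(S)  # 2*sqrt(2) ~ 2.828, violating the local bound of 2
```

Experiments measuring these correlations find the quantum value, which is why only *non-local* hidden variable theories survive.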

Chronos
Gold Member
Testing for randomness is a simple statistical matter. When you see things like Gaussian distributions in photon dispersal patterns, it is pretty obvious the pattern is random even without doing the math.

I'd strongly disagree with the idea that 'testing for randomness is a simple statistical matter', because it isn't. You can't show that anything is random; you can only postulate that it is. A distribution that looks random (or normal, for that matter) may ultimately derive from deterministic relationships, and the only reason we use the word random is because our models can't cope with the level of detail necessary to build a fully inclusive model.

A great example is flipping a coin. Some people would call the outcome random. But if you know enough about the force applied, the starting position, the direction of motion and any other forces acting on the coin during flight, then you can accurately predict the outcome. So is it actually a random event?

Quantum mechanics makes use of probability distributions because current physical techniques and knowledge don't tell us enough about dynamics at the atomic level... well, that's just my opinion, I guess.

jcsd
Gold Member
davidmerritt said:
I'd strongly disagree with the idea that 'testing for randomness is a simple statistical matter', because it isn't. You can't show that anything is random; you can only postulate that it is. A distribution that looks random (or normal, for that matter) may ultimately derive from deterministic relationships, and the only reason we use the word random is because our models can't cope with the level of detail necessary to build a fully inclusive model.

A great example is flipping a coin. Some people would call the outcome random. But if you know enough about the force applied, the starting position, the direction of motion and any other forces acting on the coin during flight, then you can accurately predict the outcome. So is it actually a random event?

Quantum mechanics makes use of probability distributions because current physical techniques and knowledge don't tell us enough about dynamics at the atomic level... well, that's just my opinion, I guess.

Yep, such statistical tests can never prove conclusively that something is random.

My opinion is that if it is objectively random then we should probably call it random. What QM says is not just that we DON'T know enough to describe it deterministically; it's that we can NEVER know enough to describe it deterministically, not even in principle, so by any objective standard it is random.

ZapperZ
Staff Emeritus
davidmerritt said:
I'd strongly disagree with the idea that 'testing for randomness is a simple statistical matter', because it isn't. You can't show that anything is random; you can only postulate that it is. A distribution that looks random (or normal, for that matter) may ultimately derive from deterministic relationships, and the only reason we use the word random is because our models can't cope with the level of detail necessary to build a fully inclusive model.

A great example is flipping a coin. Some people would call the outcome random. But if you know enough about the force applied, the starting position, the direction of motion and any other forces acting on the coin during flight, then you can accurately predict the outcome. So is it actually a random event?

Quantum mechanics makes use of probability distributions because current physical techniques and knowledge don't tell us enough about dynamics at the atomic level... well, that's just my opinion, I guess.
Like everything else, this topic keeps popping back like a bad zit.

If you equate the "randomness" in flipping a coin with the "randomness" in QM, then you have not understood QM. Why? Because there is a DISTINCT difference between the two, and a MEASURABLE one at that!

If I have a superposition of two basis states, such as

|psi> = a1|u1> + a2|u2>

before a measurement, the system is simultaneously in both basis states! How do I know this? The hydrogen molecule, the SQUID experiments, NH3, etc., etc. The presence of an energy gap between the bonding and antibonding states of opposite parity clearly indicates that the Schrodinger Cat-type situation exists! There is NO analogous situation in coin flipping. You do not get an "energy gap" from a superposition of "head" and "tail" outcomes that mix to create an unusual, non-classical observation! Coin flipping is a completely different beast from QM superposition.

Now, upon a direct measurement of the system, you will get either a state corresponding to |u1> or to |u2>, but NOT both in a single measurement. The fact that you cannot unambiguously determine which state will be measured IS what is commonly cited as the "randomness" in QM. The formulation makes NO such determination, nor does it explain or describe the mechanism by which one state rather than the other turns up in the measurement.

While we know what all the possible outcomes are, and while we can in fact deterministically obtain the average result of many, many such measurements (which is where most of our world operates), we have no way of determining the outcome of a single measurement. Now you may speculate all you want that QM is incomplete, not random, deterministic, etc., etc. But it is all just speculation, because we have (i) no accepted formulation that explains the "underlying" nature of such a measurement and (ii) no experimental observation indicating that there IS one.
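(An editorial sketch of the last point: the ensemble statistics are fixed by the amplitudes via the Born rule, while no rule in the formalism picks any single outcome. The simulation below only mimics the Born-rule *frequencies* with a classical pseudo-random generator; it is not a model of the measurement mechanism, and the amplitudes 0.6 and 0.8 are arbitrary.)

```python
import random

# State |psi> = a1|u1> + a2|u2>, with real amplitudes for simplicity.
a1, a2 = 0.6, 0.8        # |a1|^2 + |a2|^2 = 1
p1 = a1 ** 2             # Born rule: probability of outcome |u1> = 0.36

rng = random.Random(0)
# Each trial yields EITHER outcome 1 or outcome 2, never both.
outcomes = [1 if rng.random() < p1 else 2 for _ in range(200_000)]

# Single trials are unpredictable; the ensemble frequency is fixed:
freq1 = outcomes.count(1) / len(outcomes)
print(freq1)  # close to 0.36
```

The deterministic part of QM (the amplitudes and their evolution) pins down `freq1`; nothing in the formalism pins down `outcomes[0]`.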

I really do not care if people believe QM is inherently random or not. However, please, please, please... do NOT equate the apparent randomness in QM with the randomness of coin tossing, dice throwing, etc., etc. Give us physicists SOME credit for having realized this already, and for having thought about it enough to know that the two are NOT the same!

Zz.

ZapperZ, I just want you to know that when I started this thread I didn't intend to convince myself that it isn't random, or that it is deterministic. I was trying to figure out why it's thought to be random. I can't imagine how someone could tell the difference between a truly random experiment and a coin-flipping one, or even that there could be a difference. It's probably because I don't understand much about QM yet, but I can't imagine any way of knowing, and I've never heard an explanation that made me think we could know. Again, I have much to learn before I can argue one way or the other. If anyone could explain the difference in different words, maybe it would help.

ZapperZ
Staff Emeritus
TheDonk said:
ZapperZ, I just want you to know that when I started this thread I didn't intend to convince myself that it isn't random, or that it is deterministic. I was trying to figure out why it's thought to be random. I can't imagine how someone could tell the difference between a truly random experiment and a coin-flipping one, or even that there could be a difference. It's probably because I don't understand much about QM yet, but I can't imagine any way of knowing, and I've never heard an explanation that made me think we could know. Again, I have much to learn before I can argue one way or the other. If anyone could explain the difference in different words, maybe it would help.
I actually have no issue with your question. I am just a bit annoyed by the coin-flipping analogy being forced onto QM's picture.

When people use it, they seem to forget that the PHYSICS itself indicates that we can know, in principle, all the mechanics of a coin toss, and thus that it is NOT random. We only bring in randomness out of "laziness" about tracking all those intricate details.

Such "underlying" principle is complete ABSENT in QM. There are NO PHYSICS to tell us that, even in principle, and even if we're not lazy, there is an underlying mechanism that causes one outcome to turn up instead of the other. The superposition is REAL, and the consequences of such superposition have been verified! I have explained this at length in my previous post.

It is the absence of such an underlying mechanism that forces us, at the very least, to admit that we have no way of telling the outcome of a single measurement. If this is called "randomness", so be it. However, do not be fooled into thinking that this "randomness" is identical to coin flipping, because it isn't!

Zz.

ZapperZ said:
It is the absence of such an underlying mechanism that forces us, at the very least, to admit that we have no way of telling the outcome of a single measurement. If this is called "randomness", so be it. However, do not be fooled into thinking that this "randomness" is identical to coin flipping, because it isn't!
Does this imply that there is no "rule" about the distribution of where a particle is when measured? I always assumed that because people said it's random, it had a normal distribution. I'm sure statistics have shown that it looks normal, but is there any principle in QM against it being non-normal?
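(An editorial note on this question: QM does give a definite "rule" for the distribution, namely |psi(x)|^2, and it is generally NOT a normal distribution. The sketch below samples positions from the ground-state density of a particle in a box of unit width, a standard textbook case, and checks only its mean; the density vanishes at the walls, unlike any Gaussian.)

```python
import math
import random

# Particle in a box of width 1, ground state: |psi(x)|^2 = 2*sin^2(pi*x).
# This is the QM prediction -- a definite, testable distribution.
def pdf(x):
    return 2 * math.sin(math.pi * x) ** 2

# Draw positions by rejection sampling from that predicted density
# (pdf's maximum on [0, 1] is 2, hence the envelope below).
rng = random.Random(1)
samples = []
while len(samples) < 50_000:
    x, y = rng.random(), rng.random() * 2
    if y < pdf(x):
        samples.append(x)

mean = sum(samples) / len(samples)
print(mean)  # ~0.5 by the symmetry of the box
```

A chi-square or Kolmogorov-Smirnov test on real measured positions against |psi(x)|^2 is exactly the kind of "accuracy of the randomness" check asked about earlier in the thread.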

ZapperZ said:
The superposition is REAL, and the consequences of such superposition have been verified! I explained this at length in my previous post.

It is the absence of such an underlying mechanism that forces us, at the very least, to admit that we have no way of telling the outcome of a single measurement. If this is called "randomness", so be it. However, do not be fooled into thinking that this "randomness" is identical to coin flipping, because it isn't!

Zz.
I have to add some comments to ZapperZ's post concerning the coin-flipping explanation, as I do not agree with this apparently extreme position concerning QM and classical statistics, even though I agree with the core of ZapperZ's previous post.

The common mistakes I have encountered with QM concern the interpretation of the Born rule (the statistics and measurements) and the interpretation, out of their context of application, of several theorems (Bell's theorem, no-go theorems, etc.).

The classical randomness of a coin flip may be described by QM. We can always do that: this is de Broglie-Bohm (dBB) modelling. It is the selection of a particular observable (the coin faces) with a fixed quantum state, say 1/sqrt(2)(|odd>+|even>) (or the density matrix 1/2(|odd><odd|+|even><even|), which corresponds to a different QM state), that defines the statistics. Observing the results gives the known statistics (50%, 50%). In this experiment, we get the same results both classically and in QM (a property of observables, which induce a probability law on their spectrum).
Now we can apply the dBB model and say (quickly): yes, my coin has an unknown trajectory q(t) that complies with the statistics. Therefore, what davidmerritt says in his post is also correct to some extent (i.e. we must know the context).
This is what I want to underline: it is a matter of choice and context. We must not forget that these two descriptions (QM and "dBB classical") give the same statistical results because we are using the same observable (the coin faces) and the Born rule.
One important consequence of the Born rule (and of QM measurement) comes from the simple fact that any experiment measuring the values of an observable gets *only* one result (e.g. odd or, exclusively, even for the coin-face observable): this is the connection between classical and quantum probability.
It is a very strong assumption. It also means that to measure (to see) a superposition of states we need the adequate observable (and the adequate apparatus to measure it). Thus, measuring a superposition of states of a coin requires a specific experiment, different from looking at the faces once the coin has stopped.

Seratend.

Concerning the deterministic versus statistical approach to physics: in my modest opinion, it is only a matter of choosing between two equivalent mathematical models (selecting a measure or a probability law to compute fields or whatever we want).

We are used to considering physics with an a priori deterministic approach: this is due, I think, to Newtonian mechanics being the model taught first.
However, it is important to understand that this is an a priori selection, and any deterministic model may be re-thought as a statistical model. This is an application of the weak law of large numbers:
The random variable s_n = (1/n) sum_i s_i, where the s_i are independent variables with the same variance and mean, converges to the mean <s> as n becomes large. We get a deterministic result from a statistical source.

For example, let’s take the Coulomb interaction at a point r0: V(r0) = sum_i V(r_i - r0).
Now let’s assume (for the purpose of the demonstration) that the sources r_i are independent random variables.
We have: V(r0) = n.(1/n sum_i V(r_i - r0)) = n.s_n

Thus the s_i = V(r_i - r0) are also independent random variables. Now, when n becomes large, we have V(r0) = n.<V(r0)>, where <V(r0)> is the mean value of any one of the random variables V(r_i - r0).

With this result, the “deterministic” Coulomb interaction value at the point r0 is now the result of random sources. <V(r0)> plays the role of the local source density, constant in our current model: we multiply <V(r0)> by the number of sources to recover V(r0).
For example, we can choose V(r0) = k/(r_a - r0) = n.<V(r0)>, with n large. This is a statistical model of the Coulomb interaction at a point r0 created by a “deterministic” point charge. It is easy to generalise to any distribution of charges (the superposition of different sets of random variables with given mean values: V = n1<V1(r0)> + ... + nk<Vk(r0)> = V1(r0) + ... + Vk(r0)).

Thus, we can model the deterministic results of electromagnetism by statistical results: we are just applying the same mathematical model, a set with a sigma-algebra and a measure.
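(An editorial sketch of the law-of-large-numbers argument above, with made-up numbers: n independent random source contributions, each with mean 1.0 in arbitrary units, sum to something that looks deterministic once n is large.)

```python
import random

rng = random.Random(7)

def mean_contribution(n):
    # n independent random source terms s_i = V(r_i - r0), here modelled
    # as 1.0 plus uniform noise; returns s_n = (1/n) * sum_i s_i.
    contributions = [1.0 + rng.uniform(-0.5, 0.5) for _ in range(n)]
    return sum(contributions) / n

# s_n concentrates around the mean <s> = 1.0 as n grows, so the total
# n * s_n behaves like a fixed, "deterministic" potential.
small = mean_contribution(100)
large = mean_contribution(1_000_000)
print(small, large)
```

The fluctuation of `s_n` around the mean shrinks like 1/sqrt(n), which is why the summed field of very many random sources is indistinguishable from a deterministic one.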

Therefore, to the question “how do we know it is random?”, well, I can say it is simply a point of view.

Seratend.

ZapperZ
Staff Emeritus
seratend said:
The classical randomness of a coin flip may be described by QM. We can always do that: this is de Broglie-Bohm (dBB) modelling. It is the selection of a particular observable (the coin faces) with a fixed quantum state, say 1/sqrt(2)(|odd>+|even>) (or the density matrix 1/2(|odd><odd|+|even><even|), which corresponds to a different QM state), that defines the statistics. Observing the results gives the known statistics (50%, 50%). In this experiment, we get the same results both classically and in QM (a property of observables, which induce a probability law on their spectrum).
Now we can apply the dBB model and say (quickly): yes, my coin has an unknown trajectory q(t) that complies with the statistics. Therefore, what davidmerritt says in his post is also correct to some extent (i.e. we must know the context).
This is what I want to underline: it is a matter of choice and context. We must not forget that these two descriptions (QM and "dBB classical") give the same statistical results because we are using the same observable (the coin faces) and the Born rule.
One important consequence of the Born rule (and of QM measurement) comes from the simple fact that any experiment measuring the values of an observable gets *only* one result (e.g. odd or, exclusively, even for the coin-face observable): this is the connection between classical and quantum probability.
It is a very strong assumption. It also means that to measure (to see) a superposition of states we need the adequate observable (and the adequate apparatus to measure it). Thus, measuring a superposition of states of a coin requires a specific experiment, different from looking at the faces once the coin has stopped.

Seratend.
I will admit that I didn't quite get the point you are trying to make.

The classical randomness of coin flipping is randomness in the 'statistics' of the outcome. It is not due to an underlying randomness in the mechanics or dynamics of coin flipping. We can't say that for a "quantum coin flip". While there ARE distinct, orthogonal states that are well-defined before measurement (|head> and |tail>), unlike the classical coin flip, these states (i) mix with one another to produce very non-classical effects, and (ii) no one can use anything to definitely predict a single outcome.

I very much hesitate to use the word "randomness" to describe this. The word carries stronger connotations about the system than, I believe, is accurate. However, if "randomness" in this context means "no physical means to make a definite prediction of a single outcome", then I'll use the word. Still, the issue here is that there is definitely a clear distinction between classical coin flipping and QM coin flipping, no matter what QM interpretation one uses to look at it.

Zz.

ZapperZ said:
I will admit that I didn't quite get the point you are trying to make.
The classical randomness of coin flipping is randomness in the 'statistics' of the outcome. It is not due to an underlying randomness in the mechanics or dynamics of coin flipping. We can't say that for a "quantum coin flip". Zz.
Note that, as you say, we have just a coin-flipping experiment: the output of the experiment and the probability law that describes only the probabilities of the outputs. The coin-flipping statistics do not suppose that the coin has travelled classically across the table to stop at the end of the experiment, just that we have a probability law with a 50/50 distribution over the 2 possible outcomes. This is exactly what the Born rule says.
The outputs of the coin-flipping experiment may be viewed as measurements of the observable "coin faces"; we still have the same probability distribution as in classical probability.

ZapperZ said:
We can't say that for a "quantum coin flip". While there ARE distinct, orthogonal states that are well-defined before measurement (|head> and |tail>), unlike the classical coin flip, these states (i) mix with one another to produce very non-classical effects, and (ii) no one can use anything to definitely predict a single outcome.
This is one of the points I want to underline when we speak about QM: we need to distinguish the unitary evolution of the states from the Born rule. It is the same as in statistical Newtonian mechanics: in classical mechanics we have an equation of motion that evolves the probability density in time (the Liouville equation), and in QM we have the Schroedinger equation. These equations are the deterministic part. Both have the same source of randomness: the initial or final distribution law (it is only a boundary condition). Neither explains the source of the probability distribution, just its deterministic update in time.

Now, in my opinion, a common mistake in QM is to see the QM state as more than a QM probability state: a probability distribution over the spectrum of an observable. When you say that there are distinct QM states before the measurement, you are in fact saying that you have an initial probability distribution for any given observable.
As in classical probability, you are just considering two initial probability distributions (the distribution associated with |head> and the distribution associated with |tail>). In the case of measuring the coin face, these two initial probability densities correspond to the probability laws "100% head" and "100% tail", but you must not forget that you do not know the initial velocity probability distribution in the classical case, nor the initial interaction in the QM case (before the coin becomes a "free-falling" system).

Now, once the coin is thrown, you have a deterministic mechanical evolution in both models (SE or Newton's equation; I am not saying they are the same, just that they are deterministic). At the end of the time evolution you have a new probability distribution that depends on a not-well-known initial probability distribution (the source of the "randomness").
Only this final distribution counts in the experimental results, not the initial distribution. Thus, whatever quantum effects occur during the flight of the coin, they are not important; only the final probability distribution can be analysed in this experiment. We still have the final statistical result 50% head, 50% tail: thus we must have a final state |psi> that satisfies these 2 results. Using the Born rule, once again, this implies that the probability distribution over the spectrum (the classical probability) of the coin-face observable is 50/50, however entangled the state |psi> is.

ZapperZ said:
I very much hesitate to use the word "randomness" to describe this. The word carries stronger connotations to the system which, I believe, isn't accurate. However, if "randomness" as used in this context means "no physical means to make definite predictions of a single outcome", then I'll use that word. However, the issue here is that there is definitely a clear distinction between classical coin-flipping and QM coin-flipping, no matter what kind of QM interpretation one uses to look at it.

Zz.
If you accept separating the source of the randomness from its deterministic time evolution, you will have a simpler view of why the QM model differs from classical statistical mechanics: neither model explains the source of randomness, just the time evolution of its probability distribution.

Seratend.

ZapperZ
Staff Emeritus
seratend said:
Note that, as you say, we have just a coin-flipping experiment: the output of the experiment and the probability law that describes only the probabilities of the outputs. The coin-flipping statistics do not suppose that the coin has travelled classically across the table to stop at the end of the experiment, just that we have a probability law with a 50/50 distribution over the 2 possible outcomes. This is exactly what the Born rule says.
The outputs of the coin-flipping experiment may be viewed as measurements of the observable "coin faces"; we still have the same probability distribution as in classical probability.

This is one of the points I want to underline when we speak about QM: we need to distinguish the unitary evolution of the states from the Born rule. It is the same as in statistical Newtonian mechanics: in classical mechanics we have an equation of motion that evolves the probability density in time (the Liouville equation), and in QM we have the Schroedinger equation. These equations are the deterministic part. Both have the same source of randomness: the initial or final distribution law (it is only a boundary condition). Neither explains the source of the probability distribution, just its deterministic update in time.

Now, in my opinion, a common mistake in QM is to see the QM state as more than a QM probability state: a probability distribution over the spectrum of an observable. When you say that there are distinct QM states before the measurement, you are in fact saying that you have an initial probability distribution for any given observable.
As in classical probability, you are just considering two initial probability distributions (the distribution associated with |head> and the distribution associated with |tail>). In the case of measuring the coin face, these two initial probability densities correspond to the probability laws "100% head" and "100% tail", but you must not forget that you do not know the initial velocity probability distribution in the classical case, nor the initial interaction in the QM case (before the coin becomes a "free-falling" system).

Now, once the coin is thrown, you have a deterministic mechanical evolution in both models (SE or Newton's equation; I am not saying they are the same, just that they are deterministic). At the end of the time evolution you have a new probability distribution that depends on a not-well-known initial probability distribution (the source of the "randomness").
Only this final distribution counts in the experimental results, not the initial distribution. Thus, whatever quantum effects occur during the flight of the coin, they are not important; only the final probability distribution can be analysed in this experiment. We still have the final statistical result 50% head, 50% tail: thus we must have a final state |psi> that satisfies these 2 results. Using the Born rule, once again, this implies that the probability distribution over the spectrum (the classical probability) of the coin-face observable is 50/50, however entangled the state |psi> is.

If you accept separating the source of the randomness from its deterministic time evolution, you will have a simpler view of why the QM model differs from classical statistical mechanics: neither model explains the source of randomness, just the time evolution of its probability distribution.

Seratend.
OK, so I'm getting even MORE confused than before about what you are saying. Let's first get a few things clear:

1. The TIME EVOLUTION of the Schrodinger wavefunction is deterministic. I don't think I've said anything contrary to that.

2. Then the only possible source of discrepancy between my explanation and yours is the "preparation" of the state. At least, this is what I have gathered.

If that's the case, let's examine this.

In the QM case, let's say we have a Schrodinger Cat-type situation with a superposition of two orthogonal states before a measurement. Now, what would be the equivalent or analogous situation in the classical case? My answer would be a coin that has been flipped and is tumbling through the air, before it lands for the rest of the world to see the outcome.

Would this be an acceptable comparison?

If it is, then I clearly do not see how you can argue that, just because they both evolve "deterministically" in time, they are identical to each other. The classical flip is described with only one thing in mind: that the outcome can only be EITHER head or tail. I cannot, for example, make a measurement (before it lands) of a non-commuting observable and see a result telling me there is a weird mixture of "head+tail" outcomes. I can, however, do that in the QM case: I can look at the energy difference of the bonding-antibonding states, which clearly is the result of the mixing of these two states.

To me, that in itself indicates a profound difference between the "dynamics" of classical and QM coin flipping. At no time in the classical case is there any ambiguity about the "either-or" nature of the evolution of the system. Yet in the QM case there is! Only upon measurement of that particular observable is the ambiguity of that observable removed. This indicates that they are not the same beast.
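(An editorial sketch of this distinction in the standard density-matrix language. The pure superposition and the classical 50/50 mixture give identical statistics for the "which face?" observable, but a non-commuting observable separates them: only the superposition carries the off-diagonal coherence terms. The 2x2 matrices are the textbook qubit forms; "head/tail" labels are the thread's.)

```python
import numpy as np

# Quantum coin: pure superposition (|head> + |tail>)/sqrt(2).
# Its density matrix has off-diagonal "coherence" terms.
rho_pure = np.array([[0.5, 0.5],
                     [0.5, 0.5]])

# Classical coin mid-flight: 50/50 ignorance, a mixed state
# with NO off-diagonal terms.
rho_mixed = np.array([[0.5, 0.0],
                      [0.0, 0.5]])

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # the "which face?" observable
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # a non-commuting observable

# Expectation values <A> = Tr(rho A):
print(np.trace(rho_pure @ Z), np.trace(rho_mixed @ Z))  # 0.0 0.0
print(np.trace(rho_pure @ X), np.trace(rho_mixed @ X))  # 1.0 0.0
```

Looking only at faces (Z), the two cases are indistinguishable; the X measurement is the analogue of the energy-gap experiments ZapperZ cites, and it has no classical counterpart.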

Zz.

ZapperZ said:
OK, so I'm getting even MORE confused than before about what you are saying. Let's first get a few things clear:

1. The TIME EVOLUTION of the Schrodinger wavefunction is deterministic. I don't think I've said anything contrary to that.

2. Then the only possible source of discrepancy between my explanation and yours, is the "preparation" of the state. At least, this is what I have gathered.
Ok, let’s try to make the discussion precise. First, I am not saying that you are wrong, but that the arguments you (and many other people) use may lead one to think that states are somewhat “real”.

Secondly, I am saying that the statistical results of the coin-flipping experiment may be interpreted in either the QM or the Newtonian statistical formalism: we cannot distinguish them.

Last, a quantum state defines the "classical" probability distribution over the spectrum of an observable: given a state |psi>, if you choose an observable A (spectrum {a1,...,an,...}), you have the probability density prob(A=a)=|psi(a)|^2=|<a|psi>|^2. It's mathematics and it is important not to forget it.
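As a hedged illustration, this rule can be computed directly from the amplitudes. A minimal pure-Python sketch (the two-level state |psi> is a hypothetical example, not anything fixed by the thread):

```python
import math

# Hypothetical state |psi> = (|a1> + i|a2>)/sqrt(2), written as amplitudes
# in the eigenbasis {|a1>, |a2>} of the chosen observable A.
psi = [1 / math.sqrt(2), 1j / math.sqrt(2)]

# Born rule: prob(A = a_k) = |<a_k|psi>|^2 = |amplitude_k|^2
probs = [abs(amp) ** 2 for amp in psi]
print(probs)  # two equal probabilities, summing to 1
```

Nothing quantum-specific survives at this level: the output is just an ordinary, normalized probability distribution over the spectrum.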

Each time an experiment “looks” at an observable, we are dealing with this probability density describing the outcome statistics. This is what I call the Born rule. We are not looking at the evolution of the state, but at the statistical results of a given state with a given observable.
We thus have to specify the observable in QM in order to speak about statistical results, i.e. we have to specify the probability distribution (we choose the observable).
Moreover, the statistics of this probability distribution do not depend on what happened before it was defined. They only describe the outcomes under this given probability distribution (and thus the given observable).

Now, we have the same stuff with statistical Newtonian mechanics, except that we often select the q observable to describe the statistics of an experiment (even if we can analyse the q Fourier transform statistics).

ZapperZ said:
If that's the case, let's examine this.

In the QM case, let's say we have a Schrodinger Cat-type situation with a superposition of two orthogonal states before a measurement. Now, what would be the equivalent or analogous situation for a classical case? My answer to that would be a coin that has been flipped and is tumbling through the air, before it lands for the rest of the world to see the outcome.

Would this be an acceptable comparison?
Now, you are using a kind of Copenhagen interpretation that sometimes leads to paradoxes. I am trying to separate the mathematical facts from the interpretation (even if I sometimes use implicit interpretations ;).

When you say that you have a Schrodinger cat-type state (call it |S_cat>=|live>+|dead>) before a measurement, you are in fact saying that you have a state |S_cat> where the probability for the observable |S_cat><S_cat| (rewritten |live+dead><live+dead| for those who prefer that form) is 100%. That means that if you performed measurements of this hypothetical observable, you would always (100% of the time) get the result S_cat=live+dead, and not a 50/50 split.

It is important to understand that the formal aspect of this approach is totally separate from the interpretational question “why can’t I see this state?”.
We have a formal tool, the axioms of classical QM: a self-consistent mathematical framework, together with the questions this model does not try to answer, such as why we can’t observe this state. Neither QM nor statistical Newtonian mechanics explains the source of the probability distribution, just its update in time and the outcomes in experiments.
When we ask this question, we are in fact asking why we can’t build an experimental apparatus that measures the observable |s_cat><s_cat|. This point is under investigation within the “decoherence program”. However, it does not change the fact that the statistics of the state |s_cat> for the observable |s_cat><s_cat| are always 100%, and that this is a very simple classical probability law.

Now, if I take another observable (A, spectrum {ai}), we have:

|S_cat> = sum_ai <ai|S_cat> |ai>

We thus have the probability distribution p(A=ai)= |<ai|S_cat>|^2.

Thus, depending on the relation between the observables A and |S_cat><S_cat|, I may get any type of probability distribution.
Now, if I select the classical view, I select the observable q (implicitly), and thus I select the initial probability distribution p(Q=q)=|<q|S_cat>|^2 for this state.
Because I don’t know how to express the |S_cat> state in the Q basis, I do not know the probability distribution exactly, but it is “known” (through this formal definition) and it can be far from 50/50.

If I take the observable HT = |head><head| + |tail><tail|, we have the probability law:
P(HT=head) = |<head|s_cat>|^2, P(HT=tail) = |<tail|s_cat>|^2

(P(HT=head),P(HT=tail)) is therefore the classical probability distribution of an |s_cat> state for the observable HT.
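A minimal sketch of that probability law, assuming for concreteness |s_cat> = (|head> + |tail>)/sqrt(2) (the amplitudes are an assumption of this example, not given in the post):

```python
import math

# amplitudes of |s_cat> on the basis (|head>, |tail>) -- assumed equal weights
s_cat = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def born_prob(basis_vec, state):
    # |<basis_vec|state>|^2
    amp = sum(b.conjugate() * c for b, c in zip(basis_vec, state))
    return abs(amp) ** 2

head, tail = [1.0, 0.0], [0.0, 1.0]
p_head, p_tail = born_prob(head, s_cat), born_prob(tail, s_cat)
print(p_head, p_tail)  # roughly 0.5 each: formally a fair classical coin
```

With these assumed amplitudes, the HT statistics are indistinguishable from an ordinary fair coin toss, which is exactly seratend's point.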

ZapperZ said:
If it is, then I clearly do not see how you can argue that, just because they both evolve "deterministically" with time, that they are identical to each other. The classical flipping is being described with only one thing in mind : that the outcome can only be EITHER head or tail.
Every experimental result is based on the exclusivity of outcomes. This is the essence of QM probability. If you accept that, you accept that the coin-flipping experiment is just a formal “QM probability” with the observable |head><head|+|tail><tail|.
When I say QM probability, I say the formal mathematical tool.

Now, when you say “the classical flipping is being described with only one thing in mind: that the outcome can only be EITHER head or tail”, you can equally say that a quantum measurement experiment is described with only one thing in mind: that we have only a *single outcome* too. This outcome can be viewed as a superposition of states only *relative* to a different, non-commuting observable, that’s all. The rest is interpretation relative to a particular observable.
ZapperZ said:
I cannot, for example, make a measurement (before it lands) of a non-commuting observable to see a result that tells me that there is this weird mixture of "head+tail" outcome.
It depends on the interactions that are available to construct the measurement apparatus.
However, if you assume it is possible to construct it, you just have to build this observable, |head+tail><head+tail|, and find, if possible, a source for this probability distribution (which may be impossible to get; I only have probabilities to say that ;).

ZapperZ said:
I can, however, do that for the QM case. I can look at the energy difference of the bonding-antibonding states, which is clearly the result of the mixing of these two states.
What you see is an exclusive outcome of the spectrum of a given measured observable. That’s all.

For example, when we have a classical signal s(t) with a probability distribution p(t), we can instead measure the frequency probability distribution (with some restrictions).

ZapperZ said:
To me, that in itself indicates a profound difference between the "dynamics" of the classical and QM coin-flipping. At no time in the classical case is there any ambiguity about the "either-or" nature of the evolution of the system. Yet, in the QM case, there is! Only upon measurement of that particular observable is the ambiguity of that observable removed. This indicates that they are not of the same beast.
I think that you are sometimes mixing the unitary evolution with the statistical outcomes. Statistical outcomes are always exclusive. State superpositions are only states that define a probability law for a given observable, that's all. The rest is interpretation.

ZapperZ said:
Only upon measurement of that particular observable is the ambiguity of that observable removed. This indicates that they are not of the same beast.
Zz.
The ambiguity of that observable is a matter of interpretation. Once again, we only have a probability distribution for a given observable, and we only see a single outcome in a given experiment.
Each time you evaluate the results of an experiment, you are using the probability law of the measured observable. There is formally no difference between this QM result and the result of a classical probability distribution. The difference comes from the interpretation philosophy; it does not change this formal result.

Seratend.

ZapperZ
Staff Emeritus
seratend said:
The ambiguity of that observable is a matter of interpretation. Once again, we only have a probability distribution for a given observable, and we only see a single outcome in a given experiment.
Each time you evaluate the results of an experiment, you are using the probability law of the measured observable. There is formally no difference between this QM result and the result of a classical probability distribution. The difference comes from the interpretation philosophy; it does not change this formal result.

Seratend.
OK... I just finished pulling all of my hair out and I'm completely bald now....

I have a system which is described as the superposition of two states |1> and |0>. If I make a measurement (just ONE measurement) of the operator (call this operator A) corresponding to determining the state, I will get either the eigenvalue corresponding to |1> or the eigenvalue of |0>.

Is there a problem so far? And did I just impose some "CI" interpretation on this? Isn't this part of the "formal" formulation of QM?

However, before a measurement of that operator, I have a "superposition" of both basis states, and not only that, a degenerate superposition of these basis states corresponding to the odd and even states, i.e.

|psi_odd> = |0> - |1>
|psi_even> = |0> + |1>

[ignoring normalization factors]

If I remove the degeneracy, there will be an energy gap between those two. I can measure this "deterministically". It is not "statistics", nor does it depend on the outcome of any single measurement. Unlike the measurement of operator A, which can have two different outcomes, there is only ONE outcome of the energy difference measurement. Thus, when you say "Each time you are evaluating the results of an experiment, you are using the probability law of the measured observable", that isn't true here. No matter how many times I make the measurement of this energy difference, the result is the same, because the energy difference measurement isn't a superposition, even if the basis states are![1] The system is STILL in the ambiguous mixture of |0> and |1>, or else the even-odd difference makes no sense.

Again, all of this is happening BEFORE I make my operator A measurement. If I simply look at the outcome of A, I cannot tell the difference between a classical coin-toss and a QM coin-toss. But this is not what I have been trying to compare! I am trying to compare the description of the system BEFORE the measurement of observable A. This is where the classical system and the quantum system differ profoundly! Nowhere in the classical system is there such a thing as what I have described above. The head and tail outcomes never form a dynamical evolution that somehow depends on the "even-odd" combination of the two, unless I slept through all my classical mechanics classes.

Furthermore, there is nothing, in principle, to prevent me from preparing the identical situation classically and always getting the same outcome. I could take a coin, put it into some contraption that can accurately flip the coin with the same force and torque, and let it land exactly the same way on some well-known surface, etc. There is nothing in classical dynamics that indicates that I cannot do that and obtain identical results each and every time, as long as the initial conditions are all identical. There are no statistics here. Again, you can't do that in a QM system. Identically prepared systems can still give different outcomes.

Zz.

[1] J.R. Friedman et al., Nature v.406, p.43 (2000).
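ZapperZ's even/odd construction can be sketched numerically. A hedged toy model (the 2x2 Hamiltonian and the gap value Delta are assumptions chosen so that |psi_even> and |psi_odd> are its eigenstates; this is not a model of the SQUID system itself):

```python
import math

s = 1 / math.sqrt(2)
psi_even = [s, s]    # |0> + |1>, normalized
psi_odd = [s, -s]    # |0> - |1>, normalized

# Toy Hamiltonian H = -(Delta/2) * sigma_x: its eigenvectors are psi_even
# (eigenvalue -Delta/2) and psi_odd (eigenvalue +Delta/2), so the gap is Delta.
Delta = 1.0
H = [[0.0, -Delta / 2], [-Delta / 2, 0.0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

# Energy measurement on psi_even is deterministic (it is an eigenstate)...
e_even = apply(H, psi_even)[0] / psi_even[0]   # eigenvalue -Delta/2
e_odd = apply(H, psi_odd)[0] / psi_odd[0]      # eigenvalue +Delta/2
gap = e_odd - e_even                           # the well-defined energy gap

# ...while a measurement in the {|0>, |1>} basis on the same state is 50/50.
p0, p1 = (abs(a) ** 2 for a in psi_even)
print(gap, p0, p1)
```

The formal point the sketch makes is the one under debate: on this single state, one observable (the energy) gives the same outcome every time, while a non-commuting one (the |0>/|1> projector) gives a genuine 50/50 distribution.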

ZapperZ said:
OK... I just finished pulling all of my hair out and I'm completely bald now....
I envy you to be able to do that! ;). (I have to respect my remaining hair)

ZapperZ said:
I have a system which is described as the superposition of two states |1> and |0>. If I make a measurement (just ONE measurement) of the operator (call this operator A) corresponding to determining the state, I will get either the eigenvalue corresponding to |1> or the eigenvalue of |0>.

Is there a problem so far? And did I just impose some "CI" interpretation on this? Isn't this part of the "formal" formulation of QM?
Sorry, but at first I did not understand the acronym “CI”. Ok, Copenhagen interpretation :).

You are using the phrase “determining the state”, which sometimes may lead to dangerous interpretations even with classical probability (this is the main problem with CI: we may think that the state is “real” and thus deduce other strange things from the “collapse”).
I prefer, when I do not forget it, to use the word outcome: if I make a measurement of the operator A, I get (“see”) the outcome 0 or (exclusively) 1, which belongs to the spectrum of the measured observable (I am removing as far as possible any implicit interpretation). I am using the formal probability language (I am not requiring any interpretation, just logic consistent with experimental results).

However, despite the implicit interpretations, the sentence is almost OK for me. Do not forget that you are also saying that the outcome “a” condition defines a system with the state |a> (i.e. the "corresponding to the state |a>" assertion).
That means that if a measurement of the observable A has given the outcome a, then we know the statistics of the next measurement outcome on this system (given by the probability distribution attached to the state |a> and the observable B selected for this “next” measurement): this is conditional probability. And when we are doing statistical CM, we always forget that an outcome in an experiment corresponds in fact to an event (we stay with a probability density, even if it is the delta distribution).

ZapperZ said:
However, before a measurement of that operator, I have a "superposition" of both basis states, and not only that, a degenerate superposition of these basis states corresponding to the odd and even states, i.e.

|psi_odd> = |0> - |1>
|psi_even> = |0> + |1>

[ignoring normalization factors]

If I remove the degeneracy, there will be an energy gap between those two. I can measure this "deterministically".

It is not "statistics", nor does it depend on the outcome of any single measurement. Unlike the measurement of operator A, which can have two different outcomes, there is only ONE outcome of the energy difference measurement.

Thus, when you say "Each time you are evaluating the results of an experiment, you are using the probability law of the measured observable", that isn't true here. No matter how many times I make the measurement of this energy difference, the result is the same, because the energy difference measurement isn't a superposition, even if the basis states are![1] The system is STILL in the ambiguous mixture of |0> and |1>, or else the even-odd difference makes no sense.
It is really important to separate the interpretation from the logical formulation of the theory.

I do not see what you call a “degenerate superposition”. You have defined a new observable based on the |odd>, |even> states; that's ok for me.
Now you are defining a new observable energy (call it E). We can assume or not that |odd> and |even> states are eigenvectors for the same eigenvalue:
E|odd>=e_coin|odd>
E|even>=e_coin|even>

Or is it
E|0>= e_coin|0>
E|1>= e_coin|1>?

However, I will try to answer with what I think is a formal view.

What you say is that you have a system with a given state |psi>: if we make a measurement on a set of systems with this state |psi> with any observable (e.g. A), we will get the probability distribution |<a|psi>|^2 for the statistics.
If I select, formally, A=|psi><psi|, I may use the conditional probability to define the probability distribution of subsequent measurements on different observable (selection of samples with the outcome psi).
Remark that with this formal approach I do not need time, just conditions, to define the statistics of outcomes: that’s what we always do in QM for statistics computations. However, we prefer to say “we prepare the system in the state |psi>”, rather than saying “given the conditional state |psi>”.

Now, you have a conditional input state |psi>, where you perform a measurement on the observable energy (call it E). We thus have a probability distribution on this observable p(E=e)=|<e|psi>|^2 for the outcomes.

The eigenvalue may be degenerate or not; that is not the problem. I am not trying to attach the eigenvalue to a “reality” or whatever you want: that is the domain of interpretation/philosophy. I am just using the formal statement: I have single outcomes in a measurement experiment.

I may have the probability distribution p(E=e)=100% for measurements of the observable E with the input state |psi>. I still have single outcomes, and given such an outcome I have the new state (hyp: e is a non-degenerate eigenvector of E)

|psi_cond_e> = <psi|e>|e> = |e> = <odd|e>|odd> + <even|e>|even>

That now defines a new probability distribution for the observable OE (odd,even). This is simply a conditional probability distribution (or conditional state if you prefer) based on the condition that the “previous” measurement gives the outcome e.

I still have single outcomes with different classical probability distributions: the probability distributions imposed by the spectrum of the selected observable and the input conditions.

ZapperZ said:
Again, all of this is happening BEFORE I make my operator A measurement. If I simply look at the outcome of A, I cannot tell the difference between a classical coin-toss, and a QM coin-toss. But this is not what I have been trying to compare! I am trying to compare the description of the system BEFORE the measurement of observable A.
Trying to compare a system before the measurement of an observable A means nothing if you do not explain what you are comparing.
As far as I have understood your explanation, you are speaking about a measurement of the observable E with single outcomes, ok. Thereafter, you seem to say that there is something incompatible with the measurement of the observable A, or that I am preventing something; this point I do not understand (see my previous question above concerning the degenerate superposition).

Note that I am not trying to define a common probability space for all observables. I am just saying that any observable, together with the state |psi>, defines a single probability space with the probability distribution p(a)da=|<a|psi>|^2 da. This is a formal reformulation of the statement of QM measurements (we try to avoid physical/philosophical interpretations).

If you accept this statement, you can say that the coin-toss statistics are classical statistics. Nevertheless, do not forget that we are evaluating “one statistics at a time”.

I am not saying that an experiment outcome is something real, deterministic or whatever you want, this is interpretation.

You can say that a state |psi> gives 2 different statistics for 2 different observables (non commuting observables). But, saying that these statistics “exist” before the experiment is only interpretation.

Once again, you can say that you have a system in a state |psi>. But you can only measure statistics for a given observable (or set of commuting observables). In addition, these statistics (for this observable and this state) are the same as those coming from a classical probability law.

ZapperZ said:
This is where the classical system and the quantum system differ profoundly! Nowhere in the classical system is there such a thing as what I have described above. The heads and tail outcome never form a dynamical evolution that somehow depends on the "even-odd" combination of the two, unless I slept through all the classical mechanics classes.
You are mixing again the time evolution of the “probability law” and the statistics of the probability law.

QM says that we may have a measurement experiment with a given statistics (the statistics of the (odd/even) outcomes). These statistics are given by the couple (|state>, observable), which is equivalent to defining a probability law p(a)da. Remark that we are not using time, just input conditions (which may require defining a time to get the |state>).
CM also says we may have a measurement experiment with a given statistics. The statistics are given by the probability law rho(q)dq for the observable q.

Now, I am saying: just take rho(q)dq = p(a)da => the statistics of the classical outcomes are the same as those of the QM outcomes.

I am not saying that the time evolution of the probability law of classical mechanics is the same as the QM: they are different.

Now, if I am considering the time evolution, I am saying that if a classical mechanical system has the same statistical outcomes as QM, then there exists an initial probability distribution that leads to these statistics (instead of taking an initial condition, I am using a “final” condition).
Therefore, I am not saying that the initial probability distribution is the same as the initial QM one; they are surely different (or not ;): we have to calculate the SE and CM time evolutions of the states.

Now, due to the peculiar form of the statistical CM time evolution equations, it may be difficult to find a “probability source” that leads to the allowed probability densities of QM. But this is not my problem. Formally, I can do it.
I have the same problem as with QM: we need to find the “probability sources” that produce the statistics, and neither theory explains how to make a probability source (the source of randomness), just how to use it (time evolution).

ZapperZ said:
Furthermore, there is nothing, in principle, to prevent me from preparing the identical situation classically and always get the same outcome. I could take a coin, put it into some contraption that can accurately flip the coin with the same force and torque, and let it land exactly the same way on some well-known surface, etc.. etc.. There is nothing in the classical dynamics that indicates that I cannot do that and obtain the identical results each and every time as long as the initial conditions are all identical. There are no statistics here.
Yes, there are: you have an initial probability distribution p(q)dq = delta(q-q0)dq <=> P(Q=q0)=100%.
Then the statistical CM time evolution says that the final probability distribution is p(q)dq = delta(q-q1)dq.

As I have said in previous posts: do not mix the statistics with the different deterministic time evolutions of the two mechanics. The time evolution only tells you that if you start with a given probability distribution, you end, with 100% confidence, with another probability distribution.
Now, we have the main difference between QM and CM: if you take the observable Q, the time evolution of the probability distribution is different. But this does not prevent one from choosing different initial probability distributions so as to get, at the “time” of measurement, the same probability distribution: I can describe the coin-flipping results (the statistics of the head/tail outcomes) with either CM or QM.
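A minimal sketch of this delta-distribution picture (the map `evolve` is a made-up stand-in for the deterministic coin-flipping dynamics, and the numeric values q0, q1 are arbitrary):

```python
def evolve(q):
    # Hypothetical deterministic classical evolution taking q0 to q1
    return 2.0 * q + 1.0

# p(q)dq = delta(q - q0)dq, i.e. P(Q = q0) = 100%, represented as a point mass
dist0 = {0.5: 1.0}

# Deterministic evolution transports the point mass: delta(q-q0) -> delta(q-q1)
dist1 = {evolve(q): p for q, p in dist0.items()}
print(dist1)  # still 100% on a single value
```

The distribution stays a point mass throughout, which is the sense in which "no statistics" is itself a (degenerate) probability distribution.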

ZapperZ said:
Again, you can't do that in a QM system. Identically prepared systems can still give different outcomes. Zz.
Let’s take the hydrogen atom in the eigenstate |e>. Any energy measurement of atoms in this state will give the same result. Moreover, if you apply the deterministic SE time evolution, you always get the state |e> (up to a phase), and thus you still have the same statistics for subsequent energy measurements: 100% of outcomes E=e.
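A one-line numerical check of this (E and t are arbitrary made-up values; units with hbar = 1):

```python
import cmath

E, t = 2.5, 3.7               # hypothetical eigenvalue and time
amp = cmath.exp(-1j * E * t)  # <e|psi(t)> for |psi(t)> = exp(-iEt)|e>
p_e = abs(amp) ** 2           # Born probability of the outcome E = e
print(p_e)                    # stays at 1.0 (up to rounding) for every t
```

The phase evolves deterministically, but the probability |<e|psi(t)>|^2 never moves off 100%, so the energy statistics are time-independent.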

We are speaking about the statistics, not about the different deterministic time evolutions of the statistics.

Seratend.

ZapperZ
Staff Emeritus
It appears that there are MANY aspects of this discussion that are simply not getting through to one another...

seratend said:
Let’s take the hydrogen atom in the eigenstate |e>. Any energy measurement of atoms in this state will give the same result. Moreover, if you apply the deterministic SE time evolution, you always get the state |e> (up to a phase), and thus you still have the same statistics for subsequent energy measurements: 100% of outcomes E=e.

We are speaking about the statistics, not about the different deterministic time evolutions of the statistics.

Seratend.
This, I think, is the crux of the matter. I can make an energy measurement of a particular transition, and ALWAYS get the same value. But I cannot, for example, make a position measurement and always get the same value.

I can flip a coin using the contraption that I mentioned earlier, under ideal, identical conditions, and ALWAYS reproduce the outcome identically each time. There is nothing in classical mechanics, in principle, to say that I will not always get heads, with the coin landing in the exact same position, after 12 bounces, and with the head pointing North. You don't get this in the H atom.

The example of the superposition of states with odd and even parity that I gave came right out of the Stony Brook SQUID paper that I cited. The states that I wrote down are identical to the ones they used. In fact, they are identical to the ones Tony Leggett used.[1] The degeneracy between the odd and even parity states was removed using an external magnetic field, in the same way one removes the degeneracy of the orbital/spin angular momentum quantum numbers. The "energy" measurement is the energy gap between the even and odd states. This number is well-defined and not a statistic.

In all of this, the major objection (and the original reason why I intruded into this thread) is to the suggestion that, just because the outcome of a QM measurement is "statistical" in nature, and the flipping of a coin is also "statistical" in nature, these two are the same thing. On that view, since coin-flipping is actually deterministic (classical mechanics can describe it in full), it is only because we are ignorant of the fine details of the dynamics that we impose a statistical description on the event. Conclusion? QM must be that way too: there must be some "underlying" description that we don't know of, and thus QM simply reflects our ignorance of the underlying dynamics.

My question is, do you subscribe to such a view?

Zz.

[1] A.J. Leggett, J. Phys.: Condens. Matt v.14, p.415 (2002).

Randomness is just something we don't know the equation for.
There is an equation for everything; some are just too complex for us to figure out.
For instance, the equation of worldlines.