What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

  • Thread starter: Cruithne
  • Tags: Experiment

Summary
The discussion addresses confusion surrounding Young's experiment on wave-particle duality, particularly regarding the behavior of light as both waves and particles. Participants note that when photons are sent through a double-slit apparatus, they create an interference pattern, even when sent one at a time, contradicting claims that observation alters this behavior. The conversation highlights the misconception that observing light forces it to behave like particles, while in reality, it continues to exhibit wave-like properties unless specific measurements are taken. Additionally, the role of detectors and their influence on the results is debated, with references to classical interpretations of quantum phenomena. Overall, the discussion emphasizes the complexities and misunderstandings inherent in quantum mechanics and the interpretation of experimental outcomes.
  • #31
nightlight said:
You can't make the prediction of the sharp kind of correlations that can violate the Bell's inequality. You can't get them (and nobody else has done it so far) from QED and using the known properties of the detectors and sources. You get them only using the QM measurement theory with its non-dynamical collapse.

I wonder what you understand by QED, if you don't buy superpositions of states...

cheers,
Patrick.
 
  • #32
vanesch Yes, but why are you insisting on the experimental parameters such as efficiencies ? Do you think that they are fundamental ?

For the violation of the inequality, definitely. The inequality is a purely enumerative type of mathematical inequality, like a pigeonhole principle. It depends in an essential way on the fact that a sufficient percentage of result slots is filled in for the different angles, so that they cannot be rearranged to fit multiple (allegedly required) correlations. Check for example a recent paper by L. Sica where he shows that if you take three finite arrays A[n], B1[n] and B2[n], filled with the numbers +1 and -1, and you form the cross-correlation expressions (used for Bell's inequality) E1=Sum(A[j]*B1[j])/n, E2=Sum(A[j]*B2[j])/n and E12=Sum(B1[j]*B2[j])/n, then no matter how you fill in the numbers or how big the arrays are, they always satisfy Bell's inequality:

| E1 - E2 | <= 1 - E12.

So no classical data set can violate this purely enumerative inequality, i.e. the QM prediction of violation means that if we were to turn apparatus B from angle B1 to B2, the set of results on A would always have to be strictly different from what it was for B position B1. Similarly, it implies that in the actual data taken at a fixed A position and two B positions, B1 and B2, the two sequences of A results must be strictly different from each other (they would normally have roughly equal numbers of +1's and -1's in each array, so it is the arrangement that would have to differ between the two; they can't be the same even accidentally if the inequality is violated).

For the validity of this purely enumerative inequality, it is essential that a sufficient number of array slots is filled with +1 and -1; otherwise (e.g. if you put 0's in enough slots) the inequality doesn't hold. Some discussion of this result can be found in the later papers by L. Sica and by A. F. Kracklauer.
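For readers who want to see the enumerative character of the constraint directly, here is a minimal brute-force check (my own illustration, not taken from Sica's papers, which contain the actual proof): for completely filled +1/-1 arrays of a small length, no assignment of entries violates the inequality.

```python
# Brute-force check (illustration only): for fully filled +1/-1 arrays,
# no choice of entries violates |E1 - E2| <= 1 - E12.
from itertools import product

def correlation(x, y):
    return sum(u * v for u, v in zip(x, y)) / len(x)

n = 6
violations = 0
for a in product((-1, 1), repeat=n):
    for b1 in product((-1, 1), repeat=n):
        for b2 in product((-1, 1), repeat=n):
            e1, e2, e12 = correlation(a, b1), correlation(a, b2), correlation(b1, b2)
            if abs(e1 - e2) > 1 - e12 + 1e-12:   # small tolerance for float rounding
                violations += 1
print("violating triples for n =", n, ":", violations)   # prints 0
```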

That would have interesting consequences. If you claim that ALL experiments should be such that the raw data satisfy Bell's inequalities, that gives upper bounds to detection efficiencies of a lot of instruments, as a fundamental statement. Don't you find that a very strong statement ? Even though for photocathodes in the visible light range with today's technologies, that boundary is not reached ? You seem to claim that it is a fundamental limit.

There is a balancing or tradeoff between the "loopholes" in the experiments. The detection efficiency can be traded for lower polarization resolution (by going to higher energy photons). Also, the detectors can be tuned to a higher sensitivity, but then the dark current rises, blurring the results and producing more "background" and "accidental" coincidences which have to be subtracted (which is another big no-no for a loophole-free test).

For massive particles, similar tradeoffs occur as well -- if you want better detection efficiency by going to higher energies, the spin measurement resolution drops. For ultra-relativistic particles, which are detectable almost 100%, the Stern-Gerlach doesn't work at all any more and the very lossy Compton scattering spin resolution must be used.

You can check a recent paper by Emilio Santos for much more discussion of these and other tradeoffs (also in his earlier papers). Basically, there is a sort of "loophole conservation" phenomenon, so that squeezing one out makes another one grow. Detector efficiency is just one of the parameters.

But what is wrong with the procedure of taking the "toy" predictions of the ideal experiment, applying efficiency factors (which can very easily be established experimentally as well) and comparing that with the data? After all, experiments don't serve to measure things but to falsify theories.

Using the toy model prediction as a heuristic to come up with an experiment is fine, of course. But bridging the efficiency losses to obtain the inequality violation by the data requires additional assumptions such as "fair sampling" (which Santos discusses in the paper above and which I had discussed in detail in an earlier thread here, where Santos' paper was discussed as well). After seeing enough of this kind of find-a-better-euphemism-for-failure game, it all starts to look more and more like perpetuum mobile inventors thinking up excuses, ever more creatively blaming the real world's "imperfections" and "non-idealities" for the failure of their allegedly 110% overunity device.
 
  • #33
I wonder what you understand by QED, if you don't buy superpositions of states...

Where did I say I don't buy superposition of states? You must be confusing my rejection of your QM measurement theory version of the "magical" Born rule and its application to the Bell system with my support for a different Born rule (a non-collapsing, non-fundamental, approximate, apparatus-dependent rule), and jumping to the conclusion that I don't believe in superposition.
 
  • #34
nightlight said:
For the violation of inequality, definitely. The inequality is purely enumerative type of mathematical inequality, like a pigeonhole principle.

Good. So your claim is that we will never find raw data which violates Bell's inequality. That might, or might not, be the case, depending on whether you take this to be a fundamental principle and not a technological issue. I personally find it hard to believe that this is a fundamental principle, but as it stands experimentally it currently cannot be negated (I think; I haven't followed the most recent experiments on EPR-type stuff). I don't know if it has any importance at all whether we can or cannot have these raw data. The point is that we can have superpositions of states which are entangled in bases that we would ordinarily think are factorized.

However, I find your view of quantum theory rather confusing (I have to admit I don't know what you call quantum theory, honestly: you seem to use different definitions for terms than what most people use, such as the Born rule or the quantum state, or an observable).
Could you explain to me how you see the Hilbert space formalism of, say, 10 electrons, what you understand by a state of the system, and what you call the Born rule in that case?
Do you accept that the states are all the completely antisymmetrical functions in 10 3-dim space coordinates and 10 spin-1/2 states, or do you think this is not adequate?

cheers,
Patrick.
 
  • #35
So your claim is that we will never find raw data which violates Bell's inequality.

These kinds of enumerative constraints have been tightening in recent years, and the field of extremal set theory has been very lively lately. I suspect it will eventually be shown, purely enumeratively (a la Sperner's theorem or Kraft's inequalities), that for fully filled-in arrays of results satisfying perhaps a larger set of correlation constraints from the QM prediction (not just the few that Bell used), the measure of inequality-violating sets converges to zero as the constraints (e.g. for different angles) are added, in the limit of infinite arrays.

That would then put the claims of experimental inequality violation at the level of a perpetuum mobile of the second kind, i.e. one could look at such claims as if someone claimed he can flip a coin a million times and regularly get the first million binary digits of Pi. If he can, he is almost surely cheating.

That might, or might not be the case, depending on whether you take this to be a fundamental principle and not a technological issue. I personally find it hard to believe that this is a fundamental principle,

Well, the inequality is an enumerative kind of mathematical result, like a pigeonhole principle. Say someone claimed they have a special geometric layout of holes and a special pigeon-placement ordering which allows them to violate the pigeonhole-principle inequalities, so they can put > N pigeons into N holes without any hole getting multiple pigeons. When asked to show it, the inventor brings out a gigantic panel of holes arranged in strange ways and strange shapes, and starts placing pigeons, jumping in some odd ways across the panel, taking longer and longer to pick a hole as he goes on, as if calculating where to go next.

After some hours, having finished about 10% of the holes, he stops and proudly proclaims: there, it is obvious that it works, no double pigeons in any hole. Yeah, sure, just some irrelevant, minor holes left over due to the required computation. If I assume that the holes filled are a fair sample of all the holes, and I extrapolate the area I filled proportionately to the entire board, you can see that continuing in this manner I can put exactly 5/4 N pigeons in without any having to share a hole. And this is only a prototype algorithm. That's just a minor problem which will be solved as I refine the algorithm's performance. After all, it would be implausible that an algorithm which worked so well so far, in just rough prototype form, would fail when polished to full strength.

This is precisely the kind of claim we have for the Bell inequality tests: with 10% of the coincidences filled in, they claim to violate an enumerative inequality for which a fill-up of at least 82% is absolutely vital to even begin looking at it as a constraint. As Santos argues in the paper mentioned, the ad hoc "fair sampling" conjecture used to fast-talk over the failure is a highly absurd assumption in this context (see also the earlier thread on this topic).
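To make the role of the fill-up concrete, here is a toy illustration of my own (hand-made numbers, not a model of any actual experiment): once 0 is allowed as a "no detection" entry and each correlation is computed only over its own coincident slots, the filled-array constraint above no longer binds the post-selected numbers.

```python
# Toy illustration only: post-selecting on coincidences removes the
# enumerative constraint that holds for completely filled arrays.
def corr_over_coincidences(x, y):
    pairs = [(u, v) for u, v in zip(x, y) if u != 0 and v != 0]
    return sum(u * v for u, v in pairs) / len(pairs) if pairs else 0.0

A  = [+1, +1, -1, -1]
B1 = [+1,  0, -1,  0]        # 0 marks a missed detection on the B side
B2 = [ 0, -1,  0, +1]

E1  = corr_over_coincidences(A,  B1)    # +1 (slots 0 and 2 only)
E2  = corr_over_coincidences(A,  B2)    # -1 (slots 1 and 3 only)
E12 = corr_over_coincidences(B1, B2)    #  0 by convention (no common slots at all)

print(abs(E1 - E2) <= 1 - E12)          # False: the post-selected numbers give 2 <= 1
```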

And the often-heard invocation of the implausibility of QM failing with better detectors is as irrelevant as the pigeonhole-algorithm inventor's assertion of the implausibility of a better algorithm failing -- it is a complete non-sequitur in the enumerative-inequality context. Especially recalling the closed-loop, self-serving, circular nature of Bell's No-LHV QM "prediction" and the vital tool used to prove it, the collapse postulate (the non-dynamical, non-local evolution, a vaguely specified suspension of dynamics), which in turn was introduced into QM for the sole reason of "solving" the No-HV problem with measurement. And the sole reason it is still kept is to "solve" the remaining No-LHV problem, the one resulting from Bell's theorem, which in turn requires that very same collapse in a vital manner to violate the inequality.

Since nothing else needs either of the two pieces, the two only serve to prop each other up while predicting no other empirical consequences for testing except for causing each other (if Bell's violation were shown experimentally, that would support the distant system-wide state collapse; no other test for the collapse exists).

The point is that we can have superpositions of states which are entangled in bases that we would ordinarily think are factorized.

The problem is not the superposition but the adequacy of the overall model (of which the state is a part), and secondarily, the attribution of a particular state to a given preparation.

However, I find your view of quantum theory rather confusing (I have to admit I don't know what you call quantum theory, honestly: you seem to use different definitions for terms than what most people use, such as the Born rule or the quantum state, or an observable).

The non-collapse version of the Born rule (as an approximate operational shortcut) has a long tradition. If you have learned just one approach and one perspective fed from a textbook, with the usual pedagogical cheating on proofs and skirting of opposing or different approaches (to avoid confusing a novice), then yes, it could appear confusing. Any non-collapse approach takes this kind of Born rule, which goes back to Schroedinger.

Could you explain to me how you see the Hilbert space formalism of, say, 10 electrons, what you understand by a state of the system, and what you call the Born rule in that case? Do you accept that the states are all the completely antisymmetrical functions in 10 3-dim space coordinates and 10 spin-1/2 states, or do you think this is not adequate?

For the non-relativistic approximation, yes, it would be an antisymmetrical 10*3 spatial coordinate function with a Coulomb interaction Hamiltonian. This is still an external-field approximation, which doesn't account for the self-interaction of the EM and the fermion fields. Asim Barut had worked out a scheme which superficially looks like the self-consistent effective field methods (such as Hartree-Fock), but underneath it is an updated version of the old Schroedinger idea of treating the matter field and the EM field as two interacting fields in a single 3D space. The coupled Dirac-Maxwell equations form a nonlinear set of PDEs. He shows that this treatment is equivalent to the conventional QM 3N-dimensional antisymmetrized N-fermion equations, but with a fully interacting model (which doesn't use the usual external-EM-field or external-charge-current approximations). With that approach he and his graduate students reproduced the full non-relativistic formalism and the leading orders of the radiative corrections of perturbative QED, without needing renormalization (no point particles, no divergences). You can check a list of his preprints on this topic on the KEK server: http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR=
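Schematically, the coupled system being referred to has the following shape (my own transcription; signs, units and gauge terms vary between Barut's papers, so read it only as an indication of where the nonlinearity sits, not as any particular paper's equations):

```latex
\begin{align}
  \left(i\gamma^{\mu}\partial_{\mu} - m\right)\psi
      &= e\,\gamma^{\mu}A_{\mu}\,\psi, \\
  \partial_{\nu}F^{\nu\mu} &= e\,\bar{\psi}\gamma^{\mu}\psi,
  \qquad F^{\nu\mu} = \partial^{\nu}A^{\mu} - \partial^{\mu}A^{\nu},
\end{align}
```

with A_mu containing the field generated by the matter current itself (the "self-field"), which is what makes the coupled set nonlinear in psi.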

Regarding the Born rule for this atom, the antisymmetrization and the strong interaction make the concept of an individual electron here largely meaningless for any discussion of the Bell correlations (via the Born rule). In Barut's approach this manifests in assuming no point electrons at all but simply using a single fermion matter field (normalized to a 10-electron charge; he does have separate models of charge quantisation, but these are still somewhat sketchy) which has the identical scattering properties as the conventional QM/QED models, i.e. the Born rule is valid in the limited, non-collapsing sense. E. T. Jaynes has a similar perspective (see his paper 'Scattering of Light by Free Electrons'); unfortunately, both Jaynes and Barut died a few years ago, so if you have any questions or comments, you'll probably have to wait a while since I am not sure that emails can go there and back.
 
  • #36
vanesch Good. So your claim is that we will never find raw data which violates Bell's inequality.

Just to highlight the implications of Sica's theorem a bit for the experimental tests of Bell's inequality.

Say you have an ideal setup, with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 into arrays A[n] and B[n]. Since p(+)=p(-)=1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.

You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction to obtain the data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n] -- it must have its +1s and -1s arranged differently (although still in a 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself while avoiding, in some way, ever arranging itself as A1[n].

Now, if you hadn't done the (A,B1) test, then there is no such constraint on what A2[n] can be. To paraphrase a kid's response when told that a thermos bottle keeps hot liquids hot and cold liquids cold -- "How do it know?"

Or, another twist: you take 99 different angles for B and obtain the data sets A1[n],B1[n]; A2[n],B2[n]; ... A99[n],B99[n]. Now you're ready for the angle B100. This time A100[n] has to keep rearranging itself to avoid matching all 99 previous arrays Ak[n].

Then you extend the above and, say, collect r=2^n data sets for 2^n different angles (they could all be the same angle, too). This time, at the next angle B_(2^n+1), the data A_(2^n+1)[n] would have to avoid all 2^n previous arrays Ak[n], which it can't do. So in each such test there would be at least one failed QM prediction, for at least one angle, since that Bell inequality would not be violated.
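That last step is plain pigeonhole counting, with no physics input; a short sketch of the counting (my own illustration):

```python
# There are only 2**n distinct +1/-1 arrays of length n, so a collection of
# 2**n + 1 result arrays must contain a repeat -- "avoid matching every
# previous A array" cannot be kept up indefinitely.
from itertools import product

n = 4
all_arrays = list(product((-1, +1), repeat=n))
print(len(all_arrays))                      # 2**n = 16 distinct arrays in total

collection = all_arrays + [all_arrays[0]]   # any 2**n + 1 arrays whatsoever
assert len(set(collection)) <= 2 ** n       # so at least two of them coincide
```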

Then you take 2^n*2^n previous tests, ... and so on. As you go up, it gets harder for the inequality violator; its count of negative tests has a guaranteed growth. Also, I think this theorem is not nearly restraining enough, and the real situation is much worse for the inequality violator (as simple computer enumerations suggest when counting the percentages of violation cases for finite data sets).

Or, you go back and start testing, say, angle B7 again. Now the QM magician in heaven has to allow the new A7[n] to be the same as the old A7[n], which was prohibited up to that point. You switch B to B9, and now the QM magician has to disallow again the match with A7[n] and allow the match with the old A9[n], which was prohibited until now.

Where is the memory for all that? And what about the elaborate mechanisms or the infrastructure needed to implement the avoidance scheme? And why? What is the point of remembering all that stuff? What does it (or anyone/anything anywhere) get in return?

The conjectured QM violation of Bell's inequality basically looks sillier and sillier once these kinds of implications are followed through. It is no longer mysterious or puzzling but plainly ridiculous.

And what do we get from the absurdity? Well, we get the only real confirmation for the collapse, since Bell's theorem uses the collapse to produce the QM prediction which violates the inequality. And what do we need the collapse for? Well, it helps "solve" the measurement problem. And why is there a measurement problem? Well, because Bell's theorem shows you can't have LHVs to produce definite results. Anything else empirical from either? Nope. What a deal.

The collapse postulate first lends a hand to prove Bell's QM prediction, which in turn, via the LHV prohibition, creates a measurement problem, which the collapse then "solves" (thank you very much). So the collapse postulate creates a problem and then solves it. What happens if we take the collapse postulate out altogether? No Bell's theorem, hence no measurement problem, hence no problem at all. Nothing else is sensitive to the existence (or the lack) of the collapse but the Bell inequality experiment. Nothing else needs the collapse. It is a parasitic historical relic in the theory.
 
  • #37
nightlight said:
vanesch Good. So your claim is that we will never find raw data which violates Bell's inequality.

Just to highlight the implications of Sica's theorem a bit for the experimental tests of Bell's inequality.

Say you have an ideal setup, with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 into arrays A[n] and B[n]. Since p(+)=p(-)=1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.

You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction to obtain the data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n] -- it must have its +1s and -1s arranged differently (although still in a 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself while avoiding, in some way, ever arranging itself as A1[n].

Now, if you hadn't done the (A,B1) test, then there is no such constraint on what A2[n] can be.

I'm not sure I understand what you are at. I will tell you what I make of what is written out above, and you correct me if I'm not understanding it right, ok ?

You say:
Let us generate a first sequence of couples:
S1 = {(a_1(n),b_1(n)) for n running from 1:N, N large}
Considering a_1 = (a_1(n)) as a vector in N-dim Euclidean space, we can require a certain correlation between a_1 and b_1. So a_1 and b_1 are to have an angle which is not 90 degrees.

We next generate other sequences, S2, S3 ... S_M with similar, or different, correlations. I don't know Sica's theorem, but what it states seems quite obvious, if I understand so: a_1, a_2 ... are to be considered independent, and hence uncorrelated, or approximately orthogonal in our Euclidean space. The corresponding b_2, b_3 etc. are also to be considered approximately orthogonal amongst themselves, but are correlated with similar or different correlations (angles different from 90 degrees in that Euclidean space) with their partner a_n. Sounds perfectly all right to me. Where's the problem? You have couples of vectors making an angle (say 30 degrees) which are essentially orthogonal between pairs in E_N. Of course, for this to hold, N has to be much, much larger than the number of sequences M. So you have a very high dimensionality in E_N. I don't see any trouble; moreover, this is classical statistics. Where's the trouble with Bell and company? It is simply an expression about the angles between the pairs, no?
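A small numerical sketch of this geometric picture (my own synthetic data, reading the correlation of two +1/-1 sequences as the cosine of the angle between them in E_N):

```python
import numpy as np
rng = np.random.default_rng(0)

N, M = 100_000, 5
target = np.cos(np.radians(30))        # want <a_i.b_i> ~ cos(30 deg) for each pair

def correlated_pair(rng, N, c):
    a = rng.choice([-1, 1], size=N)
    flip = rng.random(N) < (1 - c) / 2  # flipping with prob (1-c)/2 gives <a.b> = c
    return a, np.where(flip, -a, a)

pairs = [correlated_pair(rng, N, target) for _ in range(M)]
for a, b in pairs:
    print("angle(a_i, b_i):", np.degrees(np.arccos(a @ b / N)))   # ~30 degrees each
a1, a2 = pairs[0][0], pairs[1][0]
print("angle(a_1, a_2):", np.degrees(np.arccos(a1 @ a2 / N)))     # ~90 degrees
```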

cheers,
patrick.
 
  • #38
vanesch I don't know Sica's theorem, but what it states seems quite obvious,

It is the array correlation inequality statement I gave earlier, which also had the link to Sica's preprint.

if I understand so: a_1, a_2 ... are to be considered independent, and hence uncorrelated, or approximately orthogonal in our Euclidean space. ... Sounds perfectly all right to me. Where's the problem ?

You were looking for a paradox or a contradiction while I was pointing at a peculiarity for a specific subset of sequences. The most probable and the average behavior is, as you say, approximate orthogonality among a1, a2,... or among b1, b2,... There is no problem here.

Sica's result is stronger, though -- it implies that if the two sets of measurements (A,B1) and (A,B2) satisfy Bell's QM prediction, then it is necessary that the a1 and a2 vectors in E_N be explicitly different -- a1 and a2 cannot be parallel, or even approximately parallel.

With just these two data sets, the odds of a1 and a2 being nearly parallel are very small, of the order 1/2^N. But if you have more than 2^N such vectors, they cannot all satisfy the QM prediction's requirement that each remain non-parallel to all the others. What is abnormal about this is that it means at least one test is guaranteed to fail, which is quite different from what one would normally expect of a statistical prediction: one or more finite tests may fail (due to a statistical fluctuation). Sica's result implies that at least one test must fail, no matter how large the array sizes.

What I find peculiar about it is that there should be any requirement of this kind at all between two separate sets of measurements. To emphasize the peculiarity, consider that each of the vectors a1 and a2 will have roughly a 50:50 split between +1 and -1 values. So it is the array ordering convention for the individual results in the two separate experiments that is constrained by the requirement of satisfying the alleged QM prediction.

The peculiarity of this kind of constraint is that the experimenter is free to label the individual pairs of detection results in any order he wishes, i.e. he doesn't have to store the results of (A,B2) into the A2[] and B2[] arrays so that the array indices follow the temporal order. He can store the first pair of detection results into A2[17], B2[17], the second pair into A2[5], B2[5], ... etc. This labeling is purely a matter of convention, and no physics should be sensitive to such a labeling convention. Yet Sica's result implies that there is always a labeling convention for these assignments which yields a negative result for the test of Bell's QM prediction (i.e. it produces the classical inequality).

Let's now pursue the oddity one more step. The original labeling convention for the experiment (A,B2) was to map a larger time of detection to a larger index. But you could have mapped it the opposite way, i.e. larger time to smaller index. The experiments should still succeed, i.e. violate the inequality (with any desired certainty, provided you pick N large enough). Now, you could have ignored the time of detection and used a random number generator to pick the next array slot to put the result in. You still expect the experiment to almost always succeed. It shouldn't matter whether you use a computer-generated pseudo-random generator or flip a coin. Now, the array a1 is equivalent to a sequence of coin flips, as random as one can get. So we use that sequence for our labeling convention, to allocate the next slot in the arrays a2, b2. With a1 used as the labeling seed, you can make a2 parallel to a1 100% of the time. Thus there is a labeling scheme for the experiments (A,B2), (A,B3), ... which makes all of them always fail the test of Bell's QM prediction.
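Here is a sketch of that relabeling step on synthetic stand-in data (my own illustration, assuming an exact 50:50 split of +1/-1 on the A side so that the counts match): permuting whole (a,b) pairs is pure bookkeeping and cannot change <a2.b2>, yet the permutation can be chosen so that the relabeled a-column of run 2 coincides exactly with a1.

```python
import numpy as np
rng = np.random.default_rng(1)

def run(rng, n):                        # stand-in data; exact 50:50 a-column assumed
    a = np.repeat([+1, -1], n // 2)
    rng.shuffle(a)
    return a, rng.choice([-1, 1], size=n)

n = 10_000
a1, b1 = run(rng, n)
a2, b2 = run(rng, n)

perm = np.empty(n, dtype=int)           # perm[i] = run-2 slot relabeled as slot i
perm[np.flatnonzero(a1 == +1)] = np.flatnonzero(a2 == +1)
perm[np.flatnonzero(a1 == -1)] = np.flatnonzero(a2 == -1)
a2r, b2r = a2[perm], b2[perm]

assert np.array_equal(a2r, a1)                   # run-2 a-column now identical to a1
assert np.dot(a2, b2) == np.dot(a2r, b2r)        # <a2.b2> untouched by the relabeling
print(np.dot(b1, b2) / n, np.dot(b1, b2r) / n)   # <b1.b2>, by contrast, does change
```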

Now, you may say, you are not allowed to use a1 for your array labeling convention in experiment (A,B2). Well, OK, so this rule for the labeling must be added to the QM postulates, since it doesn't follow from the existing postulates. And we now have another postulate that says, roughly:

COINCIDENCE DATA INDEX LABELING POSTULATE: if you are doing a photon correlation experiment and your setup has an efficiency above 82%, and you desire to uphold the collapse postulate, you cannot label the results in any order you wish. Specifically, your labeling algorithm cannot use the data from any other similar photon correlation experiment which has one polarizer axis parallel to your current test and which has a setup efficiency above 82%. If none of the other experiment's axes is parallel to yours, or if its efficiency is below 82%, then you are free to use the data of said experiment in your labeling algorithm. Also, if you do not desire to uphold the collapse postulate, you are free to use any labeling algorithm and any data you wish.

That's what I see as peculiar. Not contradictory or paradoxical, just ridiculous.
 
  • #39
nightlight said:
vanesch I don't know Sica's theorem, but what it states seems quite obvious,

It is the array correlation inequality statement I gave earlier, which also had the link to Sica's preprint.

I'll study it... even if I still think that you have a very peculiar view of things, it merits some closer look because I can learn some stuff too...

cheers,
Patrick.
 
  • #40
vanesch said:
I'll study it... even if I still think that you have a very peculiar view of things, it merits some closer look because I can learn some stuff too...

cheers,
Patrick.

Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.
If you have 2 series of measurements, (a,b) and (a',b'), and you REORDER the second stream so that a = a', then of course the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.
So the point of the paper escapes me completely.

Look at an example.
Suppose we had some Bantum Theory, which predicts that <a.b> = 0, <a.b'> = 1 and <b.b'> = 1. You cannot have any harder violation of equation (3). (Quantum theory is slightly nicer).
Now, Bantum theory also only allows you to confront two measurements at a time.

First series of experiments: a and b:
(1,1), (1,-1),(-1,1),(-1,-1),(1,1), (1,-1),(-1,1),(-1,-1)

Clearly, we have equal +1 and -1 in a and in b, and we have <a.b> = 0.

Second series of experiments: a and b':
(1,1),(1,1),(-1,-1),(-1,-1),(1,1),(1,1),(-1,-1),(-1,-1),

Again, we have equal amount of +1 and -1 in a and b', and <a.b'> = 1.
Note that I already put them in order of a.

Third series of experiments: b and b':
(1,1),(1,1),(-1,-1),(-1,-1),(1,1),(1,1),(-1,-1),(-1,-1)

Note that for the fun of it, I copied the previous one. We have <b.b'> = 1, and an equal amount of +1 and -1 in b and b'.

There is no fundamental reason why we cannot obtain these measurement results, is there? If experiments confirm this, Bantum theory is right. Nevertheless, does |<a.b> - <a.b'>| <= 1 - <b.b'> hold? It would require |0 - 1| <= 1 - 1, i.e. 1 <= 0.
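For completeness, plugging the three series above into the expression (a trivial check of my own):

```python
# Each series is an unremarkable list of +1/-1 pairs, yet the three separately
# measured correlations, taken together, fail |<a.b> - <a.b'>| <= 1 - <b.b'>.
def corr(pairs):
    return sum(x * y for x, y in pairs) / len(pairs)

ab   = [(1, 1), (1, -1), (-1, 1), (-1, -1)] * 2     # <a.b>  = 0
ab_p = [(1, 1), (1, 1), (-1, -1), (-1, -1)] * 2     # <a.b'> = 1
bb_p = [(1, 1), (1, 1), (-1, -1), (-1, -1)] * 2     # <b.b'> = 1

lhs, rhs = abs(corr(ab) - corr(ab_p)), 1 - corr(bb_p)
print(lhs, "<=", rhs, "?", lhs <= rhs)              # 1.0 <= 0.0 ? False
```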


cheers,
Patrick.
 
  • #41
vanesch said:
If you have 2 series of measurements, (a,b) and (a',b'), and you REORDER the second stream so that a = a', then of course the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.

I can even add that (a,b) and (a',b') have absolutely nothing to do with (b,b') as a new measurement. I have seen this kind of reasoning to refute EPR kinds of experiments or theoretical results several times now, and they are all based on a fundamental misunderstanding of what exactly quantum theory, as most people accept it, really predicts.

This doesn't mean that these discussions aren't interesting. However, you should admit that your viewpoint isn't so obvious as to call the people who take the standard view blatant idiots. Some work on the issue can be done, but one should keep an open mind. I have to say that I have difficulties seeing the way you view QM, because it seems to jump around certain issues in order to religiously fight the contradiction with Bell's identities. To me, they don't really matter so much, because the counterintuitive aspects of QM are strongly illustrated in EPR setups, but they are already present from the moment you accept superposition of states and the Born rule.

I've seen up to now two arguments: one is that you definitely need the projection postulate to deduce Bell's inequality violation, which I think is a wrong statement, and second that numerically, out of real data, you cannot hope to violate systematically Bell's inequality, which I think is also misguided, because a local realistic model is introduced to deduce these properties.

cheers,
Patrick.
 
  • #42
vanesch Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.

... the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.


The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings up that question and explicitly preserves <b.b'>. Additional analysis of the same question is in his later paper. It is not necessary to change the sum <b1.b2> even though the individual elements of the arrays b1[] and b2[] are reshuffled. Namely, there is a great deal of freedom when matching the a1[] and a2[] elements, since any +1 from a2[] can match any +1 from a1[], allowing for [(N/2)!]^2 ways to match the N/2 +1's and N/2 -1's between the two arrays. The constraint from <b1.b2> requires only that the sum is preserved in the permutation, which is a fairly weak constraint.

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple bookkeeping for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until reaching it within a maximum error of 1/N.
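As a rough sketch only (this is NOT the algorithm from Sica's papers, just my own demonstration of the kind of move being described): swapping b2 entries between two slots that carry the same a2 value leaves sum(a2*b2) exactly unchanged, while moving sum(b1*b2) in discrete steps (by 4 per swap in this particular bookkeeping), so that sum can be steered toward a target.

```python
import numpy as np
rng = np.random.default_rng(2)

n = 2_000
a2 = rng.choice([-1, 1], size=n)
b1 = rng.choice([-1, 1], size=n)
b2 = rng.choice([-1, 1], size=n)

ab_before = int(a2 @ b2)
target = int(0.5 * n)                 # arbitrary illustrative target for sum(b1*b2)

for s in (-1, +1):
    # slot pairs with the same a2 value where one swap raises sum(b1*b2) by 4
    i_idx = np.flatnonzero((a2 == s) & (b1 == +1) & (b2 == -1))
    j_idx = np.flatnonzero((a2 == s) & (b1 == -1) & (b2 == +1))
    for i, j in zip(i_idx, j_idx):
        if int(b1 @ b2) >= target:
            break
        b2[i], b2[j] = b2[j], b2[i]   # swap within an equal-a2 pair of slots

assert int(a2 @ b2) == ab_before                       # <a2.b2> preserved exactly
print("sum(b1*b2) =", int(b1 @ b2), "target =", target)
```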

Let me know if you have any problem replicating the proof, then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit of a tedium).
 
  • #43
nightlight said:
The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings up that question and explicitly preserves <b.b'>.

Ok, I can understand that maybe there's a trick to reshuffle the b2[] in such a way as to rematch the <b.b'> that would be present if there were a local realistic model (because that is what is hidden in Sica's paper, see below). I didn't check it, and indeed I must have missed that in Sica's paper. However, I realized later that I was tricked into the same reasoning error as is often the case in these issues, and that's why I posted my second post.
There is absolutely no link between the experiments [angle a, angle b] and [angle a, angle b'] on the one hand, and a completely new experiment [angle b, angle b'] on the other. The whole manipulation of the data series tries to find a correlation <b.b'> from the first two experiments, and it only looks meaningful because there is a notational equivalence (namely the letters b and b') between the second data streams of those first two experiments and the first and second data streams of the third experiment. So I will now adjust the notation:
First experiment: analyser 1 at angle a, and analyser 2 at angle b, results in a data stream {(a1[n],b1[n])}, shortly {Eab[n]}
Second experiment: analyser 1 at angle a, and analyser 2 at angle b', results in a datastream {(a2[n],b'2[n])} , shortly {Eab'[n]}.
Third experiment: analyser 1 at angle b, and analyser 2 at angle b', results in a datastream {(b3[n],b'3[n])}, shortly {Ebb'[n]}.

There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn; this is nothing else but a local realistic model, for which indeed Bell's inequalities must hold. The confusion seems to come from the fact that one tries to construct a <b.b'> from data that weren't generated in the Ebb' condition, but in the Eab and Eab' conditions. Indeed, if these b and b' streams were predetermined, this reasoning would hold. But it is the very prediction of standard QM that they aren't. So the Ebb' case has the liberty of generating completely different correlations from those of the Eab and Eab' cases.

That's why I gave my (admittedly rather silly) counter example in Bantum mechanics. I generated 3 "experimental results" which are each perfectly ordinary data streams, yet whose correlations, taken together, maximally violate inequality (3).

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple bookkeeping for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until reaching it within a maximum error of 1/N.

Contrary to what Sica writes below his equation (3) in the first paper, he DID introduce an underlying realistic model, namely the assumption that the correlations of b and b' in the case of Eab and Eab' have anything to do with the correlations of Ebb'.

Let me know if you have any problem replicating the proof, then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit of a tedium).

I'll give it a deeper look; indeed it escaped me, it must be what the phrase starting on line 14 of p6 alludes to (I had put a question mark next to it!). But even if I agree with it, the point is moot, because, as I said, THIS correlation between the b and b' trains (in my notation b1 and b'2) should a priori have nothing to do with the correlation between b3 and b'3. In fact, I now realize that you can probably fabricate every thinkable correlation between b1 and b'2 that is compatible with (3) by reshuffling b'2[n], so this correlation is not even well-defined. Nevertheless, by itself the result is interesting, because it illustrates very well a fundamental misunderstanding of standard quantum theory (at least I think so :-). I think I illustrated the point with my data streams in Bantum theory; however, because I tricked them by hand you might of course object. If you want, I can generate a few more realistic series of data which will ruin what I think Sica is claiming when he writes (lower part of p9): "However, violation of the inequality by the correlations does imply that they cannot represent any data streams that could possibly exist or be imagined".

cheers,
Patrick.
 
  • #44
vanesch I can even add that (a,b) and (a',b') have absolutely nothing to do with (b,b') as a new measurement.

You may need to read part two of the paper, where the connection is made more explicit, and also check the original Bell 1964 paper (especially Bell's Eqs. (8) and (13), which utilize the perfect correlations for the parallel and anti-parallel apparatus orientations to move implicitly between the measurements on B and A, in the QM or in the classical model; these are essential steps for the operational interpretation of the three-correlation case).


I have seen this kind of reasoning to refute EPR kinds of experiments or theoretical results several times now, and they are all based on a fundamental misunderstanding of what exactly quantum theory, as most people accept it, really predicts.

That wasn't a refutation of Bell's theorem or of the experiments, merely a new way to see the nature of the inequality. It is actually quite similar to an old visual proof of Bell's theorem by Henry Stapp from the late 1970s (I had it while working on my master's thesis on this subject; it was an ICTP preprint that my advisor brought back from a conference in Trieste).

However, you should admit that your viewpoint isn't so obvious as to call the people who take the standard view blatant idiots.

I wouldn't do that. I was under the spell for quite a few years, even though I did a master's degree on the topic, had read quite a few papers and books at the time, and had spent untold hours discussing it with advisors and colleagues. It was only after leaving academia and forgetting about it for a few years, then happening to get involved a bit helping my wife (also a physicist, but an experimentalist) with some quantum optics coincidence setups, that it struck me as I was looking at the code that processed the data -- wait a sec, this is nothing like what I imagined. All the apparent firmness of the assertions, such as A goes here, B goes there, ... in the textbooks and papers, rang pretty hollow.

On the other hand, I do think it won't be too long before the present QM mystification is laughed at by the next generation of physicists. Even giants like Gerard 't Hooft are ignoring Bell's and the other no-go theorems and exploring purely local sub-quantum models (the earlier heretics such as Schroedinger, Einstein, de Broglie, and later Dirac, Barut, Jaynes, ... weren't exactly midgets, either).

...Bell's identities. To me, they don't really matter so much, because the counterintuitive aspects of QM are strongly illustrated in EPR setups, but they are already present from the moment you accept superposition of states and the Born rule.

There has been nothing odd about superposition at least since Maxwell and Faraday. It might surprise you, but plain old classical EM fields can do entanglement, GHZ states, qubits, quantum teleportation, quantum error correction, ... and the rest of the buzzwords -- just about everything but the non-local, non-dynamical collapse; that bit of magic they can't do (check the papers by Robert Spreeuw, also http://remote.science.uva.nl/~spreeuw/lop.htm ).

I've seen up to now two arguments: one is that you definitely need the projection postulate to deduce Bell's inequality violation, which I think is a wrong statement

What is your (or rather, the QM Measurement theory's) Born rule but a projection -- that's where your joint probabilities come from. Just recall that the Bell test can be viewed as a preparation of system B in, say, the pure |B+> state, which appears as the sub-ensemble of B for which A produced the (-1) result. The state of A+B, which is initially a pure state, collapses into the mixture rho = 1/2 |+><+|x|-><-| + 1/2 |-><-|x|+><+|, from which one can identify the sub-ensemble of a subsystem, such as |B+> (without going to the mixture, statements such as "A produced -1" are meaningless, since the initial state is a superposition and spherically symmetrical). Unitary evolution can't do that without the non-dynamical collapse (see von Neumann's chapter on the measurement process, his infinite chain problem, and why you have to have it).
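A small numpy illustration of the conditioning step being described (my own sketch, assuming the spin singlet and measurements along z, with the convention that |down> on A corresponds to the -1 outcome):

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)   # (|+-> - |-+>)/sqrt(2)
rho = np.outer(singlet, singlet)                             # pure two-particle state

def partial_trace_A(rho4):             # trace out the first subsystem of a 4x4 matrix
    return np.einsum('ijik->jk', rho4.reshape(2, 2, 2, 2))

print(partial_trace_A(rho))            # unconditioned B state: maximally mixed, I/2

P_A_minus = np.kron(np.outer(dn, dn), np.eye(2))             # projector "A gave -1"
rho_cond = P_A_minus @ rho @ P_A_minus
rho_cond /= np.trace(rho_cond)
print(partial_trace_A(rho_cond))       # conditional B state: the pure |B+><B+|
```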

You may imagine that you never used the state |B+>, but you did, since you used the probabilities for the B system (via the B-subspace projector within your joint-probability calculation) which only that state can reproduce (the state is unique once all the probabilities are given, according to Gleason).

Frankly, you're the only one ever to deny using collapse in deducing Bell's QM prediction. Only a suspension of the dynamical evolution can arrive from the purely local PDE evolution equations (the 3N-coordinate relativistic Dirac-Maxwell equations for the full system, including the apparatus) at a prediction which prohibits any purely local PDE based mechanism from reproducing such a prediction. (You never explained how local PDEs can do it without a suspension of the dynamics, except for trying to throw into the equations, as a premise, the approximate, explicitly non-local/non-relativistic instantaneous potentials.)


and second that numerically, out of real data, you cannot hope to violate systematically Bell's inequality, which I think is also misguided,

I never claimed that Sica's result rules out, mathematically or logically, an experimental confirmation of Bell's QM prediction. It only makes those predictions seem stranger than one would have thought from the usual descriptions.

The result also sheds light on the nature of Bell's inequalities -- they are enumerative constraints, a la the pigeonhole principle. Thus the usual euphemisms and excuses used to soften the plain simple fact of over three decades of failed tests are non-sequiturs (see a couple of messages back for the explanation).
 
  • #45
nightlight said:
What is your (or rather, the QM Measurement theory's) Born rule but a projection -- that's where your joint probabilities come from.

Yes, that was indeed the first question I asked you: to me, the projection postulate IS the Born rule. However, I thought you aimed at the subtle difference between calculating probabilities (Born rule) and the fact that AFTER the measurement, the state is assumed to be the eigenstate corresponding to the measurement result, and I thought it was the second part that you were denying, but accepting the absolute square of the inner product as the correct probability prediction. As you seem to say yourself, it is very difficult to disentangle the two!

The reason why I say that you seem to deny superposition in its general sense is that without the Born rule (inner product squared = probability) the Hilbert space has no meaning. So if you deny the possibility for me to use that rule on the product space, this means you deny the existence of that product space, and hence the superpositions of states such that the result is not a product state. You need that Born rule to DEFINE the Hilbert space. It is the only link to physical results. So in my toy model in a 2x2-D Hilbert space, I can think of measurements (observables, Hermitean operators) which can, or cannot, factor into 1 x A or A x 1; it is my choice. If I choose to have a "global measurement" which says "result +1 for system 1 and result -1 for system 2", then that is ONE SINGLE MEASUREMENT, and I do not need to use any fact of "after the measurement, the state is in an eigenstate of ...". I need this kind of specification in order for the product space to be defined as a single Hilbert space, and hence to allow for the superposition of states across the products. Denying this is denying the superposition.
However, you do need a projection, indeed, to PREPARE any state. As I said, without it there is no link between the Hilbert space description and any physical situation. The preparation here is the singlet state. But in ANY case, you need some link between an initial state in Hilbert space and the corresponding physical setup.

Once you've accepted that superposition in the singlet state, it should be obvious that unitary evolution cannot undo it. So, locally (meaning, acting on the first part of the product space), you can complicate the issue as much as you like; there's no way in which you can undo the correlation. If you accept the Born rule, then NO MATTER WHAT HAPPENS LOCALLY, these correlations will show up, violating Bell's inequalities in the case of 100% efficient experiments.
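A sketch of the kind of calculation being invoked here (my own illustration, using the singlet state, spin observables in the x-z plane and the standard CHSH angles; the correlations come out of Born-rule expectation values alone, with no post-measurement state used anywhere):

```python
import numpy as np

def spin_op(theta):                    # spin observable along an angle in the x-z plane
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

def E(theta_a, theta_b):               # <A(a) B(b)> = -cos(theta_a - theta_b) for the singlet
    return singlet @ np.kron(spin_op(theta_a), spin_op(theta_b)) @ singlet

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))                          # 2*sqrt(2) ~ 2.83, above the CHSH bound of 2
```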

cheers,
Patrick.
 
  • #46
vanesch There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn ; this is nothing else but a local realistic model, for which indeed, Bell's inequalities must hold.

The whole point of the reshuffling was to avoid the need for an underlying model (the route Bell took) in order to get around the non-commutativity of b, b' and the resulting counterfactuality (and thus having to do the third experiment, as any Bell inequality test does) when trying to compare their correlations in the same inequality (that was precisely the point of Bell's original objection to von Neumann's proof). See both of Sica's papers (and Bell's 1964 paper; also useful is the related 1966 paper [Rev. Mod. Phys. 38, 447-52] on von Neumann's proof) to see how much work it took to weave the logic around the counterfactuality and the need for either the third experiment or the underlying model.
 
  • #47
nightlight said:
vanesch There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn ; this is nothing else but a local realistic model, for which indeed, Bell's inequalities must hold.

The whole point of the reshuffling was to avoid the need for an underlying model (the route Bell took) in order to get around the non-commutativity of b, b' and the resulting counterfactuality (and thus having to do the third experiment, as any Bell inequality test does) when trying to compare their correlations in the same inequality (that was precisely the point of Bell's original objection to von Neumann's proof).

Ah, well, I could have helped them out back then :smile: :smile:. It was exactly the misunderstanding of what exactly QM predicts that I tried to point out. The very supposition that the first two b[n] and b'[n] series should have anything to do with the result of the third experiment means 1) that one didn't understand what QM said and didn't say, but also 2) that one supposes that these series were drawn from a master distribution, which would have yielded the same b[n] and b'[n] if we had happened to choose to measure those instead of (a,b) and (a,b'). This supposition by itself (which comes down to saying that the <b1[n],b'2[n]> correlation (which, I repeat, is not uniquely defined by Sica's procedure) has ANYTHING to do with whatever would be measured when performing the (b,b') experiment) IS BY ITSELF a hidden-variable hypothesis.

cheers,
Patrick.
 
  • #48
Yes, that was indeed the first question I asked you: to me, the projection postulate IS the Born rule.

In the conventional textbook QM measurement axiomatics, the core empirical essence of the original Born rule (as Schroedinger interpreted it in his founding papers; Born introduced a related rule in a footnote for interpreting the scattering amplitudes) is hopelessly entangled with the Hilbert space observables, projectors, Gleason's theorem, etc.

However, I thought you aimed at the subtle difference between calculating probabilities (Born rule) and the fact that AFTER the measurement, the state is assumed to be the eigenstate corresponding to the measurement result, and I thought it was the second part that you were denying,

Well, that part, the non-destructive measurement, is mostly von Neumann's high abstraction, which has little relevance or physical content (any discussion of that topic is largely a slippery semantic game, arising from the overloading of the term "measurement" and shifting its meaning between preparation and detection -- the whole topic is empty). Where is the photon A in the Bell experiment after it has triggered the detector? Or, for that matter, any photon after detection in any quantum optics experiment?

but accepting the absolute square of the inner product as the correct probability prediction.

That is an approximate and limited operational rule. Any claimed probability of detection ultimately has to be checked against the actual design and settings of the apparatus. Basically, the presumed linear response of an apparatus to Psi^2 is a linear approximation to a more complex non-linear response (e.g. check the actual photodetector sensitivity curves: they are sigmoid, with only one section approximately linear). Talking of the "probability of detecting a photon" is somewhat misleading, often confusing, and less accurate than talking about the degree and nature of the response to an EM field.

In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter.

The reason it is detached from time and the dynamics is precisely in order to empower it with the magic capability of suspending the dynamics, producing the "measurement" result with such and such probability, then resuming the dynamics. And without ever defining how and when exactly this suspension occurs, what and when restarts it... etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates. By the time the student gets through all of it, his mind will be too numbed, his eyes too glazed, to notice that the emperor wears no trousers.

The reason why I say that you seem to deny superposition in its general sense is that without the Born rule (inner product squared = probability) the Hilbert space has no meaning.

It is a nice idea (linearization) and a useful tool taken much too far. The actual PDE and integral-equation formulations of the dynamics are a mathematically much richer modelling medium than their greatly impoverished abstraction, the Hilbert space.

Superposition is as natural with any linear PDEs and integral equations as it is with Hilbert space. On the other hand, the linearity is almost always an approximation. The linearity of QM (or of QED) is an approximation to the more exact interaction between the matter fields and the EM field. Namely, the linearization arises from assuming that the EM fields are "external" (such as a Coulomb potential or external EM fields interacting with the atoms) and that the charge currents giving rise to the quantum EM fields are external. Schroedinger's original idea was to put Psi^2 (and its current) as the source terms in the Maxwell equations, obtaining coupled non-linear PDEs. Of course, at the time and in that phase, that was much too ambitious a project, and it never got very far. It was only in the late 1960s that Jaynes picked up Schroedinger's idea and developed the somewhat flawed "neoclassical electrodynamics". That was picked up in the mid-1980s by Asim Barut, who worked out the more accurate "self-field electrodynamics", which reproduces not only QM but also the leading radiative corrections of QED, without ever quantizing the EM field (which amounts to linearizing the dynamics and then adding the non-linearities back via a perturbative expansion). He viewed the first quantization not as some fancy change of classical variables to operators, but as the replacement of the Newtonian-Lorentz particle model with a Faraday-Maxwell type matter-field model, resolving the particle-field dichotomy (which was plagued with the point-particle divergences). Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.
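For concreteness, the kind of coupling being described looks schematically like this (my own transcription in Gaussian units, with gauge details and normalisation left out; it is meant only to show where the nonlinearity enters, not to reproduce any specific paper):

```latex
\begin{align}
  i\hbar\,\partial_t \psi &= \frac{1}{2m}\Big(-i\hbar\nabla - \tfrac{e}{c}\mathbf{A}\Big)^{2}\psi
      + e\,\phi\,\psi, \\
  \nabla^{2}\phi &= -4\pi e\,|\psi|^{2}, \qquad
  \Box\,\mathbf{A} = \frac{4\pi}{c}\,\mathbf{j}_{\psi}, \\
  \mathbf{j}_{\psi} &= \frac{e\hbar}{m}\,\mathrm{Im}\!\left(\psi^{*}\nabla\psi\right)
      - \frac{e^{2}}{mc}\,|\psi|^{2}\mathbf{A},
\end{align}
```

so the fields sourced by |psi|^2 and j_psi feed back into the equation for psi, which is what makes the coupled set nonlinear.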

On the other hand, I think the future will favor neither, but rather completely different, new modelling tools (physical theories are models), more in tune with the technology (such as Wolfram's automata, networks, etc.). The Schroedinger, Dirac and Maxwell equations can already be re-derived as macroscopic approximations of the dynamics of simple binary on/off automata (see for example some interesting papers by Garnet Ord). These kinds of tools are a hugely richer modelling medium than either PDEs or Hilbert space.

So if you deny the possibility for me to use that rule on the product space, this means you deny the existence of that product space, and hence the superpositions of states such that the result is not a product state.

I am only denying that this abstraction/generalization automatically applies that far, in such a simple-minded, uncritical manner, in the Bell experiment setup. Just calling all Hermitean operators observables doesn't make them so, much less at the "ideal" level. The least one needs to do when modelling the Bell setup in this manner is to include the projectors onto the no_coincidence and no_detection subspaces (these would come from the orbital degrees of freedom), so that the prediction has some contact with reality, instead of bundling all the unknowns into the engineering parameter "efficiency" so that all the ignorance can be swept under the rug, while wishfully and immodestly labeling the toy model "ideal". What a joke. The most ideal model is the one that best predicts what actually happens (which is 'no violation'), not the one which makes the human modeler appear in the best light or the most in control of the situation.
 
  • #49
nightlight said:
... (the 3N-coordinate relativistic Dirac-Maxwell equations for the full system, including the apparatus) at a prediction which prohibits any purely local PDE based mechanism from reproducing such a prediction. (You never explained how local PDEs can do it without a suspension of the dynamics, except for trying to throw into the equations, as a premise, the approximate, explicitly non-local/non-relativistic instantaneous potentials.)

I don't know what 3N-coordinate relativistic Dirac-Maxwell equations are. It sounds vaguely like the stuff an old professor tried to teach me instead of QFT. True QFT cannot be described - as far as I know - by any local PDE; it should fit in a Hilbert space formalism. But I truly think you do not need to go relativistic in order to talk about Bell's stuff. In fact, the space-like separation is nothing very special to me. As I said before, it is just an extreme illustration of the simple superposition + Born rule case you find in almost all QM applications. So all this Bell stuff should be explainable in simple NR theory, because exactly the same mechanisms are at work when you calculate atomic structure, when you do solid-state physics, or the like.

cheers,
Patrick.
 
  • #50
nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter.

You must be kidding. The time evolution is in the state in Hilbert space, not in the Born rule itself.

And without ever defining how and when exactly this suspension occurs, what restarts it and when... etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates.

Well, the decoherence program has something to say about this. I don't know if you are aware of this.

It is a nice idea (linearization) and a useful tool taken much too far. The actual PDE and integral-equation formulations of the dynamics are a mathematically much richer modelling medium than their greatly impoverishing abstraction, the Hilbert space.

Superposition is as natural with any linear PDEs and integral equations as it is with Hilbert space. On the other hand, the linearity is almost always an approximation. The linearity of QM (or of QED) is an approximation to the more exact interaction between the matter fields and the EM field.

Ok this is what I was claiming all along. You DO NOT ACCEPT superposition of states in quantum theory. In quantum theory, the linearity of that superposition (in time evolution and in a single time slice) is EXACT ; this is its most fundamental hypothesis. So you shouldn't say that you are accepting QM "except for the projection postulate". You are assuming "semiclassical field descriptions".


Namely, the linearization arises from assuming that the EM fields are "external" (such as the Coulomb potential or an external EM field interacting with the atoms) and that the charge currents giving rise to the quantum EM fields are external. Schroedinger's original idea was to put |Psi|^2 (and its current) as the source terms in the Maxwell equations, obtaining coupled non-linear PDEs.

I see, that's indeed semiclassical. This is NOT quantum theory, sorry. In QED, but much more so in non-abelian theories, you have indeed a non-linear classical theory, but the quantum theory is completely linear.

Of course, at the time and in that phase that was a much too ambitious project, which never got very far. It was only in the late 1960s that Jaynes picked up Schroedinger's idea and developed the somewhat flawed "neoclassical electrodynamics".

Yeah, what I said above. Ok, this puts the whole discussion in another light.

That was picked up in the mid-1980s by Asim Barut, who worked out the more accurate "self-field electrodynamics", which reproduces not only QM but also the leading radiative corrections of QED, without ever quantizing the EM field (quantization amounting to linearizing the dynamics and then adding the non-linearities back via a perturbative expansion). He viewed first quantization not as some fancy change of classical variables to operators, but as a replacement of the Newtonian-Lorentz particle model with a Faraday-Maxwell type matter-field model, thus resolving the particle-field dichotomy (which was plagued by point-particle divergences). Thus for him (or for Jaynes) field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

Such semiclassical models are used all over the place, such as to calculate effective potentials in quantum chemistry. I know. But I consider them just as computational approximations to the true quantum theory behind it, while you are taking the opposite view.

You tricked me into this discussion because you said that you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

cheers,
Patrick.
 
  • #51
I’ve read this interesting discussion and I’d like to add the following comments.

Bell's inequalities, and more precisely EPR-like states, help in understanding how quantum states behave. There are many papers on arxiv about these inequalities. Many of them show how classical statistics can locally break these inequalities, even without the need to introduce local (statistical) errors in the experiment.

Here are 2 examples extracted from arxiv (far from being exhaustive).

Example 1: quant-ph/0209123, Laloë 2002, an extensive paper on QM interpretation questions (in my opinion against local hidden variable theories, but open-minded => lots of pointers and examples)
Example 2: quant-ph/0007005, Accardi 2000 (and later). An example of how a classical probability space can break Bell inequalities (contextual).

The approach of Nightlight, if I have correctly understood, is another way (I had missed it: thanks a lot for this new possibility): instead of breaking the inequalities, the "statistical errors" (some events not counted by the experiment, or the way the experimental data is calculated), if included in the final result, force the experiment to follow the Bell inequalities. This is another point of view on what is "really" going on with the experiment.

All of these alternative examples use a classical probability space, i.e. the Kolmogorov axiomatization, where one takes adequate variables such that they can violate the Bell inequalities (and now, a way to enforce them).

Now, if the question is to know whether the Bell-inequality experiments are relevant or not, one conservative approach is to try to know (at least feel, or best, demonstrate) whether, in "general", "sensible" experiments (quantum or classical or whatever we want) are most likely to break the Bell inequalities or not. If the answer is no, then we must admit that Aspect-type experiments have detected a rare event and that the remaining "statistical errors" seem not to help (in breaking the inequalities). If the answer is yes, well, we can say what we want :).

The papers against Bell-inequality experiments, in my modest opinion, demonstrate that a sensible experiment is more likely to detect the inequality breaking, so that we can say what we want! That's a little bit disappointing, because in this case we still do not know whether any quantum state may be described by a "local" classical probability space or not. I would really prefer to get a good and solid explanation.

To end, I did not know of Sica's papers before. But I would like to understand the mechanism he (and Nightlight) used in order to force the Bell inequality to hold. I follow vanesch's reasoning without problem, but Nightlight's is a little bit more difficult to understand: where is the additional freedom used to enforce the inequality?

So, let's try to understand this problem in the special case of the well-known Aspect et al. experiment (1982, Phys. Rev. Letters), where only very simple mathematics is used. I like to use a particular case before making a generalisation; it is easier to see where the problem is.

First, let's take 4 ideal discrete measurements (4 sets of data) of an Aspect-type experiment with no samples lost during the measurement process.

If we take the classical expectation formulas, we have:

S+ = E(AB) + E(AB') = (1/N) sum_i1 [A(i1)B(i1)] + (1/N) sum_i2 [A(i2)B'(i2)]
   = (1/N) sum_i1,i2 [A(i1)B(i1) + A(i2)B'(i2)]   (1)

where A(i1), B(i1) are the data collected in the first experiment and A(i2), B'(i2) the data collected in the second experiment, with N --> ∞ (we also take the same number of samples for each experiment).

In our particular case A(i1) is the result of the spin measurement of photon 1 on the A (same name as the observable) axis (+1 if spin |+>, -1 if spin |->) while B(i1) is the result of the spin measurement of photon 2 on the B axis (+1 if spin |+>, -1 if spin |->).
Each ideal measurement (given by label i1 or i2) thus gives two spin results (the two photons must be detected).
Etc. for the other measurement cases (labels i3, i4).

We thus have the second equation:

S- = E(A'B) - E(A'B') = (1/N) sum_i3 [A'(i3)B(i3)] - (1/N) sum_i4 [A'(i4)B'(i4)]
   = (1/N) sum_i3,i4 [A'(i3)B(i3) - A'(i4)B'(i4)]   (2)

Relabelling equation (1) or (2), i.e. changing the ordering of the labels i1, i2, i3, i4, does not change the result (the sum is commutative).

Now, if we want to get the inequality |S+| = |E(AB)+E(AB')| ≤ 1 + E(BB'), we first need to apply a filter to the rhs of equation (1), otherwise A cannot be factorized: we must select a subset of experimental samples with A(i1) = A(i2).

If we take a large number of samples N, equation (1) is not changed by this filtering and we get:

|S+| = |E(AB) + E(AB')| = (1/N) |sum_i1,i2 [A(i1)B(i1) + A(i2)B'(i2)]|
     = (1/N) |sum_i1 [A(i1)B(i1) + A(i1)B'(i1)]|
     ≤ (1/N) sum_i1 |A(i1)B(i1) + A(i1)B'(i1)|

We then used the simple inequality |a.b + a.c| ≤ 1 + b.c (for |a|, |b|, |c| ≤ 1) for each label i1.

|S+| = |E(AB) + E(AB')| ≤ 1 + (1/N) sum_i1 [B(i1)B'(i1)]   (3)

Recall that B'(i1) is the data of the second experiment relabelled onto a subset of the labels i1. Now this relabelling has some freedom, because we may have many experimental results (about 50%) with A(i1) = A(i2).

So in equation (3), |sum_i1 [B(i1)B'(i1)]| depends on the artificial label order.

We also have almost the same inequality for equation (2)

|S-| = |E(A'B) - E(A'B')| = (1/N) |sum_i3,i4 [A'(i3)B(i3) - A'(i4)B'(i4)]|
     = (1/N) |sum_i3 [A'(i3)B(i3) - A'(i3)B'(i3)]|
     ≤ (1/N) sum_i3 |A'(i3)B(i3) - A'(i3)B'(i3)|

We then used the simple inequality |a.b - a.c| ≤ 1 - b.c (for |a|, |b|, |c| ≤ 1).

|S-| = |E(A'B) - E(A'B')| ≤ 1 - (1/N) sum_i3 [B(i3)B'(i3)]   (4)

So in equation (4), |sum_i3 [B(i3)B'(i3)]| depends on the artificial label ordering i3.

Now, we thus have the Bell inequality:

|S| = |S+ + S-| ≤ |S+| + |S-| ≤ 2 + (1/N) sum_i1,i3 [B(i1)B'(i1) - B(i3)B'(i3)]   (5)

where sum_i1,i3 [B(i1)B'(i1) - B(i3)B'(i3)] depends on the labelling order we have used to filter and obtain this result.

I think that (3) and (4) may be the labelling-order problem pointed out by Nightlight, in this special case.

Up to now, we have only spoken of collections of measurement results with values +1/-1.

Now, if B is a random variable that depends only on the local experimental apparatus (the photon polarizer) and on the local hidden variable hv, i.e. B = B(apparatus_B, hv), we should have:

(1/N) sum_i1 [B(i1)B'(i1)] = (1/N) sum_i3 [B(i3)B'(i3)] = <BB'> when N --> ∞
(so we have the Bell inequality |S| ≤ 2).

So now I can use the Nightlight argument: the ordering of B'(i1) and B'(i3) is totally artificial, so the question is: should I get (1/N) sum_i1 [B(i1)B'(i1)] <> (1/N) sum_i3 [B(i3)B'(i3)], or the equality?

Moreover, equation (5) seems to show that this kind of experiment is more likely to see a violation of a Bell inequality, as B(i1), B'(i2), B(i3), B'(i4) come from 4 different experiments.
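To see this last point numerically, here is a small sketch (my own illustration, not taken from Sica's or anyone else's paper): when all four correlations are computed row by row from one common set of ±1 arrays, the per-row combination A(B+B') + A'(B-B') is always ±2, so |S| ≤ 2 identically; when each correlation is instead estimated from its own independent run, nothing in the data alone enforces that bound, and for small N the estimate does exceed 2 from time to time.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 4, 20_000        # a small N makes the four-run fluctuations easy to see

def corr(x, y):
    return np.mean(x * y)

# Case 1: one common sample -- all four correlations from the same rows.
for _ in range(trials):
    A, Ap, B, Bp = (rng.choice([-1, 1], N) for _ in range(4))
    S = corr(A, B) + corr(A, Bp) + corr(Ap, B) - corr(Ap, Bp)
    assert abs(S) <= 2 + 1e-12          # never violated: per row, A(B+B') + A'(B-B') = +/-2

# Case 2: four separate runs -- each correlation from its own pair of arrays.
worst = 0.0
for _ in range(trials):
    runs = [(rng.choice([-1, 1], N), rng.choice([-1, 1], N)) for _ in range(4)]
    S = corr(*runs[0]) + corr(*runs[1]) + corr(*runs[2]) - corr(*runs[3])
    worst = max(worst, abs(S))
print(worst)   # typically > 2 at this N: the enumerative bound does not apply across runs
```

This says nothing about what a local hidden-variable model must satisfy; it only illustrates the sense in which the correction term in equation (5) reflects the fact that the four correlations come from four separately sampled runs.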


Seratend

P.S. Sorry if some minor mistakes are left.
 
  • #52
Sorry, my post was supposed to follow this one:


nightlight said:
vanesch Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.

... the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.


The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings out that question and explicitly preserves <b.b'>. Additional analysis of the same question is in his later paper. It is not necessary to change the sum <b1.b2> even though individual elements of the arrays b1[] and b2[] are reshuffled. Namely, there is a great deal of freedom when matching a1[] and a2[] elements, since any +1 from a2[] can match any +1 from a1[], allowing [(N/2)!]^2 ways to match the N/2 +1's and N/2 -1's between the two arrays. The constraint from <b1.b2> requires only that the sum is preserved in the permutation, which is a fairly weak constraint.

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple logistics for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically approach the required correlation <b1.b2> until reaching it within a maximum error of order 1/N.

Let me know if you have any problem replicating the proof; then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit tedious).
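For what it's worth, here is a minimal sketch of the kind of swap bookkeeping described above (my own reconstruction from the verbal description; it is not Sica's actual algorithm, and the function name and greedy search are my own choices). Swapping b2[i] and b2[j] at two positions where a[] takes the same value leaves <a.b2> exactly unchanged, while it changes sum(b1*b2) by (b1[i]-b1[j])*(b2[j]-b2[i]), i.e. by 0 or ±4, so a greedy search can drive <b1.b2> toward a target value.

```python
import numpy as np

def match_b2(a, b1, b2, target_sum, max_moves=10_000):
    """Greedy sketch: permute b2 by swaps restricted to positions with equal a
    (so sum(a*b2) is preserved) while driving sum(b1*b2) toward target_sum."""
    b2 = b2.copy()
    cur = int(np.sum(b1 * b2))
    for _ in range(max_moves):
        if abs(cur - target_sum) <= 2:           # within the step size of 4, stop
            break
        want = 1 if target_sum > cur else -1
        moved = False
        for s in (+1, -1):                       # the two equal-a groups
            idx = np.flatnonzero(a == s)
            for i in idx:
                for j in idx:
                    # swapping b2[i], b2[j] changes sum(b1*b2) by this amount (0 or +/-4)
                    delta = int((b1[i] - b1[j]) * (b2[j] - b2[i]))
                    if delta * want > 0:
                        b2[i], b2[j] = b2[j], b2[i]
                        cur += delta
                        moved = True
                        break
                if moved:
                    break
            if moved:
                break
        if not moved:                            # no swap can move closer to the target
            break
    return b2, cur

rng = np.random.default_rng(2)
N = 200
a, b1, b2 = (rng.choice([-1, 1], N) for _ in range(3))
b2_new, reached = match_b2(a, b1, b2, target_sum=40)
assert np.sum(a * b2_new) == np.sum(a * b2)      # <a.b2> preserved exactly by construction
print(int(np.sum(b1 * b2)), reached)             # original vs adjusted sum(b1*b2)
```

Whether this particular bookkeeping matches the step sizes Sica describes I cannot say; the only point it makes is that the constraint from <a.b2> still leaves plenty of freedom to move <b1.b2> around.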

Seratend,
It takes time to answer :)
 
  • #53
seratend said:
Example 2: quant-ph/0007005 Accardi 2000 (and later). An Example of how a classical probability space can break bell inequalities (contextual).

I only skimmed quickly at this paper, but something struck me: he shows that Bell's inequality can also be satisfied with a non-local model. Bell's claim is the opposite: that in order NOT to satisfy the inequality, you need a non-local model.

A => B does not imply B => A.

The "reduction" of the "vital assumption" that there is one and only one underlying probability space is in my opinion EXACTLY what is stated by local realistic models: indeed, at the creation of the singlet state with two particles, both particles carry with them the "drawing of the point in the underlying probability universe", from which all potential measurements are fixed, once and for all.

So I don't really see the point of the paper! But ok, I should probably read it more carefully.

cheers,
Patrick
 
  • #54
vanesch you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

The projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, in a kind of slippery way, without explaining how, why and when it stops functioning (macroscopic device? consciousness? a friend of that consciousness? decoherence? consistent histories? gravity? universe branching? jhazdsfuty?) and how it resumes. That is a tacit recognition that linear evolution, such as the linear Schrodinger or Dirac equations, doesn't work correctly throughout. So when the limitation of the linear approximation reaches a critical point, the probability mantras get chanted, the linear evolution is stopped in a non-dynamic, abrupt way and temporarily substituted with a step-like, lossy evolution (the projector) to a state in which it ought to be; then, when in safe waters again, the probability chant stops and the regular linear PDE resumes. The overall effect is at best analogous to a piecewise linear approximation of a curve which, all agree, cannot be a line.

So this is not a matter of who is for and who is against linearity -- since no one is "for"; the only difference is that some know it. The rest believe they are in the "for" group and despise the few who don't believe so. If you believe in the projection postulate, you believe in a temporary suspension of the linear evolution equations, however ill-defined it may be.

Now that we have agreed we're all against linearity, what I am saying is that this "solution", the collapse, is an approximate stop-gap measure, due to the intractability of the already known non-linear dynamics, which in principle can produce collapse-like effects when they are called for, except in a lawful and clean way. The linearity would hold approximately as it does now, and no less than it does now; i.e. it is analogous to smoothing the sharp corners of the piecewise linear approximation with a mathematically nicer and more accurate approximation.

While you may imagine that non-linearity is a conjecture, it is the absolute linearity that is a conjecture, since non-linearity is the more general scheme. Check von Neumann's and Wigner's writings on the measurement problem to see the relation between absolute linearity and the need for the collapse.

A theory cannot be logically coherent if it has an ill-defined switch between two incompatible modes of operation, the dynamical equations and the collapse (which grew out of similarly incoherent seeds, Bohr's atom model and the first Planck theory). The whole theory in this phase is like a hugely magnified version of the dichotomies of the originating embryo. That's why there is so much philosophizing and nonsense on the subject.
 
  • #55
You must be kidding. The time evolution is in the state in Hilbert space, not in the Born rule itself.

That is the problem I am talking about. We ought not to have the dynamical evolution interrupted and suspended by a "measurement" which turns on the Born rule to figure out what it really wants to do next; then somehow the dynamics is allowed to run again.


Well, the decoherence program has something to say about this. I don't know if you are aware of this.

It's a bit decoherent for my taste.
 
  • #56
nightlight said:
The projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, in a kind of slippery way, without explaining how, why and when it stops functioning (macroscopic device? consciousness?

The relative-state (or many-worlds) proponents do away with it, and apply strict linearity. I have to say that I think myself that there is something missing in that picture. But I think that quantum theory is just a bit too subtle to replace it with semiclassical stuff. I'd be surprised if such a model could predict the same things as QFT and most of quantum theory. In that sense it would be rather stupid, no? I somehow have the feeling - it is only that, of course - that this semiclassical approach would be the evident thing to try before jumping on the bandwagon of full QM, and that people have turned that question in all possible directions, so that the possibilities there have been exhausted. Of course, one cannot study all the "wrong" paths of the past, and one has to assume that this has been looked at somehow, otherwise nobody gets anywhere if all the wrong paths of the past are re- and re- and re-examined. So I didn't look into all that stuff, accepting that it cannot be done.

cheers,
Patrick.
 
  • #57
But I think that quantum theory is just a bit too subtle to replace it with semiclassical stuff.

I didn't have in mind the semiclassical models. The semiclassical scheme merely doesn't quantize the EM field, but it still uses the external field approximation; thus, although practical, it is limited and less accurate than QED (when you think of the difference in heavy gear involved, it's amazing it works at all). Similar problems plague Stochastic Electrodynamics (and its branch, Stochastic Optics). While they can model many of the so-called non-classical effects touted in Quantum Optics (including the Bell inequality experiments), they are also an external field approximation scheme, just using the ZPF distribution as the boundary/initial conditions for the classical EM field.

To see the difference from the above approaches, write down the Dirac equation with minimal EM coupling, then add below it the inhomogeneous wave equation for the 4-potential A_mu (the same one from the Dirac equation above it), with the right-hand side given by the Dirac 4-current. You have a set of coupled nonlinear PDEs without an external field or external current approximation. See how far you get with that kind of system.

That's a variation of what Barut started with (and also with Schroedinger-Pauli instead of Dirac), and he then managed to reproduce the results of the leading orders of the QED expansion (http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR= has 55 of his papers and preprints scanned; those from the mid-1980s on are mostly on his self-field). While this scheme alone obviously cannot be the full theory, it may at least be a knock on the right door.
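As a purely illustrative toy, and nothing more (my own sketch; it is non-relativistic, one-dimensional, and certainly not Barut's self-field electrodynamics): a Schroedinger field whose potential is sourced by its own density |psi|^2 through a Poisson equation, stepped with a first-order split-step Fourier scheme. It only shows what a self-coupled, nonlinear matter-field evolution of the kind described above looks like numerically.

```python
import numpy as np

# Toy 1-D "self-field" evolution: i d(psi)/dt = [-1/2 d^2/dx^2 + V(x,t)] psi,
# with the potential sourced by the field's own density via d^2 V/dx^2 = -|psi|^2.
# First-order split-step Fourier sketch; units and signs are arbitrary.

L, N, dt, steps = 40.0, 512, 0.002, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))     # normalize the total "charge" to 1

kinetic = np.exp(-0.5j * dt * k**2)                  # exact free propagator in k-space

for _ in range(steps):
    rho = np.abs(psi)**2
    rho_k = np.fft.fft(rho)
    V_k = np.zeros_like(rho_k)
    V_k[1:] = rho_k[1:] / k[1:]**2                   # solve d^2 V/dx^2 = -rho (zero-mean V)
    V = np.real(np.fft.ifft(V_k))
    psi *= np.exp(-1j * dt * V)                      # self-consistent potential step
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))     # kinetic step

print(np.sum(np.abs(psi)**2) * (L / N))              # the norm stays 1: both steps are unitary
```

Even at this cartoon level, the nonlinearity is visible: the potential at each step depends on the current |psi|^2, so superposing two solutions does not give a solution.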
 
  • #58
nightlight said:
To see the difference from the above approaches, write down the Dirac equation with minimal EM coupling, then add below it the inhomogeneous wave equation for the 4-potential A_mu (the same one from the Dirac equation above it), with the right-hand side given by the Dirac 4-current. You have a set of coupled nonlinear PDEs without an external field or external current approximation. See how far you get with that kind of system.

This is exactly what my old professor was doing (and in doing so, he neglected to teach us QFT, the bastard). He was even working on a "many-particle Dirac equation". And indeed, this seems to be a technique that incorporates some relativistic corrections for heavy atoms (however, there the problem is that there are too many electrons and the problem becomes intractable, so it would be more something to handle an ion like U91+ or so).

Nevertheless, I'd still classify this approach as fully classical, because there is no "quantization" at all, and the matter fields are considered as classical fields, just like the EM field. In the language of path integrals, you wouldn't take into account anything besides the classical solution.
Probably this work can be interesting. But you should agree that there is still a long way to go to have a working theory, so you shouldn't sneer at us mere mortals who, for the moment, take quantum theory in the standard way, no ?
My feeling is that it is simply too cheap, honestly.

cheers,
Patrick.
 
  • #59
This is exactly what my old professor was doing (and in doing so, he neglected to teach us QFT, the bastard).

What's his name? (Dirac was playing with that stuff in his later years, too, so it can't be that silly.)

He was even working on a "many particle dirac equation".

What Barut found (although only for the non-relativistic case) was that for N-particle QM he could obtain results equivalent to those of conventional N-particle QM, in a form superficially resembling the Hartree-Fock self-consistent field, using an electron field Psi_e and a nucleon field Psi_n (each normalized to the correct number of particles instead of to 1) as nonlinearly coupled classical matter fields in 3-D, instead of the usual 3N-dimensional configuration space; and unlike Hartree-Fock, it was not an approximation.

Nevertheless, I'd still classify this approach as fully classical, because there is no "quantization" at all, and the matter fields are considered as classical fields, just like the EM field. In the language of path integrals, you wouldn't take into account anything besides the classical solution.

Indeed, that model alone doesn't appear to be the key by itself. For example, charge quantization doesn't come out of it and must be put in by hand; and no one has really solved anything without substantial approximations, so no one knows what these equations are really capable of producing (charge quantization seems very unlikely, though, without additional fields or some other missing ingredient). But many have gotten quite a bit of mileage out of much simpler non-linear toy models, at least in the form of insights about the spectrum of phenomena one might find in such systems.
 
  • #60
Seratend reply:
=======================================================
Before, here is below the place in time, when I started to reply:


nightlight said:
vanesch you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

The projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, (...)

(...) A theory cannot be logically coherent if it has an ill-defined switch between two incompatible modes of operation, the dynamical equations and the collapse (which grew out of similarly incoherent seeds, Bohr's atom model and the first Planck theory). The whole theory in this phase is like a hugely magnified version of the dichotomies of the originating embryo. That's why there is so much philosophizing and nonsense on the subject.

first,


vanesh said:
(...) I only skimmed quickly at this paper, but something struck me: he shows that Bell's inequality can also be satisfied with a non-local model. Bell's claim is the opposite: that in order NOT to satisfy the inequality, you need a non-local model.

(...) So I don't really see the point of the paper! But ok, I should probably read it more carefully.
cheers,
Patrick


Vanesch, do not lose your time with the Accardi paper. It is only an example: one attempt, among many others, surely not the best, of a random-variable model that breaks the Bell inequalities. If I understand correctly, the results of the spin measurements depend on the apparatus settings (local random variables: what they call the "chameleon effect").
It has been a long time since I've looked at this paper :), but the first pages, before their model, are a very simple introduction to probability and Bell inequalities on a general probability space, followed by how to construct a model that breaks this inequality (local or global). This kind of view has led to the creation of a school of QM interpretation ("quantum probability"), which is slightly different from the orthodox interpretation.


Second and last, I have some other comments on physics and the projection postulate and its dynamics.



nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter. (…)

(…) Thus for him (or for Jaynes) field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

The Schroedinger, Dirac and Maxwell equations can already be re-derived as macroscopic approximations of the dynamics of simple binary on/off automata (see for example some interesting papers by Garnet Ord). These kinds of tools are a hugely richer modelling medium than either PDEs or Hilbert space.

I appreciate it when someone likes to check other possibilities in physical modelling (or theory, if we prefer); it is a good way to discover new things. However, please avoid saying that one model/theory is better than another, as the only thing we can say (in my modest opinion) is that each model has its unknown domain of validity.
Note I do not reject the possibility of a perfect model (full domain of validity), but I prefer to think it currently does not exist.

The use of PDE models is interesting. It has already proved its value in many branches of physics. We can use PDEs to model classical QM, as well as the relativistic version; this is not the problem.
For example, you have Bohmian mechanics (1952): you can get all the classical QM results with this model, just as you can write an ODE that complies with QM (insertion of a Brownian-motion-like term in Newton's equation – Nelson 1966 – or your more recent "simple binary on/off automata" of Garnet Ord, which seems to be the binary random walk of Brownian motion – not enough time to check it :(
The main problem is to know whether we can get the results of the experiments in a simpler way using such a method.

The use of Hilbert space tools in the formulation of quantum mechanics is just a matter of simplicity. It is interesting when we face discrete-value problems (e.g. Sturm-Liouville-like problems). It shows, for example, how a set of discrete values of operators changes in time. Moreover, it shows simply the relativity of representation (e.g. the quantum q or p basis) by a simple basis change. It then shows in a simple way the connection (a basis change) between a continuous and a discrete basis (e.g. the {p,q} continuous basis and the {a,a+} discrete basis).
In this type of case, the use of PDEs may become very difficult. For example, the use of non-smooth functions in L2(R,dx) spaces introduces less intuitive problems of continuity, requiring for example the introduction of extensions of the derivative operators to follow the full solutions of an extended space. This is not my cup of tea, so I won't go further on this.


The text follows in the next post. :cry:

Seratend
 
