Young's Experiment: Exploring Wave-Particle Duality

  • Thread starter Cruithne
  • Tags: Experiment
In summary: this thread discusses a phenomenon observed in Young's experiment that appears to contradict what is said in other accounts of the experiment. The mystery of the experiment is that even when light is treated as individual particles (photons), it still produces behaviour implying that it is acting as a wave. It is also suggested that the interference patterns produced were not the result of any observations.
  • #36
vanesch Good. So your claim is that we will never find raw data which violates Bell's inequality.

Just to highlight the implications of Sica's theorem a bit for the experimental tests of Bell's inequality.

Say, you have an ideal setup with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 into arrays A[n] and B[n]. Since p(+)=p(-)=1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.
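As a minimal sketch of the data format being described (toy numbers only; the 0.85 pair correlation below is made up, not a QM prediction):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# One run at fixed orientations (A, B1): two aligned arrays of +/-1 outcomes.
A = rng.choice([+1, -1], size=N)              # ~50:50 regardless of orientation
B1 = np.where(rng.random(N) < 0.85, A, -A)    # toy pair correlation (made up)

print(np.mean(A), np.mean(B1))    # both near 0: the 50:50 marginals
print(np.mean(A * B1))            # estimated correlation for the (A, B1) run
```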

You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction to obtain data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n]; it must have its +1s and -1s arranged differently (although still in a 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself, while avoiding in some way arranging itself as A1[n].

Now, if you hadn't done the (A,B1) test, then there is no such constraint on what A2[n] can be. To paraphrase a kid's response when told that a thermos bottle keeps hot liquids hot and cold liquids cold -- "How do it know?"

Or, another twist: you take 99 different angles for B and obtain sets of data A1[n],B1[n]; A2[n],B2[n]; ... A99[n],B99[n]. Now you're ready for the angle B100. This time A100[n] has to keep rearranging itself to avoid matching all 99 previous arrays Ak[n].

Then you extend the above and, say, collect r=2^n data sets for 2^n different angles (they could all be the same angle, too). This time, at the next angle B_(2^n+1), the data A_(2^n+1)[n] would have to avoid all 2^n arrays Ak[n], which it can't do. So in each such batch of tests there would be at least one failed QM prediction, for at least one angle, since that Bell inequality would not be violated.

Then you take 2^n*2^n previous tests,... and so on. As you go up, it gets harder for the inequality violator; its negative test count has a guaranteed growth. Also, I think this theorem is not nearly restrictive enough and the real state is much worse for the inequality violator (as simple computer enumerations suggest when counting the percentages of violation cases for finite data sets).

Or, you go back and start testing, say, angle B7 again. Now the QM magician in heaven has to allow new A7[n] to be the same as the old A7[n], which was prohibited up to that point. You switch B to B9, and now, the QM magician, has to disallow again the match with A7[n] and allow the match with old A9[n], which was prohibited until now.

Where is the memory for all that? And what about the elaborate mechanisms or the infrastructure needed to implement the avoidance scheme? And why? What is the point of remembering all that stuff? What does it (or anyone/anything anywhere) get in return?

The conjectured QM violation of Bell's inequality basically looks sillier and sillier once these kinds of implications are followed through. It is no longer merely mysterious or puzzling but plainly ridiculous.

And what do we get from the absurdity? Well, we get the only real confirmation for the collapse, since Bell's theorem uses collapse to produce the QM prediction which violates the inequality. And what do we need the collapse for? Well, it helps "solve" the measurement problem. And why is there a measurement problem? Well, because Bell's theorem shows you can't have LHVs to produce definite results. Anything else empirical from either? Nope. What a deal.

The collapse postulate first lends a hand to prove Bell's QM prediction which in turn, via the LHV prohibition, creates a measurement problem which the collapse then "solves" (thank you very much). So the collapse postulate creates a problem, then solves it. What happens if we take out the collapse postulate altogether? No Bell's theorem, hence no measurement problem, hence no problem at all. Nothing else is sensitive to the existence (or the lack) of the collapse but the Bell inequality experiment. Nothing else needs the collapse. It is a parasitic historical relic in the theory.
 
Last edited:
  • #37
nightlight said:
vanesch Good. So your claim is that we will never find raw data which violates Bell's inequality.

Just to highlight the implications of Sica's theorem a bit for the experimental tests of Bell's inequality.

Say, you have an ideal setup with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 into arrays A[n] and B[n]. Since p(+)=p(-)=1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.

You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction to obtain data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n]; it must have its +1s and -1s arranged differently (although still in a 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself, while avoiding in some way arranging itself as A1[n].

Now, if you hadn't done the (A,B1) test, then there is no such constraint on what A2[n] can be.

I'm not sure I understand what you are getting at. I will tell you what I make of what is written out above, and you correct me if I'm not understanding it right, ok ?

You say:
Let us generate a first sequence of couples:
S1 = {(a_1(n),b_1(n)) for n running from 1:N, N large}
Considering a_1 = (a_1(n)) as a vector in N-dim Euclidean space, we can require a certain correlation between a_1 and b_1. So a_1 and b_1 are to have an angle which is not 90 degrees.

We next generate other sequences, S2, S3... S_M with similar, or different, correlations. I don't know Sica's theorem, but what it states seems quite obvious, if I understand it correctly: a_1, a_2 ... are to be considered independent, and hence uncorrelated, or approximately orthogonal in our Euclidean space. The corresponding b_2, b_3 etc... are also to be considered approximately orthogonal amongst themselves, but are correlated, with similar or different correlations (angles different from 90 degrees in that Euclidean space), with their partner a_n. Sounds perfectly all right to me. Where's the problem ? You have couples of vectors making an angle (say 30 degrees) which are essentially orthogonal between pairs in E_N. Of course, for this to hold, N has to be much, much larger than the number of sequences M. So you have a very high dimensionality in E_N. I don't see any trouble; moreover, this is classical statistics. Where's the trouble with Bell and co. ? It is simply an expression about the angles between the pairs, no ?

cheers,
patrick.
 
  • #38
vanesch I don't know Sica's theorem, but what it states seems quite obvious,

It is the array correlation inequality statement I gave earlier, which also had the link to Sica's preprint.

if I understand so: a_1, a_2 ... are to be considered independent, and hence uncorrelated, or approximately orthogonal in our Euclidean space. ... Sounds perfectly all right to me. Where's the problem ?

You were looking for a paradox or a contradiction while I was pointing at the peculiarity for a specific subset of sequences. The most probable and the average behavior is, as you say, the approximate orthogonality among a1, a2,... or among b1, b2,... There is no problem here.

Sica's result is stronger, though -- it implies that if the two sets of measurements (A,B1) and (A,B2) satisfy Bell's QM prediction, then it is necessary that the a1 and a2 vectors in E_N be explicitly different -- a1 and a2 cannot be parallel, or even approximately parallel.

With just these two data sets, the odds of a1 and a2 being nearly parallel are very small, of the order 1/2^N. But if you have more than 2^N such vectors, they cannot all satisfy the QM prediction's requirement that each remain non-parallel to all the others. What is abnormal about this is that at least one test is then guaranteed to fail, which is quite different from what one would normally expect of a statistical prediction: one or more finite tests may fail (due to a statistical fluctuation), but Sica's result implies that at least one test must fail, no matter how large the array sizes.
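Just to put a number on those odds (a small aside of mine, using exact 50:50 arrays):

```python
from math import comb

# Probability that two independent, exactly 50:50 arrays of +/-1 of length N
# coincide entry by entry (i.e. are parallel as vectors in E_N).
N = 100
print(1 / comb(N, N // 2))   # ~1e-29, consistent with the "order 1/2^N" estimate
```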

What I find peculiar about it is that there should be any requirement of this kind at all between the two separate sets of measurements. To emphasize the peculiarity of this, consider that each vector a1 and a2 will have roughly the 50:50 split between +1 and -1 values. So it is the array ordering convention for individual results in two separate experiments that is constrained by the requirement of satisfying the alleged QM prediction.

The peculiarity of such a constraint is that the experimenter is free to label individual pairs of detection results in any order he wishes, i.e. he doesn't have to store the results of (A,B2) into the A2[] and B2[] arrays so that the array indices follow temporal order. He can store the first pair of detection results into A2[17], B2[17], the second pair at A2[5], B2[5], ... etc. This labeling is purely a matter of convention and no physics should be sensitive to such a labeling convention. Yet Sica's result implies that there is always a labeling convention for these assignments which yields a negative result for the test of Bell's QM prediction (i.e. it produces the classical inequality).

Let's now pursue the oddity one more step. The original labeling convention for the experiment (A,B2) was to map the larger time of detection to the larger index. But you could have mapped it the opposite way, i.e. the larger time to the smaller index. The experiments should still succeed, i.e. violate the inequality (with any desired certainty, provided you pick N large enough). Now, you could have ignored the time of detection and used a random number generator to pick the next array slot to put the result in. You still expect the experiment to almost always succeed. It shouldn't matter whether you use a computer-generated pseudo-random generator or flip a coin. Now, the array a1 is equivalent to a sequence of coin flips, as random as one can get. So we use that sequence as our labeling convention to allocate the next slot in the arrays a2, b2. With a1 used as the labeling seed, you can make a2 parallel to a1, 100% of the time. Thus there is a labeling scheme for experiments (A,B2), (A,B3),... which makes all of them always fail the test of Bell's QM prediction.
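To make the relabeling trick concrete, here is a minimal sketch (my own toy construction; the pair correlations are made-up numbers, not QM predictions). It permutes the (a2, b2) pairs together, using a1 as the labeling key, so the pair correlation <a2.b2> cannot change, yet the relabeled a2 comes out identical to a1:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # even

def toy_run(pair_corr):
    """One toy run: a is an exact 50:50 array of +/-1; b agrees with a
    with probability pair_corr (hypothetical numbers, not QM predictions)."""
    a = rng.permutation(np.repeat([+1, -1], N // 2))
    b = np.where(rng.random(N) < pair_corr, a, -a)
    return a, b

a1, b1 = toy_run(0.85)   # run (A, B1), stored in temporal order
a2, b2 = toy_run(0.15)   # run (A, B2), stored in temporal order

# Relabeling convention keyed to a1: send the k-th pair with a2 = +1 to the
# k-th slot where a1 = +1, and likewise for -1.  Pairs move together, so the
# pair correlation <a2.b2> cannot change -- only the (conventional) slot order does.
perm = np.empty(N, dtype=int)
perm[np.flatnonzero(a1 == +1)] = np.flatnonzero(a2 == +1)
perm[np.flatnonzero(a1 == -1)] = np.flatnonzero(a2 == -1)
a2r, b2r = a2[perm], b2[perm]

print(np.array_equal(a2r, a1))               # True: a2 is now "parallel" to a1
print(np.mean(a2 * b2), np.mean(a2r * b2r))  # identical pair correlation
```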

Now, you may say, you are not allowed to use a1 for your array labeling convention in experiment (A,B2). Well, OK, so this rule for the labeling must be added to the QM postulates, since it doesn't follow from the existing postulates. And we now have another postulate that says, roughly:

COINCIDENCE DATA INDEX LABELING POSTULATE: if you are doing a photon correlation experiment and your setup has an efficiency above 82%, and you desire to uphold the collapse postulate, you cannot label the results in any order you wish. Specifically, your labeling algorithm cannot use the data from any other similar photon correlation experiment which has one polarizer axis parallel to your current test and which has a setup efficiency above 82%. If none of the other experiment's axes is parallel to yours, or if its efficiency is below 82%, then you're free to use the data of said experiment in your labeling algorithm. Also, if you do not desire to uphold the collapse postulate, you're free to use any labeling algorithm and any data you wish.

That's what I see as peculiar. Not contradictory or paradoxical, just ridiculous.
 
Last edited:
  • #39
nightlight said:
vanesch I don't know Sica's theorem, but what it states seems quite obvious,

It is the array correlation inequality statement I gave earlier, which also had the link to Sica's preprint.

I'll study it... even if I still think that you have a very peculiar view of things, it merits some closer look because I can learn some stuff too...

cheers,
Patrick.
 
  • #40
vanesch said:
I'll study it... even if I still think that you have a very peculiar view of things, it merits some closer look because I can learn some stuff too...

cheers,
Patrick.

Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.
If you have 2 series of measurements, (a,b) and (a',b'), and you REORDER the second stream so that a = a', then of course the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.
So the point of the paper escapes me completely.

Look at an example.
Suppose we had some Bantum Theory, which predicts that <a.b> = 0, <a.b'> = 1 and <b.b'> = 1. You cannot have any harder violation of equation (3). (Quantum theory is slightly nicer).
Now, Bantum theory also only allows you to confront two measurements at a time.

First series of experiments: a and b:
(1,1), (1,-1),(-1,1),(-1,-1),(1,1), (1,-1),(-1,1),(-1,-1)

Clearly, we have an equal number of +1 and -1 in a and in b, and we have <a.b> = 0.

Second series of experiments: a and b':
(1,1),(1,1),(-1,-1),(-1,-1),(1,1),(1,1),(-1,-1),(-1,-1),

Again, we have equal amount of +1 and -1 in a and b', and <a.b'> = 1.
Note that I already put them in order of a.

Third series of experiments: b and b':
(1,1),(1,1),(-1,-1),(-1,-1),(1,1),(1,1),(-1,-1),(-1,-1)

Note that for the fun of it, I copied the previous one. We have <b.b'> = 1, and an equal amount of +1 and -1 in b and b'.

There is no fundamental reason why we cannot obtain these measurement results, is there ? If experiments confirm this, Bantum theory is right. Nevertheless, does |<a.b> - <a.b'>| ≤ 1 - <b.b'> hold ? That would read |0 - 1| ≤ 1 - 1, i.e. 1 ≤ 0 ?
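Just to transcribe those three toy series and check the arithmetic (nothing beyond the numbers above):

```python
# The three "Bantum" series above, written out as (first, second) pairs.
a_b  = [(1, 1), (1, -1), (-1, 1), (-1, -1)] * 2   # series 1: (a, b)
a_bp = [(1, 1), (1, 1), (-1, -1), (-1, -1)] * 2   # series 2: (a, b')
b_bp = [(1, 1), (1, 1), (-1, -1), (-1, -1)] * 2   # series 3: (b, b')

corr = lambda pairs: sum(x * y for x, y in pairs) / len(pairs)
print(corr(a_b), corr(a_bp), corr(b_bp))           # 0.0  1.0  1.0
print(abs(corr(a_b) - corr(a_bp)) <= 1 - corr(b_bp))
# False: the inequality |<a.b> - <a.b'>| <= 1 - <b.b'> fails for these three
# independently collected series, yet each series is perfectly realizable.
```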


cheers,
Patrick.
 
  • #41
vanesch said:
If you have 2 series of measurements, (a,b) and (a',b'), and you REORDER the second stream so that a = a', then of course the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.

I can even add that (a,b) and (a',b') have absolutely nothing to do with (b,b') as a new measurement. I have seen this kind of reasoning used to refute EPR-type experiments or theoretical results several times now, and it is always based on a fundamental misunderstanding of what exactly quantum theory, as most people accept it, really predicts.

This doesn't mean that these discussions aren't interesting. However, you should admit that your viewpoint isn't so obvious as to call the people who take the standard view blatant idiots. Some work on the issue can be done, but one should keep an open mind. I have to say that I have difficulties seeing the way you view QM, because it seems to jump around certain issues in order to religiously fight the contradiction with Bell's identities. To me, they don't really matter so much, because the counterintuitive aspects of QM are strongly illustrated in EPR setups, but they are already present from the moment you accept superposition of states and the Born rule.

I've seen up to now two arguments: one is that you definitely need the projection postulate to deduce Bell's inequality violation, which I think is a wrong statement, and second that numerically, out of real data, you cannot hope to violate systematically Bell's inequality, which I think is also misguided, because a local realistic model is introduced to deduce these properties.

cheers,
Patrick.
 
Last edited:
  • #42
vanesch Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.

... the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.


The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings out that question and explicitly preserves <b.b'>. Additional analysis of the same question is in his later paper. It is not necessary to change the sum <b1.b2> even though individual elements of the arrays b1[] and b2[] are reshuffled. Namely, there is a great deal of freedom when matching the a1[] and a2[] elements, since any +1 from a2[] can match any +1 from a1[], thus allowing [(N/2)!]^2 ways to match the N/2 +1's and the N/2 -1's between the two arrays. The constraint from <b1.b2> requires only that the sum be preserved in the permutation, which is a fairly weak constraint.

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple logistics for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until reaching it within a maximum error of 1/N.

Let me know if you have any problem replicating the proof, then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit of a tedium).
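For what it's worth, here is a rough sketch of one such swapping scheme (my own reconstruction under the assumptions above, not Sica's procedure verbatim; in this version each admissible swap shifts the sum b1.b2 by 4 rather than 2, so treat the bookkeeping as illustrative only):

```python
import numpy as np

def swap_toward(a, b1, b2, target):
    """Swap b2 entries only between slots carrying the same a value.  Such a
    swap leaves <a.b2> exactly unchanged, while sum(b1*b2) moves by 0 or +/-4.
    Greedily repeat until sum(b1*b2) is as close to target*N as the steps allow."""
    b2 = b2.copy()
    goal = int(round(target * len(a)))
    for _ in range(len(a)):                      # hard cap, plenty for a sketch
        diff = goal - int(np.sum(b1 * b2))
        if abs(diff) < 4:
            break
        moved = False
        for s in (+1, -1):
            idx = np.flatnonzero(a == s)
            if diff > 0:   # raise: swap a (b1=+1,b2=-1) slot with a (b1=-1,b2=+1) slot
                i = idx[(b1[idx] == +1) & (b2[idx] == -1)]
                j = idx[(b1[idx] == -1) & (b2[idx] == +1)]
            else:          # lower: swap a (b1=+1,b2=+1) slot with a (b1=-1,b2=-1) slot
                i = idx[(b1[idx] == +1) & (b2[idx] == +1)]
                j = idx[(b1[idx] == -1) & (b2[idx] == -1)]
            if len(i) and len(j):
                b2[i[0]], b2[j[0]] = b2[j[0]], b2[i[0]]
                moved = True
                break
        if not moved:
            break                                # target not reachable from here
    return b2

# Toy data (made-up correlations), already relabeled so the a columns coincide.
rng = np.random.default_rng(1)
N = 2000
a  = rng.permutation(np.repeat([+1, -1], N // 2))
b1 = np.where(rng.random(N) < 0.6, a, -a)
b2 = np.where(rng.random(N) < 0.4, a, -a)

b2s = swap_toward(a, b1, b2, target=0.4)
print(np.mean(a * b2), np.mean(a * b2s))    # <a.b2> exactly preserved
print(np.mean(b1 * b2), np.mean(b1 * b2s))  # <b1.b2> steered to ~0.4
```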
 
  • #43
nightlight said:
The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings out that question and explicitly preserves <b.b'>.

Ok, I can understand that maybe there's a trick to reshuffle the b2[] in such a way as to rematch <b.b'> that would be present if there was a local realistic model (because that is hidden in Sica's paper, see below). I didn't check it and indeed I must have missed that in Sica's paper. However, I realized later that I was tricked into the same reasoning error as is often the case in these issues, and that's why I posted my second post.
There is absolutely no link between the experiments [angle a, angle b] and [angle a, angle b'] on one hand, and a completely new experiment [angle b, angle b'] on the other. The whole manipulation of the series of data tries to find a correlation <b.b'> from the first two experiments, and it only looks meaningful because there is a notational equivalence (namely the letters b and b') between the second data streams of these first two experiments and the first and second data streams of the third experiment. So I will now adjust the notation:
First experiment: analyser 1 at angle a, and analyser 2 at angle b, results in a data stream {(a1[n],b1[n])}, shortly {Eab[n]}
Second experiment: analyser 1 at angle a, and analyser 2 at angle b', results in a datastream {(a2[n],b'2[n])} , shortly {Eab'[n]}.
Third experiment: analyser 1 at angle b, and analyser 2 at angle b', results in a datastream {(b3[n],b'3[n])}, shortly {Ebb'[n]}.

There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn ; this is nothing else but a local realistic model, for which indeed, Bell's inequalities must hold. The confusion seems to come from the fact that one tries to construct a <b.b'> from data that haven't been generated in the Ebb' condition, but from the Eab and Eab' conditions. Indeed, if these b and b' streams were to be predetermined, this reasoning would hold. But it is the very prediction of standard QM that they aren't. So the Ebb' case has the liberty of generating completely different correlations than those of the Eab and Eab' cases.

That's why I gave my (admittedly rather silly) counter example in Bantum mechanics. I generated 3 "experimental results" which each reproduce the Bantum correlations, even though together they maximally violate inequality (3).

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple logistics for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until reaching it within a maximum error of 1/N.
Contrary to what Sica writes below his equation (3) in the first paper, he DID introduce an underlying realistic model, namely the assumption that the correlations of b and b' in the Eab and Eab' cases have anything to do with the correlations of Ebb'.

Let me know if you have any problem replicating the proof, then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit of a tedium).

I'll give it a deeper look ; indeed it escaped me, it must be what the phrase starting on line 14 of p6 alludes to (I had put a question mark next to it!), but even if I agree with it, the point is moot, because, as I said, THIS correlation between the b and b' trains (in my notation b1 and b'2) should a priori have nothing to do with the correlation between b3 and b'3. In fact, I now realize that you can probably fabricate every thinkable correlation between b1 and b'2 that is compatible with (3) by reshuffling b'2[n], so this correlation is not even well-defined. Nevertheless, by itself the result is interesting, because it illustrates very well a fundamental misunderstanding of standard quantum theory (at least I think so :-). I think I illustrated the point with my data streams in Bantum theory ; however, because I tricked them by hand you might of course object. If you want, I can generate a few more realistic series of data which will ruin what I think Sica is claiming when he writes (lower part of p9): "However, violation of the inequality by the correlations does imply that they cannot represent any data streams that could possibly exist or be imagined".

cheers,
Patrick.
 
  • #44
vanesch I can even add that (a,b) and (a',b') have absolutely nothing to do with (b,b') as a new measurement.

You may need to read part two of the paper, where the connection is made more explicit, and also check Bell's original 1964 paper (especially Bell's Eqs. (8) and (13), which utilize the perfect correlations for the parallel and anti-parallel apparatus orientations to move implicitly between the measurements on B and A, in the QM or in the classical model; these are essential steps for the operational interpretation of the three-correlation case).


I have seen this kind of reasoning used to refute EPR-type experiments or theoretical results several times now, and it is always based on a fundamental misunderstanding of what exactly quantum theory, as most people accept it, really predicts.

That wasn't a refutation of Bell's theorem or of the experiments, merely a new way to see the nature of the inequality. It is actually quite similar to an old visual proof of Bell's theorem by Henry Stapp from the late 1970s (I had it while working on my master's thesis on this subject; it was an ICTP preprint that my advisor brought back from a conference in Trieste).

However, you should admit that your viewpoint isn't so obvious as to call the people who take the standard view blatant idiots.

I wouldn't do that. I was under the spell for quite a few years, even though I did a master's degree on the topic, had read quite a few papers and books at the time, and had spent untold hours discussing it with advisors and colleagues. It was only after leaving academia and forgetting about it for a few years, then happening to help my wife a bit (also a physicist, but experimental) with some quantum optics coincidence setups, that it struck me as I was looking at the code that processed the data -- wait a sec, this is nothing like I imagined. All the apparent firmness of assertions such as "A goes here, B goes there, ..." in textbooks or papers rang pretty hollow.

On the other hand, I do think it won't be too long before the present QM mystification is laughed at by the next generation of physicists. Even giants like Gerard 't Hooft are ignoring Bell's and other no-go theorems and exploring purely local sub-quantum models (the earlier heretics such as Schroedinger, Einstein, de Broglie, later Dirac, Barut, Jaynes,... weren't exactly midgets, either).

...Bell's identities. To me, they don't really matter so much, because the counterintuitive aspects of QM are strongly illustrated in EPR setups, but they are already present from the moment you accept superposition of states and the Born rule.

There is nothing odd about superposition at least since Maxwell and Faraday. It might surprise you, but plain old classical EM fields can do the entanglement, GHZ state, qubits, quantum teleportation, quantum error correction,... and the rest of the buzzwords, just about all but the non-local, non-dynamical collapse - that bit of magic they can't do (check papers by Robert Spreeuw, also http://remote.science.uva.nl/~spreeuw/lop.htm ).

I've seen up to now two arguments: one is that you definitely need the projection postulate to deduce Bell's inequality violation, which I think is a wrong statement

What is your (or rather, the QM Measurement theory's) Born rule but a projection -- that's where your joint probabilities come from. Just recall that the Bell test can be viewed as a preparation of system B in, say, the pure |B+> state, which appears as the sub-ensemble of B for which A produced the (-1) result. The state of A+B, which is a pure state initially, collapses into a mixture rho = 1/2 |+><+| x |-><-| + 1/2 |-><-| x |+><+|, from which one can identify the sub-ensemble of a subsystem, such as |B+> (without going to the mixture, statements such as "A produced -1" are meaningless since the initial state is a superposition and spherically symmetrical). Unitary evolution can't do that without the non-dynamical collapse (see von Neumann's chapter on the measurement process and his infinite chain problem, and why you have to have it).

You may imagine that you never used state |B+> but you did since you used the probabilities for B system (via the B subspace projector within your joint probabilities calculation) which only that state can reproduce (the state is unique once all the probabilities are given, according to Gleason).

Frankly, you're the only one ever to deny using collapse in deducing Bell's QM prediction. Only a suspension of the dynamical evolution can get from the purely local PDE evolution equations (the 3N coordinate relativistic Dirac-Maxwell equations for the full system, including the apparatus) to a prediction which prohibits any purely local PDE based mechanism from reproducing such a prediction. (You never explained how local PDEs can do it without suspension of the dynamics; except for trying to throw into the equations, as a premise, the approximate, explicitly non-local/non-relativistic instantaneous potentials.)


and second that numerically, out of real data, you cannot hope to violate systematically Bell's inequality, which I think is also misguided,

I never claimed that Sica's result rules out, mathematically or logically, the experimental confirmation of Bell's QM prediction. It only makes those predictions seem stranger than one would have thought from the usual descriptions.

The result also sheds light on the nature of Bell's inequalities -- they are enumerative constraints, a la the pigeonhole principle. Thus the usual euphemisms and excuses used to soften the plain simple fact of over three decades of failed tests are non sequiturs (see a couple of messages back for the explanation).
 
Last edited by a moderator:
  • #45
nightlight said:
What is your (or rather, the QM Measurement theory's) Born rule but a projection -- that's where your joint probabilities come from.

Yes, that was indeed the first question I asked you: to me, the projection postulate IS the Born rule. However, I thought you aimed at the subtle difference between calculating probabilities (Born rule) and the fact that AFTER the measurement, the state is assumed to be the eigenstate corresponding to the measurement result, and I thought it was the second part that you were denying, but accepting the absolute square of the inner product as the correct probability prediction. As you seem to say yourself, it is very difficult to disentangle the two!

The reason why I say that you seem to deny superposition in its general sense is that without the Born rule (inner product squared = probability) the Hilbert space has no meaning. So if you deny the possibility for me to use that rule on the product space, this means you deny the existence of that product space, and hence the superpositions of states such that the result is not a product state. You need the Born rule to DEFINE the Hilbert space. It is the only link to physical results. So in my toy model in a 2x2-dimensional Hilbert space, I can think of measurements (observables, Hermitean operators) which can, or cannot, factor into 1 x A or A x 1; it is my choice. If I choose to have a "global measurement" which says "result +1 for system 1 and result -1 for system 2", then that is ONE SINGLE MEASUREMENT, and I do not need to use any fact of "after the measurement, the state is in an eigenstate of...". I need this kind of specification in order for the product space to be defined as a single Hilbert space, and hence to allow for the superposition of states across the products. Denying this is denying the superposition.
However, you need a projection, indeed, to PREPARE any state. As I said, without it, there is no link between the Hilbert space description and any physical situation. The preparation is here the singlet state. But in ANY case, you need some link between an initial state in Hilbert space, corresponding to a physical setup.

Once you've accepted that superposition of the singlet state, it should be obvious that unitary evolution cannot undo it. So, locally (meaning, acting on the first part of the product space), you can complicate the issue as much as you like, there's no way in which you can undo the correlation. If you accept the Born rule, then NO MATTER WHAT HAPPENS LOCALLY, these correlations will show up, violating Bell's inequalities in the case of 100% efficient experiments.

cheers,
Patrick.
 
  • #46
vanesch There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn ; this is nothing else but a local realistic model, for which indeed, Bell's inequalities must hold.

The whole point of the reshuffling was to avoid the need for an underlying model (the route Bell took) in order to get around the non-commutativity of b, b' and the resulting counterfactuality (thus having to do the third experiment, as any Bell inequality test does) when trying to compare their correlations in the same inequality (that was precisely the point of Bell's original objection to von Neumann's proof). See both of Sica's papers (and Bell's 1964 paper; also useful is the related 1966 paper [Rev. Mod. Phys. 38, 447-52] on von Neumann's proof) to see how much work it took to weave the logic around the counterfactuality and the need for either the third experiment or the underlying model.
 
  • #47
nightlight said:
vanesch There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn ; this is nothing else but a local realistic model, for which indeed, Bell's inequalities must hold.

The whole point of the reshuffling was to avoid the need for an underlying model (the route Bell took) in order to get around the non-commutativity of b, b' and the resulting counterfactuality (thus having to do the third experiment, as any Bell inequality test does) when trying to compare their correlations in the same inequality (that was precisely the point of Bell's original objection to von Neumann's proof).

Ah, well, I could have helped them out back then :smile: :smile:. It was exactly the misunderstanding of what exactly QM predicts that I tried to point out. The very supposition that the first two b[n] and b'[n] series should have anything to do with the result of the third experiment means 1) that one didn't understand what QM said and didn't say, but also 2) that one supposes that these series were drawn from a master distribution, which would have yielded the same b[n] and b'[n] if we had happened to choose to measure those instead of (a,b) and (a,b'). This supposition by itself (which comes down to saying that the <b1[n],b'2[n]> correlation (which, I repeat, is not uniquely defined by Sica's procedure) has ANYTHING to do with whatever should be measured when performing the (b,b') experiment) IS BY ITSELF a hidden-variable hypothesis.

cheers,
Patrick.
 
  • #48
Yes, that was indeed the first question I asked you: to me, the projection postulate IS the Born rule.

In the conventional textbook QM measurement axiomatics, the core empirical essence of the original Born rule (as Schroedinger interpreted it in his founding papers; Born introduced a related rule in a footnote for interpreting the scattering amplitudes) is hopelessly entangled with the Hilbert space observables, projectors, Gleason's theorem, etc.

However, I thought you aimed at the subtle difference between calculating probabilities (Born rule) and the fact that AFTER the measurement, the state is assumed to be the eigenstate corresponding to the measurement result, and I thought it was the second part that you were denying,

Well, that part, the non-destructive measurement, is mostly von Neumann's high abstraction which has little relevance or physical content (any discussion on that topic is largely a slippery semantic game, arising from the overload of the term "measurement" and shifting its meaning between preparation and detection -- that whole topic is empty). Where is photon A in the Bell experiment after it has triggered the detector? Or, for that matter, any photon after detection in any Quantum Optics experiment?

but accepting the absolute square of the inner product as the correct probability prediction.

That is an approximate and limited operational rule. Any claimed probability of detection ultimately has to be checked against the actual design and settings of the apparatus. Basically, the presumed linear response of an apparatus to Psi^2 is a linear approximation to a more complex non-linear response (e.g. check actual photodetector sensitivity curves: they are sigmoid, with only one section approximately linear). Talking of the "probability of detecting a photon" is somewhat misleading, often confusing, and less accurate than talking about the degree and nature of the response to an EM field.

In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter.

The reason it is detached from time and the dynamics is precisely in order to empower it with the magic capability of suspending the dynamics, producing the "measurement" result with such and such probability, then resuming the dynamics. And without ever defining how and when exactly this suspension occurs, what and when restarts it... etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates. By the time a student gets through all of it, his mind will be too numbed, his eyes too glazed, to notice that the emperor wears no trousers.

The reason why I say that you seem to deny superposition in its general sense is that without the Born rule (inner product squared = probability) the Hilbert space has no meaning.

It is a nice idea (linearization) and a useful tool, taken much too far. The actual PDE and integral-equation formulations of the dynamics are a mathematically much richer modelling medium than their greatly impoverishing abstraction, the Hilbert space.

Superposition is as natural with any linear PDEs and integral equations as it is with Hilbert space. On the other hand, the linearity is almost always an approximation. The QM (or QED) linearity is an approximation to the more exact interaction between the matter fields and the EM field. Namely, the linearization arises from assuming that the EM fields are "external" (such as the Coulomb potential or external EM fields interacting with the atoms) and that the charge currents giving rise to the quantum EM fields are external. Schroedinger's original idea was to put Psi^2 (and its current) as the source terms in the Maxwell equations, obtaining thus the coupled non-linear PDEs. Of course, at the time and in that phase that was a much too ambitious project which never got very far. It was only in the late 1960s that Jaynes picked up Schroedinger's idea and developed the somewhat flawed "neoclassical electrodynamics". That was picked up in the mid-1980s by Asim Barut, who worked out the more accurate "self-field electrodynamics", which reproduces not only QM but also the leading radiative corrections of QED, without ever quantizing the EM field (which amounts to linearizing the dynamics and then adding the non-linearities back via perturbative expansion). He viewed the first quantization not as some fancy change of classical variables into operators, but as a replacement of the Newtonian-Lorentz particle model with a Faraday-Maxwell type matter field model, thus resolving the particle-field dichotomy (which was plagued with the point-particle divergences). Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

On the other hand, I think the future will favor neither, but rather completely different, new modelling tools (physical theories are models) more in tune with the technology (such as Wolfram's automata, networks, etc.). The Schroedinger, Dirac and Maxwell equations can already be rederived as macroscopic approximations of the dynamics of simple binary on/off automata (see for example some interesting papers by Garnet Ord). These kinds of tools are a hugely richer modelling medium than either PDEs or Hilbert space.

So if you deny the possibility for me to use that rule on the product space, this means you deny the existence of that product space, and hence the superpositions of states such that the result is not a product state.

I am only denying that this abstraction/generalization automatically applies that far, in such a simple-minded, uncritical manner, in the Bell experiment setup. Just calling all Hermitean operators observables doesn't make them so, much less at the "ideal" level. The least one needs to do when modelling Bell's setup in this manner is to include projectors onto the no_coincidence and no_detection subspaces (these would come from the orbital degrees of freedom), so that the prediction has some contact with reality, instead of bundling all the unknowns into the engineering parameter "efficiency" so that all the ignorance can be swept under the rug, while wishfully and immodestly labeling the toy model "ideal". What a joke. The most ideal model is the one that best predicts what actually happens (which is 'no violation'), not the one which makes the human modeler appear in the best light or most in control of the situation.
 
  • #49
nightlight said:
(the 3N coordinate relativistic Dirac-Maxwell equations for the full system, including the apparatus) to a prediction which prohibits any purely local PDE based mechanism from reproducing such a prediction. (You never explained how local PDEs can do it without suspension of the dynamics; except for trying to throw into the equations, as a premise, the approximate, explicitly non-local/non-relativistic instantaneous potentials.)

I don't know what the 3N coordinate relativistic Dirac-Maxwell equations are. It sounds vaguely like the stuff an old professor tried to teach me instead of QFT. True QFT cannot be described - as far as I know - by any local PDE ; it should fit in a Hilbert space formalism. But I truly think you do not need to go relativistic in order to talk about Bell's stuff. In fact, the space-like separation is nothing very special to me. As I said before, it is just an extreme illustration of the simple superposition + Born rule case you find in almost all QM applications. So all this Bell stuff should be explainable in simple NR theory, because exactly the same mechanisms are at work when you calculate atomic structure, when you do solid-state physics or the like.

cheers,
Patrick.
 
  • #50
nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter.

You must be kidding. The time evolution is in the state in Hilbert space, not in the Born rule itself.

And without ever defining how and when exactly this suspension occurs, what and when restarts it... etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates.

Well, the decoherence program has something to say about this. I don't know if you are aware of this.

It is a nice idea (linearization) and a useful tool, taken much too far. The actual PDE and integral-equation formulations of the dynamics are a mathematically much richer modelling medium than their greatly impoverishing abstraction, the Hilbert space.

Superposition is as natural with any linear PDEs and integral equations as it is with Hilbert space. On the other hand, the linearity is almost always an approximation. The QM (or QED) linearity is an approximation to the more exact interaction between the matter fields and the EM field.

Ok this is what I was claiming all along. You DO NOT ACCEPT superposition of states in quantum theory. In quantum theory, the linearity of that superposition (in time evolution and in a single time slice) is EXACT ; this is its most fundamental hypothesis. So you shouldn't say that you are accepting QM "except for the projection postulate". You are assuming "semiclassical field descriptions".


Namely, the linearization arises from assuming that the EM fields are "external" (such as the Coulomb potential or external EM fields interacting with the atoms) and that the charge currents giving rise to the quantum EM fields are external. Schroedinger's original idea was to put Psi^2 (and its current) as the source terms in the Maxwell equations, obtaining thus the coupled non-linear PDEs.

I see, that's indeed semiclassical. This is NOT quantum theory, sorry. In QED, but much more so in non-abelian theories, you have indeed a non-linear classical theory, but the quantum theory is completely linear.

Of course, at the time and in that phase that was a much too ambitious project which never got very far. It was only in the late 1960s that Jaynes picked up Schroedinger's idea and developed the somewhat flawed "neoclassical electrodynamics".

Yeah, what I said above. Ok, this puts the whole discussion in another light.

That was picked up in the mid-1980s by Asim Barut, who worked out the more accurate "self-field electrodynamics", which reproduces not only QM but also the leading radiative corrections of QED, without ever quantizing the EM field (which amounts to linearizing the dynamics and then adding the non-linearities back via perturbative expansion). He viewed the first quantization not as some fancy change of classical variables into operators, but as a replacement of the Newtonian-Lorentz particle model with a Faraday-Maxwell type matter field model, thus resolving the particle-field dichotomy (which was plagued with the point-particle divergences). Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

Such semiclassical models are used all over the place, such as to calculate effective potentials in quantum chemistry. I know. But I consider them just as computational approximations to the true quantum theory behind it, while you are taking the opposite view.

You tricked me into this discussion because you said that you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

cheers,
Patrick.
 
  • #51
I’ve read this interesting discussion and I’d like to add the following comments.

Bell's inequalities, and more precisely EPR-like states, help in understanding how quantum states behave. There are many papers on arxiv about these inequalities. Many of them show how classical statistics can locally break these inequalities, even without the need to introduce local (statistical) errors in the experiment.

Here are 2 examples extracted from arxiv (far from being exhaustive).

Example 1: quant-ph/0209123, Laloë 2002, an extensive paper on QM interpretation questions (in my opinion against local hidden variable theory, but open-minded => lots of pointers and examples)
Example 2: quant-ph/0007005, Accardi 2000 (and later). An example of how a classical probability space can break Bell inequalities (contextual).

The approach of Nightlight, if I have understood correctly, is another way (I'd missed it: thanks a lot for this new possibility): instead of breaking the inequalities, the "statistical errors" (some events not counted by the experiment, or the way the experiment data is processed), if included in the final result, force the experiment to follow the Bell inequalities. This is another point of view on what is "really" going on with the experiment.

All of these alternative examples use a classical probability space, i.e. the Kolmogorov axiomatization, where one takes adequate variables such that they can violate Bell's inequalities (and now, a way to enforce them).

Now, if the question is to know whether the Bell inequality experiments are relevant or not, one conservative approach is to try to know (at least feel, and at best demonstrate) whether, in "general", "sensible" experiments (quantum or classical or whatever we want) are most likely to break the Bell inequalities or not. If the answer is no, then we must admit that Aspect-type experiments have detected a rare event and that the remaining "statistical errors" seem not to help (in breaking the inequalities). If the answer is yes, well, we can say what we want :).

The papers against the Bell inequality experiments, in my modest opinion, demonstrate that a sensible experiment is more likely to detect the inequality breaking, so that we can say what we want! That's a little bit disappointing, because in this case we still do not know whether any quantum state may be described by some "local" classical probability space or not. I would really prefer to get a good and solid explanation.

To end, I did not know of Sica's papers before. But I would like to understand the mechanism he (and Nightlight) uses in order to force the Bell inequality matching. I follow Vanesch's reasoning without problem, but Nightlight's is a little bit more difficult to understand: where is the additional freedom used to enforce the inequality?

So, let's try to understand this problem in the special case of the well-known Aspect et al. 1982 experiment (Phys. Rev. Letters), where only very simple mathematics is used. I like to use a particular case before making a generalisation; it is easier to see where the problem is.

First let's take 4 ideal discrete measurements (4 sets of data) of an Aspect-type experiment with no samples lost during the measurement process.

If we take the classical expectation formulas we have:

S+ = E(AB) + E(AB') = 1/N sum_i1 [A(i1)B(i1)] + 1/N sum_i2 [A(i2)B'(i2)]
   = 1/N sum_i1,i2 [A(i1)B(i1) + A(i2)B'(i2)]   (1)

where A(i1), B(i1) is the data collected by the first experiment and A(i2), B'(i2) the data collected by the second experiment, with N --> ∞ (we also take the same number of samples for each experiment).

In our particular case A(i1) is the result of the spin measurement of photon 1 on the A (same name as the observable) axis (+1 if spin |+>, -1 if spin |->) while B(i1) is the result of the spin measurement of photon 2 on the B axis (+1 if spin |+>, -1 if spin |->).
Each ideal measurement (given by label i1 or i2) thus gives two spin results (the two photons must be detected).
Etc. for the other measurement cases.

We thus have the second equation:

S- = E(A'B) - E(A'B') = 1/N sum_i3 [A'(i3)B(i3)] - 1/N sum_i4 [A'(i4)B'(i4)]
   = 1/N sum_i3,i4 [A'(i3)B(i3) - A'(i4)B'(i4)]   (2)

Relabelling equation (1) or (2), i.e. changing the ordering of the labels i1, i2, i3, i4, does not change the result (the sum is commutative).

Now, if we want to get the inequality |S+| = |E(AB)+E(AB')| ≤ 1 + E(BB'), we first need to apply a filter to the rhs of equation (1), otherwise A cannot be factorized: we must select a subset of experiment samples with A(i1) = A(i2).

If we take a large sample number N, equation (1) is not changed by this filtering and we get:

|S+| = |E(AB)+E(AB')| = 1/N |sum_i1,i2 [A(i1)B(i1) + A(i2)B'(i2)]|
     = 1/N |sum_i1 [A(i1)B(i1) + A(i1)B'(i1)]|
     ≤ 1/N sum_i1 |A(i1)B(i1) + A(i1)B'(i1)|

We then use the simple inequality |a.b + a.c| ≤ 1 + b.c (for |a|,|b|,|c| ≤ 1) for each label i1:

|S+| = |E(AB)+E(AB')| ≤ 1 + 1/N sum_i1 [B(i1)B'(i1)]   (3)

Recall that B'(i1) is the data of the second experiment relabelled onto a subset of the labels i1. Now, this re-labelling has some freedom, because roughly half of the experiment results satisfy A(i1) = A(i2).

So in equation (3), |sum_i1 [B(i1)B'(i1)]| depends on the artificial label order.

We also have almost the same inequality for equation (2)

|S-| = |E(A'B)-E(A'B')| = 1/N |sum_i3,i4 [A'(i3)B(i3) - A'(i4)B'(i4)]|
     = 1/N |sum_i3 [A'(i3)B(i3) - A'(i3)B'(i3)]|
     ≤ 1/N sum_i3 |A'(i3)B(i3) - A'(i3)B'(i3)|

We then use the simple inequality |a.b - a.c| ≤ 1 - b.c (for |a|,|b|,|c| ≤ 1):

|S-| = |E(A'B)-E(A'B')| ≤ 1 - 1/N sum_i3 [B(i3)B'(i3)]   (4)

So in equation (4), |sum_i3 [B(i3)B'(i3)]| depends on the artificial label ordering i3.

Now we thus have the Bell inequality:

|S| = |S+ + S-| ≤ |S+| + |S-| ≤ 2 + 1/N sum_i1,i3 [B(i1)B'(i1) - B(i3)B'(i3)]   (5)

where sum_i1,i3 [B(i1)B'(i1) - B(i3)B'(i3)] depends on the labelling order we have used to filter and get this result.

I think that (3) and (4) may be the labelling order problem remarked on by Nightlight in this special case.

Up to now, we have only spoken of collections of measurement results with values +1/-1.

Now, if B is a random variable that depends only on the local experiment apparatus (the photon polarizer), i.e. B = B(apparatus_B, hv) where hv is the local hidden variable, we should have:

1/N sum_i1 [B(i1)B'(i1)] = 1/N sum_i3 [B(i3)B'(i3)] = <BB'> when N --> ∞
(so we have the Bell inequality |S| ≤ 2).

So now I can use the Nightlight argument: the ordering of B'(i1) and B'(i3) is totally artificial, so the question is: should I get 1/N sum_i1 [B(i1)B'(i1)] <> 1/N sum_i3 [B(i3)B'(i3)], or the equality?

Moreover, equation (5) seems to show that this kind of experiment is more likely to see a violation of a Bell inequality, since B(i1), B'(i2), B(i3), B'(i4) come from 4 different experiments.
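To put toy numbers on equation (5) (my own illustration; the ±0.707 values below are made up to mimic the QM correlations at the standard angles, nothing comes from the actual Aspect data):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

def toy_run(corr):
    """One independent toy run of +/-1 pairs whose empirical E() is ~corr."""
    x = rng.choice([+1, -1], size=N)
    y = np.where(rng.random(N) < (1 + corr) / 2, x, -x)
    return x, y

# Four separate experiments, matching the labels i1..i4 above.
A1, B1  = toy_run(+0.707)   # E(AB)
A2, Bp2 = toy_run(+0.707)   # E(AB')
A3, B3  = toy_run(+0.707)   # E(A'B)
A4, Bp4 = toy_run(-0.707)   # E(A'B')

S = (np.mean(A1 * B1) + np.mean(A2 * Bp2)
     + np.mean(A3 * B3) - np.mean(A4 * Bp4))
print(S)   # ~2.83 > 2: four *independent* runs can do this freely; the B.B'
           # terms in (3)-(5) refer to relabelled columns from different runs,
           # not to any separately measured correlation.
```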


Seratend

P.S. Sorry if some minor mistakes are left.
 
  • #52
Sorry, my post was supposed to follow this one:


nightlight said:
vanesch Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.

... the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.


The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings out that question and explicitly preserves <b.b'>. Additional analysis of the same question is in his later paper. It is not necessary to change the sum <b1.b2> even though individual elements of the arrays b1[] and b2[] are reshuffled. Namely, there is a great deal of freedom when matching the a1[] and a2[] elements, since any +1 from a2[] can match any +1 from a1[], thus allowing [(N/2)!]^2 ways to match the N/2 +1's and the N/2 -1's between the two arrays. The constraint from <b1.b2> requires only that the sum be preserved in the permutation, which is a fairly weak constraint.

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple logistics for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until reaching it within a maximum error of 1/N.

Let me know if you have any problem replicating the proof, then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit of a tedium).

Seratend,
It takes time to answer :)
 
  • #53
seratend said:
Example 2: quant-ph/0007005, Accardi 2000 (and later). An example of how a classical probability space can break Bell inequalities (contextual).

I only skimmed quickly through this paper, but something struck me: he shows that Bell's inequality can also be satisfied with a non-local model. Bell's claim is the opposite: that in order NOT to satisfy the inequality, you need a non-local model.

A => B does not imply B => A.

The "reduction" of the "vital assumption" that there is one and only one underlying probability space is in my opinion EXACTLY what is stated by local realistic models: indeed, at the creation of the singlet state with two particles, both particles carry with them the "drawing of the point in the underlying probability universe", from which all potential measurements are fixed, once and for all.

So I don't really see the point of the paper! But ok, I should probably read it more carefully.

cheers,
Patrick
 
  • #54
vanesch you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

The projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, in a kind of slippery way, without explaining how, why and when it stops functioning (macroscopic device? consciousness? a friend of that consciousness? decoherence? consistent histories? gravity? universe branching? jhazdsfuty?) and how it resumes. That is a tacit recognition that linear evolution, such as the linear Schrodinger or Dirac equations, doesn't work correctly throughout. So when the limitation of the linear approximation reaches a critical point, the probability mantras get chanted, the linear evolution is stopped in a non-dynamic, abrupt way and temporarily substituted with a step-like, lossy evolution (the projector) to the state in which it ought to be; then, when in safe waters again, the probability chant stops and the regular linear PDE resumes. The overall effect is at best analogous to a piecewise linear approximation of a curve which, all agree, cannot be a line.

So this is not a matter of who is for and who is against the linearity -- since no one is "for", the only difference being that some know it. The rest believe they are in the "for" group, and they despise the few who don't believe so. If you believe in the projection postulate, you believe in a temporary suspension of the linear evolution equations, however ill-defined it may be.

Now that we've agreed we're all against linearity, what I am saying is that this "solution," the collapse, is an approximate stop-gap measure, due to the intractability of the already known non-linear dynamics, which in principle can produce collapse-like effects when they're called for, except in a lawful and clean way. The linearity would hold approximately, as it does now, and no less than it does now; i.e. it is analogous to smoothing the sharp corners of the piecewise linear approximation with a mathematically nicer and more accurate approximation.

While you may imagine that non-linearity is a conjecture, it is the absolute linearity that is a conjecture, since non-linearity is the more general scheme. Check von Neumann's and Wigner's writings on the measurement problem to see the relation between the absolute linearity and the need for the collapse.

A theory cannot be logically coherent if it has an ill-defined switch between two incompatible modes of operation, the dynamical equations and the collapse (which grew out of similarly incoherent seeds, Bohr's atom model and Planck's first theory). The whole theory in this phase is like a hugely magnified version of the dichotomies of the originating embryo. That's why there is so much philosophizing and nonsense on the subject.
 
  • #55
You must be kidding. The time evolution is in the state in Hilbert space, not in the Born rule itself.

That is the problem I am talking about. We ought not to have the dynamical evolution interrupted and suspended by a "measurement" which turns on the Born rule to figure out what it really wants to do next, after which the dynamics is somehow allowed to run again.


Well, the decoherence program has something to say about this. I don't know if you are aware of this.

It's a bit decoherent for my taste.
 
  • #56
nightlight said:
Projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, in a kind of slippery way, without explaining how, why and when does it stop functioning (macroscopic device ? consciousness?

The relative-state (or many-worlds) proponents do away with it and apply strict linearity. I have to say that I think myself that there is something missing in that picture. But I think that quantum theory is just a bit too subtle to replace it with semiclassical stuff. I'd be surprised if such a model could predict the same things as QFT and most of quantum theory. In that sense it would be rather stupid, no? I somehow have the feeling - it is only that, of course - that this semiclassical approach would be the evident thing to try before jumping on the bandwagon of full QM, and that people have turned that question in all possible directions, so that the possibilities there have been exhausted. Of course, one cannot study all the "wrong" paths of the past, and one has to assume that this has been looked at somehow, otherwise nobody gets anywhere if all the wrong paths of the past are re- and re- and reexamined. So I didn't look into all that stuff, accepting that it cannot be done.

cheers,
Patrick.
 
  • #57
But I think that quantum theory is just a bit too subtle to replace it with semiclassical stuff.

I didn't have in mind the semiclassical models. The semiclassical scheme merely doesn't quantize the EM field, but it still uses the external-field approximation; thus, although practical, it is limited and less accurate than QED (when you think of the difference in heavy gear involved, it's amazing it works at all). Similar problems plague Stochastic Electrodynamics (and its branch, Stochastic Optics). While they can model many of the so-called non-classical effects touted in Quantum Optics (including the Bell inequality experiments), they are also an external-field approximation scheme, just using the ZPF distribution as the boundary/initial conditions for the classical EM field.

To see the difference from the above approaches, write down the Dirac equation with minimal EM coupling, then add below it the inhomogeneous wave equation for the 4-potential A_mu (the same A_mu that appears in the Dirac equation above), with the right-hand side given by the Dirac 4-current. You have a set of coupled nonlinear PDEs without an external-field or external-current approximation. See how far you get with that kind of system.
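For concreteness, here is my transcription of that coupled system (Lorenz gauge, natural units; the signs, the charge convention and the unit system vary between texts, so take this as a sketch rather than the definitive form):

\left[\, i\gamma^\mu\left(\partial_\mu + i e A_\mu\right) - m \,\right]\psi = 0,
\qquad
\Box A^\mu = e\,\bar\psi\,\gamma^\mu\,\psi .

The same A_mu and the same psi appear in both equations, so neither the field nor the current is external: the Dirac current sources the potential, and the potential in turn steers the matter field, which is what makes the system nonlinear.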

That's a variation of what Barut started with (and also with Schroedinger-Pauli instead of Dirac) and then managed to reproduce the results of the leading orders of the QED expansion (http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR= has 55 of his papers and preprints scanned; those from the mid 1980s on are mostly on his self-field). While this scheme alone obviously cannot be the full theory, it may at least be a knock on the right door.
 
Last edited by a moderator:
  • #58
nightlight said:
To see the difference from the above approaches, write down the Dirac equation with minimal EM coupling, then add below it the inhomogeneous wave equation for the 4-potential A_mu (the same A_mu that appears in the Dirac equation above), with the right-hand side given by the Dirac 4-current. You have a set of coupled nonlinear PDEs without an external-field or external-current approximation. See how far you get with that kind of system.

This is exactly what my old professor was doing (and in doing so, he neglected to teach us QFT, the bastard). He was even working on a "many-particle Dirac equation". And indeed, this seems to be a technique that incorporates some relativistic corrections for heavy atoms (however, there the problem is that there are too many electrons and the problem becomes intractable, so it would be more something to handle an ion like U+91 or so).

Nevertheless, I'd still classify this approach as fully classical, because there is no "quantization" at all, and the matter fields are considered as classical fields just as well as the EM field. In the language of path integrals, you wouldn't take into account anything besides the classical solution.
Probably this work can be interesting. But you should agree that it is still a long way from a working theory, so you shouldn't sneer at us lowly mortals who, for the moment, take quantum theory in the standard way, no?
My feeling is that it is simply too cheap, honestly.

cheers,
Patrick.
 
  • #59
This is exactly what my old professor was doing (and in doing so, he neglected to teach us QFT, the bastard).

What's his name? (Dirac was playing with that stuff in his later years too, so it can't be that silly.)

He was even working on a "many particle dirac equation".

What Barut found (although only for the non-relativistic case) was that for N-particle QM he could obtain a result equivalent to conventional N-particle QM, in a form superficially resembling the Hartree-Fock self-consistent field, using an electron field Psi_e and a nucleon field Psi_n (each normalized to the correct number of particles instead of to 1) as nonlinearly coupled classical matter fields in 3-D, instead of the usual 3N-dimensional configuration space -- and, unlike Hartree-Fock, it was not an approximation.

Nevertheless, I'd still classify this approach as fully classical, because there is no "quantization" at all, and the matter fields are considered as classical fields just as well as the EM field. In the language of path integrals, you wouldn't take into account anything besides the classical solution.

Indeed, that model alone doesn't appear to be the key by itself. For example, charge quantization doesn't come out of it and must be put in by hand, although no one has really solved anything without substantial approximations, so no one knows what these equations are really capable of producing (charge quantization seems very unlikely, though, without additional fields or some other missing ingredient). But many have gotten quite a bit of mileage out of much, much simpler non-linear toy models, at least in the form of insights about the spectrum of phenomena one might find in such systems.
 
Last edited:
  • #60
Seratend reply:
=======================================================
First, here is the point in the thread at which I started to reply:


nightlight said:
vanesch you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

Projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, (...)

(...) A theory cannot be logically coherent if it has an ill-defined switch between two incompatible modes of operation, the dynamical equations and the collapse (which grew out of similarly incoherent seeds, Bohr's atom model and Planck's first theory). The whole theory in this phase is like a hugely magnified version of the dichotomies of the originating embryo. That's why there is so much philosophizing and nonsense on the subject.

first,


vanesch said:
(...) I only skimmed quickly at this paper, but something struck me: he shows that Bell's equality can also be satisfied with a non-local model. Bell's claim is the opposite: that in order NOT to satisfy the equality, you need a non-local model.

(...) So I don't really see the point of the paper! But ok, I should probably read it more carefully.
cheers,
Patrick


Vanesch, do not lose your time with the Accardi paper. It is only an example: one attempt, among many others, surely not the best, at a random-variable model that breaks the Bell inequalities. If I understand correctly, the results of the spin measurement depend on the apparatus settings (local random variables: what they call the "chameleon effect").
It has been a long time since I've looked at this paper :), but the first pages, before their model, are a very simple introduction to probability and the Bell inequalities on a general probability space, and after that they show how to construct a model that breaks this inequality (local or global). This kind of view has led to the creation of a school of QM interpretation ("quantum probability", which is slightly different from the orthodox interpretation).


Second and last, I have some other comments on physics and the projection postulate and its dynamics.



nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter. (…)

(…) Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

The Schroedinger, Dirac and Maxwell equations can already be rederived as macroscopic approximations of the dynamics of simple binary on/off automata (see for example some interesting papers by Garnet Ord). These kinds of tools are a hugely richer modelling medium than either PDEs or Hilbert space.

I appreciate it when someone likes to check other possibilities in physical modelling (or theory, if we prefer); it is a good way to discover new things. However, please avoid saying that one model/theory is better than another, as the only thing we can say (in my modest opinion) is that each model has its unknown domain of validity.
Note I do not reject the possibility of a perfect model (full domain of validity), but I prefer to think it currently does not exist.

The use of PDE models is interesting. It has already proved its value in many branches of physics. We can use PDEs to model classical QM, as well as the relativistic one; this is not the problem.
For example, you have Bohmian mechanics (1952): you can get all the classical QM results with this model, just as you can write an ODE that complies with QM (insertion of a Brownian-motion-like term in Newton's equation -- Nelson 1966 -- or your more recent "simple binary on/off automata" of Garnet Ord, which seems to be the binary random walk of Brownian motion -- not enough time to check it :(
The main problem is to know whether we can get the results of the experiments in a simpler way using such a method.

The use of Hilbert space tools in the quantum mechanics formulation is just a matter of simplicity. It is interesting when we face discrete-value problems (e.g. Sturm-Liouville-like problems). It shows, for example, how a set of discrete values of operators changes in time. Moreover, it simply shows the relativity of representation (e.g. the quantum q or p basis) via a simple basis change. It then shows in a simple way the connection (a basis change) between a continuous and a discrete basis (e.g. the {p,q} continuous basis and the {a,a+} discrete basis).
In this type of case, the use of PDEs may become very difficult. For example, the use of ill-behaved functions of L2(R,dx) spaces introduces less intuitive problems of continuity, requiring for example the introduction of extensions of the derivative operators to follow the full solutions in an extended space. This is not my cup of tea so I won't go further on this.


The text follows in the next post. :cry:

Seratend
 
  • #61
second part


Now let’s go back to the projection postulate and time dynamics in the Hilbert space formulation.


nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter.


The reason it is detached from the time and the dynamics is precisely in order to empower it with the magic capability of suspending the dynamics, producing the "measurement" result with such and such probability, then resuming the dynamics. And without ever defining how and when exactly this suspension occurs, what and when restarts it... etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates. By the time a student gets through all of it, his mind will be too numbed, his eyes too glazed, to notice that the emperor wears no trousers.


The projection postulate (PP) is one of the orthodox/Copenhagen postulates that is not well understood by many people, even though it is one of the simplest (if perhaps subtle).

PP is not completely outside QM: it mimics the model of scattering theory. The only thing we have to know about PP is that it describes the result of the unknown interaction between a quantum system (a system with N variables) and a measurement system (a quantum system with maybe an infinite number of quantum variables: 10^23 variables, or more):
-- From an input state of the quantum system, PP gives an output state of the quantum system, as in quantum scattering theory, except that we assume the time of interaction (the state update) is as short as we want and that the interaction may be huge:
|in> --measurement--> |out>
-- Like scattering theory, the projection postulate does not need to know the evolution of the "scattering center" (the measurement system): in scattering theory we often assume a particle with infinite mass, and this is not much different from a heavy measurement system.
-- Like scattering theory, you have a model: the state before the interaction and the state after the interaction. You do not care about what occurs during the interaction. And that is perfect, because you avoid manipulating the incommensurable variables and energies involved in this huge interaction, where QM may become wrong. However, before and after the interaction we are in the supposed validity domain of QM: that's great, and it is exactly what we need for our experiments! Then we apply the Born rule, and we have our first explanation of why the Born rule applies to the PP model: it is an extension of scattering theory rather than an "out of the hat" postulate (see the restatement just below).
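In symbols, that state-update map plus the Born rule reads (this is just the usual textbook notation for the points above, nothing beyond them):

|in>  --measurement-->  |out_k> = P_k |in> / || P_k |in> ||
with probability  p_k = || P_k |in> ||^2 = <in| P_k |in> ,

where the P_k are the orthogonal projectors onto the eigenspaces of the measured observable, and the time during which this update happens is taken as short as we want.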

What I also claim with the PP is that I have a "postulate"/model that gives me the evolution of a quantum system interacting with a huge system, and that I can verify it in everyday quantum experiments.
I am not saying that I have a collapse or a magical system evolution, just what is written in most schoolbooks: I have a model of the time evolution of the system in interaction with the measurement system. Therefore, I also need to describe this evolution on all the possible states of the quantum system.

Now most of the people using the PP always forget the principal thing: the description of the complete modification of the quantum system by the measurement "interaction". The lack of such a complete specification almost always leads to this "collapse" and other such stuff. When we look at the states issued by the measurement apparatus this is not a problem, but the paradoxes (and the questions about the projection postulate) occur for the other states.

For example, when we say that we have an apparatus that measures the |+> spin, it is common to read/see this description:
1) We associate to the apparatus the projector P_|+> = |+><+|. We thus say that we have an apparatus that acts on the entire universe forever (even before its beginning).
2) For a particle in a general state |in>, we find the state after the measurement:
|out>= P_|+>|in>= |+> (we skip the renormalisation)
And most people are happy with this result.
So if we now take a particle |in>=|-> and apply the projector, we get |out>= P_|+>|in>= 0.
Is there any problem?
I say: what is a particle whose state is the null vector? Consider now two particles in the state |in>=|in1>|->, where the measurement apparatus acts on the second particle:
|out>= P_|+>|in> = <+|->|in1>|+> = 0|in1>|+> = 0; the first particle has also disappeared during the measurement of the second particle.

What's wrong: in fact, as in scattering theory, you must describe the measurement-interaction output states for all input states, otherwise you will get into trouble. Classical QM, as well as the field/relativistic QM formalism, does not like the null state as the state of a particle (it is one of the main mathematical reasons why we need to add a void state / void Hilbert space to the states of fields, i.e. <void|void> <> 0).
Therefore, we also have to specify the action of the measurement apparatus on the sub-Hilbert space orthogonal to the measured values!
In our simple case, |out>= P_|->|in>= |somewhere>: we just say, for example, that particles of spin |-> will be stopped by the apparatus or, if we like, will disappear (in that case we need to introduce the creation/annihilation formalism: the jump to the void space). We may also take |somewhere(t)>, to say that after the interaction the particle has a new, non-permanent state: it is only a description of what the apparatus does to particles.

So if we take our 2 particles we have:
|out> = sum_k P_k |in1>|-> = (|+><+| + |somewhere><-|) |in1>|-> = |in1>|somewhere>

Particle 1 doesn't disappear and is unchanged by the measurement (we do not need to introduce the density operator to check it).

Once we begin to define the action of the PP on the complete Hilbert space (or at least the sub-Hilbert space of interest), everything becomes automatic and the magical stuff disappears.
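Here is a small numerical illustration of the two-particle example above (my own sketch; the 3-dimensional single-particle space with basis |+>, |->, |somewhere> is a toy choice, just big enough to hold the extra "somewhere" state):

import numpy as np

# Toy single-particle space: spin-up, spin-down, and a "somewhere" state
# into which the apparatus can dump a rejected particle.
plus      = np.array([1.0, 0.0, 0.0])
minus     = np.array([0.0, 1.0, 0.0])
somewhere = np.array([0.0, 0.0, 1.0])

in1 = np.array([0.6, 0.8, 0.0])          # arbitrary state of spectator particle 1

# Naive prescription: only the projector |+><+| acting on particle 2.
P_plus = np.outer(plus, plus)
naive  = np.kron(np.eye(3), P_plus)

# Completed prescription from the post: |+> is kept, |-> is sent to |somewhere>.
M         = np.outer(plus, plus) + np.outer(somewhere, minus)
completed = np.kron(np.eye(3), M)

psi_in = np.kron(in1, minus)             # the state |in1>|->

print(np.linalg.norm(naive @ psi_in))                            # 0.0: everything "disappears"
print(np.allclose(completed @ psi_in, np.kron(in1, somewhere)))  # True: |in1>|somewhere>

The first printout is the null-vector pathology of the example; the second shows particle 1 surviving untouched once the action of the apparatus on the |-> subspace is specified.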

Even better, you can define exactly, and in a very simple way, where and when the measurement occurs, and describe local measurement apparatuses. Let's go back to our spin measurement apparatus:
Here is the form of a finite-spatial-extension apparatus measuring the |+> spin:
P_|+>=|there><there||+><+| (1)
where <x|there> = There(x), roughly the indicator of the "there" region: There(x) is different from 0 only in a small local zone of the apparatus, and that is where the measurement will take place.

We thus have to specify the action on the other states (the rest of the universe, the rest of the spin states), otherwise P_|+> will make particles "disappear" whenever they are not within the spatial domain of the apparatus. For example:

P_|->=|there><there||somewhere><-|+ |everywhere - there>< everywhere - there|(|+><+| +|-><-|)


And, if we take |in>=|x_in(t)>|+>, a particle moving along the x-axis (with very small spatial extension), we approximately know the measurement time (we do not break the uncertainty principle :): the time of interaction tint is when |x_in(tint)>=|there>.
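Writing the two apparatus operators above with the tensor products explicit (my paraphrase; I write the projector onto the complement region "everywhere - there" as 1 - |there><there|):

P_+ = |there><there| (x) |+><+| ,
P_- = |there><there| (x) |somewhere><-|  +  ( 1 - |there><there| ) (x) ( |+><+| + |-><-| ) ,

so the apparatus acts only where There(x) is non-zero and leaves a particle located elsewhere completely alone.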

So once we take the PP in the right manner (the minimum: only a state evolution), we have all the information needed to describe the evolution of the particle. And it is not hard to see and to describe the dynamical evolution of the system, and to switch the measurement apparatus on and off over time.



Seratend.
 
  • #62
seratend The use of PDE models is interesting. It has already proved its value in many branches of physics. We can use PDEs to model classical QM, as well as the relativistic one; this is not the problem.

Hilbert space is an abstraction of the linear PDEs. There is nothing about it that PDEs don't have, i.e. it doesn't add properties but it subtracts them. That is fine, as long as one understands that every time you make an abstraction, you throw out quite a bit from the more specific model you are abstracting away. Which means you may be abstracting away something which is essential.

In the Hilbert space abstraction we subtract the non-linearity traits of the PDE (or integral equations) modelling tool. That again is perfectly fine, as long as one understands that linear modelling in any domain is generally an approximation for the more detailed or deeper models. Our models are always an approximation to a domain of phenomena, like a Taylor expansion to a function within a proper domain.

While the linear term of a Taylor series is useful, it is by no means the best math for everything and for all times. And surely one would not take the linear term and proclaim that all functions are really linear functions and then, to avoid admitting it ain't so, proceed using piecewise linear approximations for everything. That could be fine too, but it surely cannot be said that it is the best one can do with functions, much less that it is the only thing valid about them.

This kind of megalomania is precisely what the Hilbert space abstraction (the linear approximation) taken as a foundation of the QT has done -- this is how all has to work or it isn't the truest and the deepest physics.

Since the linearity is a very limited model for much of anything, and it certainly was never a perfect approximation for all the phenomena that QT was trying to describe, the linear evolution is amended -- it gets suspended in an ill-defined, slippery manner (allegedly only when "measurement" occurs, whatever that really means), the "state" gets changed to where its linear model shadow, the state vector (or statistical operator), ought to be, then the linear evolution gets resumed.

That too is all fine, as long as one doesn't make such clumsy rigging and ad-hockery into the central truth about the universe. At least one needs to be aware that one is merely rectifying the inadequacies of the linear model while trying to describe phenomena which don't quite fit the first term of the Taylor expansion. And one surely ought not to make the ham-handed ways we use to ram it in into the core principle, and make way too much out of it. There is nothing deep about the projection (collapse) postulate; it's an underhanded, slippery way to acknowledge 'our model doesn't work here'. It is not a virtue, as it is often made to look; it is merely a manifestation of a model defect, of ignorance.

Bell's theorem is precisely analogous to deifying the piecewise linear approximation model and proclaiming that the infinities of its derivatives are a fundamental trait of the modeled phenomenon. Namely, the Bell QM "prediction" is a far stretch of the collapse postulate (which is detached from the dynamics and declared contrary to it and an override of it, instead of being recognized for what it is -- a kludgey doohickey patching over the inadequacies of the linear approximation, the Hilbert space axiomatics, for the dynamics) to the remote non-interacting system, followed by the proclamation that this shows fundamental non-locality. The non-locality was put in by hand through the inappropriate use of the piecewise linear approximation for the dynamics (i.e. through the misuse of the projection postulate of the standard QM axiomatics). It applies the instantaneous projection to the remote system, disregarding the time, the distance and the lack of interaction. It is as hollow as making a big ado about the infinities in the derivatives of piecewise linear approximations. It's a waste of time.
 
Last edited:
  • #63
nightlight said:
Hilbert space is an abstraction of the linear PDEs. There is nothing abot it that PDEs don't have, i.e. it doesn't add properties but it subtracts them.
You have such an ego, it is amazing. You pretend to know everything about both domains, which is simply impossible for a single person!

What about fractal dimensions as computed with wavelets? This is a great achievement of the Hilbert space formalism. The reason Hilbert spaces were discovered was to understand how Fourier could write his meaningless equations and get such powerful results at the end of the day. How come, when PDEs get too complicated to deal with, we reduce them to strange attractors and analyze these with Hilbert space techniques?

It is not because a computation is linear that it is trivial. It makes it doable. If you are smart enough to devise a linear algorithm, you could presumably deal with any problem. The Lie algebra reduces the study of arbitrary mappings to those linear near the identity: does it subtract properties? Yes, global ones. Does it matter? Not so much; they can be dealt with afterwards.

Your objections are formal, and do not bring very much. You are very gifted at hand waving.
 
  • #64
nightlight said:
Bell's theorem is precisely analogous to deifying the piecewise linear approximation model and proclaiming that the infinities of its derivatives are a fundamental trait of the modeled phenomenon. Namely, the Bell QM "prediction" is a far stretch of the collapse postulate (which is detached from the dynamics and declared contrary to it and an override of it, instead of being recognized for what it is -- a kludgey doohickey patching over the inadequacies of the linear approximation, the Hilbert space axiomatics, for the dynamics) to the remote non-interacting system, followed by the proclamation that this shows fundamental non-locality. The non-locality was put in by hand through the inappropriate use of the piecewise linear approximation for the dynamics (i.e. through the misuse of the projection postulate of the standard QM axiomatics). It applies the instantaneous projection to the remote system, disregarding the time, the distance and the lack of interaction. It is as hollow as making a big ado about the infinities in the derivatives of piecewise linear approximations. It's a waste of time.

I personally think that although this idea is interesting, it is very speculative, and you should keep an open mind towards the standard theory too. After all, under reasonable assumptions about the behaviour of photon detectors (namely that they select a random fraction of the to-be-detected photons, with their very measurable efficiency epsilon), we DO find the EPR type correlations, which are hard to reproduce otherwise. So, indeed, not all loopholes are closed, but there is VERY REASONABLE EXPERIMENTAL INDICATION that the EPR predictions are correct (EDIT: what I mean by this is that if it weren't for an application in an EPR experiment but, say, to find the coincidences of the two photons in a PET scanner, you probably wouldn't object to the procedure at all; so if presented with data (clicks in time) from an experiment whose nature you don't know, and people ask you to calculate the original correlation when the efficiencies of the detectors are given, you'd probably calculate without hesitation the things you so strongly object to in the particular case of EPR experiments). Until you can really come up with a detailed and equally practical scheme to obtain these results, you should at least show the humility of considering that result. It is a bit easy to say that you have a much better model of QM, except for those results that don't fit in your conceptual scheme, which have to be totally wrong. Also, I think you underestimate the effort that people have put into this, and they are not considered heretics. But saying that "young students' minds are misled by the priests of standard theory" or the like makes you sound a bit crackpottish, no ? :-p
I repeat, this is not to say that work like you are considering should be neglected; but please understand that it is a difficult road, full of pitfalls, which has been walked before by very bright minds who came back empty-handed. So I'm still convinced that it is a good idea to teach young students the standard approach, because research in this area is still too speculative (and I'd say the same about string theory !).

cheers,
Patrick.
 
Last edited:
  • #65
nightlight Hilbert space is an abstraction of the linear PDEs. There is nothing about it that PDEs don't have.

The reciprocal is also true. I think you really have a problem of belief with mathematical toys: they all say the same thing in different ways. One of the problems with these toys is the domain of validity we assume. The other problem is the advance of mathematical research, which can deliberately restrict the domain of validity (e.g. the Riemann integration of the 19th century versus the general theory of integration of the 20th).
Please do not forget that I have no preference concerning PDEs or Hilbert spaces, because I cannot have an objective demonstration that one formulation gives a domain of solutions larger or smaller than the other.
So you say that PDEs are better than the Hilbert space (I don't know what "better" means in your mind); then please try to give a rigorous mathematical demonstration. I really think you have no idea (or maybe you do not want to have one) of how close they may be.
As with the projection postulate, you give an affirmation but not a piece of demonstration, because if you give a demonstration, you have to give its domain of validity. Then you may see that your theorem depends on the assumed domain of validity of PDEs or Hilbert spaces (like restricting the domain of validity of integrals to the 19th century).
You are like some people saying that probability has nothing to do with integration theory.

I like PDEs and Hilbert spaces and probability; I always try to see what IS different and what SEEMS different, and thus I pick what is the more adequate to solve a problem.

nightlight
i.e. it doesn't add properties but it subtracts them. That is fine, as long as one understands that every time you make an abstraction, you throw out quite a bit from the more specific model you are abstracting away. Which means you may be abstracting away something which is essential.

In the Hilbert space abstraction we subtract the non-linearity traits of the PDE (or integral equations) modelling tool. That again is perfectly fine, as long as one understands that linear modelling in any domain is generally an approximation for the more detailed or deeper models. Our models are always an approximation to a domain of phenomena, like a Taylor expansion to a function within a proper domain.

I think you are blocked on linearity, like a schoolchild with addition and multiplication. He may think that addition has nothing to do with multiplication until discovering that multiplication is only repeated addition.

You seem to view Hilbert space linearity like the first newcomers to the quantum area: you are trying to see what is not there.
The linearity of Hilbert spaces just allows one to say that ANY UNTHINKABLE vector MAY describe a system, and that's all:
We may choose the Hilbert space we want: L2(R,dx), L1(R,dx), any Hilbert space; it seems to have no importance, it is only a matter of representation. So how can you say that linearity "doesn't add properties but subtracts them"? I say linearity says NOTHING. It says just what is evident: all solutions are possible: they belong to the Hilbert space.

You say that the linearity of Hilbert spaces imposes more restrictions than the use of PDEs, while you use this "awful linearity" without problem when you write your PDEs in an abstract Euclidean space. I assume that you know that a Euclidean space is only a real Hilbert space. How do you manage that kind of linearity?


nightlight
While the linear term of a Taylor series is useful, it is by no means the best math for everything and for all times. And surely one would not take the linear term and proclaim that all functions are really linear functions and then, to avoid admitting it ain't so, proceed using piecewise linear approximations for everything. That could be fine too, but it surely cannot be said that it is the best one can do with functions, much less that it is the only thing valid about them.

Please try to define what "by no means the best math for everything and for all times" means. Such an assertion is very broad and may lead to the conclusion that the definitive use of a mathematical toy is fixed once and for all by the current knowledge we have about it.

Once again, you are restricting the domain of validity of the "Taylor" mathematical toy (as well as of Hilbert spaces). You are like a 19th century mathematician discovering the meaning of continuity. You think, with your implicit restricted domain of validity, that Taylor series only apply to analytic functions. Try to expand your view; like the PDE toy, think for example of Taylor series in a different topological space with a different continuity. Think for example of the Stone-Weierstrass theorem, and of the notion of complete spaces.

Like Hilbert spaces, PDEs, probability theory, etc., Taylor series are only a toy, with advantages and disadvantages that can evolve with advances in mathematics.

nightlight

This kind of megalomania is precisely what the Hilbert space abstraction (the linear approximation) taken as a foundation of the QT has done -- this is how all has to work or it isn't the truest and the deepest physics.

Since the linearity is a very limited model for much of anything, and it certainly was never a perfect approximation for all the phenomena that QT was trying to describe, the linear evolution is amended -- it gets suspended in an ill-defined, slippery manner (allegedly only when "measurement" occurs, whatever that really means), the "state" gets changed to where its linear model shadow, the state vector (or statistical operator), ought to be, then the linear evolution gets resumed.

I think you mix the linearity of the operator space with the linearity of Hilbert spaces. How can you manage such a mix (an operator space is not a Hilbert space)? I really think you need to have a look at papers like those of Paul J. Werbos (arxiv). Such papers may help you understand better the difference between the linearity of a vector space and the non-linearity of an operator space. Maybe his papers will help you to better understand how the Hilbert space toys are connected to the ODE and PDE toys.

You even mix linearity with unitary evolution! How can you really speak about measurement if you do not seem to see the difference?
Look: unitary evolution assumes the continuity of evolution, like the assumption of continuity in PDEs or ODEs. It is not different. You can suppress this requirement if you want, as in PDEs or ODEs (with some limit conditions); there is no problem, only short-sightedness.
Suppressing this requirement is equivalent to considering the problem of the domain of definition of unbounded operators in Hilbert spaces and the type of discontinuities they have (i.e. the reduction of the Hilbert space where the particle stays).



nightlight

That too is all fine, as long as one doesn't make such clumsy rigging and ad-hockery into the central truth about the universe. At least one needs to be aware that one is merely rectifying the inadequacies of the linear model while trying to describe phenomena which don't quite fit the first term of the Taylor expansion. And one surely ought not to make the ham-handed ways we use to ram it in into the core principle, and make way too much out of it. There is nothing deep about the projection (collapse) postulate; it's an underhanded, slippery way to acknowledge 'our model doesn't work here'. It is not a virtue, as it is often made to look; it is merely a manifestation of a model defect, of ignorance. …


Once again, I think you do not understand the projection postulate, or you attach too many things to a simple state evolution (see my preceding post).
Maybe it is the term "postulate" that you do not like, or maybe it is the fact that an apparatus not fully described by the theory is said to give the results of this theory? Tell me, what theory does not use such a trick to describe results that are finally seen by a human?
The main difference comes from the fact that quantum theory has written this fact down explicitly, so our attention is called to it, and that is good: we must not forget that we are far from having described how everything works, and whether that is even possible without requiring, for example, black boxes.

Please tell us what you really understand by the projection postulate! It's a good way to improve our knowledge and maybe to detect some errors.

Seratend
 
  • #66
:smile: Continuation:

nightlight
Bell's theorem is precisely analogous to deifying the piecewise linear approximation model and proclaiming that the infinities of its derivatives are a fundamental trait of the modeled phenomenon. Namely, the Bell QM "prediction" is a far stretch of the collapse postulate (which is detached from the dynamics and declared contrary to it and an override of it, instead of being recognized for what it is -- a kludgey doohickey patching over the inadequacies of the linear approximation, the Hilbert space axiomatics, for the dynamics) to the remote non-interacting system, followed by the proclamation that this shows fundamental non-locality. The non-locality was put in by hand through the inappropriate use of the piecewise linear approximation for the dynamics (i.e. through the misuse of the projection postulate of the standard QM axiomatics). It applies the instantaneous projection to the remote system, disregarding the time, the distance and the lack of interaction. It is as hollow as making a big ado about the infinities in the derivatives of piecewise linear approximations. It's a waste of time.

I may be repeating vanesch, but what do you mean by "collapse" or "collapse postulate"? We only have a "projection postulate" - see my previous post. We may even get a unitary-evolution model of the particles in the Bell-type experiment if we want! I really would like to know where you think there is a "fundamental" problem with this experiment/theorem.

As said before, I think you mix several types of linearity in your discussion and you confuse linearity with unitary evolution.



You also draw too many deductions from what Bell's theorem says: two calculational models of a 2-particle system give 2 different values (I write the two numbers out below). Then we have experiments, with certain assumptions as always in physics, which also give the known results. That's all. So where is the problem?

After that, you have the interpretations (yours, and everybody else's) and the corrections of the model and theorem: we have had a lot of them since 1964. And one of the interpretations is centred on the non-locality of variables. Ok, this interpretation of the theorem disturbs you and me, that's all (not the linearity of Hilbert spaces or anything else):

The non-locality is not given by the theorem itself; it is interpreted from the single "classical model" used in the theorem (its domain of validity): this model is incompatible with the model used in the QM formalism.
Bell, in his initial paper, gave his definition of "locality" and therefore one of his interpretations of the theorem: "the vital assumption is that the result B for particle 2 does not depend on the setting a of the magnet for particle 1, nor A on b". But we can also easily prove that some "non-local" variables may satisfy the Bell inequality if we want (e.g. Accardi 2000).
So the real question, I think, is: does "the classical model of Bell" contain all possible local-variable models? Rather than: does the theorem imply that the QM formulation is not compatible with local hidden variables?
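For reference, the two numbers in question, in the standard CHSH form (with E(a,b) the correlation of the +/-1 outcomes at analyzer settings a and b):

|S| = | E(a,b) - E(a,b') + E(a',b) + E(a',b') |  <=  2        (any local hidden-variable model),

E(a,b) = -cos(a - b)   =>   |S| = 2*sqrt(2)                   (singlet state, at a = 0, a' = pi/2, b = pi/4, b' = 3pi/4).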

Concerning the relevance of the experiments: a lot of work has been done since 1964. And we must say, yes, we may have some errors and explicit filters and assumptions, etc., in the experiments, but the numerical results of the experiments just give the QM expectation value with good confidence, and only then the breaking of the Bell inequality. So all of the errors must comply gently with the QM model: and that is mainly what is sought in physics (or at least in technology): an experimental ("real") numerical value that complies with the abstract model (or the opposite :biggrin: ).

Seratend :bugeye:
 
  • #67
vanesch VERY REASONABLE EXPERIMENTAL INDICATION that the EPR predictions are correct

I don't find it a "reasonable experimental indication" at all.

The presence of the cos(2a) modulation on top of a larger "non-ideality" is a trait shared with the most natural classical physics of this setup. And that is all that gets tested.

If you could show the experimental results to any physicist, from Malus through Lorenz, they wouldn't be surprised by the data in the least. Maxwell could probably have written down a fairly accurate model for the actual data, since classical EM fields have all the main "non-classical" traits of the quantum amplitudes, including entanglement (see papers by Robert Spreeuw, and http://remote.science.uva.nl/~spreeuw/lop.htm ).

So this argument is barking up the wrong tree. There is nothing distinguishing about cos(2a) modulated correlation. When you enhance the simple classical models with a detector model, which models detection noise (via the Stochastic Electrodynamics, the classical EM model with ZPF boundary conditions) along with the detector sensitivity curves and perform the same kind of subtractions and data adjustments as the experimenters do, you get exactly what the experimenters have obtained (see numerous Marshall & Santos papers on this; they've been battling this battle for the last 25 years).

What distinguishes the alleged QM "prediction" from the classical one is the claim that such a setup can produce a pure, unmodulated cos(2a). That is a conjecture, not a prediction. Namely, the 2x2 space model gives you only a hint of what kind of modulation to expect and says nothing about the orbital degrees of freedom, much less about the dynamics of the detector trigger. To have a proper prediction, one needs error bounds on the prediction (this is not a sampling error but the prediction limits), so that one can say e.g. the data which gets measured will be in [a,b] 95% of the time (in the limit of an infinite sample). If I were in a hurry to make a prediction with error bounds, such that I had to put my money on the prediction, I would at least include the 'loss projectors' onto the "missed_aperture" and "failed_to_trigger" ... subspaces, based on the known detector efficiencies, and thus predict a wide, modulated cos(2a) correlation, indistinguishable from the classical models.
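As a crude illustration of how much the loss alone widens the margin (my own back-of-envelope toy, not a result from the post or from any of the papers discussed here): suppose each arm detects independently with efficiency eta, non-detections are scored as 0, and nothing is subtracted. Then every term of the CHSH combination gets multiplied by eta^2, so the raw value over all emitted pairs is

|S_raw| = eta^2 * 2*sqrt(2) ,

which already falls below the classical bound of 2 as soon as eta < 2^(-1/4) ~ 0.84; more careful treatments of the undetected events land in the same neighbourhood.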

In other words, to have a QM prediction which excludes all classical models, one needs to examine any candidate setup through the natural classical models, then deduce a trait of the QM model which cannot be reproduced by the natural classical models, and then, if the prediction passes this basic preliminary criterion, analyse to what degree any other type of classicality can be excluded by the prediction.

In the case of the Bell's setup, one would have immediately found that the classical EM predicts a modulated cos(2a) correlation, so one would have to conclude that the prediction of the 2x2 model (which in order to have error bounds has to include the loss projectors to cover for all the skipped over and thus unknown details) is not sharp enough to draw the line between the QM and the classical models.

One would then evaluate the sharpest cos(2a) modulation that the natural classical models produce (which amounts to a 1/2-photon equivalent of noise added to the QM model). With this least of the QM-EM distinguishability thresholds at hand, one can now focus the QM modelling on this distinction line. One would analyse the orbital propagation, the apertures, the unsharpness of the source particle number, etc., using of course any relevant empirical data available (such as detector response curves), looking to shrink the wide "loss projector" spread of the toy model, so that the more accurate QM prediction, which includes its error margin, now falls beyond the classical line. Then you would have a proper QM prediction that excludes at least the straightforward classical models. At this point, it would become clear that, with the present knowledge and empirical data on apparatus properties, the best proper prediction of QM is indistinguishable from the prediction of the natural classical models.

One has thus given up on the proper QM prediction, leaving it as a conjecture, something the experiment will have to resolve. And it's here that the Bell inequality would come in, to exclude the artificial classical models, provided the experiments show that the actual data violates the inequality.

The theory (aided by empirical apparatus data) does not have a prediction which distinguishes even the natural classical models from the QM model for this setup. The Bell inequality itself isn't a QM prediction; it is a classical prediction. And the error margins of the toy model are too large to make a distinction.

You may reply here that you don't need any such procedure, when QM already has projection postulate which predicts the inequality violation.

Ok, so, in order to distinguish the prediction from classical models, tell me: what error margin does the postulate give for the prediction? None that I know of (ignoring the finite-sample error, which is implicitly understood and can be made as small as necessary).

Does that absence of any mention of an error margin mean that the error margin is 0 percent? If it does, then the postulate is plainly falsified by any measurement. Clearly, there is an implicit understanding here that if you happen to require an error margin for any particular setup, you will have to evaluate it from the specific setup and a model (or use empirical data) for the detectors. Since we do need a good error margin here to be distinguishable, even if only to separate from the weaker threshold of the natural models, we can't avoid the kind of estimation sketched earlier.

You might try refining the response saying: the absence of mention of error margin means that the "ideal system" has error margin 0. Which axiom defines the "ideal system"? Is that any system that one can construct in the Hilbert space? If so, why bother with all the arguments about the shaky projection postulate, when one can simply construct a non-local Hamiltonian that no local model can reproduce?

what I mean by this is that if it weren't for an application in an EPR experiment, but say, to find the coincidences of the two photons in a PET scanner, you wouldn't probably object to the procedure at all ; ...

Why would I object? For the prediction here, distinguishability from classical models is irrelevant; it is not a requirement. Therefore the constraints on the error margins of the prediction are much weaker. Why would the experiment designer care here whether any classical model might be able to replicate the QM prediction? He has entirely different requirements. He might even bypass much of the error-margin computations and simply let the setup itself "compute" what they are. For him it may not matter whether he has a genuine prediction or just a heuristic toy model to guide the trial and error -- he is not asserting a theorem (as the Bell QM "prediction" is often labeled) claiming that there is a prediction with such and such error margins.

Until you really can come up with a detailled and equally practical scheme to obtain these results, you should at least show the humility of considering that result.

I did put quite a bit of thought and effort into this problem over the years. And for several years I believed, with the utmost humility and the highest respect, the conventional story line.

It was only after setting foot in a real-life Quantum Optics lab that I realized that the conventional presentations were largely misleading and continue to be a waste of physics students' time and creativity. It would be better for almost everyone if their energies were redirected away from this tar pit.

But saying that "young student's minds are misled by the priests of standard theory" or the like make you sound a bit crackpottish, no ?

I didn't 'invent' the "priesthood" label. I first heard it in this context from Trevor Marshall, who, if anyone, knows what he is talking about.

So I'm still convinced that it is a good idea to teach young students the standard approach

Well, yes, the techniques have to be taught. But if I were to teach my own kids, I would tell them to forget the projection postulate as an absolute rule and to take it as a useful but limited approximation; and, as a prime example of its misuse and of the pitfalls to watch for, I would show them Bell's theorem.
 
Last edited by a moderator:
  • #68
nightlight said:
If you could show the experimental results to any physicist, from Malus through Lorenz, they wouldn't be surprised by the data in the least. Maxwell could probably have written down a fairly accurate model for the actual data, since classical EM fields have all the main "non-classical" traits of the quantum amplitudes, including entanglement (see papers by Robert Spreeuw, and http://remote.science.uva.nl/~spreeuw/lop.htm ).

I think I'm beginning to see what you are alluding to. Correct me if I'm wrong. You consider the parametric down conversion as a classical process, out of which come two continuous EM waves, which are then "photonized" only locally in the detector, is that it ?

EDIT: have a look at quant-ph/9810035



cheers,
Patrick.
 
Last edited by a moderator:
  • #69
nightlight said:
vanesch VERY REASONABLE EXPERIMENTAL INDICATION that the EPR predictions are correct

I don't find it a "reasonable experimental indication" at all.

The presence of the cos(2a) modulation on top of a larger "non-ideality" is a trait shared with the most natural classical physics of this setup. And that is all that gets tested.

I'm probably very naive, but let's do a very simple calculation. We assume a perfect singlet state from the start (psi = 1/sqrt(2) (|+>|-> - |->|+>)). It is probably an idealisation, and there might be some pollutant of |+>|+> states, but let us assume that it is negligible.
Let us assume we have an angle th between the polarizers, and hence a quantum prediction of correlation C_ideal. So C_ideal is the fraction of hit-hit combinations within a certain coincidence time (say, 50 ns) on both detectors and is given, by the simple QM model, by C_ideal = sin(th/2)^2.

Now, assume efficiencies e1 and e2 for both photon detectors (we could take them to be equal as a simplifying assumption).

Take detector 1 as the "master" detector. The efficiency e1 is just a limitation on the number of events (it is as if the intensity were multiplied by e1), so we do not even have to take it into account. Each time detector 1 has seen a photon, we ASSUME - granted - that there has been a photon in branch 2 of the experiment, and that two things can happen: either the second photon was in the wrong polarization state, or it was in the right one. The probability, according to QM, of being in the right polarization state is C_ideal. For each of those, the probability of actually being detected is e2. So I'd guess the quantum prediction for coincidences in channel 2, given that channel 1 triggered, to be equal to e2 x C_ideal.
Vice versa, the prediction for coincidences in channel 1, when channel 2 triggered, is equal to e1 x C_ideal.
If these two rates of coincidence are verified, I'd take it as experimentally established that C_ideal is a correct theoretical prediction.
Here, e1 and e2 can be very small, it doesn't matter, because ANY original coincidence rate undergoes the same treatment. Also I didn't take into account spurious coincidences, but they are related to overall flux, so they can be experimentally easily distinguished.
I would think that this is how people in quantum optics labs do their thing, no ?

cheers,
Patrick.

EDIT: I realize that there is of course something else that can seriously disturb the measurement, which is a proportional rate of uncorrelated photons from the downconversion xtal. But that is also reasonable to get rid of.

Indeed, we can assume that the photons falling onto detector 1 which are not correlated with the second branch can be found by removing the second polarizer. So if we have a rate I1 of uncorrelated photons and a rate I2 of correlated ones (the ones we are modelling), we can simply remove polarizer 2 and find the number of coincidences we obtain that way. It will equal e2 x I2. So for a total rate (I1 + I2) on detector 1, this means e2 x I2 corresponds to 100% correlation, hence we have to multiply our original rate e2 C_ideal by I2/(I1+I2) as a prediction. Even this will have to be corrected if there is a loss in the polarizer, which could be introduced as an "efficiency" of the polarizer. All this is standard experimental technique.
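A quick Monte Carlo of the first part of this (my own sketch; the angle and the efficiencies below are made-up numbers, and uncorrelated photons and accidentals are ignored):

import numpy as np

rng = np.random.default_rng(0)
N   = 200_000            # emitted singlet pairs
th  = np.deg2rad(60)     # angle between the two analyzers
e1, e2 = 0.7, 0.6        # detector efficiencies (arbitrary example values)

C_ideal = np.sin(th / 2) ** 2          # P(particle 2 passes | particle 1 passed)

hit1 = rng.random(N) < 0.5                        # particle 1 passes its analyzer
p2   = np.where(hit1, C_ideal, 1.0 - C_ideal)     # conditional pass probability for 2
hit2 = rng.random(N) < p2

det1 = hit1 & (rng.random(N) < e1)     # detector 1 fires ("fair sampling" of the passes)
det2 = hit2 & (rng.random(N) < e2)     # detector 2 fires

coinc_per_click1 = det2[det1].mean()
print(f"coincidences per detector-1 click: {coinc_per_click1:.3f}")
print(f"e2 * C_ideal                     : {e2 * C_ideal:.3f}")

The two printed numbers agree within statistics, which is just the e2 x C_ideal claim above; whether the "fair sampling" line in the middle is an innocent assumption is of course exactly what the whole argument is about.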
 
Last edited:
  • #70
vanesch You consider the parametric down conversion as a classical process, out of which come two continuous EM waves, which are then "photonized" only locally in the detector, is that it ?

The classicality of PDC is not my conjecture but a mathematical result of the same kind as the similar 1963 Sudarshan-Glauber result (cited earlier) showing the same for thermal and laser sources -- there is no multipoint coincidence setup for these sources that can yield correlations distinguishable from a classical EM field interacting with a square-law detector (which in turn can be toy-modelled as the ionisation of a Schrodinger atom). Marshall, Santos and their collaborators/disciples have shown this equivalence for type I and type II PDC sources. Here are just a couple (among over a dozen) of their papers on this question:
Trevor W. Marshall Do we need photons in parametric down conversion?

The phenomenon of parametric down conversion from the vacuum may be understood as a process in classical electrodynamics, in which a nonlinear crystal couples the modes of the pumping field with those of the zeropoint, or "vacuum" field. This is an entirely local theory of the phenomenon, in contrast with the presently accepted nonlocal theory. The new theory predicts a hitherto unsuspected phenomenon - parametric up conversion from the vacuum.
Alberto Casado, Trevor W. Marshall, Emilio Santos
Type-II parametric down conversion in the Wigner-function formalism. Entanglement and Bell's inequalities

We continue the analysis of our previous articles which were devoted to type-I parametric down conversion, the extension to type-II being straightforward. We show that entanglement, in the Wigner representation, is just a correlation that involves both signal and vacuum fluctuations. An analysis of the detection process opens the way to a complete description of parametric down conversion in terms of pure Maxwell electromagnetic waves.

The essential new ingredient in their approach is the use of the vacuum fluctuations (they call it the Zero Point Field, ZPF, when referring to it as the special initial & boundary conditions of the classical EM field; the ZPF distribution is not some kind of adjustable fudge factor, since it is uniquely determined by the requirement of Lorentz invariance) and its relation to detection. That addition also provides a more concrete physical interpretation of the somewhat abstract 1963 results for thermal and laser sources.

Namely, the ZPF background (which amounts to an equivalent of 1/2 photon per mode, and which the "normal ordering" of operators in QED, used for computing the multipoint correlations, discards from the expressions) allows one to have, among other effects, a sub-ZPF EM wave, with energy below the background (the result of a superposition with the source field, e.g. on the beam splitter), which carries the phase info of the original wave and can interfere with it, yet is normally not detected (since it falls below the detector's dark-current trigger cutoff, which is calibrated to register no events for the vacuum fluctuations alone; this is normally accomplished by a combination of adjustments to the detector's sensitivity and the post-detection subtraction of the background rate).

This sub-ZPF component (a kind of "dark wave"; I recall Marshall using the term "ghost" or "ghost field" for it) behaves formally as a negative probability in the Wigner joint-distribution formalism (it has been known for a long time that allowing for negative probabilities yields, at least formally, classical-like joint distributions; the sub-ZPF component provides a simple and entirely non-mysterious interpretation of these negative probabilities). It is conventionally undetectable (in the coincidence experiments it gets calibrated & subtracted away to match the correlations computed using the normal-ordering rule; it shows up as a negative dip below the baseline rate=0 in some more detailed data presentations), yet it travels the required path and picks up all the inserted phase shifts along the way, so the full interference behavior is preserved, while any detection on each path shows just one "photon" (the averaging over the ZPF and source distributions smooths out the effect, matching quantitatively the photon anti-correlation phenomenon which is often mentioned as demonstrating the "particle" aspect of photons).

{ Note that ZPF alone plus classical point particles cannot reproduce the Schroedinger or Dirac equations (that goal used to be the "holy grail" in the early years of Stochastic ED, in the 1960s & 1970s). Although there were some limited successes over the years, Marshall now admits it can't work; this realization has nudged the approach toward Barut's self-field ED, except with the advantage of the ZPF tool.}
 
Last edited:
