Bell's theorem and Harrison's (2006) inequality

  • #36
BTW, I also disagree with the claim that GHZ-type tests can somehow also be explained by such a semiclassical stochastic theory. Paul Kinsler described, ten years ago, where the GHZ and semiclassical stochastic predictions would deviate, namely in the third-order correlations [1].

Thus, considering that many GHZ-type measurements are already venturing into this regime, I don't see the validity of the claim that stochastic theories are consistent with GHZ experiments.

Zz.

[1] P. Kinsler, Phys. Rev. A 53, 2000 (1996).
 
  • #37
If I may join in the discussion...

Some arguments not to put the "local realist" viewpoint too soon down:

I have to agree with Maaneli and others here that all "all-optical" Bell tests (to my knowledge, all those performed to date) are also perfectly described by stochastic optics. This follows from a theorem that Santos proved, which shows that any parametric down conversion of 1 photon into 2, combined with photon detectors that do not have both negligible dark current and a quantum efficiency above 87%, will give identical results for the quantum prediction and the stochastic-ED prediction.

Moreover, Nelson showed already in the '60s that stochastic electrodynamics also gives rise to the correct black-body curve without the [tex]h\nu[/tex] quantization.

As a reminder: stochastic electrodynamics (SED) is simply standard Maxwell electrodynamics, with the additional postulate that space is filled with stochastic radiation carrying an energy of [tex] h \nu/2 [/tex] per mode.
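
To spell out what that spectrum amounts to (just standard mode counting, nothing specific to SED or to this thread): with an energy [tex]h\nu/2[/tex] per normal mode and a mode density of [tex]8\pi\nu^2/c^3[/tex] per unit volume and unit frequency, the zero-point spectral energy density is

[tex]\rho_0(\nu)\,d\nu = \frac{8\pi\nu^2}{c^3}\cdot\frac{h\nu}{2}\,d\nu = \frac{4\pi h\nu^3}{c^3}\,d\nu ,[/tex]

and it is precisely the [tex]\nu^3[/tex] dependence that makes this spectrum Lorentz invariant, so it can be postulated as a universal background without singling out a preferred frame.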

The 87% photo-efficiency limit on photodetectors is a fundamental limit in SED, in the same way that the Carnot cycle is a fundamental limit on heat engines; proposing experiments with >87% quantum-efficiency detectors is, to SED, the equivalent of the thermodynamic nonsense of proposing experiments with thermal machines more efficient than the Carnot cycle. So one of SED's predictions is that it is impossible to construct a polarizing photodetector that is more than 87% efficient (at negligible dark current).
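
For comparison, the analogous number on the quantum side (this is the standard Garg-Mermin bound, not Santos's own derivation): in a CHSH experiment with maximally entangled photons, symmetric detector efficiency [tex]\eta[/tex] and no fair-sampling assumption, discarding the non-detections only stops mattering when

[tex]\eta > \frac{2}{1+\sqrt{2}} \approx 0.83 ,[/tex]

and Eberhard later showed this can be pushed down to 2/3 with non-maximally entangled states. So the 87% figure is SED's version of a threshold whose existence the quantum-side analysis recognises as well.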

There are also indications that this same stochastic radiation exactly counters the radiation loss due to acceleration in the (classical) hydrogen atom for specific states of that classical atom, giving rise to exactly the spectrum of hydrogen. See for instance Phys. Rev. E 69, 016601 (2004).

Barut showed that if we take the electron to be a classical field described by the Dirac equation, and add SED for the EM part, then several standard "QED" results (including the Lamb shift and the gyromagnetic ratio) can be matched to at least 6th order in perturbation theory.

What does all this mean ? Difficult to say. It seems that simply adding a universal noise with said spectrum to the EM field gives us a classical field theory which can reproduce, in all cases where it has been worked out, the observed quantum results (results which were often considered typically quantum effects and historically cited as proof of quantum theory). As far as I know, no SED results are available that contradict quantum-theoretical predictions that have been realistically verified - although they are of course in contradiction with ideal quantum predictions (just as a theory using heat engines with efficiency beyond the Carnot cycle - say, "ideal heat engines" in a certain theory - would be able to make predictions which are not in agreement with standard thermodynamics).

Bell's inequalities and all that are to SED what violations of the second law are to thermodynamics: if you are allowed heat engines going beyond the Carnot cycle, then yes, you can find "ideal" situations where you violate the second law; and claims of "corrected" violations of Bell's inequalities sound, to SED proponents, like claims of experimental observation of a violation of the second law of thermodynamics, using "corrected" heat flows after correcting for the finite efficiency of your heat engine.

So it is not surprising that both communities (with the QM community being orders of magnitude larger than the SED community) are having a dialogue of the deaf: for the QM people, the detector corrections are a minor experimental matter, while for the SED people they are as fundamental as the Carnot efficiency, and "correcting" for them is an evident error.
Inasmuch as QM people say that they simply have to wait for a more efficient detector, SED people find that like waiting for the invention of a heat engine that exceeds the Carnot efficiency.

That said - and this is probably ZapperZ's point - only a few systems have been studied (Bell experimental setups with parametric down conversion, the hydrogen atom, the black-body spectrum, the Lamb shift, the gyromagnetic ratio...), while quantum theory can describe, without much difficulty, just about every system (apart from gravitational ones) we encounter.
Is this due to the very small community of people working on it, or are there more fundamental reasons why there is no more progress on the classical side after these nevertheless spectacular results ?

I think the real challenge is quantum chemistry and solid state physics. If these classical techniques can produce good results in those domains, then that would be a genuine message. On the other hand, the passive hostility of the scientific community towards this kind of work will make it hard to come up with serious results in these domains, where quick and hard results matter more than philosophical stances.
After all, chances are that the programme fails (simply because nature is not that way), and if it works out, chances are that it doesn't do any better than quantum theory - it would only be a conceptual and philosophical advance, to know that there are now two possible descriptions of nature. But it might also be possible that such an approach, if it works, produces better algorithms. As of today, however, it seems like career suicide to take that path.

Nevertheless, I think it would be exciting to see how far this model can be pushed: either it will run into fundamental trouble at some point, in which case we know it is not the way to go and can forget about it once and for all, or it will end up "eating" the entire scope of quantum theory, in which case that would be an amazing result. It would then be even nicer to understand the relationship between the two. On a personal note, I fail to understand how these explorations are frowned upon more than all the speculative quantum-gravity stuff...

Of course, I agree with ZapperZ's remarks (and they give, by far, the current advantage to the quantum side): quantum theory describes about everything we have around us (apart from gravity), including quantum optics, while SED is currently limited to optical phenomena, leaving aside, apart from a few simple examples, light-matter interactions.
But who is to say that there is no extension of SED which mimics quantum theory correctly also for light-matter interactions, or for matter altogether ? I think the question is open.

However, if we are going to use all-optical experiments to give that "ultimate proof" of quantum theory, by trying to prove violations of Bell inequalities, then the SED results are significant: they are indistinguishable from the quantum predictions under realistic experimental conditions. If our argument is that quantum theory also describes *matter* correctly, then we should seek that ultimate proof with material systems rather than with all-optical systems. After all, with matter systems we don't seem to have that detection-efficiency problem (although we do seem to have another problem: producing an entangled state of sufficient purity, and avoiding decoherence until measurement).

For most physicists, however, what counts is what works, and today it has to be said that the working scope of quantum theory is vastly larger than that of SED.

What I find most interesting, however, is the explanatory power of SED within its limited scope, and how it wipes off the table all those silly textbook claims about the absolute necessity of quantum theory for explaining this or that (photo-electric effect, black-body radiation, stability of the hydrogen atom, spectrum of hydrogen, Lamb shift, Bell tests...). It is almost as if quantum theory needs a proof of necessity beyond its predictive power (nobody developed proofs of the necessity of Newtonian mechanics - people were just happy that it worked). Quantum theory is interesting because it makes a lot of good predictions. That's all it needs to do. It sometimes feels as if Bell and other tests are part of a credo of exclusive belief in quantum theory. That's the end of science. Even if it proves a wrong path and/or a dead end, SED is here to remind us of the fragility of certain absolute claims, and as such it already has its main use.
 
  • #38
ZapperZ said:
BTW, I also disagree with the claim that GHZ-type tests can somehow also be explained by such a semiclassical stochastic theory. Paul Kinsler described, ten years ago, where the GHZ and semiclassical stochastic predictions would deviate, namely in the third-order correlations [1].

Thus, considering that many GHZ-type measurements are already venturing into this regime, I don't see the validity of the claim that stochastic theories are consistent with GHZ experiments.

Zz.

[1] P. Kinsler, Phys. Rev. A 53, 2000 (1996).

I was aware of Kinsler's theoretical result, but I wasn't aware of any experimental confirmation of it...
 
  • #39
There is something that you didn't consider, vanesch.

If SED is correct, then QM can't be. They differ fundamentally in the way they describe nature. So if one assumes SED is valid, then QM can't be a valid description. This then leaves a very LARGE 20,000-pound gorilla in the middle of the room that one has to explain away, given that QM has worked, and worked so well, for 100 years.

To me, many things can describe the outcome of "first-order" observations. The photoelectric effect, blackbody radiation, etc. are what I consider first-order observations. It is in the details that you separate the wheat from the chaff. When you start looking more closely at the photoemission process, and start pushing the envelope into more sensitive regimes, that is where you start seeing the deviations. No SED treatment has ever attempted to match the results of ARPES, RPES, or even multiphoton photoemission processes. In this day and age, the photoelectric effect is chicken feed. Matching it is like saying one can do simple trig. That is why we have more demanding EPR-type experiments such as the multipartite measurements. The Aspect-type results are no longer sufficient.

The higher-order, more detailed/sensitive tests remain the realm of QM. All SED has done is match the superficial nature of those tests, not the detailed ones. It is difficult for me to give much credence to something like that.

Zz.
 
  • #40
ZapperZ said:
If SED is correct, then QM can't be. They differ fundamentally in the way they describe nature. So if one assumes SED is valid, then QM can't be a valid description.

I don't know. Probably they are both "wrong" at some point, no ?
I regard every scientific theory as valid only up to a point - which doesn't stop me from exploring its associated toy universe (that is, the imaginary universe in which it is exactly valid) - but it is my belief that every so many years/centuries/millennia/whatever, we will fundamentally change our paradigm (until we get tired of it, experiments become impossible, or we don't exist anymore), declaring the previous one "wrong". I'm not talking about details, but about fundamentally changing the basic rules of the game. This is a simple extrapolation of what has happened ever since we discovered the scientific method.

This then leaves a very LARGE 20,000-pound gorilla in the middle of the room that one has to explain away, given that QM has worked, and worked so well, for 100 years.

So what ? Newtonian physics worked for over 200 years.
But I thought I said that: clearly, QM is by far the more successful theory if you look at its scope.

To me, many things can describe the outcome of "first-order" observations. The photoelectric effect, blackbody radiation, etc. are what I consider first-order observations. It is in the details that you separate the wheat from the chaff. When you start looking more closely at the photoemission process, and start pushing the envelope into more sensitive regimes, that is where you start seeing the deviations. No SED treatment has ever attempted to match the results of ARPES, RPES, or even multiphoton photoemission processes. In this day and age, the photoelectric effect is chicken feed. Matching it is like saying one can do simple trig. That is why we have more demanding EPR-type experiments such as the multipartite measurements. The Aspect-type results are no longer sufficient.

I know. But we don't even understand why SED agrees with QM "on the chicken feed". Once, this "chicken feed" was used as an "irrefutable argument for the validity of QM", and this is still what you find in many textbooks; *this* is what I said: the main utility of SED, as of today, is to show that what was once considered a "pure and genuine quantum effect" could in fact also be explained by a classical field theory with one single addition: a noise spectrum.
That those historically important examples (which are still cited in many introductory courses, and which many people, because of that, still take for irrefutable proofs of quantum theory) are now considered "chicken feed" is a bit easy. Aspect was revolutionary, and now it too is chicken feed.
I think it would be interesting to see why and how SED is in experimental agreement on these examples (and maybe many more).

The higher-order, more detailed/sensitive tests remain the realm of QM. All SED has done is match the superficial nature of those tests, not the detailed ones. It is difficult for me to give much credence to something like that.

The point is not credence. I don't think that one should believe in one or the other (that's my main point: when you believe, you're not doing science anymore). Of course, quantum theory is much further developed, much more useful as of today, knows no experimental refutation, etc... As I said, quantum theory is useful (and should be studied) for that reason alone (whether it is ultimately "right" or not: it is very useful). SED is light-years behind. But SED has been worked on by what ? 20 people ? How many people, and how much funding went into QM ?
Don't get me wrong, I don't think that any amount of funding can make a totally ill founded theory work as well as QM. But maybe SED would have had an equally successful development if it would have received as much attention. So I think it is not totally fair to ask of SED to give you the same level of actuality and sophistication as QM has today, given the hugely different amounts of means that were invested in both paths.

So, how could both then be "right" ?
Maybe there is a link between both, that QM and (a successor of) SED are equivalent within the realm of experimentally accessible tests, just as they turned out to be, against all odds, equivalent on the chicken feed. Maybe QM is a kind of idealistic extrapolation of (a successor of) SED, and maybe each time we find a different prediction between SED and QM, this is simply because it concerns a fundamentally unrealisable experiment (like the beyond-Carnot-efficiency heat engines), one of these idealistic extrapolations with no reality.

And maybe not. Maybe SED is just a last convulsion of a dead paradigm, and its agreement on all the chicken feed is nothing but sheer coincidence without any deeper meaning. But it would be good to find out, no ? I have to say that I'm surprised by these coincidences. When I first heard of SED, I thought it was a model set up explicitly and only for the sake of explaining a specific Bell test using down converters. But when I saw that the only thing needed, namely a noise spectrum, could explain so many different examples (which turned out to be the historical motivations for quantum theory), I was intrigued. Something is to be understood here. That's only my point: that it is a bit easy to write all this off as sheer coincidence. I don't know whether there's something, and what it is, but my feeling is that there's something unexplained in this unreasonable success of SED, and it would be good to find out. Maybe there's a simple reason for it. Maybe we can show, from within quantum theory, that a certain class of results is equivalent to SED, and a larger class isn't (while still remaining open to experiment). But maybe we'll find out that said class contains very few or no verified results. Who can say, as long as it isn't done ?

After all, QM also faces its gorilla: gravity (with which SED has no problems for instance).

EDIT:
the chicken-feed list:
-photo-electric "lumpiness"
-black body radiation
-stability and spectrum of hydrogen
-gyromagnetic ratio for electrons up to order 6 in alpha
- Lamb shift
- Bell experiments with PDC xtals

Now, ask your average student for a list of the results which were the historical motivations that made people finally accept quantum theory. This is what I find intriguing.

EDIT2: personally, I find this exploration more "cost-effective" than pondering for 30 years about how to tie up my shoes in 11 dimensions :wink:
 
Last edited:
  • #41
ZapperZ said:
If SED is correct, then QM can't be.

What is SED?
An acronym for something, I assume.
Is ‘SO’ (Stochastic Optics) derived from SED,
or is SED part of SO?
 
  • #42
SED = Stochastic ElectroDynamics
 
  • #43
vanesch said:
But SED has been worked on by what ? 20 people ? How many people, and how much funding went into QM ?

EDIT2: personally, I find this exploration more "cost-effective" than pondering for 30 years about how to tie up my shoes in 11 dimensions :wink:

Didn't we have the same discussion about Bohmian Mechanics? (And maybe one or two about MWI :smile: ) In which the BMers said that they would be able to explain everything in terms of their proto-theory if just given more resources?

In other words, why design a theory to describe & predict what we can already describe & predict with QM? Yes, I know *maybe* it would lead us somewhere new, but so might equal research into the QM we have now! It would make more sense to stop funding research into the standard model IF we had hit an impasse and were not making new discoveries. That is hardly the case, as new area after new area has been discovered in recent years (how about GHZ & delayed choice quantum erasers just in the area of entanglement). When Marshall & Santos use their new improved line of thinking to push us into new fruitful territory, then they will really have something. Until then, the QM competition is kilometers (maybe even miles) behind.

P.S. Maybe you should be looking at your shoes in 26 dimensions. :tongue:
 
  • #44
ZapperZ said:
I would strongly suggest you re-read the PF guidelines that you have explicitly agreed to. Pay attention to personal and speculative theories and how PF would handle such a thing. If you wish to continue advertising your theory (sans your website), please do so in the IR forum per our Guidelines.

Zz.


Please explain

Forum Rules

Registration to this forum is free! We do insist that you abide by the rules and policies detailed below. If you agree to the terms, please check the 'I agree' checkbox and press the 'Register' button below. If you would like to cancel the registration, click here to return to the forums index.

Although the administrators and moderators of Physics Help and Math Help - Physics Forums will attempt to keep all objectionable messages off this forum, it is impossible for us to review all messages. All messages express the views of the author, and neither the owners of Physics Help and Math Help - Physics Forums, nor Jelsoft Enterprises Ltd. (developers of vBulletin) will be held responsible for the content of any message.

By agreeing to these rules, you warrant that you will not post any messages that are obscene, vulgar, sexually-oriented, hateful, threatening, or otherwise violative of any laws.

The owners of Physics Help and Math Help - Physics Forums reserve the right to remove, edit, move or close any thread for any reason.

I have read, and agree to abide by the Physics Help and Math Help - Physics Forums rules.

Carpe Dime
QuantunEnigma
 
  • #45
Read this: https://www.physicsforums.com/showthread.php?t=5374

In particular:
Overly Speculative Posts:
Physicsforums.com strives to maintain high standards of academic integrity. There are many open questions in physics, and we welcome discussion on those subjects provided the discussion remains intellectually sound. Posts or threads of a speculative nature that lack substantial support or well-considered argumentation will be deleted. Posts deleted under this rule will be accompanied by a private message from a Staff member, with an invitation to resubmit the post in accordance with our guidelines (https://www.physicsforums.com/showthread.php?t=82301). Poorly formulated personal theories and unfounded challenges of mainstream science will not be tolerated anywhere on the site.
 
Last edited by a moderator:
  • #46
DrChinese said:
Most people do not believe that Bell's Theorem requires any agreement with experiment. It simply says that the theoretical predictions of Local Reality are incompatible with the theoretical predictions of QM. Since we know the exact circumstances in which they are different (i.e. specific angle settings), it is really hard to see where you can go with this.


Please explain ''Local Reality'' and ''theoretical predictions of Local Reality''.

Thank you.
 
  • #47
DrChinese said:
Didn't we have the same discussion about Bohmian Mechanics? (And maybe one or two about MWI :smile: ) In which the BMers said that they would be able to explain everything in terms of their proto-theory if just given more resources?

Well, why not ? In fact, I fight every dogmatic religious attitude with religious conviction :biggrin:
In the same way as I would argue against a religious Local Realist, I argue against a Religious Bohmian, or a Religious quantum theorist. I take offense, btw, at being considered a religious MWI-er; I'm not. I only argue against religious anti-MWI-ers :-). In this thread, it appeared to me that there was some Inquisitional Threat against people investigating SED-type theories.

It cannot be denied, I think, that SED does what is often claimed to be impossible. As such, it has something of the flavor of Bohmian mechanics, which also does what was claimed to be impossible. These (obviously erroneous) claims of impossibility are often the fruit of zealous religious conviction in need of proof of its rightness. Such counterexamples are good to study, be it only to sober up from false certainties. It is simply good scientific attitude to recognize what is, and what is not, established, and every inquiry that can help clear that up is a worthwhile contribution.

In other words, why design a theory to describe & predict what we can already describe & predict with QM? Yes, I know *maybe* it would lead us somewhere new, but so might equal research into the QM we have now! It would make more sense to stop funding research into the standard model IF we had hit an impass and were not making new discoveries.

Well, nobody is talking about giving up QM ! It's way too successful ! But my modest opinion is that these very simple SED models merit maybe a slightly less aggressive treatment from the scientific community than they get now. True, there is an entire crowd of crackpot local realists, but that doesn't mean that all of it is crackpot. As of now, it is for instance totally impossible to find even a postdoc position working on the subject. As far as these models are concerned, they are way closer to sound science than yet another over-hyped string/loop/whatever version of the Ultimate Theory of Reality and Everything (where it is not so hard to find funded positions): the basic postulates of a model are written down, predictions are deduced from them, and the results are compared with experimentally known results. Science by the book.

My point is simply that the simplicity of this SED model and the accuracy of its predictions (true, within a very restricted domain for the moment) is intriguing, and that we might learn something if only we understood why. I have a hard time believing that it is pure coincidence that quantum theory and SED models give such close results from such different postulates. So the point is not so much SED versus QM, but why SED and QM give the same predictions.

Quantum theory doesn't need any "proof". It simply needs to be explored in every corner. Thinking about alternative theories is a good way to find suggestions for exploring quantum theory experimentally. It is also good practice to see what is "typically quantum" and what is not (all the classical examples of the "typically quantum" which are also explained by SED are obviously less typically quantum than first thought).

As such, I think that the only totally correct statement of the empirical situation is:
"quantum theory has been challenged and made successful predictions in all empirical tests, also in those suggested by rival theories, such as local realist ones."

In other words, all experimental work to date has not succeeded in falsifying quantum theory. However, claiming that it falsified SED is, as far as I know, still wrong. Maybe I'm wrong here, but I'm still not aware of a falsification of SED, in the sense that a clear SED prediction has been worked out and a different experimental result has been established. As Zapper pointed out, Kinsler worked out a theoretical proposal using third-order correlations in which certain SED-predicted correlations differ from the quantum predictions, but I'm not aware of any experimental verification of it. Santos showed that second-order correlation functions using parametric down conversion and "low efficiency" photon detectors (<87%) are identical between QM and SED, but Kinsler showed that third-order correlations (if we can produce them) are not.
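
To make concrete what "third order" means here, in Mermin's standard form (this is the textbook GHZ prediction, not Kinsler's own notation): for the three-photon GHZ state, the quantum prediction for the triple coincidence correlation with analyzer phases [tex]\phi_1, \phi_2, \phi_3[/tex] is

[tex]E(\phi_1,\phi_2,\phi_3) = \cos(\phi_1+\phi_2+\phi_3) ,[/tex]

and it is this genuinely three-party correlation function, rather than the pairwise (second-order) ones covered by Santos's theorem, where the semiclassical model is supposed to part company with quantum theory.
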
Here we see SED at work to suggest further tests of QM, which I think is a positive attitude.

General relativity also has its "challengers" (Brans-Dicke theory being the most famous one). It is interesting that some classical "tests" of GR can also be explained by B-D (for instance, gravitational time dilation). As such, exploration of B-D theory is a useful exercise.

One should not religiously commit to a single theory, and view competitors as personal rivals. Competitive theories are the backbone of scientific inquiry.

That is hardly the case, as new area after new area has been discovered in recent years (how about GHZ & delayed choice quantum erasers just in the area of entanglement). When Marshall & Santos use their new improved line of thinking to push us into new fruitful territory, then they will really have something. Until then, the QM competition is kilometers (maybe even miles) behind.

That depends, in fact. If we understood why SED gives results similar to quantum theory in those simple cases (which were nevertheless at one time held to be "proofs" of quantum theory, remember), we might understand certain properties of quantum theory better, maybe even leading to faster calculational algorithms. Imagine for instance that we understood when stochastic models a la SED give similar or identical results to QM: that might yield very simplified quantum chemistry models (using classical models + noise in the right way). It might also turn out to be, in fact, more complicated (as is for instance the case when treating spin in Bohmian mechanics).

Point is: we have a simple model, which is fundamentally different from QM, and which makes accurate predictions. That needs to be understood, not frowned upon. One should stop forming cliques of people, like religious brotherhoods, sworn to the success of one theory or another.

I like to read about SED, and I find that fascinating. I also like to read about Bohm, and about other theories. That doesn't mean that the next day I burn all my books on quantum theory.

A totally different matter is whether it is wise to spend a lot of time or money on it. That's an entirely personal choice. And as to funding: I would certainly not argue for dropping funding of QM research! But I can think of quite a few funded activities which are, IMO, less well spent than trying to find out why SED works so well in certain domains. Activities where there are no predictions, the postulates change every other day, and there are no experiments...

I do think I see your point, which is: why bother ? Why bother with something that might work, while we have something that does work ? As I said, that's a personal matter. I can very well understand this viewpoint. But I can also understand the person who bothers. Maybe simply because that person is intrigued by it and hopes to learn something. Or maybe because of the fact that a SED-like theory has all chances of not having troubles with gravity.

P.S. Maybe you should be looking at your shoes in 26 dimensions. :tongue:

Depends if they are bosonic or not :cool:
 
Last edited:
  • #48
vanesch said:
1. I take offense, btw, at being considered a religious MWI-er; I'm not.

2. I think that the only totally correct statement of the empirical situation is:
"quantum theory has been challenged and made successful predictions in all empirical tests, also in those suggested by rival theories, such as local realist ones."

1. You know I would never imply that... :smile:

2. Um, this statement is actually wrong, Vanesch. Aspect's test DID send local realists back to the drawing board! And so have subsequent enhancements such as GHZ! That is why Santos is working so hard on refinements to his model. What, this is perhaps his 10th iteration/refinement to make it agree? His models are frequently attacked and duly reconstructed within months to keep the subject alive. Plus, he must adapt to new experimental results. And yet the QM model has NOT needed similar adjustment.

A more accurate statement is:

"Quantum theory has been challenged and made successful predictions in all empirical tests. It may be possible to construct local realist theories that are experimentally indistinguishable from quantum theory, even though Bell's Theorem would seem to preclude this."

Hey, I don't have a problem if Santos spends time on it. I don't even claim his model is scientifically useless (although it appears that way to me). And perhaps a future discovery will show its power; sure, I can acknowledge that. But what credit does it deserve today? It is pretty obvious that the primary angle for keeping his ideas alive is to hang his hat on detection inefficiency and noise as a way to escape the day of reckoning. Not very impressive to me, but if you want to give it more significance then I am OK with that.
 
  • #49
QuantunEnigma said:
Please explain ''Local Reality'' and ''theoretical predictions of Local Reality''.
Sorry QuantunEnigma, I’m not buying it.
You just happened to join the forum here on the very same day that Gordon finally has more than one pointless page on his web site, and the first post you made attempts to draw attention to that site.
If you’re not WM, you must be someone helping him – and no, I’m not going to mention the name of the site here for you; I’ve seen nothing there worthy of sharing with anyone.

So just what is your point – are you looking to fill in the blanks in “W-Local” and “W-factoring” with something from DrC, who does know something? Follow the path to his website info if you want to learn something worthwhile.

If you cannot make your point short, direct and clear on your own website, please listen to Zz and don’t waste our time with it here.
 
  • #50
QuantunEnigma said:
Please explain ''Local Reality'' and ''theoretical predictions of Local Reality''.

Thank you.

Welcome to PhysicsForums, QuantunEnigma. In the hopes that I am not being baited (as RandallB points out above):

A Local Realistic theory is a theory composed of the following ideas:

a. Locality: often considered the same thing as Lorentz invariance, it is essentially the idea that effects do not propagate faster than c.

b. Reality: In the words of Einstein, who was the ultimate local realist: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

Bell discovered that QM leads to some theoretical predictions that look nonsensical from a local realistic standpoint (and violate b. above), such as negative probabilities. And yet they are supported by experiment.

I hope this answers your question.
 
  • #51
DrChinese said:
1. You know I would never imply that... :smile:

2. Um, this statement is actually wrong, Vanesch.
Aspect's test DID send local realists back to the drawing board!

Note that my statement didn't include any statement about a falsification of any LR theory. I said that certain experiments, on which quantum theory made successful predictions (and hence survived the falsification), were suggested by LR theories.
As such, these LR theories were useful, in that they suggested experiments one would otherwise maybe not even have thought about of performing. And that's the essence of what I'm trying to argue in this thread here: that these alternative theories do have some uses (in this case, suggest tests of QM). To me, its main use is already to show that a lot of popular textbook arguments of the necessity of QM are simply erroneous reasoning. In the same way that Bohmian mechanics was useful to show the erroneous reasoning in von Neumann's impossibility proof.

Also, maybe it sent *some* LR to the drawing board, but that would then be a naive lot. The SED model is way older than the Aspect experiments. A simple paper that fascinated me is Boyer, "Derivation of the Blackbody Radiation Spectrum without quantum assumptions", Phys, Rev. 182, vol5 1969, where the SED model is in fact used.

And so have subsequent enhancements such as GHZ! That is why Santos is working so hard on refinements to his model. What, this is perhaps his 10th iteration/refinement to make it agree?
His models are frequently attacked and duly reconstructed within months to keep the subject alive. Plus, he must adapt to new experimental results. And yet the QM model has NOT needed similar adjustment.

As I said, the point is not so much (and I have to say I don't like Santos/Marchall's "religious brotherhood" style either) QM versus SED, but to understand the relationship.
Whether a model is found by twiddling and tweaking doesn't really matter much after the fact (I could cite again the string community and related, who have been twiddling and tweaking quantum models to try to get out compatibility with that simple experiment: drop your pen to the floor and see it fall!). What matters is the existence or not, of the model. It is IMO always interesting to study examples of alternatives (be it to suggest experiments, or to broaden the understanding of the reasons why there is equivalence).

Hey, I don't have a problem if Santos spends time on it. I don't even claim his model is scientifically useless (although it appears that way to me). And perhaps a future discovery will show its power, sure, I can acknowledge that. But what credit does it does it deserve today? It is pretty obvious that the primary angle to keep his ideas alive is to hang his hat on detection inefficiency and noise as a way to escape the day of reckoning. Not very impressive to me, but if you want to give it more significance then I am OK with that.

I think it already showed its use. I'm for instance not even sure that Kinsler would have thought of his 3rd order correlations, or GHZ would have proposed their tests, if, after Aspect, the world would have chanted in unity about the ultimate achievement. The existence of these critics, and the existence of their alternative models, is what motivates progress. Like competition in economy, and opposition in politics.
It points to where there has been complacent and erroneous thinking.
 
Last edited:
  • #52
vanesch said:
... I could cite again the string community and related, who have been twiddling and tweaking quantum models to try to get out compatibility with that simple experiment: drop your pen to the floor and see it fall! ...

That's pretty funny (and probably accurate as well)!

I have no issue at all with attacks (in the competitive spirit you mention) on QM when there is a genuine issue or alternative hypothesis. I think that the experimental envelope is constantly being pushed, and I would be the last to advocate that we stop scientific study because "we already know it all".

It seems strange to see people constructing theories that say "Nature is LR but Experiments will always say QM" in the presence of Bell's Theorem. In my opinion, for SED to be a viable local realistic alternative to QM: it MUST make a prediction for entangled photon spin correlation that is at odds with Malus' Law (cos^2). I mean, that's ultimately the point of the 87% efficiency threshold that Santos claimed must be surpassed to distinguish (i.e. that there is a difference in the predictions which is being masked due to experimental loopholes). I just do not see how that makes any sense, because to assert that is essentially to say that Malus' Law is wrong too. And I consider that to be pretty fundamental.
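
Just to put numbers on "the exact circumstances in which they are different": here is a throwaway Monte Carlo sketch (my own toy comparison in Python, nothing to do with Santos's actual model) of the most naive local hidden-variable polarization model against the quantum cos 2(a-b) correlation, evaluated at the standard CHSH angles.

[code]
# Toy comparison (mine, not Santos's model): naive local hidden-variable
# polarization model vs. the quantum cos(2(a-b)) correlation, at the
# standard CHSH angles, with perfect detectors.
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, np.pi, 200_000)   # shared hidden polarization angle

def E_lhv(a, b):
    # Each photon answers +/-1 according to which side of the analyzer
    # axis its hidden polarization falls on.
    A = np.sign(np.cos(2 * (lam - a)))
    B = np.sign(np.cos(2 * (lam - b)))
    return float(np.mean(A * B))

def E_qm(a, b):
    # Quantum (Malus-type) prediction for polarization-entangled pairs.
    return np.cos(2 * (a - b))

def chsh(E):
    a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(f"naive LHV model: S = {chsh(E_lhv):.3f}  (sits at the Bell bound of 2)")
print(f"quantum theory:  S = {chsh(E_qm):.3f}  (2*sqrt(2) at these angles)")
[/code]

The naive local model tops out right at the Bell bound of 2 (up to sampling noise), while the quantum correlation reaches 2*sqrt(2) at those angles; detector inefficiency is precisely the knob that is supposed to let a local model mimic the quantum value on the post-selected coincidences.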

Oops, I just realized I probably opened up another can of worms. Sorry...
 
  • #53
DrChinese said:
A Local Realistic theory is a theory composed of the following ideas:

a. Locality: often considered the same thing as Lorentz invariance, it is essentially the idea that effects do not propagate faster than c.

b. Reality: In the words of Einstein, who was the ultimate local realist: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."


If that is indeed what "local realistic" means, then the terminology is completely wrong as a description of what is ruled out by empirical violations of Bell-type inequalities. There exists a valid argument from "a" (locality, plus some of the empirical predictions of QM) to "b": the EPR argument. (The original EPR version, however, was obscure as to both its logic and its point -- and a bit archaic now that Bell has provided a more precise definition of locality. See quant-ph/0601205 for an updated version of the argument from "a" to "b".) Let me state the point clearly in words: the only way a theory can predict the empirically supported perfect correlations (when Alice and Bob both measure, along the same axis, the spins of a pair of spin-1/2 particles in the singlet state) and respect relativistic locality, is for the theory to encode, in advance of any measurements, definite outcomes for all possible spin measurements -- i.e., locality *entails* what DrChinese above calls "realism", i.e., "locality" entails "local realism".
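
In symbols, the step from "a" to "b" is just this (the standard way of writing the argument, as in the reference above): quantum mechanics predicts, and experiment supports, perfect anti-correlation for same-axis measurements on the singlet state,

[tex]P\big(A_{\vec a} = -B_{\vec a}\big) = 1 \quad \text{for every axis } \vec a .[/tex]

If locality holds, nothing Alice does can influence Bob's outcome, so the only way to guarantee this for every possible axis is for both outcomes to be fixed in advance by the pair's common preparation: there must exist values [tex]A(\vec a,\lambda) = -B(\vec a,\lambda) = \pm 1[/tex] for all [tex]\vec a[/tex]. Those pre-existing values are exactly what gets labelled "realism" -- which is why, here, it is a consequence of locality rather than an independent premise.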

Thus, what's being tested in the Bell test experiments isn't the conjunction of two premises (locality + realism = "local realism") but simply locality. Anybody who is confused about this point needs to go back and read Bell, because he explains it as well as anyone could.

This point is orthogonal to the debate about QM vs SED that's been going on here, but it seems to me a fundamental point since both sides in this other debate have swallowed this standard terminology ("local realism") without realizing that it is based on a flawed understanding of Bell's work.
 
  • #54
***
Thus, what's being tested in the Bell test experiments isn't the conjunction of two premises (locality + realism = "local realism") but simply locality. Anybody who is confused about this point needs to go back and read Bell, because he explains it as well as anyone could. ***

Perhaps you have heard about the possibility of negative probabilities, or even complex detection amplitudes?? Feynman, Dirac (even prior to the existence of the Bell inequalities :cool: - this guy was clearly clairvoyant), Barut and others have given explicit ways to violate the Bell inequalities in local theories in this way. :cool: So your statement is clearly false (as are most crappy papers analysing what Bell had to say), and you ignore this with the same pleasure as you dismiss predeterminism.

Since Dr. Chinese here challenges the work of Santos, perhaps Dr. Chinese should explain to us how photon and electron detectors work. What do we measure exactly, and to what do we imagine it corresponds? Then we, or he, could understand why it might be that detector inefficiency is a fundamental issue (and indeed the consequence of a different view of measurement, NOT of the entangled state) and not some temporary technical limitation. Also, he could illuminate us by telling us how a GENUINE entangled state is produced!

Vanesch, congrats with your scientific attitude.

Careful
 
Last edited:
  • #55
Careful said:
Perhaps you have heard about the possibility of negative probabilities, or even complex detection amplitudes?? Feynman, Dirac (even prior to the existence of the Bell inequalities :cool: - this guy was clearly clairvoyant), Barut and others have given explicit ways to violate the Bell inequalities in local theories in this way. :cool: So your statement is clearly false (as are most crappy papers analysing what Bell had to say), and you ignore this with the same pleasure as you dismiss predeterminism.

"Negative probability" is a contradiction in terms. Look at the axioms that need to be satisfied for something to be a "probability". Page 1 of any probability/stats textbook.

Formally (i.e., leaving aside the actual meanings of relevant concepts) your statement is true: you can violate Bell's inequalities with a local theory if you allow probability distributions that are sometimes negative. But please. If that's the best available argument against my claim, it's just a complicated way of confessing that my claim is true.


Vanesch, congrats with your scientific attitude.

I'll second that. Vanesch's comments on this thread are a much-needed breath of scientific fresh air in the face of dogmatic/religious attachment to QM. My only complaint would be that one shouldn't give quite as much credit to QM as Vanesch has done here (though I know from other discussions that he agrees with me about this). QM (assuming we thereby mean the orthodox or Copenhagen theory) is a bad theory. It is "unprofessionally vague and ambiguous" (Bell's description) about such crucial things as what it is about, when its two mutually incompatible recipes for time evolution apply (i.e., what exactly is this thing "measurement" which makes unitary evolution stand aside momentarily in favor of collapse), etc. If there were no such foundational problems with orthodox QM, I would incline toward the view that SED is pointless (along the lines of, say, some non-atomic continuum theory of matter that manages somehow to explain the ideal gas law and some of chemistry, but which, in today's context, where it is just absolutely certain that matter is atomic, would be at best a curiosity). But given that these foundational problems do exist, dogmatic attachment to orthodox QM is simply indefensible, and anyone who maintains this attitude (and the associated vitriolic dismissal of things like SED and Bohmian Mechanics) thereby reveals himself as a non-thinking, anti-scientific dogmatist.

Or, if you like, I could tell you what I really think. :rofl:
 
  • #56
Careful said:
1. Since Dr. Chinese here challenges the work of Santos, perhaps Dr. Chinese should explain to us how photon and electron detectors work. What do we measure exactly, and to what do we imagine it corresponds? Then we, or he, could understand why it might be that detector inefficiency is a fundamental issue (and indeed the consequence of a different view of measurement, NOT of the entangled state) and not some temporary technical limitation.

Also, he could illuminate us by telling us how a GENUINE entangled state is produced!

Not sure I follow what you are asking...


1. Sure, I challenge the work of Santos... but more from a philosophical perspective rather than saying there is an error in it per se. Detector efficiency may be fundamental to Santos' position, but I don't think too many scientists will see it as such. I think his focus is much too narrow to gain any mainstream attention. And despite his best efforts, I do not see how a stochastic approach will ever work to accomplish his goal: a local realistic alternative to QM that respects Bell. But I could be wrong.

If we need to have a separate thread about the pros and cons of Santos and Marshall's work, then I would be happy to participate. However, I don't want to mislead anyone into thinking I am an expert on it. Nor should anyone think that I am denying that they are respected scientists. However, SED is less mainstream than Bohmian Mechanics, which is itself not mainstream. Given the nature of this forum, I think that is relevant.


2. Parametric Down Conversion produces entangled photon pairs. With the evidence, how could you not believe this... unless, of course, you deny the existence of entanglement a priori. The only problem is that clearly, you can measure the difference between groups of entangled photon pairs vs. pairs in which there is no entanglement. (And there are many different ways to entangle particles.) So the question is really: what do YOU call the photon pairs produced by PDC?

I do not believe there is a deeper level of reality than the HUP implies. Therefore, I do not believe there is definite real value for observables outside the context of a measurement. I consider this an orthodox view, hardly in need of further description.
 
  • #57
Careful said:
Vanesch, congrats with your scientific attitude.

I third that. Vanesch, thank you for taking the defense of my position while I was away. Your eloquent comments precisely characterize my views as well on why it is worth investigating local realistic theories such as SED.

ZapperZ said:
No SED treatment has ever attempted to match the results of ARPES, RPES, or even multiphoton photoemission processes. In this day and age, the photoelectric effect is chicken feed.

SED has not yet been successful in this regime. However, there is currently ongoing work by Dan Cole, whose paper Vanesch cited. The three papers on detectors by Santos that I cited also discuss this problem, if you care to read them. I also have specific ideas along these lines, which I won't discuss.

However, there is an important point to consider about SED and local realist theories in general. If SED is correct, then the physical description of atomic processes will also be much more detailed and complex than the standard quantum mechanical treatment. In fact, the actual physics of SED is very nonlinear when modeled precisely and accurately, and this is very difficult to handle because of the nonlinearity of the theory. Indeed, because purely analytical treatments of nonlinear systems are not very reliable, SED theorists are taking advantage of numerical simulations of atomic processes described by SED; and lo and behold, these numerical simulations are beginning to show that SED works where it was once thought to fail, such as in generating the probability density distribution for the position of an electron in the ground state of the hydrogen atom. The techniques of SED are also becoming extremely useful in the analysis of Casimir and van der Waals forces under various boundary conditions:

http://www.bu.edu/simulation/publications/dcole/PDF/DCColeBUPresentationApril162003.pdf
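
Since it may help to see what "numerical simulations of atomic physical processes described by SED" look like in outline, here is a deliberately crude sketch (my own cartoon with made-up parameters and a white-noise stand-in for the zero-point field; it is emphatically not Cole's method, which uses the full Abraham-Lorentz force and the correct coloured spectrum):

[code]
# Cartoon of the structure of an SED-style atom simulation (NOT Cole's code):
# a classical point charge in a softened Coulomb potential, with linear
# damping standing in for radiation reaction and white noise standing in
# for the zero-point field.  All units and parameters are made up.
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 1e-3, 200_000
gamma, noise, eps = 0.05, 0.3, 0.05      # damping, noise strength, softening

r = np.array([1.0, 0.0])                 # start on a roughly circular orbit
v = np.array([0.0, 1.0])
radii = np.empty(n_steps)

for i in range(n_steps):
    d2 = r @ r + eps                     # softened distance^2 (numerical regulator)
    acc = -r / d2**1.5 - gamma * v       # Coulomb attraction + damping
    v = v + acc * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
    r = r + v * dt
    radii[i] = np.sqrt(r @ r)

# Damping alone would spiral the orbit into the nucleus; the noise feeds
# energy back in.  Whether the long-run histogram of |r| settles into
# anything like the quantum ground-state density hinges on treating the
# radiation reaction and the noise spectrum properly -- which is exactly
# the hard part of the real simulations.
hist, _ = np.histogram(radii[n_steps // 2:], bins=40, range=(0.0, 4.0), density=True)
print("time-averaged radial density, first bins:", np.round(hist[:5], 3))
[/code]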

Now you might say that this is why QM is a better theory: it gives a correct first-order description of atomic spectra and photoemission processes, whereas SED has to resort to a nonlinear description of light-atom interactions. However, a nonlinear description is exactly what one would expect from a more fundamental and accurate stochastic local-realist description of atomic physics. In fact, one could have said the same about Newtonian gravity versus general relativity when the latter was being developed. You could have argued: what is the use of a nonlinear field equation to describe, say, the motion of a test particle in a gravitational potential, when we already have a perfectly linear theory (Newtonian gravity) that does this just fine? Of course, the claim was that GR would be the more fundamental and accurate description of gravity, to which Newtonian gravity is only a very good approximation; and given that claim, there would eventually have to be new or different predictions that GR makes against Newtonian gravity. And indeed there were.

Likewise, the same claim would be made about SED, that it gives a more accurate description of atomic-optical physics to which standard QM is an excellent mathematical approximation. Therefore, SED will make new predictions that standard QM does not. And of course we know this is true! But such tests have yet to be carried out. So, I would say give it time.

Vanesch said:
Don't get me wrong, I don't think that any amount of funding can make a totally ill founded theory work as well as QM. But maybe SED would have had an equally successful development if it would have received as much attention. So I think it is not totally fair to ask of SED to give you the same level of actuality and sophistication as QM has today, given the hugely different amounts of means that were invested in both paths.

Indeed, this is partly true. Theoretical QM research has received several orders of magnitude more manpower, grant money, and time than SED has (BTW, I think this is the same reason that Bohmian mechanics has yet to be made fully relativistic). However, another significant reason for the limited scope of SED is that the mathematical and computational tools needed to accurately analyze the nonlinear partial differential equations of SED for nonlinear systems were only developed in the '80s, when many researchers in the field had already become pessimistic about the theory. There is a very nice review article on the history of SED that can be found here:

http://www.bu.edu/simulation/publications/dcole/PDF/SwedenCole2005.pdf

Vanesch said:
After all, QM also faces its gorilla: gravity (with which SED has no problems for instance).

EDIT:
the chicken-feed list:
-photo-electric "lumpiness"
-black body radiation
-stability and spectrum of hydrogen
-gyromagnetic ratio for electrons up to order 6 in alpha
- Lamb shift
- Bell experiments with PDC xtals

Now, ask your average student for a list of the results which were the historical motivations that made people finally accept quantum theory. This is what I find intriguing.

EDIT2: personally, I find this exploration more "cost-effective" than pondering for 30 years about how to tie up my shoes in 11 dimensions.

These are excellent points. Just to add to the SED chicken-feed list: the Casimir effect, Unruh-Davies radiation, and the Aharonov-Bohm effect. And the SED description of the AB effect also has an experimentally distinguishable prediction:

"The Paradoxical Forces for the Classical Electromagnetic Lag Associated with the Aharonov-Bohm Phase Shift". Timothy H. Boyer.
http://arxiv.org/abs/physics/0506180

Vanesch said:
In fact, I fight every dogmatic religious attitude with religious conviction. In the same way as I would argue against a religious Local Realist, I argue against a Religious Bohmian, or a Religious quantum theorist.

Same here. In fact, I have a currently running debate with Sheldon Goldstein about the problems with the physical interpretation of the wave function in BM, as well as one with Trevor Marshall on conservation of energy issues in SED.

Vanesch said:
My point is simply that the simplicity of this SED model and the accuracy of its predictions (true, within a very restricted domain for the moment) is intriguing, and that we might learn something if only we understood why. I have a hard time believing that it is pure coincidence that quantum theory and SED models give such close results from such different postulates. So the point is not so much SED versus QM, but why SED and QM give the same predictions.

Exactly. From a philosophy-of-science perspective, if we understood what functional aspect of the mathematical structure and physical ontology of these different theories gives them much of the same predictive power, that would also be of considerable value to the scientific methodology of physics. There are many alternative formulations of physics, such as Brans-Dicke theory, which Vanesch also mentioned, or Bohmian mechanics, Everett's MWI, GRW spontaneous collapse, or even SED, which all have vastly different ontologies but which are still empirically very close. As a consequence, it is very difficult, as a theorist, to know which ontological interpretation is closer to the objective truth. Developing a rigorous means by which to help make this judgement would be of value for any theorist, and especially for those who work on competing theories which are very far from being experimentally testable, e.g. string theory, loop quantum gravity, Hawking's quantum cosmology, even semiclassical gravity.

Vanesch said:
One should not religiously commit to a single theory, and view competitors as personal rivals. Competitive theories are the backbone of scientific inquiry.

Yes! In fact, this argument - that local realistic challenges to the standard formalism of QM can give us deeper insights into it - has already been borne out, in my opinion. Einstein's critical mind allowed him to see more deeply into the foundations of quantum mechanics than many of its most ardent defenders. And the kind of philosophically motivated critical questions he asked but could not yet answer were to bear fruit barely 10 years after his death, when they were taken up again by another progressive critic of standard QM - John Bell.


DrChinese said:
It seems strange to see people constructing theories that say "Nature is LR but Experiments will always say QM" in the presence of Bell's Theorem.

Indeed that would seem strange to "see people constructing theories that say "Nature is LR but Experiments will always say QM" in the presence of Bell's Theorem." Santos and Marshall are not saying this however. They are saying that "Nature is LR and experiments are consistent with this."

DrChinese said:
In my opinion, for SED to be a viable local realistic alternative to QM: it MUST make a prediction for entangled photon spin correlation that is at odds with Malus' Law (cos^2). I mean, that's ultimately the point of the 87% efficiency threshold that Santos claimed must be surpassed to distinguish (i.e. that there is a difference in the predictions which is being masked due to experimental loopholes). I just do not see how that makes any sense, because to assert that is essentially to say that Malus' Law is wrong too. And I consider that to be pretty fundamental.

I thought you said you were very familiar with Santos and Marshall's work? Marshall and Santos showed a long time ago that stochastic noise does in fact modify Malus' Law, in a way that is still consistent with observation. Please read the abstract of this paper:

Stochastic optics: A local realistic analysis of optical tests of Bell inequalities
http://prola.aps.org/abstract/PRA/v39/i12/p6271_1


DrChinese said:
If we need to have a separate thread about the pros and cons of Santos and Marshall's work, then I would be happy to participate. However, I don't want to mislead anyone into thinking I am an expert on it. Nor should anyone think that I am denying that they are respected scientists. However, SED is less mainstream than Bohmian Mechanics, which is itself not mainstream. Given the nature of this forum, I think that is relevant.

I would be willing to participate in such a separate thread. However, SED not being mainstream has not degraded the quality of the arguments or discussion in this thread. Moreover, SED is solid, peer-reviewed work, just as is Bohmian mechanics.


DrChinese said:
2. Parametric Down Conversion produces entangled photon pairs. With the evidence, how could you not believe this... unless, of course, you deny the existence of entanglement a priori. The only problem is that clearly, you can measure the difference between groups of entangled photon pairs vs. pairs in which there is no entanglement. (And there are many different ways to entangle particles.) So the question is really: what do YOU call the photon pairs produced by PDC?

DrChinese, you apparently are not very familiar with Marshall and Santos' work. They and others have accounted for PDC entanglement of photons within the stochastic optical formalism:

"What is entanglement?" Emilio Santos.
I conjecture that only those states of light whose Wigner function is positive are real states, and give arguments suggesting that this is not a serious restriction. Hence it follows that the Wigner formalism in quantum optics is capable of interpretation as a classical wave field with the addition of a zeropoint contribution. Thus entanglement between pairs of photons with a common origin occurs because the two light signals have amplitudes and phases, both below and above the zeropoint intensity level, which are correlated with each other.
http://arxiv.org/abs/quant-ph/0204020

A Local Hidden Variables Model for Experiments involving Photon Pairs Produced in Parametric Down Conversion: Alberto Casado, Trevor Marshall, Ramon Risco-Delgado, Emilio Santos.
http://arxiv.org/abs/quant-ph/0202097

A. Casado, T. W. Marshall, and E. Santos, J. Opt. Soc. Am. B 14, 494-502 (1997).

A. Casado, A. Fernández-Rueda, T. W. Marshall, R. Risco-Delgado, and E. Santos, Phys. Rev. A 55, 3879-3890 (1997).

A. Casado, A. Fernández-Rueda, T. W. Marshall, R. Risco-Delgado, and E. Santos, Phys. Rev. A 56, 2477-2480 (1997).

A. Casado, T. W. Marshall, and E. Santos, J. Opt. Soc. Am. B 15, 1572-1577 (1998).

A. Casado, A. Fernández-Rueda, T. W. Marshall, J. Martínez, R. Risco-Delgado, and E. Santos, Eur. Phys. J. D 11, 465 (2000).

A. Casado, T. W. Marshall, R. Risco-Delgado, and E. Santos, Eur. Phys. J. D 13, 109 (2001).

DrChinese said:
I do not believe there is a deeper level of reality than the HUP implies. Therefore, I do not believe there is definite real value for observables outside the context of a measurement. I consider this an orthodox view, hardly in need of further description.

That I would have to sharply disagree with. Bohmian mechanics proves the opposite of what you believe regarding the HUP, and of the claim that observables don't have a definite real value before measurement.

Regards,
Maaneli
 
Last edited:
  • #58
** "Negative probability" is a contradiction in terms. Look at the axioms that need to be satisfied for something to be a "probability". Page 1 of any probability/stats textbook. **

I thought you would say this. It implies you did not understand anything of this proposal (nor much about probability theory; I know Kolmogorov did not understand this either, so you are in good company). :bugeye: Negative probability could mean negative energy, and the amplitudes could reveal something about how detection works... Anyway, probability only needs to be positive in the limit of infinite measurements; the mistake people like you, Shimony ... make is that you always assume statistics applies to single events.

***
Formally (i.e., leaving aside the actual meanings of relevant concepts) your statement is true: you can violate Bell's inequalities with a local theory if you allow probability distributions that are sometimes negative. But please. If that's the best available argument against my claim, it's just a complicated way of confessing that my claim is true. ***

Absolutely not; again, I would invite you to think about it. As far as I know, Sorkin's proposal goes in a similar direction, but one needs to revise measurement completely (as well as stop thinking in terms of one-particle situations).

As far as it goes, I explained why BM does not solve measurement either ... it is rather nonsensical that the electron goes through the left slit and the measurement apparatus points out right :cool:. The problem I have with all these stories, such as BM, Copenhagen and MWI, is that they offer a very simple, naive way out (although BM definitely does a better job), while I hear most of these people complaining about naive realists who come up with much more subtle and intelligent constructions :rofl:

Careful
 
Last edited:
  • #59
***
1. Sure, I challenge the work of Santos... but more from a philosophical perspective rather than saying there is an error in it per se. Detector efficiency may be fundamental to Santos' position, but I don't think too many scientists will see it as such. I think his focus is much too narrow to gain any mainstream attention. And despite his best efforts, I do not see how a stochastic approach will ever work to accomplish his goal: a local realistic alternative to QM that respects Bell. But I could be wrong.
***

I ask you how such a detector (as well as the detection process) WORKS, so that we can see whether Santos is an idiot or not.

***
2. Parametric Down Conversion produces entangled photon pairs. With the evidence, how could you not believe this... unless, of course, you deny the existence of entanglement a priori. The only problem is that clearly, you can measure the difference between groups of entangled photon pairs vs. pairs in which there is no entanglement. (And there are many different ways to entangle particles.) So the question is really: what do YOU call the photon pairs produced by PDC? ***

States produced in parametric down conversion are product states; of course you can write them as a sum of entangled states and then claim that entanglement has been observed, which is what you say - but if that is your case, then it is a very weak one indeed. I asked for a GENUINELY entangled state: how does one produce such a state??

I am not claiming that the entanglement correlations do not exist (although they have not been observed) but that the explanation QM attributes to them is wrong since it depends upon unphysical processes (consciousness or action at a distance).

Careful
 
Last edited:
  • #60
ttn said:
I'll second that. Vanesch's comments on this thread are a much needed breath of scientific fresh air in the face of dogmatic/religious attachment to QM. My only complaint would be that one shouldn't give quite as much credit to QM as Vanesch has done here (though I know from other discussions he agrees with me about this). QM (assuming thereby we mean the orthodox or Copenhagen theory) is a bad theory.


Ugh, you can't say that either! The quantum formalism is a vastly successful formalism, if you read it in the following way: imagine a professor telling his students: "OK, today I'm going to introduce you to something called 'quantum mechanics'. First of all, it is - as is any new scientific theory - very strange; some say, incomprehensible; but I'll show you how you have to use it, how to make calculations, and I can tell you that people have done so and have always found agreement with all non-gravitational experiments (even the effect of a fixed gravitational potential can be taken into account, which is an exception to the cited limitation in some simple cases). This is the main reason why you should learn it. Don't ask me what it 'means'. Just learn how to do the calculations. That's the quantum formalism [...]"

You can transpose that to any scientific theory; it is its essence. The problem is, not many people (especially students) are interested in "learning to do calculations of outcomes of experiments". People want philosophy, or better yet, they want revelation. They want to know what it actually means, and to what great secrets of nature they will be introduced. They want to know the truth, they want to know "what really happens", not simply some calculational rules. Well, it's a lesson in philosophy we receive from modern physics, that as of now (and probably for a long time to come), we won't know the "truth". That doesn't stop some from claiming they do, but this is no different from any sect guru and his adepts who claim to know enlightenment. The only thing we finally know is that certain formal systems of calculation are extremely accurate within a certain scope of application. That's much more sobering than the Great Story of the Meaning of Life, the Universe and Everything (which was probably the main reason many students enrolled in physics in the first place, not least because of the hype about this in the popular literature). In fact, some might even regret having come to the lecture on "how do I calculate outcomes of experiments", and tell themselves that, all things being equal, it would have been a better idea to learn "how to increase the contents of my bank account", the philosophical challenge of both endeavours being, upon reflection, about equally meager.

Because of that disappointment, and because of the inquiring nature of the human mind, and because that's what they came here for in the first place, and because it sells more books, people cannot be satisfied with that all-too-limited set of "lectures on how do I calculate outcomes of experiments". They want to know "the Truth". Now, where there is demand, there will be supply, so that's what you get: the Truth. In other words, an ontological interpretation of the formal rules you use to get outcomes of experiments. A story which tells you what Really Happens (TM). Exactly like the sect guru tells you the Truth, and what Really Happens.
We, as humans, need that, in order to satisfy our minds, to motivate ourselves to work through all that formal stuff, and also to find inspiration in our thinking. By an "ontological interpretation" I mean the "toy world" that is associated with a certain formalism. You can even have some liberty in setting up such a toy world for a given formalism (as long as it is faithful to the elements of said formalism, of course). When there is such liberty, you can argue endlessly about the merits of one over the other (which is what happens here in this and related threads).
Every new scientific formalism has had its dose of "ontological criticism". With Newton, the main problem was to know what it was, physically, that was "pushing" on the planets and was formally *represented* by the "force of gravity". Was it invisible angels?
With Maxwell, the question was what these "fields" were in space where there was nothing. Vibrations in some bizarre invisible liquid? Relativity poses the question of what exactly this "space-time manifold" is. Some kind of 4-dimensional pasta in a twisted form?
But the real whopper came with quantum theory, of which Bohr simply said that it describes *nothing*. :cool: First, the jet-setters found that a cool idea, something different from the usual: hey, we're describing very accurately "nothing". All that naive lot is thinking about "stuff", but we think about "nothing"; that's way cooler! There are still a lot of adepts of the "there's nothing" view, but now that it has lost the fanciness of its fashionable novelty, many people are starting to realize that having a mental picture of the toy world of "nothing" is not what they came for initially, when they wanted to learn the Truth, and that it doesn't help them think about it. Enter the Local Realists, the Everettians and the Bohmians.

Personally, I need a story too, and that's why I apply to quantum theory exactly the same kind of reasoning as to all others: take the elements of your formalism as "reality". If Maxwell has fields, take them as real. If Newton has forces, take them as real. If GR has a 4-dim manifold, take it as real. Well, if quantum theory has a unitary structure, take it as real. You then end up in MWI (that's why I consider myself, in as far as I'm thinking about a quantum toy world, an MWI-er), and many people don't like that because it looks so totally different from what we thought the world was like when we were kids. But it is no more or no less real than all that other stuff: it is real in the *toy world* that you mentally set up in order for you to picture the formalism.

SED has simply *another* toy world, and Bohmian mechanics yet another toy world. SED, because it has a totally different formalism (classical fields with noise terms) ; Bohm because it takes over the formalism of (unitary) quantum theory, and adds an extra formal element to it similar to the Newtonian formalism: particles and forces.

So it is a bit strange that a Bohmian would find the quantum formalism a "bad theory", because he includes it. He only added an extra machinery for the simple sake of being able to construct a different toy world which is closer to his intuitive desires. Nevertheless, the Bohmian relationship with quantum theory is entirely understood, because it was initially set up on purpose to be so. There's no surprise when both find identical predictions. We know mathematically why this is so (and in a rather straightforward way). Nevertheless, because of the totally different toy world offered by BM, it can offer a refreshing perspective on things like the two-slit experiment, for instance. It's fun to know you can think of that experiment in several ways (in different toy worlds) and nevertheless obtain the same results: we can think that there is "nothing", or we can think that there are "parallel worlds", or we can think that there are genuine particles guided by some non-local quantum force. These mental pictures are entirely different, although they share the same core calculation (which is nothing else but the quantum formalism, possibly embellished with some extra machinery - nevertheless, the right result finds its origin in the Schrödinger equation).

However, SED is entirely different. SED is a theory of coupled classical field equations with noise terms. There is a relationship with quantum theory of course, because in QFT the field operators satisfy similar non-linear equations (without the noise terms), but what is not understood is how it is that these noise terms in the classical equations can mimic so many aspects of the operator solution without the noise terms. That is an entirely formal question, apart from any philosophical interference (and I think it is the important reason to consider SED up to some point). From the SED PoV, it needs to be understood how solutions to non-linear partial differential equations with noise terms agree with solutions of non-linear operator equations (in totally different spaces). From the QM PoV, it needs to be understood how the solutions to operator equations in high-dimensional spaces are well described by "simple" solutions of non-linear PDEs in 3-D with added noise.

So there are two entirely different discussions here. One is pseudo-philosophical, and concerns personal preferences for toy worlds. It is only pseudo-philosophical, because the true philosophical attitude is to say that, unfortunately, we don't know what is true, and we're limited to guessing. This then leads to religious brotherhood attitudes where the Good (us) fights the Evil (them).

The other discussion is about understanding the relationship between different formalisms which (within certain limited scopes) succeed in making identical predictions. *this* is the interesting discussion.

In conclusion: I think it is wrong to say that quantum theory is a "bad theory". It works marvelously. However, I think it is wrong to reify it, and it is enlightening sometimes to look upon its results from different angles, be it Bohmian or SED, or whatever.
 
  • #61
There is something highly unethical about talking about detection loopholes or detector efficiency while ignoring some very fundamental aspects of, and responses to, such things.

Unlike most of you, *I* have been involved in actual measurements of such things since the start of my graduate school years, and for about the last 1 1/2 years I have been making high-QE photocathodes. So I can talk about background noise, dark current, detector signal, blah blah blah till everyone turns blue. Trying to distinguish between what is "noise" and what is "signal" is a HUGE part of my work. If you look at the raw data from photoemission spectroscopy, for example (i.e. if you make cuts in the data in my avatar), you will see background noise, detector noise, dark currents, etc... Yet, according to SED (and Santos), this "random" background noise can somehow mimic the "actual signal"! NO KIDDING!

How convenient, when you can simply stick something in ad hoc and, voila, mimic the actual signal simply by burying it in the detector noise. Or did we forget that SED comes with its own set of assumptions about the nature of such background fluctuations? And unlike QM, many of these "assumptions" have not even been tested at the most fundamental level to see whether they are consistent with observation.

Photodetector performance is such a crucial issue, and has been studied so extensively, that it is not even funny. Yet I have seen no actual study of how well detector performance actually matches any of SED's assumptions. If we can verify everything from the Fowler-Nordheim law at finite temperatures to the Richardson-Dushman relation for photocathodes, how come this void remains for SED? One would think this is a very fundamental aspect of verifying SED if it is to be taken seriously. Or maybe it is because it is not falsifiable?

However, the most disturbing and unethical aspect of this discussion is the complete absence of citations to the TONS of issues that have already been addressed regarding detection efficiency. All I see are references to various detection issues that somehow support SED's point of view on the Bell-type experiments (while ignoring the more stringent CHSH-type experiments). Nowhere was there any mention, by the so-called experts or students of SED, of papers such as those by S. Massar et al.[1] or A. Cabello[2], which have either formulated Bell-type inequalities that are insensitive to detector inefficiency, or shown that one can already distinguish the quantum-optics prediction from the classical one with a detector of just 69% efficiency (which we already have!). Or what about the Tittel et al.[3] experiment, which analyzed its data without subtracting any accidental coincidences (something that many have claimed would reveal "non-quantum" results)?

Where are the rebuttals from the SED camp to those papers? Check any of Santos's or Marshall's published papers, and the citations to their papers that addressed many of the issues they brought up. So how come they did not respond to any of these? And I only did a very quick search on a few papers that I am aware of. The rest of you who, I presume, work in this field or are very much interested in it, should be sitting on a truckload of literature. So why were the papers that have addressed such detector issues WITHHELD from being listed here alongside those that were so quickly advertised?

There are more papers of this type. This is why I find such an omission here very disturbing. It somehow conveys that the issues brought up by SED are "unanswerable" and thus must be true. If you omitted such info on purpose, then shame on you. If you were simply ignorant of this large body of information, then what else have you missed that you SHOULD have known before pushing this thing onto us?

Zz.

[1] S. Massar et al., PRA 66, 052112 (2002).
[2] A. Cabello, PRA 72, 050101 (2005).
[3] W. Tittel et al., PRL 81, 3563 (1998).
 
Last edited:
  • #62
RandallB said:
Sorry QuantunEnigma, I’m not buying it.
You just happen to join the forum here on the very same day that Gordon finally has more than one pointless page on his web site, and the first post you made attempts to draw attention to that site.
If you’re not WM, you must be someone helping him; and no, I’m not going to mention the name of the site here for you, I’ve seen nothing there worthy of sharing with anyone.

So just what is your point? Are you looking to fill in the blanks in “W-Local” and “W-factoring” with something from DrC, who does know something? Follow the path to his website info if you want to learn something worthwhile.

If you cannot make your point short, direct and clear on your own website, please listen to Zz and don’t waste our time with it here.

As long as one person knows the secret web address then I have done an accidental good job.


I did not know it was illegal to give it out when I did.


The address had many pages long before I communicated here, but do not let facts curb your enthusiasms.


Was it Albert Einstein who said, "Rich thinkers always meet violent opposition from poor minds", please?
 
  • #63
Maaneli said:
I thought you said you were very familiar with Santos and Marshall's work? Marshall and Santos showed a long time ago that stochastic noise does in fact modify Malus' Law, in such a way that it is still consistent with observation. Please read the abstract of this paper:

Moreover, SED is solid, peer-reviewed work, just as is Bohmian mechanics.

...That I would have to sharply disagree with. Bohmian mechanics proves the opposite of what you believe regarding the HUP, and of the claim that observables don't have a definite real value before measurement.

Regards,
Maaneli

I really have to be amused at someone who touts both SED and BM in the same post.

In case it wasn't clear, I consider the idea that Malus' Law is incorrect to be the death knell for any hypothesis. You may as well argue that c is really 5% higher than the usual published value and the difference is noise. Apparently, experimental noise only comes in one kind: the kind that keeps an agenda alive.

BTW, if you think that I don't consider alternative theories and speculative hypotheses... you are completely wrong. In that regard, I am probably no different than anyone else, and I read plenty of unpublished articles. But call it for what it is, and don't elevate it above proven useful theory.
 
  • #64
DrChinese said:
1. Sure, I challenge the work of Santos... but more from a philosophical perspective rather than saying there is an error in it per se. Detector efficiency may be fundamental to Santos' position, but I don't think too many scientists will see it as such. I think his focus is much too narrow to gain any mainstream attention. And despite his best efforts, I do not see how a stochastic approach will ever work to accomplish his goal: a local realistic alternative to QM that respects Bell. But I could be wrong.

The point of SED is that the Bell violations in quantum theory are idealistic formal extrapolations which have no feasible counterpart in the empirical world. It took me some time too to understand that, but I think that SED sees quantum states that violate Bell as something like a complex analytic extension of the 1/r law: formally you can calculate it, but you'll never encounter a complex distance between two bodies.

I'll play the devil's advocate:

I already gave the example of "ideal" thermal engines. Imagine a proponent of thermodynamics (here, SED) who claims that you cannot violate the second law of thermodynamics (Bell's inequalities).
Now, Maxwell's Demon (Bell) comes along and proves the MD theorem in Formal Heat Engine Theory (QM): using ideal heat engines (100% efficient "photon detectors"), which convert heat into work perfectly, it is easy to show how to violate the second law. Here goes the proof:

The entropy change for a reversible exchange of heat dQ at temperature T is given by T dS = dQ.

Now, consider a heat reservoir R1 at T1, and another R2 at T2>T1.

Take a heat engine E1, which takes an amount of heat dQ1 from R1 and converts it into work dW. Drive with that work a heat engine E2, which converts it into heat in R2. Conservation of energy requires:

dQ1 = dW = dQ2 (1)

Now, the entropy change of the first heat reservoir is dS1 = - dQ1/T1, and that of R2 is dS2 = + dQ2/T2.

dS = dS1 + dS2 ; using (1), this gives:

dS = dW (1/T2 - 1/T1) = dW (T1 - T2) / (T1 T2)

Given that T2 > T1, we have that dS is negative: violation of the second law. Hence the theorem of MD tells us that in Formal Heat Engine Theory, the second law is violated.

There has been experimental evidence for this. Not that one has seen a RAW DATA violation of the second law, but that was due to the finite efficiency of the heat engines used (around 20-25% as of today between heat reservoirs of 400 and 300 K).

It has been experimentally established that 1000 Joules were extracted at 300 K, and about 180 Joules were restored to the 400 K reservoir.
Now, if we correct for the 20% efficiency of our heat engine, that means that with a perfect engine, we have 1000 Joules * 0.2 = 200 Joules (the rest was lost due to efficiency in the engine) extracted at 300 K, and 180 Joules that could ideally be restored to the 400 K reservoir.

dS1 = - 200 Joules/300K = -0.66 J/K
dS2 = + 180 J/ 400K = + 0.45 J/K

dS1 + dS2 = -0.66 J/K + 0.45 J/K = - 0.21 J/K

The experimental error is estimated at about 0.01 J/K using errors on the efficiency, the thermometers and calorimeters, so this means that a violation of the second law of thermodynamics with 20 sigma has been observed.

(end devil's advocate).

You see the irony here directly. This is how SED proponents read the claims of Bell inequality violation using fair-sampling corrections.
The error in the above reasoning is of course the hypothesis of ideal heat engines and the "correction" for the inefficiency of our experimental heat engine. SED proponents claim that ideal photodetectors with 100% efficiency fall into the same category.
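For anyone who wants to check the numbers, here is a minimal sketch in Python, using only the figures quoted in the devil's-advocate story above; it makes explicit that the negative entropy change appears only after the "ideal engine" correction is applied:

```python
# Reproduce the devil's-advocate bookkeeping above (illustrative only).
T1, T2 = 300.0, 400.0        # cold and hot reservoir temperatures (K)
Q_raw = 1000.0               # heat actually drawn from the 300 K reservoir (J)
Q_restored = 180.0           # heat actually delivered to the 400 K reservoir (J)
efficiency = 0.20            # measured efficiency of the real heat engine

# The "fair sampling"-style correction: keep only the fraction of heat an
# ideal engine would have needed, and write off the rest as "losses".
Q_corrected = Q_raw * efficiency          # 200 J "ideally" extracted at 300 K

dS1 = -Q_corrected / T1                   # about -0.67 J/K
dS2 = +Q_restored / T2                    # about +0.45 J/K
dS_total = dS1 + dS2                      # about -0.22 J/K: the apparent "violation"

print(f"dS1 = {dS1:+.3f} J/K, dS2 = {dS2:+.3f} J/K, total = {dS_total:+.3f} J/K")
# Dividing |dS_total| by the quoted 0.01 J/K error gives the roughly 20-sigma
# "violation" of the post, an artifact of correcting the raw heats at all.
```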

DrChinese said:
Given the nature of this forum, I think that is relevant.

Well, the "mainstream" clause is essentially to keep out crackpottery. As long as it is about peer-reviewed published stuff (and related, eventually non-peer reviewed stuff), and the idea is not to call 90% of all working scientists misguided idiots, it is "mainstream enough".

Don't get me wrong. I'm (of course) not an avid SED proponent! But SED has scored some intriguing successes, which have a published record. Even if SED is ultimately wrong, those successes remain, and SED is a good "reality check" for finding out whether certain quantum claims are really so quantum. I think that is scientific enough to sanction discussion about it here, if only to clear up some errors in reasoning.


DrChinese said:
2. Parametric Down Conversion produces entangled photon pairs. With the evidence, how could you not believe this... unless, of course, you deny the existence of entanglement a priori. The only problem is that clearly, you can measure the difference between groups of entangled photon pairs vs. pairs in which there is no entanglement. (And there are many different ways to entangle particles.) So the question is really: what do YOU call the photon pairs produced by PDC?

This is simply obtained by considering a non-linear dielectric in classical electromagnetism (and adding noise to every mode: that's the non-classical part of SED). Actually, the quantum description is derived from this (often used in practice) classical description of non-linear dielectrics: its classical series expansion in coupled modes gives you the photon couplings in the quantum version.

In a SED description, an "entangled pair" simply comes down to EM pulses with or without a phase/polarisation relation, superposed over noise.

"Detection of photons" in SED is a stochastic process as a function of the incident intensity, and the famous "subtraction" of Santos is the fact that the incident intensity of purely the noise modes is subtracted in a detector. Correlated pulses of light will hence give you correlations between detection events. The thing that generates a lot of funny ("quantum") effects in SED is that the noise modes are also present in the optical system (they are not independent after-the-fact noises at the detector).

As such, you can understand that, for SED people, a "100% efficient photodetector" is nonsense, because each pulse will only have a finite probability (after subtraction) of seeing a click. There's an upper limit to the probability of clicking upon a pulse. It corresponds to about the 87% needed in order to avoid Bell.

Again, you don't have to buy this. But it points out the difference in approach, and why the two camps call each other idiots. SED people find QO people "idiots" because they use "corrections for over-unity devices", and QO people find SED people "idiots" because they won't accept trivial experimental corrections.
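To make the efficiency threshold concrete, here is a hedged sketch in Python. It uses the Garg-Mermin-type bound S <= 4/eta - 2 on the CHSH value a local hidden-variable model can reach on the coincidence-post-selected sample at symmetric detection efficiency eta; that bound gives a critical efficiency of about 83% for a maximally entangled state, which is in the same ballpark as, but not identical to, the ~87% figure Santos quotes (his number comes from a different, SED-specific argument):

```python
import numpy as np

S_QM = 2 * np.sqrt(2)   # ideal quantum CHSH value for a maximally entangled pair

def lhv_coincidence_bound(eta):
    """Garg-Mermin-type bound on the CHSH value a local hidden-variable model
    can reach when only coincidences are kept and each detector fires with
    probability eta (assumes symmetric efficiency, no fair-sampling assumption)."""
    return 4.0 / eta - 2.0

# Critical efficiency: where the quantum value just exceeds the LHV bound.
eta_crit = 4.0 / (S_QM + 2.0)   # = 2/(1 + sqrt(2)), roughly 0.828

for eta in (0.60, 0.70, 0.828, 0.90, 1.00):
    bound = lhv_coincidence_bound(eta)
    verdict = "loophole-free violation possible" if S_QM > bound else "LHV can still mimic QM"
    print(f"eta = {eta:.3f}: LHV coincidence bound = {bound:.3f} -> {verdict}")
```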

That said, I wonder indeed how SED talks itself out of GHZ experiments...
Haven't seen that yet.
 
  • #65
DrChinese said:
Welcome to PhysicsForums, QuantunEnigma. In the hopes that I am not being baited (as RandallB points out above):

A Local Realistic theory is a theory composed of the following ideas:

a. Locality: often considered to be the same thing as Lorentz invariance, it is essentially the idea that effects do not propagate faster than c.

b. Reality: In the words of Einstein, who was the ultimate local realist: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

Bell discovered that QM leads to some theoretical predictions that are nonsensical (and violate b. above), such as negative probabilities. However, they are supported by experiment.

I hope this answers your question.

You were not baited, and a "mutual friend" liked your response also, he said. We had a discussion re naive realism and the problem that anyone would believe it. But your answer is not naive realism as we understand it, and so it is a good and helpful answer to build on.
 
  • #66
DrChinese said:
Bell discovered that QM leads to some theoretical predictions that are nonsensical (and violate b. above), such as negative probabilities. However, they are supported by experiment.

I hope this answers your question.

I think this cannot be correct. Please which experiments support negative probabilities?
 
  • #67
vanesch said:
Ugh, you can't say that either! The quantum formalism is a vastly successful formalism, if you read it in the following way: imagine a professor telling his students: "OK, today I'm going to introduce you to something called 'quantum mechanics'. First of all, it is - as is any new scientific theory - very strange; some say, incomprehensible; but I'll show you how you have to use it, how to make calculations, and I can tell you that people have done so and have always found agreement with all non-gravitational experiments (even the effect of a fixed gravitational potential can be taken into account, which is an exception to the cited limitation in some simple cases). This is the main reason why you should learn it. Don't ask me what it 'means'. Just learn how to do the calculations. That's the quantum formalism [...]"

I would have no objection to this. The problem is that Copenhagen quantum theory is not the same thing as the quantum formalism. If students were just told "there is this formalism that allows us to calculate the probabilities for various things, but we really don't have a theory yet, i.e., nobody knows what the heck is going on to give rise to these various outcomes" that would be fine, and then maybe some of the brighter students could work on trying to develop a theory. The problem is, it isn't presented this way (because the "founders" of Copenhagen didn't think of it this way, and Copenhagen has basically been accepted as orthodoxy). It's presented as: "we have the theory, it's all worked out, we have a complete description of what's going on physically to give rise to these measurement outcomes, and the theory is: you shouldn't talk about such things, or if you do you better limit your talk to the wave function only and not ever mention that there might be a more detailed level of description that actually makes sense of some things, oh and by the way even though the wave function alone provides a complete description of physical states you shouldn't think of the wave function as describing anything physically real [?!??], it's only about our knowledge, oh and also don't worry too much about the fact that the time evolution of the wave function is different depending on whether or not someone is looking -- sure, the wf provides a complete description of physical states, but when we use that second time evolution equation we'll just switch over to thinking of the wf as only representing our knowledge so as to avoid the implication that our mere act of looking changes the physical dynamics... of course, then again, it's pretty cool that the mere act of looking changes the physical dynamics, yeah, that totally sticks it to those jerk classical physicists who believed in an objective external reality that did its thing independent of human consciousness... etc"

This is a bad theory.


The problem is, not many people (especially students) are interested in "learning to do calculations of outcomes of experiments". People want philosophy, or better yet, they want revelation.

No, people want *physics*. At least, reasonable physicists do. Look at what your view implies: Ptolemaic and Copernican models of the solar system are really the same thing, and it's merely a "philosophy" question (or something that isn't scientific, can only be answered by "revelation") which one is "really true". Or: is matter made of atoms? On your view (apparently) there is no such question, at least not as a scientific question. Sure, maybe philosophers or religious zealots could ask such a question, but good scientists know that as long as you've got some magic equations to tell you what the temperature is (or whatever) that's as far as science can go. Well I say: one look at the history of science should demonstrate immediately and conclusively that this is not as far as science can go. And anybody who says that this no longer applies in the quantum realm is trapped in a circular argument: Copenhagen is true because Copenhagen is true.


They want to know what it actually means, and to what great secrets of nature they will be introduced. They want to know the truth, they want to know "what really happens", not simply some calculational rules.

Yup. I agree completely -- assuming "they" is transposed to refer to "good scientists" rather than "philosophical/religious nuts" or whatever you had in mind...


Well, it's a lesson in philosophy we receive from modern physics, that as of now (and probably for a long time to come), we won't know the "truth".

And therefore we never will and therefore we should stop thinking about it and trying to discover it? How, in retrospect, would we judge someone who said that about astronomy in 1400 or about the basic nature of matter in 1700?



Personally, I need a story too, and that's why I apply to quantum theory exactly the same kind of reasoning as to all others: take the elements of your formalism as "reality". If Maxwell has fields, take them as real. If Newton has forces, take them as real. If GR has a 4-dim manifold, take it as real. Well, if quantum theory has a unitary structure, take it as real. You then end up in MWI (that's why I consider myself, in as far as I'm thinking about a quantum toy world, an MWI-er), and many people don't like that because it looks so totally different from what we thought the world was like when we were kids. But it is no more or no less real than all that other stuff: it is real in the *toy world* that you mentally set up in order for you to picture the formalism.

This is all beside the point. Here's the real issue: is there, or is there not, a single real world "out there" independent of us? If there is, then one and only one of the various possible "toy worlds" (i.e., theories) will correspond to the real thing. That is the true theory. It is of course true that there can be underdetermination, i.e., different theories which make the same sets of empirical predictions in some (say, present) context of knowledge. That just means it isn't yet clear which theory is true. But you seem to want to leap from this to a conclusion like "we can therefore never know which theory is true, and therefore we should quit thinking about it, quit worrying about which one is true, perhaps quit thinking that there is a real world out there at all." Well I say that's just crazy! It would have been the end of science if such a view had been accepted in the past, and nothing has changed.

BTW, to distance myself from one of the strawmen you attack, this does not mean that we must dogmatically latch onto some one theory today in the absence of sufficient evidence distancing it from the alternatives and securing its relation to the facts. If the evidence doesn't yet prove one theory right as against its competitors, then it would be irrational to believe that any one theory is definitely right. But postponing judgment until the evidence is in (and maybe this'll take a million years, who knows) is not the same as giving up entirely on the concept of there being a truth of the matter. By the way, the fact that there can still be distinct, open, viable theories at some stage in history (e.g., today) does not mean that "anything goes" and we should accept something "unprofessionally vague and ambiguous" such as Copenhagen as also viable. Just because we don't yet know what's true, doesn't mean we can't identify crap when we see it.


SED has simply *another* toy world, and Bohmian mechanics yet another toy world. SED, because it has a totally different formalism (classical fields with noise terms) ; Bohm because it takes over the formalism of (unitary) quantum theory, and adds an extra formal element to it similar to the Newtonian formalism: particles and forces.

Sure, Bohm adds something to the "wave function only" description of orthodox QM. This is the basis for all of the bogus charges that it is unnecessarily cumbersome, that it should be dismissed by Occam's razor, that it is just OQM plus some arbitrary metaphysics, etc. But the real truth is that Bohm also *subtracts* a lot of junk that is present in OQM, namely the various measurement axioms. In Bohm's theory no such axioms are needed because the "toy universe" described by that theory makes no dynamical distinction between "measurement" and "non-measurement". There is just one kind of dynamics, and it applies all the time, whether a "measurement" is happening or not -- so all of the formal rules about measurement that are "axioms" from the POV of OQM, are theorems -- implications of the basic postulates of the theory -- in Bohmian Mechanics. This is really beside the current point, but it's worth noting since so many people fail to understand this.




So it is a bit strange that a Bohmian would find the quantum formalism a "bad theory", because he includes it.

It's not the mere formalism which is a "bad theory". Copenhagen (with all its extra-formal principles such as "completeness" and "there is nothing beyond the HUP", as dr chinese spouts endlessly, and also its interpretation of some of the formal rules, namely the measurement postulates) is the bad theory.


He only added an extra machinery for the simple sake of being able to construct a different toy world which is closer to his intuitive desires.

That is completely false and unfair. The main benefit of Bohm is simply that he provides a clear, consistent theory whose postulates are 100% absolutely clear. It is, as one commentator put it, a real "physicists' theory" as contrasted with Copenhagen and its vague muddled confusions about "completeness" and anti-realism and collapse and whatnot. That Bohm also provides a simple, intuitive physical picture of quantum processes is gravy.


However, SED is entirely different.

Suppose, just for the sake of argument (and I will eat my shoes if this turns out to be true), SED was someday proved to make all the same predictions as QED. Then would it, or wouldn't it, be "entirely different"? That is, would it then, like Bohm, be (according to you) just another philosophical/metaphysical/religious/bu**sh** story to append to what's really scientific (viz, the quantum formalism)? Or would it be a genuinely different theory? Or what? I say they're all "entirely different" theories. SED, orthodox QM or QED or whatever, Bohmian Mechanics, GRW, etc. are all completely different theories. At most one of them is true because they say wildly different things about the physical world. You seem to want to equate a theory with its empirical predictions, which (I think we will agree, probably) makes QED and SED distinct theories, but renders OQM and Bohm and GRW and ... all just so many different "bedtime stories" associated with the same one physical theory. Unfortunately, in addition to being based on a crazy anti-realist premise, this way of classifying things also renders Ptolemy and Copernicus "the same theory". So much for the Copernican revolution and everything it led to in astronomy and physics...




So there are two entirely different discussions here. One is pseudo-philosophical, and concerns personal preferences for toy worlds. It is only pseudo-philosophical, because the true philosophical attitude is to say that, unfortunately, we don't know what is true, and we're limited to guessing.

That's a false dichotomy. Sure, maybe we have to guess today. But the real issue is: should we, or shouldn't we, be trying to do things in science that will result in us *not* just having to guess *tomorrow*? That is, should we or shouldn't we take seriously the idea that one and only one of these theories is true, and get on with the task of trying to understand and generalize them all for the sake of eventually finding out which one really is true? That's what I took you to be saying before about SED, which is what I agreed with. To whatever extent SED is able to explain many or all of the various observations that are normally cited as proof of some quantum theory (and I don't know enough to be anything but mildly skeptical that "all" could really be the case), it means that belief in the quantum theory was premature and, in fact, scientifically, we can't be sure (today), i.e., "we're limited to guessing", which might be right -- which means that the proper scientific attitude to take is to keep working to understand how both theories work, how it is exactly that they manage to predict the same things even though they are so different, and hopefully use that knowledge to find some areas where they predict *different* things so that (in some "tomorrow") we can resolve the question empirically, scientifically. It seems you apply different standards when it comes to (say) OQM vs Bohm vs GRW, and I don't understand why. It's exactly the same issue.


This then leads to religious brotherhood attitudes where the Good (us) fights the Evil (them).

Oh please. So the idea that there is a real world out there and it is the task of science to figure out what it's like, leads somehow to religious jihads? Is that supposed to be an argument against scientific realism or something? I think you'll find if you look at history that a generally scientific attitude (based on realism) correlates rather negatively with the instigation of religious jihads.



In conclusion: I think it is wrong to say that quantum theory is a "bad theory". It works marvelously.

So did Ptolemaic astronomy. But anyway, you're just equivocating again here between "quantum theory" meaning merely the quantum formalism, and its meaning the actual *theory* (Copenhagen, or whatever) that is presented in texts and believed by most people. It's the latter that's bad, not the former. (But since you seem, for some strange philosophical reason, to reject any distinction between mere formalism and theory, maybe I'm wrong to say that this is a mere equivocation on your part -- there really is some kind of substantive philosophical disagreement between us here; it's not just a minor logical error on your part.)
 
  • #68
vanesch said:
I'll play the devil's advocate: [snip]



Vanesch, that is a brilliant example. :!)
 
  • #69
QuantunEnigma said:
As long as one person knows the secret web address then I have done an accidental good job.

The address had many pages long before I communicated here, but do not let facts curb your enthusiasms.

Was it Albert Einstein who said, "Rich thinkers always meet violent opposition from poor minds", please?
Gordon
(Or as you call yourself: MW, Mostly Wrong, QuantunEnigma, any more?)

Oh please, you think you’re a “Rich Thinker”?
In post #66 you ask DrC for a negative probability example when he has already given you just that in the link you quoted back to him!

On Aug 20 your site had only one working page, with dozens of links to THIS PAGE DOWN, BEING WORKED ON pages. That hardly counts as “many pages”.

On Aug 21 you put up about a dozen more pages, but any chance you might document your claim that “Bell Logic is false” in your explanations of your versions of W ‘locality’ and ‘factoring’ is still buried behind “This Page Down” links.
Then on the 22nd, rather than use your MW id, you created the extra QuantunEnigma id to lure DrC and others to your site.

That’s not rich thinking; that’s baiting, and it is dishonest.
You owe both DrC and Zz an apology.
 
  • #70
To Zz's point: Call it for what it is... SED is a speculative work-in-progress that has yet to yield a single useful discovery. In the meantime:

Experimental violation of a Bell's inequality with efficient detection

M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe & D. J. Wineland, Nature (2001)

Abstract:
Local realism is the idea that objects have definite properties whether or not they are measured, and that measurements of these properties are not affected by events taking place sufficiently far away. Einstein, Podolsky and Rosen used these reasonable assumptions to conclude that quantum mechanics is incomplete. Starting in 1965, Bell and others constructed mathematical inequalities whereby experimental tests could distinguish between quantum mechanics and local realistic theories. Many experiments have since been done that are consistent with quantum mechanics and inconsistent with local realism. But these conclusions remain the subject of considerable interest and debate, and experiments are still being refined to overcome 'loopholes' that might allow a local realistic interpretation. Here we have measured correlations in the classical properties of massive entangled particles (9Be+ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's 'signal' is 2.25 +/- 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called 'detection' loophole.

To Vanesch's analogy (with thermodynamics): We are being asked to accept that "noise" accounts for the violation of Bell inequalities. Yet regardless of detector efficiency, the results are the same! Aspect's early, inefficient tests yield almost precisely the same results as the later, more refined tests (as compared with the predictions of QM). Where is the movement towards the SED-predicted values you might expect as visibility increases?

And to anyone who is actually in SED's court: If Malus' Law does not hold, please answer the following question: What are the values for the coincidence rate at 0, 22.5 and 45 degrees (as compared with the cos^2 function from both classical optics and QM)? A specific value, so we have something to discuss... after all, it can't match QM without running afoul of Bell...
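For reference, a quick numerical check in Python of the standard QM/classical-optics cos^2 values at the quoted angles (nothing SED-specific is assumed here):

```python
import numpy as np

def cos2_coincidence(theta_deg):
    """Standard cos^2 coincidence rate (normalized) at relative analyzer angle theta."""
    return np.cos(np.radians(theta_deg)) ** 2

for angle in (0.0, 22.5, 45.0):
    print(f"relative angle {angle:5.1f} deg : cos^2 = {cos2_coincidence(angle):.4f}")

# Prints 1.0000, 0.8536 and 0.5000.  Any SED alternative has to state where,
# if anywhere, its coincidence rates depart from these numbers; if they never
# do, it runs into Bell exactly as the post argues.
```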
 
Last edited:

1. What is Bell's theorem?

Bell's theorem is a fundamental concept in quantum mechanics that states that certain predictions of quantum mechanics cannot be reproduced by any local hidden variable theory. It was proposed by physicist John Stewart Bell in 1964 and has been extensively tested and confirmed through experiments.

2. What is Harrison's (2006) inequality?

Harrison's inequality is a mathematical expression that was derived by mathematician Michael Harrison in 2006. It is used to prove Bell's theorem and is based on the concept of entanglement, which is a phenomenon where two or more particles become connected and behave as a single system even when separated by large distances.

3. How does Bell's theorem relate to quantum entanglement?

Bell's theorem is closely related to quantum entanglement, as it proves that entanglement cannot be explained by classical physics and requires a non-local interpretation of reality. This means that the properties of entangled particles are connected regardless of the distance between them, which goes against our everyday understanding of how objects behave.

4. What are the implications of Bell's theorem and Harrison's inequality?

The implications of Bell's theorem and Harrison's inequality are significant for our understanding of the universe and the nature of reality. They suggest that there are fundamental limitations to our ability to understand and predict the behavior of particles, and that our current understanding of physics may need to be revised.

5. How have Bell's theorem and Harrison's inequality been tested?

Bell's theorem and Harrison's inequality have been tested through various experiments, including the famous Bell test experiments. These experiments involve entangling particles and measuring their properties in different locations to see if they are connected, as predicted by quantum mechanics. The results of these experiments have consistently supported the predictions of Bell's theorem and Harrison's inequality.
