How can we test the holographic principle and nonlocality in quantum mechanics?

  • Thread starter: christian_dude_27
  • Tags: Holographic Universe
The discussion revolves around testing the holographic principle and nonlocality in quantum mechanics, sparked by interest in Michael Talbot's "The Holographic Universe." Participants emphasize the need for a solid understanding of quantum mechanics fundamentals before delving into complex theories like nonlocality and entanglement. They suggest starting with basic concepts and resources, cautioning against metaphysical interpretations that could distort scientific understanding. Evidence of nonlocality, such as entangled particles, is acknowledged, but interpretations vary, particularly regarding their compatibility with special relativity. Overall, a foundational grasp of quantum mechanics is deemed essential for meaningful exploration of these advanced topics.
  • #151
**Oh really? Can you give an example? Or do you just have in mind the super-determinism scenario, in which you reject the idea of a free choice of parameter settings? But again, that wouldn't be a local model which violates the Bell inequalities; it would be a reason why the Bell inequalities can never be experimentally tested. **

You basically don't know that as I argued in my previous post - so it seems you are just telling us what your pal Bell spits out (without any evidence).

** So I presume you have in mind exactly what you said: a "classical local model which violates the Bell inequalities blatantly". And since Bell's theorem proves this is impossible, I'd be very interested to see your alleged counterexample. **

The models I refer to do indeed require correlations beyond the lightcone, which might or might not be considered natural (the references are in the Morgan paper).

The validity of the no-correlation hypothesis has to be judged within the framework of a natural THEORY. As I seem to remember, another "natural" assumption, the fair sampling hypothesis (in the EPR photon experiments), has been proven wrong in the framework of stochastic electrodynamics (which is very natural if you take classical physics seriously). Philosophical prejudices in this matter are of no interest whatsoever.

Careful
 
  • #152
Careful said:
**Oh really? Can you give an example? Or do you just have in mind the super-determinism scenario, in which you reject the idea of a free choice of parameter settings? But again, that wouldn't be a local model which violates the Bell inequalities; it would be a reason why the Bell inequalities can never be experimentally tested. **

You basically don't know that as I argued in my previous post - so it seems you are just telling us what your pal Bell spits out (without any evidence).

** So I presume you have in mind exactly what you said: a "classical local model which violates the Bell inequalities blatantly". And since Bell's theorem proves this is impossible, I'd be very interested to see your alleged counterexample. **

The models I refer to do indeed require correlations beyond the lightcone, which might or might not be considered natural (the references are in the Morgan paper).

The validity of the fair sampling hypothesis has to be judged within the framework of a natural THEORY. As I seem to remember, it has been proven wrong in the framework of stochastic electrodynamics (which is very natural if you take classical physics seriously). Philosophical prejudices in this matter are of no interest whatsoever.

Careful


So, I take it that means you won't be providing an example of a local theory that makes predictions which violate Bell's inequalities? (as opposed to staking out territory in the experimental efficiency loophole)
 
  • #153
ttn said:
So, I take it that means you won't be providing an example of a local theory that makes predictions which violate Bell's inequalities? (as opposed to staking out territory in the experimental efficiency loophole)
I am not interested in playing stupid word games about terminology. I clearly indicated that a super-deterministic theory is Bell local in the sense that the outcome of an experiment is only influenced by the events in its past lightcone; however, it does not satisfy the no-conspiracy (or no-correlation) condition. This was admitted by your hero himself in chapter 12 of the book you like to cite so much. The examples are in the reference list of the paper, as I told you before. And as Vanesch points out, irreducibly stochastic models (satisfying a reasonable definition of locality) can reproduce the EPR correlations (without appealing to any loophole whatsoever) - Bell does exclude these on the grounds of his free-will criterion, but they do appear to satisfy another reasonable form of free will (see Price). For an introduction see http://arxiv.org/PS_cache/quant-ph/pdf/0202/0202064.pdf , a paper by Adrian Kent. So, as I said, Bell's theorem narrows down the local possibilities (local in a reasonable sense) more accurately, but excludes very little if we critically examine some of its assumptions - assuming we ignore the experimental loopholes so far.

Careful
 
  • #154
Careful said:
**
Suppose that particles are all composites, and that in interactions, they are neither created nor destroyed, except in pairs of particle / antiparticle (and then only in virtual form). Then one can rewrite the usual particle interactions in terms of particles in a Bohmian fashion. That is, there will be no particle creation / annihilation, so the whole thing will look again like QM, for which Bohmian mechanics has provided a remotely convincing interpretation. **

As far as I recall, the defined worldlines are not Lorentz invariant, i.e. they are frame dependent; it seems impossible to me to reconcile that with any notion of objective reality. It is also clear that interactions change the particle number; I guess you would then have to introduce a stochastic element in the dynamics, which is again dependent upon your choice of foliation, as well as a seemingly ad hoc (if limited) choice of *when* the particles of the incoming species disappear and the others appear. Or has some way been found recently to avoid these issues ?
Careful

The worldlines in Bohmian mechanics haven't been Lorentz invariant for, what, 50 years? If you have an objection to what the Nobel prize winning physicist thought about this, well, you can't argue with him because he is dead, but you can always read chapter 12 of "The Undivided Universe", which is devoted to this subject. It's not like Bohm and friends didn't notice it.

My point was not to force you away from Lorentz invariance (I think it's good that humans have religious beliefs), but instead to show that there is a way of stuffing QFT into a Bohmian form. Bohmian form does not include Lorentz invariance.

Yes, interactions change the particle number, but I'm not proposing a particle solution. What I'm proposing is a preon solution. To get Bohmian mechanics to fit into the QFT form, one must suppose that the elementary particles are not, well, elementary.

In a preon model with preons that are never created nor destroyed, there is no "when" for particle creation and destruction. The assumption is that the (preon) particles are immortal. What appear and disappear are their bound states.

You only need to add one thing to this to get something like the usual QFT, and that is the ability of particle / antiparticle pairs to appear. For this, use the Feynman interpretation that antiparticles are particles moving backwards in time.

And wasn't it Schwinger himself who invented "source theory" QED, a formalism which matches the usual QED but has no unconnected diagrams? That is, in his theory, there is no vacuum in that there are no diagrams except those that connect to external lines.

Let me put it into another way of looking at it. Suppose you were stuck back in the 19th century and you were convinced that atoms are never created nor destroyed. Then what do you say when someone comes up to you and points out that paper is consumed by fire? It's not the particles that are being created and destroyed, it is instead immutable preons switching their bound states.

Carl
 
  • #155
ttn said:
Can you propose some other good definition of local causality for stochastic theories? And don't tell me "signal/info locality" -- that's a different idea, right? Orthodox QM (treating the wf as a complete description of the ontology) and Bohmian Mechanics are both "signal/info local", yet clearly they are both nonlocal at a deeper level. They both involve FTL causation.

Orthodox QM is not "non-local at a deeper level" in the sense of proposing a *physical mechanism* that conveys a non-local causation, because orthodox QM is JUST AN ALGORITHM to calculate probabilities of outcomes of experiments. If you see Bohmian mechanics that way, the two are on the same level: they spit out probabilities, and one shouldn't look at their mathematical constructions as representing anything physical, because they don't. You could just as well look at the listings of a C program or anything.
This is one of the reasons why I don't like OQM, because I'd like to have a description of nature, but it is not supposed to be one. It's just a calculational scheme.

Now, from the moment that you start assigning ontology to the wavefunction, then yes, the projection postulate gives you a non-local operation. But if this is seen as "C-code that calculates probabilities" then it is hard to say what is "in its past light cone", no ?

No, this is sliding from talking about a theory's fundamental dynamical probabilities to talking about empirical frequencies or something. As long as you remember you're talking about some particular candidate theory, there *is* a way "of telling". This is just exactly what a theory tells us. It tells us what various happenings depend on. It's true that if you just see some event happen, there's no way a priori to know what caused it. But, in the context of a proposed theory, there is no such problem.

Nothing "causes" probabilities, right ? But I guess that you mean: does the formula for the probabilities in your theory depend only on input about events in the past light cone, or on others too ?
Well, I then tell you that ANY theory has its probabilities depend on things outside of the past lightcone of the event in question: namely, just afterwards.
Probabilities are sensitive to what's in the FUTURE light cone, because they then mostly flip from a real value to 0 or 1 (because in the future, we KNOW what happened).

So if the "algorithm for the probability of the event at A" is given as input only what is in A's past lightcone, it might crank out 0.5.
If we also give it what happens at B (outside of A's past light cone), it might become 0.75.
And if we add to it the result of the measurement at event A' in A's future, then it will become, say, 1.
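A minimal Python sketch of this toy "algorithm" (the joint distribution over the two outcomes is invented purely for illustration - the point is only how the cranked-out number changes with the amount of input):

```python
# Toy illustration: the "probability of the event at A" produced by an
# algorithm depends on how much input we feed it. The joint distribution
# over the outcomes (0 or 1) at the spacelike-separated events A and B
# below is made up purely for illustration.
joint = {
    (1, 1): 0.375, (1, 0): 0.125,
    (0, 1): 0.125, (0, 0): 0.375,
}

def p_A(a):
    """Input restricted to A's past lightcone: just the marginal at A."""
    return sum(p for (x, _), p in joint.items() if x == a)

def p_A_given_B(a, b):
    """Also feed in the spacelike-separated outcome b at B."""
    p_b = sum(p for (_, y), p in joint.items() if y == b)
    return joint[(a, b)] / p_b

def p_A_given_result(a, result):
    """Feed in A's own recorded result (an event A' in A's future)."""
    return 1.0 if a == result else 0.0

print(p_A(1))                  # 0.5
print(p_A_given_B(1, 1))       # 0.75
print(p_A_given_result(1, 1))  # 1.0
```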

So a stochastic theory's predictions, or empirically established relative frequencies, are EQUIVALENT. They are just tables of probabilities, generated in the first case by an algorithm, and observed in the second case, by treating data.
A theory tells us what caused it (even if the explanation is merely stochastic) by telling us what the event (or its probability) *depends on* -- and then it makes sense to ask (still in the context of that theory) whether that dependence is or isn't local.

Well, then the probability of every event depends strongly on its future, because that makes the probabilities flip to 0 or 1.
In fact, from this viewpoint, stochastic theories even become deterministic: if you have the result, then you can "predict" the earlier probabilities with certainty to be 0 or 1.

This is why I am insisting that probabilities are not physical quantities as such, because they CHANGE as a function of what we know.

No, he defined it the way he defined it: stuff outside the past light cone shouldn't affect the probabilities the theory assigns to events. It is also, incidentally, true that for any stochastic theory you can find an underlying deterministic theory. But that really has nothing to do with locality or Bell's definition thereof.

It has much to do with it: Bell's idea is about "causal influence", which means that we are at least proposing a description of the underlying reality of nature in which such a concept could play a role.
But a stochastic theory doesn't. It's a computer program that cranks out probabilities, and is NOT a description of any reality, UNLESS it is a deterministic theory in which the probabilities arise from ignorance of things like initial states (as in statistical mechanics, or in Bohmian mechanics, for instance).

What do you mean it's sufficient? Who says? So Bohmian Mechanics is then consistent with relativity? Why in the world, then, would YOU believe in MWI rather than Bohm?!?

The *probabilistic predictions* of Bohmian mechanics, seen as a black box that cranks out probabilities (and not as some kind of ontological description of nature) are compatible with relativity, in the same way as the probabilistic predictions of OQM are (and in the latter case, it is often said that this is nothing else but a black box that calculates probabilities).

As they are equivalent algorithms, there is of course no reason to "believe" one over the other, as they crank out the same numbers (maybe not in the same computing time).
However, Bohmian mechanics doesn't present itself as a black box cranking out probabilities, right ? It has the pretension to be an ontological description of nature. Well, THEN one has to open the box and look whether all the formulations are local - whether the internal machinery is local. And it isn't. It cannot be written in a Lorentz-invariant way, for instance.

The same happens of course if we would take OQM to be an ontological description of nature, and if we would take the wavefunction as an element of reality. Then we could also not formulate it in a Lorentz invariant way.

But if we both see them just as a machine out of which comes predictions of probabilities of observation, then both are on the same level (and actually totally equivalent ; it is then just a matter of which one is easier in its manipulation to make your choice).

For "boxes that crank out probabilities" which do not have the pretension of giving us any ontological description of nature, we can take "signal locality" as a criterion, or "Bell locality" as a criterion.
They tell us different things.

Bell locality tells us whether we will, or not, be able to find a local, deterministic theory that can explain the predictions, based upon ignorance of initial state ; such a deterministic theory can then eventually serve as an ontological description - which our probability-spitting box doesn't have.

Signal locality tells us whether or not we will be able to phone our grandma to tell her not to marry granddad, if the Lorentz transformations are correct (mind you, I didn't say: if SR holds :-).
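For concreteness, here is the standard textbook arithmetic that the Bell-locality criterion is about: with the singlet-state correlation E(a,b) = -cos(a-b), the CHSH combination exceeds the bound of 2 that any Bell-local theory must obey. A quick Python check:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for the spin singlet,
    # measured along directions at angles a and b
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828 > 2: no Bell-local theory reproduces this
```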

Quantum theory so regarded isn't a theory.

That's why I don't like it :smile:
I WOULD like to have a theory that pretends to describe "nature out there", but OQM is not supposed to be one; it is just a calculational trick which helps us estimate the outcomes of experiments (their probabilities) when we give it the preparation.

That's also why all the beable stuff doesn't really apply to OQM: it doesn't have the pretension to describe anything physical. It just relates "in" states with "out" states.

Huh? Info/Signal locality is just a constraint on the predictions of the theory (it has nothing to do with the underlying guts/mechanics of the theory). What's the problem applying it to deterministic theories? Those too make predictions, yes?

What I meant was that there are no different options for locality for a deterministic theory. A deterministic theory is local or not, depending on whether the DETERMINED outcome at a point depends, or not, on things outside of the lightcone of that event. Given that that outcome is a clear physical thing (as contrasted to the *probability* of the outcome), there's no discussion about what it might mean, for a deterministic theory, to be local. Locality was originally a concept that was only clear for deterministic theories.
A deterministic local theory is both Bell local and signal local: you cannot have a deterministic theory which is NOT Bell local but which IS signal local.

You're equivocating between two very different things. Stochastic doesn't mean "has no ontology". If you don't think a stochastic theory can have an ontology (fields or whatever) what the heck is OQM?

Eh, I do think that. OQM is not an ontological description of nature, but just an algorithm. That's one of the reasons why I don't like it.

Whether the laws are deterministic or not, is a very different question from whether or not there's a "reality out there." If you really don't make this distinction, it explains why you've been so resistant to understanding Bell Locality correctly. Because even *talking* about Local Causality (which Bell Locality tries to make mathematically precise) obviously presupposes that there's a "reality out there" -- but then you think this already means we presuppose determinism and disallow stochasticity. No wonder you're confused...

Indeed :-p

Mind you, having only a stochastic theory (an "algorithm") doesn't mean that we deny the *existence* of an ontological reality, but only that the algorithm doesn't describe it.

For instance, think of the following situation: there's a 4-dim spacetime manifold, in which an entire list of events is fixed. They have no real relationship amongst themselves, "things just happen". This could be an ontological picture of a "totally arbitrary" universe.

And now, it might be that there are certain relationships in that 'bag of events' which are such that certain ratios of events are respected. Why ? It just is so. If we capture the calculational rules that do so, then that's a stochastic theory: some algorithm that works more or less when doing statistics over essentially totally arbitrary sets of events.
This has no descriptive power of course; it is just an observation that certain statistics are respected. That's how I see irreducibly statistical theories (such as OQM).

But it's not at all a funny thing *about his definition*. It's just a general point that you can never really have good reason to believe in irreducible stochasticness -- you can *always* get rid of this in favor of determinism by adding variables. And if you restrict your attention to locally causal theories, this general point remains true (of course). But you seem to think this is some kind of skeleton in the closet of Bell's definition. I just don't follow that at all.

Well, from the "random bag of events" story, you can see that there is a whole leap between capturing regularities in the distribution of arbitrary events (= a stochastic theory) and completing it with extra variables to turn it into a deterministic ontological description of nature. The statistical rules are just calculational algorithms, while the latter is supposed to describe "what goes on" (while in fact nothing goes on, and arbitrary events just seem to be distributed in ways which obey certain rules when counting, without any "cause" to it).

Please. Obviously, if you switch the definition of 'local' between the first and second half of a sentence, you can say all kinds of apparently-interesting (but actually false) things.

No, because "locality" for a deterministic theory (pretending at an ontological description) is entirely clear. For a stochastic theory, it depends on how one looks at it.

You're still missing the point that Bell Locality requires a complete state specification (lambda). So if you take seriously the idea that the quantum formalism is just a mere algorithm which doesn't make any claims about what does or doesn't exist, then it IS NOT BELL LOCAL. You can't even ask if it's bell local. It's not yet a *theory* in the sense required to apply Bell's criterion.

You've got it.
That phrase makes no sense. It isn't predictions that are or aren't Bell Local, it's theories. What you can say (and what you probably meant) is that, if the predictions violate the inequalities, then you know that there is no Bell Local theory which can make those predictions.

Yes. A shortcut.

In other words, you *always* assume that probabilities are not fundamental. In other words, you refuse a priori to consider the possibility of a genuinely stochastic theory.

Indeed. I can accept a stochastic theory as "capturing certain regularities of a totally arbitrary distribution of events - things happen" but not as any ontological description of nature.
And as such, there can be other notions of "locality" that apply to *algorithms* and not to *ontological descriptions*.

Someone who doesn't know about "Patrick's Theorem" (which I think was actually proved by Arthur Fine in '82, though it's really a pretty obvious point so I'm sure people knew it before then) might think, based on your way of phrasing this stuff, that we are left with a choice about whether to reject locality or determinism in the face of the Bell-inequality-violating data. It's the same as the confusion that is caused by this stupid recent terminology "local realism." What the hell is "realism"? Somebody tell me please what "realism" is assumed by Bell in deriving the inequality.

Maybe "realism" is the question whether the theory describes an ontology, or is just an algorithm ?
Bell assumed "beables" - things that correspond to reality - in a theory.
I don't know what Bell would say about a computer program that spits out probabilities as a function of what one gives it as input (data about the past light cone, about things happening at spacelike intervals, or data about the future of said event, where the result is hence known).

There isn't any -- at least, not any that can be remotely reasonably denied. Yet still the language caught on, and so now everybody thinks we *either* get to reject locality (which everybody says is crazy, because that means rejecting relativity) *or* reject "realism" (which therefore everybody is in favor of even though none of them know what the hell they mean by it!).

I agree with you: I want to keep both ! But "realism" (a potential description of an ontology) was already out of the window with OQM. Only a pattern in observed ratios of observations was to be the object of OQM, with an interdiction against thinking about an underlying ontological picture. Personally, I don't like that idea at all. And in fact, I think most people who pay lip service to it don't really, and assign some form of ontology to the elements they manipulate. But the "official Bohr doctrine" says that there's no such thing as an "underlying ontology".

But let me repeat a crucial question here. If the lesson from all of this is that Bell Locality is *too strong*, and that *really* all relativity requires is *signal locality* then WHAT OBJECTION COULD YOU POSSIBLY HAVE AGAINST BOHMIAN MECHANICS? This position renders Bohmian Mechanics "local" -- as local as it needs to be to be consistent with relativity. And then why, please tell me, would any remotely sane person not opt for Bohm over OQM, MWI, and all other options? Leaving aside the issue of locality, Bohm is *clearly* the most reasonable option.

As I said, I think that Bell locality is the correct requirement for an ONTOLOGICAL description of nature (which, in my opinion, is also deterministic). However, signal locality is good enough for a probability algorithm if we abandon the idea of giving an ontological description of nature (and reduce to "things happen" in the big bag of events out there), and limit ourselves to observing certain regularities in the distribution of these events, which can be calculated through certain algorithmic specifications.
If we only require that these calculational rules remain invariant under a change of observer, then signal locality is ok. If Bohmian mechanics is seen this way (as an algorithm to spew out probabilities), it is fine, as a signal-local procedure for calculating probabilities. The "particles and trajectories and non-local forces" are then not "beables" but just variables in the computer program that help you calculate probabilities.
The wavefunction and the projection play the same role in OQM (and have never had any other pretension in OQM).
But then there's no real distinction between Bohm and OQM: they are both black boxes that spew out probabilities. One is not more or less reasonable than the other, because they come to the same numerical results, and both don't represent anything.

However, as a description of nature, where Bell locality is required (and, in my opinion, determinism too) - something OQM doesn't pretend to do, Bohm fails (and any theory that is equivalent to OQM for that matter).

So there IS no local ontological description of nature that can reproduce the OQM predictions. That's the "realism" part I suppose.

So there is just this "bag of events" and a few rules of the statistics about them, without there being an ontological description (apart from a long list of events)... unless we take it that all this is an illusion, to think that events are uniquely happening, and that all randomness is in our perception, and not in nature itself. That's MWI.
 
  • #156
**The worldlines in Bohmian mechanics haven't been Lorentz invariant since what, 50 years. If you have an objection to what the Nobel prize winning physicist thinks about this, well, you can't argue with him because he is dead, but you can always read chapter 12 of "The Undivided Universe" which is devoted to this subject. It's not like Bohm and friends didn't notice it. **

:rolleyes: It is not because I know that they noticed it that this issue disappears into thin air ! Of course you can say that you do not need Lorentz invariance at this level of reality (since the lack of it does not lead to a falsifiable prediction), but what then is the point of BM, given that it does not solve the measurement problem either and complicates things ? So, given that reality in BM is terribly non-local and frame dependent, and that BM does not solve the measurement problem, why should we appreciate it ?

**
My point was not to force you away from Lorentz invariance (I think it's good that humans have religious beliefs), but instead to show that there is a way of stuffing QFT into a Bohmian form. Bohmian form does not include Lorentz invariance. **

It is good to have religious beliefs as long as you do not try to support them by abusing theorems over and over again, ad nauseam.

**
Yes, interactions change the particle number, but I'm not proposing a particle solution. What I'm proposing is a preon solution. To get Bohmian mechanics to fit into the QFT form, one must suppose that the elementary particles are not, well, elementary. **

Right, and here we agree very well, but you need to go a step higher - a gear up - you have to try to explain why the wavefunction, which appears at the same time as a statistical tool AND a physical guidance mechanism in BM, can undergo a terribly non-local collapse and/or spontaneously create a new wave (depending on your interpretation). BM is incapable of doing so, and a solution for that problem requires much more...
 
  • #157
Careful said:
Right, and here we agree very well, but you need to go a step higher - a gear up - you have to try to explain why the wavefunction, which appears at the same time as a statistical tool AND a physical guidance mechanism in BM, can undergo a terribly non-local collapse. BM is incapable of doing so, and a solution for that problem requires much more...

You know, the wavefunction does NOT collapse in BM. That's why I can make Bohmians nervous by saying that it has some MWI flavor to it :biggrin:

Let's give it a go and have them bite :-p

You can, in BM, continue to work with the non-collapsed wave function (as well as you can, FAPP, collapse it, because the part that is not relevant will not change any trajectory anymore in any significant way).

The thing that's much more fuzzy IMO in Bohmian mechanics is the claim that "perception" is due only to the particle positions and not to the wavefunction, even though the wavefunction (with all its ghost solutions - just as in MWI) is STILL part of the ontology of BM; and that at the same time one must deny perfect perception of the particle positions (in order to be able to satisfy the initial probability condition) and have them agree with the Born rule. I gave it probably less thought than I should, but "something feels fishy" there. If I, as an observer, know *perfectly well* my particle positions, then BM, if I'm not mistaken, will not give me the same predictions as QM, because initially I need some "fuzziness", which will be conserved under Bohmian dynamics; fuzziness which needs to correspond to the wavefunction's norm.
But then, this means that my perception of reality is also conditioned in part by this wavefunction (which contains other "ghost" terms).
And I'm really not very far from "MWI with a non-local token".
 
  • #158
Hi Patrick,

I guess we are now at the point where the very complicated issue of free will determines to what extent our laws of nature are wrong/fine: free will, the arrow of time, (relativistic) causality - all things which are inextricably connected to each other... All the choices we know of are so polarized up till now that we probably have not understood anything of it yet. The Bell inequalities deeply rely upon *one* polarization of these issues, which explains (a) why the continued lack of a loophole-free experimental violation still generates so much heat (here people can have the same notion of free will, but deny that nature is non-local), and (b) why all this discussion really belongs in the philosophy forum, since it seems unlikely to me that the issue will be settled one day (there are so many reasonable alternatives possible). ttn tells us we cannot even define reality, while his entire reasoning is based upon the *reality* of Bell's notion of free will (and his presumption of a conclusive Bell test in the future) !

Cheers,

Careful
 
  • #159
**You know, the wavefunction does NOT collapse in BM. That's why I can make Bohmians nervous by saying that it has some MWI flavor to it :biggrin: **

:smile: :smile: No Patrick, that depends on how you interpret BM. As a Bohmian you have to say there is a collapse of the wavefunction, OR you are indeed slightly *changing* QM (but you can equally do that by slightly modifying Copenhagen) - but here too, some "consciousness" is needed (the act of the observer which suddenly creates a new (non-local ?) wave, which has no effect in the PAST lightcone). The latter option, which is what you write below, is worth considering (and obviously I considered this before), albeit I presume it could lead to a falsifiable prediction if one is clever enough. Let's not play these silly games ...


**
You can, in BM, continue to work with the non-collapsed wave function (as well as you can, FAPP, collapse it, because the part that is not relevant will not change any trajectory anymore in any significant way). **

What you write further on is indeed more or less what I was appealing to by mentioning the double role of the wave function. Basically, the use of the wave function in BM does not make sense to me vis-a-vis a one-particle scenario.

Careful
 
  • #160
Careful said:
ttn tells us we cannot even define reality, while his entire reasoning is based upon the *reality* of Bell's notion of free will (and his presumptions of a conclusive Bell test in the future) !

The "free will" issue is not so clear, I'd say. For instance, I don't think that there is free will, but this is just an illusion of our passive perception (just as well in an MWI view as in a classical, deterministic view).

The point of Bell is not entirely depending on a notion of free will, but on the notion that a totally distinct process (even with the same past lightcone) will be statistically independent. A rat pushing one of 3 buttons, or an analog amplifier of resistor noise sampling a random value or whatever, although of course entirely determined by the same past light cone, should not, to all reasonable expectation, be FULLY statistically correlated with a light pulse traveling in an optical fibre.

Although in principle you are right that there MIGHT be such a strange correlation (because both are functions of the SAME initial conditions), it would invalidate about all experimental knowledge we have. I was pretty serious about the astrology example: if things are intrinsically correlated in such a way that we should expect *perfect* correlations between about any process that can select one of 3 possibilities (as I said, a rat pushing a button, randomly sampled resistor noise, a human pushing a button, the mechanics of rolling dice...) whenever they are used in an EPR experiment, then you can just as well state that ALL we've ever seen is just the appearance of one big coincidental correlation. There's no hope of ever deducing any law of nature in that case, or any causal relationship.

So, when we write P(A|a,T) instead of P(A|T), which should be the correct formulation, with a(T) and b(T) the two "decisions" taken to select one of the angles, it is implicitly assumed that the functions that determine a(T) and b(T) pick out OTHER parts of T than the parts relevant for the light pulses passing the polarizers. In the latter case, T has more to do with what happened in the light source; in the former, with whatever was at the origin of the button-pushing process. It is indeed a hypothesis, but a very reasonable one, that these correspond to distinct parts, in that what is emitted by the light source is not entirely influenced by what happens in the brain of the rat, or vice versa.
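That factorization can be made concrete in a toy Monte Carlo (my own hypothetical sketch, not a model anyone here has proposed): a deterministic local model in which the hidden variable `lam` is drawn independently of the settings respects the CHSH bound |S| <= 2, while the quantum prediction reaches 2*sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(0)

def chsh_lhv(n=200_000):
    """CHSH value for a toy deterministic local model. The hidden
    variable lam is sampled with no reference to the settings (the
    statistical-independence assumption discussed above), and each
    outcome depends only on the local setting and lam."""
    lam = rng.uniform(0.0, 2 * np.pi, n)       # shared past-light-cone variable
    A = lambda a: np.sign(np.cos(lam - a))     # outcome at station 1: A(a, lam)
    B = lambda b: np.sign(np.cos(lam - b))     # outcome at station 2: B(b, lam)
    E = lambda a, b: np.mean(A(a) * B(b))
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

def chsh_qm():
    """Quantum prediction E(a,b) = cos 2(a-b) for polarization-entangled
    photons, at the standard optimal angle choices."""
    E = lambda a, b: np.cos(2 * (a - b))
    a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(chsh_lhv())   # stays at or below 2 (up to sampling noise)
print(chsh_qm())    # 2*sqrt(2), violating the inequality
```

The independence assumption enters on the single line where `lam` is sampled without reference to a or b; a super-deterministic model would be one in which that distribution is allowed to depend on the settings, and then the bound no longer follows.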

You might do on the Planck scale what you want, if this is the conclusion, then we'd better concentrate on table turning, astrology and laying cards, because at least, these people doing that UNDERSTOOD that everything is correlated in very strange ways, and that scientists were totally deluded thinking that they could chunk up physical processes by testing them in the lab using statistical analysis.

As I said, if these correlations are to be taken seriously, an existing FTL telephone is even "local", because there's just this funny coincidence that the FTL phone talking to me tells me the exact same words as you speaking to it on Titan. There's no causal relationship, you were just supposed to say exactly the same thing as the FTL phone was telling me.
 
Last edited:
  • #161
**The "free will" issue is not so clear, I'd say. For instance, I don't think that there is free will, but this is just an illusion of our passive perception (just as well in an MWI view as in a classical, deterministic view).

The point of Bell is not entirely depending on a notion of free will, but on the notion that a totally distinct process (even with the same past lightcone) will be statistically independent. **

Right, so some call that assumption free will, others call it the independent apparatus assumption and link it to local rotational invariance. Anyway, it seems unlikely as the outcome of a deterministic theory, and to say more about it, further study is certainly required.

**A rat pushing one of 3 buttons, or an analog amplifier of resistor noise sampling a random value or whatever, although of course entirely determined by the same past light cone, should not, to all reasonable expectation, be FULLY statistically correlated with a light pulse traveling in an optical fibre. **

Well sure not, but the QM correlations are not that much higher either... what is your point precisely ?

**
Although in principle you are right that there MIGHT be such a strange correlation (because both are functions of the SAME initial conditions), it would invalidate about all experimental knowledge we have. **

But we know such correlations DO exist in irreducible statistical models (for example thermal baths). On the other hand, it is clear that the description of QM (as it stands now) ignores certain interactions which are certainly present in any realistic setup. Moreover, such a problem - as you indicate - becomes even WORSE when we give a full quantum mechanical description of the Planck scale degrees of freedom; the QUANTUM statistics of such a scenario, which should provide our large scale QM theory of particles, seems even more conspiratorial and ILL defined (as it stands now) than the CLASSICAL proposal. Basically, this scaling problem is inherent to quantum gravity, and it appears to me that the deterministic scenario is much less exotic/conspiratorial and certainly easier (although already fairly impossible) to study. Try to see things from that point of view... quantum gravity would make our world even crazier; the deterministic ansatz is actually CONSERVATIVE in the sense that God is assumed to play with gears and slippers on the Planck scale.

I guess this addresses your later issues... This is what I am trying to tell you all the time: IF we find a natural, deterministic PLANCK scale model which reproduces QM on large scales naturally, THEN this is the supreme understanding of nature we can achieve (otherwise we are left with an even greater problem of why our world comes out of so many crazy possibilities).

You must realize that when a genius like Gerard goes for super-determinism, it is because the alternative to an understanding of nature is even much crazier if we leave QM untouched. Do you see now how poor our understanding of nature is if we take our theories to their consequences ?

Cheers,

Careful
 
Last edited:
  • #162
Careful said:
:rolleyes: It is not because I know that they noticed it, that this issue disappears into thin air ! Of course you can say that you do not need Lorentz invariance at this level of reality (since the lack of it does not lead to a falsifiable prediction), but what is the point then of BM, given that it does not solve the measurement problem either and complicates things. So, given that reality in BM is terribly non-local, frame dependent, and does not solve the measurement problem, why should we appreciate it ?

Where did you get the (wrong) idea that BM *doesn't* solve the measurement problem?
 
  • #163
Vanesch, I don't have time to get into all this at such length, so just a few very quick notes.

* A black box algorithm that makes predictions but doesn't make any ontological commitments, isn't a theory. That doesn't mean it's a bad thing to have such algorithms. They're just not theories, that's all, because they're not trying to *explain* -- merely to describe.

* You raise this canard about all probabilities depending on stuff outside the past light cone, namely, stuff in the *future* (which you say renders all the probabilities 0 or 1). But this is based on your AGAIN having switched from the fundamental dynamical probabilities of a theory, to something epistemic. The latter is simply not what the Bell Locality condition is *about*, so all of your comments on this are pointless.

* You're still confused about determinism and ontology. Those aren't the same issue, and I find it very revealing that it is *you* who refuses, on principle a priori, to consider the possibility of a stochastic theory -- even when what got this spat started is your claim that *Bell* arbitrarily assumed determinism. Kettle? You're black.


*
A deterministic local theory is both Bell local and Signal local: you cannot have a deterministic theory which is NOT Bell local, but which is signal local.

Bohmian Mechanics is not Bell local, but is signal local.
 
  • #164
vanesch said:
Let's give it a go and have them bite :-p

Chomp, chomp!



The thing that's much more fuzzy IMO in Bohmian mechanics, is the statement that "perception" is only due to the particle positions and not to the wavefunction, although the wavefunction (with all its ghost solutions - just as in MWI) is STILL part of the ontology of BM,

What's the problem? All along we thought we were perceiving matter made of particles. BM just keeps that (and adds a spooky mysterious invisible thing which is orchestrating the movements of the particles).


and at the same time deny the perception of the perfect particle positions (in order to be able to satisfy the initial probability condition), and have them agree with the Born rule.

Huh? It's a theorem in BM that measurements (which remember in BM are made using devices that are made of particles obeying BM!) cannot give us more knowledge of the particle positions than is implied by applying the Born rule to their effective wave functions. There's no *extra* assumption about a "fuzziness" in perception that maintains Born.
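The engine behind that theorem is equivariance: an ensemble that is |psi|^2-distributed at one time stays |psi|^2-distributed under the Bohmian dynamics. For a free 1D Gaussian packet this is easy to check numerically, since the trajectories are known to scale with the packet width (my own toy sketch in units hbar = m = 1; it illustrates equivariance, not the full absolute-uncertainty theorem):

```python
import numpy as np

# Free Gaussian packet, units hbar = m = 1: |psi(x,t)|^2 is a Gaussian whose
# width grows as sigma(t) = sigma0 * sqrt(1 + (t/(2*sigma0**2))**2), and the
# Bohmian trajectories for this packet scale along with it:
#     x(t) = x(0) * sigma(t)/sigma0.

sigma0, t = 1.0, 3.0
rng = np.random.default_rng(1)

x0 = rng.normal(0.0, sigma0, 500_000)          # ensemble drawn from |psi(x,0)|^2
scale = np.sqrt(1 + (t / (2 * sigma0**2))**2)
xt = x0 * scale                                # move each particle along its trajectory

# The transported ensemble is again |psi(x,t)|^2-distributed: its spread
# matches sigma(t), not the initial sigma0.
print(np.std(xt))
```

So no extra "fuzziness of perception" is postulated: once the ensemble starts in quantum equilibrium, the dynamics itself keeps the Born distribution.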


But then, this means that my perception of reality is also conditioned in part by this wavefunction (which contains other "ghost" terms).
And I'm really not very far from "MWI with a non-local token".

As you said, you just haven't understood BM well enough on this point. See:

http://www.arxiv.org/abs/quant-ph/0308039
 
  • #165
ttn said:
* A black box algorithm that makes predictions but doesn't make any ontological commitments, isn't a theory. That doesn't mean it's a bad thing to have such algorithms. They're just not theories, that's all, because they're not trying to *explain* -- merely to describe.

But that is ALL that OQM claims to do. OQM says: "there are just algorithms, and that's all you can have".
From that PoV, signal locality is good enough, no ?


* You're still confused about determinism and ontology. Those aren't the same issue, and I find it very revealing that it is *you* who refuses, on principle a priori, to consider the possibility of a stochastic theory -- even when what got this spat started is your claim that *Bell* arbitrarily assumed determinism. Kettle? You're black.

I don't think I'm confusing the issues. I think I make the distinction between both, but since I consider things like "dynamical probabilities" bull****, and since, to me, probabilities can ONLY be "ignorance based", I claim that fundamentally stochastic theories are just algorithms. When you turn them into deterministic theories with random variables, then that's different, because now the "random variables" can be assumed to have a physical existence and value, and they are only random because of our ignorance about them.
So a theory that contains random variables, to which probabilities are assigned but which can also be granted some element of physical existence, is, in my vocabulary, still a deterministic theory, because we can consider that these random variables DO have specific values, and we are simply ignorant of them, which gives them their random character.

But a scheme that ends by spitting out a series of probabilities, and calls that "dynamical" is nothing else but an algorithm, and cannot contain a description of a *mechanism*.

So yes, I claim that the only possible ontological descriptions are deterministic (eventually containing random variables) in their approach.
I would like to see such a deterministic description of nature, but maybe it doesn't exist, in which case we have to limit ourselves to *non-descriptive* algorithms.

In the latter case, there's no issue in requiring Bell locality: signal locality will do (as we're not looking for a description of any ontology, but just for an algorithm that will allow us to calculate probabilities without any pretension of ever describing nature on an ontological level).

Bohmian Mechanics is not Bell local, but is signal local.

Exactly, so as a non-descriptive algorithm that spews out probabilities, it is just fine, as is OQM (which never had any other pretension).
But then, as I said, there's no point in claiming that the particles and forces appearing in Bohmian mechanics have any physical meaning, any more than the wave function in OQM has.
Given that it spits out the same set of numbers as OQM, they are in fact two equivalent algorithms and there's not much point in arguing over it.

But I understand that Bohmians want to confer a kind of ontological status to their theory. In that case, of course, things change, because then we should check whether its *internal mechanism* is local. Given that it is a deterministic mechanism, its locality is equivalent to Bell locality of the predicted probabilities, and then it fails.

So, true, Bohmian mechanics is no worse than OQM (on the contrary): both are acceptable (signal-local) algorithms to calculate probabilities.

But OQM doesn't go any further (doesn't propose any ontology). So that's where OQM stops.
Bohmians want to give their theory ontological status, and then we open the box, and see that the machinery inside is non-local. So this part of the story doesn't fit.
 
  • #166
ttn said:
Huh? It's a theorem in BM that measurements (which remember in BM are made using devices that are made of particles obeying BM!) cannot give us more knowledge of the particle positions than is implied by applying the Born rule to their effective wave functions.

Do you mean that, if I know the exact positions and motions of the particles in an "observer", I cannot extract more information than allowed by the uncertainty relations ?

In other words, suppose that particles {q1,q2,...q20} are "the observer" and particles {q21, ... q30} are "the system". Does it mean that if I know EXACTLY the positions of {q1...q20} over time, that I cannot know more than what's allowed by the uncertainty relations about q21...q30 ?

"Being the observer" means, to me, "knowing exactly one's state", so there's no probability distribution to be assigned to {q1...q20} here, because it is the observer, which "knows" its state perfectly well, its "knowledge" BEING the state.

I thought that this only came about if we also allowed for an initial uncertainty on {q1...q20}...
 
  • #167
vanesch said:
"Being the observer" means, to me, "knowing exactly one's state", so there's no probability distribution to be assigned to {q1...q20} here, because it is the observer, which "knows" its state perfectly well, its "knowledge" BEING the state.

I thought that this only came about if we also allowed for an initial uncertainty on {q1...q20}...


Ah, I see that the paper you quoted answers exactly that:
The possession by experimenters of such information must thus be reflected in correlations between the system properties to which this information refers and the features of the environment which express or represent this information. We have shown, however, that given its wave function there can be no correlation between (the configuration of) a system and (that of) its environment, even if the full microscopic environment Y—itself grossly more than what we could conceivably have access to—is taken into account.

I didn't follow the entire paper, but ok, I have to admit that this is impressive if there's no other caveat somewhere...
 
  • #168
vanesch: But that is ALL that OQM claims to do. OQM says: "there are just algorithms, and that's all you can have".
Thanks for all your posts, vanesch. Most informative.

Sorry to bother you, but does the above mean that "particle" in the excerpt below is just a mathematical artifice?

"Bohm's particle is viewed as having a definite position and velocity at all times, with a probability distribution ρ that may be calculated from the wavefunction ψ. The wavefunction "guides" the particle by means of the quantum potential Q. Much of this formalism was developed by Louis de Broglie; Bohm extended it from the case of a single particle to that of many particles, and also, by considering the particles in the measuring apparatus, re-interpreted the equations to include observation..."
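The "guiding" in this excerpt is an ordinary velocity law: the particle velocity is v = (hbar/m) Im(d_x psi / psi), equivalent to the quantum-potential formulation. As a minimal sketch (my own toy example, in units hbar = m = 1, for a free 1D Gaussian packet rather than Bohm's general many-particle case), one can integrate it numerically and recover the known analytic trajectory x(t) = x(0) * sqrt(1 + (t/(2*sigma0^2))^2):

```python
import numpy as np

# Free Gaussian packet psi(x,t) ~ exp(-x^2 / (4*sigma0*s(t))), with
# s(t) = sigma0 * (1 + 1j*t/(2*sigma0**2)), in units hbar = m = 1.
sigma0 = 1.0

def velocity(x, t):
    """Guidance law dx/dt = Im(d_x psi / psi) for this packet."""
    s = sigma0 * (1 + 1j * t / (2 * sigma0**2))
    dlogpsi_dx = -x / (2 * sigma0 * s)     # (d_x psi)/psi, a complex number
    return dlogpsi_dx.imag

# Euler-integrate one trajectory starting at x(0) = 1.
x, t, dt = 1.0, 0.0, 1e-4
while t < 2.0:
    x += velocity(x, t) * dt
    t += dt

# Compare with the analytic trajectory x(t) = x(0)*sqrt(1 + (t/(2*sigma0**2))**2).
print(x, np.sqrt(1 + (2.0 / (2 * sigma0**2))**2))
```

The trajectory simply rides outward with the spreading packet; the "re-interpretation to include observation" mentioned at the end of the excerpt is what extends this single-particle picture to the measuring apparatus itself.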
 
  • #169
ttn said:
Where did you get the (wrong) idea that BM *doesn't* solve the measurement problem?

My statement of course depends upon what you mean by the measurement problem. If you simply mean that you want to avoid the nonlocal collapse of the QM wave, then of course BM can do that - although it then distinguishes itself from standard QM in a measurable way. What this proposal does *not* address is that upon a *non-physical* act of the observer (ah yes, no causal effects in the past lightcone here, no pre-determinism) a new local wave packet (guiding wave) in position space is constructed. If you ask me: that is no solution to the measurement problem (even Einstein found this ``solution'' cheap). I have plenty of other problems with Bohm - de Broglie (which I am not going to list here) :
(a) basically what is measurement ? (what do we call measurement of position of electron in atom)
(b) in a multiparticle system, God plays dice in configuration space. How can you have any hope of doing physics in this way ?
(c) Truly speaking, I cannot make sense out of non-local guidance mechanisms; what does it mean that our ignorance influences the dynamics of particles ?
(d) in QFT on curved spacetime, the foliation would determine the choice of trajectories, what is the physical meaning of all this ?
...

Actually Patrick, you keep on mentioning decoherence all the time, but don't you see that this is at least as conspiratorial as super-determinism ?? Take your decoherence argument to the Planck scale and try to figure out why long range entanglement (or even elementary particles) should exist ! People do study classical chaotic interacting models and do discover that correlated regions appear; such synchronisation effects are well known but poorly understood.

Careful
 
Last edited:
  • #170
vanesch said:
But that is ALL that OQM claims to do. OQM says: "there are just algorithms, and that's all you can have".
From that PoV, signal locality is good enough, no ?

Look, there are just two different possible attitudes you could take here. You could take the "completeness doctrine" at face value, and say that the wave function in OQM provides a literal description of the physical state of quantum systems. Then it's really a theory in the sense I am using that term. Or, as you suggest, one could just forget about objective reality and use the QM formalism as a black box algorithm. But that is just failing to address the question at hand (about local causality), not answering it in a certain way (i.e., providing an example of a causally local theory, or proving that Bell Locality doesn't make sense or something).

I mean, maybe we need to go back to the beginning. There *is* an objective reality, right? So if you have some mathematical black box algorithm that allows you to predict things -- but which *doesn't* provide an account of that objective reality -- that's *fine*... it's not that I object to having such a thing... it's just that it doesn't address the kind of question that a *theory* might address, which is what that reality is like. The mere fact that you can construct some algorithm to make predictions without telling an ontological story, doesn't somehow make the world disappear. You're just not *talking* about it. But the question of whether or not the causality out there in the world is or isn't relativistically local, remains. Your not talking about objective reality right now doesn't make that question magically disappear or become meaningless.




I don't think I'm confusing the issues. I think I make the distinction between both, but since I consider things like "dynamical probabilities" bull****, and since, to me, probabilities can ONLY be "ignorance based", I claim that fundamentally stochastic theories are just algorithms.

So you just accept as an a priori truth that objective reality is deterministic. OK, I mean, I actually lean that way too. I wouldn't claim it as an a priori truth, but certainly all other things being equal it's better to have a deterministic theory than not -- especially since you could never possibly have a strong argument for the stochasticity in a given theory being irreducible (Patrick's theorem). But, nevertheless, as a strategic point, I think it is very important to point out that Bell's inequalities in principle apply both to local deterministic and to local stochastic theories. You don't want to even consider the latter. Ok, fine, but some other people do, and it's important for them to know that they're barking up the wrong tree. If you're made uncomfortable by the non-locality Bell's Theorem proves must be present in any deterministic theory, then you should be *very* uncomfortable, because you CANNOT RESTORE LOCALITY BY DROPPING DETERMINISM.

And that is true whether or not your philosophical sensibilities permit you to take irreducible stochasticity seriously.

BTW, Patrick, does this mean you are unwilling to consider the GRW theory as a serious version of QM?




When you turn them into deterministic theories with random variables, then that's different, because now the "random variables" can be assumed to have a physical existence and value, and they are only random because of our ignorance about them.
So a theory that contains random variables, to which probabilities are assigned but which can also be granted some element of physical existence, is, in my vocabulary, still a deterministic theory, because we can consider that these random variables DO have specific values, and we are simply ignorant of them, which gives them their random character.

Is GRW then "really" a deterministic theory? How about orthodox QM with a "cut" put in at some arbitrary level of "macroscopicness" (however that is measured)?



In the latter case, there's no issue in requiring Bell locality: signal locality will do (as we're not looking for a description of any ontology, but just for an algorithm that will allow us to calculate probabilities without any pretension of ever describing nature on an ontological level).

I'm sorry, this just doesn't make any sense. "Signal locality" is about whether you can transmit a message faster than light. Bell Locality is about whether there are FTL causal influences. They're not just different "formulations" of the same concept, locality. They're about two very different things. So it's not an issue of "signal locality will do". If what you're interested in is whether it's possible to send signals, then yeah, signal locality will do. If, alternatively, what you're interested in is whether or not there exist FTL causal influences out there in the world, then only Bell Locality will do. And if you're interested in ordering some food, look at a menu. None of the 3 "will do" for the two other purposes.



Exactly, so as a non-descriptive algorithm that spews out probabilities, it is just fine, as is OQM (which never had any other pretension).
But then, as I said, there's no point in claiming that the particles and forces appearing in Bohmian mechanics have any physical meaning, any more than the wave function in OQM has.

Have you gone completely crazy? Now you don't think Bohm's theory really means it when it posits that particle-plus-wf ontology?


Given that it spits out the same set of numbers as OQM, they are in fact two equivalent algorithms and there's not much point in arguing over it.

Yes, they make the same predictions. But on the other hand THEY ARE COMPLETELY DIFFERENT THEORIES because they posit completely different ontologies.


But I understand that Bohmians want to confer a kind of ontological status to their theory.

That's not quite phrased right. Bohmians think that Bohm's theory provides the best available candidate picture of the world. That picture is the ontology of the theory, just like some other picture is the ontology of MWI or of GRW. What is the confusion here?


In that case, of course, things change, because then we should check whether its *internal mechanism* is local. Given that it is a deterministic mechanism, its locality is equivalent to Bell locality of the predicted probabilities, and then it fails.

"Bell locality of the predicted probabilities"? Sheesh. I can only correct your refusal to understand the meaning of "Bell Locality" so many times...


So, true, Bohmian mechanics is no worse than OQM (on the contrary): both are acceptable (signal-local) algorithms to calculate probabilities.

You're drunk or something. They're the *same* algorithm to calculate probabilities. Where they differ is in the ontology they posit (well, and the clarity of their formulations). You're now saying that really we shouldn't take the ontology of Bohm's theory seriously, and we should just consider it as another black box algorithm... except it's really just *the same* black box algorithm?? Did you overdose on some kind of positivism pills or something?


Bohmians want to give their theory ontological status, and then we open the box, and see that the machinery inside is non-local. So this part of the story doesn't fit.

Forget about "opening the box" of a theory. Start with the existence of a real world out there. Insist that there aren't any FTL causal influences. Derive an inequality from this. Test this empirically and find that it's violated. Infer that there *do* exist nonlocal causal influences in nature. That already means that no local theory is going to work. Bohm's theory is just then one among many possible empirically viable non-local theories. But nobody's infering anything about nature merely by "looking inside the Bohmian box." The point is just the reverse - you infer that any theory at all is going to have to have nonlocal mechanisms "in its box", because we already know going in that NATURE is nonlocal.
 
  • #171
Careful said:
My statement of course depends upon what you mean with the measurement problem.

I don't think this is a controversial point. The measurement problem (for OQM) is that it provides two different and incompatible dynamical laws depending on whether or not a "measurement" is happening, but it never defines that term. So the theory (quoting my pal Bell) is unprofessionally vague and ambiguous. (Relatedly, some people think of the measurement problem as the Schroedinger cat problem -- if you try to construct a non-vague theory by simply getting rid of the second kind of dynamics, then the theory no longer predicts that measurements have definite outcomes, which is contrary to fact.)

Bohm's theory solves the measurement problem unambiguously. It doesn't give two different dynamical rules. There is just one kind of dynamics, and everything (even the "stuff" that measurement apparatuses are made of) is treated on an equal footing. And the theory actually predicts that measurements have outcomes -- pointers on detectors are made of particles, and these always end up in some definite place (because they're always at some definite place).


(b) in a multiparticle system, God plays dice in configuration space. How can you have any hope of doing physics in this way ?

Huh? The dynamics of Bohm's theory is completely deterministic. If there's dice playing, it's only at the initial conditions.

(c) Truly speaking, I cannot make sense out of non-local guidance mechanisms; what does it mean that our ignorance influences the dynamics of particles ?

Um, it doesn't. Methinks you don't really understand Bohm's theory very well if you think that, according to it, "our ignorance influences the dynamics of particles."
 
  • #172
** if you try to construct a non-vague theory by simply getting rid of the second kind of dynamics, then the theory no longer predicts that measurements have definite outcomes, which is contrary to fact. **

Of course that is not true, you have to change the Schroedinger equation too. By the way, perhaps this is not an issue amongst Bohm lovers, but some others might think differently.

** Bohm's theory solves the measurement problem unambiguously. It doesn't give two different dynamical rules. There is just one kind of dynamics, and everything (even the "stuff" that measurement apparatuses are made of) is treated on an equal footing. And the theory actually predicts that measurements have outcomes -- pointers on detectors are made of particles, and these always end up in some definite place (because they're always at some definite place). **

Huh ?? The issue is that you simply don't KNOW where the particle is, although it is somewhere and following a definite trajectory. So, you still have to indicate when it is that you ``perceive'' it at some spot and consequently generate a new wave packet to guide it. Moreover, Copenhagen also has only one DYNAMICAL rule; the projection postulate has nothing to do with dynamics. In classical physics, the act of perception (and the accuracy with which we achieve this) would not change anything about the way we describe the system dynamically; in BM this is not the case at all.


**
Huh? The dynamics of Bohm's theory is completely deterministic. If there's dice playing, it's only at the initial conditions. **

I think you don't understand that my comment is against giving a PHYSICAL interpretation to interactions which are irreducibly confined to configuration space (hello Newton, actually it is even worse than that). If you take QM, as it stands now, simply to be an algorithm (and not a physical theory) then I do not mind so much that God plays dice in configuration space, but for a physical deterministic one (apart from the measurement act), I certainly do. Moreover, a Bohmian theory of QFT is certainly going to be stochastic and not deterministic.

**
Um, it doesn't. Methinks you don't really understand Bohm's theory very well if you think that, according to it, "our ignorance influences the dynamics of particles." **

Methinks that you cannot read between the lines. Our ignorance is of course in the probability density of the wave function (we do not know where the particle is) and what makes it so strange is that this entity governs the dynamics of the particle through the quantum potential. Hence, the fact that this particle (or another particle in the same sample) *could* be somewhere else (in the future) is actually influencing the motion of the particle under consideration (now). If you don't find that strange, then I don't know what is to you.

Careful
 
  • #173
Careful said:
Methinks that you cannot read between the lines. Our ignorance is of course in the probability density of the wave function (we do not know where the particle is) and what makes it so strange is that this entity governs the dynamics of the particle through the quantum potential. Hence, the fact that this particle (or another particle in the same sample) *could* be somewhere else (in the future) is actually influencing the motion of the particle under consideration (now). If you don't find that strange, then I don't know what is to you.

People are so steeped in their axiomatic particles that they invent parallel universes and even time travel to explain them. They just won't see the solution: the quanta you are dealing with ARE NOT PARTICLES. A quantum is NOT all in one place. It is NOT a point. A gallon is not a particle. A coulomb is not a particle. A quantum IS NOT LOCAL.

But people don't listen, and instead we have a god damn crackpot theological "debate" going round in circles for fifty years pretending to be physics. Absolutely tragic.
 
  • #174
Careful said:
Basically, the use of the wave function in BM does not make sense to me vis a vis a one particle scenario.

I agree with you, but you've already heard my cure for that, which is to add an arrow to time by splitting the wave and particle duality into future and past, respectively, with respect to the observer.

Locality is obeyed in the wave propagation, and locality is observed in the observation of the particles. Where it disappears is in the transformation of wave to particle. More precisely, I mean to say that if it were not for wave collapse, none of the odd behavior of QM would exist.

The arguments that influence travels faster than light all rest on the assumption that the wave and particle descriptions can simultaneously apply to the same event. That should be objected to for the same reason you object to the Bohmian one-particle idea. That is, what the heck are the parts of the wave that the particle's trajectory does not traverse there for?

Carl
 
  • #175
Farsight said:
People are so steeped in their axiomatic particles that they invent parallel universes and even time travel to explain them. They just won't see the solution: the quanta you are dealing with ARE NOT PARTICLES. A quantum is NOT all in one place. It is NOT a point. A gallon is not a particle. A coulomb is not a particle. A quantum IS NOT LOCAL.

But people don't listen, and instead we have a [edited for content] crackpot theological "debate" going round in circles for fifty years pretending to be physics. Absolutely tragic.
I agree with the sentiment -- but I think you are at least as guilty of this as Careful.
 
  • #176
Careful said:
** if you try to construct a non-vague theory by simply getting rid of the second kind of dynamics, then the theory no longer predicts that measurements have definite outcomes, which is contrary to fact.) **

Of course that is not true; you have to change the Schrödinger equation too. By the way, perhaps this is not an issue amongst Bohm lovers, but some others might think differently.

So you're talking about GRW? It's a fine theory -- probably the second best available option.


** Bohm's theory solves the measurement problem unambiguously. It doesn't give two different dynamical rules. There is just one kind of dynamics, and everything (even the "stuff" that measurement apparatuses are made of) is treated on an equal footing. And the theory actually predicts that measurements have outcomes -- pointers on detectors are made of particles, and these always end up in some definite place (because they're always at some definite place). **

Huh?? The issue is that you simply don't KNOW where the particle is, although it is somewhere and following a definite trajectory. So you still have to indicate when it is that you ``perceive'' it at some spot, and consequently generate a new wave packet to guide it.

I don't follow the last part. There is no new dynamics for measurements in BM (no "new wave packets" need to be "generated").



Moreover, Copenhagen also has only one DYNAMICAL rule; the projection postulate has nothing to do with dynamics.

So then why don't people simply drop the collapse rule and formulate the theory with Schrödinger evolution only? Oh right, because of the measurement problem.

Let me put it less sarcastically: if there's some aspect of the mathematics which a theory requires in order to make correct contact with experiment, that bit of mathematics is dynamics. If you don't think it is, you are free to construct a new theory with a simpler dynamics -- i.e., one which simply never mentions the thing you think isn't real, isn't dynamical.



Methinks you cannot read between the lines.

Sorry, I didn't see anything written there.


Our ignorance is of course in the probability density of the wave function (we do not know where the particle is), and what makes it so strange is that this entity governs the dynamics of the particle through the quantum potential. Hence, the fact that this particle (or another particle in the same sample) *could* be somewhere else (in the future) is actually influencing the motion of the particle under consideration (now). If you don't find that strange, then I don't know what is strange to you.

If I understand your worry here, it's that in BM the wf is "merely epistemological" in that its only role is to provide a probability distribution for which positions/trajectories are actually realized. But this is just based on a confusion. The wf is not merely epistemological in BM. It is physically real, as real as the particles and their positions. What exists is particles being guided along their trajectories by the wf. Any epistemological character the wf has is *secondary* and, indeed, must be *derived* from its ontological/dynamical character. (But no worry, it can be so derived.)
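
For readers keeping score, the disputed ``quantum potential'' and guidance law can be stated compactly. This is the standard polar-decomposition form found in any Bohmian mechanics text, reproduced here only to fix the notation both sides are using:

```latex
% Write the wave function in polar form:
\psi = R\, e^{iS/\hbar}.
% The Schrodinger equation then splits into a continuity equation for R^2
% and a modified Hamilton--Jacobi equation for S:
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}.
% The particle is guided along
\frac{d\mathbf{x}}{dt} = \left.\frac{\nabla S}{m}\right|_{\mathbf{x}(t)}.
```

Here Q is the quantum potential Careful refers to: it depends on the shape of R over the whole configuration space, which is exactly why its influence is nonlocal.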
 
  • #177
Farsight said:
People are so steeped in their axiomatic particles that they invent parallel universes and even time travel to explain them. They just won't see the solution: the quanta you are dealing with ARE NOT PARTICLES. A quantum is NOT all in one place. It is NOT a point. A gallon is not a particle. A coulomb is not a particle. A quantum IS NOT LOCAL.

But people don't listen, and instead we have a god damn crackpot theological "debate" going round in circles for fifty years pretending to be physics. Absolutely tragic.

No Farsight, what is crackpottish is the following: a Bohmian coming to you and telling you:
(a) A particle is not a point, but we enrich reality by putting in point particles :smile:
(b) Bohmian mechanics is the right way to see reality, while NO attempt to connect with the reality of GR is made at all (!)
(c) Actually, the interpretational problems I gave you before *are* due to a hidden form of super-determinism, a reversed arrow of time, and so on. I would expect any theory which claims to be somehow more *real* to make such a link EXPLICIT in terms of (local) physical processes (and no, Bell does not forbid that!).

The problem with Bohmpkes (as we say in Dutch) and all these other super quantum oriented people writing about these issues is that they forget there is other physics out there.

Careful
 
Last edited:
  • #178
**More precisely, I mean to say that if it were not for wave collapse, none of the odd behavior of QM would exist.**

Well, that is no solution, right?? You still did not say where and when this transformation happens, and what the correct mechanism is to trigger it.

However (!), I do feel something for your proposal; it at least tries to address QM in a more realistic way (but then in 5 dimensions). In the same way, I can say something good about GRW, which addresses the problem of emergent classicality by putting in reasonable collapse-time and length scales; the same goes for the proposal by Penrose. The problem lies of course in relativistic invariance, and although these approaches certainly go in the right direction, some deeper insight is still lacking. I don't know whether a superdeterministic, local Planck-scale proposal will make it, but (!) (a) it is not ruled out, (b) it is the most controlled, *least* conspiratorial approach to QG one can imagine (the world of LQG and superstring theory is much wilder), and (c) emergent phenomena in the LQG-like approaches certainly have to pass a similar scaling procedure. Therefore, it seems to me the most logical starting point to try the ``most simple'' idea first ...
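
For reference, the ``reasonable collapse-time and length scales'' of GRW are the standard parameters of the original proposal:

```latex
% GRW spontaneous localization: each particle suffers a random "hit" at rate
% lambda, which multiplies the wave function by a Gaussian of width r_c:
\psi \;\to\; \frac{L_{\mathbf{x}}\,\psi}{\lVert L_{\mathbf{x}}\,\psi \rVert},
\qquad
L_{\mathbf{x}} = (\pi r_c^2)^{-3/4}
  \exp\!\left(-\frac{(\hat{\mathbf{q}}-\mathbf{x})^2}{2 r_c^2}\right),
% with the usual parameter choices
\lambda \sim 10^{-16}\ \mathrm{s}^{-1},
\qquad
r_c \sim 10^{-7}\ \mathrm{m}.
```

For a single particle this is utterly negligible, but for a macroscopic pointer of N ~ 10^23 particles the effective rate Nλ collapses superpositions almost instantly, which is how GRW gets definite outcomes.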



Careful
 
Last edited:
  • #179
Hurkyl said:
I agree with the sentiment -- but I think you are at least as guilty of this as Careful.
I don't think so - I have no sentiments toward determinism whatsoever. What I *do* have sentiments about is that QM lovers who have never thought about QG, or about what to do with QM at much higher energy scales, do not
(a) realize that a straightforward application of QM would make our macroworld a priori much more conspiratorial than a deterministic scenario would;
(b) recognize that Bell's assumptions are like a Swiss cheese.

Moreover, as I said before, none of these interpretations/modifications really solves anything.
The problem as I see it is that the ``new'' generation has given up the goal of *understanding* nature, and has done so for the wrong reasons (expressed by some people in the past).

Careful
 
Last edited:
  • #180
**So you're talking about GRW? It's a fine theory -- probably the second best available option. **

GRW is non-local, so why would I like it (although it addresses some of the issues I worry about, and I have considered it previously)?

**
I don't follow the last part. There is no new dynamics for measurements in BM (no "new wavepackages" need to be "generated").
**

But what is observation then? At some point you have to give the particle a CLASSICAL meaning: you should make it interact with classical fields (while the *wave function* does not interact with the latter). For example, in the double slit experiment, there is nothing in the wave function which tells you that the particle will end up at any particular place on the screen. The only way to avoid this is to add an extra ingredient, like the reduction (which tells you that somewhere it will appear, but we don't know where and when; and if it happens, then we have to use an appropriate projection operator), or a *distinguished* classical element which undergoes *more* interactions than the wave does. This split again seems to be entirely arbitrary, so there really is no unified dynamics.
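
As an aside for readers, the guidance picture for the double slit can actually be simulated in a few lines. The toy model below is my own illustration (not anyone's argument in this thread): it treats the two slits as two free Gaussian wave packets in one dimension, with units ħ = m = 1 and arbitrary slit parameters, and integrates the Bohmian velocity field v = (ħ/m) Im(∂ₓψ/ψ). Each particle's landing point is fixed entirely by its (unknown) initial position:

```python
import numpy as np

hbar = m = 1.0
sigma0, d = 1.0, 5.0          # slit width and half-separation (illustrative)

def st(t):
    """Complex spreading width of a free Gaussian packet."""
    return sigma0 * (1 + 1j * hbar * t / (2 * m * sigma0**2))

def packet(x, t, x0):
    """Analytic free Gaussian wave packet centred at x0."""
    s = st(t)
    return (2 * np.pi * s**2) ** -0.25 * np.exp(-(x - x0)**2 / (4 * sigma0 * s))

def dpacket(x, t, x0):
    """Spatial derivative of the packet."""
    return packet(x, t, x0) * (-(x - x0) / (2 * sigma0 * st(t)))

def velocity(x, t):
    """Bohmian guidance law: v = (hbar/m) Im(psi'/psi)."""
    psi = packet(x, t, d) + packet(x, t, -d)
    dpsi = dpacket(x, t, d) + dpacket(x, t, -d)
    return (hbar / m) * np.imag(dpsi / psi)

# Launch particles |psi|^2-distributed at t = 0 (here: half per slit).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(d, sigma0, 50), rng.normal(-d, sigma0, 50)])

dt, steps = 0.01, 2000
for i in range(steps):
    # Simple Euler step; clip to tame large velocities near wave-function nodes.
    x = x + np.clip(velocity(x, i * dt), -50.0, 50.0) * dt
```

Histogramming the final positions shows the trajectories bunching into interference fringes; the only ingredient beyond bare Schrödinger evolution is the initial |ψ|² distribution over positions.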


**
So then why don't people simply drop the collapse rule and formulate the theory with sch-evolution only? Oh right, because of the measurement problem. **

:rolleyes: First of all, Vanesch would protest here. Second, most people do see that Bohmian mechanics gives the problem a more reasonable face, but since no real solution is presented and the conflict with GR and SR is blatant, most see it as window dressing.


**If I understand your worry here, it's that in BM the wf is "merely epistemological" in that its only role is to provide a probability distribution for which positions/trajectories are actually realized. But this is just based on a confusion. The wf is not merely epistemological in BM. It is physically real, as real as the particles and their positions. What exists is particles being guided along their trajectories by the wf. **

No, you still do not understand what I say (although Farsight seems to): what you say is, moreover, trivial and can be found in any textbook. The problem is not the fact that I can see the wave function in two ways (that is just cheap blah-blah); the issue is that it is very hard to believe that this non-local quantity, which contains information about the future as well as the past, can serve as a physical guidance mechanism, without anyone giving a more in-depth mechanism, closer to GR, for why this should be so.


Careful
 
Last edited:
