A short course on Bohm's theory.

vanesch
I would like to invite people who want to study Bohm's theory in this thread. The aim is to *study* the theory, because people who (think they) are comfortable with quantum theory (such as me) sometimes don't know much about Bohm's formulation of it (such as me!). As there are some proponents of this theory around (ttn?), maybe they can enlighten us.
DrChinese posted an interesting link already: http://plato.stanford.edu/entries/qm-bohm/

I have in front of me "Quantum Theory" by David Bohm, his textbook on QM. However, it dates from 1951 (I have a Dover reprint) and seems to follow the Copenhagen approach! Maybe it was the completion of this book that left Bohm dissatisfied with that approach?

So, professor, I'm listening !

cheers,
Patrick.
 
I did some reading up on Bohm (thanks to DrChinese's link).

If I understand the premises of Bohmian mechanics correctly, it goes as follows:

There is a wavefunction on configuration space psi(q1,...qN) which will evolve exactly as dictated by conventional quantum theory, with the Schroedinger equation. So, in Hilbert space notation, we work in the "configuration basis" (the position basis) |q1,...qN>.
Next, it is postulated that there is a real configuration, given by the variables {Q1...QN}, which is the "real" state of the system, but of which we have to POSTULATE that, initially, this configuration is distributed over the possible values {q1,...qN} with probability density |psi(q1...qN)|^2, in a statistical-mechanics sense.

It is further postulated that the time evolution of these values Q1...QN is given by:

dQk/dt = j_k/rho

with j_k and rho the probability current and density in configuration space, as given by standard QM.

Given the evolution equation for the {Q1...QN}, it is clear that, considering the conservation equation d rho/dt + div j = 0, an initial population of Q1...QN distributed according to psi* psi will remain so distributed at ANY later time.
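
To make this concrete for myself, here is a minimal numerical sketch of that guidance law (my own toy construction, not taken from any reference: one particle in 1D with hbar = m = 1, a "two-slit"-like superposition of two Gaussians, free evolution by a split-step Fourier method, and a crude first-order integrator for the trajectories; all parameters are arbitrary choices):

```python
import numpy as np

# Toy 1D Bohmian dynamics (hbar = m = 1): evolve psi with the free Schroedinger
# equation and move an ensemble of configurations Q with the guidance law
#   dQ/dt = j/rho = Im(psi* dpsi/dx) / |psi|^2 .
# Grid, packet and time-step choices are illustrative, not from the thread.

N, L = 2048, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)                   # wavenumbers for the kinetic term

psi = np.exp(-(x + 5)**2/4) + np.exp(-(x - 5)**2/4)   # two "slits"
psi = psi/np.sqrt(np.sum(np.abs(psi)**2)*dx)

def velocity(psi, Q):
    """Guidance velocity j/rho, interpolated at the configuration points Q."""
    dpsi = np.gradient(psi, dx)
    v = np.imag(np.conj(psi)*dpsi)/(np.abs(psi)**2 + 1e-30)
    return np.interp(Q, x, v)

rng = np.random.default_rng(0)
prob0 = np.abs(psi)**2*dx
Q = rng.choice(x, size=20000, p=prob0/prob0.sum())    # initial "tokens" ~ |psi_0|^2

dt, steps = 0.01, 1000                                # total time T = 10
for _ in range(steps):
    Q = Q + dt*velocity(psi, Q)                       # crude Euler step for the tokens
    psi = np.fft.ifft(np.exp(-0.5j*k**2*dt)*np.fft.fft(psi))   # exact free step for psi

# equivariance check: the ensemble should still track |psi_T|^2
hist, edges = np.histogram(Q, bins=60, range=(-L/2, L/2), density=True)
centers = 0.5*(edges[1:] + edges[:-1])
print("max |histogram - |psi_T|^2| :",
      np.max(np.abs(hist - np.interp(centers, x, np.abs(psi)**2))))
```

If the initial points are sampled from |psi_0|^2, the final histogram still tracks |psi_T|^2 (up to sampling and integration error), which is exactly the equivariance property I just invoked.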

So it is as if, in the position basis (or more generally, the configuration basis), we have a wave function psi evolving purely unitarily, and we consider a kind of "token" that indicates WHICH one is now the real one. But because of our initial uncertainty about where the token was initially, we end up having a statistical distribution in the position basis which is the same as the one given by the Born rule upon position measurement (or "configuration measurement").

I take it that Bohm somehow also postulates that every measurement, at the end of the day, is a position measurement of something (the pointer states?).
(In the above, I use "position" and "configuration" interchangeably; it is clear that there is a preferred basis in Bohm's theory, and it is the one I'm talking about.)

Do I have it right to think that the "initial wavefunction" is part of the state description, but that the distribution of the initial {Q1...QN} is simply due to lack of knowledge on my part ?

If so, I hit a conceptual difficulty.

Imagine I do a 2-slit experiment. So I have an initial wavefunction which describes the two slits, and I'm supposed to accept an initial ignorance distribution of {X} corresponding to psi* psi.

After measurement of the impact, however, I can track back exactly through WHICH slit my particle went, because, knowing the initial psi, there is no limit to the accuracy with which I can reconstruct the trajectories. So AFTER the measurement of X (which, by itself, can be made arbitrarily precise) I can reconstruct the track backwards into the past.
But that would mean that with the information I have gained now, I KNOW the initial value of {X} and it is not distributed anymore as psi* psi, but psi should now be replaced with a delta function on X. However, that will not give me the right final density.

The problem seems to be that the information obtained by the measurement changes the ignorance I have about my initial distribution of {Q1...QN}, in that it picks out one single value. And then the nice relationship between psi* psi and my probability distribution doesn't hold anymore.

How is this handled by Bohm defenders ?

The second conceptual difficulty I have goes as follows.
Imagine a closed system, including me, but within a container that doesn't allow for any external influence. This system can be described by a huge number of particles, so its configuration space is enormous, but with a finite number of q_i.
Now, imagine I do a spin measurement, and I observe it. According to unitary QM (which is shared with Bohm), the overall wavefunction is now in a superposition:
|me+> |spin+> + |me->|spin->

I take it that, given that Bohm postulates unitary QM everywhere (I like that :approve:) we arrive at such a wave function in configuration space.
But now I've SEEN that it was spin+. I'm now going to do an experiment with the spin of that particle. In a "collapse" thing (Copenhagen), the fact that I've seen this spin + would indicate that the new wave function is simply |me>|spin+>. But we cannot do that in Bohmian mechanics, which postulates strict unitary evolution.
So what will be the distribution now of the "tokens", if I consider this as an initial situation ? It should be 50/50.
And if I do a NEW measurement of spin ? I should get 50% chance of having spin+ and 50% chance of having spin -.
But that cannot be! I know it is spin+ (and a second measurement will confirm this). So EITHER after this first measurement the link between my psi* psi and my distribution of tokens is broken (my wave function says 50-50, and I know 100-0), which contradicts one of Bohm's postulates, OR my wavefunction has to collapse, with the same problems as in Copenhagen.

How does a Bohmian respond to this ?

cheers,
Patrick.
 
vanesch said:
I would like to invite people who want to study Bohm's theory in this thread. The aim is to *study* the theory, because people who (think they) are comfortable with quantum theory (such as me) sometimes don't know much about Bohm's formulation of it (such as me!). As there are some proponents of this theory around (ttn?), maybe they can enlighten us.
DrChinese posted an interesting link already: http://plato.stanford.edu/entries/qm-bohm/

That article by Sheldon Goldstein is a great place to start. Highly recommended.

There was also a very good paper in AmJPhys about 6 months ago written by Roderich Tumulka. It's a "dialogue" between someone asking questions about Bohm's theory and someone who mostly answers them. Very readable and very illuminating on some typical FAQs. Maybe the paper is on arxiv? ... Yes! Here it is:

http://www.arxiv.org/abs/quant-ph/0408113



I have in front of me "Quantum Theory" by David Bohm, his textbook on QM. However, it dates from 1951 (I have a Dover reprint) and seems to follow the Copenhagen approach! Maybe it was the completion of this book that left Bohm dissatisfied with that approach?

That's it exactly. Bohm came up with his hidden variable theory in 1952 -- just after thinking through the Copenhagen approach in great detail while writing that 1951 book.
 
ttn said:
There was also a very good paper in AmJPhys about 6 months ago written by Roderich Tumulka. It's a "dialogue" between someone asking questions about Bohm's theory and someone who mostly answers them. Very readable and very illuminating on some typical FAQs. Maybe the paper is on arxiv? ... Yes! Here it is:

http://www.arxiv.org/abs/quant-ph/0408113

Yeah, nice. But I'm now more and more convinced that my objection holds: even if we have initially taken for granted that, in a closed system in quantum state psi, the true configuration is unknown with a distribution given by |psi|^2, and even if I accept without any problem that, the way Bohm's mechanics is built, this stays so during unitary evolution, there IS a clash from the moment you make a measurement:
the wavefunction continues to evolve in the same way as if it weren't a measurement, but the measurement HAS increased your knowledge and HAS changed the "ignorance" probability distribution of the "true configuration" ; so after a measurement, it is NOT TRUE ANYMORE that the ignorance of your state is still given by |psi|^2 EXCEPT if you now collapse psi.
You can try to weasel out in open systems, by somehow postulating conditional wavefunctions and so on, but it won't work in a closed system. And in fact these arguments are JUST AS BAD as the claims that DECOHERENCE SOLVES THE MEASUREMENT PROBLEM. This last claim is NOT true in QM, and it is just as untrue in Bohmian mechanics.
The only thing which is achieved in both is that for all practical purposes, the LOCAL density matrix diagonalises in the pointer states (in Bohm: configuration variables). BUT THE STATISTICAL SELECTION OF ONE is just as much a difficulty in Bohm as it is in decoherence.

cheers,
Patrick.
 
vanesch said:
So it is as if, in the position basis (or more generally, the configuration basis), we have a wave function psi evolving purely unitarily, and we consider a kind of "token" that indicates WHICH one is now the real one.

Which what is the real what? You mean something like: which branch of the wf is real? I guess that's OK, but it's more precise to talk about a specific point in the configuration space being the actual configuration of the particles. That way you don't get into any trouble when it's not clear what basis to use for the wf (and hence "how many branches" there are, etc.).


But because of our initial uncertainty about where the token was initially, we end up having a statistical distribution in the position basis which is the same as the one given by the Born rule upon position measurement (or "configuration measurement").

Yup.


I take it that Bohm somehow also postulates that every measurement, at the end of the day, is a position measurement of something (the pointer states?).
(In the above, I use "position" and "configuration" interchangeably; it is clear that there is a preferred basis in Bohm's theory, and it is the one I'm talking about.)

Is this really an extra postulate? Can you think of an example of a measurement of something that isn't, in fact, a measurement of the position of something?



Do I have it right to think that the "initial wavefunction" is part of the state description, but that the distribution of the initial {Q1...QN} is simply due to lack of knowledge on my part ?

Yes, plain old ordinary classical-stat-mech style uncertainty.

If so, I hit a conceptual difficulty.

Imagine I do a 2-slit experiment. So I have an initial wavefunction which describes the two slits, and I'm supposed to accept an initial ignorance distribution of {X} corresponding to psi* psi.

After measurement of the impact, however, I can track back exactly through WHICH slit my particle went, because, knowing the initial psi, there is no limit to the accuracy with which I can reconstruct the trajectories. So AFTER the measurement of X (which, by itself, can be made arbitrarily precise) I can reconstruct the track backwards into the past.
But that would mean that with the information I have gained now, I KNOW the initial value of {X} and it is not distributed anymore as psi* psi, but psi should now be replaced with a delta function on X. However, that will not give me the right final density.

I'm with you up until the last part. Yes, you can "retrodict" the trajectory of the particle, e.g., infer from the spot where it lands which slit it went through. So... you can "beat the uncertainty principle" in the past, but not in the future. I'm not sure what you're worried about with your last sentence, though. You find out where the particle landed, and then infer back to where it was at the beginning. Now you want to say: "aha, if you pretend you knew all along that that's where it started, you'd have an initial delta function probability distribution and hence predict that it lands with certainty at this one particular spot on the screen! But that's crazy since that's impossible according to QM." Or something like that, yes? Well, it's not crazy at all! The new, delta-function initial probability distribution does indeed predict that the particle will land at some one particular spot on the screen later -- the very spot where it does in fact land! So what's the problem??
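
To make the retrodiction point concrete, here is a little toy sketch of my own (not from any of the references; same conventions as the sketch earlier in the thread: 1D, hbar = m = 1, two Gaussian "slits", all numbers illustrative). Because the guidance law is a first-order ODE in time, the very same trajectory can be run backwards from the arrival point, and it lands back at the slit it actually came from -- without touching the wave function:

```python
import numpy as np

# Retrodiction sketch (hbar = m = 1): one Bohmian trajectory forward in time,
# then the same first-order dynamics run backwards from the arrival point.
# All parameters are my own illustrative choices.

N, Lbox = 2048, 80.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
dt, steps = 0.005, 2000
U = np.exp(-0.5j*k**2*dt)                              # exact one-step free propagator

psi = np.exp(-(x + 5)**2/4) + np.exp(-(x - 5)**2/4)    # two "slits" at -5 and +5
psi = psi/np.sqrt(np.sum(np.abs(psi)**2)*dx)

def v_at(psi, Q):
    dpsi = np.gradient(psi, dx)
    v = np.imag(np.conj(psi)*dpsi)/(np.abs(psi)**2 + 1e-30)   # j/rho
    return np.interp(Q, x, v)

Q = -4.3                                               # starts near the LEFT slit
for _ in range(steps):                                 # forward in time
    Q += dt*v_at(psi, Q)
    psi = np.fft.ifft(U*np.fft.fft(psi))
print("arrival point:", round(Q, 2))

for _ in range(steps):                                 # backwards from the arrival point
    psi = np.fft.ifft(np.conj(U)*np.fft.fft(psi))      # undo the psi step
    Q -= dt*v_at(psi, Q)                               # undo the token step (to first order)
print("retrodicted starting point:", round(Q, 2), "-> the left-slit packet (near -5)")
```

Nothing in this backward reconstruction changes psi, and nothing about it spoils using |psi|^2 for predictions going forward.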

Maybe you're worried about what happens next, i.e., that after the measurement we somehow have better-than-\psi^2 knowledge of the probability. But that's not true. Looked at from the perspective of regular quantum theory, you did a position measurement, so the wave function collapsed to something like a delta function. So there is no contradiction between knowing where the particle is now and having the probability be given by |\psi|^2.

Of course in Bohm's theory there's no actual collapse, no separate dynamical process. Collapse is something theorists do when they find it convenient (as I think Bell said). But I'm sure we'll get to that issue soon enough...


The problem seems to be that the information obtained by the measurement changes the ignorance I have about my initial distribution of {Q1...QN}, in that it picks out one single value. And then the nice relationship between psi* psi and my probability distribution doesn't hold anymore.

How is this handled by Bohm defenders ?

I still don't see what the problem is. It's true that, when you go back and reconstruct that past event (the particle starting near the slits and then landing ... right there) you'll know more than you could have known watching it "live". But so what? This extra knowledge doesn't spoil the later/subsequent use of P ~ |\psi|^2, and, as far as the past reconstruction is concerned, it merely ensures that the thing you know happened, actually happens. No problem!


The second conceptual difficulty I have goes as follows.
Imagine a closed system, including me, but within a container that doesn't allow for any external influence. This system can be described by a huge number of particles, so its configuration space is enormous, but with a finite number of q_i.
Now, imagine I do a spin measurement, and I observe it. According to unitary QM (which is shared with Bohm), the overall wavefunction is now in a superposition:
|me+> |spin+> + |me->|spin->

Schroedinger's cat!

I take it that, given that Bohm postulates unitary QM everywhere (I like that :approve:) we arrive at such a wave function in configuration space.
But now I've SEEN that it was spin+.

OK, so that means the two branches of the wf are widely separated in the configuration space (the atoms in the needle of your measuring instrument are here rather than there, all the electrons in your brain that stored your memory of seeing the needle point a certain way are in different places than they would have been otherwise, etc...). And one of them contains the actual configuration point. That is, one or the other of those two things actually happened. You said you saw that it came out spin+. OK, so that's the one that *happened*.

Also (and here's where the "collapse" comes in), the dynamics is local in configuration space. So picture the two branches of the wave function as two "lumps" of nonzero value at two widely separated points in the config space. The equation that governs the subsequent evolution of the "particle position" (in the configuration space, so... shorthand for the positions of all the particles in the system) gives the velocity in terms of the value of the wf *where the particle is sitting*. So that means, whichever "lump" of the wf doesn't contain the particle, is going to have *no effect* on the dynamics of the particle, now or ever (unless those two lumps are made to overlap at some point, but decoherence shows how extremely unlikely that is). OK? So you can simply decide to *ignore* that empty part of the wf and live life from now on as if that empty part had been "collapsed away". Of course, it hasn't *really* been collapsed away, but so long as it doesn't influence the motions of the particles, who cares? You can just ignore it. Anyway, that's how the collapse works in Bohm's theory. But now back to the cat...
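
(A quick aside before the cat: here is a tiny numerical check of that "the empty lump has no effect" claim, with a toy wave function of my own choosing -- two well-separated Gaussian lumps standing in for the two branches, in a 1D stand-in for the configuration space, hbar = m = 1.)

```python
import numpy as np

# Two widely separated "lumps" of the wave function. The guidance velocity at a
# point inside the occupied lump is computed once from the full wave function and
# once from the occupied lump alone; the empty lump makes no practical difference.
# (Toy example; positions, widths and momenta are arbitrary choices.)

x = np.linspace(-60, 60, 8192)
dx = x[1] - x[0]

def lump(x0, k0):
    return np.exp(-(x - x0)**2/4 + 1j*k0*x)

occupied = lump(-20.0, +1.0)      # the branch that contains the actual configuration
empty    = lump(+20.0, -1.0)      # the other, "empty" branch
full     = occupied + empty       # the uncollapsed wave function

def velocity(psi):
    dpsi = np.gradient(psi, dx)
    return np.imag(np.conj(psi)*dpsi)/(np.abs(psi)**2 + 1e-300)   # j/rho

Q = -19.0                         # actual configuration, sitting inside the occupied lump
v_full = np.interp(Q, x, velocity(full))
v_occ  = np.interp(Q, x, velocity(occupied))
print("velocity from the full wf      :", v_full)
print("velocity from the occupied lump:", v_occ)
print("difference                     :", abs(v_full - v_occ))
```

The difference is exponentially small in (separation/width)^2 -- so small that in double precision it comes out as exactly zero -- which is the "collapse for all practical purposes" just described.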


I'm now going to do an experiment with the spin of that particle. In a "collapse" thing (Copenhagen), the fact that I've seen this spin + would indicate that the new wave function is simply |me>|spin+>. But we cannot do that in Bohmian mechanics, which postulates strict unitary evolution.

You can do it, for the reasons I outlined above.


So what will be the distribution now of the "tokens", if I consider this as an initial situation ? It should be 50/50.
And if I do a NEW measurement of spin ? I should get 50% chance of having spin+ and 50% chance of having spin -.

No way! If you just arbitrarily strip off the part of the wf referring to you and pretend that the state is |spin+> + |spin->, *then* you could say that there ought to be a 50/50 chance of getting spin-. But that isn't the state! Leave the "Patrick" (or was it a cat?) factors in the wf and then perform an additional spin measurement. You'll of course find that you always get spin+, since you already *told* us that you got spin+ the first time -- i.e., we already *know* that the actual configuration point after the first measurement is in the spin+ (and "patrick believes spin+") branch of the wf. Make sense? :smile:

OK, I'm sure that will raise some additional questions/thoughts...
 
vanesch said:
Yeah, nice. But I'm now more and more convinced that my objection holds: even if we have initially taken for granted that, in a closed system in quantum state psi, the true configuration is unknown with a distribution given by |psi|^2, and even if I accept without any problem that, the way Bohm's mechanics is built, this stays so during unitary evolution, there IS a clash from the moment you make a measurement:
the wavefunction continues to evolve in the same way as if it weren't a measurement, but the measurement HAS increased your knowledge and HAS changed the "ignorance" probability distribution of the "true configuration" ; so after a measurement, it is NOT TRUE ANYMORE that the ignorance of your state is still given by |psi|^2 EXCEPT if you now collapse psi.

Yeah, I think what's throwing you here is that you're not being consistent about what you mean by "closed system." If you're including yourself and all your favorite lab equipment inside the system, then you have to include the factors for all those things in the wf you write down. And if you don't want to include all that junk in your "system", well, good luck *measuring* the state of anything while keeping the system closed!


You can try to weasel out in open systems, by somehow postulating conditional wavefunctions and so on, but it won't work in a closed system. And in fact these arguments are JUST AS BAD as the claims that DECOHERENCE SOLVES THE MEASUREMENT PROBLEM. This last claim is NOT true in QM, and it is just as untrue in Bohmian mechanics.

I don't agree. The actual configurations (i.e., definite particle trajectories) add to decoherence just what is needed to solve the measurement problem. Decoherence shows that the different "lumps" of the wf in the configuration space get to be non-overlapping and it shows that under ordinary circumstances the lumps *stay* well-separated -- i.e., you pretty much cannot ever get them to overlap ("interfere") again. Well, in ordinary QM, you still need a collapse postulate to pick one of these as real. But not so in Bohmian mechanics -- the actual configuration is already there, and has been there all along. So given that there already is one of those lumps that is picked out as "special" (in the sense that it contains the actual configuration) decoherence is what justifies tossing out all those now pointless (dynamically impotent) lumps. So Bohmians do collapse the wf -- only, they are able to say that it is merely something physicists do for convenience, rather than some kind of unitarity-violating physical process.



The only thing which is achieved in both is that for all practical purposes, the LOCAL density matrix diagonalises in the pointer states (in Bohm: configuration variables). BUT THE STATISTICAL SELECTION OF ONE is just as much a difficulty in Bohm as it is in decoherence.

No, I think it's no trouble at all in Bohm.
 
ttn said:
No way! If you just arbitrarily strip off the part of the wf referring to you and pretend that the state is |spin+> + |spin->, *then* you could say that there ought to be a 50/50 chance of getting spin-. But that isn't the state! Leave the "Patrick" (or was it a cat?) factors in the wf and then perform an additional spin measurement. You'll of course find that you always get spin+, since you already *told* us that you got spin+ the first time -- i.e., we already *know* that the actual configuration point after the first measurement is in the spin+ (and "patrick believes spin+") branch of the wf. Make sense? :smile:

OK, I'm sure that will raise some additional questions/thoughts...

No, I'm talking about the entangled state
1/sqrt(2) {|me+>|spin+> + |me->|spin->}

I think you agree with me that this is the wavefunction after measurement, right ?
Now, I KNOW that the "token" is in the first branch, but the wavefunction is still the above superposition, which is the "initial state" to continue with. And it was postulated that the "initial uncertainty of the token" must go like |psi|^2, so according to this view, the token should be with 50% chance in the first branch, and with 50% chance in the second.

If I take as initial state
1/sqrt(2) {|me+> |spin+> + |me->|spin->} and I do AGAIN a spin measurement, I will find a state:
1/sqrt(2){|me++>|spin+> + |me-->|spin->} through unitary evolution, and I was supposed to take as initial distribution of tokens a 50-50 distribution (remember: I had to start out with an initial distribution equal to |psi|^2 in order for Bohm's theory to be on par with QM).
So I now have the paradoxical situation that AFTER the second measurement, I should have 50 % chance to have measured twice + and 50 % chance to have measured twice -.
But this cannot be true ! After the first measurement, I ALREADY KNEW that the token was in the first branch.
So for the second measurement, I'm apparently NOT allowed anymore to take the function "psi" and assign the initial distribution of {Q1,...} a probability according to |psi|^2!

So, depending on what I know, the initial uncertainty on the {Q1...} variables can be given by a probability distribution of |psi|^2, or it cannot? But then why should we assume an initial distribution according to |psi|^2, if sometimes it isn't?

cheers,
Patrick.
 
ttn said:
Yeah, I think what's throwing you here is that you're not being consistent about what you mean by "closed system." If you're including yourself and all your favorite lab equipment inside the system, then you have to include the factors for all those things in the wf you write down. And if you don't want to include all that junk in your "system", well, good luck *measuring* the state of anything while keeping the system closed!

Yes, I want to include it all. It is possible. After all, in a closed room you can do measurements, no? Or is that prohibited? Is the result of a measurement dependent on whether the door is open?

EDIT: I should maybe add that the |me> states in the previous message are "the rest of the room, including me, and my measurement apparatus".

I understand the argument about the separation in configuration space. It is the same as the high dimensionality in decoherence, and comes down to the fact that we can consider the individual EVOLUTION of the branches without - for all practical purposes - any interference from the other branches.

But the thing I'm having difficulties with is that, in order for Bohm to get the right results out, he has to POSTULATE that the initial distribution of {Q} must be equal to |psi|^2.
Clearly, it is not always the case, for instance in the trivial case I showed, when I (part of the closed room) KNOW already in which branch the token must be. But if it is not always the case, then how fundamental is this ? And if it is not a fundamental thing that the initial distribution of the {Q} is given by |psi|^2, then why should it be so in the first place ?

cheers,
patrick.
 
ttn said:
Is this really an extra postulate? Can you think of an example of a measurement of something that isn't, in fact, a measurement of the position of something?

The color of light ?

cheers,
Patrick.
 
  • #10
ttn said:
The new, delta-function initial probability distribution does indeed predict that the particle will land at some one particular spot on the screen later -- the very spot where it does in fact land! So what's the problem??

Yes, but on the condition that you keep the wavefunction as the original wavefunction, and do not replace it with a delta function. Because if you do so, your trajectories will change, and your particle will NOT land where it actually landed, the spot from which you tracked back.

cheers,
Patrick.
 
  • #11
vanesch said:
No, I'm talking about the entangled state
1/sqrt(2) {|me+>|spin+> + |me->|spin->}

I think you agree with me that this is the wavefunction after measurement, right ?

Sure.


Now, I KNOW that the "token" is in the first branch, but the wavefunction is still the above superposition, which is the "initial state" to continue with. And it was postulated that the "initial uncertainty of the token" must go like |psi|^2, so according to this view, the token should be with 50% chance in the first branch, and with 50% chance in the second.

Sure, nothing *forces* you to throw away the now-dynamically-irrelevant parts of the wf. So in principle you are free to believe that there is a 50% chance for each of the two branches to "contain the token." But then that contradicts what you said in the original setup of this example -- namely, that you learned that the outcome was *in fact* spin+. But if you want to carry along the empty part of the wf in your calculations, you're free to do so.


If I take as initial state
1/sqrt(2) {|me+> |spin+> + |me->|spin->} and I do AGAIN a spin measurement, I will find a state:
1/sqrt(2){|me++>|spin+> + |me-->|spin->} through unitary evolution, and I was supposed to take as initial distribution of tokens a 50-50 distribution

So what's the problem? If -- AS YOU CLAIMED BEFORE! -- the "token" was in the + branch before the second measurement, it will remain there after the second measurement too. Of course, if you pretend you don't know where the token is, you will say there's a 50/50 chance for +/-. What's the problem? :-p


So I now have the paradoxical situation that AFTER the second measurement, I should have 50 % chance to have measured twice + and 50 % chance to have measured twice -.
But this cannot be true ! After the first measurement, I ALREADY KNEW that the token was in the first branch.

Look, either you somehow knew that the actual configuration after the first measurement was +, or you didn't know that. If you didn't, there's no problem saying there's a 50/50 chance after another measurement, obviously. If you did, then you know that the chances *aren't* actually 50/50 because you have more information about the actual configuration. You evolve both the wf and the "token location" forward in time, and you find (duh) that the wf is |me++>|spin+> + |me-->|spin-> AND THAT THE TOKEN IS IN THE FIRST TERM. To claim that there is really a 50/50 chance that the second outcome will be -, you have to *contradict* what you said earlier -- namely, that the token was in the first term.

Is your point just that this means the probability isn't always psi^2? i.e., How come it's possible sometimes to know the location of the "token" with better precision than |\psi|^2? Is that what you're after?

I think you just have to remember that the "location of the token" suffers merely "regular old classical type" uncertainty here. So if you learn something about it, you should incorporate that knowledge into your future predictions. The |\psi|^2 law isn't like a fundamental law of nature, it's just something that describes how particles seem, empirically, to be distributed within their wfs. If you learn more, use it (typically by tossing out now-irrelevant parts of the wf).


So for the second measurement, I'm apparently NOT allowed anymore to take the function "psi" and assign the initial distribution of {Q1,...} a probability according to |psi|^2!

I'm not entirely sure why you'd want to. Doing so would literally contradict what you claim to have just learned. It's like saying: flip a coin; there's a 50% chance for heads; you see it land tails; can't I still say there's a 50% chance of heads? Well, you can say that, sure. Maybe what you mean is that there's a 50% chance of heads for the next flip or something. But if what you mean is that there's a 50% chance that the tails staring you in the face is really a heads, I'm not sure what would possess you to say that.

But in principle you can do this -- in your words, you are certainly "allowed" to (say, ignore what you learned and) thus use the full

|spin+>|me+>+|spin->|me->

state. But I don't think any contradictions will arise from this -- if you don't later *renege* on the claim that the "token" is in the ++ part of the config space! For example, if you do this, you'll find yourself in the "me++" state later, never in the "me--" state...


So, depending on what I know, the initial uncertainty on the {Q1...} variables can be given by a probability distribution of |psi|^2, or it cannot? But then why should we assume an initial distribution according to |psi|^2, if sometimes it isn't?

The question of where the psi^2 distribution *comes from* in Bohmian mechanics is an important one. Let's just leave it for another message...
 
  • #12
Just to summarize my "objections" (which are more nitpicking, I suppose):

The link between the "initial probability distribution of the token" and |psi|^2 is not 100% clear. If starting from scratch, you take them equal, but if not starting from scratch and already knowing some stuff, you assign other probabilities to your initial situation.

On one hand this is not so dramatic. After all, it is a bit like in statistical mechanics: you can propose a certain probability distribution of the microstate phase space ; but on knowledge of say, the position of one molecule, this changes the probabilities, without affecting macroscopic parameters such as entropy or temperature.
However, what is a bit disturbing is that the link between the initial distribution of {Q} and |psi|^2 seems to be broken ; so it is not axiomatic. So then it should be _demonstrated_ that this distribution must be given by |psi|^2 somehow. But I can imagine that this can be done (imposing, maybe, certain conditions on the initial state of the universe).

Once |psi|^2 and the distribution of tokens are completely separate, another issue comes in. It is the fact that the wavefunction is an essential part of the description of the system. So the fact that, in the wave function
|me+>|spin+> + |me->|spin->, I know that the token is in the first branch does not mean that the second, empty one loses its meaning. It exists, but is token-less. So we cannot deny it. We can deny it for all practical purposes, because of the size of the configuration space and the lack of overlap, except in EPR-like situations. This is identical to MWI in fact, except that there now is a "token mechanism" that implements the Born rule if, somehow, we apply the right initial conditions.

But I now have a problem with an EPR setup.

Imagine I have 2 entangled electrons, in the spin state |z+>|z-> - |z->|z+>

Imagine I take the second particle on a spaceship and take it 100 lightyears from here, while the first particle remains in a container on earth.
500 years later, the descendants of the travellers decide on whether they will do an x-spin measurement or a z-spin measurement with a Stern-Gerlach machine.
About around the same time, on earth, a similar decision is made.

How does this translate in the Bohmian formulation ?

I would think that locally on earth, whatever the travellers do, this will not influence the measurement on earth, because the wavefunction is unaffected by the measurement results: it continues to evolve unitarily, doesn't it?
But then we are in Bell Locality conditions, aren't we ?

So how do we obtain the QM results ?

Could you elaborate on such an EPR situation in Bohm ?

cheers,
Patrick.
 
  • #13
vanesch said:
But the thing I'm having difficulties with is that, in order for Bohm to get the right results out, he has to POSTULATE that the initial distribution of {Q} must be equal to |psi|^2.
Clearly, it is not always the case, for instance in the trivial case I showed, when I (part of the closed room) KNOW already in which branch the token must be. But if it is not always the case, then how fundamental is this ?

Not very fundamental at all.


And if it is not a fundamental thing that the initial distribution of the {Q} is given by |psi|^2, then why should it be so in the first place ?

There are several schools of thought. Bohm himself speculated that there might be some sub-quantum randomness that would make the particles jiggle their way into the psi^2 distribution as a kind of "maximum entropy" equilibrium state. Valentini wrote some pretty cool papers on this in the 1990's and showed that you can formulate Bohm's idea clearly. Others, however, prefer a different approach based on the fact that psi^2 is the only natural candidate for a probability density because it is the only distribution which will remain true if it is true at some one time. This suggests it's not crazy to think that "god" distributed the "token" according to psi^2 at the beginning of the universe; then it can be shown that subsystems will obey it now.
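
For definiteness, the equivariance property being appealed to here is just the following (a standard manipulation, written out in my own notation; nothing beyond what the Schroedinger equation and the guidance law already give):

```latex
% The Schroedinger equation implies a continuity equation for rho = |\psi|^2:
\partial_t |\psi|^2 + \nabla\cdot j = 0, \qquad
j = \frac{\hbar}{m}\,\mathrm{Im}\left(\psi^*\,\nabla\psi\right).
% An ensemble of configurations transported by the guidance law dQ/dt = v = j/|\psi|^2
% has a density P(q,t) obeying the ordinary continuity equation of a flow:
\partial_t P + \nabla\cdot(P\,v) = 0.
% If P = |\psi|^2 at one time, then P v = j at that time, so P and |\psi|^2 satisfy the
% same first-order equation with the same data, and hence stay equal:
P(q,t) = |\psi(q,t)|^2 \quad \text{for all } t.
% A distribution of any other functional form is, in general, not preserved by the flow;
% that is the sense in which |\psi|^2 is the natural "equilibrium" candidate.
```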

And there are some other ideas out there too for thinking about where this comes from. I personally haven't spent a lot of time worrying about them, though. I think once you recognize the non-fundamentality of the Born rule in Bohm's theory, it doesn't much matter "why" the Born rule is true. A perfectly good answer is just to say: the Born rule is based purely on experiment. Given the deterministic dynamics of Bohm, we just have to find out *from experiment* how particles are sprinkled into their guiding waves. And psi^2 has always done the trick perfectly.

 
  • #14
vanesch said:
The color of light ?

How do you measure that exactly? With a prism? Or were you thinking of direct perception of the color with human eyeballs?
 
  • #15
ttn said:
The |\psi|^2 law isn't like a fundamental law of nature, it's just something that describes how particles seem, empirically, to be distributed within their wfs.

Ok, that's the point. But then, if it isn't a fundamental law of nature, one should explain to me why I should take THAT as an initial distribution when I'm ignorant. But I don't claim that it cannot be done. Only, this now somehow has to come out of the dynamics and cannot be postulated.

cheers,
Patrick.
 
  • #16
vanesch said:
Yes, but on the condition that you keep the wavefunction as the original wavefunction, and do not replace it with a delta function. Because if you do so, your trajectories will change, and your particle will NOT land where it actually landed, the spot from which you tracked back.

Oh, OK, I get your point.

This is a good illustration of the relative fundamentality of the wave function and the psi^2 probability distribution. In normal situations, the psi^2 prob dist seems to be the correct "best guess". (Abnormal situations here evidently include ones in which you have some additional information that you could use to winnow down the probability distribution in some way -- in which case, you ought to use it.) But, in principle, there's no reason why, say, god can't know the exact locations of all the particles. Then he'd be able to predict with certainty the outcomes of all sorts of experiments that we can only guess at with Born rule probabilities. But this knowledge of god's wouldn't change the wave functions. In particular, the wave functions wouldn't become delta functions just because for god P(x) = delta(x).

So, when you are doing "retrodiction", you can be just like god. You can know exactly where the particle was 5 minutes ago -- i.e., you can have knowledge that is much better than psi^2. But that's just knowledge. It doesn't make the wf change.
 
  • #17
vanesch said:
Ok, that's the point. But then, if it isn't a fundamental law of nature, one should explain to me why I should take THAT as an initial distribution when I'm ignorant. But I don't claim that it cannot be done. Only, this now somehow has to come out of the dynamics and cannot be postulated.

The Valentini papers I mentioned argue that you can derive this (in some sense) from the dynamics.

But I don't think I agree with you that this is necessary. I mean, it would be cool if you could do it, no doubt. But it's not like it makes the theory wrong if you just have to accept the psi^2 rule as an additional postulate to make the predictions match experiment.
 
  • #18
ttn said:
How do you measure that exactly? With a prism? Or were you thinking of direct perception of the color with human eyeballs?

Yes, for instance. The reason is that Bohm takes it easy with "the preferred basis problem" by just saying that it has to be the position basis. Now, decoherence people go to a great deal of effort and come to the conclusion that for macroscopic systems, often, the "decoherent basis" is ... well, the position basis. At least for charged or massive things.
But this is different for the EM field. There, the coherent states (classical plane EM waves) are closer to the natural basis. Hence my point. Now, as we, human beings, are mainly made up of massive and charged matter, the position basis will probably do. And if you dissect my eyeball, you'll probably say that in the end, I'm measuring the position of potassium and sodium ions in my optical nerve cells. If I were a creature made out of photons, however, the result would be different.

Now, I can still throw something at you. I guess that when you talk about configuration space, this is in the Hamiltonian sense. Now, what stops me from applying a canonical transformation, redefining Q as the old canonical momenta "p", and taking this as my configuration space?

cheers,
Patrick.
 
  • #19
vanesch said:
On one hand this is not so dramatic. After all, it is a bit like in statistical mechanics: you can propose a certain probability distribution of the microstate phase space ; but on knowledge of say, the position of one molecule, this changes the probabilities, without affecting macroscopic parameters such as entropy or temperature.

Yes.


Once |psi|^2 and the distribution of tokens are completely separate, another issue comes in. It is the fact that the wavefunction is an essential part of the description of the system. So the fact that, in the wave function
|me+>|spin+> + |me->|spin->, I know that the token is in the first branch does not mean that the second, empty one loses its meaning. It exists, but is token-less. So we cannot deny it. We can deny it for all practical purposes, because of the size of the configuration space and the lack of overlap, except in EPR-like situations. This is identical to MWI in fact, except that there now is a "token mechanism" that implements the Born rule if, somehow, we apply the right initial conditions.

An objection that some MWI people make against Bohm is the following: since all these empty branches of the wf are "really out there", what makes you think you're not *in* one of them? Isn't there a "me-" wave function factor out there somewhere in config space who just measured "spin-" for his particle and will get "spin-" again if he measures again and so forth. And since that story is exactly the same as the story for the corresponding + terms (the only difference being that the + terms have "the token" sitting in them, but...) what's the difference really? Maybe you only *think* that the + terms are real. After all, the guy in the - terms thinks he is real! etc.

I don't have any great answer at the ready for that, so maybe I'm not doing a lot for my case here by bringing it up. But it seemed to be the direction you were heading and I do think it's an interesting point.



But I now have a problem with an EPR setup.

Imagine I have 2 entangled electrons, in the spin state |z+>|z-> - |z->|z+>

Imagine I take the second particle on a spaceship and take it 100 lightyears from here, while the first particle remains in a container on earth.
500 years later, the descendants of the travellers decide on whether they will do an x-spin measurement or a z-spin measurement with a Stern-Gerlach machine.
About around the same time, on earth, a similar decision is made.

How does this translate in the Bohmian formulation ?

Bohm's theory is nonlocal. Sending one particle through some magnetic fields with gradients in some particular direction will cause the distant particle to deviate a bit from the path it would otherwise have taken.

I would think that locally on earth, whatever the travellers do, this will not influence the measurement on earth, because the wavefunction is unaffected by the measurement results: it continues to evolve unitarily, doesn't it?

I don't understand what you mean exactly. What the travellers do (e.g., which axis they point their SG magnets along) *will* influence the particle on earth. Look at the guidance condition in Bohmian mechanics (v = ...). The velocity of each particle depends on the entire configuration! That's the nonlocal part of the mechanism. When one particle gets "kicked", any of its entangled brethren will feel the kick!
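
A toy illustration of that configuration dependence (a spinless 1D two-particle example of my own, not taken from Holland or anywhere else): the guidance velocity of particle 2 depends on where particle 1 actually is when the wave function is entangled, and does not when the wave function factorizes.

```python
import numpy as np

# v2 = Im( d(psi)/dq2 / psi ) evaluated at the actual configuration (Q1, Q2).
# With an entangled psi(q1,q2) this depends on Q1; with a product state it does not.
# Wave functions and numbers below are illustrative choices, not from the thread.

def g(q, x0, k0):
    return np.exp(-(q - x0)**2/4 + 1j*k0*q)     # Gaussian packet, centre x0, momentum k0

def v2(psi, Q1, Q2, eps=1e-5):
    """Velocity of particle 2 via a numerical partial derivative in q2."""
    dpsi = (psi(Q1, Q2 + eps) - psi(Q1, Q2 - eps))/(2*eps)
    return np.imag(dpsi/psi(Q1, Q2))

# EPR-like entangled state ~ g_a(q1) g_b(q2) + g_b(q1) g_a(q2)  (unnormalised)
entangled = lambda q1, q2: g(q1, -3, 2.0)*g(q2, +3, -2.0) + g(q1, +3, -2.0)*g(q2, -3, 2.0)
product   = lambda q1, q2: g(q1, -3, 2.0)*g(q2, +3, -2.0)   # factorised, for comparison

Q2 = 0.0
for Q1 in (-3.0, 0.0, +3.0):
    print(f"Q1 = {Q1:+.1f}:  v2(entangled) = {v2(entangled, Q1, Q2):+.3f}   "
          f"v2(product) = {v2(product, Q1, Q2):+.3f}")
# v2(product) is the same whatever Q1 is; v2(entangled) swings from about -2 to +2
# as Q1 moves from one packet to the other -- the velocity of each particle depends
# on the whole configuration, which is the nonlocal mechanism at work.
```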


But then we are in Bell Locality conditions, aren't we ?

Bohm's theory violates Bell Locality!
 
  • #20
vanesch said:
Now, I can still throw something at you. I guess that when you talk about configuration space, this is in the Hamiltonian sense. Now, what stops me from applying a canonical transformation, redefining Q as the old canonical momenta "p", and taking this as my configuration space?


If you can figure out how to talk about things like measurement-device-pointers in the momentum basis, more power to you.
 
  • #21
ttn said:
An objection that some MWI people make against Bohm is the following: since all these empty branches of the wf are "really out there", what makes you think you're not *in* one of them? Isn't there a "me-" wave function factor out there somewhere in config space who just measured "spin-" for his particle and will get "spin-" again if he measures again and so forth. And since that story is exactly the same as the story for the corresponding + terms (the only difference being that the + terms have "the token" sitting in them, but...) what's the difference really? Maybe you only *think* that the + terms are real. After all, the guy in the - terms thinks he is real! etc.

I don't have any great answer at the ready for that, so maybe I'm not doing a lot for my case here by bringing it up. But it seemed to be the direction you were heading and I do think it's an interesting point.

Well, I can surely tell you that Bohm is much CLOSER to MWI than I thought! In fact, Bohm has an ugly mechanism to distribute tokens, with lever arms that are unattenuated over 100 lightyears, and MWI has an ugly mechanism involving minds and subjective experiences :-p

But - to my surprise in fact - we agree on the most important part: the unitary part of QM is to be taken seriously. No-one ****s the wave function :smile:

I guess it is somehow a matter of taste which ugliness is more acceptable ; probably this indicates where something goes *seriously* wrong in both theories - my hope goes out to gravity (but I have to say that the progress made by superstring people is not very promising... they should fail if we are going to get out of this !)



Bohm's theory is nonlocal. Sending one particle through some magnetic fields with gradients in some particular direction will cause the distant particle to deviate a bit from the path it would otherwise have taken.



I don't understand what you mean exactly. What the travellers do (e.g., which axis they point their SG magnets along) *will* influence the particle on earth. Look at the guidance condition in Bohmian mechanics (v = ...). The velocity of each particle depends on the entire configuration! That's the nonlocal part of the mechanism. When one particle gets "kicked", any of its entangled brethren will feel the kick!




Bohm's theory violates Bell Locality!

I haven't integrated this completely yet. I tried to figure out an EPR case (which is a bit nasty in the usual formulation in Bohm, because spin states are a bit special in Bohm's approach).
I think I need a hand in working out EPR from the Bohm point of view.

But I'll do that tomorrow: going to bed right now !

cheers,
patrick.
 
  • #22
vanesch said:
I haven't integrated this completely yet. I tried to figure out an EPR case (which is a bit nasty in the usual formulation in Bohm, because spin states are a bit special in Bohm's approach).
I think I need a hand in working out EPR from the Bohm point of view.

Yeah, the EPR scenario with spin is kind of ugly for just the reason you state. It's worked out in detail in "The Quantum Theory of Motion" by Peter Holland (a kind of textbook on the Bohm approach). But there's nothing all that illuminating in it, if you ask me. The high points are: (1) performing a certain measurement on particle A tweaks the trajectory of particle B just exactly enough to (2) reproduce exactly the QM predictions.

It would be nice to have a simpler example of this nonlocal connection, to bring out exactly what it does and what it doesn't do -- that is, why you have to have it to reproduce the right predictions, but also why it is "hidden" by the (assumed) psi^2 distributions.

OK, time for other work...
 
  • #23
Just for your information: I have to say I'm pleasantly surprised by Bohm's theory, which I now view more as "MWI with a token" (a token, such as in a finite-state machine diagram: you have all the potential states on the diagram, but there is ONE that is the active one).
Before, I thought that the complete description of the state was in the {Q1...QN}, and that the wave function business was just an intermediate help; but it is clear that the wavefunction, in unitary evolution, is "just as real" in Bohm as in MWI, and that it never collapses.
So in this wavefunction, "light waves" can scatter off mirrors that aren't there (only the empty wavefunction of the mirror was there: you understood it, we're in another branch or world than the one with the token), and this piece of wavefunction, scattered off the ghost of a mirror, will still pull and push on the "real" dynamics of Q1...QN (only, in most cases we won't be able to recognize it, and it will just be "noise" on Q1...QN). Also, the {Q1...Q5} of some electrons here on Earth are constantly, immediately and without any attenuation influenced by the {Q15,Q16} of a proton in the nucleus of a white dwarf somewhere in the Andromeda galaxy, because 7 billion years ago this proton formed a hydrogen atom with each of these electrons in turn, before being hit by a cosmic ray and propelled towards Andromeda, and hence is entangled with them.
Mmmm... there is a much stronger coupling between
1) VERY REMOTE things (you talk about non-locality!)
2) different "dead" branches of the wavefunction
and my actual, local, real state {Q1,Q2}
than in any "pure quantum" view, where this is just attributed to irreducible randomness, but you have the advantage of having an "objective token" which differentiates the "currently active branch" of the wavefunction from the others.

These couplings sound a bit like astrology, no ? :rolleyes: But somehow I do have some sympathy for this view ; after all, saying: your future will be determined strongly by what happens right now inside a star in the Andromeda galaxy to a proton ; or saying: your future is uncertain,
are about equivalent statements :smile:

What I don't like in this approach, as I think I understand it now, are the definite choices it imposes where we used to think that there were symmetries. For instance, the choice of the configuration basis and the obvious lack of Lorentz symmetry. But that's the price to pay for having a token which objectively indicates a branch of the wavefunction to be 'the one'. So I don't know if, given this aspect, working in the Bohm formalism gives you a lot of inspiration for the next thing to come, if you want the results to stick to certain symmetries you've violently broken in the 'token machinery'. So I don't know if it is a very useful exercise to delve into the precise machinery of Bohm for that token.

You can in fact just stick to ordinary QM, and each time you "measure" something, you say that somehow you've gained some information about that token that is jumpin' around, without thinking too much about how exactly that token is jumpin' around :-)

cheers,
Patrick.
 
  • #24
ttn said:
Bohm's theory is nonlocal. Sending one particle through some magnetic fields with gradients in some particular direction will cause the distant particle to deviate a bit from the path it would otherwise have taken.

I don't understand what you mean exactly. What the travellers do (e.g., which axis they point their SG magnets along) *will* influence the particle on earth. Look at the guidance condition in Bohmian mechanics (v = ...). The velocity of each particle depends on the entire configuration! That's the nonlocal part of the mechanism. When one particle gets "kicked", any of its entangled brethren will feel the kick!

Bohm's theory violates Bell Locality!

OK, I follow the non-local idea. But where are the hidden variables that make it a realistic theory?

Bell's Theorem relies on the explicit notion that when we measure A and B, there exists another measurement C which could have been performed. This is the requirement of realism, which I guess maps to the non-contextuality discussed in Goldstein's article. Bell's Theorem says that if I hypothesize C, then I cannot match the predictions of QM. I don't think there is any question that we expect BM to match the predictions of Copenhagen QM on this issue.

Therefore there is NO C. I don't see how this argument changes with Bohmian Mechanics. I know it is possible to hand-wave past this part of the discussion, by saying there is no problem - the theory is non-local. Yeah, I've heard that plenty of times. OK then, BM is non-local and there is still no A, B and C. Here is Bell's conclusion:

...thus the quantum mechanical expectation value cannot be represented, either accurately or arbitrarily closely, in the form P(a,b)=\int d\lambda p(\lambda)A(a,\lambda)B(b,\lambda)

and the only assumptions are that i) the A() and B() functions are independent; and ii) that there is a C that existed independently. So with BM we take i) as false, ok, fine. Where does that make ii) true? You still don't get statistics that allow for an independently "real" C. At the end of the day, it seems to me that the results are dependent on the observations made and there is still no more complete specification of the system possible.

Yet, Bell specifically refers to BM in his paper as being a hidden variable interpretation. But it seems to me that in Bohm's interpretation, the results are NOT predetermined at t=0 and there is no greater underlying reality. Can anyone remove me from my state of confusion?
 
  • #25
DrChinese said:
OK, I follow the non-local idea. But where are the hidden variables that make it a realistic theory?

Well, I'm still a bit fuzzy on what it means for a theory to be "realistic". (Actually, that isn't what's fuzzy -- what's fuzzy is what exactly it means for a theory *not* to be realistic. Doesn't a theory by definition contain some statements about some physical objects, at least in physics? Is orthodox qm with wave functions and a collapse postulate realistic??)

But the first question is easy to answer: the "hidden variables" are the particle positions. In Bohmian mechanics, all particles have definite positions at all times (and hence follow definite trajectories) whether or not their wave functions are position eigenstates. So, e.g., in the "Einstein's Boxes" type of example, you split the wave function in half, and the particle simply gets carried along with one or the other of the half-packets (which one depends on the initial position of the particle before the splitting). So when you open the boxes and look for the particle, you find it in one and only one box, not because of any weird nonlocal collapse process, but simply because the particle was in that box all along.

Of course as has been pointed out many times, this makes it clear that "hidden variable" is a really stupid bit of terminology for Bohmian particle positions. Since measurements of the positions of particles actually reveal the real pre-existing positions of those particles, the particle positions are anything but hidden. If anything is "hidden" it's the wave function, since that is the thing you can never actually determine by measuring something. So maybe, in a more rational parallel universe, they refer to regular QM as the hidden variable theory...

Bell's Theorem relies on the explicit notion that when we measure A and B, there exists another measurement C which could have been performed. This is the requirement of realism, which I guess maps to the non-contextuality discussed in Goldstein's article. Bell's Theorem says that if I hypothesize C, then I cannot match the predictions of QM. I don't think there is any question that we expect BM to match the predictions of Copenhagen QM on this issue.

Therefore there is NO C. I don't see how this argument changes with Bohmian Mechanics. I know it is possible to hand-wave past this part of the discussion, by saying there is no problem - the theory is non-local. Yeah, I've heard that plenty of times. OK then, BM is non-local and there is still no A, B and C. Here is Bell's conclusion:

...thus the quantum mechanical expectation value cannot be represented, either accurately or arbitrarily closely, in the form P(a,b)=\int d\lambda p(\lambda)A(a,\lambda)B(b,\lambda)

and the only assumptions are that i) the A() and B() functions are independent; and ii) that there is a C that existed independently. So with BM we take i) as false, ok, fine. Where does that make ii) true? You still don't get statistics that allow for an independently "real" C. At the end of the day, it seems to me that the results are dependent on the observations made and there is still no more complete specification of the system possible.

I'm extremely confused by your notation. In Bell's formula, "A" and "B" refer to the outcomes of (say) spin measurements on two particles, along axes given by "a" and "b" respectively. What is this "C" that you are speaking of?

Yet, Bell specifically refers to BM in his paper as being a hidden variable interpretation. But it seems to me that in Bohm's interpretation, the results are NOT predetermined at t=0 and there is no greater underlying reality. Can anyone remove me from my state of confusion?

Even a deterministic theory can't predict what will happen when there is a free choice decision about what to measure. It's like saying: according to classical Newtonian mechanics, when the pitcher pitches the ball at such and such a velocity, where will it end up at the end of the play? Well, there's no way to answer that unless you know whether the batter swings at the ball or not.

Likewise in the Bohmian analysis of spin measurements. No, Bohm's theory can't predict what the outcomes of the two spin measurements will be, even if you pretend you knew the exact initial configuration of the particles. But if you specify the details of the measurements (e.g., particle 1 will encounter a SG device with its magnetic field in the z-direction, etc.) then Bohm's theory (assuming knowledge of the exact initial conditions) would allow you to calculate exactly what the outcomes will be.

It is "realist" in the obvious general sense that it is able to talk about the actual state of things prior to any measurements. The measurement result doesn't "magically pop into existence" upon measurement the way it does in orthodox QM. Rather, the measurement simply reveals the endpoint of a perfectly definite trajectory that the electron took through the SG's magnetic fields. It is not, however, "realist" in the sense of attributing definite pre-measurement spin values to the particles. In Bohmian mechanics, spin isn't even a *property* of the particles; rather, it is carried by the wave function. The *only* property the particle has is its position. So what happens in a SG device is that the magnetic fields cause the wave function to split into two spatially separated components, one of which emerges from the "spin up output port", the other from the "spin down output port." And just as in "Einstein's Boxes", the electron is simply carried along with one or the other of those packets depending on what the initial position happened to be.

David Albert gave a nice example of how this means spin is "contextual" according to Bohm's theory. I'll tell you about that example in another message if you're interested, because right now I got to go! :smile:
 
  • #26
DrChinese said:
Therefore there is NO C. I don't see how this argument changes with Bohmian Mechanics. I know it is possible to hand-wave past this part of the discussion, by saying there is no problem - the theory is non-local. Yeah, I've heard that plenty of times. OK then, BM is non-local and there is still no A, B and C. Here is Bell's conclusion:

...thus the quantum mechanical expectation value cannot be represented, either accurately or arbitrarily closely, in the form P(a,b)=\int d\lambda p(\lambda)A(a,\lambda)B(b,\lambda)

and the only assumptions are that i) the A() and B() functions are independent; and ii) that there is a C that existed independently. So with BM we take i) as false, ok, fine. Where does that make ii) true? You still don't get statistics that allow for an independently "real" C.

The way I understood Bohmian mechanics (from only a few days ago!), is as follows:
The state of the particle PAIR is described by {Q1,Q2, psi(q1,q2,s1,s2)}. Note that the wave function is explicitly a part of the state, and it is only IN COMBINATION with the wavefunction that Q1 and Q2 make sense ; note also that the wavefunction is a double spinor (via s1 and s2) but that the particles do not have a spin value S1 or S2.
Now, imagine that Alice (particle 1) makes a spin measurement along the z-axis with a SG apparatus. By doing so, she will add a term in the hamiltonian that makes psi evolve into two wavepackets as a function of q1 and s1, one for "spin 1 z up" and the other one for "spin 1 z down", and through a very complicated machinery, this will pull upon the variables Q1 and Q2.
If Alice had performed an x-spin measurement, she would have oriented her magnetic field in the x-axis, psi would have split differently in two wave packets as a function of q1 and s1 (due to another term in the Hamiltonian), and this would have pulled DIFFERENTLY onto Q1 and Q2.
Alice will only measure Q1 after the evolution, but this Q1 will have evolved differently according to whether she applied a z-field or an x-field. In the same way, Q2 will have evolved differently (but Alice has no direct access to Q2, only Bob has).

Now if Bob then measures along x or z or another angle, he will work with different Q1 AND Q2 initial values and a different initial psi according to what Alice did. Moreover, Q1 and Q2 will not only have different initial values as a function of what Alice did, they will also EVOLVE differently according to what Alice did, because the psi are different. And of course this evolution will also be different according to whether Bob measures x or z or another angle, because this will change the term in the hamiltonian, hence psi, and hence the Q1 and Q2 evolution. At the end, he measures the value of Q2.
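
In case it helps, here is how I would write the "pulling" out explicitly (my own sketch of the standard two-particle guiding law, with spinor-valued psi and the spin indices summed over; nothing beyond the {Q1, Q2, psi} notation above is taken from this thread):

```latex
% Guiding equations for the pair; both velocities are evaluated at the
% ACTUAL configuration (Q_1, Q_2), which is where the non-locality enters:
\frac{dQ_1}{dt} \;=\; \frac{\hbar}{m_1}\,
  \operatorname{Im}\frac{\psi^{\dagger}\,\nabla_{q_1}\psi}{\psi^{\dagger}\psi}
  \bigg|_{(q_1,q_2)=(Q_1,Q_2)},
\qquad
\frac{dQ_2}{dt} \;=\; \frac{\hbar}{m_2}\,
  \operatorname{Im}\frac{\psi^{\dagger}\,\nabla_{q_2}\psi}{\psi^{\dagger}\psi}
  \bigg|_{(q_1,q_2)=(Q_1,Q_2)}.
```

Since Alice's choice of field direction changes the term she adds to the hamiltonian, it changes psi(q1,q2,t), and therefore, through the right-hand sides evaluated at the actual configuration, the subsequent evolution of both Q1 and Q2.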

Is that correct, professor ?

It is in fact amazing that there is still information-locality in this theory, everything being connected with everything instantaneously and without any attenuation with distance. It is as if systems that are entangled are somehow connected with rigid rods !
Of course, this view is blatantly in contradiction with Lorentz invariance (or even with a canonical transformation in phase space, I think), because you need to say whether it was Bob or Alice who measured first.

Nevertheless, the wave function evolves EXACTLY in the same way as in QM, unperturbed by any measurement of Q1 or Q2. It is of course affected by the interactions in the measurement apparatus, which add terms to the hamiltonian and thus modify the evolution.

So, to answer your question, there was not even an A or a B, let alone a C. A spin measurement is just an illusion according to Bohm. It is just that the wave function is a spinor which acts as if the particle had a magnetic moment, and hence gets extra terms in the hamiltonian when you put it in a magnetic field. But in the end, you just measure the only measurable thing a particle has, namely its position Q1. We are deluded in thinking there is some property of the particle that is "spin". It is just something in the wave function that affects its position evolution; we're just measuring different positions according to the values encoded in Q1, Q2 and the wave function, whose evolution is determined instantaneously by the measurement the other person performs (while thinking he's doing a spin measurement).

Again, is that correct, professor ?

cheers,
Patrick.
 
  • #27
Thanks both of you for taking time to explain. Both of you appear to agree on BM descriptions of the following:

i. There is no C when measuring A and B. This makes perfect sense to me.

ii. The outcome is not determinate at time t=0. The determinism creeps in because the wave function incorporates influences which are explicitly non-local. Because the entangled particles are non-separable, you are not free to discuss the evolution of one particle alone; that really has no meaning.

iii. There is no direct signaling from measurement apparatus A to measurement apparatus B. The relationship is indirect via interactions with the entangled particles.

ttn, you asked what the definition of "realistic" is. Mine may be different from what some others give: is there a C when we measure A and B? This definition goes back to the EPR paper and its condition covering the "elements of reality":

EPR: "(2)...When the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality."

Bell: "It follows that c is another unit vector [in addition to a and b]."

But I can also see that realism could be defined as being a specific mechanism to describe the observed effects - the hidden variables simply still do not provide for the possibility of any further specification of the system. Under that definition, BM stands as clearly non-local and realistic to me. Under mine, it is non-local and also non-realistic.
 
  • #28
DrChinese said:
EPR: "(2)...When the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality."

Bell: "It follows that c is another unit vector [in addition to a and b]."

Yes, that is I think what Bohmians call "naive operator realism". But according to Bohm we're deluded in thinking there is something such as a definite spin. There are only positions of particles, and "spinor wavefunctions". We're just doing a complicated experiment of interactions of magnetic fields on particles, and see how they move in magnetic fields. But they don't have such a thing as spin. We're DELUDED :biggrin:

But I can also see that realism could be defined as being a specific mechanism to describe the observed effects - the hidden variables simply still do not provide for the possibility of any further specification of the system. Under that definition, BM stands as clearly non-local and realistic to me. Under mine, it is non-local and also non-realistic.

Hehe, it is funny that I got exactly the same remark against my MWI view (that the things we measure and observe should correspond to something "real" out there) from ttn, a Bohmian.

But ok, Bohmian mechanics rose in my appreciation, honestly. I see it now as a first heroic effort, against all of the Copenhagen stuff, to build a consistent theory. And - they will have a hard time admitting it - but Bohmians are in fact first-generation MWI-ers ! They postulate reality to the wavefunction, together with its absolute unitary evolution. As such, they do away with the cleavage between "measurement processes" and "physical interactions", the inconsistency in Copenhagen QM, because what Albert, today, considers as a measurement, Bruce, tomorrow, will analyse as a physical interaction. And the wave function comes out totally different.
This is the good part.

Next comes their state in configuration space. I call it a "token" in a preferred basis. It has the nice feature of giving an objective reality to a measurement without, in any way, having to switch from "evolution" to "measurement".
What is terribly ugly, however, is the BLATANT nonlocality in the evolution equations of this token. It depends just as much on what happens to a proton inside a star in Andromeda as on what you are doing in the lab. What is also a bit ugly is to postulate a preferred basis. It violently violates any symmetry of the dynamics (which seems to be reserved for the dynamics of the wave function only). But, it has to be said, it solves quite some enigmas in QM.
I consider it a bit like a Lorentzian aether theory. It is a vast improvement over the inconsistency between Galilean relativity and Maxwell stuff (read: Copenhagen QM), but it introduces needless symmetry violations.
Nevertheless, it contains the essential: pure unitary evolution of the wave function !
So, I consider Bohmian mechanics as quite superior to Copenhagen QM, and containing the roots of further improvements. But, as it is, it is too ugly, it breaks too many nice symmetries...

cheers,
Patrick.
 
  • #29
I must take this moment to interject and state what an excellent thread this has been so far. "No-one ****s the wave function" gets my vote for best physics-related quote of the day.
 
  • #30
vanesch said:
Yes, that is I think what Bohmians call "naive operator realism". But according to Bohm we're deluded in thinking there is something such as a definite spin. There are only positions of particles, and "spinor wavefunctions". We're just doing a complicated experiment of interactions of magnetic fields on particles, and see how they move in magnetic fields. But they don't have such a thing as spin. We're DELUDED :biggrin:

Well, you might be, but I'm not! :smile: Seriously though, I think there is a pretty radical difference between the kind of delusion I was talking about in the context of MWI, and the kind of delusion you are thinking of here. In the former case, we really are deluded about everything we believe -- I think the state of the world is such that there's a coffee cup in front of me right now, but it just ain't so, and so on for *everything*.

In Bohmian mechanics, however, we were merely "deluded" in the sense of perhaps too-quickly jumping to a specific model about what it means for a particle to have a spin. We all think, in the back of our minds, that this means electrons are like little rotating charged basketballs with a certain rotation axis -- that's what all the textbooks tell us, right before they tell us that really we're not supposed to think of it that way literally. But we can't help it because they don't tell us how we *are* supposed to think of it literally! Anyway, to whatever extent we do think of spin that way, yes, we are deluded. But that is easily fixable. Now we know how to think of it correctly -- it is a "contextual property" of particles (i.e., not really a property at all!). It is a kind of funny two-valuedness that lives in the wave function and makes the particles act in a certain "two-valued" way in certain experimental arrangements. So be it. If we are wedded to the stupid basketball picture (which we weren't supposed to believe anyway), too bad for us.

By the way, having mentioned the phrase "contextual property", now seems like a good time to explain the cute little illustration that David Albert mentions in his book ("QM and experience", highly recommended). Say we have a spin 1/2 particle (with some particular wf and some particular Q_0) and we send it towards a stern-gerlach device with its B-field gradient pointing in the +z direction. And say that the particle emerges from the "spin up along z" output port of the device. So if we are thinking of the electron not according to the Bohm picture, but as a little rotating basketball, we would say: aha, the spin axis is in the +z direction.

Now rewind, so we have exactly the same particle in exactly the same state coming toward the same SG device, but now rotate the SG device 180 degrees so the B-field gradient is now in the -z direction. What happens? Well, if we believe the electron is a spinning basketball, the "F = - mu dot grad B" force will be in the opposite direction as before, meaning the particle will be deflected down instead of up, meaning it will once again emerge from the output port labelled "spin up along z". That is, if we think of the electron as a spinning basketball, this "upside down" SG device is just as good as the rightsideup device -- they are both devices which "measure the z component of the electron's spin."

However, according to Bohmian mechanics, the particle in the second case will emerge from the "spin DOWN along z" port of the apparatus, assuming it comes in with exactly the same wave function and initial position. Something like: the particle gets pulled into whichever packet it is "on the side of" when the packets start splitting apart in the device. So this means that, if Bohmian mechanics is right, the two devices (rightside up and upside down) are not equivalent -- use one you get one answer, use the other you get a different answer for "the same" question. Of course, the point is just to illustrate that, if Bohm is right, they aren't really the same question at all. Spin measurements don't reveal the value of some pre-existing property, "spin", that particles possesses in addition to other properties (position, mass, charge, ...). In fact spin measurements don't reveal the pre-existing value of anything, i.e., they aren't measurements of anything, i.e., they aren't even really measurements.

This is an example of what it means for spin to be a "contextual property" -- technically, "contextual" means that the value you get depends not just on which QM operator corresponds to your measurement, but more -- how, specifically, that measurement is performed. (The two devices above both correspond to the \sigma_z operator, right? So according to QM, what's the difference between them? They both measure \sigma_z.) But according to Bohm, they are not equivalent. Hence the cautionary remarks about "naive realism about operators."
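
A deliberately oversimplified toy of Albert's point (my own caricature, not anything taken from his book): in this effectively one-dimensional splitting the trajectories cannot cross, so the particle is carried by whichever packet lies on its side of the split. Its physical deflection therefore depends only on the initial position Q0, while the spin *label* that deflection earns depends on how the device is oriented.

```python
# Toy caricature of the flipped Stern-Gerlach example.  The particle's physical
# deflection (up or down in space) is fixed by which side of the split its
# initial position Q0 sits on; the spin LABEL attached to that deflection depends
# on the device's orientation, because flipping the magnet swaps which spin
# component of the wave function exits through which port.
def sg_label(q0: float, flipped: bool) -> str:
    deflects_upward = q0 > 0.0        # carried by the packet on its own side
    if not flipped:
        # normal orientation: the upward-moving packet is the "spin up" component
        return "spin up" if deflects_upward else "spin down"
    else:
        # flipped orientation: the upward-moving packet is the "spin down" component
        return "spin down" if deflects_upward else "spin up"

q0 = 0.3   # one definite (hypothetical) initial position, identical in both runs
print(sg_label(q0, flipped=False))   # -> "spin up"
print(sg_label(q0, flipped=True))    # -> "spin down": same psi, same Q0, different answer
```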


Hehe, it is funny that I got exactly the same remark against my MWI view (that the things we measure and observe should correspond to something "real" out there) from ttn, a Bohmian.

They do! I mean, according to Bohm, the "spin measurements" *do* tell you *something* about the real world -- e.g., what the wf of the particle was and/or where the particle was. It's just that you have misinterpreted things if you naively assume that what you learn is the value of some pre-existing "spin" property of the electron. It's not a failure of realism... just a failure of your reasoning!

But ok, Bohmian mechanics rose in my appreciation, honestly. I see it now as a first heroic effort, against all of the Copenhagen stuff, to build a consistent theory.

That's a good statement. You called me a Bohmian, which I guess I am. But I am more of a reactionary, really. It is because Bohm is so good relative to how widely it is known/understood that I feel I should tip the balance slightly in its favor by advocating it. In a different world, where Bohm was well-known and presented as a possible way of thinking about QM in all the textbooks, maybe I'd advocate something else. But it needs a fair hearing.


And - they will have a hard time admitting it - but Bohmians are in fact first-generation MWI-ers ! They postulate reality to the wavefunction, together with its absolute unitary evolution. As such, they do away with the cleavage between "measurement processes" and "physical interactions", the inconsistency in Copenhagen QM, because what Albert, today, considers as a measurement, Bruce, tomorrow, will analyse as a physical interaction. And the wave function comes out totally different.
This is the good part.

I would prefer to think of this as "building a consistent theory, as against Copenhagen" rather than "first generation MWI-ers"... but whatever...


What is terribly ugly, however, is the BLATANT nonlocality in the evolution equations of this token.

Yup. Now you understand why I was at pains to bring out the nonlocality of orthodox Copenhagen QM, too. Bohm is blatantly nonlocal. Orthodox QM is rather vaguely, confusingly, opaquely nonlocal -- it's nonlocal, no question about it, but the whole thing is so damn confusing and fuzzy it's hard to know exactly what it's saying about anything. And this has unfortunately helped Copenhagen maintain its hegemony over hidden variable theories. So I am doing my best to spread the truth and see that Bohm actually gets a fair hearing, instead of being simply ruled out of court because it is nonlocal.

By the way, Bell once said that it was "to the great credit" of Bohm's theory for bringing out the nonlocality that was inside QM all along, but hidden away by all the fuzziness and obscurity regarding measurement and collapse and all that. And of course, Bell was motivated to think about nonlocality when he "saw the impossible done" -- i.e., saw Bohm's papers showing that (contra von Neumann's alleged proof to the contrary) a hidden variable theory *was* possible.

And of course as a Bohmian, I like this perspective. It makes the blatantness of Bohmian nonlocality into a good thing! :smile:


It depends just as much on what happens to a proton inside a star in Andromeda as on what you are doing in the lab.

I don't think it is likely that many particles on Earth remain entangled with any particles on Andromeda. The decoherence effect is too strong. Of course, in principle, there's nothing to prevent this "wild" nonlocality. You could make a pair of particles in the singlet state and let them fly until they were separated by a billion light years (and shield the hell out of them to keep them entangled) and still, letting one of them enter a region with a certain magnetic field would cause the other, instantaneously, billions of light years away, to deflect.


What is also a bit ugly is to postulate a preferred basis.

One of the nice points made in Roderich Tumulka's dialogue (that I cited a few posts back) is that the idea of a symmetry among different bases is just a myth. The symmetry is broken by the Hamiltonian. As he writes in that paper (if memory serves) "you might as well Fourier transform Maxwell's equations"!

Plus, even leaving that debate aside, if you're going to have a preferred basis, surely the position basis is the most natural, most obvious candidate. Especially if you're thinking of QM as a theory that emerged to describe subatomic *particles* -- particles, if they have any "preferred" property at all, have *positions*.


So, I consider Bohmian mechanics as quite superior to Copenhagen QM, and containing the roots of further improvements. But, as it is, it is too ugly, it breaks too many nice symmetries...

Yup, there are definitely some issues that one can raise against Bohm. No doubt. But I think any reasonable person who really understands both Bohm and orthodox QM, cannot possibly think that orthodox QM is a better theory. I know you still probably prefer MWI, but at least we can agree that Bohm beats Copenhagen hands down.

Now what about all the rest of you Copenhagenish lurkers?
 
  • #31
Neither of you has convinced me. I am happy with "correlation not influence" and reject both MWI (I wish the early "relative state" interpretation of Everett were more robust) and "blatant" violation of local Lorentz symmetry.

As for collapse of the wave function, I am not happy with it, but that doesn't mean I think either Bohmism or MWI is an adequate explanation. Maybe a better one will come along some day, and I'm content to wait.
 
  • #32
Locrian said:
"No-one ****s the wave function" gets my vote for best physics-related quote of the day.

:smile: :smile:
 
  • #33
ttn said:
They do! I mean, according to Bohm, the "spin measurements" *do* tell you *something* about the real world -- e.g., what the wf of the particle was and/or where the particle was.

I was just poking fun at you, in order to illustrate that "states of deludedness" can happen, and that the lesson is that the only things we know about are our observations. Spin measurements "seem to indicate" that there is something like spin, but there isn't. Well, in the same way, "opening your eyes" seems to indicate that there is something such as a sure world outside, but there might not be... (ok, now we could introduce relative amounts of deludedness :-)


In a different world, where Bohm was well-known and presented as a possible way of thinking about QM in all the textbooks, maybe I'd advocate something else. But it needs a fair hearing.

Don't worry, there are many, many such worlds in which this happens right now :-p. In fact, if you're a Bohmian who would like to see more respect paid to Bohm, then you should convert to MWI, because there are many worlds right now where Bohm has a Nobel prize !

In fact, even for a Bohmian, there are many such worlds in the wavefunction ; only, they don't have the token.


I would prefer to think of this as "building a consistent theory, as against Copenhagen" rather than "first generation MWI-ers"... but whatever...

I think Bohmians are MWI-ers who ignore it ! The essence of MWI is that the wave function evolves unitarily, period. Because, from the moment you accept that, you get "multiple branches" (terms) in your wavefunction, including different "bodystates". So to each "bodystate" in the wavefunction there corresponds an "external world". That's Everett's main idea, and that's why it is in fact more appropriately called the "relative state interpretation".
But because the obvious "observation of one world" is now not so obvious anymore, you have to add things or hope that somehow they will magically appear. In Bohmian mechanics, the added stuff is the {Q1...QN} part of the state description, which I call a "token", so there is an explicit "choice mechanism" - which turns out to be deterministic in this particular case.
I also add a choice mechanism: it is my mind.
You can do other things, whatever, according to what you like.
But the essence is that the wavefunction of your body is in an entangled state with what you have observed and interacted with... for the rest of its days.

Yup. Now you understand why I was at pains to bring out the nonlocality of orthodox Copenhagen QM, too. Bohm is blatantly nonlocal. Orthodox QM is rather vaguely, confusingly, opaquely nonlocal -- it's nonlocal, no question about it, but the whole thing is so damn confusing and fuzzy it's hard to know exactly what it's saying about anything. And this has unfortunately helped Copenhagen maintain its hegemony over hidden variable theories. So I am doing my best to spread the truth and see that Bohm actually gets a fair hearing, instead of being simply ruled out of court because it is nonlocal.

By the way, Bell once said that it was "to the great credit" of Bohm's theory for bringing out the nonlocality that was inside QM all along, but hidden away by all the fuzziness and obscurity regarding measurement and collapse and all that. And of course, Bell was motivated to think about nonlocality when he "saw the impossible done" -- i.e., saw Bohm's papers showing that (contra von Neumann's alleged proof to the contrary) a hidden variable theory *was* possible.

And of course as a Bohmian, I like this perspective. It makes the blatantness of Bohmian nonlocality into a good thing! :smile:

Well, if you mean that it was a good thing to point out that there was a serious problem in taking wave function collapse seriously, I can only agree with that. But honestly, were people so stupid then, 60 years ago ? If you collapse something describing the state of EVERYTHING, then surely it must be non-local, right ? Hey, a pity I wasn't around in the '30s. I could have told them :smile:

I don't think it is likely that many particles on Earth remain entangled with any particles on Andromeda. The decoherence effect is too strong.

Nonono, you missed it. Decoherence is MORE entanglement, not less. Once entangled, always entangled. The protons and electrons in our body are strongly entangled, for the rest of their days, with everything they've interacted with. And the only way to undo that is to make them interact in the opposite way, namely, having them do interference experiments with what they entangled with. And it is the impossibility of doing these experiments, with mind-boggling complexity, which means that LOCALLY (in quantum theory) we can forget about that entanglement, and consider that this has given us a statistical MIXTURE of local states. But NOT in Bohmian mechanics! The more you entangle, there, the more objective influence the remote things have on the guiding equation ; applying the same reasoning as in decoherence, you can then say that you can forget about that entanglement if you replace all those STRONG interactions by random noise pulling and pushing your {Q1...QN} values locally. But the fact that we just call that "noise" doesn't mean that there is not this strong pulling, of which we've lost track. The rattling around of that proton on Andromeda pulls JUST AS HARD on the Q1 of that electron on my nose as does the fist of a disgruntled Bohmian :smile:

Of course, in principle, there's nothing to prevent this "wild" nonlocality. You could make a pair of particles in the singlet state and let them fly until they were separated by a billion light years (and shield the hell out of them to keep them entangled) and still, letting one of them enter a region with a certain magnetic field would cause the other, instantaneously, billions of light years away, to deflect.

Every single interaction in the past has created such indestructible entanglement forever ! The entanglement is not gone merely because I haven't been careful to preserve that degree of freedom for a later measurement, as in your example.

One of the nice points made in Roderich Tumulka's dialogue (that I cited a few posts back) is that the idea of a symmetry among different bases is just a myth. The symmetry is broken by the Hamiltonian. As he writes in that paper (if memory serves) "you might as well Fourier transform Maxwell's equations"!

Well, the best description of the EM field we have, namely QED, works naturally in that transformed basis to build up Fock spaces. Hey, I'm having all the difficulty in the world on this forum making another physicist see that photons can have a position !


Yup, there are definitely some issues that one can raise against Bohm. No doubt. But I think any reasonable person who really understands both Bohm and orthodox QM, cannot possibly think that orthodox QM is a better theory. I know you still probably prefer MWI, but at least we can agree that Bohm beats Copenhagen hands down.

Now what about all the rest of you Copenhagenish lurkers?

Well, I have to say that this discussion on Bohm altered my view on MWI a bit. I still think that my explanation is "closest to the formalism" in that it respects all of its basic premises which guided us in the first place to that formalism. But the mere existence of Bohmian theory, which agrees with MWI on the essential, namely strict unitary evolution, means that this part is what is "strong" and then you invent a story to explain "what branch of the wavefunction is the "real" one". I do that with minds, you do it with a token. It's probably both wrong :redface: But I still prefer mine, just because it respects the pillars on which the theory was built in the first place - and if that means I'm deluded, well, I've always been deluded about something, so that's no big news.

cheers,
Patrick.
 
  • #34
selfAdjoint said:
As for collapse of the wave function, I am not happy with it, but that doesn't mean I think either Bohmism or MWI is an adequate explanation. Maybe a better one will come along some day, and I'm content to wait.

That's a possibility, and I think that most people who say they adhere to Copenhagen (which is in fact not Copenhagen, but von Neumann! You know: interactions -> unitary evolution ; measurements -> collapse) take on a view which is also acceptable: namely that quantum theory is an epistemological theory allowing us to know probabilities of results of measurements, but doesn't contain any ontology ("real description of the world").
I think that's a perfectly fair position, which can still be refined: either QM doesn't give an ontological description of the world because that doesn't exist (that's one viewpoint), or QM doesn't give an ontological description because we haven't yet found the right theory that will give us one (I guess that's your position).

I think it is a perfectly acceptable position.
As I said, it is only if you want to have a "story" that you enter into these issues. However, I think that if you want to develop a physical intuition, then "having a story" is important. But it is not logically required.

cheers,
Patrick.
 
  • #35
vanesch said:
I think it is a perfectly acceptable position.
As I said, it is only if you want to have a "story" that you enter into these issues. However, I think that if you want to develop a physical intuition, then "having a story" is important. But it is not logically required.

For example (and JUST as an example) here's a theory of collapse which is not Lorentz invariant at the Planck scale, but is indistinguishable from Lorentz invariant at achievable scales. Well, at the Planck scale we will need some new theory of spacetime anyway, so this is, at least informally, an "effective" theory of wave function collapse, that preserves all the causality we can see.
 
  • #36
Thanks for this excellent link, I have some questions...

a) Is Bohmian mechanics valid for relativistic wave mechanics? And for quantum fields? Thanks.
 
  • #37
vanesch said:
That's a possibility, and I think that most people who say they adhere to Copenhagen (which is in fact not Copenhagen, but von Neumann! You know: interactions -> unitary evolution ; measurements -> collapse) take on a view which is also acceptable: namely that quantum theory is an epistemological theory allowing us to know probabilities of results of measurements, but doesn't contain any ontology ("real description of the world").
I think that's a perfectly fair position, which can still be refined: either QM doesn't give an ontological description of the world because that doesn't exist (that's one viewpoint), or QM doesn't give an ontological description because we haven't yet found the right theory that will give us one (I guess that's your position).

I think it is a perfectly acceptable position.

I too can sympathize with the "practical" attitude of just using the "standard rules" and not worrying about the ontology (because we haven't yet found the right one). But I guess I don't think it's "acceptable" to believe that "QM doesn't give an ontological description of the world because that doesn't exist".

I don't know for sure, but I bet they make Catholic priests turn in their union cards if they stop believing in god. Likewise, I think physicists should have to turn in their union cards if they don't believe in physical reality.
 
  • #38
vanesch said:
I think Bohmians are MWI-ers who ignore it ! The essence of MWI is that the wave function evolves unitarily, period.

Perhaps one of the lessons of Bohm's theory is that this is a very misleading thing to define as "the essence of MWI". Shouldn't a theory's "essence" be something that is both central to it and that distinguishes it most fundamentally from other theories? If so, then neither Bohm nor MWI can claim to have unitary-wf-evolution (i.e., a solution to the measurement problem!) as its essence. Rather, we should say something like "the essence of MWI is horrible widespread delusion and bizarre ontology" while "the essence of Bohm is that particles have definite trajectories and everything makes sense." :smile:



Well, if you mean that it was a good thing to point out that there was a serious problem in taking wave function collapse seriously, I can only agree with that. But honestly, were people so stupid then, 60 years ago ? If you collapse something describing the state of EVERYTHING, then surely it must be non-local, right ? Hey, a pity I wasn't around in the '30s. I could have told them :smile:

Honestly, the more I learn about the history of QM, the more I think the unfortunate answer is: YES, people *were* so stupid 60-80 years ago. Which I guess makes (almost) everyone since even stupider for taking the stupidity so seriously...


Nonono, you missed it. Decoherence is MORE entanglement, not less. Once entangled, always entangled. The protons and electrons in our body are strongly entangled, for the rest of their days, with everything they've interacted with. And the only way to undo that is to make them interact in the opposite way, namely, having them do interference experiments with what they entangled with. And it is the impossibility of doing these experiments, with mind-boggling complexity, which means that LOCALLY (in quantum theory) we can forget about that entanglement, and consider that this has given us a statistical MIXTURE of local states. But NOT in Bohmian mechanics! The more you entangle, there, the more objective influence the remote things have on the guiding equation ; applying the same reasoning as in decoherence, you can then say that you can forget about that entanglement if you replace all those STRONG interactions by random noise pulling and pushing your {Q1...QN} values locally. But the fact that we just call that "noise" doesn't mean that there is not this strong pulling, of which we've lost track. The rattling around of that proton on Andromeda pulls JUST AS HARD on the Q1 of that electron on my nose as does the fist of a disgruntled Bohmian

OK, yes, given your meaning you are of course correct. Decoherence is more entanglement. But it is simultaneously less of the kind of entanglement that can (with large probability) give rise to surprising spacelike correlations. So I don't think it's quite right to say "the more you entangle, the more objective influence the remote things have on the guiding equation." This is true in principle, but in practice, we actually *know* some things (e.g., the Andromeda galaxy is over THERE) which permit us to know that certain other things *no longer* affect the molecules in my fist (to any appreciable degree), etc.

But I'll confess to feeling like I ought to think about this more. On the one hand, you're right that the entanglement persists forever, period. On the other, it's perfectly legitimate to narrow one's scope to a small subsystem using the notion of a conditional wave function -- and this doesn't involve any assumption or approximation like the "entanglement noise" being small. How to reconcile those facts exactly?
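
For reference, the object I have in mind is the usual conditional wave function (just my quick summary of the standard definition, spinless case for simplicity): one plugs the actual Bohmian configuration of everything else into the universal wave function,

```latex
% Conditional wave function of a subsystem with coordinates x, given the actual
% Bohmian configuration Y(t) of its environment (up to normalization):
\psi_t(x) \;=\; \Psi_t\bigl(x,\,Y(t)\bigr),
\qquad
\frac{dX}{dt} \;=\; \frac{\hbar}{m}\,
  \operatorname{Im}\frac{\nabla_x \psi_t(x)}{\psi_t(x)}\bigg|_{x=X(t)}.
```

The subsystem's trajectory is guided by psi_t exactly, with no "small noise" approximation; but psi_t only obeys a Schrodinger equation of its own when the subsystem is suitably decoupled, which I suppose is where the two pictures have to be reconciled.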

Well, the best description of the EM field we have, namely QED, works naturally in that transformed basis to build up Fock spaces. Hey, I'm having all the difficulty in the world on this forum making another physicist see that photons can have a position !

I'd be interested to hear something about this debate. I gather you were arguing that it is possible for photons to have (definite?) positions? I seem to recall that it is a notoriously difficult (and I think unsolved?) problem to define a covariant position operator for photons. Is that what the person you were debating with was arguing? What was your response? etc...


Well, I have to say that this discussion on Bohm altered my view on MWI a bit. I still think that my explanation is "closest to the formalism" in that it respects all of its basic premises which guided us in the first place to that formalism.

If you include locality, perhaps you are correct. But for me, having some kind of continuity with "common sense" views of reality, etc., is the most important thing. But it's at least nice that Bohm gives you a new perspective on things, regardless of whether you become a Bohmian. As I said before, I'm really just concerned to try to make people more aware of this theory. I think anyone who understands it cannot help but find it fascinating as an example of a different way of thinking about quantum physics -- and if that ultimately helps break the stranglehold of Copenhagen, it'll be a good thing, no matter what ends up turning out to be the correct theory.

But the mere existence of Bohmian theory, which agrees with MWI on the essential, namely strict unitary evolution, means that this part is what is "strong" and then you invent a story to explain "what branch of the wavefunction is the "real" one".

MWI people always put it this way, and it annoys me. :smile: It makes it sound like Bohm's theory is just some ad hoc way of doing a certain job that needs to be done to clean up MWI. But if you look at the whole thing from a more historical perspective, it is *far* from ad hoc to attribute positions to particles, to think that the particle trajectories are influenced by a wave of some kind, etc. This is essentially what everybody thought (just based purely on experimental evidence like cavity radiation, Compton scattering, the photoelectric effect, Davisson-Germer, Franck-Hertz, two-slit experiments, etc...) before Bohr's philosophy made everyone turn stupid. :smile:

There is a really wonderful book by Jim Cushing ("Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony") that explores this history in detail if anyone is interested in thinking more about how Bohmian ideas were lost in the ascendency of Copenhagen but how, had things gone only slightly differently, they might have won out.
 
  • #39
ttn said:
I'd be interested to hear something about this debate. I gather you were arguing that it is possible for photons to have (definite?) positions? I seem to recall that it is a notoriously difficult (and I think unsolved?) problem to define a covariant position operator for photons. Is that what the person you were debating with was arguing? What was your response? etc...

It is in the recent thread about white photons... Ok, I didn't really talk about a position measurement of a photon ; I just argued that superpositions of momentum states can exist, and took as an example a femtosecond laser...

If you include locality, perhaps you are correct. But for me, having some kind of continuity with "common sense" views of reality, etc., is the most important thing. But it's at least nice that Bohm gives you a new perspective on things, regardless of whether you become a Bohmian. As I said before, I'm really just concerned to try to make people more aware of this theory. I think anyone who understands it cannot help but find it fascinating as an example of a different way of thinking about quantum physics -- and if that ultimately helps break the stranglehold of Copenhagen, it'll be a good thing, no matter what ends up turning out to be the correct theory.

I can only encourage this. Only, I think you attack a straw man. No serious theoretical physicist sticks to Copenhagen as an ontology anymore ! (well, there are maybe a few left). The "Copenhagen crowd" is mostly the "shut-up-and-calculate" crowd ; as DrChinese has in his signature here: the map is not the territory !
So that's perfectly all right to me: they have a "map" allowing them to find their way (calculate results) in the "territory" (the real world). Making good maps is very important ; even if they are made out of paper, and the territory isn't.

MWI people always put it this way, and it annoys me. :smile: It makes it sound like Bohm's theory is just some ad hoc way of doing a certain job that needs to be done to clean up MWI. But if you look at the whole thing from a more historical perspective, it is *far* from ad hoc to attribute positions to particles, to think that the particle trajectories are influenced by a wave of some kind, etc. This is essentially what everybody thought (just based purely on experimental evidence like cavity radiation, Compton scattering, the photoelectric effect, Davisson-Germer, Franck-Hertz, two-slit experiments, etc...) before Bohr's philosophy made everyone turn stupid. :smile:

Well, I think that is correct, and who knows, maybe Bohm is the way to go. But the reason why I still resist accepting Bohm (although I have to say that the more I learn about it, the more it is indeed fascinating) is my main objection from the beginning. You talked about Catholic priests having to turn in their union cards when they no longer believe in the Catholic dogmas. Well, mine is the same, but different from yours.

Our "bare intuition" has often been a wrong guide, until we educated our intuition enough so that we got it "right". So intuition can be quite flexible. A few hundred years ago, "heat" was not seen as "very tiny particles rattling around in the lump of matter", and when you think a bit about it, that's in fact rather un-intuitive. But now this picture has become so "evident" that it doesn't give us any intuitive problems. In the 19th century, chemists spoke about "the atomic hypothesis" in order to explain the regularities they found, but they were afraid of making fools of themselves by considering atoms too literally. This simply indicates that what seem like straightforward, intuitive concepts for us today were once such intuitive challenges that the proponents of these ideas always started out by not taking their NEW principles and ideas seriously, but treating them just as a tool to explain stuff.

But *real* progress (better predictions of results of experiment) has always been made by people who did take the new concepts and principles for real, putting their "refusal" of their intuition aside.

So I think that, as a physicist, you should not turn in your union card if you refuse to take your intuitive feeling of how the world ought to be as an unalterable truth. However, I think you ought to turn in your union card if you do not take the principles on which successful theories have been build and by which people HAVE been making progress, seriously.

Our most successful working theories today are quantum field theory and general relativity. Both were invented by distilling the essential principles out of what was known before, and then sticking to them.

Let us look at general relativity. If Einstein had been stuck with Lorentz Aether theory (which is formally equivalent to SR), he would probably never have thought of general covariance as the guiding principle for GR, which is a generalization of Lorentz invariance. Now, ONCE this work has been done, you can always go back and try to piece the puzzle together in a different and maybe less revolutionary way, such as an Aether theory that incorporates gravity or so. But the fact is, you'd never have found the formalism of GR in the first place !

In the same way, QFT grew out of the combination of two basic principles: Lorentz invariance (and locality as required by SR), and the superposition principle. It is because people tried to stick by all means to these principles that QFT saw the light. I don't think you'd ever have invented QFT if Bohm's theory had been the starting point. After the fact, I've understood that some efforts exist (and succeed ?) to adapt Bohm's view to QFT. But again, it is an after-the-fact fitting. I don't think that Bohm's view contained the right seeds in order to allow people to invent QFT.

Now, of course, the great and revolutionary guiding principles that lead to progress in one century become the old-fashioned, intuitive holdovers one tries to stick to at all costs in the next. So "great principles" have only a finite lifetime as creative forces, and we will probably witness this during the heroic battle between GR and QM. Knowing what to stick to, and what to let go of, is the essence of genius in physics. But once you make your choice, you should stick to it. Using principles to make progress, and then denying them to build an ontology, is, in my eyes, blasphemy ! Denying them in order to replace them with others, and make progress (or hit the wall), that's also ok. But, as president of the association for the defense of poor and abused physics principles, I vehemently protest against this unethical behaviour towards physics principles: abusing them to make progress, and then letting them down when it's time to tell the story.

:-p

cheers,
patrick.
 
  • #40
vanesch said:
I can only encourage this. Only, I think you attack a straw man. No serious theoretical physicist sticks to Copenhagen as an ontology anymore ! (well, there are maybe a few left). The "Copenhagen crowd" is mostly the "shut-up-and-calculate" crowd ; as DrChinese has in his signature here: the map is not the territory !
So that's perfectly all right to me: they have a "map" allowing them to find their way (calculate results) in the "territory" (the real world). Making good maps is very important ; even if they are made out of paper, and the territory isn't.

What bothers me is alleged mapmakers who don't believe there's such a thing as the territory referred to by the map. Obviously I'm not against mapmaking, and I'm not saying anything like the map ought to be made of rocks and dirt because the territory is. I just think we shouldn't ever permit ourselves to forget (or explicitly deny!) that maps are maps *of territory*. That's what it *means* to be a map.

Summarizing: anti-realism is not a legitimate position for a physicist to take, and it is certainly no foundation for a good theory in physics.



Our "bare intuition" has often been a wrong guide, until we educated our intuition enough so that we got it "right". So intuition can be quite flexible. A few hundred years ago, "heat" was not seen as "very tiny particles rattling around in the lump of matter", and when you think a bit about it, that's in fact rather un-intuitive. But now this picture has become so "evident" that it doesn't give us any intuitive problems. In the 19th century, chemists spoke about "the atomic hypothesis" in order to explain the regularities they found, but they were afraid of making fools of themselves by considering atoms too literally. This simply indicates that what seem like straightforward, intuitive concepts for us today were once such intuitive challenges that the proponents of these ideas always started out by not taking their NEW principles and ideas seriously, but treating them just as a tool to explain stuff.

None of these changes you refer to involve the literal undermining of all knowledge. There are certainly occasional overturnings of more or less widely held interpretations of certain things (heat is a fluid, the sun goes around the earth, etc.) but never before have scientists taken seriously the idea that the whole "out there" -- the territory -- is radically different from what we see. Ultimately, science of any kind is based on direct perception of the world. If you can't trust the most basic experiences, how can you trust measuring devices, let alone weird interpretations of theories that are grounded in huge collections of results from such measuring devices?



But *real* progress (better predictions of results of experiment) has always been made by people who did take the new concepts and principles for real, putting their "refusal" of their intuition aside.

I wouldn't put it that way. Real progress is made by people who put *ungrounded* or uncertain or poorly motivated assumptions aside and look at the facts in an unbiased way. That's not the same as setting aside common sense.


So I think that, as a physicist, you should not turn in your union card if you refuse to take your intuitive feeling of how the world ought to be as an unalterable truth. However, I think you ought to turn in your union card if you do not take the principles on which successful theories have been build and by which people HAVE been making progress, seriously.

On that we agree, no doubt. But I suspect we would want to formulate a bit differently exactly what those principles are, the ones that have led to success in the past.


Our most successful working theories today are quantum field theory and general relativity. Both were invented by distilling the essential principles out of what was known before, and then sticking to them.

QFT is successful in the same way regular quantum mechanics is -- it's an amazingly and in some ways inexplicably effective black box for predicting the results of certain types of experiments. But it is also ugly and vague and unprofessional in the same ways as non-relativistic QM, basically because the same foundational problems that exist for QM are simply inherited by the later, fancier quantum theories. So I'm not sure I'd say that QFT is "our most successful working theory." Our most successful algorithm or black box or whatever, sure, but I expect more from a theory.


Let us look at general relativity. If Einstein had been stuck with Lorentz Aether theory (which is formally equivalent to SR), he would probably never have thought of general covariance as the guiding principle for GR, which is a generalization of Lorentz invariance. Now, ONCE this work has been done, you can always go back and try to piece the puzzle together in a different and maybe less revolutionary way, such as an Aether theory that incorporates gravity or so. But the fact is, you'd never have found the formalism of GR in the first place !

Interestingly, Einstein went a long way back toward believing in the ether after formulating GR. See the book "Einstein and the Ether" which came out a few years ago (I can't remember the author and I don't have the book with me right now)...

But I do agree with your point here. If GR had never happened, I think it would be positively silly to believe in SR as opposed to a Lorentz Ether type view. (I don't mean one should believe in the Ether, just that there would be no good reason to believe either one rather than the other.) It's precisely because the basic principle of SR led to such an obviously important development (GR) that I think people were subsequently entirely correct to reject the ether theories and believe SR.

Of course, now I believe things have changed -- the violations of Bell's inequality argue, very strongly I think, that one needs more spacetime structure than relativity permits... which basically means one needs something like a preferred frame, i.e., <shudder> an ether...

So, the history continues to be written.

Oh yes, one other point. You mentioned the possible existence of an ether-type theory of gravity/GR, as if it was a fantasy. But such a thing exists! In fact a few different people have "discovered" such a theory over the years. The best treatment I know is the textbook by Janossy, which Bell cites in one of his papers. Anybody who has enjoyed learning about the Bohmian alternative to regular QM, might find this similarly alternative version of GR equally interesting...



In the same way, QFT grew out of the combination of two basic principles: Lorentz invariance (and locality as required by SR), and the superposition principle. It is because people tried to stick by all means to these principles that QFT saw the light. I don't think you'd ever have invented QFT if Bohm's theory had been the starting point. After the fact, I've understood that some efforts exist (and succeed ?) to adapt Bohm's view to QFT. But again, it is an after-the-fact fitting. I don't think that Bohm's view contained the right seeds in order to allow people to invent QFT.

Given the systematic suppression of Bohmian ideas over the last 50 years -- the bogus impossibility proofs, misinformation, absolute silence in all the textbooks, etc. -- I have trouble taking seriously any comment like this (and I hear them all the time). Who can say what we would know now if Bohmian ideas had won the day in 1927? Maybe there'd be five or six widely scattered followers of Bohr, who barely managed to scrape out a paper every couple years on the hydrogen atom in nonrelativistic approximation while meanwhile the vast hordes of Bohmians are putting the finishing touches on the one true theory of quantum gravity, solving the world's energy crisis, and leaping from building to building in a single bound.

That's stupid fantasy, of course, but it's, I think, equally fantastic to expect the (literal) half-dozen or so people who have seriously thought about Bohm's theory in the last 50 years to somehow equal the output of the tens of thousands of physicists working along Copenhagenish lines. Yes, maybe the deBroglie-Bohm picture didn't contain the seeds for QFT. But maybe it contained (and still contains) the seeds for something better -- something that will work equally well as an algorithm but which will also be a much crisper theory with a much clearer ontology. The only way to find out is to give it a fair chance -- which means people have to know about it (and *it*, not some straw man bastardization), which means it has to be talked about in a serious way in textbooks for students, etc...


Now, of course, the great and revolutionary guiding principles that lead to progress in one century become the old-fashioned, intuitive holdovers one tries to stick to at all costs in the next. So "great principles" have only a finite lifetime as creative forces, and we will probably witness this during the heroic battle between GR and QM. Knowing what to stick to, and what to let go of, is the essence of genius in physics. But once you make your choice, you should stick to it. Using principles to make progress, and then denying them to build an ontology, is, in my eyes, blasphemy ! Denying them in order to replace them with others, and make progress (or hit the wall), that's also ok. But, as president of the association for the defense of poor and abused physics principles, I vehemently protest against this unethical behaviour towards physics principles: abusing them to make progress, and then letting them down when it's time to tell the story.

You make it sound (again :zzz: ) as if these "stories" are some kind of pointless afterthought or side issue. It's as if you think Real Science is exclusively about predicting the results of experiments, and we should leave it to the poets, philosophers, and bums to worry about "ontology" and "telling stories". But look at history. I can't think of a single one of the major scientific developments that didn't involve a huge leap forward in "story telling". Was the Copernican revolution *primarily* about getting the predictions to be more accurate? Hardly. In fact, Copernicus' original model was actually *worse* than the souped up versions of Ptolemy that existed at the time. Of course, in the *long* run, Copernicus' new ontology opened up huge new vistas of science (including your favorite sort of "progress"!), but that is *not* what makes it a revolution. It's a revolution because it represents a major step forward in our understanding of what the world is like, i.e., how to tell a more correct and more complete story.

And thus we end where we began, with a tirade against anti-realism (and its various offshoots like positivism and phenomenalism and pragmatism). A true mapmaker cares about more than merely whether the product will get people to their destination successfully. It is also a value for the map to *accurately represent* the territory -- to provide a simple, graspable model or picture of what the territory is actually like. Of course doing the latter always helps you do the former, and doing the former often helps you decide whether or not you've actually done the latter. But nevertheless, I believe the latter is of value even abstracting away from any correlative improvements in the former. Finding out what the territory is like is one of the motivations to make maps. Finding out what nature is like is one of the motivations to make physics theories.
 
  • #41
vanesch said:
I have in front of me: "Quantum Theory" from David Bohm, his textbook on QM. However, it dates from 1951 (I have a Dover reprint) and seems to be on the Copenhagen theory !

cheers,
Patrick.

Ha, I have one of the Prentice-Hall editions. 1958, I think. Don't ask me how I have this because I was just a wee lad back then... :smile:
 
  • #42
ttn said:
None of these changes you refer to involve the literal undermining of all knowledge. There are certainly occasional overturnings of more or less widely held interpretations of certain things (heat is a fluid, the sun goes around the earth, etc.) but never before have scientists taken seriously the idea that the whole "out there" -- the territory -- is radically different from what we see.

I'm not sure of that. If you really look at GR, the world IS very weird too ! Planets don't go around the sun. The sun and planets all go on "straight lines", and you cannot really say anymore who's central and who's going in circles: Copernicus was wrong after all ! But the weirdest thing in GR (which it gets from SR) is the total illusion we have about time ! It is even so weird that our grammar is not adapted to it. What we think of as the future, the present and the past is something like left, here and right, depending on how you look upon it. Worse ! What most people take to be the basic entities, namely spacetime events, are not even well-defined. Read up on the "hole problem" (which is just a conceptual problem, not a formal one). The only things that are well-defined are equivalence classes of spacetime manifolds (through general covariance). GR completely overturned any view of the world we had !

Ultimately, science of any kind is based on direct perception of the world. If you can't trust the most basic experiences, how can you trust measuring devices, let alone weird interpretations of theories that are grounded in huge collections of results from such measuring devices?

But you CAN trust them, up to the point of what they were designed for. I really don't see the problem, in fact. The fact that our language and picture have changed doesn't mean they suddenly tell you wrong things: you just have to translate it into the new language and picture now. Look at a balance. For centuries we thought that it measured the mass of an object. Then it turns out it measures the force Earth exerts on the object. Then it turns out it measures in fact a mass times an acceleration, because there's no such thing as a force exerted by Earth on an object... But "1 kg of potatoes" is still what we always knew it was.

I wouldn't put it that way. Real progress is made by people who put *ungrounded* or uncertain or poorly motivated assumptions aside and look at the facts in an unbiased way. That's not the same as setting aside common sense.

I would say that that is exactly what it is !

QFT is successful in the same way regular quantum mechanics is -- it's an amazingly and in some ways inexplicably effective black box for predicting the results of certain types of experiments.

No matter how low an opinion you may have of that achievement, it wouldn't have come about if the people doing it hadn't had, as their first and foremost requirement, that they stick to Lorentz invariance all the way.


But I do agree with your point here. If GR had never happened, I think it would be positively silly to believe in SR as opposed to a Lorentz Ether type view. (I don't mean one should believe in the Ether, just that there would be no good reason to believe either one rather than the other.) It's precisely because the basic principle of SR led to such an obviously important development (GR) that I think people were subsequently entirely correct to reject the ether theories and believe SR.

That's the world on its head ! It is because some people believed in SR that they were able to think up GR !

Of course, now I believe things have changed -- the violations of Bell's inequality argue, very strongly I think, that one needs more spacetime structure than relativity permits... which basically means one needs something like a preferred frame, i.e., <shudder> an ether...

So, the history continues to be written.

Ah, you start to see my point: imagine you are right, and that we have to go back to a "preferred frame" (in fact that's what I understand string theorists do). Then at a certain point you had to fundamentally switch your view of the world (SR/GR) in order to be able to develop GR in the first place, and a century later you had to fundamentally switch your view again (back, you think, to a Newtonian frame of mind). But that indicates that views of reality depend very much on what's available and on what will get you to make progress. But if the view changes regularly, how can it be "a true description of nature" then?

Oh yes, one other point. You mentioned the possible existence of an ether-type theory of gravity/GR, as if it was a fantasy. But such a thing exists! In fact a few different people have "discovered" such a theory over the years. The best treatment I know is the textbook by Janossy, which Bell cites in one of his papers. Anybody who has enjoyed learning about the Bohmian alternative to regular QM, might find this similarly alternative version of GR equally interesting...

I'm aware of some of them, and I think they make the same error as Bohm's theory in a relativistic setting: they spit on their guiding principles. In fact, they allow you to integrate the achievements that were brought forth by new principles into the good old paradigm of Euclidean space, a real time axis and points in space on which things pull and push.

Given the systematic suppression of Bohmian ideas over the last 50 years -- the bogus impossibility proofs, misinformation, absolute silence in all the textbooks, etc. -- I have trouble taking seriously any comment like this (and I hear them all the time). Who can say what we would know now if Bohmian ideas had won the day in 1927? Maybe there'd be five or six widely scattered followers of Bohr, who barely managed to scrape out a paper every couple years on the hydrogen atom in nonrelativistic approximation while meanwhile the vast hordes of Bohmians are putting the finishing touches on the one true theory of quantum gravity, solving the world's energy crisis, and leaping from building to building in a single bound.

That's stupid fantasy, of course, but it's, I think, equally fantastic to expect the (literal) half-dozen or so people who have seriously thought about Bohm's theory in the last 50 years to somehow equal the output of the tens of thousands of physicists working along Copenhagenish lines. Yes, maybe the deBroglie-Bohm picture didn't contain the seeds for QFT. But maybe it contained (and still contains) the seeds for something better -- something that will work equally well as an algorithm but which will also be a much crisper theory with a much clearer ontology. The only way to find out is to give it a fair chance -- which means people have to know about it (and *it*, not some straw man bastardization), which means it has to be talked about in a serious way in textbooks for students, etc...

I can only agree that Bohm's theory is misrepresented, and that it would gain from being better known. But I can understand the approach: the aim of the professor or author is to get the student to make a paradigm shift. If the professor gives the student this thing to hold on to, he will stick to it, and never allow himself to swallow all this QM nonsense ! Hey, particles with forces on them and a field, I'm home ! And then the professor gets stuck. Because HOW is he now going to tell his happy student about the Dirac equation or the KG equation ? Hey, if it's just a matter of bringing in fancy equations, the student will try to bring in, say, the Navier-Stokes equation to quantize ! And how is the professor going to explain which terms you can, and which you cannot, have in the Lagrangian if they don't have to be Lorentz scalars ?
See, if you break the guiding principles before they have done their job, well, you don't have any guiding principles left to guide you. You absolutely don't see why you can write A^mu j_mu, but not, say, (A^0)^2 + A^i (j_i)^2.
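
Just to make that last point concrete -- a minimal check, assuming only that A^mu and j^mu both transform as four-vectors under a Lorentz transformation Lambda:

$$
A'^{\mu} = \Lambda^{\mu}{}_{\nu}\, A^{\nu},\qquad
j'_{\mu} = \Lambda_{\mu}{}^{\sigma}\, j_{\sigma}
\;\Longrightarrow\;
A'^{\mu} j'_{\mu}
= \Lambda^{\mu}{}_{\nu}\,\Lambda_{\mu}{}^{\sigma}\, A^{\nu} j_{\sigma}
= \delta^{\sigma}_{\nu}\, A^{\nu} j_{\sigma}
= A^{\mu} j_{\mu},
$$

whereas (A^0)^2 + A^i (j_i)^2 picks out the time and space components separately, so a boost mixes them and the expression in general changes from frame to frame.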

cheers,
Patrick.
 
  • #43
vanesch said:
But you CAN trust them, up to the point of what they were designed for. I really don't see the problem, in fact. The fact that our language and picture have changed doesn't mean they suddenly tell you wrong things: you just have to translate it into the new language and picture now. Look at a balance. For centuries we thought that it measured the mass of an object. Then it turns out it measures the force Earth exerts on the object. Then it turns out it measures in fact a mass times an acceleration, because there's no such thing as a force exerted by Earth on an object... But "1 kg of potatoes" is still what we always knew it was.

Yes, that's right. But I agree most with the statement that "you have to translate it into the new language and picture." So it's not exactly that we were *wrong* to think that the balance measures mass. Even after you understand gravity and mass vs. weight better, it's still not *wrong* to say that the balance measures the mass -- it just does so indirectly.

But anyway, the general point is that I think I see the history of science differently than you. You talk a lot about paradigm shifts, whereas I see it more as a continuous or hierarchical development. Yes, there are surprising new things sometimes that make us reinterpret certain other things, but, just to take an obvious example, no future paradigm shift is going to overthrow chapter 1 of the Feynman lectures (where he talks about how "matter is made of atoms" is the most important discovery of science up to this point). No matter what we learn in the future, it will be some kind of further detail on what atoms are or how they interact or who knows what, but never will we discover that it was just wrong to think that matter is made of atoms.


That's the world on its head ! It is because some people believed in SR that they were able to think up GR !

Yeah, that's true. But somebody always has to believe in something and pursue it before it can be widely accepted -- before it *should be* widely accepted. All the more credit to Einstein for being ahead of his time and being willing to stick his neck out and pursue the logical implications of a certain principle.

Again, I don't at all disagree with you that this pursuing of principles is important. Of course it is. I just take a slightly longer-term approach to judging which principles ought to be stuck to. You seem to focus on whatever's hot this month or this century -- if lorentz invariance has worked well for the last 50 years, then run with it 100% at all costs without worrying about anything else that came before. Well, I want to worry about what came before. In light of my view of scientific progress as a slow, largely uni-directional development, I think those principles that have stood the test of time the longest -- i.e., that have served as foundational principles for the sexy new foundational principles of last week or last century -- should be given the most weight. So something like scientific realism always has to trump something like lorentz invariance. It's kind of a "respect your elders" thing... :-p

It also strikes me as a bit odd that you are the one pushing for such strong allegiance to principles which, by your own admission, are probably going to be overthrown in the next paradigm shift. What, other than a belief that those principles actually reflect some deep fact of nature, could justify this allegiance to them?


Ah, you start to see my point: imagine you are right, and that we have to go back to a "preferred frame" (in fact that's what I understand string theorists do). Then at a certain point you had to fundamentally switch your view of the world (SR/GR) in order to be able to develop GR in the first place, and a century later you had to fundamentally switch your view again (back, you think, to a Newtonian frame of mind). But that indicates that views of reality depend very much on what's available and on what will get you to make progress. But if the view changes regularly, how can it be "a true description of nature" then?

It's precisely to avoid having to constantly "switch fundamentally your view of the world" that I am advocating what, I think, is a slightly more conservative attitude. It was right to run with lorentz invariance, no doubt, but all along it should have been held in people's minds with a mental asterisk saying -- there are still some questions about what this means about the world/ontology/storytelling side of things.

If the basic fundamentals really changed regularly, I agree, it would be impossible to claim that the latest ones can be trusted as a reliable map of nature. But if you take a somewhat less flavor-of-the-month view about what counts as "fundamental principles" then it simply ceases to be *true*, historically, that the fundamentals are always changing.

It's also worth noting that it's a good thing for science that there is a spectrum of attitudes on this. I don't think it's legitimate for physicists to openly endorse anti-realism (for example) but it's totally legitimate for people to simply take an agnostic/practical attitude and not worry too much about the ontology behind the algorithms. Some people will be more comfortable with this than others, and it's good for science that there are both especially practical people (who always want to take risks and push boundaries) and "wise old gentlemen" (who take a cautionary or conservative attitude toward the latest gadgets).


I'm aware of some of them, and I think they make the same error as Bohm's theory in a relativistic setting: they spit on their guiding principles. In fact, they allow you to integrate the achievements that were brought forth by new principles into the good old paradigm of Euclidean space, a real time axis and points in space on which things pull and push.

Fair enough, but I say: until we're really more sure about the real meaning of those underlying principles, we should keep as many alternatives on the table as possible. I mean, there's simply no way to know a priori whether something like lorentz invariance is a truly fundamental property of nature/spacetime, or merely an emergent phenomenon coming from a fundamentally non-lorentzian world.


I can only agree that Bohm's theory is misrepresented, and that it would gain from being better known. But I can understand the approach: the aim of the professor or author is to get the student to make a paradigm shift. If the professor gives the student this thing to hold on to, he will stick to it, and never allow himself to swallow all this QM nonsense ! Hey, particles with forces on them and a field, I'm home ! And then the professor gets stuck. Because HOW is he now going to tell his happy student about the Dirac equation or the KG equation ? Hey, if it's just a matter of bringing in fancy equations, the student will try to bring in, say, the Navier-Stokes equation to quantize ! And how is the professor going to explain which terms you can, and which you cannot, have in the Lagrangian if they don't have to be Lorentz scalars ?
See, if you break the guiding principles before they have done their job, well, you don't have any guiding principles left to guide you. You absolutely don't see why you can write A^mu j_mu, but not, say, (A^0)^2 + A^i (j_i)^2.

That's an argument in favor of relativity, not against Bohm's theory, right? Or am I misunderstanding?
 
  • #44
ttn said:
But anyway, the general point is that I think I see the history of science differently than you. You talk a lot about paradigm shifts, whereas I see it more as a continuous or hierarchical development. Yes, there are surprising new things sometimes that make us reinterpret certain other things, but, just to take an obvious example, no future paradigm shift is going to overthrow chapter 1 of the Feynman lectures (where he talks about how "matter is made of atoms" is the most important discovery of science up to this point).

Yes, in the same way that I have no problem with action at a distance when I'm thinking "Newtonian" and about planets. I have this opinion that nature (real nature out there, yes, I think there is something like that, although I don't know how much of it is "out there") has a layered structure like Microsoft software (I only hope it is better designed :smile:). You can put it in "total newbie" mode and then you get a very intuitive picture and a simple formalism. Next, you can switch it to "newbie with some knowledge of calculus" mode. Ok, now you get Euclidean space, a real axis of time, a dust of points and some forces pulling on them. You can include "matter is made of atoms" here, too. You have objects on your desktop which do things for you.
Next, you can go to "advanced modes". And here, things get weird. You didn't think at all nature was _like that_. But you get a much stronger formalism with it. And there are many options in the "advanced modes"... Now, you have to think about files, protections, links...
Maybe one day you'll go to "expert mode", looking at the digital circuitry and the architecture of the software running all that stuff. You'll now see that what you thought was a "file as a list of bytes" was in fact way more complicated.
But that doesn't mean that when you are firing up your browser, you can't think of it as a little drawing on your desktop which does something for you. It is a useful representation at a certain level of competence. In fact, when you're just working with your computer, it is even a better representation than the more sophisticated view. Ok, big shock: on your disk, there is no such thing as a little drawing. Paradigm shift when you go from the icons on your desktop to the sectors on your disk !

No matter what we learn in the future, it will be some kind of further detail on what atoms are or how they interact or who knows what, but never will we discover that it was just wrong to think that matter is made of atoms.

And it will never be wrong to think of planets as going around the sun, or of the Netscape icon as a little drawing that does things for you.
But not if you're going to tinker with its binary code !

Again, I don't at all disagree with you that this pursuing of principles is important. Of course it is. I just take a slightly longer-term approach to judging which principles ought to be stuck to. You seem to focus on whatever's hot this month or this century -- if lorentz invariance has worked well for the last 50 years, then run with it 100% at all costs without worrying about anything else that came before.

That's because I think there is no real reason why the former paradigm should contain key elements of the next one. It might, but to me each "layer" is fundamentally different. The icons for the novice user, for instance, are NOT found in the deeper layers. The fact that people told you about files, when you felt much more comfortable with drawings on the desktop, doesn't mean you can tell yourself that underneath "files" there will again be drawings on the desktop. No, this time it will be bit streams. And then it will be logic circuits. And then it will be transistors on silicon. And then it will be the quantum theory of solids :smile:.
But never again little drawings on a desktop.
I'd gladly give up Lorentz invariance for something far "deeper", such as general covariance. But not to go back to Euclidean space. That's over. Forever.

It also strikes me as a bit odd that you are the one pushing for such strong allegiance to principles which, by your own admission, are probably going to be overthrown in the next paradigm shift. What, other than a belief that those principles actually reflect some deep fact of nature, could justify this allegiance to them?

The formalism ! I switch principles when I switch formalism. But I don't know the next one, and I'm not smart enough to think of a new one on my own. I think that the mathematical formalism which elegantly leads to new results must also contain the basic principles "of the century".

It's precisely to avoid having to constantly "switch fundamentally your view of the world" that I am advocating what, I think, is a slightly more conservative attitude. It was right to run with lorentz invariance, no doubt, but all along it should have been held in people's minds with a mental asterisk saying -- there are still some questions about what this means about the world/ontology/storytelling side of things.

If the basic fundamentals really changed regularly, I agree, it would be impossible to claim that the latest ones can be trusted as a reliable map of nature. But if you take a somewhat less flavor-of-the-month view about what counts as "fundamental principles" then it simply ceases to be *true*, historically, that the fundamentals are always changing.

But then I stick to MY very first fundamental principle: I'm in the center of the world, and everything happens as a function of myself. That was at least my prevailing theory when I was about 3 years old :-p
(And I also believed in magic and Santa Claus.) It is in fact very, very reassuring to discover that our most advanced theories are screaming out exactly *that* ! :smile:

Fair enough, but I say: until we're really more sure about the real meaning of those underlying principles, we should keep as many alternatives on the table as possible. I mean, there's simply no way to know a priori whether something like lorentz invariance is a truly fundamental property of nature/spacetime, or merely an emergent phenomenon coming from a fundamentally non-lorentzian world.

Oh, but I'm pretty sure that the world is not "truly Lorentzian". Only, this must come from something deeper that naturally reduces to Lorentz invariance in the right limits.

That's an argument in favor of relativity, not against Bohm's theory, right? Or am I misunderstanding?

Both ! If the student is not allowed to write equations which do not respect Lorentz invariance, and if this is a great help in finding the right formulations in QFT for instance, he'll turn green when he suddenly has to write down the guiding equations !
See, it is hard to swallow that you have to stick to this way of writing equations 90% of your time, and that it is a great help because it keeps you from going astray, and then suddenly you have to write an equation that clashes with it in every possible respect.
And remember that you NEED the Lorentz-invariant part for the evolution of the wavefunction. It is not that you replace one theory by another. You just add a piece to it (the {Q1...QN} state and its guiding equation).
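
Just to illustrate what "adding a piece" amounts to in the simplest possible case, here is a toy numerical sketch (1D free Gaussian packet, hbar = m = 1, all parameter values arbitrary): the wavefunction evolves exactly as in standard QM, and on top of it a single position Q is integrated along the guiding equation dQ/dt = j/rho.

```python
import numpy as np

# Toy 1D illustration (hbar = m = 1, arbitrary packet parameters):
# psi evolves by the free Schroedinger equation (split-step FFT),
# and one "real" position Q is dragged along by dQ/dt = j / rho,
# with rho = |psi|^2 and j = Im(conj(psi) * dpsi/dx).

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

sigma, k0 = 1.0, 2.0                      # illustrative width and momentum
psi = np.exp(-x**2 / (4.0 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 0.001, 5000
Q = -1.0                                  # one possible initial "token" position

for _ in range(steps):
    # unitary evolution, exactly as in ordinary QM (free particle)
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))

    # density and probability current on the grid
    rho = np.abs(psi)**2
    j = np.imag(np.conj(psi) * np.gradient(psi, dx))

    # guiding equation dQ/dt = j/rho, simple Euler step
    v = np.interp(Q, x, j / np.maximum(rho, 1e-30))
    Q += v * dt

print("final Q =", Q)   # the token drifts along with the packet, roughly at speed k0
```

The single psi update line is untouched standard QM; everything after it in the loop is the extra Bohmian piece.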

cheers,
Patrick.
 
  • #45
vanesch said:
Both ! If the student is not allowed to write equations which do not respect Lorentz invariance, and if this is a great help in finding the right formulations in QFT for instance, he'll turn green when he suddenly has to write down the guiding equations !

That's funny. They don't turn green now when they are told about wave function collapse...

Then again, I have recently learned there is confusion about this particular point among more than just students. I was having a discussion by email with a faculty member from a very prestigious university; he had written me a note about one of my papers, and we argued for a while back and forth about what Bell's theorem proved, etc., etc. Typical stuff. But eventually I parried all of his insults and got him to assert that in relativistic quantum theory (as opposed to that non-relativistic stuff that, he claimed, was leading me astray) the wave function collapse propagates out from the measurement event at the speed of light, along the future light cone. Now leaving aside the fact that it's not even clear what this would mean since the wf is defined on configuration space -- and the fact that it simply isn't *true* that this is how relativistic quantum theories work -- there is of course the fatal problem that, if this were true, QM would no longer predict violations of Bell's inequalities!

Hmmm, I guess the point is -- you and I can at least agree that if people are going to grab a principle like lorentz invariance and run with it, they should at least be damn careful to be consistent. In particular, if one is going to raise a fuss over the non-lorentz-invariance of the Bohmian guidance formulas, then one shouldn't let the same kind of thing slide by unobjected-to in the theory one favors as an alternative. Sigh...

:smile:
 
  • #46
ttn said:
But eventually I parried all of his insults and got him to assert that in relativistic quantum theory (as opposed to that non-relativistic stuff that, he claimed, was leading me astray) the wave function collapse propagates out from the measurement event at the speed of light, along the future light cone.

Hehe :smile: Honestly, there could be something like that (and gravity might provide it). Now I don't know if that faculty member of that prestigious university was being very naive or extremely clever !
If, indeed, he thought that the "Copenhagen" collapse simply took place as an outgoing lightspeed wave, then he didn't understand a thing about what EPR is all about, of course, which is worrisome for that prestigious university :smile:

But if he is a distinguished theorist, he should long since have switched to MWI :-p. And maybe he did find a way in MWI to have the worlds "collapse" once everything is in the past lightcone ; this is a bit like how Penrose imagines things. But the problem is that that is a picture which is not based on a formalism ; so we should first work out a formalism for it (say, a theory of quantum gravity, if it is gravity that does this) before setting up a picture, otherwise we're groping in the dark.

cheers,
Patrick.
 