Is quantum indeterminacy a result of experiment design?

gstafleu
Is "quantum indeterminacy" a result of experiment design?

Stemming from a debate about free will, I, a total lay person, have been doing some reading on quantum indeterminacy (QI). I have come up with some questions, I hope this forum is an appropriate place for them.

Basically the question is this: where does the idea of QI come from? The two scenarios that pop up most often are Schrödinger's cat and the two-slit electron experiment. But in both cases it seems to me that the indeterminacy that shows up is exactly what was designed into the experiment.

In the case of the cat, the design incorporates a known indeterminate element: the decay of an atom. We do not have a model that will predict the exact time when a given atom will decay, but we do have a statistical model, and that model is incorporated into the cat experiment. That model predicts that if we open the box at t = Lambda (one half-life), we have a 50% chance of a dead cat. Before we open the box we of course don't know the state of the cat, but that is what was designed into the experiment. How does this show any indeterminacy beyond that which was designed into the experiment?
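For concreteness, a minimal sketch of the statistical model in question (assuming the usual exponential-decay law, reading "t = Lambda" as one half-life; the decay constant is an arbitrary choice):

```python
import math

# Exponential decay: P(atom survives to time t) = exp(-lam * t).
lam = 0.3                       # decay constant (arbitrary units, my choice)
t_half = math.log(2) / lam      # half-life
print(math.exp(-lam * t_half))  # 0.5 -- a 50% chance the atom (and the cat) survived
```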

With the double-slit experiment we seem to have a similar situation. The indeterminacy designed into the experiment is that we don't know which slit a given electron will go through. But we do know that 50% of the electrons will go through one slit and 50% through the other. The actual probability distribution turns out to be a wave function, which is perhaps a bit unusual, but no more than that. As a result, the distribution of electrons on the detector screen displays an interference pattern. So far I fail to see any indeterminacy beyond what was designed into the experiment.

We can then change the experiment by removing the indeterminacy: we add some sort of apparatus that tells us which slit each electron passed through. This apparently changes the probability distribution from a wave to a classical particle yes-or-no distribution. OK, so that is maybe strange, but I still see no indeterminacy here.
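To make the two distributions concrete, a small numerical sketch (an illustration only; the wavenumber, slit separation and screen geometry are arbitrary choices): with both paths open the amplitudes add before squaring, with a which-slit detector the probabilities add and the dark fringes disappear.

```python
import numpy as np

x = np.linspace(-10, 10, 1001)        # position along the detector screen
k, d = 10.0, 3.0                      # wavenumber and slit separation (assumed)

def amp(x0):
    # wave arriving from a slit at x0, with the screen a distance 10 away
    r = np.sqrt((x - x0) ** 2 + 100.0)
    return np.exp(1j * k * r) / np.sqrt(r)

both = np.abs(amp(-d / 2) + amp(+d / 2)) ** 2                # |psi1 + psi2|^2: fringes
which = np.abs(amp(-d / 2)) ** 2 + np.abs(amp(+d / 2)) ** 2  # |psi1|^2 + |psi2|^2: none
print(both.min(), which.min())  # 'both' dips to near zero at dark fringes; 'which' never does
```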

In both cases the outcome of the experiment is completely predictable. In the first case we have a probabilistic experiment (we don't know which slit each electron goes through), hence we get a probabilistic outcome. Indeterminacy would only pop up if the interference pattern were not consistent. When we change the experiment so as to remove the indeterminacy, we get a result that is equally predictable and compatible with the new experimental set-up.

So where does this term "quantum indeterminacy" come from? I must be missing something here.
 
Your question is a little confusing--"indeterminacy" usually just means the fact that QM is probabilistic, i.e. non-deterministic, but you seem to accept this in your question. By "indeterminacy" are you talking about something more like the idea that quantum particles and systems don't have well-defined properties when they aren't being measured--such as the idea that the cat in the Schroedinger's cat thought-experiment is not in a definite state until the box is opened, or the idea that the particles in the double-slit experiment don't go through one slit or the other when their path is not measured? If that's what you mean, I don't think "indeterminacy" would really be the right word for it, although I'm not sure what term should be used--maybe "no local hidden variables".

In any case, the difficulty with imagining that particles have definite properties even when not measured is probably best illustrated by the weird correlations between measurements of different members of entangled pairs of particles--what Einstein called "spooky action at a distance". Here's something I wrote about this on another thread:
Photons have two types of polarization, linear polarization and circular polarization, and they are "noncommuting" variables, which means you can't measure both at once, just like with position and momentum. If you measure circular polarization, only two outcomes are possible--you'll either find the photon has left-handed polarization, or right-handed polarization. With linear polarization, you can only measure it along a particular spatial axis, depending on how you orient the polarization filter--once you make that choice, then again there are only two outcomes possible, the photon either makes it through the filter or it doesn't. And if you have two entangled photons, then if you make the same measurement on both (either measuring the circular polarization of each, or measuring the linear polarization of both in the same direction) you'll always get opposite answers--if one photon makes it through a particular polarization filter, the other photon will never make it through an identical filter.

To understand the EPR experiments, you can actually ignore circular polarization and just concentrate on linear polarization measured on different axes. If you do not orient both filters at exactly the same angle, then you are no longer guaranteed that if one photon makes it through its filter, the other photon will not make it through--the probability that you will get opposite answers in the two cases depends on the angle between the two filters. Suppose you choose to set things up so that each experimenter always chooses between three possible angles to measure his photon. Again, whenever both experimenters happen to choose the same axis, they will get opposite answers. But by itself, there is nothing particularly "spooky" about this correlation--as an analogy, suppose I make pairs of scratch lotto cards with three scratchable boxes on each one, and you find that whenever one person scratches a given box on his card and finds a cherry behind it, then if the other person scratched the same box on their own card, they'd always find a lemon, and vice versa. You probably wouldn't conclude that the two cards were linked by a faster-than-light signal; you'd just conclude that I manufactured the pairs in such a way that each had opposite pictures behind each one of its three boxes, so if one had cherry-cherry-lemon behind the boxes, the other must have lemon-lemon-cherry behind its own.

In the same way, you might conclude from polarization experiment that each photon has a preexisting polarization on each of the three axes that can be measured in the experiment, and that in the case of entangled photons, both are always created with opposite polarization on their three axes. For example, if you label the three filter orientations A, B, and C, then you could imagine that if one photon has the preexisting state A+,B-,C+ (+ meaning it is polarized in such a way that the photon will make it through that filter, - meaning it won't make it through), then the other photon must have the opposite preexisting state A-,B+,C-.

The problem is that if this were true, it would force you to the conclusion that on those trials where the two experimenters picked different axes to measure, both photons should behave in opposite ways in at least 1/3 of the trials. For example, if we imagine photon #1 is in state A+,B-,C+ before being measured and photon #2 is in state A-,B+,C-, then we can look at each possible way that the two experimenters can randomly choose different axes, and what the results would be:

Experimenter #1 picks A, Experimenter #2 picks B: same result (photon #1 makes it through, photon #2 makes it through)

Experimenter #1 picks A, Experimenter #2 picks C: opposite results (photon #1 makes it through, photon #2 is blocked)

Experimenter #1 picks B, Experimenter #2 picks A: same result (photon #1 is blocked, photon #2 is blocked)

Experimenter #1 picks B, Experimenter #2 picks C: same result (photon #1 is blocked, photon #2 is blocked)

Experimenter #1 picks C, Experimenter #2 picks A: opposite results (photon #1 makes it through, photon #2 is blocked)

Experimenter #1 picks C, Experimenter #2 picks B: same result (photon #1 makes it through, photon #2 makes it through)

In this case, you can see that in 1/3 of trials where they pick different filters, they should get opposite results. You'd get the same answer if you assumed any other preexisting state where there are two filters of one type and one of the other, like A+,B+,C-/A-,B-,C+ or A+,B-,C-/A-,B+,C+. On the other hand, if you assume a state where each photon will behave the same way in response to all the filters, like A+,B+,C+/A-,B-,C-, then of course even if they measure along different axes the two experimenters are guaranteed to get opposite answers with probability 1. So if you imagine that when multiple pairs of photons are created, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C-/A-,B+,C+ while other pairs are created in homogeneous preexisting states like A+,B+,C+/A-,B-,C-, then the probability of getting opposite answers when you measure photons on different axes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get opposite answers in less than 1/3 of trials where you measure on different axes, provided you assume that each photon has such a preexisting state.

But the strange part of QM is that by picking the right combination of 3 axes, it is possible for the probability that the photons will behave in opposite ways when different filters are picked to be less than 1/3! And yet it will still be true that whenever you measure both along the same axis, you get opposite answers for both. So unlike in the case of the scratch lotto cards, we can't simply explain this by imagining that each photon had a predetermined answer to whether it would make it through each of the 3 possible filters. This is the origin of the notion of "spooky action at a distance".
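The counting argument above is easy to check by brute force. A short sketch (not from the original discussion) that enumerates all eight possible "lotto card" states, confirms the 1/3 floor, and compares it with the standard quantum prediction for polarization filters at 0°, 60° and 120° (the specific angles are my choice):

```python
import itertools, math

AXES = range(3)

def p_opposite(state):
    # Photon #2 carries the opposite answer on every axis, so the pair gives
    # opposite results exactly when photon #1's answers on the two chosen axes agree.
    pairs = [(a, b) for a in AXES for b in AXES if a != b]  # 6 equally likely choices
    return sum(state[a] == state[b] for a, b in pairs) / len(pairs)

states = list(itertools.product([+1, -1], repeat=3))   # the 8 possible hidden states
print(min(p_opposite(s) for s in states))  # 0.333... -- the local-realist floor
print(max(p_opposite(s) for s in states))  # 1.0 -- the homogeneous +++/--- states

# QM for this entangled state predicts P(opposite) = cos^2(angle between filters);
# with filters at 0, 60 and 120 degrees, cos^2 works out to 0.25 for every unequal pair:
print(math.cos(math.radians(60)) ** 2)     # 0.25 -- below the 1/3 floor
```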

But notice that this spookiness can't be exploited to send messages--each experimenter only gets to choose which of the three filter orientations to use, but they have no influence over whether the photon actually makes it through the filter they choose or is blocked by it, that seems to be completely random. It's only when they get together and compare their results over multiple trials that they notice this "spooky" statistical property that when they both happened to pick the same filter orientation, both photons always behaved in opposite ways, yet when they picked two different filter orientations, both photons behaved in opposite ways less than 1/3 of the time.
Technically, the result that the photons must behave in opposite ways at least 1/3 of the time when measured along different axes follows from something called "Bell's theorem" (see http://en.wikipedia.org/wiki/Bell's_Theorem). Any system that obeys "local realism" (particles having definite preexisting states before measurement, where the state of one cannot affect the state of another faster than light) must obey something called the "Bell inequality", but quantum mechanics violates the Bell inequality, thus disproving local realism.
 
gstafleu said:
We can then change the experiment by removing the indeterminacy: we add some sort of apparatus that tells us which slit each electron passed through. This apparently changes the probability distribution from a wave to a classical particle yes-or-no distribution. OK, so that is maybe strange, but I still see no indeterminacy here.

You don't see the interference anymore because you no longer have a coherent wave (you get the whole range of frequencies and directions for the wave vector)--meaning that by measuring the position to a definite value you created uncertainty in the momentum...

That's the uncertainty principle: you can't know for sure both the position and the momentum, or the time duration of a reaction and its energy, or the spin of a particle along more than one axis simultaneously...

When you measure one of these properties you change your system and get an uncertainty in the other property.
 
JesseM said:
Your question is a little confusing--"indeterminacy" usually just means the fact that QM is probabilistic, ie non-deterministic, but you seem to accept this in your question.

Yes, there are lots of models that have a probabilistic component, e.g. weather predictions, so assuming that isn't so strange. This generally does not lead us to assume that it is "fundamentally" impossible to come up with a non-probabilistic model; usually the conclusion is that we don't know enough and/or don't have enough computational resources.

As I said, it was a philosophic discussion--about free will, determinism, that kind of thing--that led me to think about this. In (some) philosophic circles it is quite usual to loosely remark that QM has introduced a fundamental randomness, or non-determinism if you will, into the universe. This may be so, but if so neither the cat nor the slits seem to indicate this.

Maybe the only part of QM that does introduce a fundamental "unknowability" is, as fargoth mentioned, Heisenberg's uncertainty principle.

Thanks for the explanation of and the pointers to Bell's inequality. As far as I can tell from Harrison's article, this experiment throws doubt on at least one of the following three assumptions:
1. Logic is valid (to be precise the assumption that (P Or Not P) is always true.)
2. There is a reality separate from its observation (hidden variables)
3. Locality (c is the speed limit).

Interesting as this is (my personal favorite is that (P Or Not P) may not always hold :-), this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable: the distribution is exactly as predicted by QM (rather than classical theory).
 
gstafleu said:
Yes, there are lots of models that have a probabilistic component, e.g. weather predictions, so assuming that isn't so strange. This generally does not lead us to assume that it is "fundamentally" impossible to come up with a non-probabilistic model; usually the conclusion is that we don't know enough and/or don't have enough computational resources.
Sure, in classical physics there may be situations where the exact details of the position and velocity of every particle at the microscopic level cannot be known, so you can only make statistical predictions, but it's assumed that if you could have complete information the situation would actually be completely deterministic. There actually is an "interpretation of QM" (not a theory, since it makes no new predictions) like this, invented by David Bohm and sometimes called "Bohmian mechanics", in which each particle has a well-defined position and trajectory at all times, and they are guided by a "pilot wave" which can transmit information instantaneously (thus sacrificing locality), and whose behavior is totally deterministic. In this theory, the perceived randomness is due to our lack of information about the precise state of each particle and the pilot wave, so you make an assumption called the quantum equilibrium hypothesis which treats every possible state that's compatible with your knowledge as equally likely for the purposes of making statistical predictions. See this article on Bohmian mechanics, or this explanation of how the pilot wave works. Aside from the fact that, like all interpretations, it's impossible to test, a lot of people object that Bohmian mechanics is a bit contrived and inelegant, not to mention the fact that it must violate relativity at a fundamental level even if not at an empirical level (Bohmian mechanics was only developed to explain nonrelativistic quantum mechanics; I don't think there have been any fully successful attempts to extend it to quantum field theory, which is relativistic). This and other objections to Bohmian mechanics, along with answers from its supporters, are discussed in this paper:

http://arxiv.org/abs/quant-ph/0412119
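As a rough illustration of how the guidance works, here is a toy numerical sketch (entirely illustrative, with hbar = m = 1, free evolution, crude Euler integration, and no claim of accuracy near nodes of psi): the wavefunction evolves by the Schrödinger equation while each particle follows the deterministic velocity field v = Im(psi'/psi), with starting positions drawn from |psi|^2 as the quantum equilibrium hypothesis prescribes.

```python
import numpy as np

N, box = 2048, 200.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# "Two slits": a superposition of two Gaussian packets drifting toward each other
psi = np.exp(-(x + 5) ** 2 + 1j * x) + np.exp(-(x - 5) ** 2 - 1j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Quantum equilibrium: draw definite starting positions from |psi|^2
rng = np.random.default_rng(0)
prob = np.abs(psi) ** 2
prob /= prob.sum()
particles = rng.choice(x, size=20, p=prob)

dt = 0.001
for _ in range(1000):
    # free Schroedinger evolution, exact in k-space
    psi = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k ** 2 * dt / 2))
    # guidance equation: v = Im(psi'/psi), interpolated at each particle position
    v = np.imag(np.gradient(psi, x) / psi)
    particles += np.interp(particles, x, v) * dt

print(np.sort(particles))  # deterministic trajectories whose spread mimics QM statistics
```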

There is also the "many-worlds interpretation" (see here and here), which is supposed to be deterministic at the level of the entire multiverse, with the apparent randomness due to the experimenters splitting into multiple copies who get different results; but there seem to be a lot of problems showing how to derive the correct probabilities from the many-worlds interpretation without making additional assumptions.
gstafleu said:
Maybe the only part of QM that does introduce a fundamental "unknowability" is, as fargoth mentioned, Heisenberg's uncertainty principle.
But this is a limitation on our ability to measure noncommuting variables like position and momentum simultaneously, so it's only "unknowability" about hidden variables, if you believe there's some objective truth about the value of these variables at all times.
gstafleu said:
Thanks for the explanation of and the pointers to Bell's inequality. As far as I can tell from Harrison's article, this experiment throws doubt on at least one of the following three assumptions:
1. Logic is valid (to be precise the assumption that (P Or Not P) is always true.)
2. There is a reality separate from its observation (hidden variables)
3. Locality (c is the speed limit).
Interesting as this is (my personal favorite is that (P Or Not P) may not always hold :-), this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable, the distribution is exactly as predicted by QM (rather than classic theory).
Well, wouldn't any theory that attempts to explain the apparent randomness of QM in terms of lack of information about some underlying deterministic reality, like with the apparent "randomness" of classical systems when you don't have all the necessary information about every particle's position and velocity, necessarily involve "hidden variables" of some sort?
 
JesseM said:
But this is a limitation on our ability to measure noncommuting variables like position and momentum simultaneously, so it's only "unknowability" about hidden variables, if you believe there's some objective truth about the value of these variables at all times.
The uncertainty principle stems from the fact that we are handling a wave packet in standard QM.
A pulse is constructed from many frequencies; the shorter it is, the more frequencies it contains.
If you ignore the fact that things are represented by waves of probability amplitude, and say that in the double-slit experiment there is a path the particle will go through which is already determined (a hidden variable), but you just can't know it without changing the momentum, then you can't explain the dark spots, in which the particle can't be.
You don't get interference if the particle behaves classically: it must move through both slits and interfere (even one particle will interfere with itself). Once the wavefunction collapses to one of the slits (if the particle moves through just one slit), there is no interference.
 
fargoth said:
The uncertainty principle stems from the fact that we are handling a wave packet in standard QM.
Sure, but the analogous version of this for waves in classical mechanics doesn't mean there is any information about the wave we are lacking, it just means you have to sum multiple wavelengths of pure sine waves to get a localized wave packet, so you can't say the wave packet has a single wavelength. It's only because the wavefunction in QM represents the probability of getting different values for properties like position and momentum that we call it "uncertainty", so if you make the assumption that these properties must all have well-defined values at all times (hidden variables), then this means a limit on your ability to know the values of noncommuting properties simultaneously.
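That classical-wave fact is easy to verify numerically. A small sketch (an illustration only) builds Gaussian packets of different widths and checks that the product of the spatial spread and the wavenumber spread stays pinned at the Gaussian minimum of 1/2:

```python
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
dk = k[1] - k[0]

for width in (0.5, 1.0, 2.0):
    psi = np.exp(-x ** 2 / (4 * width ** 2))          # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)     # normalize in x
    sx = np.sqrt(np.sum(x ** 2 * np.abs(psi) ** 2) * dx)
    phi = np.fft.fft(psi)                             # decompose into pure sine waves
    phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dk)     # normalize in k
    sk = np.sqrt(np.sum(k ** 2 * np.abs(phi) ** 2) * dk)
    print(f"width={width}: sigma_x * sigma_k = {sx * sk:.3f}")  # ~0.500 each time
```

The narrower the packet, the larger sigma_k comes out: squeezing the pulse in space forces in more wavelengths, exactly as described above.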
fargoth said:
If you ignore the fact that things are represented by waves of probability amplitude, and say that in the double-slit experiment there is a path the particle will go through which is already determined (a hidden variable), but you just can't know it without changing the momentum, then you can't explain the dark spots, in which the particle can't be.
You don't get interference if the particle behaves classically: it must move through both slits and interfere (even one particle will interfere with itself). Once the wavefunction collapses to one of the slits (if the particle moves through just one slit), there is no interference.
But in the Bohmian interpretation of QM, the particle does always travel through one slit or another, it's just that it's guided by a nonlocal "pilot wave" which in some sense is exploring both paths, and the pilot wave will guide it differently depending on whether the other slit is open or closed, and depending on whether there's a detector there. I think the Bohmian interpretation seems too inelegant to be plausible, but it nevertheless serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM.
 
Correct me if I'm wrong, I haven't read that interpretation yet, but if you claim a particle has a definite place in space at all times even though we can't know it, and this place depends on the wave function, then the route the particle goes through will not be a straight one, but rather a complicated, direction-changing one... Doesn't that mean a charged particle would radiate on such a path? Or does the particle just teleport itself to a different place each moment? If so, how does that help determinism?
 
gstafleu said:
...this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable, the distribution is exactly as predicted by QM (rather than classic theory).

Predicting probability distributions isn't the same as predicting individual outcomes. QM predicts a limit to predictability...and provides the techniques for making the best predictions possible.
 
  • #10
Probability deals with events. While a fundamental, rock-solid basis for probability theory does not exist, what does exist covers QM probabilities, which, of course, deal with events.

Without QM, the two-slit experiment for electrons would not show a discernible interference pattern; the electrons would go in straight lines unless blocked by a screen. But, unlike electrons, photons/radiation always interfere when involved in a two-slit experiment--within appropriate frequency ranges and so forth. If the photon is energetic enough to melt the screen, all bets are off. You never know, so to speak, where the next photon will fall.

You never know which stocks, if any, on the New York Stock Exchange will appreciate more than 5% during a day's trading, or... Given enough history, statisticians can estimate the probability of such events--theories in this area are not anywhere near as powerful as in physics, but some have had success with models. So data analysis, data mining and so forth are key elements. But the financial modelers and the physicists are doing exactly the same thing: they are, in one way or another, estimating and deriving probabilities.

They have, among other things, a common allegiance to probabilistic methods, and recognize the importance of empirical data to their success. In fact, it can be argued that they both use the same fundamental, formal equation to track probabilities over time. In physics the equation is called the Schrödinger equation for the density matrix; in finance, economics and the other social sciences, and in engineering, it is often called a stochastic probability equation, or many other things, usually with "stochastic" tossed in. What they both do is give the values of probabilities now, based on yesterday, or the last minute, or the last year, or... or... But of course the drivers of the dynamics differ greatly between the various fields. If we use classical and quantum approaches to sending electrons through slits, then only the QM approach will explain the interference pattern, which has nothing to do with experimental design--the sample space actually covers probably thousands of experiments, with many, many slit configurations, and the conclusions are invariant; QM is right.
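A toy sketch of the common structure being pointed at here (an illustration with made-up numbers): a fixed rule that carries a probability distribution from one time step to the next, a discrete cousin of both the density-matrix evolution and a stochastic (Markov) equation.

```python
import numpy as np

# Column j holds the probabilities of moving from state j to each state i.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([1.0, 0.0])   # the distribution "yesterday"
for _ in range(5):
    p = T @ p              # the distribution "now", computed from the one before
print(p)                   # drifts toward the stationary distribution [2/3, 1/3]
```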

It's worth remembering that the experimental finding of electron interference came before modern QM. Thus QM was, out of necessity, required to explain electron diffraction. To a substantial degree this core phenomenon, in my opinion, drives almost all of QM: the Heisenberg uncertainty principle, the superposition principle, tunneling, and so forth. Interference first, theory second.

When talking QM it is, I think, important to recall QM's origins, and to indicate the immense coverage and success of the theory. My conservative guess is that there are at least 100,000 pages of material on the successful testing of QM--this covers atomic physics, solid state physics, superconductivity, superfluidity, semiconductors and transistors, and you name it. There is no indication of the influence of experimental design on QM, other than that normally expected. I did a lot of work on this issue, in regard to electron scattering experiments, a few years ago, and never found any indication that different designs led to different conclusions. (If you know about things like ANOVA, etc., you will recognize a bit of oversimplification on my part--to save writing another paragraph.)

It's a virtual certainty, at least to me, that eventually QM will be found wanting--the march of history and all that. But I'd say there's no chance--better, slim to none--of finding the problem in those 100,000 pages. All of this work has been steam-cleaned by the most intense scrutiny imaginable, and QM has always done the job. The QM edifice, probability and all, is huge, stable and solid--very unlikely to come tumbling down; it will crumble at the edges instead.
Regards,
Reilly Atkinson
QM is weird because nature is weird.
 
  • #11
Jesse,

thanks for the links to the articles about Bohmian mechanics, that was very interesting. I think that may have been precisely what I was after. The article by Goldstein at one point says "...if at some time (say the initial time) the configuration Q of our system is random, with distribution given by |ψ|² = ψ*ψ, this will be true at all times..." That, I think, is like what I was trying to say about the randomness in the results being the same as the randomness put into the experiment in the first place.

JesseM said:
Well, wouldn't any theory that attempts to explain the apparent randomness of QM in terms of lack of information about some underlying deterministic reality, like with the apparent "randomness" of classical systems when you don't have all the necessary information about every particle's position and velocity, necessarily involve "hidden variables" of some sort?

Yes, except that determined non-determinists like to think that QM has somehow "proved" that there "are" (no doubt in some Clintonian sense :-) no hidden variables. As you point out elsewhere, Bohmian mechanics, whether correct or not, does away with any argument that relies on the assumption that QM is necessarily non-deterministic: Bohmian mechanics "serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM," thus providing a counterexample to the assumption of necessary nondeterminism.
 
  • #12
reilly said:
They [physics and e.g. economics] have, among other things, a common allegiance to probabilistic methods, and recognize the importance of empirical data to their success. In fact, it can be argued that they both use the same fundamental, formal equation to track probabilities over time.

Reilly,

Thanks for your reply. I never intended to say that QM was wrong or didn't work, just that I didn't see any reason to assume fundamental nondeterminism. In e.g. economics randomness is introduced because we don't know enough, or don't have enough computational resources to completely describe the system (as in chaos theory).

In QM we seem to have something similar, and following Bohm the source of the randomness there seems to be that we cannot establish anything beyond what the quantum equilibrium hypothesis allows, thus preserving e.g. the uncertainty principle.

In both cases there is no need to posit a fundamental indeterminacy, we just run into practical limits. At least, that is what I understand from Bohmian mechanics so far.
 
  • #13
gstafleu said:
Bohmian mechanics "serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM," thus providing a counter example to the assumption of necessary nondeterminism.

However, nonlocal theories are incompatible with relativity, as far as I know. In order for BM (or some other nonlocal hidden variables theory) to be taken more seriously as a replacement for orthodox QM, someone is going to have to come up with a replacement for relativity that both allows for nonlocality and satisfies all the experimental tests that relativity has passed so far. Rather a tall order! :bugeye:
 
  • #14
As a pragmatist in economics and physics, I really don't care much about the origins of randomness. It's there--let me explain. My version is that, for whatever reasons, we cannot predict the precise result of virtually any experiment--certainly not for anything measured by a real number. There is always experimental error--maybe not in simple counting, of oranges or change or books--the sources of which we do not fully understand, but we do know how to manage the error, and estimate it. One posits the experimental error to be a random variable, Gaussian preferably but not necessarily. In fact, systems, control, communication and electronics engineers use a very sophisticated approach to errors, one that goes well beyond elementary statistics. And all this effort works.


Why should something be causal rather than random? Where is it written?
And, reflect upon the following curiosity: it's possible to build models, based on random variables, that are fully deterministic -- this shows up in the subject of Time Series Analysis.

So, to avoid brain crunching contradictions, and tortured thoughts, I go with: Random is as random does. If random works, it must be random.

Probability and statistics provide great tools for solving lots of important problems--even if we are not sure why, these tools work. But we say that it's random stuff that is explicated by probability and statistics.

Around these parts, Newton's mechanics and gravity work just fine. Frankly, I find this as mysterious as the accuracy of the standard probability model for a fair coin.

By your criteria, how could you ever judge that an apparently random process was in fact causal?
Good questions.
Regards,
Reilly Atkinson
 
  • #15
jtbell said:
However, nonlocal theories are incompatible with relativity, as far as I know. In order for BM (or some other nonlocal hidden variables theory) to be taken more seriously as a replacement for orthodox QM, someone is going to have to come up with a replacement for relativity that both allows for nonlocality and satisfies all the experimental tests that relativity has passed so far. Rather a tall order! :bugeye:
Well... isn't orthodox QM equally non-local? The Stanford Encyclopedia of Philosophy entry on Bohmian mechanics (http://plato.stanford.edu/entries/qm-bohm/#nl) states:

It should be emphasized that the nonlocality of Bohmian mechanics derives solely from the nonlocality built into the structure of standard quantum theory, as provided by a wave function on configuration space, an abstraction which, roughly speaking, combines--or binds--distant particles into a single irreducible reality.
It seems to me that under these circumstances, complaining that you can't take BM seriously as a replacement for the equally non-local orthodox QM is, to make a link with a quarkian property, strange :smile:.
 
  • #16
reilly said:
Why should something be causal rather than random? Where is it written?

The reason for assuming causality is perhaps not so much physical as methodological. When you incorporate a stochastic element in a model, you are in fact saying something like "we have a rough description of how this variable works, but for the purposes of this model we are not going to bother explaining the detailed workings." In other words, you are explicitly putting the causation of a certain behaviour outside your model.

That is just fine for pragmatic solutions. It may simply not be possible to incorporate all causalities into the model (think chaos theory), or we may not yet know enough. In that case you take what you have or what you can do. No problem.

But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours. You just stick with the rough description, never mind "what makes it tick." And that is equivalent to saying that you have given up on the scientific endeavour, at least for those behaviours. Now it may be that at some point we will have to acknowledge that for some phenomena we have been getting nowhere for a long time and it doesn't look as if we'll be getting any further any time soon. But one shouldn't assume that as a matter of principle, which is what non-determinism amounts to.

And as a practical matter, one probably won't raise too much grant money for proposing to not look any further into a certain phenomenon :smile:.
 
  • #17
Well, as I see it, the "non-locality" of orthodox QM is due to a particle not having a definite size; I mean, particles are everywhere, it's just less probable to find them far away from a certain spot... so the particle is not local, and there's no problem with it "knowing" things that are far away from it.

The non-locality problem only occurs when you insist the particle has a definite place at all times, and yet it just "knows" non-local conditions.
 
  • #18
fargoth said:
Well, as I see it, the "non-locality" of orthodox QM is due to a particle not having a definite size; I mean, particles are everywhere, it's just less probable to find them far away from a certain spot... so the particle is not local, and there's no problem with it "knowing" things that are far away from it.
The non-locality problem only occurs when you insist the particle has a definite place at all times, and yet it just "knows" non-local conditions.

Totally incorrect. Size, as ZapperZ remarked on another thread, is simply not defined for these particles, and you have completely missed the point of entanglement.

Unlike some people, physicists don't just make up this stuff off the top of their heads.
 
  • #19
selfAdjoint said:
Totally incorrect. Size, as ZapperZ remarked on another thread, is simply not defined for these particles, and you have completely missed the point of entanglement.
Unlike some people, physicists don't just make up this stuff off the top of their heads.

I don't just make up this stuff off the top of my head. I can accept that you don't like the term size, and I'm not going to argue about it, because in the end it's the same as saying the particle doesn't have a defined size, or is point-sized, or whatever; what matters is where it can interact with other matter, and that is everywhere (though maybe with almost zero probability when it's far away...).

If a particle is described as a field of probability amplitudes, it exists everywhere; that's what I meant.

There's no need to get all sensitive about it :-p

I wasn't talking about entanglement... I was talking about the particle "knowing" where it could exist in space according to the surrounding environment, which would be a non-locality problem if you thought of this particle as existing at a certain point for a certain time, like BM claims.

Isn't the uncertainty principle also related to indeterminacy?
You can't know where your particle is, not because you can't measure it, but because, if QM is right, it doesn't have a well-defined position.
You can confine it using a potential well (though as long as the well isn't infinite it could still tunnel everywhere with very little probability).
Tunneling, for example, is a non-locality problem if you think of the particle as having a defined place which you just can't know, but if you say the particle is everywhere it's not a problem anymore.
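For the tunneling point, a back-of-envelope sketch (using the standard rectangular-barrier estimate T ~ exp(-2*kappa*L), with illustrative numbers): the probability of finding the particle beyond the barrier falls exponentially with the barrier's width, but never reaches zero.

```python
import math

hbar = 1.0545718e-34            # J*s
m = 9.109e-31                   # electron mass, kg
eV = 1.602e-19                  # J per eV
E, V0 = 1.0 * eV, 2.0 * eV      # a 1 eV electron meeting a 2 eV barrier (assumed)
kappa = math.sqrt(2 * m * (V0 - E)) / hbar   # decay constant inside the barrier

for L in (1e-10, 5e-10, 1e-9):  # barrier widths in metres
    print(L, math.exp(-2 * kappa * L))  # shrinks fast with width, but never to zero
```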

Maybe I misunderstood the hidden variable stuff, but I think it should apply to every unknowable variable, and if it does, then a problem of non-locality is present at the most basic level of position, and you don't have to go looking for it in entanglement.
 
  • #20
Bohm

Bohm did his work on his QM some 60 years ago; roughly around the same time that Feynman, Schwinger, and Tomonaga (F, S, T) figured out how to make QED work.

Even if the Bohm approach had led to a mere 1 percent of the progress in fundamental issues that the FST approach achieved, it would have a certain, if small, level of credibility. But, as far as I know, the Bohm approach has led only to controversy, conferences, heated and passionate arguments, and so forth--and only in a limited part of the physics community. No Bohm-interpretation-based physics of any consequence means it's a path not likely to bring any success--orthodox physics has left Bohmian physics way behind.

By the way, there are sound theoretical reasons why QM and QFT are local--it's all in the structure of the interactions. And we use, almost exclusively, so-called point interactions, which are guaranteed to be local.

Regards,
Reilly Atkinson
 
  • #21
gstafleu said:
The reason for assuming causality is perhaps not so much physical as methodological. When you incorporate a stochastic element in a model, you are in fact saying something like "we have a rough description of how this variable works, but for the purposes of this model we are not going to bother explaining the detailed workings." In other words, you are explicitly putting the causation of a certain behaviour outside your model.

RA: Your description of the import of random variables is quite at variance with much of the statistical and analytical work and literature of more than a century of experience. I'll cite you the examples of so-called Monte Carlo simulations, and/or simulations of gaseous or liquid turbulence, or explosions, and then there's the stock market, sales forecasts, surveys and so on. Check it out.

And, anyway, is it not perhaps the case that our basic perceptual mechanisms deal with stable averages, and with what you might call recognition of comprehensible motions and changes--we were programmed to look for causality; a fundamental human bias, and a good thing. Random is troublesome, life-threatening--"random traffic accidents"...

gstafleu said:
That is just fine for pragmatic solutions. It may simply not be possible to incorporate all causalities into the model (think chaos theory), or we may not yet know enough. In that case you take what you have or what you can do. No problem.

RA: Are you an experienced builder of models?

gstafleu said:
But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours. You just stick with the rough description, never mind "what makes it tick." And that is equivalent to saying that you have given up on the scientific endeavour, at least for those behaviours. Now it may be that at some point we will have to acknowledge that for some phenomena we have been getting nowhere for a long time and it doesn't look as if we'll be getting any further any time soon. But one shouldn't assume that as a matter of principle, which is what non-determinism amounts to.

RA: Randomness does not preclude understanding--much of Einstein's work dealt with randomness, with extraordinary success.

For me to make assumptions of the type you mention just above is to take me way beyond my pay grade. I don't have a clue. But to stop investigation is, as you suggest, just plain silly.

Regards,
Reilly Atkinson
 
  • #22
reilly said:
By the way, there are sound theoretical reasons why QM and QFT are local--it's all in the structure of the interactions. And we use, almost exclusively, so-called point interactions, which are guaranteed to be local.
I've always seen QM and its sub-theories as "non-local", with HUP, superposition, entanglement and the like being explained through that non-local quality.

So, just to be clear: by saying "QM and QFT are local" you do not mean local in a classical sense, i.e. with a dependent background, but local against a structure of reality built on an independent background, as Lee Smolin describes both GR and QM as having?

Thus there is no true tracking of "local" positions between separated points, just the relative interrelation of interactions, each acting as a kind of independent local system.

Is that how some "theoretical reasons" can get "local" onto QM?
 
  • #23
The question was whether indeterminacy is due to experimental design. The obvious answer: no.

We know that an electron can be confined to an arbitrarily small volume because it acts as a point particle. If it were an issue of experimental design, we could simultaneously measure its momentum with similar precision--and that doesn't happen.

Instead, our results exactly match the HUP, which is essentially the "source" of indeterminacy. Even within the theoretical frameworks of BM and MWI, no one claims any different results are possible within an experimental setup.
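That point can be put in numbers. A quick sketch (using the textbook bound dx * dp >= hbar/2) of how the minimum momentum spread explodes as the electron is confined more tightly:

```python
hbar = 1.0545718e-34   # J*s
m_e = 9.109e-31        # electron mass, kg

for dx in (1e-9, 1e-10, 1e-12):      # confinement size in metres
    dp = hbar / (2 * dx)             # minimum momentum spread allowed by the HUP
    print(dx, dp, dp / m_e)          # the velocity spread (m/s) grows as the box shrinks
```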

So it isn't the experiment, it's the theory every time. Experiment, of course, supports the theory too. :smile:
 
  • #24


:smile: Schrodinger's cat; woof!
I've rapidly come to the conclusion that pinpointing a size for, say, a photon or an electron isn't really that important: yes, they both have a size, even if it is just a wavelength or a point, but to argue about it seems a bit pointless, if you'll forgive the pun :smile:

So everything has a definite value, it's just that we can't measure it? So superposition--a cloud of possible electron configurations--isn't that all possibilities in fact exist at once, merely that we can't determine for sure exactly where the electron is at any given time, and its momentum; thus it's everywhere, with certain probabilities given by the density of the cloud? That makes sense to me :smile:

Does everyone independently come to the conclusion that this sort of non-deterministic idea indicates that there is in fact free will, and then find out that there isn't again, or is it just me? That means that you can't know what anyone is thinking, and neither can your mind, so I guess there is free will after all. I think I just broke my brain :-p
 
  • #25
gstafleu said:
But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours...

I'm not convinced Bohmian mechanics is leading the way. It might be deterministic, but my understanding is that the pilot waves can never be detected because they always cancel out. I like the ideas a lot, but I always find myself in a Bohmian universe that is deterministic yet still behaves just as if it isn't! I think most followers have already given up the idea of creating technology to control pilot waves etc. (unless you include all "ordinary" technology that exploits QM). I just use the ideas as a useful perspective to take up occasionally.
 
  • #26
Oh, and my guess is that if I ever went back to doing the mathematics of quantum mechanics, it would be much simpler to put determinism in a small folder in the loft labelled "Beautiful ideas" and only bring it down once a year with the Christmas tree and fairy lights :biggrin:
 
  • #27
RandallB said:
So, just to be clear: by saying "QM and QFT are local" you do not mean local in a classical sense, i.e. with a dependent background, but local against a structure of reality built on an independent background, as Lee Smolin describes both GR and QM as having?
Thus there is no true tracking of "local" positions between separated points, just the relative interrelation of interactions, each acting as a kind of independent local system.
Is that how some "theoretical reasons" can get "local" onto QM?


I haven't a clue what you are saying. Perhaps it's my lack of knowledge of Dr. Smolin's ideas. Maybe you could provide a brief explanation of independent vs. dependent backgrounds, or of "the structure of reality", whatever that is, or of the "relative interrelation of interactions", whatever that is. Beats me.

And, by the way, how can a position be anything but local? I'd be very intrigued to know what a nonlocal position is.

Locality, as I've known it while doing physics, is a rather simple concept--to paraphrase a TV commercial: what happens here, is here, and is always face-to-face. And, with the restrictions imposed by relativity, something there can influence something here only by the transmission of some "signal", at or below the speed of light. So "local" gives the following picture of electric and magnetic forces: charged particle A emits a photon, which is absorbed by B, and vice versa. This goes on all the time, and the emission and absorption of photons is the specific mechanism behind the electromagnetic forces--as is shown in numerous QED textbooks. All these words describe a very concise mathematical formulation, based on a field theory of local, that is point, interactions.

Regards,
Reilly Atkinson
 
  • #28
Hello :cool:! Here I am again with my pet interpretation of QM, which is of course MWI :rolleyes:

The way MWI sees QM is related to the issues here, because it combines both the locality and apparent non-local effects of QM, and the relationship between determinism and the essential randomness (of which the HUP is of course a cornerstone).

In MWI, the wavefunction of the universe always evolves *deterministically* and *via local dynamics*. The only dynamical law of the wavefunction is the Schroedinger equation, resulting in a unitary evolution operator; moreover, this unitary evolution operator is such that it only evolves subsystems in local contact. It is not the structure of quantum theory per se that requires this, but one can impose it, and in fact all postulated interactions (from which this unitary operator is built up) are local point interactions.

The Schroedinger equation being a first-order partial differential equation (over Hilbert space), this evolution is as deterministic as the flow in classical phase space: if we know the state exactly at one spacelike slice, we know it everywhere. As such, the dynamics of quantum theory (in this viewpoint: the Schroedinger equation is ALWAYS valid) is totally deterministic.

Where does the randomness come from then ? In the standard von Neumann view, it comes from the collapse, when a "measurement" (non-physical interaction) is performed. An explicitly stochastic prescription for the change of the state is given. But in MWI, there is no collapse. Nevertheless, there is something equivalent of course, namely the RANDOMNESS IN THE ASSOCIATION OF THE STATE AND THE SUBJECTIVE EXPERIENCE. You have to pick ONE observer state to be aware of, and the randomness is entirely related to that choice. This can sound like a trick: instead of saying that *nature* is random, you say that your *perception of nature* is random. What's the difference ? The difference is that you can save locality. Indeed, the explicit non-locality in the projection postulate (after all, you project the entire state at the moment of your (local) measurement, while that state describes subsystems all over the place, and hence your local, non-physical measurement changes the physical state of all subsystems of the universe) is only AN APPARENT EFFECT, DUE TO YOUR - ERRONEOUS - EXTRAPOLATION BACK IN TIME OF THE DEFINITENESS OF A REMOTE MEASUREMENT RESULT. It is this error (Alice - when learning about Bob's result - extrapolating back in time that Bob had a definite result - while in fact he was in a superposition) which is entirely responsible for the riddle of the violation of the Bell relationships, while retaining the respect of signal locality.

As such, when we stick to strictly unitary physical dynamics, we regain determinism and locality. The randomness is only due to *our window on the state of nature*, and the non-locality to our erroneous extrapolation that observed results existed back in time.
It is this, together with the fact that I do not have to assume a non-physical measurement process, which makes me favor the MWI view.

Now, I place my usual caveat: MWI is of course just ONE view of QM--one need not adhere to it. But I think it is interesting to know that there IS a view of QM which avoids the problems discussed here (while of course introducing another one: the total weirdness of the concept!).
 
  • #29
reilly said:
I haven't a clue about what you are saying. Perhaps it's my lack of knowledge about Dr. Smolin's ideas.

Likely so; Google Scholar will get you to Smolin's articles on background(s).

reilly said:
And, by the way, how can a position be anything but local? I'd be very intrigued to know what a nonlocal position is.

Positions, not position.
If QM were "local" it would have a classical, direct solution to entanglement. It does not; it has non-local superposition, where two things remain connected, via some non-local means, while in two spacelike-separated positions.
Does QED of photon A to B do any better at resolving entanglement "locally"?

RB
 
  • #30
vanesch said:
Hello :cool:! Here I am again with my pet interpretation of QM, which is of course MWI :rolleyes:

In your opinion, does MWI offer any explanation for why we observe peculiarities/features on a large scale? An example of what I mean is that there are 9 planets in our solar system, but this can clearly not be derived from maths or laws alone without observation. Can we say it is just a feature of our universe?
 
  • #31
jackle said:
In your opinion, does MWI offer any explanation for why we observe peculiarities/features on a large scale? An example of what I mean is that there are 9 planets in our solar system, but this can clearly not be derived from maths or laws alone without observation. Can we say it is just a feature of our universe?

Well, two caveats of course. I'm only offering MWI as an interpretation of unitary QM (where I think it is vastly superior to unitary QM + projection a la Copenhagen). All cosmological stuff is going to involve gravity, and as is known, unitary QM has some serious difficulties with gravity. It has FEWER difficulties with it than any projection postulate (which is in violent disagreement even with SR), but nevertheless. So I don't know if it is meaningful to seriously discuss a view on unitary QM of phenomena where gravity plays a role; after all, this may alter entirely the structure of QM, and might do away with unitarity altogether (though most people seem to stick to unitarity even in this case, but without much success).
Of course MWI drops dead when strict unitarity is gone.
The second caveat is that MWI has no more predictive power than standard QM; in fact it IS standard QM!

But the question you ask can be answered in almost the same way as in classical physics: we see 9 planets because the initial conditions were such that 9 planets were going to arise. Of course, with a slight change: we now probably have an initial condition (a quantum state) in which a term with 9 planets could arise, but also other terms. Nevertheless, the 9-planet term must have had a relatively high Hilbert norm over the others, so that when we were to "pick our state", we picked this one, with 9 planets. It need not be the highest Hilbert norm of course, just not a totally ridiculously small one (or we are just "lucky"...).
Because of decoherence, this relatively high Hilbert norm is more or less conserved, and a more or less "classical" evolution happened in this term. No significant mixing occurred with other terms. So this behaved very much in the same way as if we had a classical evolution from a classical statistical distribution, once the most important quantum effects that could have affected the formation of the number of planets had decohered. That's also why the history of our solar system "makes sense" when analysed from a classical perspective.

But maybe, loosely speaking, there's a copy of you, posting on a copy of PF in another term, wondering right now whether it can be explained why there are 15 planets and 2 suns! Because of decoherence, however, you'll never hear from him :-)
 
  • #32
RandallB said:
If QM were "local" it would have a classical, direct solution to entanglement. It does not; it has non-local superposition, where two things remain connected, via some non-local means, while in two spacelike-separated positions.
But if you use some type of many-worlds approach, QM can be local in principle. If Alice and Bob make a measurement on a pair of entangled particles at different locations, then you can imagine Alice splitting into multiple copies when she makes her measurement, and Bob splitting into multiple copies when he makes his, with the universe not having to decide which copy of Alice is mapped to which copy of Bob until there has been time for a signal moving at the speed of light to cross between them.

For example, suppose Alice and Bob each measure the spin of our particle on one of three separate axes, a, b, or c, and Bell's inequality predicts that when they pick different axes, they must get opposite spins at least 1/3 of the time (see my first post on this thread for the logic behind this prediction), but QM predicts they'll get opposite spins only 1/4 of the time. To make things easier, assume they are both using some deterministic pseudorandom process to decide which axis to measure on each trial, so on one particular trial, all the copies of Bob will measure axis b and all the copies of Alice will measure axis a. Then when Bob makes his measurement, say he splits into multiple copies, 1/2 of which measure spin up (b+) and 1/2 of which measure spin down (b-):

Bob 1: b+
Bob 2: b+
Bob 3: b+
Bob 4: b+
Bob 5: b-
Bob 6: b-
Bob 7: b-
Bob 8: b-

And the same thing happens to Alice:

Alice 1: a+
Alice 2: a+
Alice 3: a+
Alice 4: a+
Alice 5: a-
Alice 6: a-
Alice 7: a-
Alice 8: a-

Notice that each split in the same way, just based on the local probabilities of measuring spin-up vs. spin-down. The spooky correlations of entanglement only appear when you decide which copy of Bob is mapped to which copy of Alice, and it's easy to do this mapping in a way that ensures that there's only a 1/4 chance they will get opposite spins:

Bob 1 <-> Alice 1 (same)
Bob 2 <-> Alice 2 (same)
Bob 3 <-> Alice 3 (same)
Bob 4 <-> Alice 5 (opposite)
Bob 5 <-> Alice 4 (opposite)
Bob 6 <-> Alice 6 (same)
Bob 7 <-> Alice 7 (same)
Bob 8 <-> Alice 8 (same)

On the other hand, suppose they had both measured axis a, so QM predicts there's a 100% chance they'll get opposite spins. Again, you can assume each initially splits based purely on local probabilities:

Bob 1: a+
Bob 2: a+
Bob 3: a+
Bob 4: a+
Bob 5: a-
Bob 6: a-
Bob 7: a-
Bob 8: a-

and

Alice 1: a+
Alice 2: a+
Alice 3: a+
Alice 4: a+
Alice 5: a-
Alice 6: a-
Alice 7: a-
Alice 8: a-

Only this time, once a signal has had time to cross between them, the mapping would work differently:

Bob 1 <-> Alice 5 (opposite)
Bob 2 <-> Alice 6 (opposite)
Bob 3 <-> Alice 7 (opposite)
Bob 4 <-> Alice 8 (opposite)
Bob 5 <-> Alice 1 (opposite)
Bob 6 <-> Alice 2 (opposite)
Bob 7 <-> Alice 3 (opposite)
Bob 8 <-> Alice 4 (opposite)

So, this is basically how many-worlds could explain the results of the EPR experiment in a purely local way. You could simulate this on two ordinary classical computers, one simulating the location of Bob and the other simulating the location of Alice, with the computers not allowed to communicate until after each one's measurement had been made--by programming in the right mapping rule, you could ensure that a randomly-selected Alice copy will see the same probabilities that "her Bob" (the one she's mapped to) gets a given result as are predicted by quantum mechanics, even though the computers are totally classical ones.
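That two-computer simulation is simple enough to write down. A toy sketch (directly encoding the copy tables above; the pairing rules are illustrative choices, not derived from QM):

```python
def local_split():
    # each site splits into 8 copies purely locally: half spin-up, half spin-down
    return [+1, +1, +1, +1, -1, -1, -1, -1]

def pairing(same_axis):
    # chosen only after light-speed contact: Bob copy i pairs with Alice copy pairing[i]
    if same_axis:
        return [4, 5, 6, 7, 0, 1, 2, 3]   # every pair opposite
    return [0, 1, 2, 4, 3, 5, 6, 7]       # exactly 2 of the 8 pairs opposite

def fraction_opposite(same_axis):
    bob, alice = local_split(), local_split()
    return sum(b != alice[j] for b, j in zip(bob, pairing(same_axis))) / 8

print(fraction_opposite(True))   # 1.0  -- same axis: always opposite, as QM demands
print(fraction_opposite(False))  # 0.25 -- different axes: below the local-realist 1/3
```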

Here are some papers arguing that the many-worlds interpretation is local in basically the same way, although I don't have the expertise to judge if their arguments are convincing (there has always been a problem explaining how to get any notion of probabilities from the universal wavefunction postulated by the MWI):

http://www.arxiv.org/abs/quant-ph/0003146
http://www.arxiv.org/abs/quant-ph/0103079
http://www.arxiv.org/abs/quant-ph/0204024
 
  • #33
RandallB said:
Likely so, Google Scholar will get you to Smolin articles on background(s). (Thanks)
Positions, not position.

RA: So?



If QM were "local" it would have a classical direct solution to entanglement - it does not, it has non-local superposition where two things remain connected, via some non-local means, while in two spacelike separated positions.

RA: Why? What is the non-local means? What is a classical direct solution?

Entanglement is just another way of talking about conditional probability. There's plenty of entanglement in classical physics, due to conservation laws, but, of course, the probability rules for classical and quantum physics are a bit different. (Control engineering, among others, deals with such issues, classical state vectors and all.)

QED of photon A to B does no better at resolving entanglement "locally", does it?
RB

RA: Who's talking about resolving? I've provided you with a standard formal definition of locality in QED. What's the problem?

Regards,
Reilly Atkinson
 
  • #34
JesseM said:
But if you use some type of many-worlds approach, QM can be local in principle. If Alice and Bob make a measurement on a pair of entangled particles at different locations, then you can imagine Alice splitting into multiple copies when she makes her measurement, and Bob splitting into multiple copies when he makes his, with the universe not having to decide which copy of Alice is mapped to which copy of Bob until there has been time for a signal moving at the speed of light to cross between them.

:approve: :approve: :approve:

Right on!
 
  • #35
reilly said:
RA: Who's talking about resolving? I've provided you with a standard formal definition of locality in QED. What's the problem?
No you didn’t – you said
By the way, there are sound theoretical reasons why QM and QFT are local—
To people who were looking at causality and QM non-locality, effectively telling them they were wrong to think that way. And they are not: a non-local QM is an appropriate way for them to look at QM.
I was just looking for you to clarify whatever you were saying and put it in proper context.
The problem is that, instead of addressing their issues, you were just promoting your opinions. At least you could say that you, or some, think QM can be local in principle, because of whatever reason.

And just for the record IMO (in my opinion) MWI is one of the more ridiculous ideas getting ink to come out of QM.
 
  • #36
RandallB said:
And just for the record IMO (in my opinion) MWI is one of the more ridiculous ideas getting ink to come out of QM.

The usual, well-argued rebuttal against MWI :biggrin:
 
  • #37
vanesch said:
The usual, well-argued rebuttal against MWI :biggrin:
As compared to the agreements & proofs provided in favor of it, you're correct.
 
  • #38
RandallB said:
As compared to the agreements & proofs provided in favor of it, you're correct.

It is hard to tell whether you are being ironic, but here are the main arguments in favor of MWI (as an interpretation of QM):

1) respect of unitarity (as postulated in QM) and the possibility to represent the state of the universe as a ray in Hilbert space (as postulated in QM)

2) no non-physical (because non-unitary) phenomenon in the measurement apparatus

3) the theory DOES describe a reality (although it is different from the one we perceive directly) - so no positivism which denies reality altogether

4) we can save locality (any collapse is non-local) so that we can keep SR

The downside is that we have to postulate a non-trivial relationship between reality and subjective experience, but if this is done, no contradiction is derived between the postulated subjective experience and actually perceived subjective experience as we know it.

I consider the above as a rather solid ARGUMENTATION when compared to:
"one of the more ridiculous ideas"
or:
"a meaningless string of words"
or:
"naah, can't be true".

Mind you, I'm not claiming that MWI is necessarily true. I'm claiming that, when you consider the many formal advantages of this viewpoint (see the arguments above), other viewpoints (as long as they don't touch upon the formalism of QM; in other words, as long as they are an INTERPRETATION of QM, and not simply a different theory) pale against it: the Copenhagen view is inconsistent with the basic axioms of QM (as Schrödinger found out rather early with his cat) and non-local, and the probabilistic view (although I can have some sympathy for it) denies any link with reality.

I note that the only "arguments" against the viewpoint are simply emotional statements and do not include any solid reasoning.

So the only way I can consider a non-MWI viewpoint is by presenting another theory, with a clearer interpretation, which will explain all the QM successes.
There are two "candidates": local realist theories starting from classical relativistic field theories (we already KNOW that they will not be equivalent to QM, thanks to Bell) and Bohmian mechanics. Bohmian mechanics is not compatible in its workings with the principle of relativity (it is non-local) and also has some interpretational problems of its own.
 
  • #39
vanesch --

A knowledge-based QM interpretation, as with any probabilistic theory, necessarily obeys unitarity. Further, with QM so interpreted, all, repeat, all theories involving probability rest on the same basis -- there's always a probability (wave)(function) collapse, by necessity in fact, and it occurs in people's brains -- that's quite close to certainty; in fact I'm fairly certain that brain scans can actually show such a collapse. QM describes at least the reality of the experimentalist, and, when the fabric of experimental results is woven, it, metaphorically, looks strangely like normal reality -- the reality of common experience.

As usually formulated, QM is local, point interactions and all that.

I've always thought of MWI as odd. Seems to me that it's just another attempt to subvert probability. If it is such a great idea, why has it not been in place since at least Fermat's notions about games of chance? Why is this idea absent from virtually all books on probability and statistics? (I would say all, but there are such books that I've never read.) Why not a universe in which I broke the bank at Monte Carlo, one in which I found an extra million in my bank account, why not a universe in which the Boston Red Sox won 25 World Series in a row? What good does such speculation bring?

The kindest thing I can say about MWI is that it is rather peculiar.

The knowledge approach works, and I have yet to hear of any practical arguments against it.

I will admit that the stupendous ego of David Deutsch, which permeates his book, The Fabric of Reality, turned me away, in part, from MWI. It's one of those books that says, "Trust me, I'm right." He's a great spinner, probably could do well as a political consultant, with such catchy ideas as shadow photons. Small wonder that his views remain relatively unknown.

With all due respect, I have yet to see anything about MWI that solves any problem of QM other than with fanciful suppositions of universes we can never know. Deus ex Machina

Regards,
Reilly Atkinson
 
  • #40
gstafleu said:
In case of the cat the design incorporates a known indeterminate element: the decay of an atom. We do not have a model that will predict the exact time when a given atom will decay, but we do have a statistical model, and that model is incorporated into the cat experiment. That model predicts that, if we open the box at t=Lambda, we have a 50% chance of a dead cat. Before we open the box we of course don't know the state of the cat, but that is what was designed into the experiment. How does this show any indeterminacy beyond that which was designed into the experiment?

The issue here is that an observer inside the box has very clear knowledge about the state of the cat, while an observer outside the box has only information about the state of the cat with 50% uncertainty.

This means that the same physical system (the box with the cat) provides the inside observer with scientific information, coming from constant observation, that is not available to the outside observer. This difference in status produces the dramatic difference in the uncertainty of the information about the status of the system for each observer.

If you put the solar system in a box, with an internal observer constantly observing the planets and an external observer who only had a snapshot observation before being locked out, they share the same certainty regarding the information about the status of the solar system in the box. The internal observer can constantly observe the orbits of the planets and guide a space vehicle from Earth to another planet based on the scientific information gathered by his continuous observations, with the same certainty with which the external observer can guide another space vehicle from Earth to another planet based on the scientific information gathered from the system in his snapshot. Both observers, the one with real-time observations from inside and the other with a snapshot observation taken before being locked outside the system, share the same certainty about the status of the system during the passage of time.

In the solar-system-in-a-box example, the position of the observer is irrelevant to the degree of certainty of the scientific information about the status of the system.

gstafleu said:
With the double slit experiment we seem to have a similar situation. The indeterminacy designed into the experiment is that we don't know exactly which slit a given electron will go through. But we do know that 50% of the electrons will go through one, 50% through the other slit. The actual probability distribution turns out to be a wave function, which is perhaps a bit unusual, but no more than that. As a result the distribution of electrons on the detector screen displays an interference pattern. So far I fail to see any indeterminacy beyond what was designed into the experiment.

We can then change the experiment by removing the indeterminacy: we add some sort of apparatus that tells us through which slit each electron passed. This apparently changes the probability distribution from a wave to a classic particle yes-or-no distribution. OK, so that is maybe strange, but I still see no indeterminacy here.

This is a situation similar to the cat-in-the-box situation, where the information known to the observer affected the certainty about the status of the cat.

The information that the observer has in hand affects the distribution of electrons on the screen.

The status of each observer, regarding the "position" of observation, affects the certainty of the information that the system provides about its own status.

(edit) In an analogy, the opposite thing happens in the observation of the solar system. The certainty of the scientific information about the status of the solar system is the same in both cases: when the observer makes observations from outside, using information from the starting and end points of the planets' movements, or when the observer makes observations using position traps, gathering information from positions in between the starting and end points of the planets' movements. Both observers share the same certainty about the status of the solar system. In the experiment with the electrons, the certainty of the information about the status of the system is different for each observer, when one knows the in-between positions of the electrons and the other does not.

In both experiments, the indeterminacy is about the different degrees of certainty of the information about the system, which depend on the "position" of the observer. You come to the right conclusion that all local observers share the same uncertainty, and that all non-local observers share the same uncertainty. There is no indeterminacy within the same class of observers. The indeterminacy is about having two classes of observers, the local and the non-local. Within each class, the certainty of the scientific information about the status of the system is consistent. But the experiment is inconsistent when we compare that certainty between the two classes of observers.

This inconsistency between classes of observers does NOT happen in systems of classical physics (like the solar system). This is not a measurement problem. It is an intrinsic paradox. The same system, in quantum physics, provides different degrees of certainty of scientific information to the observer, depending on the "position" of the observer.

So, I think that the answer to your question "Is quantum indeterminacy a result of experiment design?" is NO. The indeterminacy is an intrinsic behaviour of the systems of quantum physics.

Leandros
 
Last edited:
  • #41
leandros_p said:
The issue here is that an observer inside the box has very clear knowledge about the state of the cat, while an observer outside the box has only information about the state of the cat with 50% uncertainty.

Isn't that equally the case if we replace the cat+atom with a coin flipping machine? The observer inside the box knows exactly which side came up, the one outside lives in uncertainty. Until the two communicate, that is, or the outsider looks in, at which point both will agree. Which is the same situation as with the cat+atom.

Now for your solar system example, doesn't the same apply? Assuming the outside observer (1) cannot peek and (2) does not have pre-knowledge of the solar system (he just zapped over from a galaxy far far away and has never seen the solar system before), then the insider knows and the outsider doesn't.

The solar system example has the drawback that we have a pretty good deterministic model of the planetary positions, while we do not have such a model of atom decay. As a result, if both observers know the initial state and then close the box at time T, they can both come up with agreeing descriptions of the state at T+t. You cannot do that with the cat+atom (catom?), but that is because we don't have a deterministic catom model, which we knew when we started.

In other words, if you design an experiment such that all its components have well established deterministic models, then observers both inside and outside any surrounding boxes will have the same knowledge of the system's status. If you throw in a component for which you "only" have a stochastic model, well, then those on the inside will know more.
 
  • #42
gstafleu said:
Isn't that equally the case if we replace the cat+atom with a coin flipping machine? ...Which is the same situation as with the cat+atom.

...You cannot do that with the cat+atom (catom?), but that is because we don't have a deterministic catom model, which we knew when we started.


If you understand that the cat+atom model is a non-deterministic model by itself, then the experiment is not "producing" the result. It just provides non-deterministic information to the observer.

The "quantum inteterminacy" is not a product from the desigh of the experiment.

The "quantum inteterminacy" is made known, as a scientific information, by the design of the experiment. Each experiment is designed in order to acquire information. This expreriment provides the "information" of "inteterminacy", but it does not produce this "information".

Leandros
 
  • #43
reilly said:
vanesch --
A knowledge-based QM interpretation, as with any probabilistic theory, necessarily obeys unitarity.

I don't understand what this could mean.

As usually formulated, QM is local, point interactions and all that.

The *unitary* part of QM is local, yes.

I wonder what it could possibly mean for something to interact "locally" if it is just a knowledge description. What does it mean that my "knowledge of electron A" interacts locally with "my knowledge of proton B"?
Assuming that this is not the objective state of the electron A or the proton B, I don't see what can be "local" to it, and why my "knowledge of electron A" cannot have any interaction with my knowledge of muon C, which is - or rather, I know that it is - 7 light-years from here.
So how do you implement something like Lorentz invariance for knowledge?

I've always thought of MWI as odd. Seems to me that it's just another attempt to subvert probability. If it is such a great idea, why has it not been in place since at least Fermat's notions about games of chance?

The reason for MWI is of course NOT to circumvent probability or something. In fact (although many MWI proponents trick themselves, IMO, into the belief that they can do without it - I'm convinced that they are wrong, and that probabilistic concepts are needed there also), the only reason for MWI is to be able to take the wavefunction as an objective description of reality. You run into a lot of problems and paradoxes if you take the wavefunction as describing objective reality while accepting the probabilistic projection, but the problem is not the probabilistic aspect of it; the problem is twofold:
1) the fact that all elementary interactions between quantum systems are described as strictly unitary operations on the quantum state (their generator being a Hermitian operator, which we call the Hamiltonian) - so there is no known mechanism to implement a non-unitary evolution, which is what a projection is (spelled out in symbols just below)
2) the fact that this projection cannot be formulated in a Lorentz invariant way
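To spell out the first point in symbols (my shorthand, in the same ASCII notation as above, with hbar set to 1): a Hamiltonian H = H† always generates a unitary evolution,

U(t) = exp(-iHt), with U(t)† U(t) = 1,

whereas a projection |psi> -> P|psi> / ||P|psi>|| fails to be unitary (and is even nonlinear, because of the renormalisation), so no choice of H can produce it.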

The problem is NOT the probabilistic aspect.

There is a difference between the relationship of classical physics to probability and that of the quantum state to probability, and it is the following. When we use probability in classical physics, the probability distribution itself plays no physical role.
When a classical system evolves from A to A', and from B to B', then, if we assign probability p1 to A and p2 to B, we'll have an outcome A' with probability p1 and an outcome B' with probability p2. If we learn that the system was finally in B', then we can "update backward" our probabilities, say that the system, after all, was in state B, and that A was just part of our ignorance. As you state, there's no reason to introduce a "parallel world" in which A was there after all, but we happen to be in a universe where B happened.
The reason why this is superfluous is that the numbers p1 and p2 never enter into any physical consideration. They are just carried along, with the classical physics, WITHOUT INFLUENCING THE DYNAMICS.
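A tiny toy sketch of that point (my own example, not reilly's): the weights ride along on top of the deterministic dynamics and never feed back into it, and learning the outcome just rewrites them.

Code:
def evolve(state):
    # deterministic classical dynamics: A -> A', B -> B'
    return {"A": "A'", "B": "B'"}[state]

p = {"A": 0.3, "B": 0.7}                        # our ignorance of the initial state
p_final = {evolve(s): w for s, w in p.items()}  # weights carried along unchanged
print(p_final)                                  # {"A'": 0.3, "B'": 0.7}

# Observing B' lets us update backward and scrap A with no harm:
p_updated = {"B'": 1.0}   # A' (and hence A) was only ever our lack of knowledge

The dynamics never reads the numbers 0.3 and 0.7; they are pure bookkeeping for our ignorance, which is exactly why the "parallel possibility" A can be scrapped without physical consequence.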

But in quantum theory, this is not true. If the state is a|u> + b|v>, and if we now evolve this into the state a|u'> + b|v'>, and we work now in the basis |x> = |u'> + |v'> and |y> = |u'> - |v'> (normalisation factors omitted), and measure x/y, then the probability of having x or y will depend on the numbers a and b. It is not that the coefficients a and b are somehow a measure of our lack of knowledge which gets updated after the measurement. Because if this were true, there would be no difference between a STATISTICAL MIXTURE of states |u> and |v> and the state a|u> + b|v>.
To illustrate that this is not the case, consider a = b. A statistical mixture of 50% |u> and 50% |v> will yield an outcome which gives us 50% |x> and 50% |y>. Nevertheless, the state |u> + |v> (which has identical statistical value, right?) will result in 100% state |x> and 0% state |y>.
So the values of a and b CANNOT be interpreted as describing our lack of knowledge which gets updated during the measurement. It would be hard to imagine that NOT KNOWING something (having non-zero values for a and b) would make it impossible to obtain the outcome |y>, while KNOWING something (like knowing that a = 1 and b = 0) would suddenly make the states |x> and |y> appear 50% each.
So those numbers a and b HAVE PHYSICAL MEANING. They influence what will happen later, and this cannot be seen in a purely "I didn't know, and now I learned" fashion, as probability CAN be seen in a classical context. It is the phenomenon of quantum interference which makes the "knowledge" view of the wavefunction, IMO, untenable.
The state |u> + |v> has simply DIFFERENT PHYSICAL CONSEQUENCES than the state |u>. One cannot say that |u> + |v> expresses our lack of knowledge about whether it is |u> or |v>, while the state |u> expresses our certainty of having the system in state u for sure, because if that were so, then it is strange that a lack of knowledge leads to more certainty (namely, that we will NOT have the result y) than when we know more.
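The contrast is a two-line Born-rule computation. Here is a minimal numpy sketch (the explicit 1/sqrt(2) normalisation factors are mine; the text above drops them, which is harmless for the argument):

Code:
import numpy as np

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # |u'> and |v'> as basis vectors
x = (u + v) / np.sqrt(2)                           # |x> = (|u'> + |v'>)/sqrt(2)
y = (u - v) / np.sqrt(2)                           # |y> = (|u'> - |v'>)/sqrt(2)

def born(state, outcome):
    # Born rule: P(outcome | state) = |<outcome|state>|^2
    return abs(np.dot(outcome, state)) ** 2

# Statistical mixture: half the runs are prepared in |u'>, half in |v'>.
print(0.5 * born(u, x) + 0.5 * born(v, x))   # 0.5
print(0.5 * born(u, y) + 0.5 * born(v, y))   # 0.5

# Coherent superposition with a = b = 1/sqrt(2): the amplitudes interfere.
psi = (u + v) / np.sqrt(2)
print(born(psi, x), born(psi, y))            # 1.0 0.0 -- outcome y never occurs

The mixture and the superposition assign the same 50/50 weights to u' and v', yet they are physically different preparations: only the superposition carries the relative phase that extinguishes the y outcome.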

Why is this idea absent from virtually all books on probability and statistics? (I would say all, but there are such books that I've never read.) Why not a universe in which I broke the bank at Monte Carlo, one in which I found an extra million in my bank account, why not a universe in which the Boston Red Sox won 25 World Series in a row? What good does such speculation bring?

It doesn't bring any good in a classical setting, because of the fact that this "parallel possibility" has no influence whatsoever on the physical dynamics. You can say, in this context, that the "parallel universe" where the initial conditions were such that the Boston Red Sox would win 25 World Series in a row has, FROM THE BEGINNING, never been there, and that we just entertained its possibility because we didn't know all the details. When we did find out that this didn't happen, we could simply scrap this parallel universe from our list with no harm, BECAUSE IT HAD NEVER BEEN PART OF THE ONTOLOGICAL STATE OF THE UNIVERSE in the first place (only, we didn't know that).

But when we know that the state is |u>, and we find |x>, we cannot go back and somehow "scrap" a state from our list. We cannot say that the state was actually |u> + |v> back then. Because we MEASURED u back then, and we found u, and not v. So it is not "an imaginary parallel universe which turned out not to be the right one".

The knowledge approach works, and I have yet to hear of any practical arguments against it.

The most important argument against it, IMO, is that there is no description of reality in this view. It is hard to work with things for which you constantly have to remind yourself that "they aren't really there", and nevertheless develop a physical intuition for them.

I will admit that the stupendous ego of David Deutsch, which permeates his book, The Fabric of Reality, turned me away, in part, from MWI. It's one of those books that says, "Trust me, I'm right." He's a great spinner, probably could do well as a political consultant, with such catchy ideas as shadow photons. Small wonder that his views remain relatively unknown.

Didn't read it. My only attempt was to write a paper showing that his proof was flawed, but (as has been discussed here), it was not accepted.

With all due respect, I have yet to see anything about MWI that solves any problem of QM other than with fanciful suppositions of universes we can never know. Deus ex Machina

You are probably right that it doesn't have many practical implications. In my opinion, the most important function of MWI is to rehabilitate QM as a description of reality, and to be able to put all these positivist considerations aside. As such, it removes all ambiguity about WHEN one should apply the projection postulate, and removes the need for the distinction between a physical interaction and a non-physical measurement. In most situations this distinction is so clear that it doesn't need any specific treatment, but in situations such as delayed-choice quantum erasers or EPR setups, one can wonder about when one should apply the projection postulate. Well, MWI resolves that situation unambiguously.

I would also like to point out that "the universes we can never know" are NOT introduced or postulated. They are simply not ELIMINATED by a projection postulate.
 
