# Is quantum indeterminacy a result of experiment design?


Stemming from a debate about free will, I, a total lay person, have been doing some reading on quantum indeterminacy (QI). I have come up with some questions; I hope this forum is an appropriate place for them.

Basically the question is this: where does the idea of QI come from? The two scenarios that seem to pop up most often are Schrödinger's cat and the two-slit electron experiment. But in both cases it seems to me that the indeterminacy that shows up is exactly what was designed into the experiment.

In the case of the cat, the design incorporates a known indeterminate element: the decay of an atom. We do not have a model that will predict the exact time when a given atom will decay, but we do have a statistical model, and that model is incorporated into the cat experiment. That model predicts that, if we open the box at t=Lambda, we have a 50% chance of a dead cat. Before we open the box we of course don't know the state of the cat, but that is what was designed into the experiment. How does this show any indeterminacy beyond that which was designed into the experiment?
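
That statistical model is just exponential decay. A minimal sketch in Python (the unit half-life is an arbitrary illustrative value, not taken from the thread):

```python
import math

def survival_probability(t, half_life):
    """Probability that a single atom has NOT decayed after time t,
    for exponential decay with the given half-life."""
    decay_constant = math.log(2) / half_life
    return math.exp(-decay_constant * t)

# Opening the box at exactly one half-life: 50% chance the atom
# is intact, matching the 50% chance of a live cat above.
p = survival_probability(t=1.0, half_life=1.0)
```

The model tells you the odds in advance but says nothing about the individual decay, which is the point being discussed.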

With the double slit experiment we seem to have a similar situation. The indeterminacy designed into the experiment is that we don't know exactly which slit a given electron will go through. But we do know that 50% of the electrons will go through one, 50% through the other slit. The actual probability distribution turns out to be a wave function, which is perhaps a bit unusual, but no more than that. As a result the distribution of electrons on the detector screen displays an interference pattern. So far I fail to see any indeterminacy beyond what was designed into the experiment.
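
The wave-like distribution described above can be sketched by summing the two complex amplitudes, one per slit. A toy far-field calculation (the slit separation, wavelength, and screen distance are arbitrary illustrative values):

```python
import cmath
import math

def intensity(x, d=1e-5, wavelength=1e-9, L=1.0):
    """Relative detection probability at screen position x for two slits
    separated by d, at screen distance L (small-angle approximation)."""
    delta = d * x / L                       # path-length difference
    phase = 2 * math.pi * delta / wavelength
    amp = 1 + cmath.exp(1j * phase)         # one amplitude per slit
    return abs(amp) ** 2

# Central maximum: paths in phase, amplitudes add (intensity 4).
# Dark fringes: paths half a wavelength apart, amplitudes cancel (0).
center = intensity(0.0)
dark = intensity(5e-5)  # first dark fringe for these parameter values
```

The dark fringes, where no electrons land, are what distinguish the wave distribution from a classical "50% per slit" particle distribution.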

We can then change the experiment by removing the indeterminacy: we add some sort of apparatus that tells us through which slit each electron passed. This apparently changes the probability distribution from a wave to a classic particle yes-or-no distribution. OK, so that is maybe strange, but I still see no indeterminacy here.

In both cases the outcome of the experiment is completely predictable. In the first case we have a probabilistic experiment (we don't know which slit each electron goes through), hence we get a probabilistic outcome. Indeterminacy would only pop up if the interference pattern was not consistent. When we change the experiment such that we remove the indeterminacy, we get a result that is equally predictable and compatible with the new experimental set-up.

So where does this term "quantum indeterminacy" come from? I must be missing something here.


JesseM
Your question is a little confusing--"indeterminacy" usually just means the fact that QM is probabilistic, ie non-deterministic, but you seem to accept this in your question. By "indeterminacy" are you talking about something more like the idea that quantum particles and systems don't have well-defined properties when they aren't being measured, such as the idea that the cat in the Schroedinger's cat thought-experiment is not in a definite state until the box is opened, or the idea that the particles in the double-slit experiment don't go through one slit or the other when their path is not measured? If that's what you mean, I don't think "indeterminacy" would really be the right word for it, although I'm not sure what term should be used, maybe "no local hidden variables".

In any case, the difficulty with imagining particles have definite properties even when not measured is probably best illustrated by the weird correlations between measurements of different members of entangled pairs of particles, what Einstein called "spooky action at a distance". Here's something I wrote about this on another thread:
Photons have two types of polarization, linear polarization and circular polarization, and they are "noncommuting" variables, which means you can't measure both at once, just like with position and momentum. If you measure circular polarization, only two outcomes are possible--you'll either find the photon has left-handed polarization, or right-handed polarization. With linear polarization, you can only measure it along a particular spatial axis, depending on how you orient the polarization filter--once you make that choice, then again there are only two outcomes possible, the photon either makes it through the filter or it doesn't. And if you have two entangled photons, then if you make the same measurement on both (either measuring the circular polarization of each, or measuring the linear polarization of both in the same direction) you'll always get opposite answers--if one photon makes it through a particular polarization filter, the other photon will never make it through an identical filter.

To understand the EPR experiments, you can actually ignore circular polarization and just concentrate on linear polarization measured on different axes. If you do not orient both filters at exactly the same angle, then you are no longer guaranteed that if one photon makes it through its filter, the other photon will not make it through--the probability that you will get opposite answers in both cases depends on the angle between the two filters. Suppose you choose to set things up so that each experimenter always chooses between three possible angles to measure his photon. Again, whenever both experimenters happen to choose the same axis, they will both get opposite answers. But by itself, there is nothing particularly "spooky" about this correlation--as an analogy, suppose I make pairs of scratch lotto cards with three scratchable boxes on each one, and you find that whenever one person scratches a given box on his card and finds a cherry behind it, then if the other person scratched the same box on their own card, they'd always find a lemon, and vice versa. You probably wouldn't conclude that the two cards were linked by a faster-than-light signal; you'd just conclude that I manufactured the pairs in such a way that each had opposite pictures behind each one of its three boxes, so if one had cherry-cherry-lemon behind the boxes, the other must have lemon-lemon-cherry behind its own.

In the same way, you might conclude from polarization experiment that each photon has a preexisting polarization on each of the three axes that can be measured in the experiment, and that in the case of entangled photons, both are always created with opposite polarization on their three axes. For example, if you label the three filter orientations A, B, and C, then you could imagine that if one photon has the preexisting state A+,B-,C+ (+ meaning it is polarized in such a way that the photon will make it through that filter, - meaning it won't make it through), then the other photon must have the opposite preexisting state A-,B+,C-.

The problem is that if this were true, it would force you to the conclusion that on those trials where the two experimenters picked different axes to measure, both photons should behave in opposite ways in at least 1/3 of the trials. For example, if we imagine photon #1 is in state A+,B-,C+ before being measured and photon #2 is in state A-,B+,C-, then we can look at each possible way that the two experimenters can randomly choose different axes, and what the results would be:

Experimenter #1 picks A, Experimenter #2 picks B: same result (photon #1 makes it through, photon #2 makes it through)

Experimenter #1 picks A, Experimenter #2 picks C: opposite results (photon #1 makes it through, photon #2 is blocked)

Experimenter #1 picks B, Experimenter #2 picks A: same result (photon #1 is blocked, photon #2 is blocked)

Experimenter #1 picks B, Experimenter #2 picks C: same result (photon #1 is blocked, photon #2 is blocked)

Experimenter #1 picks C, Experimenter #2 picks A: opposite results (photon #1 makes it through, photon #2 is blocked)

Experimenter #1 picks C, Experimenter #2 picks B: same result (photon #1 makes it through, photon #2 makes it through)

In this case, you can see that in 1/3 of trials where they pick different filters, they should get opposite results. You'd get the same answer if you assumed any other preexisting state where there are two filters of one type and one of the other, like A+,B+,C-/A-,B-,C+ or A+,B-,C-/A-,B+,C+. On the other hand, if you assume a state where each photon will behave the same way in response to all the filters, like A+,B+,C+/A-,B-,C-, then of course even if they measure along different axes the two experimenters are guaranteed to get opposite answers with probability 1. So if you imagine that when multiple pairs of photons are created, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C-/A-,B+,C+ while other pairs are created in homogeneous preexisting states like A+,B+,C+/A-,B-,C-, then the probability of getting opposite answers when you measure photons on different axes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get opposite answers in less than 1/3 of trials where you measure on different axes, provided you assume that each photon has such a preexisting state.
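
That counting argument can be checked by brute force, enumerating every possible preexisting state for photon #1 (its entangled partner carrying the opposite answers):

```python
from itertools import product

def fraction_opposite(state1, state2):
    """Fraction of the six different-axis filter choices that give opposite
    results, given preexisting pass/block answers on axes A, B, C."""
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    opposite = sum(state1[i] != state2[j] for i, j in pairs)
    return opposite / len(pairs)

# Check all 8 possible assignments of pass (True) / block (False)
# to the three axes for photon #1; photon #2 gets the opposite answers.
fractions = []
for state1 in product([True, False], repeat=3):
    state2 = tuple(not v for v in state1)
    fractions.append(fraction_opposite(state1, state2))

# Mixed states like (+,-,+) give exactly 1/3; homogeneous states like
# (+,+,+) give 1. No preexisting state goes below 1/3.
```

This is only the local-realist side of the argument: any mixture of such states must give opposite answers at least 1/3 of the time when different axes are chosen.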

But the strange part of QM is that by picking the right combination of 3 axes, it is possible for the probability that the photons will behave in opposite ways when different filters are picked to be less than 1/3! And yet it will still be true that whenever you measure both along the same axis, you get opposite answers for both. So unlike in the case of the scratch lotto cards, we can't simply explain this by imagining that each photon had a predetermined answer to whether it would make it through each of the 3 possible filters. This is the origin of the notion of "spooky action at a distance".

But notice that this spookiness can't be exploited to send messages--each experimenter only gets to choose which of the three filter orientations to use, but they have no influence over whether the photon actually makes it through the filter they choose or is blocked by it, that seems to be completely random. It's only when they get together and compare their results over multiple trials that they notice this "spooky" statistical property that when they both happened to pick the same filter orientation, both photons always behaved in opposite ways, yet when they picked two different filter orientations, both photons behaved in opposite ways less than 1/3 of the time.
Technically, the prediction that the photons must behave oppositely at least 1/3 of the time when measured along different axes is given by something called "Bell's theorem", which you can read more about at http://en.wikipedia.org/wiki/Bell's_Theorem -- any system that obeys "local realism" (particles having definite preexisting states before measurement, where the state of one cannot affect the state of another faster-than-light) must obey something called the "Bell inequality", but quantum mechanics violates the Bell inequality, thus disproving local realism.
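
For a concrete violation: with polarization-entangled photons that anticorrelate perfectly at equal filter angles, the standard quantum prediction for opposite outcomes at relative angle θ is cos²θ. Taking the three filter orientations to be 0°, 60°, and 120° (a common textbook choice, assumed here rather than taken from the thread):

```python
import math
from itertools import permutations

angles = [0.0, 60.0, 120.0]  # filter orientations in degrees

def p_opposite(a, b):
    """QM probability that the two photons give opposite outcomes
    when the filters are set at angles a and b."""
    return math.cos(math.radians(a - b)) ** 2

# Same setting: opposite outcomes with certainty, as described above.
# Different settings: average over the six ordered pairs of distinct angles.
different = list(permutations(angles, 2))
avg = sum(p_opposite(a, b) for a, b in different) / len(different)
# avg is 1/4, below the local-realist bound of 1/3.
```

So the quantum value of 1/4 undercuts the 1/3 minimum that any assignment of preexisting answers would require, which is exactly the Bell-inequality violation described above.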

gstafleu said:
We can then change the experiment by removing the indeterminacy: we add some sort of apparatus that tells us through which slit each electron passed. This apparently changes the probability distribution from a wave to a classic particle yes-or-no distribution. OK, so that is maybe strange, but I still see no indeterminacy here.
You don't see the interference anymore because you don't have a coherent wave anymore (you get the whole range of frequencies and directions for the wave vector)--meaning that by measuring the position to a definite value you created uncertainty in the momentum.

That's the uncertainty principle: you can't know for sure both the position and the momentum, or the time duration of a reaction and its energy, or the spin of a particle along more than one axis simultaneously.

When you measure one of these properties you change your system and get an uncertainty in the other property.

JesseM said:
Your question is a little confusing--"indeterminacy" usually just means the fact that QM is probabilistic, ie non-deterministic, but you seem to accept this in your question.
Yes, there are lots of models that have a probabilistic component, e.g. weather predictions, so assuming that isn't so strange. This generally does not lead us to assume that it is "fundamentally" impossible to come up with a non-probabilistic model. Usually the conclusion is that we don't know enough and/or don't have enough computational resources.

As I said, it was a philosophic discussion--about free will, determinism, that kind of thing--that led me to think about this. In (some) philosophic circles it is quite usual to loosely remark that QM has introduced a fundamental randomness, or non-determinism if you will, into the universe. This may be so, but if so neither the cat nor the slits seem to indicate this.

Maybe the only part of QM that does introduce a fundamental "unknowability" is, as fargoth mentioned, Heisenberg's uncertainty principle.

Thanks for the explanation of and the pointers to Bell's inequality. As far as I can tell from Harrison's article, this experiment throws doubt on at least one of the following three assumptions:
1. Logic is valid (to be precise the assumption that (P Or Not P) is always true.)
2. There is a reality separate from its observation (hidden variables)
3. Locality (c is the speed limit).

Interesting as this is (my personal favorite is that (P Or Not P) may not always hold :-), this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable: the distribution is exactly as predicted by QM (rather than classical theory).

JesseM
gstafleu said:
Yes, there are lots of models that have a probabilistic component, e.g. weather predictions, so assuming that isn't so strange. This generally does not lead us to assume that it is "fundamentally" impossible to come up with a non-probabilistic model. Usually the conclusion is that we don't know enough and/or don't have enough computational resources.
Sure, in classical physics there may be situations where the exact details of the position and velocity of every particle at the microscopic level cannot be known, so you can only make statistical predictions, but it's assumed that if you could have complete information the situation would actually be completely deterministic. There actually is an "interpretation of QM" (not a theory, since it makes no new predictions) like this, invented by David Bohm and sometimes called "Bohmian mechanics", in which each particle has a well-defined position and trajectory at all times; the particles are guided by a "pilot wave" which can transmit information instantaneously (thus sacrificing locality), and whose behavior is totally deterministic. In this interpretation the perceived randomness is due to our lack of information about the precise state of each particle and the pilot wave, so you make an assumption called the quantum equilibrium hypothesis, which treats every possible state that's compatible with your knowledge as equally likely for the purposes of making statistical predictions. See this article on Bohmian mechanics, or this explanation of how the pilot wave works. Aside from the fact that, like all interpretations, it's impossible to test, a lot of people object that Bohmian mechanics is a bit contrived and inelegant, not to mention that it must violate relativity at a fundamental level even if not at an empirical level (Bohmian mechanics was only developed to explain nonrelativistic quantum mechanics; I don't think there have been any fully successful attempts to extend it to quantum field theory, which is relativistic). This and other objections to Bohmian mechanics, along with answers from its supporters, are discussed in this paper:

http://arxiv.org/abs/quant-ph/0412119

There is also the "many-worlds interpretation" (see here and here), which is supposed to be deterministic at the level of the entire multiverse, with the apparent randomness due to the experimenters splitting into multiple copies who get different results; but there seem to be a lot of problems showing how to derive the correct probabilities from the many-worlds interpretation without making additional assumptions.
gstafleu said:
Maybe the only part of QM that does introduce a fundamental "unknowability" is, as fargoth mentioned, Heisenberg's uncertainty principle.
But this is a limitation on our ability to measure noncommuting variables like position and momentum simultaneously, so it's only "unknowability" about hidden variables, if you believe there's some objective truth about the value of these variables at all times.
gstafleu said:
Thanks for the explanation of and the pointers to Bell's inequality. As far as I can tell from Harrison's article, this experiment throws doubt on at least one of the following three assumptions:
1. Logic is valid (to be precise the assumption that (P Or Not P) is always true.)
2. There is a reality separate from its observation (hidden variables)
3. Locality (c is the speed limit).
Interesting as this is (my personal favorite is that (P Or Not P) may not always hold :-), this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable: the distribution is exactly as predicted by QM (rather than classical theory).
Well, wouldn't any theory that attempts to explain the apparent randomness of QM in terms of lack of information about some underlying deterministic reality, like with the apparent "randomness" of classical systems when you don't have all the necessary information about every particle's position and velocity, necessarily involve "hidden variables" of some sort?

JesseM said:
But this is a limitation on our ability to measure noncommuting variables like position and momentum simultaneously, so it's only "unknowability" about hidden variables, if you believe there's some objective truth about the value of these variables at all times.

The uncertainty principle stems from the fact that we are handling a wave packet in standard QM.
A pulse is constructed from many frequencies; the shorter it is, the more frequencies it contains.
If you ignore the fact that things are represented by waves of probability amplitude, and say that in the double-slit experiment there is a path the particle will go through which is already determined (a hidden variable), but that you just can't know it without changing the momentum, then you can't explain the dark spots, where the particle can't be.
You don't get interference if the particle behaves classically: it must move through both slits and interfere (even a single particle will interfere with itself). Once the wavefunction collapses to one of the slits (i.e. the particle moves through just one slit), there is no interference.
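
The wave-packet point can be illustrated numerically. A sketch assuming a Gaussian packet (which saturates the position-wavenumber bound), decomposed into plane waves with an FFT:

```python
import numpy as np

# A localized pulse is a sum of many plane waves; here a Gaussian packet.
x = np.linspace(-50.0, 50.0, 4001)
step = x[1] - x[0]
sigma = 2.0
psi = np.exp(-x**2 / (4.0 * sigma**2))  # |psi|^2 has standard deviation sigma

prob_x = np.abs(psi)**2
prob_x /= prob_x.sum() * step
dx = np.sqrt((x**2 * prob_x).sum() * step)   # spread in position

# Decompose the packet into its wavenumber components.
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=step)) * 2.0 * np.pi
kstep = k[1] - k[0]
phi = np.fft.fftshift(np.fft.fft(psi))
prob_k = np.abs(phi)**2
prob_k /= prob_k.sum() * kstep
dk = np.sqrt((k**2 * prob_k).sum() * kstep)  # spread in wavenumber

# A Gaussian packet gives dx * dk = 1/2: squeezing the pulse in x
# forces a broader band of wavenumbers (momenta p = hbar * k).
product = dx * dk
```

A shorter pulse (smaller sigma) gives a larger dk, which is the frequency-content point made above.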

JesseM
fargoth said:
The uncertainty principle stems from the fact that we are handling a wave packet in standard QM.
Sure, but the analogous version of this for waves in classical mechanics doesn't mean there is any information about the wave we are lacking, it just means you have to sum multiple wavelengths of pure sine waves to get a localized wave packet, so you can't say the wave packet has a single wavelength. It's only because the wavefunction in QM represents the probability of getting different values for properties like position and momentum that we call it "uncertainty", so if you make the assumption that these properties must all have well-defined values at all times (hidden variables), then this means a limit on your ability to know the values of noncommuting properties simultaneously.
fargoth said:
If you ignore the fact that things are represented by waves of probability amplitude, and say that in the double-slit experiment there is a path the particle will go through which is already determined (a hidden variable), but that you just can't know it without changing the momentum, then you can't explain the dark spots, where the particle can't be.
You don't get interference if the particle behaves classically: it must move through both slits and interfere (even a single particle will interfere with itself). Once the wavefunction collapses to one of the slits (i.e. the particle moves through just one slit), there is no interference.
But in the Bohmian interpretation of QM, the particle does always travel through one slit or another, it's just that it's guided by a nonlocal "pilot wave" which in some sense is exploring both paths, and the pilot wave will guide it differently depending on whether the other slit is open or closed, and depending on whether there's a detector there. I think the Bohmian interpretation seems too inelegant to be plausible, but it nevertheless serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM.

Correct me if I'm wrong, I haven't read that interpretation yet, but if you claim a particle has a definite place in space at every time even though we can't know it, and this place depends on the wave function, then the route the particle takes will not be a straight one but rather a complicated, direction-changing one... Doesn't that mean a charged particle would radiate on such a path? Or does the particle just teleport itself to a different place each moment? If so, how does that help determinism?

gstafleu said:
...this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable: the distribution is exactly as predicted by QM (rather than classical theory).
Predicting probability distributions isn't the same as predicting individual outcomes. QM predicts a limit to predictability...and provides the techniques for making the best predictions possible.

reilly
Probability deals with events. While a fundamental, rock solid basis for probability theory does not exist, what does exist covers QM probabilities, which, of course, deal with events.

Without QM, the two-slit experiment for electrons would not show a discernible interference pattern; the electrons would go in straight lines unless blocked by a screen. But, unlike electrons, photons/radiation always interfere when involved in a two-slit experiment -- within appropriate frequency ranges and so forth. If the photon is energetic enough to melt the screen, all bets are off. You never know, so to speak, where the next photon will fall.

You never know which stocks, if any, on the New York Stock Exchange will appreciate more than 5% during a day's trading, or..... Given enough history, statisticians can estimate the probability of such events -- theories in this area are nowhere near as powerful as in physics, but some have had success with models. So data analysis, data mining, and so forth are key elements. But the financial modelers and the physicists are doing exactly the same thing: they are, in one way or another, estimating and deriving probabilities.

They have, among other things, a common allegiance to probabilistic methods, and recognize the importance of empirical data to their success. In fact, it can be argued that they both use the same fundamental, formal equation to track probabilities over time. In physics the equation is called the Schrödinger equation for the density matrix; in finance, economics, and other social sciences, and in engineering, it is often called a stochastic probability equation, or many other things, usually with "stochastic" tossed in. What they both do is give the values of probabilities now, based on yesterday, or the last minute, or the last year, or.....or.......
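
One simple instance of the kind of probability-tracking equation described here is a discrete-time Markov update, "probabilities now, based on yesterday" (the transition matrix below is an arbitrary illustrative choice, not from the thread):

```python
# Transition matrix: T[i][j] = probability of being in state i today
# given state j yesterday (each column sums to 1).
T = [[0.9, 0.2],
     [0.1, 0.8]]

p = [1.0, 0.0]  # today's distribution: definitely in state 0

# Apply the update repeatedly: tomorrow's probabilities from today's.
for _ in range(100):
    p = [T[0][0] * p[0] + T[0][1] * p[1],
         T[1][0] * p[0] + T[1][1] * p[1]]

# p stays normalized and settles to the stationary distribution (2/3, 1/3).
```

The driver of the dynamics (the matrix) differs across fields, but the bookkeeping of probabilities over time has this same shape.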

But of course the drivers of the dynamics differ greatly between the various fields. If we use classical and quantum approaches to sending electrons through slits, then only the QM approach will explain the interference pattern, which has nothing to do with experimental design -- the sample space actually covers probably thousands of experiments and many, many slit configurations, and the conclusions are invariant; QM is right.

It's worth remembering that the experimental finding of electron interference came before modern QM. Thus QM was, out of necessity, required to explain electron diffraction. To a substantial degree this core phenomenon, in my opinion, drives almost all of QM: the Heisenberg uncertainty principle, the superposition principle, tunneling, and so forth. Interference first, theory second.

When talking QM it is, I think, important to recall QM's origins, and to indicate the immense coverage and success of the theory. My conservative guess is that there are at least 100,000 pages of material on the successful testing of QM -- this covers atomic physics, solid state physics, superconductivity, superfluidity, semiconductors and transistors, and you name it. There is no indication of the influence of experimental design on QM, other than that normally expected. (I did a lot of work on this issue, in regard to electron scattering experiments, a few years ago, and never found any indication that different designs led to different conclusions. If you know about things like ANOVA, etc., you will recognize a bit of oversimplification on my part -- to save writing another paragraph.)

It's a virtual certainty, at least to me, that eventually QM will be found wanting -- the march of history and all that. But, I'd say that there's no chance -- better, slim to none -- to find the problem in the 100,000 pages. All of this work has been steam-cleaned by the most intense scrutiny imaginable, and QM always has done the job. The QM edifice, probability and all, is huge, stable and solid -- very unlikely to come tumbling down; it will crumble at the edges instead.
Regards,
Reilly Atkinson
QM is weird because nature is weird.

Jesse,

thanks for the links to the articles about Bohmian mechanics, that was very interesting. I think that may have been precisely what I was after. The article by Goldstein at one point says "...if at some time (say the initial time) the configuration Q of our system is random, with distribution given by |Ψ|² = Ψ*Ψ, this will be true at all times..." That, I think, is like what I was trying to say about the randomness in the results being the same as the randomness put into the experiment in the first place.

JesseM said:
Well, wouldn't any theory that attempts to explain the apparent randomness of QM in terms of lack of information about some underlying deterministic reality, like with the apparent "randomness" of classical systems when you don't have all the necessary information about every particle's position and velocity, necessarily involve "hidden variables" of some sort?
Yes, except that determined non-determinists like to think that QM has somehow "proved" that there "are" (no doubt in some Clintonian sense :-) no hidden variables. As you point out elsewhere, Bohmian mechanics, whether correct or not, does away with any arguments that rely on the assumption that QM is necessarily non-deterministic: Bohmian mechanics "serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM," thus providing a counterexample to the assumption of necessary nondeterminism.

reilly said:
They [physics and e.g. economics] have, among other things, a common allegiance to probabilistic methods, and recognize the importance of empirical data to their success. In fact, it can be argued that they both use the same fundamental, formal equation to track probabilities over time.
Reilly,

Thanks for your reply. I never intended to say that QM was wrong or didn't work, just that I didn't see any reason to assume fundamental nondeterminism. In economics, for example, randomness is introduced because we don't know enough, or don't have enough computational resources to completely describe the system (as in chaos theory).

In QM we seem to have something similar, and following Bohm the source of the randomness there seems to be that we cannot establish anything beyond what the quantum equilibrium hypothesis allows, thus preserving e.g. the uncertainty principle.

In both cases there is no need to posit a fundamental indeterminacy, we just run into practical limits. At least, that is what I understand from Bohmian mechanics so far.

jtbell
Mentor
gstafleu said:
Bohmian mechanics "serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM," thus providing a counter example to the assumption of necessary nondeterminism.
However, nonlocal theories are incompatible with relativity, as far as I know. In order for BM (or some other nonlocal hidden variables theory) to be taken more seriously as a replacement for orthodox QM, someone is going to have to come up with a replacement for relativity that both allows for nonlocality and satisfies all the experimental tests that relativity has passed so far. Rather a tall order!

reilly
As a pragmatist in economics and physics, I really don't care much about the origins of randomness. It's there -- let me explain. My version is that, for whatever reason, we cannot predict the precise result of virtually any experiment -- certainly not for anything measured by a real number. There is always experimental error (maybe not in simple counting -- oranges, or change, or books...), the sources of which we do not fully understand, but we do know how to manage the error and estimate it. One posits the experimental error to be a random variable, Gaussian preferably but not necessarily. In fact systems, control, communication, and electronics engineers use a very sophisticated approach to errors, one that goes well beyond elementary statistics. And all this effort works.

Why should something be causal rather than random? Where is it written?
And, reflect upon the following curiosity: it's possible to build models, based on random variables, that are fully deterministic -- this shows up in the subject of Time Series Analysis.
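
That Time Series Analysis curiosity can be made concrete: a sinusoid with a random phase is a legitimate stochastic process, yet once two values are observed, a fixed linear recursion predicts every future value exactly (the frequency below is an arbitrary illustrative value):

```python
import math
import random

omega = 0.7                              # fixed angular frequency
phase = random.uniform(0.0, 2.0 * math.pi)  # the only random ingredient

def x(t):
    """One realization of the random-phase sinusoid process."""
    return math.cos(omega * t + phase)

# Exact linear predictor: x[t] = 2*cos(omega)*x[t-1] - x[t-2].
# The process is random across realizations, yet each realization
# is perfectly predictable -- zero prediction error.
predicted = 2.0 * math.cos(omega) * x(9) - x(8)
actual = x(10)
```

So "built from random variables" and "fully deterministic in time" can coexist in one model, which is the curiosity being pointed at.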

So, to avoid brain crunching contradictions, and tortured thoughts, I go with: Random is as random does. If random works, it must be random.

Probability and statistics provide great tools for solving lots of important problems -- even if we are not sure why, these tools work. And we say that it's random stuff that is explicated by probability and statistics.

Around these parts, Newton's mechanics and gravity work just fine. Frankly, I find this as mysterious as the accuracy of the standard probability model for a fair coin.

By your criteria, how could you ever judge that an apparent random process was in fact causal?
Good questions.
Regards,
Reilly Atkinson

jtbell said:
However, nonlocal theories are incompatible with relativity, as far as I know. In order for BM (or some other nonlocal hidden variables theory) to be taken more seriously as a replacement for orthodox QM, someone is going to have to come up with a replacement for relativity that both allows for nonlocality and satisfies all the experimental tests that relativity has passed so far. Rather a tall order!
Well... isn't orthodox QM equally non-local? The Stanford Encyclopedia article on Bohmian mechanics (http://plato.stanford.edu/entries/qm-bohm/#nl) states:

It should be emphasized that the nonlocality of Bohmian mechanics derives solely from the nonlocality built into the structure of standard quantum theory, as provided by a wave function on configuration space, an abstraction which, roughly speaking, combines -- or binds -- distant particles into a single irreducible reality.​
It seems to me that under these circumstances, complaining that you can't take BM seriously as a replacement for the equally non-local orthodox QM is, to make a link with a quarkian property, strange.

reilly said:
Why should something be causal rather than random? Where is it written?
The reason for assuming causality is perhaps not so much physical as methodological. When you incorporate a stochastic element in a model, you are in fact saying something like "we have a rough description of how this variable works, but for the purposes of this model we are not going to bother explaining the detailed workings." In other words, you are explicitly putting the causation of a certain behaviour outside your model.

That is just fine for pragmatic solutions. It may simply not be possible to incorporate all causalities into the model (think chaos theory), or we may not yet know enough. In that case you take what you have or what you can do. No problem.

But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours. You just stick with the rough description, never mind "what makes it tick." And that is equivalent to saying that you have given up on the scientific endeavour, at least for those behaviours. Now it may be that at some point we will have to acknowledge that for some phenomena we have been getting nowhere for a long time and it doesn't look as if we'll be getting any further any time soon. But one shouldn't assume that as a matter of principle, which is what non-determinism amounts to.

And as a practical matter, one probably won't raise much grant money by proposing not to look any further into a certain phenomenon.

Well, as I see it, the "non-locality" of orthodox QM is due to the particle not having a definite size; I mean, it is everywhere, it's just less probable to find it far away from a certain spot. So the particle is not local, and there's no problem of it "knowing" things that are far away from it.

The non-locality problem only occurs when you insist the particle has a definite place at all times, and that it just "knows" non-local conditions.

fargoth said:
well, as i see it the "non-locality" of orthodox QM is due to particle not having definite size, i mean they are everywhere, its just less probable to find them far away from a certain spot... so the particle is not local, and theres no problem of it "knowing" things that are far away from it.
the non-locality problem only occures when you insist the particle has definite place at all times, and it just "knows" non-local conditions.
Totally incorrect. Size, as ZapperZ remarked on another thread, is simply not defined for these particles, and you have completely missed the point of entanglement.

Unlike some people, physicists don't just make up this stuff off the top of their heads.

Totally incorrect. Size, as ZapperZ remarked on another thread, is simply not defined for these particles, and you have completely missed the point of entanglement.
Unlike some people, physicists don't just make up this stuff off the top of their heads.
I don't just make up this stuff off the top of my head. I can accept that you don't like the term "size," and I'm not going to argue about it, because in the end it's the same as saying the particle doesn't have a defined size, or is point-sized, or whatever. What matters is where it can interact with other matter, and that is everywhere (though perhaps with almost zero probability far away).

If a particle is described as a field of probability amplitudes, it exists everywhere; that's what I meant.

There's no need to get all sensitive about it.

I wasn't talking about entanglement; I was talking about the particle "knowing" where it could exist in space according to the surrounding environment, which would have been a non-locality problem if you thought of this particle as existing at a certain point for a certain time, as BM claims.

Isn't the uncertainty principle also related to indeterminacy?
You can't know where your particle is, not because you can't measure it, but because, if QM is right, it doesn't have a well-defined position.
You can confine it using a potential well (though as long as the well isn't infinite, it could still tunnel everywhere with very little probability).
Tunneling, for example, is a non-locality problem if you think of this particle as having a defined place which you just can't know; but if you say the particle is everywhere, it's not a problem anymore.

Maybe I misunderstood the hidden-variable stuff, but I think it should apply to every unknowable variable, and if it does, then a problem of non-locality is present at the most basic level of position, and you don't have to go looking for it in entanglement.

reilly
Bohm

Bohm did his work on his QM some 60 years ago; roughly around the same time that Feynman, Schwinger, and Tomonaga (F, S, T) figured out how to make QED work.

Even if the Bohm approach had led to a mere 1 percent of the progress in fundamental issues that the FST approach achieved, it would have a certain, if small, level of credibility. But, as far as I know, the Bohm approach has led only to controversy, conferences, heated and passionate arguments, and so forth -- and only in a limited part of the physics community. No Bohm-interpretation-based physics of any consequence means it's a path not likely to bring any success -- orthodox physics has left Bohmian physics way behind.

By the way, there are sound theoretical reasons why QM and QFT are local -- it's all in the structure of the interactions. And we use, almost exclusively, so-called point interactions, which are guaranteed to be local.

Regards,
Reilly Atkinson

reilly
gstafleu said:
The reason for assuming causality is perhaps not so much physical as methodological. When you incorporate a stochastic element in a model, you are in fact saying something like "we have a rough description of how this variable works, but for the purposes of this model we are not going to bother explaining the detailed workings." In other words, you are explicitly putting the causation of a certain behaviour outside your model.

RA: Your description of the import of random variables is quite at variance with much of the statistical and analytical work and literature of more than a century of experience. I'll cite you the examples of so-called Monte Carlo simulations, simulations of gaseous or liquid turbulence or explosions, and then there's the stock market, sales forecasts, surveys and so on. Check it out.
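For concreteness, here is the classic toy Monte Carlo calculation (my illustration, not one from the thread, assuming NumPy): estimate the area of the unit circle -- i.e. pi -- by throwing random points at the enclosing square. Randomness here is a working computational tool, not an obstacle to understanding.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
pts = rng.uniform(-1, 1, size=(n, 2))     # random points in the 2x2 square
inside = (pts ** 2).sum(axis=1) <= 1.0    # which points land in the unit circle
pi_est = 4 * inside.mean()                # (circle area / square area) * 4
print(pi_est)  # close to 3.14159...
```

The estimate's error shrinks like 1/sqrt(n), which is exactly the kind of well-managed statistical error Reilly describes.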

And anyway, is it not perhaps the case that our basic perceptual mechanisms deal with stable averages and what you might call recognition of comprehensible motions and changes? We were programmed to look for causality -- a fundamental human bias, and a good thing. Random is troublesome, even life-threatening: think "random traffic accidents"...

gstafleu said:
That is just fine for pragmatic solutions. It may simply not be possible to incorporate all causalities into the model (think chaos theory), or we may not yet know enough. In that case you take what you have or what you can do. No problem.
RA: Are you an experienced builder of models?

gstafleu said:
But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours. You just stick with the rough description, never mind "what makes it tick." And that is equivalent to saying that you have given up on the scientific endeavour, at least for those behaviours. Now it may be that at some point we will have to acknowledge that for some phenomena we have been getting nowhere for a long time and it doesn't look as if we'll be getting any further any time soon. But one shouldn't assume that as a matter of principle, which is what non-determinism amounts to.

RA: Randomness does not preclude understanding -- much of Einstein's work dealt with randomness, with extraordinary success.

Making assumptions of the type you mention just above is way beyond my pay grade -- I don't have a clue. But to stop investigating would, as you suggest, be just plain silly.

Regards,
Reilly Atkinson

reilly said:
By the way, there are sound theoretical reasons why QM and QFT are local--it's all in the structure of the interactions. And we use, almost exclusively so-called point interactions, guaranteed to be local.
I've always seen QM and its sub-theories as "non-local," with the HUP, superposition, entanglement and the like being explained through that non-local quality.

So, just to be clear: by saying "QM and QFT are local" you do not mean local in a classical sense, i.e. against a dependent background, but local within a structure of reality built on an independent background, as Lee Smolin describes both GR and QM as having.

Thus there is no true tracking of “local” positions between separated points. Just the relative interrelation of interactions, each acting as a kind of independent local system.

Is that how some "theoretical reasons" can get "local" onto QM?

DrChinese
The question was whether indeterminacy is due to experimental design. The obvious answer: no.

We know that an electron can be confined to an arbitrarily small volume because it acts as a point particle. If indeterminacy were an issue of experimental design, we could simultaneously measure its momentum with similar precision -- and that doesn't happen.

Instead, our results exactly match the HUP, which is essentially the "source" of indeterminacy. Even within the theoretical frameworks of BM and MW, no one claims any different results are possible within an experimental setup.

So it isn't the experiment, it's the theory every time. Experiment, of course, supports the theory too.
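DrChinese's claim that results "exactly match the HUP" can be checked numerically. A sketch (mine, assuming NumPy, units where hbar = 1, and an arbitrarily chosen Gaussian packet width): compute the position spread directly and the momentum spread from the discrete Fourier transform of the wavefunction. For a Gaussian packet the product sits exactly at the HUP lower bound, hbar/2.

```python
import numpy as np

hbar = 1.0                     # natural units
N, L = 4096, 40.0              # grid points and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.3                    # arbitrary packet width (assumption)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position spread: <x> = 0 by symmetry, so Delta-x = sqrt(<x^2>)
prob_x = np.abs(psi) ** 2
prob_x /= prob_x.sum()
delta_x = np.sqrt(np.sum(x**2 * prob_x))

# Momentum spread from the Fourier transform of psi
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
prob_p = np.abs(np.fft.fft(psi)) ** 2
prob_p /= prob_p.sum()
delta_p = np.sqrt(np.sum(p**2 * prob_p))

print(delta_x * delta_p)  # ~0.5 = hbar/2, the Heisenberg lower bound
```

Narrowing `sigma` shrinks `delta_x` but inflates `delta_p` in exact proportion, which is the confinement trade-off DrChinese describes.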

Schrödinger's cat; woof!!
I've rapidly come to the conclusion that pinpointing a size for, say, a photon or an electron isn't really that important. Yes, they both have a size, even if it is just a wavelength or a point, but to argue about it seems a bit pointless, if you'll forgive the pun.

So everything has a definite value, it's just that we can't measure it? So superposition -- a cloud of possible electron configurations -- isn't that all possibilities exist at once, merely that we can't determine for sure exactly where the electron is at any given time, or its momentum; thus it's everywhere with certain probabilities given by the density of the cloud? That makes sense to me.

Does everyone independently come to the conclusion that this sort of non-deterministic idea indicates there is in fact free will, and then find out that there isn't again, or is it just me? It means you can't know what anyone is thinking, and neither can your own mind, so I guess there is free will after all. I think I just broke my brain.

gstafleu said:
But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours...
I'm not convinced Bohmian mechanics is leading the way. It might be deterministic, but my understanding is that the pilot waves can never be detected because they always cancel out. I like the ideas a lot, but I always find myself in a Bohmian universe that is deterministic yet still behaves just as if it isn't! I think most followers have already given up the idea of creating technology to control pilot waves etc. (unless you include all "ordinary" technology that exploits QM). I just use the ideas as a useful perspective to take up occasionally.
