Is Free Will a Foundational Assumption in Quantum Theory?

  • #151
Lynch101 said:
If the choices of which observable to measure had a common cause, would they be correlated?
Let me try again to explain the difference between cause and correlation.

Imagine first that everything in a system is fully determined. The universe since the big bang say.

Now imagine that you have an experiment with two people involved. There are two boxes. The first person puts a prize in one of the boxes. The second person gets to open a box and try to win a prize.

Now, let's assume you, at the big bang, can predict exactly what everyone will do in this experiment. Everything to you is completely predictable. But, what you predict will be a mixture of all four possibilities. Prize in box 1, box 1 is chosen; prize in box 1, box 2 is chosen; prize in box 2, box 1 is chosen; prize in box 2, box 2 is chosen.

To you this was all totally predictable. But, it still represents "random", "uncorrelated" results. There is no correlation between the prize being in box 1 and box 1 being chosen etc.

Now, suppose we find that there is a correlation. Let's assume the prize is never won. It's not enough that the universe is fully deterministic for this to happen. There would need to be a causal chain that enforces opposite choices. But what law of nature can enforce that for complex systems like human beings 14 billion years later? Especially if this always happens with any two people, in any place, at any time.

The answer is no normal law of nature can explain that. You argue as though simple determinism could produce that result. It can't.

That's why determinism cannot explain QM correlations.
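A minimal numerical sketch of the argument above (the seeds, the trial count, and the coupling rule are illustrative assumptions, not anything from the post): two fully deterministic but mutually independent choice procedures show no correlation, while reproducing "the prize is never won" requires an explicit rule linking the two choices.

Python:
import random

def run_trials(n, coupled=False, seed=0):
    """Two-box game: a hider places the prize, a chooser opens a box."""
    rng_hider = random.Random(seed)        # deterministic given its seed
    rng_chooser = random.Random(seed + 1)  # also deterministic, but unrelated
    wins = 0
    for _ in range(n):
        prize_box = rng_hider.randint(1, 2)
        if coupled:
            # a hypothetical 'law' forcing the opposite choice
            choice = 2 if prize_box == 1 else 1
        else:
            choice = rng_chooser.randint(1, 2)  # independent deterministic choice
        wins += (choice == prize_box)
    return wins / n

print(run_trials(100_000))                # ~0.5: determinism alone, no correlation
print(run_trials(100_000, coupled=True))  # 0.0: requires a rule linking the choices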
 
Last edited:
  • Like
Likes Lynch101 and DarMM
  • #152
A. Neumaier said:
This is just because, as I had already mentioned, nobody ever really addressed the classical version of the quantum measurement problem. Instead it is simply assumed that the measuring device somehow acquires the measurement value, in principle with arbitrary accuracy and without backreaction to the system measured. This is a Platonic notion of measurement, looking into God's cards so to say. In this case you may say that the events are the properties of the collection of trajectories of the particles making up the system measured. You (like everyone else in the past) are treating classical mechanics exclusively from this Platonic perspective, while you consider quantum mechanics on a different footing, e.g., by allowing for microscopic descriptions of the measuring device, or even a hierarchy of these. This gives a distorted picture of the classical vs. quantum theme.

But for a system consisting of a few interacting atoms it is impossible to measure (from within a classical universe) most of these properties as if they were unobserved, as the coupling to the measuring device distorts the trajectories as in the quantum case. This proves that classical measurement is nontrivial. Realistic measurement in a classical universe would have to consider how some observable of a classical microscopically described measuring device acquires a value correlated with some property of the observed system. The outcome space is then the range of that observable -
No, I still don't think this is quite right, and it's not just a matter of assuming an approximation of Platonic nondisturbance. Robert Spekkens and others have investigated a classical theory with fundamental disturbance, where the idealized notion of no back reaction is abandoned: so-called epistemically restricted classical theories.

We do get non-commutativity of measurements, entanglement, discord, steering, super-dense coding and many other features. We do not, however, get contextuality and non-classical correlations, because ultimately the underlying event algebra of the system is Boolean. These features indicate something beyond merely some unconsidered irremovable back reaction, and they form the core of the differences between measurement in quantum theory, viewed as a probability theory, and measurement in a classical theory.
 
  • #153
DarMM said:
No, I still don't think this is quite right, and it's not just a matter of assuming an approximation of Platonic nondisturbance. Robert Spekkens and others have investigated a classical theory with fundamental disturbance, where the idealized notion of no back reaction is abandoned: so-called epistemically restricted classical theories.
https://arxiv.org/pdf/1409.5041.pdf ? I don't see there a chaotic deterministic dynamics between system and detector giving rise to stochastic measurements. Instead, both the probability and what can be measured are input by hand in a purely Platonic way - i.e., axiomatically, by making assumptions.
DarMM said:
We do get non-commutativity of measurements, entanglement, discord, steering, super-dense coding and many other features. We do not, however, get contextuality and non-classical correlations, because ultimately the underlying event algebra of the system is Boolean. These features indicate something beyond merely some unconsidered irremovable back reaction, and they form the core of the differences between measurement in quantum theory, viewed as a probability theory, and measurement in a classical theory.
Of course the differences between QM and CM must show up somewhere. But at least the situation is not so easy.

In any case one needs extraneous structure beyond what is given by a model of Nature as such (i.e., before physicists tamper with it by installing a cut separating systems and detectors). This was my primary claim against yours.

In the classical case one can dispense with separately specifying a Boolean subalgebra because there is already one naturally intrinsically given. But in QM, decoherence also seems to specify a natural intrinsically given Boolean subalgebra (once the Heisenberg cut is made); cf. Schlosshauer's recent article.
 
  • Like
Likes akvadrako
  • #154
What then is the difference between Quantum and Classical probability in your view? That is, in views where we are not taking the quantum state as representational.
 
  • #155
A. Neumaier said:
In the classical case one can dispense with separately specifying a Boolean subalgebra because there is already one naturally intrinsically given. But in QM, decoherence also seems to specify a natural intrinsically given Boolean subalgebra (once the Heisenberg cut is made); cf. Schlosshauer's recent article.
Yes, but decoherence only gives it after an appropriate device is included in the total system, whereas in classical mechanics each system has a Boolean algebra of properties intrinsically.
 
  • #156
DarMM said:
What then is the difference between Quantum and Classical probability in your view? That is in views where we are not taking the quantum state as representational.
Quantum probability is a formal extension of classical probability obtained by dropping the commutative law in the algebraic formulation of classical probability theory (as given by Peter Whittle). Their notions of expectation are essentially the same, given by states (continuous monotone linear functionals). But the quantum notion of probability is quite different: in place of a classical measure (a mapping from the commutative algebra of measurable sets to [0,1] defining the probability, uniquely determined by the state) it has a quantum measure (a pair consisting of a state and a POVM, jointly defining the probability). Thus one needs a state and a context (given by the POVM) to define probabilities, making the concept of probability context-dependent. (Special cases are the projective quantum measures where the POVM consists of a system of orthogonal projectors. The latter are the only quantum measures you seem to consider in your discussions of the classical-quantum difference, though it is well-known that they cannot describe lots of stochastic situations in quantum experiments.)

But I sharply distinguish between these mathematical notions and quantum and classical physics. The latter are about dynamical systems giving rise to measurement questions and associated probability statements. The dynamics should give rise through an appropriately defined cut (= specification of system, detector, environment) to a definition of measurement results, their relation to the system state, and the appropriate quantum or classical measure describing the measurement statistics, including in the quantum case the proper context. The quest for achieving this constitutes the measurement problem. It is nontrivial both in the classical and in the quantum case.
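A minimal sketch of the "state plus POVM" notion of quantum probability just described (the particular qubit state and effects are illustrative choices, not from the post): the probabilities are ##p_k = \mathrm{Tr}(\rho E_k)##, so the same state yields a different distribution for each choice of context.

Python:
import numpy as np

rho = np.diag([1.0, 0.0])  # illustrative qubit state |0><0|

# Context 1: projective measurement (PVM) in the Z basis
pvm_z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Context 2: projective measurement in the X basis
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
pvm_x = [np.outer(plus, plus), np.outer(minus, minus)]

# Context 3: an unsharp (non-projective) POVM, a noisy Z readout; effects sum to I
eta = 0.8
povm_noisy = [eta * pvm_z[0] + (1 - eta) / 2 * np.eye(2),
              eta * pvm_z[1] + (1 - eta) / 2 * np.eye(2)]

def probabilities(rho, effects):
    """Quantum measure: p_k = Tr(rho E_k); defined only once the POVM is chosen."""
    return [float(np.real(np.trace(rho @ E))) for E in effects]

print(probabilities(rho, pvm_z))       # [1.0, 0.0]
print(probabilities(rho, pvm_x))       # [0.5, 0.5]
print(probabilities(rho, povm_noisy))  # [0.9, 0.1]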
 
Last edited:
  • #157
I think that the occurrence of unique measurement results is just a fundamental empirical fact, which cannot be explained by simpler facts or sophisticated theories. In theory it's just to be assumed as a postulate in both classical and quantum theory. Of course, both classical and quantum theory are dynamical theories, describing the evolution of the state of a system with time, given the dynamics (i.e., the Hamiltonian) and an initial condition. The main difference is just the notion of state, which reflects the classical-deterministic and the quantum-deterministic description with the latter being more fundamental and the former being derivable as an effective description of macroscopically relevant observables in a statistical sense.
 
  • Like
Likes PeroK and DarMM
  • #158
A. Neumaier said:
it has a quantum measure (a pair consisting of a state and a POVM, jointly defining the probability). Thus one needs a state and a context (given by the POVM) to define probabilities, making the concept of probability context-dependent
So far this seems very similar to what I was saying. You need a POVM choice (in addition to the state) to have a well defined probability model, unlike the classical case where no such choice is needed.

A. Neumaier said:
Special cases are the projective quantum measures where the POVM consists of a system of orthogonal projectors. The latter are the only quantum measures you seem to consider in your discussions of the classical-quantum difference
Where did I only consider PVMs? I spoke about POVMs from the very beginning. The only point where I mentioned PVMs is when you said Born et al. didn't know about POVMs.
 
  • #159
vanhees71 said:
I think that the occurrence of unique measurement results is just a fundamental empirical fact, which cannot be explained by simpler facts or sophisticated theories.
But what makes a measurement device (considered as a quantum system) so special that one can read off from it unique measurement results - in spite of it being represented by a superposition in standard quantum measurement theory? Usual quantum systems do not behave this way, so there must be something special about measurement devices...
 
  • #160
DarMM said:
So far this seems very similar to what I was saying. You need a POVM choice (in addition to the state) to have a well defined probability model, unlike the classical case where no such choice is needed.
You were referring to ''quantum theory'' (and ''classical theory'') rather than ''quantum probability'' (and ''classical probability''), terms that have quite different meanings to me. Now I realized that with ''quantum theory viewed as a probability theory'' you just meant the purely mathematical discipline of noncommutative probability theory (46L53 in the Mathematics Subject Classification) and nothing else. In the interpretation of the latter we completely agree.

But ''quantum theory'' and ''classical theory'' are in my view dynamical theories; to apply notions of probability to them one needs additional specifications. These determine the context, and once the context is fixed, quantum probability restricts to classical probability. Thus I don't consider the specific noncommutative aspects of quantum probability a ''seismic shift'' (your #147).

Rather, the seismic shift is that one expects quantum physics to be consistent on all scales while one expects classical physics to be consistent only at macroscopic scales, obviating the quest for a microscopic description of the measurement process. Thus at present the requirements for a consistent quantum theory are far more stringent than those for a consistent classical theory. This strengthening of the requirements is the seismic shift that created the measurement problem.

If one strengthens the requirements for a consistent classical theory in the same way, one ends up with the question of how a detector subsystem of a large classical chaotic system can acquire information about a disjoint subsystem to be measured. Trying to answer this poses a classical measurement problem with essentially the same difficulties as in the quantum case. The differences in the probability calculus appear minor from this perspective.

DarMM said:
Where did I only consider PVMs? I spoke about POVMs from the very beginning. The only point where I mentioned PVMs is when you said Born et al. didn't know about POVMs.
Ah yes, in post #92. Sorry; the discussion extended over a time span longer than my short-term memory. Since on the classical side you always referred to the Boolean algebra, I had thought without checking that you assumed in the quantum case a Boolean subalgebra as well to get a classical subsetting.
 
  • Like
Likes DarMM
  • #161
Elias1960 said:
What I reference here as the interpretation of the Einstein equations would be the limit ##\Xi,\Upsilon \to 0## of the equations of that theory.

Ok, that makes it clearer what you are actually talking about, and it is not "Lorentz ether theory". What you are talking about is a different theory that makes different empirical predictions, but just happens to have standard GR as an approximation in an appropriate limit. Making measurements outside the domain in which that approximation is valid would test this theory against, for example, standard GR not considered as an approximation to anything else.
 
  • #162
PeroK said:
The difficulty is not to "predict" the roll of the die after it's been thrown, but before it's been thrown!
It would be difficult of course, but surely if all the relevant information was known then it would be possible. The difficulty is down to determining the values of all the relevant information, no?
 
  • #163
Lynch101 said:
It would be difficult of course, but surely if all the relevant information was known then it would be possible. The difficulty is down to determining the values of all the relevant information, no?
Possibly. At the very least it becomes practically impossible if a human is involved. Also, you can have strange loops in this case. Suppose you claim to be able to predict what number I will write down next. That only works if you don't tell me. Otherwise, I can use my "free will" to write something else down.

That's also why stock market predictions are impossible if the information is made public. You get feedback loops.
 
  • Like
Likes DarMM and Lynch101
  • #164
bhobba said:
Determinism and choice are mutually contradictory.

Think carefully. Suppose I describe "free choice" this way: "free choice" just means you determine what your actions are, not anything else. That seems to capture our intuitive sense of what "free choice" is: after all, "free choice" doesn't mean you just do some random thing, it doesn't mean you choose A and then do B or C or D; it means you choose what you do. But that means your choice has to determine what you do.

But how could you even have this kind of free choice in a world that wasn't deterministic, at least at whatever level of description is relevant for "free choice"? In a world without deterministic laws, or at least laws that were deterministic to a good enough approximation at the level of your free choice, having "free choice" wouldn't matter, because any effects of your free choice would soon be overwhelmed by random fluctuations.

So the key question is really, what are "you"? What is this "you" that has to determine your actions in order for "you" to have free choice? In a deterministic universe (or at any rate one that is deterministic to a good enough approximation at the appropriate level of description), "you" are a particular set of deterministic processes that go on in a particular physical subsystem (your brain and body). As long as those processes are what determine your actions, you have free choice.

Many people object to this concept of free choice, but as the philosopher Daniel Dennett has pointed out in several of his books and many articles on the topic, this concept of free choice gives you everything about free will that's actually worth wanting. You just have to be clear on what "you" actually are.
 
  • Like
Likes Lynch101
  • #165
PeroK said:
Possibly. At the very least it becomes practically impossible if a human is involved. Also, you can have strange loops in this case. Suppose you claim to be able to predict what number I will write down next. That only works if you don't tell me. Otherwise, I can use my "free will" to write something else down.

That's also why stock market predictions are impossible if the information is made public. You get feedback loops.
Absolutely, it is impossible in a practical sense.

If I were to predict what number you were to write down though, I would also predict my telling you a number, and so the number I tell you might not necessarily be the one I predict, unless I predict that you will think that I am telling you the wrong number and write down the number I tell you, thinking that the number I tell you is the one number it is guaranteed not to be...

my head hurts...

I think Daniel Dennett had a term for that, something like 3rd-, 4th-, or 5th-order intentionality.
 
  • #166
Lynch101 said:
Absolutely, it is impossible in a practical sense.

If I were to predict what number you were to write down though, I would also predict my telling you a number, and so the number I tell you might not necessarily be the one I predict, unless I predict that you will think that I am telling you the wrong number and write down the number I tell you, thinking that the number I tell you is the one number it is guaranteed not to be...

my head hurts...

I think Daniel Dennett had a term for that, something like 3rd-, 4th-, or 5th-order intentionality.
It's not particularly relevant except to note that the behaviour of complex systems is fundamentally different from the behaviour of simple systems.
 
  • Like
Likes Lynch101
  • #167
PeroK said:
Let me try again to explain the difference between cause and correlation.

Imagine first that everything in a system is fully determined. The universe since the big bang say.

Now imagine that you have an experiment with two people involved. There are two boxes. The first person puts a prize in one of the boxes. The second person gets to open a box and try to win a prize.

Now, let's assume you, at the big bang, can predict exactly what everyone will do in this experiment. Everything to you is completely predictable. But, what you predict will be a mixture of all four possibilities. Prize in box 1, box 1 is chosen; prize in box 1, box 2 is chosen; prize in box 2, box 1 is chosen; prize in box 2, box 2 is chosen.

To you this was all totally predictable. But, it still represents "random", "uncorrelated" results. There is no correlation between the prize being in box 1 and box 1 being chosen etc.

Now, suppose we find that there is a correlation. Let's assume the prize is never won. It's not enough that the universe is fully deterministic for this to happen. There would need to be a causal chain that enforces opposite choices. But what law of nature can enforce that for complex systems like human beings 14 billion years later? Especially if this always happens with any two people, in any place, at any time.

The answer is no normal law of nature can explain that. You argue as though simple determinism could produce that result. It can't.

That's why determinism cannot explain QM correlations.
Thanks again, PeroK, this is very helpful. I'm pretty sure I understand the notion of correlation on a basic level at least - the example I gave to Demystifier was the correlation between flowers blooming in summer and the number of people wearing sunglasses. It's probably more the issue of correlation as it pertains to Bell's theorem that I am unclear about.

Am I correct in saying that we wouldn't expect the outcomes of measurements made in the Bell tests to be correlated, but the results show that they are not statistically independent, so there is a higher level of correlation than if the measurements were just random? (A numerical illustration of this follows the list below.) The question then is: what is the cause of this correlation? Bell's theorem implies that we must give up one of the following:
1) Realism
2) Locality
3) Free Will (statistical independence)
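The numerical illustration promised above (a sketch based on the standard textbook singlet-state correlation, not on anything specific to this thread): the quantum prediction for the correlation of the ±1 outcomes at analyser angles a and b is ##E(a,b) = -\cos(a-b)##, and at the standard CHSH angles this exceeds the bound of 2 obeyed by any locally causal model with statistically independent settings.

Python:
import numpy as np

def E(a, b):
    """Quantum prediction for the spin-singlet correlation of the +/-1 outcomes
    at analyser angles a and b (standard textbook result)."""
    return -np.cos(a - b)

# Standard CHSH angle choices (radians)
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.83 = 2*sqrt(2); any local hidden-variable model gives |S| <= 2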

I'm not intending to argue the point that simple determinism can explain the QM correlations, but in discussing the issue of Free Will (however it is interpreted) it seems to get juxtaposed with Superdeterminism (SD). I'm wondering how SD explains the correlations. Is it by saying that they have a common cause that has its origins at the Big Bang? Someone mentioned that SD doesn't try to explain the correlations, but I'm not sure how to interpret that.

I tend to interpret the term correlation in the context of the adage "correlation does not imply causation", but I also think of it in terms of a relationship, that two "things" are related in some way. In the case of my example of correlation above, the number of flowers that bloom in summer and the number of people wearing sunglasses are correlated, but one doesn't cause the other. They do, however, share a common cause, namely the Sun's rays, so they are related in that way.

When I think of SD I tend to imagine a giant set of dominoes stretching all the way back to the big bang: an incredibly complicated and intricate set of dominoes which are, in practice, unpredictable but which are entirely deterministic. I imagine those dominoes falling in such a way that they always lead to the case where the person always chooses the wrong box. It would appear like an enormous coincidence and defy all explanation, but it would be completely deterministic.

Is that an accurate characterisation of SD or am I missing something along the way?
 
  • #168
PeterDonis said:
These claims about macroscopic processes like shuffling cards, rolling dice, etc., are not necessarily true, because it's highly likely that quantum indeterminacy does not play any role in the outcome, unlike the case of, say, a Stern-Gerlach measurement of a particle's spin. It's entirely possible that, for example, the process of die rolling is insensitive enough to its exact initial conditions that an accurate enough measurement of those initial conditions could allow us to predict the result.

The only analysis I've seen of flipping a coin came to the opposite conclusion.
 
  • #169
Lynch101 said:
Is that an accurate characterisation of SD or am I missing something along the way?
That's correct.
 
  • Like
Likes Lynch101
  • #170
vanhees71 said:
I think that the occurrence of unique measurement results is just a fundamental empirical fact, which cannot be explained by simpler facts or sophisticated theories. In theory it's just to be assumed as a postulate in both classical and quantum theory.

Empirically, all we know is that each individual observer only observes unique results, but the idea that other results were not observed by those we will never come into contact with is an assumption, and it doesn't seem warranted. Indeed, it seems arbitrary and unneeded.
 
Last edited:
  • #171
akvadrako said:
The only analysis I've seen of flipping a coin

Please give a reference.
 
  • #173
PeterDonis said:
In a world without deterministic laws, or at least laws that were deterministic to a good enough approximation at the level of your free choice, having "free choice" wouldn't matter, because any effects of your free choice would soon be overwhelmed by random fluctuations.
Just to say (and I really have no developed view on this), this sort of view treats "randomness" as something ontic that can overwhelm choice, whereas some people see probability as something epistemic that one autonomous object with choice has relative to another autonomous object with choice when they interact. You'll see some discussion about this in Fuchs's email collection in https://arxiv.org/abs/1405.2390. See for example the exchanges with Terry Rudolph and Marcus Appleby. It comes up a few other times; see the topic index.

It's a long, long read though!
 
  • #174
akvadrako said:

Their analysis of the coin flip process looks inconsistent to me; they claim the relevant fluctuations are in polypeptides, but they use the numbers for water. Using numbers for polypeptides should make ##n_Q## larger; an increase of only a factor of 10 in ##r## and ##l## is sufficient for ##n_Q > 1##, if ##\Delta b## is kept the same as for water; if ##\Delta b## is decreased, as would be expected for a polypeptide whose mass is two or more orders of magnitude larger than that of a water molecule (##\Delta b## goes roughly as the inverse cube root of the mass), ##n_Q## gets even larger.
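Putting a rough number on that last step (just the arithmetic implied by the stated inverse-cube-root scaling, with an assumed mass ratio of 100):

Python:
# Assumed mass ratio: polypeptide ~100x (two orders of magnitude) heavier than water.
# With Delta_b ~ m**(-1/3), Delta_b shrinks by roughly the cube root of that ratio,
# which pushes n_Q up further.
mass_ratio = 100
print(mass_ratio ** (1 / 3))  # ~4.64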
 
  • Like
Likes akvadrako
  • #175
DarMM said:
this sort of view treats "randomness" as something ontic that can overwhelm choice

That's one alternative covered by what I said, yes: basically that the fundamental laws are not deterministic, and their non-determinism is so strong that it prevents us from controlling what we do in any meaningful way.

The other alternative is that the fundamental laws are deterministic but their dependence on the exact initial conditions is so sensitive that our inability to control the exact initial conditions means that we cannot control what we do in any meaningful way.

Neither of these alternatives seems to be true of our actual universe: we do seem to be able to control what we do in meaningful ways. My point was simply that that fact alone implies, if not fundamental determinism, at least determinism for practical purposes in the domain of our actions.
 
  • Like
Likes mattt
  • #176
Ah, I see what you mean now. The world around you has to be somewhat deterministic in the "region" into which your choices propagate; otherwise your choices would just get wiped out, or you wouldn't realistically be able to judge the consequences of anything.

That's a very interesting point: Supposing you have Free Will and aren't subject to determinism in some way, the world of objects around you has to be somewhat predictable in order for you to exercise that Free Will in any meaningful way.
 
  • #177
DarMM said:
That's correct.
Ah, I see.

I've heard people refer to the conspiratorial nature of SD and some suggest that it undermines scientific inquiry. Are there other objections to it?
 
  • #178
Lynch101 said:
Ah, I see.

I've heard people refer to the conspiratorial nature of SD and some suggest that it undermines scientific inquiry. Are there other objections to it?
It's very hard to actually construct a superdeterministic theory, since you need to be able to show the initial state has all these consequences and that it is a natural initial state in some sense without fine tuning.
 
  • Like
Likes Lynch101
  • #179
DarMM said:
Supposing you have Free Will and aren't subject to determinism in some way, the world of objects around you has to be somewhat predictable in order for you to exercise that Free Will in any meaningful way.

Yes, and it has to be predictable in both directions, so to speak. Not only do you have to be able to control what you do, you have to be able to rely on what your senses tell you. In a world that is not deterministic to a good enough approximation in the relevant domain, you won't be able to do that.

And it goes even further than that. You have to be able to count on your brain processes to be reliable. That means your brain processes can't be randomly jumping around all the time; they have to be reasonably related to your sensory inputs and your action outputs. Otherwise "you" won't even be a coherent thinking thing.

When you fully unpack all this, the idea that "free will" in any meaningful sense involves not being subject to determinism begins to look pretty hopeless.
 
  • Like
Likes mattt and Lynch101
  • #180
Lynch101 said:
Would the choices of which observable to measure require a common cause in order to be considered correlated? Is that essentially the position that superdeterminism implies/requires?
No, superdeterminism does not say that, but it also does not deny that.
 
  • Like
Likes Lynch101
  • #181
Demystifier said:
No, superdeterminism does not say that, but it also does not deny that.
It might be my interpretation of the term "correlation" as 'a mutual relationship or connection between two or more things.'
Google Dictionary

I imagine that two things which have such a mutual relationship or connection would display statistical interdependence, i.e., would not be statistically independent.

A common cause would constitute a common relationship or connection but would it account for statistical interdependence?
 
  • #182
bhobba said:
That is a logical absurdity. Determinism and choice are mutually contradictory. Determinism means initial conditions determine everything - the concept of choice does not exist. What we do know is that chaos does exist, so in practical terms it is impossible to predict everything even if the world is deterministic. We can never know initial conditions with exact accuracy, and those inaccuracies grow to the point where all we can predict is probabilities, just as if the world were probabilistic in the first place.

I agree with this. Determinism is absurd as shown by the Strong Free Will Theorem.

1. If the Experimenter's choice is determined by some mechanism then probability wouldn't exist on a quantum level. Every experiment supports this free choice and there's not a shred of evidence that supports any mechanism that determines the choice of the Experimenter.

2. You can never reduce a quantum system to one state prior to measurement which is local to the observer. It's always this state or that state, or a 1 and a 0. It's never reduced to a single state, so randomness is fundamental, and this is the antithesis of determinism. This means there isn't one set of initial conditions. The initial conditions of the universe wouldn't be objective but a combination of states that can be in at least 2 different states, up to 10^500 if you accept the String Theory Landscape.

3. Free will is an extension of freedom of choice. All observers, whether human or non-human, have freedom of choice. The freedom-of-choice loophole was closed in the Big Bell Test, and the difference is this: a free choice can occur when a measuring device measures which slit the particle went through in the double-slit experiment.

Free will is me deciding to go to Denny's this morning for breakfast. This is human consciousness making a free choice among a probability distribution of probable states. You can do Bayesian updating based on my history of choices when I eat breakfast to assign probabilities. Based on my history, maybe 40% of the time I went to Denny's for Breakfast, 30% of the time I went to Bob Evans, 20% of the time to a local diner, 4% of the time McDonald's, 4% of the time Burger King and 2% of the time somewhere new when I go out to eat breakfast.

There's no evidence that there's any mechanism outside of my consciousness that determines which restaurant I will go to when I eat out for breakfast. The most you can do is Bayesian updating based on my history of free-will choices.
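For what it's worth, the "Bayesian updating based on my history of choices" mentioned above can be sketched as a Dirichlet-categorical update (the counts below are made up, chosen to roughly match the quoted percentages):

Python:
from collections import Counter

# Hypothetical history of 50 breakfast outings (made-up counts)
history = (["Denny's"] * 20 + ["Bob Evans"] * 15 + ["local diner"] * 10 +
           ["McDonald's"] * 2 + ["Burger King"] * 2 + ["somewhere new"] * 1)

# Posterior predictive probability of each option under a flat Dirichlet prior:
# (count + alpha) / (N + K * alpha)
alpha = 1.0
counts = Counter(history)
N, K = len(history), len(counts)
posterior = {option: (c + alpha) / (N + K * alpha) for option, c in counts.items()}

for option, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{option:14s} {p:.2f}")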
 
  • Like
Likes Lord Jestocost
  • #183
Demystifier said:
No, superdeterminism does not say that, but it also does not deny that.
Is it not typical in superdeterminism for the choice of observable and the state of the system to be highly correlated due to a common cause (in Reichenbach's sense) in an earlier state of the world?
 
  • #184
PeterDonis said:
When you fully unpack all this, the idea that "free will" in any meaningful sense involves not being subject to determinism begins to look pretty hopeless.
Very interesting. I'm not quite sure of this last point, but plenty of food for thought.
 
  • #185
DarMM said:
Is it not typical in superdeterminism for the choice of observable and the state of the system to be highly correlated due to a common cause (in Reichenbach's sense) in an earlier state of the world?
Well, without an explicit example of a superdeterministic theory, it's hard to tell. But it seems to me that, in principle, the fine tuning of the initial conditions can be a pure coincidence, without an actual common cause.
 
  • #186
PeterDonis said:
Their analysis of the coin flip process looks inconsistent to me; they claim the relevant fluctuations are in polypeptides, but they use the numbers for water. Using numbers for polypeptides should make ##n_Q## larger; an increase of only a factor of 10 in ##r## and ##l## is sufficient for ##n_Q > 1##, if ##\Delta b## is kept the same as for water; if ##\Delta b## is decreased, as would be expected for a polypeptide whose mass is two or more orders of magnitude larger than that of a water molecule (##\Delta b## goes roughly as the inverse cube root of the mass), ##n_Q## gets even larger.

Thanks for the review. This is the only paper I've seen where they try to do it. Indeed it could depend a lot on the specifics of biology and even the psychology of how much energy the brain decides to use and many other factors. But if even one of those factors depends on amplifying a quantum event then the result will be random. It seems like getting rid of all the quantum randomness would require every system in the path to have some kind of dynamics that self-corrects to keep the outcomes in discrete peaks.
 
  • #187
Demystifier said:
Well, without an explicit example of a superdeterministic theory, it's hard to tell. But it seems to me that, in principle, the fine tuning of the initial conditions can be a pure coincidence, without an actual common cause.
In 't Hooft's theory there basically is one, but I see what you mean now. It would be an incredible coincidence of course, but it is not logically excluded as such.
 
  • Like
Likes Demystifier
  • #188
DarMM said:
Another system constitutes the POVM selected for the system under study. Only a POVM provides a well-defined statistical model; the full algebra of projectors does not.
DarMM said:
In quantum theory viewed as a probability theory, due to the non-Boolean structure, we do not. Some device must be present to define the outcome space. [...]
So quantum theory provides a stochastic description of a system-external system interaction when supplied with a choice of external system, but it is intrinsically incapable of modelling that choice of external system.
DarMM said:
Robert Spekkens and others have investigated a classical theory with fundamental disturbance, where the idealized notion of no back reaction is abandoned: so-called epistemically restricted classical theories.
We do get non-commutativity of measurements, entanglement, discord, steering, super-dense coding and many other features. We do not, however, get contextuality and non-classical correlations, because ultimately the underlying event algebra of the system is Boolean.
DarMM said:
So far this seems very similar to what I was saying. You need a POVM choice (in addition to the state) to have a well defined probability model, unlike the classical case where no such choice is needed.
A. Neumaier said:
Since on the classical side you always referred to the Boolean algebra, I had thought without checking that you assumed in the quantum case a Boolean subalgebra as well to get a classical subsetting.
Note that restricting the probabilistic framework of noncommutative probability (i.e., only state + POVM together define probabilities) to the special case of a commutative algebra does not produce noncontextual Kolmogorov probability but a contextual classical probability calculus. Even in the classical case there are POVMs that do not correspond to the Boolean probability. Thus in full generality, commutative probability must also be considered contextual, if considered on the same footing as the noncommutative version.
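A small numerical sketch of that remark (the numbers are illustrative, not from the post): even over a commuting (diagonal) algebra, an unsharp POVM assigns probabilities different from those of the sharp Boolean events, so the state-plus-POVM rule is contextual already in the commutative case.

Python:
import numpy as np

rho = np.diag([0.7, 0.3])  # a classical probability distribution as a diagonal state

# Boolean context: the sharp indicators of the two elementary events
sharp = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# A commuting but non-projective POVM: a noisy readout of the same two events
noisy = [np.diag([0.9, 0.2]), np.diag([0.1, 0.8])]  # effects still sum to the identity

def probs(rho, effects):
    return [float(np.trace(rho @ E)) for E in effects]

print(probs(rho, sharp))  # [0.7, 0.3]: the Kolmogorov probabilities of the events
print(probs(rho, noisy))  # [0.69, 0.31]: same state, different commutative context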
 
Last edited:
  • Like
Likes vanhees71 and bhobba
  • #189
Demystifier said:
Well, without an explicit example of a superdeterministic theory, it's hard to tell. But it seems to me that, in principle, the fine tuning of the initial conditions can be a pure coincidence, without an actual common cause.
Could the cause of the big bang be considered a common cause?
 
  • #190
Elias1960 said:
To justify proposals to reject such old common sense notions like the existence of space and time, one needs a serious justification. Extraordinary claims require extraordinary evidence. This extraordinary evidence would have to contain something close to impossibility theorems for theories with classical space and time. At least there should be quite obvious serious problems for any theory with classical space and time, with no plausible chance to solve them. This is certainly not the actual situation. Such theories exist, have been published, they follow a straightforward path known already by Lorentz. Simply not liking them because a curved spacetime is sort of more fascinating is not enough, but nothing better has been proposed yet against them.
With a preferred frame of reference, one would expect objects flying on a jetliner to behave as if they were in motion. But what we observe is not this: e.g., if you drop a ball onboard an airplane flying at a constant speed of 550 mph, it falls straight down as if the plane were completely stationary - and by all accounts it is stationary... wrt the ground. This is what relativity says. All frames are completely equal. This is also the famous elevator thought experiment Einstein had prior to 1905.

How would you explain this experiment if GR was wrong? Are the ball and feather 'moving'?

 
  • #191
A. Neumaier said:
Note that restricting the probabilistic framework of noncommutative probability (i.e., only state + POVM together define probabilities) to the special case of a commutative algebra does not produce noncontextual Kolmogorov probability but a contextual classical probability calculus. Even in the classical case there are POVMs that do not correspond to the Boolean probability. Thus in full generality, commutative probability must also be considered contextual, if considered on the same footing as the noncommutative version.
Certainly, the resulting classical probability model is contextual. Streater makes similar remarks in Chapter 6 of his book "Lost Causes in and beyond Physics".
 
  • Like
Likes bhobba
  • #192
PeterDonis said:
Neither of these alternatives seems to be true of our actual universe: we do seem to be able to control what we do in meaningful ways. My point was simply that that fact alone implies, if not fundamental determinism, at least determinism for practical purposes in the domain of our actions.

Seems to me there are regions of the world (all physics) where choice “propagates” and regions where it’s more like the first two.

I’m still not clear how one knows the difference between complexity that seems probabilistic and something truly indeterminate.

By “the world” I mean the terrain of all things that could be said to be will-full or chosen, so that’s sort of everything...

Does a rock have will? Does a complex molecule, RNA, a virus, a planet, an ant, a country? “Will” is distinguishable from inertia or other physics how?
 
  • #193
Sorry to mention a movie on a serious thread but we’ve all seen “Ex Machina” right? Nice little movie about “Free Will”

It’s like... the pinnacle of Human poetry (esp if you favor minimalist style) while also the best ever oxymoron.
 
  • #194
A. Neumaier said:
But what makes a measurement device (considered as a quantum system) so special that one can read off from it unique measurement results - in spite of it being represented by a superposition in standard quantum measurement theory? Usual quantum systems do not behave this way, so there must be something special about measurement devices...
Measurement devices are nothing special. E.g., for a position measurement you merely need a photoplate, where the photons/particles leave a spot through some interaction inducing a chemical reaction, or a photomultiplier working via the photoelectric effect for photons, or something similar for particles. It's all described by the interactions of the measured object with the device's atoms/molecules. It's an empirical fact that the outcomes are as expected by the probabilistic predictions of QT, i.e., that if you have prepared a single-photon state, at most one spot is blackened when the photon hits the photo plate, and, repeating the experiment with equally prepared single photons, the frequency of occurrence at this spot is well estimated according to the usual statistical laws, using the probabilities (probability distributions) predicted by QT. As far as I can see there's nothing deeper to it. That's the best "explanation" (or rather "description") we have for the behavior of a single photon: we cannot predict where it hits the photo plate; we only know that with a certain probability it hits the photo plate at each spot, and the location of the spots is resolved only with some macroscopic accuracy, allowing for statistical analysis. In this sense there's no logical argument forbidding the conclusion that nature is, on a fundamental level, random, as very specifically described by QT.

I don't understand why it is claimed that this minimal statistical interpretation doesn't provide "enough ontology" and why the PBR theorem should forbid this interpretation of the quantum state (as discussed in another thread in the foundations forum). Of course, the quantum state is not purely epistemic, but for the single system it formally describes a class of preparation procedures. The expected probabilistic outcome of the measurement is described using Born's rule, based on the analysis of the interaction of the measurement device with the measured object, and the measurement device is constructed such as to measure more or less accurately some observable (be it in the von Neumann PVM or the more general POVM sense).

The "ontology" the simply is that the "primitive phenomena" are random and can only be described by corresponding probabilistic rules, which are defined by the quantum formalism. There are state preparations, where some observable is determined, i.e., leads with probability 1 to a certain outcome and then any other observable not compatible with the prepared state is indetermined leading to certain possible outcomes with some probability <1.
 
  • #195
DarMM said:
That's correct.
Thinking more on this: if the "domino" analogy is accurate, is SD not simply the extrapolation of determinism to its logical conclusion then?
 
  • #196
EPR said:
With a preferred frame of reference, one would expect objects flying on a jetliner to behave as if they were in motion.
No, one would have to look at the equations of the particular theory in question instead of expecting whatever. If these equations are the Einstein equations in harmonic coordinates, one would expect the same as in GR. If the description of the interpretation claims that the preferred frame is hidden, then you don't even have to look at the equations, because this verbal description says exactly that one has to expect no difference to GR.
 
  • #197
Lynch101 said:
Could the cause of the big bang be considered a common cause?
It could.
 
  • #198
Demystifier said:
It could.
Not really. Ok, some details:

It could be, but only as a single common cause for everything. So, an ideally homogeneous initial temperature could be explained.

But the minor differences in the distribution cannot. They have to be explained by common causes after the BB.

This is essential, because this is what forces us to accept inflation (in the technical sense of ##a''(\tau)>0##, without speculations about the mechanism which would give such a thing). Without inflation, one can compute the maximal size of inhomogeneities with a common cause after the BB. And what we observe is inhomogeneous at much greater distances.
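For reference, a compressed version of that computation (standard cosmology, added as an aside): the maximal comoving distance over which events after the BB can share a common cause is the comoving particle horizon ##\chi_{\rm ph} = \int_0^{t} \frac{dt'}{a(t')} = \int_0^{a} \frac{da'}{a'^2 H(a')}##. With only decelerating expansion this integral is dominated by the latest times, and at recombination it subtends only about a degree on the CMB sky, while the observed inhomogeneities are correlated over much larger angles; an early accelerating phase shrinks the comoving Hubble radius ##(aH)^{-1}## and lets ##\chi_{\rm ph}## become large enough to cover the whole observable sky.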
 
  • #199
Elias1960 said:
No, one would have to look at the equations of the particular theory in question instead of expecting whatever. If these equations are the Einstein equations in harmonic coordinates, one would expect the same as in GR. If the description of the interpretation claims that the preferred frame is hidden, then you don't even have to look at the equations, because this verbal description says exactly that one has to expect no difference to GR.

That hidden preferred frame sounds like a hidden explanation. In the framework of relativity all laws of physics behave the same way in whatever frame is chosen. This is what the example highlights and what is virtually impossible to explain by other means. If you have a source that explains this ubiquitous preferred hidden frame of reference and how it works, please share.
Why is it that virtually no one has been able to find this frame?
 
  • #200
vanhees71 said:
Measurement devices are nothing special. E.g., for a position measurement you merely need a photoplate,
[...]
I don't understand why it is claimed that this minimal statistical interpretation doesn't provide "enough ontology" [...] The expected probabilistic outcome of the measurement is described using Born's rule, based on the analysis of the interaction of the measurement device with the measured object, and the measurement device is constructed such as to measure more or less accurately some observable
The lack of ontology consists in using the above only for the measurement on the system but not for the meter reading. For the latter you are content with the phenomenological description used by the experimentalists.
As long as one is doing this there are no problems, since the ontology declares as real only the experimental setting (preparation and measurement). This leaves state and outcome somewhat ill-defined - neither the equivalence relation nor the definition of measurement results is specified to an extent that one could make a simulation of both on the theoretical level.
 