QM Uncertainty: Uncovering Hidden Variables

DMuitW
To date, no underlying hidden variables that determine what is called quantum uncertainty have been demonstrated.

Consider :

We have a setup that includes a Geiger counter and a computer.
The computer has a button which, when pressed, starts a clock. The Geiger counter, when it detects a (random) electron decay, sends a signal to the computer to stop the clock. Let's say that if the clock stops at an even 1/1000th-second count, it generates a 0, and if it stops at an odd count, it generates a 1.

This experiment is well known and, if repeated long enough, will generate almost as many 0's as 1's in the long run (binomial distribution). There is room for deviations, but usually within a narrow range.

My question now is: WHY does this experiment, which should depend on truly RANDOM quantum-mechanical occurrences (QM as total chaos), generate a quite clear binomial distribution and, in the end, in theory almost as many 0's as 1's instead of totally random output?
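The setup described above can be simulated in a few lines. This is only a sketch: the decay rate (7.3 counts per second) is an arbitrary made-up value standing in for a real source, and exponential waiting times are the standard model for radioactive decay gaps.

```python
import random

def parity_bit(rate_hz=7.3, resolution=1e-3):
    """One run of the setup above: wait for the next (random) decay,
    read the stopped clock in 1/1000-second ticks, and output 0 for an
    even tick count, 1 for an odd one.  The rate is an arbitrary
    illustrative value, not that of any real source."""
    wait = random.expovariate(rate_hz)   # inter-decay times are exponential
    ticks = int(wait / resolution)       # clock reading in milliseconds
    return ticks % 2

random.seed(0)
n = 100_000
ones = sum(parity_bit() for _ in range(n))
print(ones / n)   # very close to 0.5
```

The point of the sketch is that the parity of the millisecond reading is almost exactly 50/50 even though each individual decay time is completely unpredictable.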
 
I think that to best appreciate this phenomenon, you need to do a brute-force analysis of a small version of it.

Each decay has an equal probability of producing a 0 or a 1. Therefore, if you consider six consecutive decays, you get a binary number ranging from 000000 to 111111, with an equal probability for each number within that range (000000, 000001, 000010, etc.).

Write down all 64 six-bit binary numbers, and count how many of them have one 1, two 1's, three 1's, etc. How many 1's are most likely?

Note that there's nothing particularly quantum-mechanical about this phenomenon at all! We could just as well toss an evenly-balanced coin to decide between 0 and 1.
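That brute-force count takes only a few lines. A Python sketch of exactly the 64-number tally suggested above:

```python
from math import comb

# Tally how many of the 64 six-bit numbers contain 0, 1, ..., 6 ones.
counts = [0] * 7
for x in range(64):
    counts[bin(x).count("1")] += 1

print(counts)   # [1, 6, 15, 20, 15, 6, 1] -- three 1's is the most likely

# The tally is just the binomial coefficients C(6, k).
assert counts == [comb(6, k) for k in range(7)]
```

Extreme counts (all 0's or all 1's) each occur in only 1 of 64 sequences, which is why long runs cluster around an even split.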
 
I am not sure that this has anything to do with hidden variables -- the statistics quoted are just normal statistics. Hidden-variable theories say that something is already predetermined even if YOU do not know the outcome.
What quantum mechanics says is that only when you make your choice is the outcome THEN determined -- until then there is NO determined outcome.
I do not know if the Bell test is totally accepted, but the contention is clear -- the past does not affect the future; it says that causality is at risk, whereas the normal interpretation is 'you may not know', but nevertheless effect follows cause.
The result is that effects do not follow causes but are indeterminate, with some probability. If you think about this, you will see that this is the only possible answer to free will -- if not, your choices are predetermined, even if you do not know it.
How about this scenario -- your ideas are random (totally), but they get fed into a machine which has learned from past experience and is capable of weighing the odds of following the new idea vs. the old idea, with all sorts of weighted emotions. I have a feeling that this would quite closely mimic human response (BUT with a billion years of experience).
Ray (I have no real clue -- but I believe in measurements, especially when repeated by independent groups). There is no reason to think that normal human experience has educated us as to what to expect of physics -- that's a few hundred years vs. 14 billion.
Yours, Ray.
 
i would think that the result that is found (that there are very close to the same number of 0's as 1's if the experiment is run long enough) is just a result of the Lorentz invariance of Quantum Electro-Dynamics!

In other words, all other variables being the same (detector type, sample type, etc.), it should not make any difference WHEN the experiment is conducted; you would have to expect nearly the same result, indeed perhaps the same number of 1's and 0's once the sample has ALL decayed. But I would not know how to prove this; this is just my guess.

Again, my guess is that if there were not relatively the same number of 0's and 1's found in conducting such an experiment, then the Lorentz invariance of Quantum Electro-Dynamics would somehow fail. But this is just my "Hunch!"
what does anybody think of this?
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
http://www.altelco.net/~lovekgc/kl.htm
 
of course, there's more to beta decay than that, is there not?
oh i know so little physics! the spontaneous symmetry breaking involved in beta decay.
how to use this, QED, and quantum statistics to possibly prove that one would get the same number of 1's and 0's in this type of experiment IS WAY BEYOND ME!
peace and love,
and,
love and peace,
kirk
 
maybe what i have said so far has nothing to do with the situation at all, except in the case where one has many radioactive atoms of an isotope, as one generally has.
i wonder if the experiment has ever been done for a really small sample size, say of several thousand atoms? or a hundred? is this possible, and does one then still get the same result of equal numbers of 1's and 0's?
peace and love,
and,
love and peace,
kirk
http://www.cosmicfingerprints.com/audio/newevidence.htm
 
A totally random output with only two possibilities will give you equal amounts of each possibility if it is totally random and continued long enough.

Each event does not have the same probability set, but the probability supersets for each outcome are the same over the long term.

The possibility of hidden variables lies in the exact temporal structure of each superset. They will not, in this case, affect the outcome.

juju
 
DMuitW said:
My question now is: WHY does this experiment, which should depend on truly RANDOM quantum-mechanical occurrences (QM as total chaos), generate a quite clear binomial distribution and, in the end, in theory almost as many 0's as 1's instead of totally random output?

Not sure I understand the paradox you are trying to identify. You take randomly occurring phenomena and map them to numbers, giving a distribution consistent with random output?

Where is the difference between the expected values and the observed values?
 
let's see what i remember about nuclear decay!
imagine the following experiment: we have a radioactive isotope with a half-life of 4 days. if we have a lot of its atoms, very close to 1/2 of them will have decayed after exactly four days.

but if we have only one, we cannot say that! from quantum physics, a single atom could decay in a tenth of a second, or four years, or some other time from the beginning of the experiment.

IT IS FROM THIS FACT OF QUANTUM MECHANICS, I THINK (that even though the half-life of a radio-isotope can be known, the time of an individual atom's decay is NOT), that an equal number of 0's and 1's will appear if the experiment is performed for a long enough time in the case where one has many atoms of the radioactive isotope.

plus i think there may be some effect from the experimental apparatus placing the data in histogram bins of 1/1000 of a second as well.

all the stuff i wrote before about QED, etc. i do not feel is necessary to prove the results found in experiments, just what is stated here in this reply. i think i could almost have shown this at one time, using what i knew about the half-life formulae and considering an experiment with a large number of radioactive atoms of an isotope, run for a time period much longer than the isotope's half-life.
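The half-life point in this post can be checked numerically. This is only a sketch, using the 4-day half-life from the example above and the standard exponential decay law (nothing beyond that is assumed):

```python
import math
import random

HALF_LIFE = 4.0                      # days, as in the example above
LAMBDA = math.log(2) / HALF_LIFE     # decay constant

random.seed(2)

# A single atom: its decay time is exponentially distributed, so it may
# decay almost immediately or last far longer than the half-life.
single_atom = random.expovariate(LAMBDA)
print(single_atom)                   # unpredictable for any one atom

# Many atoms: very close to half have decayed after one half-life.
n = 100_000
decayed = sum(random.expovariate(LAMBDA) < HALF_LIFE for _ in range(n))
print(decayed / n)                   # close to 0.5
```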

well! i am probably just proving to everybody how little i know! i thought maybe something i have written on this topic might help in a tiny way; sorry if it has not.
peace and love,
and,
love and peace,
kirk
http://www.altelco.net/~lovekgc/PrincessLittleHoney.htm
 
  • #10
rayjohn01 said:
What quantum mechanics says is that only when you make your choice is the outcome THEN determined -- until then there is NO determined outcome.
I do not know if the Bell test is totally accepted, but the contention is clear -- the past does not affect the future; it says that causality is at risk, whereas the normal interpretation is 'you may not know', but nevertheless effect follows cause
Incorrect. This is a common fallacy in interpreting the results of QM. What (the results of) QM “says” is that the outcomes of experiments are epistemically indeterminable. Only certain strange interpretations of QM (notably the Copenhagen interpretation) equate this with ontic indeterminism. It is important to understand that epistemic indeterminability does not necessarily imply ontic indeterminism.

rayjohn01 said:
If you think about this you will see that this is only the possible answer to free will -- if not your choices are predetermined -- even if you do not know
(1) As explained above, QM is not necessarily indeterministic, hence this is incorrect.
(2) Even if QM were ontically indeterministic, this would be a source of randomness, but how could this be a source of “naïve” free will? Think about it (and see below).

rayjohn01 said:
How about this scenario --- your ideas are random ( totally ) but they get fed into a machine which has learned from past experience and is capable of weighing odds of following the new idea cf the old idea -- that is with all sorts of weighted emotions -- I have a feeling that this would quite closely mimic Human response.
You are incorrect in thinking this somehow generates “naïve free will”.
Think about it. Why do the “ideas” in this model need to be random? Feed the same machine with a selection of “non-random” ideas, and the same machine (which has learned from past experience and is capable of weighing odds etc) can still decide which “ideas” to follow-up. The introduction of a random element into the generation of the “ideas” in the first place adds nothing to the “free will” of the machine (whatever "free will" might be).

MF
:smile:
 
  • #11
Thanks for the replies; I'll try to state my question a little less vaguely.

Let's say we use a quantum effect, i.e. the decay of an electron, which according to the uncertainty principle should be totally random. We map this effect to a number generator, so that with this random effect we can generate 0's and 1's. WHY is it that in the long run the chances will be distributed 0.5 for 0 and 0.5 for 1? Something that occurs totally at random produces an output that is still random (the actual experiment fluctuates around the theoretical distribution), yet with far less spread than total chaos would suggest?
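The shrinking spread asked about here is exactly the law of large numbers: each bit stays maximally random, but the count of 1's fluctuates only on the order of sqrt(n)/2, so the observed fraction drifts toward 0.5 like 1/sqrt(n). A sketch (plain coin flips stand in for the quantum source):

```python
import random

random.seed(1)

# The fraction of 1's converges to 0.5 roughly like 1/sqrt(n),
# even though each individual bit is completely unpredictable.
for n in (100, 10_000, 1_000_000):
    ones = sum(random.getrandbits(1) for _ in range(n))
    print(n, abs(ones / n - 0.5))
```

Larger n gives a smaller deviation of the fraction, while the absolute excess of 0's or 1's keeps growing; no per-bit determinism is needed for that.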

Also, in answer to rayjohn01's response: if it is as you say, and everything is undetermined (because quantum superposition says that the outcome only comes into existence when the experiment is conducted), WHY and HOW is it that something so utterly uncertain generates everything we know in such a consistent way (from matter to ...)?

Applying this to the experiment given: WHAT determines, if we use the superposition/quantum-uncertainty principle, that what is produced will always follow a consistent distribution??

Hope I have cleared up some things about my question,

Thanks!
 
  • #12
DMuitW said:
Let's say we use a quantum effect, i.e. the decay of an electron, which according to the uncertainty principle should be totally random. We map this effect to a number generator, so that with this random effect we can generate 0's and 1's. WHY is it that in the long run the chances will be distributed 0.5 for 0 and 0.5 for 1? Something that occurs totally at random produces an output that is still random (the actual experiment fluctuates around the theoretical distribution), yet with far less spread than total chaos would suggest?

It is a little bit difficult to understand what you want to say/know. First of all, I think you need to say whether you understand classical probabilities (the frequentist view) and the experimental results they give (such as the binomial law for independent trials of a random variable).
If you understand this topic, then you can view, formally, the statistical results of a quantum measurement experiment (the statistics of one observable, e.g. the energy of the electron, P(E=e)) as the statistical results of a classical experiment (P = P(E=e), 1-P), and therefore recover binomial laws (of n-trial sequences) or simply the law P (as n → +∞).
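The recovery of the binomial law described here can be checked empirically. This sketch compares the observed histogram of 1's in 10-bit sequences with the exact binomial probabilities C(10, k)/2^10 (coin flips again stand in for the quantum measurement):

```python
import random
from math import comb

random.seed(3)

# Histogram of the number of 1's in 10-bit random sequences,
# compared with the binomial law C(10, k) / 2**10.
n_bits, trials = 10, 50_000
hist = [0] * (n_bits + 1)
for _ in range(trials):
    hist[sum(random.getrandbits(1) for _ in range(n_bits))] += 1

for k in range(n_bits + 1):
    print(k, hist[k] / trials, comb(n_bits, k) / 2 ** n_bits)
```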

Seratend.
 
  • #13
moving finger said:
Incorrect. This is a common fallacy in interpreting the results of QM. What (the results of) QM “says” is that the outcomes of experiments are epistemically indeterminable. Only certain strange interpretations of QM (notably the Copenhagen interpretation) equate this with ontic indeterminism. It is important to understand that epistemic indeterminability does not necessarily imply ontic indeterminism.

Well, you can say this... a lot of philosopher-types do...

But the experiments speak strongly against this view. As a general rule, a single counter-example is sufficient to disprove any theory. The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.

There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.
 
  • #14
DrChinese said:
Well, you can say this... a lot of philosopher-types do...

But the experiments speak strongly against this view. As a general rule, a single counter-example is sufficient to disprove any theory. The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.

There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.


Oh, the irony. If "a single counter-example is sufficient to disprove any theory", then wouldn't the example of Bohmian mechanics -- which you are obviously aware of -- be a counter-example to the "theory" that "experiments speak strongly against" the failure of determinism?

Oh, right, Bohmian mechanics isn't "generally accepted at this time" so I guess we can just ignore this counterexample. Everyone else is doing it...

I think it would be much more accurate to simply state the truth: for some bizarre philosophical or historical reasons, the founding fathers of quantum mechanics loved and latched onto the idea that determinism failed. And many, many people have followed them. But there is an explicit counter-example to the claim that this was necessitated by the evidence. This proves one thing and one thing only -- whatever reasons people had for believing in the failure of determinism were not based on conclusive physical evidence, but something else.

On this point, I would highly recommend the book "Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony" by (the late) Jim Cushing.

ttn
 
  • #15
ttn said:
Oh, the irony. If "a single counter-example is sufficient to disprove any theory", then wouldn't the example of Bohmian mechanics -- which you are obviously aware of -- be a counter-example to the "theory" that "experiments speak strongly against" the failure of determinism?

Oh, right, Bohmian mechanics isn't "generally accepted at this time" so I guess we can just ignore this counterexample. Everyone else is doing it...

Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality. Just as Bell discovered that local reality and QM are incompatible in some respects, perhaps in the future someone will figure out how to distinguish between BM and CI.

But I appreciate your point. *Perhaps* if things had been discovered in a different order, we would consider CI to be fringe and BM to be mainstream.
 
  • #16
DrChinese said:
Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality.

That's not correct. The price paid for determinism is not locality, but... nothing. Orthodox Copenhagen QM is non-local too, in precisely the same way that Bohmian mechanics is -- namely, it violates Bell's locality ("factorizability") condition.

No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that. But there is a choice about whether to have a theory that is deterministic and clear or a theory that is non-deterministic, fuzzy, subjective, "unprofessionally vague and ambiguous."

Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.
 
  • #17
DMuitW, omg, have i been out of the mainstream for so long that i have missed the "news" about lepton, i.e. electron, decay?
what is the half-life determined for the "buggers"?
i will have to go to work now quantizing my KingFranklin theory, which i thought could remain a CLASSICAL theory.
http://www.altelco.net/~lovekgc/KingFranklin.htm
any help from you all would be appreciated in answering these questions or this endeavour! THANKS! (:-O) !
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
 
  • #18
Is it not true, Dr. Chinese, that non-local theories run into causality problems?

Quantum Electro-Dynamics, for example, at least in the domain where it applies, is about the BEST, MOST ACCURATELY verified theory there is, at least for a number of experimental parameters, implying strict Lorentz invariance and thus locality of its type.

Although some esoteric experiments have recently been performed, such as "freezing" photons, "trapping" them, slowing them down, and various "tunneling" experiments, I know of no experiment performed yet where it has been shown that actual information can be transmitted at a velocity faster than the speed of light in a vacuum.

please inform me if I am incorrect!

love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
http://www.altelco.net/~lovekgc/brainwash.htm
 
  • #19
ttn said:
Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.

Some points can be made, but in essence, I agree with you.
Copenhagen, with explicit collapse, is just as ugly as Bohm, and moreover must introduce a paradigm shift concerning "reality". Both commit, in my opinion, the same sin: while very powerful symmetries led to the right dynamics of the wavefunction (which is shared by both theories), they introduce a blunt violation of those symmetries in another part: in Copenhagen, it is the "collapse", and in Bohmian mechanics, it is the guiding equation.

MWI-like views trade something else: they give up "intuitive ontology" for "respect of symmetry" (of which locality is a part). That's why I like them: they fully respect the same symmetries (and locality) as those that led us to the theory in the first place (Lorentz invariance, gauge invariance...).

So Bohmians and Copenhagenists must solve the following puzzle: how come the symmetries which led us to the right theory for the wave function are swept under the carpet in the guiding equation/collapse?
Copenhagenists moreover introduce a paradigm shift from determinism to complementarity.

MWI-ers have to solve the issue: why is the ontology of the world so very different from what we intuitively observe? Personally, I think this is a difficulty of lower order, because intuition is something psychologically rooted in humans and does not need to be related to any ontology.

And working scientists have to solve the issue: what is the simplest formalism that allows me to compare my experimental results to my calculations? And that's where Copenhagen usually wins by several lengths. It is, in fact, the most important point.

cheers,
Patrick.
 
  • #20
ttn said:
No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that.

This, however, is not true, as MWI shows you. So don't fall in the same trap as CI proponents: do not deny a possibility of which there exists an example :-)

Some CI proponents deny that a deterministic model can make the same predictions as QM/CI, while Bohmian mechanics does exactly that. In the same way, MWI fully respects Lorentz invariance (= "locality"), while you claim that this cannot be done.

Apparently, what doesn't go together is the following set:
{Lorentz invariance, deterministic ontology, experimentally confirmed QM EPR predictions}

QM/CI blows the first two ;
Bohm blows the first one ;
MWI blows the second one ;
Original Thinkers blow the third one.

cheers,
Patrick.
 
  • #21
DrChinese said:
Well, you can say this... a lot of philosopher-types do...
you and I can say anything we like, this is not the point. The point is what is true and what is untrue. (BTW, I am a scientist before I am a philosopher).

DrChinese said:
But the experiments speak strongly against this view.
No, they do not, and this is your error. Show me an experiment which proves the world is ontically indeterministic. There is no such experiment.

DrChinese said:
The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.
This is a common fallacy that some myopic scientists continue to believe is true (and what is even worse, that teachers continue to teach as being true). The EPR/Bell tests you refer to show that the quantum world is non-local. They do NOT show that the quantum world is indeterministic. I challenge any rational open-minded scientist to show that the results of these tests prove that the quantum world is indeterministic.

DrChinese said:
There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.
to be "generally accepted" is not a prerequisite for truth, only for popularity. Any free-thinking scientist should be interested in truth, not in being "generally accepted".

"Niels Bohr brainwashed a whole generation of physicists into believing that the problem had been solved."
-- Murray Gell-Mann

MF
:smile:
 
  • #22
DrChinese said:
Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality. Just as Bell discovered that local reality and QM are incompatible in some respects, perhaps in the future someone will figure out how to distinguish between BM and CI.
The quantum world IS non-local, no matter what "interpretation" you favour! You cannot get rid of that fact.

DrChinese said:
But I appreciate your point. *Perhaps* if things had been discovered in a different order, we would consider CI to be fringe and BM to be mainstream.
it matters not what is "fringe" nor what is "mainstream".

What matters is only the truth. And a good teacher of science should not be promoting only the mainstream, they should be promoting the truth.

The simple truth is that the world is epistemically indeterminable, but nobody has shown that it is also ontically indeterministic.

MF

:smile:
 
  • #23
ttn said:
No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that. But there is a choice about whether to have a theory that is deterministic and clear or a theory that is non-deterministic, fuzzy, subjective, "unprofessionally vague and ambiguous."
I agree 100%!

ttn said:
Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.
There is the (poor) reason of wanting to be "mainstream" or "popular" (which should not matter to a free-thinking, objective scientist)

MF
:smile:
 
  • #24
moving finger said:
The quantum world IS non-local, no matter what "interpretation" you favour! You cannot get rid of that fact.

This is not true, but discussions about this usually show that people understand different things by "local".
It is only in a deterministic ontology that "local" has an unambiguous meaning, namely that there is no causal influence beyond the light cone.
In a probabilistic setting, things are more complicated. There is "information locality" and "Bell locality". Information locality somehow needs "free will" and "observation" in order to be able to set up a communication channel between two points. Bell locality is a definition of a requirement on correlations. I have shown here that a probabilistic theory that obeys Bell locality is always equivalent to an underlying deterministic theory that obeys information locality, and vice versa, so we have:

Bell locality <=> determinism AND information locality

Bell locality is not a requirement by anybody per se; it is not a requirement of relativity. It is just a written-down definition. Lorentz invariance + "free will" needs information locality in order to avoid a logical paradox.

"free will" just stands for the proposition that an experimenter can make choices in doing certain experiments or not, which are somehow not part of the physics of the set up (eg. Alice has the free choice of setting up her analyser according to an angle of her choice, and this is not somehow forced upon her by the set up). Without this "free will", most of experimental science doesn't have any meaning because all observations are the result of conspiracies.

QM predictions violate Bell locality and respect information locality.

So the only thing that we can conclude from this, is that QM cannot respect at the same time Lorentz invariance, underlying determinism and "free will".

Taking the third part for granted, QM cannot at the same time respect Lorentz invariance and underlying determinism. That's all EPR says.
Now, if you stick to underlying determinism, then Lorentz invariance has to go (that's what Bohm does); if you stick to Lorentz invariance, determinism has to go (that's what MWI does, and it is also what you obtain when you consider QM just as a generator of probability functions for observations, without underlying ontology: "shut up and calculate"); you can also give up on both (that's what Copenhagen does).
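The incompatibility described here can be made quantitative with the CHSH form of Bell's inequality. This sketch uses the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard CHSH angle choices; neither appears explicitly in this thread, so take them as assumed background:

```python
import math

def E(a, b):
    """Singlet-state correlation predicted by QM for analyser angles a, b."""
    return -math.cos(a - b)

# Standard CHSH angle settings (radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)   # 2*sqrt(2) ~ 2.828, above the bound of 2 that Bell locality imposes
```

Any Bell-local theory keeps S at or below 2; the QM prediction of 2√2 is what the EPR/Bell experiments test.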

cheers,
patrick.
 
  • #25
' "free will" just stands for the proposition that an experimenter can make choices in doing certain experiments or not, which are somehow not part of the physics of the set up (eg. Alice has the free choice of setting up her analyser according to an angle of her choice, and this is not somehow forced upon her by the set up). Without this "free will", most of experimental science doesn't have any meaning because all observations are the result of conspiracies '

i would rather say that most of experimental science doesn't have a DEFINITE last word, because of man's STATISTICAL interpretation, AND statistics is axiomatically, FUNDAMENTALLY FLAWED, to say nothing of always leaving some experimental and statistical error!

I for one am not holding my breath waiting for the finding of "hidden variables" in some deterministic quantum field theory, or ever expecting that one could be found by MAN, or that it could be useful as an aid for calculations! On the other hand, it is my belief that the universe IS deterministic but man has been given free will; but this last statement, "shudder", will probably get me another warning and probably belongs elsewhere in Physics Forums.

With this I give up on arguing with some of you guys; as far as some are concerned, it is as though some still believed in a flat Earth. NOT everyone will agree to that, RIGHT?
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
 
  • #26
Kirk Gregory Czuhai said:
Is it not true, Dr. Chinese, that non-local theories run into causality problems?

Quantum Electro-Dynamics, for example, at least in the domain where it applies, is about the BEST, MOST ACCURATELY verified theory there is, at least for a number of experimental parameters, implying strict Lorentz invariance and thus locality of its type.

Although some esoteric experiments have recently been performed, such as "freezing" photons, "trapping" them, slowing them down, and various "tunneling" experiments, I know of no experiment performed yet where it has been shown that actual information can be transmitted at a velocity faster than the speed of light in a vacuum.

please inform me if I am incorrect!

Locality can be considered preserved in some sense in QM/CI (Copenhagen Interpretation) as you describe. ttn argued exactly the opposite in his post above, that even CI is non-local. So we can see that there is some ambiguity to the meaning of non-local.

For causality to be preserved, I would presume that the future cannot influence the past. It seems to me that the Aspect experiments demonstrate that the future does influence the past; and by my definition, causality fails.

Although you and ttn disagree on what locality is, you both agree that causality stands. Hmmm...
 
  • #27
DrChinese said:
ttn argued exactly the opposite in his post above, that even CI is non-local.

As long as you assign any ontology to the wave function, it is clear that CI is non-local, no? I mean, you collapse the wavefunction instantaneously and everywhere!
Of course, if you consider the wavefunction just as a mathematical tool to calculate probabilities (but that's not Copenhagen, is it?), the only things you can talk about are probabilities for outcomes of experiments. These probabilities satisfy information locality, but they do not satisfy Bell locality. The only way to talk about *causality* in a probabilistic environment is to talk about information flow, and as we know, QM respects this.
It is very difficult to give up on causality without compromising the entire scientific undertaking.

DrChinese said:
For causality to be preserved, I would presume that the future cannot influence the past. It seems to me that the Aspect experiments demonstrate that the future does influence the past; and by my definition, causality fails.

I would say that the only meaningful way for the future (or whatever) to influence the past is that you make a choice in the future, and you can find out that choice in the past. This, to me, is the meaning of causality (which, as I said, could be called "information causality" in a probabilistic setting). This leads to a paradox: informed of the choice you will make in the future, you can now, in your past, decide NOT to make that choice. So it is almost impossible to violate causality and not run into paradoxes; the only way out is to give up "free will": namely, not to allow you somehow to decide to make that other choice.
But if there is no "free will", then all our experiments, in the past and in the future, have been erroneously analysed. All "correlations" we ever found didn't need to be correlations, but could have been the result of conspiracies. So, in essence, science falls on its face if you deny some form of "free will".
And if you accept it, and you want to avoid paradoxes, then (information) causality is a logical must. So I wouldn't give up on it :-)

cheers,
Patrick.
 
  • #28
vanesch said:
This, however, is not true, as MWI shows you. So don't fall in the same trap as CI proponents: do not deny a possibility of which there exists an example :-)

Touché. But what I said originally is true. No local theory can account for the observed EPR/Bell correlations. Even granting that MWI is a "local theory", it simply isn't true that it accounts for the observed correlations. Rather, it denies that what we *took* to be the outcomes of those experiments are the *real* outcomes of the experiments. That is, MWI holds that our beliefs about what those experiments showed are *delusions*.

It's only in a very stretched, dubious sense that this counts as "accounting for the observed correlations." If someone publishes a paper showing that drug X cures disease Y, one way of "accounting for that data" is to say that the author simply faked the data. But that's not normally what we mean by explaining the observed facts. This is explaining them away, not explaining them.


Apparently, what doesn't go together is the following set:
{Lorentz invariance, deterministic ontology, experimentally confirmed QM EPR predictions}

QM/CI blows the first two;
Bohm blows the first one;
MWI blows the second one;
Original Thinkers blow the third one.

I don't follow you here. You're saying MWI rejects determinism? I thought the whole point of MWI was to get rid of the pesky collapse postulate, leaving only the deterministic evolution? Or maybe you're making the higher-level point that this doesn't really work and MWI needs to "smuggle in" some notion of probability.

But whatever you meant, I don't think it's the fundamental objection to MWI. MWI forces us to hold that we have been deluded about the outcomes of all the science experiments that have ever been done -- including the ones that led to QM in the first place. That is a very uncomfortable position for a theory to be in... totally unprecedented in the history of science (or, more broadly, rational thought).
 
  • #29
vanesch said:
I would say that the only meaningful way to say that the future (or whatever) influences the past, is that you make a choice in the future, and that you can find out that choice in the past. This, to me, is the meaning of causality (which, as I said, could be named "information causality" in a probabilistic setting). This leads to a paradox, because, informed by your choice you will make in the future, in your past, you can now decide NOT to make that choice. So it is almost impossible to violate causality and not run into paradoxes ; the only way out is to avoid "free will": namely not to allow you somehow to decide to make that other choice.

No, I wouldn't agree with that picture at all. That is but one possibility! Why should the future be any more modifiable than the past? In other words, even the future is in the past to some even further off future. The following scenario fits the facts just as well as any other (times are arbitrary and exaggerated):

a. I measure one of an entangled pair at an angle I choose at 3:30pm.
b. That causes it to have a definite spin orientation at the time of emission, 3:00pm. (However, note that there is still a random element present.)
c. That forces the other of the pair to have a definite spin orientation when it is created slightly later, at 3:01pm, so that total spin is conserved. Recall that in many experiments, the particle pair is not created exactly simultaneously (for example, in the Aspect experiments).
d. Bob measures that second particle at 3:45 and finds its spin orientation relative to the polarizer angle he chooses freely.

Regardless of whether you have a. before d. or not, the above description is perfectly valid and fits with all observables! There is no paradox in which a choice at one point in time interferes with a choice at another point in time. And yet, no information transfer occurs faster than c.

If you really believe in 4-space, this makes perfect sense.
 
  • #30
Adding to my post above after some reflection... What I am trying to say: we generally *think* we know what causality should be, how it should work, etc. But we don't quite know yet, do we?

After all, the laws of physics are built on symmetry, and here we are denying one of the most fundamental symmetries of all: time reversibility. Perhaps the time dimension is fully symmetric, in which case c (+ or -) could still be respected. If most matter were moving in a particular and similar direction due to initial conditions, that could explain the world we see today.

This is speculation, of course. The point is that we preserve causality at all costs, and I ask why. Maybe causality exists but abides by different rules.
 
Last edited:
  • #31
vanesch said:
the only way out is to avoid "free will": namely not to allow you somehow to decide to make that other choice.
what exactly do you mean by "free will"?
How can I make a choice "other than" the choice that I did make?
MF
:smile:
 
  • #32
DrChinese said:
a. I measure one of an entangled pair at an angle I choose at 3:30pm.
b. That causes it to have a definite spin orientation at the time of emission, 3:00pm. (However, note that there is still a random element present.)
[...]

This is indeed a possibility, but as I outlined somewhere, it simply means that experimental science has no meaning. Indeed, if what I'm going to measure in the future influences what is happening in the past, I cannot conclude much.

Imagine the following "experiment": I have 2 wires, and on a panel, there's a light bulb. I want to find out whether the light bulb is connected to the two wires, so I take a battery, and connect them to the light bulb: it lights up. I disconnect the battery: it goes out. I reconnect it: it goes on again.
Conclusion of my experiment: yes, these wires somehow are connected (or pilot) the lightbulb.

But maybe not at all ! Something else is making this lightbulb light up and go out, and that something also influences me, making me connect, or not, the wires to the battery exactly a few nanoseconds earlier each time.

So I cannot even determine from my experiment that the wires have anything to do with the lightbulb ! You have to leave aside this possibility if you are going to consider the experimental scientific method at all, no ?

cheers,
Patrick.
 
  • #33
ttn said:
I don't follow you here. You're saying MWI rejects determinism? I thought the whole point of MWI was to get rid of the pesky collapse postulate, leaving only the deterministic evolution? Or maybe you're making the higher-level point that this doesn't really work and MWI needs to "smuggle in" some notion of probability.

Absolutely. I think "pure MWI" doesn't work for the probabilistic aspect, and something extra is needed.

But whatever you meant, I don't think it's the fundamental objection to MWI. MWI forces us to hold that we have been deluded about the outcomes of all the science experiments that have ever been done -- including the ones that led to QM in the first place. That is a very uncomfortable position for a theory to be in... totally unprecedented in the history of science (or, more broadly, rational thought).

No, we are not completely deluded, only partially :biggrin:

As I've been advocating a few times already here, you can have a view where there is indeed a quantum world out there evolving deterministically, respecting all symmetries (unitary evolution), and an observer, evolving through all those "branches", making LOCALLY each time his choice for the next branch upon his local "observations" (= entanglement with other stuff, and "choice" to be in one of those branches).

This last part is indeed an extra postulate, outside of strict MWI, (as said above), is probabilistic, and is local (to the observer). It also explains fully his measurement history. Now, you can think of it what you want, but it IS a working explanation. There's no delusion in it, not any more than the wave function in Bohm. It is the same wave function.

In Bohm, there is a *physical token* which obeys non-local laws, in my version, there is a "mental token" which obeys local laws...

cheers,
Patrick.
 
  • #34
vanesch said:
This is indeed a possibility, but as I outlined somewhere, it simply means that experimental science has no meaning. Indeed, if what I'm going to measure in the future influences what is happening in the past, I cannot conclude much.
If the world is completely deterministic then it is just as correct to say the future influences the past as it is to say the past influences the future. In fact both past and future are fully determined.

It is possible that the universe operates this way.

Are you suggesting that experimental science would have no meaning in such a deterministic world?

I think not.

MF
:smile:
 
  • #35
moving finger said:
Are you suggesting that experimental science would have no meaning in such a deterministic world?

Yes, that's what I'm saying. In a fully deterministic world, indeed, experimental science doesn't make sense, because an essential ingredient in experimental science is the detection of a correlation of an effect with a free choice, which is assumed to have some "statistical independence" from the causal relationship under study.
Let us look at the example of a new drug, that has to be tested. The idea is that somehow the mapping in which certain patients receive the drug, and others receive a placebo, or an old drug, is determined "independently". If however, this mapping is deterministically fixed, nothing can stop you from thinking that the order in which the patients come in is ALSO deterministically fixed, so that the patients with a serious disease happen to get the placebo, and those with lighter problems, the new drug. So the wrong conclusion will be that the new drug works very well. There is nothing "improbable" about it, because, by definition, in a deterministic view, probabilities don't make sense.
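This conspiracy worry is easy to make concrete with a small simulation (a purely illustrative sketch; the patient numbers and recovery rates below are invented): the drug in this toy trial has no effect at all, yet a treatment assignment that is secretly correlated with disease severity makes it look like a wonder cure.

```python
import random

def run_trial(conspiratorial, n=10_000, seed=0):
    """Simulate a drug trial in which the drug has NO real effect:
    recovery depends only on disease severity."""
    rng = random.Random(seed)
    drug = [0, 0]      # [recovered, total] in the drug arm
    placebo = [0, 0]   # [recovered, total] in the placebo arm
    for _ in range(n):
        severe = rng.random() < 0.5
        if conspiratorial:
            # the "deterministic conspiracy": severe patients always get the placebo
            gets_drug = not severe
        else:
            # statistically independent assignment (the usual assumption)
            gets_drug = rng.random() < 0.5
        recovered = rng.random() < (0.3 if severe else 0.8)
        arm = drug if gets_drug else placebo
        arm[0] += recovered
        arm[1] += 1
    return drug[0] / drug[1], placebo[0] / placebo[1]

print(run_trial(conspiratorial=False))  # both arms recover at roughly the same rate
print(run_trial(conspiratorial=True))   # the inert drug now looks far better
```

With independent assignment, both arms mix severe and mild patients equally and the recovery rates agree; with the conspiratorial assignment, the inert drug "wins" by roughly 50 percentage points, which is exactly the kind of wrong conclusion described above.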

I see determinism as a structure in time, which resembles a structure in space, like, say, a pen. Let's say that the pen is lying along the z-axis, our "time" axis. And our "current time" is a slice at 2 cm from the back of the pen. Moving along the z-axis, we have a circular section of plastic and ink, which remains there for a while; suddenly the radius of the circular section gets narrower, then we have a small metal disk which grows and then diminishes, and then there's nothing anymore. We can go back and forward in z, and we will always have a "deterministic" section. But you cannot conclude anything from these observations. Any "law" you could find based upon those observations is completely arbitrary.
You need to be able to assume "statistical independence" of your choices, and the mechanism under study ; and this "statistical independence" is impossible in a strictly deterministic universe, because all possible conspiracies are a possibility.

Of course, you could have a deterministic universe, in which everything behaves AS IF you had statistical independence between your choices and the process under study. But that, by itself, would be an enormous conspiracy !
A bit like the mice were guiding human science in the Hitchhiker's guide to the Galaxy :-)

cheers,
Patrick.
 
  • #36
vanesch said:
This is indeed a possibility, but as I outlined somewhere, it simply means that experimental science has no meaning. Indeed, if what I'm going to measure in the future influences what is happening in the past, I cannot conclude much.

Imagine the following "experiment": I have 2 wires, and on a panel, there's a light bulb. I want to find out whether the light bulb is connected to the two wires, so I take a battery, and connect them to the light bulb: it lights up. I disconnect the battery: it goes out. I reconnect it: it goes on again.
Conclusion of my experiment: yes, these wires somehow are connected (or pilot) the lightbulb.

But maybe not at all ! Something else is making this lightbulb light up and go out, and that something also influences me, making me connect, or not, the wires to the battery exactly a few nanoseconds earlier each time.

So I cannot even determine from my experiment that the wires have anything to do with the lightbulb ! You have to leave aside this possibility if you are going to consider the experimental scientific method at all, no ?

cheers,
Patrick.

Sure, what you are saying makes sense, and yet that is exactly what needs re-thinking.

1. Our world is experimentally demonstrated as being full of both identified causes and unidentified random effects. So your light bulb example applies only to that large subset that is what we refer to as deterministic. It is the other group that needs more explanation. You cannot deny that the future MIGHT have SOME role in explaining the random effects. Of course, I offer no convincing proof either, merely speculation.

2. There is nothing that says the future can't have a small role to play in what happens in the present. Two simple hypothetical examples: a) We are barely influenced by events occurring in Andromeda, but that does not mean there is no influence. Similarly, the influence from the future could be very mild. b) Look at how hard it is to create a singlet state! Such a state allows us to see some strange quantum behavior (Nightlight calls it a parlor trick). And perhaps that is because the influence from our future is so subtle that it is difficult to otherwise see.

The way I see it, we are being asked to give up strict locality or strict causality if we reject hidden variables. Perhaps the least intrusive modification to the theory is to accept locality and acknowledge that the future could influence the past in a way which appears totally random from our perspective. Given symmetry considerations alone, I would think it is worth at least considering. And yet, in most respects, causality - and the scientific method! - would still apply.
 
  • #37
vanesch said:
Let us look at the example of a new drug, that has to be tested. The idea is that somehow the mapping in which certain patients receive the drug, and others receive a placebo, or an old drug, is determined "independently". If however, this mapping is deterministically fixed, nothing can stop you from thinking that the order in which the patients come in is ALSO deterministically fixed, so that the patients with a serious disease happen to get the placebo, and those with lighter problems, the new drug. So the wrong conclusion will be that the new drug works very well. There is nothing "improbable" about it, because, by definition, in a deterministic view, probabilities don't make sense.
Patrick.

Beware, you are just describing what is called an unlikely event or realisation: an event with vanishingly small probability that may nevertheless occur in an experimental trial (just a very rare event, like the "winning at the lottery" event).
There is not any incompatibility between the deterministic and probabilistic formulation of a problem. Only the hypotheses may be incompatible (e.g. interpretation of "independent", probability law, etc.).
Therefore, when we verify the probability frequency in an experimental trial we must not forget that this trial may belong to an improbable event set (i.e. we win at the lottery).

Seratend.
 
  • #38
vanesch said:
Yes, that's what I'm saying. ... There is nothing "improbable" about it, because, by definition, in a deterministic view, probabilities don't make sense.
I disagree. Probability (in the mind of an agent) usually relates simply to the agent's epistemic abilities - i.e. its knowledge of the process concerned and its ability to predict an outcome. Therefore even if I know that a particular process is completely deterministic (such as the toss of a coin) I may still be unable to predict the outcome, hence must resort to probability. My epistemic horizon does not allow me to distinguish between (A) a truly random and ontically indeterministic coin-toss, and (B) an ontically deterministic but unpredictable coin-toss. As far as I am concerned, both are simply epistemically indeterminable.
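This point has a classic concrete illustration (my own toy sketch, not from the thread): a completely deterministic "coin" whose output is, to anyone who does not know the seed, practically indistinguishable from fair tosses.

```python
def deterministic_coin(seed, n):
    """A fully deterministic 'coin-toss': a linear congruential generator.
    Knowing the seed, every flip is predictable in advance; without it,
    the stream looks like a sequence of fair random tosses."""
    state = seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31  # classic LCG constants
        yield state >> 30  # use the top bit as the coin face: 0 or 1

flips = list(deterministic_coin(seed=42, n=100_000))
print(sum(flips) / len(flips))  # close to 0.5, as for an "ontically random" coin
```

The process is ontically deterministic (case B above), yet epistemically indeterminable for an observer who lacks the seed.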

MF
:smile:
 
Last edited:
  • #39
moving finger said:
I disagree. Probability (in the mind of an agent) usually relates simply to the agent's epistemic abilities - ie it's knowledge of the process concerned and it's ability to predict an outcome.

Yes, that's the Bayesian view, I presume. However, I would think - maybe I'm wrong - that you need at least a few statements concerning statistical independence in order to infer ANYTHING that way (the law of large numbers also assumes somehow "independent trials"). I would think that strict determinism doesn't allow you to say that any event is "statistically independent" of any other (it is this "independence" which I call somehow "free will", I'm realising this now myself).
To me, a deterministic universe is a "frozen structure in 4 D". This can then take on ANY form or shape. Anything can happen at ANY moment. There is no need for a kind of "stationarity in time evolution". No time evolution laws are needed in a deterministic universe. The structure just "is", just like a pen just "is" and there is no way to deduce the existence of the metal ball in the tip when you know about the cross section in the plastic and ink.
I'm not saying that there are no deterministic universes which COULD also be "stationary and ergodic" so that statistics DOES work out there; my only point was that there is no reason for it to be so.

cheers,
Patrick.
 
  • #40
vanesch said:
Yes, that's the Bayesian view, I presume. However, I would think - maybe I'm wrong - that you need at least a few statements concerning statistical independence in order to infere ANYTHING that way (the law of large numbers also assumes somehow "independent trials"). I would think that strict determinism doesn't allow you to say that any event is "statistically independent" of any other
Determinism would force all events to be ontically dependent, yes. But not necessarily epistemically dependent (ie we might not be able to detect the dependence). For example, QM suggests that the quantum world is epistemically indeterminable (we cannot detect any dependence at the quantum level) but we cannot conclude from this that it is ontically indeterministic (ie deterministic hidden variables theories are still valid).

vanesch said:
(it is this "independence" which I call somehow "free will", I'm realising this now myself).
oooops. What is "free will"?

vanesch said:
To me, a deterministic universe is a "frozen structure in 4 D". This can then take on ANY form or shape. Anything can happen at ANY moment. There is no need for a kind of "stationarity in time evolution". No time evolution laws are needed in a deterministic universe. The structure just "is", just like a pen just "is" and there is no way to deduce the existence of the metal ball in the tip when you know about the cross section in the plastic and ink.
I'm not sure what you are trying to say here. In the 4D block time model, each 3D-space cross-section represents a snapshot of space (like frames in a movie film). As time "progresses" we are just moving from one frame to another (so to speak).
I don't see where the problem is?

MF
 
  • #41
moving finger said:
I'm not sure what you are trying to say here. In the 4D block time model, each 3D-space cross-section represents a snapshot of space (like frames in a movie film). As time "progresses" we are just moving from one frame to another (so to speak).

Well, this 4D block can then just take on ANY shape, there needs to be no "statistical regularity" in the evolution of this 3D slice. That's what I was trying to show with my ballpoint. There is no need for any evolution law that relates one 3D slice to the next one, and it is such an evolution law that gives the inhabitants of these 3D slices the impression that they can use epistemically a statistical description of this evolution. Of course it is a possibility, but this places high constraints on the 4D shapes that are possible.
Statistical analysis of the shape of the 2D slice of a pen doesn't mean much, so in the same way, statistical analysis of the shape of 3D slices of an arbitrary 4D shape shouldn't teach us much either, no ? You need peculiar 4D shapes for this to hold, which make "statistics work" from one 3D slice to another.

cheers,
Patrick.
 
  • #42
vanesch said:
Well, this 4D block can then just take on ANY shape, there needs to be no "statistical regularity" in the evolution of this 3D slice.
No, I don't see this. The 4D block is fixed and static, and the inter-relation of the slices is fixed by determinism, I don't understand what you mean by "take on any shape"?

vanesch said:
That's what I was trying to show with my ballpoint. There is no need for any evolution law that relates one 3D slice to the next one, and it is such an evolution law that gives the inhabitants of these 3D slices the impression that they can use epistemically a statistical description of this evolution.
But there is an evolution law which relates one 3D slice to the next - the deterministic laws of nature.

vanesch said:
Of course it is a possibility, but this places high constraints on the 4D shapes that are possible.
I still don't know what you are trying to say. In a deterministic world, the configuration of the entire 4D block is fully determined by the laws of nature. One cannot get much more constrained than that.

vanesch said:
Statistical analysis of the shape of the 2D slice of a pen doesn't mean much, so in the same way, statistical analysis of the shape of 3D slices of an arbitrary 4D shape shouldn't learn us much either, no ?
The 3D slices represent our "present". I think what you are looking for in wanting to probe the 4th dimension is to have foreknowledge? (to know the future as well?)
MF
:smile:
 
  • #43
moving finger said:
I still don't know what you are trying to say. In a deterministic world, the configuration of the entire 4D block is fully determined by the laws of nature. One cannot get much more constrained than that.

Well, these "laws of nature" could simply be a catalog of arbitrary 3D slices, no ? What is the "law of nature" describing the 2D slices of a ballpoint ?

I know what you mean: you mean that there are some differential equations that relate these 3D slices in a very simple way, "locally" and "in a reductionist way". But that's a VERY PECULIAR KIND of determinism ! This is a very peculiar "symmetry" rule that this 4-D shape has to satisfy. Because there could be a very complicated "law of nature" which somehow reads:
"look up the current 3D slice in the Big Catalog", turn the catalog 1 page further, and this is the prediction of the next 3D slice.
This is a "law of nature" with the right "big catalog", that always works.
It is impossible for us to know it, of course, but there could be such a "catalog" and that would then be the "law of nature", which deterministically joins each 3D slice to the next.
In the case of our ballpoint, the "big catalog" is simply its technical drawing in 3D, sliced up in 2D slices. That technical drawing can be as simple, or as complicated, as the designer of the ballpoint decided.
So given this pathetic example of "law of nature" which determines the next 3D slice from the former, you see that ANY shape in 4D is possible. There is of course only ONE such shape, THE shape of the deterministic universe, but a priori, there's no reason why there should be SIMPLE laws of nature linking the different 3D slices, no ?
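The contrast drawn here, between a "Big Catalog" universe and one governed by a simple uniform rule, can be sketched in a few lines of toy code (both "universes" below are of course invented purely for illustration):

```python
# A "slice" of the toy universe is just an integer state.

# Generic determinism: the Big Catalog, an arbitrary list of successive slices.
catalog = [7, 3, 3, 9, 1, 4, 4, 4, 2]             # any sequence whatsoever

def catalog_universe(t):
    return catalog[t]                              # "turn the catalog 1 page further"

# Special determinism: one simple local rule applied uniformly at every step.
def simple_law_universe(t, initial=7):
    state = initial
    for _ in range(t):
        state = (state * 2) % 11                   # the same rule at every step
    return state

# Both histories are perfectly deterministic; only the second one
# compresses to a short law plus one initial slice.
print([catalog_universe(t) for t in range(9)])     # [7, 3, 3, 9, 1, 4, 4, 4, 2]
print([simple_law_universe(t) for t in range(9)])  # [7, 3, 6, 1, 2, 4, 8, 5, 10]
```

The catalog universe carries as much information as its whole history; the simple-law universe is the "very peculiar kind of determinism" whose regularity is what makes prediction, and statistics, possible.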

cheers,
Patrick.
 
  • #44
vanesch said:
Well, these "laws of nature" could simply be a catalog of arbitrary 3D slices, no ? What is the "law of nature" describing the 2D slices of a ballpoint ?
The laws of nature are found in the correlations between the slices.
If the slices are indeed arbitrary as you suggest, with no correlation between them, then this suggests absence of law.

In the ballpoint case, the "law of nature" of the ballpoint tells you how one slice changes to another slice as you move along the length of the ballpoint. If the ballpoint slices are arbitrarily random then this would imply no law (or randomness), but if there is correlation between the slices (as indeed there is in the case of a ballpoint) then the law of nature of the ballpoint tells you how these slices correlate with each other.

vanesch said:
I know what you mean: you mean that there are some differential equations that relate these 3D slices in a very simple way, "locally" and "in a reductionist way".
No, I mean the slices are correlated with each other, they are not simply arbitrary.

vanesch said:
But that's a VERY PECULIAR KIND of determinism !
What I have described is exactly deterministic. If the slices were arbitrary then one could claim this would lead an observer in this world to conclude that the world is indeterministic (the slices are not correlated, there is no law), but if the slices are correlated then the deterministic laws which describe the correlations are the laws of nature.

vanesch said:
This is a very peculiar "symmetry" rule that this 4-D shape has to satisfy. Because there could be a very complicated "law of nature" which somehow reads:
"look up the current 3D slice in the Big Catalog", turn the catalog 1 page further, and this is the prediction of the next 3D slice.
Yep, this would work. This would be a (very large, infinite?) look-up table of the slices.

vanesch said:
This is a "law of nature" with the right "big catalog", that always works.
It is impossible for us to know it, of course, but there could be such a "catalog" and that would then be the "law of nature", which deterministically joins each 3D slice to the next.
In a deterministic world we observe the laws of nature to be apparently fixed with time (the law of gravity for example seems to be invariant). This means that we do not need to have a big catalog which contains a unique description of every slice - instead we can reduce the whole problem down to having just one page (description of one slice) plus a one-time description of the laws of nature which allows us (in principle) to calculate all of the other slices.

vanesch said:
In the case of our ballpoint, the "big catalog" is simply its technical drawing in 3D, sliced up in 2D slices. That technical drawing can be as simple, or as complicated, as the designer of the ballpoint decided.
Yes, again this is the look-up table version. But if the slices are correlated we can actually reduce the information, so that rather than having an infinite number of cross-sectional drawings, we have instead a small number of key cross-sections plus a series of equations (the "laws of the ballpoint") which tell us how the other (undrawn) cross sections are correlated with the drawn ones. In the ballpoint case, we may only be able to do this by postulating that the laws suddenly "change" at some points (ie there are discontinuities and the laws of the ballpoint are not in fact fixed) - but nevertheless we can still describe the ballpoint using a limited series of cross-sections plus a limited number of laws, rather than an infinite number of slices.

vanesch said:
So given this pathetic example of "law of nature" which determines the next 3D slice from the former, you see that ANY shape in 4D is possible.
Any shape in 4D is possible in principle. But in a deterministic world with fixed laws, then specifying just one slice allows us to construct the entire world.

vanesch said:
There is of course only ONE such shape, THE shape of the deterministic universe, but a priori, there's no reason why there should be SIMPLE laws of nature linking the different 3D slices, no ?
There is a reason, I believe - and that reason (IMHO) is that the emergence of life would have been extremely unlikely in a world with variable laws (ie if the laws of nature were random or not fixed) - in fact we DO see that the laws of nature appear to be fixed - hence we may live in a deterministic universe.

Note : even if the laws are not fixed, if they vary in a deterministic way then this variation becomes just another "law", and determinism is still possible.

MF
:smile:
 
  • #45
moving finger said:
If the ballpoint slices are arbitrarily random then this would imply no law (or randomness), but if there is correlation between the slices (as indeed there is in the case of a ballpoint) then the law of nature of the ballpoint tells you how these slices correlate with each other.

Until you reach the tip ! But there IS a "law": the ballpoint's designer's 3D drawing.

No, I mean the slices are correlated with each other, they are not simply arbitrary.

Well, they are correlated in the "Big Catalog" too, of course: they are on neighboring pages. But of course I know what you mean: you mean that you can isolate SMALL PARTS of the slices, and then find systematic, "simple" laws that apply to ALL these small parts. But THAT is exactly the kind of "statistical independence" I was talking about in the beginning, which is necessary for experimental science to even make sense. It was my impression that this "statistical independence" of small parts of the slices is somehow not something natural for a deterministic view, while it appears almost naturally in a probabilistic universe (where the "rigid 4D structure" doesn't make sense, and where we are from the beginning "slice - oriented").

What I have described is exactly deterministic. If the slices were arbitrary then one could claim this would lead an observer in this world to conclude that the world is indeterministic (the slices are not crrelated, there is no law), but if the slices are correlated then the determinsitic laws which decribe the correlations are the laws of nature.

They are not "arbitrary" in the sense of "completely statistically independent"; they are arbitrary in exactly the same sense in which the shape of a ballpoint is "arbitrary", but functional. Not just "any shape of plastic, ink and steel" will give you a useful ballpoint. But you cannot formulate a simple law that relates each 2D slice of the ballpoint to the next.
This is how I see a priori a "deterministic" universe. A (maybe useful) shape in 4D.

In a determinsitic world we observe the laws of nature to be apparently fixed with time (the law of gravity for example seems to be invariant). This means that we do not need to have a big catalog which contains a unique description of every slice - instead we can reduce the whole problem down to having just one page (description of one slice) plus a one-time description of the laws of nature which allows us (in principle) to calculate all of the other slices.

And moreover, these laws (as compared to the catalog) are EXTREMELY SIMPLE. This is what is so amazing, in a deterministic setting, no ?

Yes, again this is the look-up table version. But if the slices are correlated we can actually reduce the information, so that rather than having an infinite number of cross-sectional drawings, we have instead a small number of key cross-sections plus a series of equations (the "laws of the ballpoint") which tell us how the other (undrawn) cross sections are correlated with the drawn ones. In the ballpoint case, we may only be able to do this by postulating that the laws suddenly "change" are some points (ie there are discontinuities and the laws of the ballpoint are not in fact fixed) - but nevertheless we can still describe the ballpoint using a limited series of cross-sections plus a limited number of laws, rather than an infinite number of slices.

Ah, I see what you mean: any kind of "functionality" requires some regularity, which expresses itself as a simplification of the "catalog law of nature".

Any shape in 4D is possible in principle. But in a deterministic world with fixed laws, then specifying just one slice allows us to construct the entire world.
Yes, but even in the "catalog world", one slice allows us to construct the entire world: just read the catalog :-)
What is amazing is the simplicity, and a simplicity in such a way that statistics works. I have the impression that an "a priori probabilistic world" would be inclined more naturally to such simple laws than a deterministic 4D shape.

There is a reason, I believe - and that reason (IMHO) is that the emergence of life would have been extremely unlikely in a world with variable laws (ie if the laws of nature were random or not fixed) - in fact we DO see that the laws of nature appear to be fixed - hence we may live in a deterministic universe.

Hehe, but that argument is probabilistic :-))

Note : even if the laws are not fixed, if they vary in a deterministic way then this variation becomes just another "law", and determinism is still possible.

EXACTLY ! And if you pull this through to the most general case, where the deterministic laws are varying with position and time, you're back to the "catalog". This, to me, is the "generic" deterministic universe.


cheers,
Patrick.
 
  • #46
and then a guy like me comes along! can you determine whether i will post again after this post? most likely? and you will say that my post will probaby be determined to not make any sense! but will it be determined to be based on this one post?
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
owner/ceo Heaven Sense
http://HeavenSense.WS
http://Allendale.WS
http://Czuhai.WS
http://LittleHoney.WS
p.s. i am convinced we live in a COMPLEX universe, which is neither completely
predictable (deterministic) nor completely random (indeterministic) so therefore
everyone is partly right and partly wrong; right or wrong?
 
Last edited:
  • #47
Kirk Gregory Czuhai said:
and then a guy like me comes along! can you determine whether i will post again after this post? most likely? and you will say that my post will probaby be determined to not make any sense! but will it be determined to be based on this one post?
I think you will find that no agent operating WITHIN a deterministic world can make infallible predictions ABOUT that deterministic world.

(thus : it matters not whether the world is deterministic or not, one cannot make infallible predictions from within that world)

MF
:smile:
 