Many-worlds true quantum event generator

The discussion revolves around the concept of a quantum event generator that could illustrate the Many-Worlds Interpretation (MWI) of quantum mechanics, where choices lead to the splitting of realities. Participants debate the feasibility of experiencing such splits through a device, questioning whether true quantum events automatically create divergent worlds. Theoretical elegance is discussed, with some arguing that MWI avoids the need for hidden variables, while others find it counterintuitive and lacking in empirical support. The conversation also touches on the implications of chaos theory and the nature of quantum systems, emphasizing that every quantum event could lead to multiple outcomes. Ultimately, the MWI remains a contentious topic among physicists, with varying beliefs about its validity and implications.
  • #91
rodsika said:
Why do you put so much weight on DeWitt?
I don't, but the problem of needing to find a preferred basis seems specific to DeWitt's version, so since you were asking questions about how to pick it I figured you were asking about that version.
rodsika said:
Therefore, why can't we just accept the decoherence version of MWI, as it needs the environment to define the preferred basis?
Right, but with the decoherence version there is no precise definition of "worlds": decoherence only approximately forces various subsystems into a mix of eigenstates of some observable like position, the interference terms don't entirely disappear, and the whole business also depends on how you divide "subsystem" from "environment".
rodsika said:
Now, in the pure decoherence version (without DeWitt's ad-hoc-ness), is it possible that other branches would have other environments (akin to parallel worlds with different laws of nature), such that the environment there, with its constants of nature, doesn't admit position as the preferred basis?
We have to assume the same basic laws apply to all "worlds" in the MWI because you have to be able to represent the wavefunction of the universe as a single state vector evolving according to the Schroedinger equation. The paper you linked to earlier by Schlosshauer says that decoherence tends to drive subsystems towards an ensemble of position eigenstates, though in some cases it can be energy eigenstates instead, see page 14:
In general, three different cases have typically been distinguished (for example, in Paz and Zurek, 1999) for the kind of pointer observable emerging from an interaction with the environment, depending on the relative strengths of the system's self-Hamiltonian H_S and of the system-environment interaction Hamiltonian H_SE:

(1) When the dynamics of the system are dominated by H_SE, i.e., the interaction with the environment, the pointer states will be eigenstates of H_SE (and thus typically eigenstates of position). This case corresponds to the typical quantum measurement setting; see, for example, the model of Zurek (1981, 1982), which is outlined in Sec. III.D.2 above.

(2) When the interaction with the environment is weak and H_S dominates the evolution of the system (that is, when the environment is "slow" in the above sense), a case that frequently occurs in the microscopic domain, pointer states will arise that are energy eigenstates of H_S (Paz and Zurek, 1999).

(3) In the intermediate case, when the evolution of the system is governed by H_SE and H_S in roughly equal strength, the resulting preferred states will represent a "compromise" between the first two cases; for instance, the frequently studied model of quantum Brownian motion has shown the emergence of pointer states localized in phase space, i.e., in both position and momentum (Eisert, 2004; Joos et al., 2003; Unruh and Zurek, 1989; Zurek, 2003b; Zurek et al., 1993).
(I've written the operators as plain H_S and H_SE; look at the actual paper to see the hatted notation rendered correctly)
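The quoted classification can be caricatured numerically. Here is a minimal sketch of my own (not from Schlosshauer's paper; the 2x2 Hamiltonians, the coupling strengths, and the crude norm comparison are all illustrative assumptions): diagonalize whichever Hamiltonian dominates to get the approximate pointer basis.

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli x
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli z

H_S  = 1.0 * sx    # toy system self-Hamiltonian
H_SE = 50.0 * sz   # toy system part of the interaction (dominates here)

# Caricature of the pointer-basis rule: the preferred basis is (close to)
# the eigenbasis of whichever term dominates the dynamics -- case (1) of
# the quote when H_SE wins, case (2) when H_S wins.
dominant = H_SE if np.linalg.norm(H_SE) > np.linalg.norm(H_S) else H_S
_, pointer_states = np.linalg.eigh(dominant)
print(pointer_states)  # columns are sz eigenstates: the "position-like" basis
```

With the strengths reversed (H_S = 50*sx, H_SE = 1*sz) the same sketch would return the sx eigenbasis instead, mirroring the "energy eigenstates" regime.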
 
  • #92
bg032 said:
However, my opinion is that at the macroscopic level the universal wave function has a strong tendency to decompose into permanently non-overlapping wave packets. The reason for this is basically the form of the potential in the Hamiltonian, plus the process of macroscopic amplification and the interaction with the environment.

This is true only once the Universe had cooled enough. In the early Universe (the quagma, i.e. quark-gluon plasma, state), or even in ordinary plasma, matter was too hot, so no separate structures existed and, even more, no structures with any sort of "memory" were possible.

Of course, we are free to pick any basis, including "this area of quagma", but the result does not make much sense, like the famous "photon perspective" question.
 
  • #93
Dmitry67 said:
This is true only once the Universe had cooled enough. In the early Universe (the quagma, i.e. quark-gluon plasma, state), or even in ordinary plasma, matter was too hot, so no separate structures existed and, even more, no structures with any sort of "memory" were possible.

Of course, we are free to pick any basis, including "this area of quagma", but the result does not make much sense, like the famous "photon perspective" question.

I agree, and I do not see a problem: the universe has now cooled, and we observe a quasi-classical realm.
 
  • #94
Hi. Can I say it seems to me that none of the standard or non-standard explanations is really true? I think they are the best theories we can come up with, in the hope that they will somehow spawn a better predictive capability; but in another sense they are all just attempts to cover the fact that we just don't know, while giving us some sort of picture of what happens, based on our experience of the world around us and how she works. I agree that what "makes sense" to us may not be any sort of reality in the "absolute" sense. Our mathematical analyses are based on having some sort of image in our minds about how maths should work, but although it seems to work out in practice most of the time, we shouldn't start to believe too much in the maths either. At root, nature is a mystery.
 
  • #95
woolyhead said:
Hi. Can I say it seems to me that none of the standard or non-standard explanations is really true? I think they are the best theories we can come up with, in the hope that they will somehow spawn a better predictive capability; but in another sense they are all just attempts to cover the fact that we just don't know, while giving us some sort of picture of what happens, based on our experience of the world around us and how she works. I agree that what "makes sense" to us may not be any sort of reality in the "absolute" sense. Our mathematical analyses are based on having some sort of image in our minds about how maths should work, but although it seems to work out in practice most of the time, we shouldn't start to believe too much in the maths either. At root, nature is a mystery.


I agree with you about this.
It seems a lot of people who are MWI proponents value math over observed reality.



Dmitry67,

So you are admitting that the DeWitt Many Worlds with real splitting of worlds is in violation of relativity, correct?
So you are also agreeing that since that MWI version cannot make sense of probability and has problems with relativity, it's basically worse than Bohm, which can at least get probability right?

So you are a proponent of the "pure wave mechanics" version, which has no relativity problem but still can't make sense of probability without additional postulates?
 
  • #96
JesseM, you seem to be pretty knowledgeable in this subject.
Are you a proponent of MWI or just playing Devil's Advocate?
 
  • #97
Fyzix said:
JesseM, you seem to be pretty knowledgeable in this subject.
Are you a proponent of MWI or just playing Devil's Advocate?
Insofar as there's any "real truth" about what's going on with QM my hunch is that the truth would a) not involve anything special happening during "measurement", since measuring devices are just large collections of particles which should follow the same laws as smaller collections, and b) not involve any violation of relativistic locality. So given Bell's theorem I think something along the lines of the MWI is the best option, but I hold out hope that in the future someone may find a new formulation of a "many-worlds-like" interpretation that doesn't have the preferred basis problem of DeWitt's version or the ambiguity about how to derive probabilities of the "pure wavefunction" version.
 
  • #98
Well, the thing is: the DeWitt MWI has problems with relativity, and Bohm has problems with relativity, but Bohm does derive the Born rule.

So Bohm is the obvious choice between the two, but personally I struggle with accepting problems with relativity.

If we discard both of those and move on to "pure wave mechanics", we are still stuck with the probability problem.
It seems that a lot of people don't really recognize the severity of the probability problem; it flat out disproves MWI at this point.
It's saying "MWI DOES NOT FIT REALITY", so how may one go about solving it?

Well, one is definitely forced to add postulates, which most MWI adherents are now slowly starting to accept... such as either particles (Many Bohmian Worlds) or some other selection process; either way, "PURE" MWI is disproved.
(Unless you manage to fool yourself into believing that consciousness somehow solves it all...)

By the way, there seem to be some problems with decoherence too: it alone isn't enough to account for our experience in "pure wave mechanics".

See here:
http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.1926v1.pdf


Not to mention that others, such as Tim Maudlin, have also criticized this "pure wave mechanics + decoherence" approach.

I think it's safe to conclude that these two approaches (in their current forms) have been thoroughly refuted.
 
  • #99
Fyzix said:
If we discard both of those and move on to "pure wave mechanics", we are still stuck with the probability problem.
It seems that a lot of people don't really recognize the severity of the probability problem; it flat out disproves MWI at this point.
How does it "disprove" it? It's not that the "pure wavemechanics" version gives incorrect probabilities, it's just that it's not clear how to get any probabilities from it (some MWI advocates claim that arguments from decision theory are sufficient), but I don't see why we can't hope that new insights might appear in the future. For example, one interesting suggestion I saw here was that one might describe the evolution of the universal wavefunction in computational terms, and somehow treat classical observers as sub-computations, so the computation required to compute the evolution of the universal wavefunction might naturally lead to a probability measure on different possible sub-computations. Another speculation I've seen is mangled worlds, though I don't really understand this proposal very well. And there's the interesting result of http://www.lps.uci.edu/barrett/publications/SuggestiveProperties.pdf showing that if you model the state vector of an idealized observer performing an infinite series of measurements in some quantum experiment, as the number of experiments goes to infinity the state will approach "an eigenstate of reporting that their measurement results were randomly distributed and statistically correlated in just the way the standard theory predicts", even with no assumption of anything like the Born rule.
Fyzix said:
By the way, there seem to be some problems with decoherence too: it alone isn't enough to account for our experience in "pure wave mechanics".

See here:
http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.1926v1.pdf
But it depends on what you mean by "account for our experience". For example, the author of that paper has some philosophical (not technical) objections to the idea of using coarse-graining to define macroscopic "worlds", but if you define the range of possible "experiences" we could have as some collection of coarse-grained descriptions of our lab equipment or brain states, then decoherence could (I think) explain why we don't see interference between different possible coarse-grained macrostates. So it becomes a philosophical question of whether you think this is a good enough way of accounting for the apparent classical macro-world, or whether you're bothered by the lack of any totally well-defined formula for what the most "natural" choice of coarse-graining would be; there's no debate about the technical details of what decoherence says or doesn't say. Personally I find such an approach unsatisfying, but to make definitive statements like this:
Fyzix said:
I think it's safe to conclude that these two approaches (in their current forms) have been thoroughly refuted.
...is just silly. For something to be "thoroughly refuted" in physics there needs to be some undeniable technical critique that causes the approach to fall apart, not just verbal philosophical objections.
 
  • #100
First, let's take the standard DeWitt MWI approach: I would say that YES, unless we are willing to let go of relativity, this approach is refuted by relativity...


JesseM said:
How does it "disprove" it? It's not that the "pure wavemechanics" version gives incorrect probabilities

Well, it sort of does...
Take a simple experiment where QM predicts a probability of 0.1 for X occurring and 0.9 for Y occurring.
If we repeat this ten times we typically get about 1 X and 9 Y's, but according to MWI this "probability" would always go to 50/50, as the universe branches into 2 branches at each trial.
So in fact it does give us probabilities that are in direct conflict with reality...
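The worry about branch counting can be made concrete with a small enumeration (a sketch under my own assumptions: ten binary trials with Born probabilities 0.1/0.9; the code is not anyone's official model). Weighting every branch equally gives an expected X-frequency of 1/2, while weighting branches by Born weights recovers 0.1:

```python
from itertools import product

p_x = 0.1   # Born-rule probability of outcome X (illustrative)
n = 10      # number of repeated trials

branches = list(product("XY", repeat=n))  # all 2**n branch histories

# Naive branch counting: every branch counts equally.
count_mean = sum(b.count("X") / n for b in branches) / len(branches)

# Born weighting: each branch carries weight p^k * (1-p)^(n-k),
# where k is the number of X outcomes in that branch.
born_mean = sum(
    (p_x ** b.count("X")) * ((1 - p_x) ** (n - b.count("X"))) * b.count("X") / n
    for b in branches
)

print(count_mean, born_mean)  # branch counting gives 0.5; Born weighting gives 0.1
```

So the dispute in the thread is really over which measure on branches is justified, not over the arithmetic itself.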

JesseM said:
(some MWI advocates claim that arguments from decision theory are sufficient)

Yes, a few people do, but they are a minority who have just decided that they believe in MWI and therefore do not really care too much.
There are quite a few papers that address this decision-theory approach and show why it's wrong... I guess you are aware of these papers.


JesseM said:
but I don't see why we can't hope that new insights might appear in the future.

Sure, we can hope, but hoping isn't science; then we might as well hope for an interpretation that doesn't have any of these problems, resulting from progress toward a ToE...

JesseM said:
For example, one interesting suggestion I saw here was that one might describe the evolution of the universal wavefunction in computational terms, and somehow treat classical observers as sub-computations, so the computation required to compute the evolution of the universal wavefunction might naturally lead to a probability measure on different possible sub-computations.

Yes, I've been in contact with the author of this paper before, discussing MWI.
He himself doesn't seem overly enthusiastic about it, giving MWI without modification less than a 25% chance of being correct in the end...
That says a lot, when the author of the paper admits it's in serious trouble (which I admire him for).

JesseM said:
Another speculation I've seen is mangled worlds though I don't really understand this proposal very well.

I'm aware of this approach; interestingly enough, the previous author you mentioned has given a clear and simple critique of the Mangled Worlds theme right here:

http://onqm.blogspot.com/2009/09/decision-theory-other-approaches-to-mwi.html

JesseM said:
And there's the interesting result of http://www.lps.uci.edu/barrett/publications/SuggestiveProperties.pdf

I haven't checked this out yet, but I've been in contact with Jeff Barrett, and while he has faith in pure wave mechanics, it's not done yet, as he said.
He also considers Bohmian mechanics a form of "pure wave mechanics" (although he is not particularly fond of it); in his view, pure wave mechanics does not have to imply that all outcomes occur, which changes the game quite a bit...

As for the problems with decoherence, I will try to dig up Tim Maudlin's objections too, so you can see that there are in fact technical as well as philosophical reasons for not being satisfied with this approach at all.
 
  • #101
Fyzix said:
1 So you are admitting that the DeWitt Many Worlds with real splitting of worlds is in violation of relativity, correct?
2 So you are also agreeing that since that MWI version cannot make sense of probability and has problems with relativity, it's basically worse than Bohm, which can at least get probability right?
3 So you are a proponent of the "pure wave mechanics" version, which has no relativity problem but still can't make sense of probability without additional postulates?

1 DeWitt's point of view is not about "real" splitting of worlds, but about the "preferred" splitting "in most cases", as I understand it; the description is non-mathematical and fuzzy. So as I understand it, DeWitt does not suggest a NEW interpretation, but rather an interpretation on top of an interpretation :)

Even if, as I suspect, he was trying to bring in a false notion of "objectiveness of splitting", we can all agree that since the unitary evolution of the universal wavefunction doesn't violate relativity, all "subproducts" of it are in agreement with relativity too.

I don't want to say that he is WRONG: it is like saying that "the world consists of separate stars, planets and gas clouds". True as an approximation, but not on a fundamental level.

2 The issues with the Born rule are well known, but let's put them aside for now.
Regarding Bohmian mechanics, do you know the current status of BM? AFAIK, what is called BM is not even relativistic. There are some *different* versions compatible with relativity: one with a hidden preferred frame (so the theory is "secretly" Lorentz-non-invariant); another (Demystifier's) does not have it, but his work is controversial (there was a discussion here, and people did not agree on the math). In any case, BM *is struggling with relativity issues right now*.
Regarding BM, I don't see why the assumption of having extra particles is "weaker" than the assumption of the Born rule it is supposed to "explain". As you know, the laws of motion of these "particles" are made just to satisfy the Born rule, working backwards. BM requires additional "curve fitting" to explain new phenomena like Hawking radiation, the Unruh effect, etc. (not sure if this applies to Demystifier's version: I know his version handles these issues, but not whether he had to make any special assumptions to do so). Finally, I expect BM to "break" at the TOE level.

3 Yes. I even think the situation is much more complicated: first of all, what is an observer? What basis should I use: my body, my head with my hair, my head without my hair, my brain, distinct "computational" states of my brain? There are many microscopically different states of the brain mapped to the same "computational" state (it is like a microprocessor taking input values of +4.9V, +5.0V, +5.1V as "true", ignoring the difference at the computational level). Then we haven't even started to address continuous observation, where splitting of an observer occurs constantly, redefining the basis, and, to make it more complicated, doing so in a basis-dependent manner!
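The microprocessor analogy can be written down directly (a trivial sketch; the 2.5 V threshold is an assumed TTL-style value, not anything from the thread): many microscopically distinct inputs collapse to one computational state.

```python
def logic_level(voltage: float) -> bool:
    """Map a continuous input voltage to a discrete logical state.

    Many microscopically different voltages (+4.9, +5.0, +5.1, ...) are
    read as the same 'true' -- a coarse-graining of microstates into one
    computational state, as in the brain-state analogy above.
    """
    return voltage > 2.5  # assumed threshold, illustrative only

# All of these distinct microstates map to the same computational state:
print({logic_level(v) for v in (4.9, 5.0, 5.1)})  # {True}
```

The open question in the post is exactly which such coarse-graining, if any, picks out "an observer".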

It is so interesting, and we are only at the beginning of the road. Ultimately, I believe only a theory of consciousness will be able to answer why the Born rule is observed. Note: MWI (at the frog's level) is not about what happens, it is about what is observed (Einstein would be happy!)
 
  • #102
As soon as you bring splitting in, relativity is violated, as I showed you with the quote from Jeff Barrett.
According to this, even Deutsch's version actually requires splitting and hence violates relativity...

I know there are some attempts to solve dBB's problems with relativity; I'm not sure whether they have been successful yet.
However, my point is that, given that the splitting MWI violates relativity AND can't make sense of the Born rule, while dBB only violates relativity, dBB is favoured over the splitting MWI.

I understand that you are very fond of MWI, as you said, very much because you also subscribe to the hypothesis that the universe is made of math.
But seriously, I doubt your consciousness dream will ever yield anything.
The way I see it (and most others do too), if MWI is on the right path, it's still missing something essential, like particles or something else.

By the way, Einstein would in fact not be happy.
You have to think that this MWI view was obviously considered by the founders, without making a full theory of it,
but was rejected due to its many problems.
 
  • #103
Fyzix said:
As soon as you bring splitting in, relativity is violated, as I showed you with the quote from Jeff Barrett.
According to this, even Deutsch's version actually requires splitting and hence violates relativity...

Fyzix, repeating the same argument twice does not make it valid. It has been shown that the argument you are repeating over and over again does not make any sense. Do you need an exact quote?

JesseM said:
Dmitry67's comment seems accurate to me. Schlosshauer's main criticism is that he personally finds it counterintuitive that systems would constantly be "copied" (and I think he's taking "copying" too literally, it's just a metaphor for the different elements in the superposition), not that this would be incompatible with any known physical principles: "there is a problem in imagining that such a splitting process somehow physically copies the systems involved." His other criticism is that "A strong picture of spacetime somehow unzipping into connected spacetime regions along the forward light cone of the measurement event, would not be compatible with special relativity insofar as relativity presupposes that all events occur on the stage of Minkowski spacetime", but this is a strawman since the MWI does not offer such a "strong picture" of spacetime "unzipping"; it says that there are superpositions of different macroscopic states in the same spacetime (though this would become trickier if we tried to incorporate different curvatures of spacetime in general relativity... without a theory of quantum gravity, what the MWI says about spacetime curvature is bound to be speculative, though)
 
  • #104
Fyzix said:
As soon as you bring splitting in, relativity is violated, as I showed you with the quote from Jeff Barrett.
According to this, even Deutsch's version actually requires splitting and hence violates relativity...

If splitting is considered the representation of a pattern in the universal wave function (as clouds are the representation of a pattern in the density of water vapor), I don't see any problem with relativity. Suppose that the two vectors \Psi(t) and \Psi'(t) represent the same universal wave function as seen from two different reference frames. Their patterns determine two different (time-dependent) decompositions:

\Psi(t)=\Psi_1(t) + \ldots + \Psi_{n_t}(t) and \Psi'(t)=\Psi'_1(t) + \ldots + \Psi'_{n'_t}(t).

There is no direct law transforming the first decomposition into the second, and therefore there is no violation of relativity.
 
  • #105
Don't mind Dmitry67, he likes to look past things that don't fit his hypothesis.

Check out the link to the Stanford entry I gave earlier, and the paper explaining the problem.
 
  • #106
JesseM said:
This is getting rather philosophical, but my question here would be, what does it mean to "experience" superposition? Suppose we replace Schroedinger's cat with an intelligent being capable of communication (perhaps a person, but slightly more realistically it could be an A.I. running on a quantum computer), and instead of the random radioactive decay either killing them or letting them live, the outcome of the decay just determines which of two hidden photographs will be uncovered and shown to this being. The subject of the photos isn't known in advance to the being and they could be absolutely anything, perhaps one is a photo of a painting of George Washington and the other is a photo of a duck. So would "experiencing" superposition of (left photo uncovered, right photo remains hidden) and (right photo uncovered, left photo remains hidden) involve being aware of what was in both photos at once? The problem is, suppose we ask this being to then write down a story about whatever it is he has seen...obviously we'll get a superposition of stories, and significant amplitude will be assigned to both stories involving George Washington and stories involving ducks, but the amplitude assigned to stories that actually involve George Washington interacting with a duck will be totally negligible (perhaps not exactly zero since a person who just sees a picture of George Washington might by chance happen to write a story which also involves a duck and vice versa, but the amplitude to "George Washington interacts with a duck" stories shouldn't be any less negligible than "George Washington interacts with a tiger" or any other random animal). 
So, if you claim that this individual has "experienced" a superposition of George Washington and a duck, it seems like you have to say that somehow the individual can't act on this composite knowledge when writing a story (or superposition of stories), which seems to indicate a radically dualistic view of the relation between their "experience" and the actual behavior caused by their physical brain.

JesseM, going back to this message about the person experiencing both the drawing of George Washington and the duck in superposition, and why it's not possible because it would indicate a radically dualistic view of mind and brain: this is not a strong refutation, because there is still fierce debate in consciousness research about this. What other reasons could forbid such an actual superposition in one world? Let's go to the simple one-electron-at-a-time double-slit experiment. If the electron actually duplicated physically in one world and both copies entered the two slits, would there be experimental differences in the measured data, such as the charge increasing by two, or some other anomaly? Or would there be no difference? What do you think?

I guess it's really more like superposition is simply a superposition of possibilities, not of the actual object. But then, if not the actual object, does it just vanish or stand off to the side while the possibilities are in superposition? This doesn't make a lot of sense, unless the particle can be pushed, just as by the quantum potential in the Bohm interpretation, but that conflicts severely with Lorentz invariance.

It seems we are being trapped in the corner where Many Worlds may be the only option left. The reasoning is this: in the 430-atom buckyball experiment, if we have to believe that the wave is only a wave of possibility and the buckyball is not physically there, then it doesn't make sense that it just vanishes into thin air. And it doesn't make sense that the buckyball duplicated itself physically while being isolated; I guess we could detect changes in the data if there were 2 copies, couldn't we?

But not if they are in separate worlds, just as in Many Worlds. Btw, when interference forms, the destructive-interference regions are supposed to be places where the buckyballs in different worlds are at the same spot, hence they just dislodge themselves from that spot to leave the region of destructive interference empty? Is this the reason for the interference patterns? Also, this would prove that objects in different worlds can still affect one another, or else there would be no interference; so can't we use this possibility to contact other parallel many-worlds?
 
  • #107
rodsika said:
JesseM, going back to this message about the person experiencing both the drawing of George Washington and the duck in superposition, and why it's not possible because it would indicate a radically dualistic view of mind and brain: this is not a strong refutation, because there is still fierce debate in consciousness research about this.
Even for dualists I think there are very few who would really suggest that I could have experiences which I was utterly unable to act upon in any way. Even if you suggest this, it's a useless hypothesis to talk about because you'd never be able to verbally affirm that you were having such an experience, you would continue to speak and write exactly as if you hadn't had any such experience.
rodsika said:
What other reasons could forbid such an actual superposition in one world?
I didn't say there couldn't be such superpositions in one world, just that I don't know what it would even mean to say an observer is "experiencing" such a superposition. Are you just imagining a sort of blurry/double vision like being drunk, or something totally alien to our ordinary experience and impossible to visualize/imagine? To me it seems like an observer in superposition would really imply different parallel experiences, and then you really just have the MWI again.
rodsika said:
Btw, when interference forms, the destructive-interference regions are supposed to be places where the buckyballs in different worlds are at the same spot, hence they just dislodge themselves from that spot to leave the region of destructive interference empty?
I don't really think the current versions of the MWI allow for such a concrete picture of why interference patterns form; you just kind of have to accept the math of wavefunction evolution as fundamental. Perhaps someone might someday derive the wavefunction from some slightly more intuitive axioms involving interactions between parallel versions of the same particle, who knows, but it hasn't been done yet.
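For what it's worth, the "accept the math" point can be made concrete in a few lines: the screen pattern comes from adding the two path amplitudes before squaring, with no extra "dislodging" mechanism. A far-field toy sketch (k, d, L and the grid are made-up illustrative values, not from any real experiment):

```python
import numpy as np

# Toy two-slit setup: slits at x = +/- d, screen at distance L,
# unnormalized plane-wave amplitudes along each path.
k, d, L = 20.0, 0.5, 10.0
x = np.linspace(-3.0, 3.0, 601)          # screen positions
r1 = np.sqrt(L**2 + (x - d)**2)          # path length via slit 1
r2 = np.sqrt(L**2 + (x + d)**2)          # path length via slit 2
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)

both = np.abs(psi)**2                    # add amplitudes, THEN square: fringes
separate = np.abs(np.exp(1j * k * r1))**2 + np.abs(np.exp(1j * k * r2))**2

print(both.min(), both.max())            # near-zero minima, maxima near 4
print(separate.max())                    # squaring first gives a flat 2
```

The dark fringes are just the points where the two path amplitudes are out of phase; nothing has to "move out of the way".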
 
  • #108
Hey

Just a quick question: if decoherence were actually a mechanism for creating the classical world, wouldn't this refute any other interpretation that supposes quantum mechanics applies to the macroscopic world? I say this because, as far as I know, decoherence only creates a more complex superposition, not a mixture... but people seem to claim that it solves the measurement problem... yet there is no actual outcome among the basis states, is there?
 
  • #109
In some sense it refutes the Copenhagen interpretation, because (since decoherence is part of QM itself, not of any particular interpretation, it exists in ALL interpretations) in CI there are two mechanisms to explain the same thing: decoherence and collapse. And that does not make any sense.

"Decoherence only creates a more complex superposition, not a mixture": it creates ALMOST an exact mixture; FAPP you can ignore terms with coefficients like 10^-22 or so.

Decoherence solves the first part of the measurement problem. It explains how we get a mixture of outcomes, but it does not tell us which outcome is "real". Then you have a choice between BM and MWI.
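Where a number like 10^-22 comes from can be illustrated with simple arithmetic (a toy model of my own; the per-particle overlap of 0.9 and N = 500 are assumed numbers): each environment particle that picks up which-state information multiplies the interference term of the reduced density matrix by the overlap of its two conditional environment states, so the coherence decays exponentially.

```python
# System starts in (|0> + |1>)/sqrt(2): the off-diagonal (interference)
# term of its reduced density matrix is 1/2.  Each of N environment
# particles that correlates with the system multiplies that term by the
# overlap <E_0|E_1> of its two conditional states.
overlap = 0.9   # assumed per-particle overlap (illustrative)
N = 500         # assumed number of correlated environment particles

coherence = 0.5 * overlap ** N
print(coherence)  # tiny but nonzero: FAPP indistinguishable from a mixture
```

The residual term is never exactly zero, which is precisely the "almost an exact mixture" point above.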
 
  • #110
Dmitry67 said:
it creates ALMOST an exact mixture

Dmitry67 said:
Decoherence solves the first part of the measurement problem. It explains how do we get a mixture of the outcomes, but it does not tell what outcome is 'real'. Then you have a choice between BM and MWI.

So it doesn't create a mixture? If it's almost one... well, almost one doesn't make it one.

But if it did create a mixture... there would be a definite physical state of a particular quantum system, and we just don't know it?
 
  • #111
StevieTNZ said:
1 So it doesn't create a mixture? If its almost one... but almost one doesn't make it one.

2 But if it did create a mixture... there would be a definite physical state of a particular quantum system - we just don't know it?

1 Why should we care?
We don't refute the 2nd law of thermodynamics just because *theoretically* the water in a glass can separate itself into hot and cold parts, and the pieces of a broken vase can jump up from the floor to form an unbroken vase. It is possible, but the probability is so tiny that we can ignore it and call an approximate law a *law*. The same is true for decoherence: theoretically it is possible to "undo" a measurement, but again the probability is far too low.

2 The state is exact and is known as the "universal wavefunction". However, we observe only a tiny slice of it, so we can't calculate it. Also, the definition of a "system" in MWI is very complicated.
 
  • #112
ExecNight said:
Well, MWI suggests that for every random event, the universe splits into the available options. I think "Random Event" is the key here. Just because some things can't be mathematically modeled right now doesn't mean they are random.

Actually I will go as far as saying that almost everything above the atomic level can be very well mathematically modeled from the beginning of the universe. Your consciousness is a product of this material world, so considering your actions random is absurd :)

I thought that consciousness was being considered as a quantum event taking place within an area of the brain involving the tubules. No?
 
  • #113
sparkypaul said:
I thought that consciousness was being considered as a quantum event taking place within an area of the brain involving the tubules. No?
No, the Penrose/Hameroff microtubule theory is a view few other scientists find credible; for one thing, it's thought to be impossible to maintain large-scale quantum coherence in biological tissue, see this paper (though there are some criticisms of that paper here, and further discussion here). The wikipedia article discussing their theory also mentions:
Orch OR is no longer considered a good candidate for a quantum source of consciousness. In 2009, Jeffrey Reimers et al. showed that coherent Fröhlich condensates, the states Hameroff and Penrose implicated as the basis of Orch OR, could not exist in biological tissue. They found that coherent Fröhlich condensates of the sort required by Orch OR would require temperatures of between several thousand to several million kelvins, an environment not possible in biological tissue. If the energy required to keep the oscillators in a coherent state for the required 500 ms came from a chemical source, it would require the energy equivalent of a C-C bond being formed or broken every picosecond. The GTP mechanism proposed by Hameroff and Penrose would require the hydrolysis to GDP of approximately 4 or 5 GTP molecules every picosecond, a phenomenon that does not appear to occur in biological systems. [28]
 
  • #114
Dmitry67 said:
1 Why should we care?
We don't refute the 2nd law of thermodynamics because *theoretically* the water in a glass can separate itself into hot and cold parts, and the pieces of a broken vase can jump up from the floor to form an unbroken vase. It is possible, but the probability is so tiny that we can ignore it and call an approximate law a *law*. The same is true for decoherence: theoretically it is possible to ‘undo’ a measurement, but again the probability is far too low.

I care, because it's obvious decoherence doesn't create a mixture, so claiming it does is incorrect. You can't ignore something just because it has a low probability. This was my point in another post in another thread on here: even if something has a 98% probability of happening, and something else 2%, clearly it is possible for the 'something else' to happen.
 
  • #115
JesseM said:
No, the Penrose/Hameroff microtubule theory is a view few other scientists find credible; for one thing, it's thought to be impossible to maintain large-scale quantum coherence in biological tissue, see this paper (though there are some criticisms of that paper here, and further discussion here). The wikipedia article discussing their theory also mentions:

Thanks Jesse.
 
  • #116
StevieTNZ said:
I care, because it's obvious decoherence doesn't create a mixture, so claiming it does is incorrect. You can't ignore something just because it has a low probability. This was my point in another post in another thread on here: even if something has a 98% probability of happening, and something else 2%, clearly it is possible for the 'something else' to happen.

It is not 98% or 99.9%.
Non-diagonal elements decay in a few nanoseconds to 10^-23 or so.
You have a better chance of seeing a dancing red elephant emerging from randomly moving air molecules than of ever being affected by the fact that the mixture is not complete.

If you are still claiming that such probabilities are important, then you should also accept that the 2nd law of thermodynamics is WRONG.
 
  • #117
Dmitry67 said:
It is not 98% or 99.9%.
Non-diagonal elements decay in a few nanoseconds to 10^-23 or so.
You have a better chance of seeing a dancing red elephant emerging from randomly moving air molecules than of ever being affected by the fact that the mixture is not complete.

If you are still claiming that such probabilities are important, then you should also accept that the 2nd law of thermodynamics is WRONG.

Clearly it is wrong, if one were to assign truth values to it being correct or not. An approximation is not an exact representation of reality.

The percentages I gave in my other post were made up to represent what I'm trying to get across.
 
  • #118
"Clearly it is wrong"? Any details?

Decoherence depends on the number of degrees of freedom of the observer system, so the non-diagonal elements become very small after an interaction between a quantum system and an observer. How small depends on the number of atoms in the observer; it is about 1/(Avogadro's number).
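[Editor's note: the scaling with degrees of freedom can be illustrated with a toy sketch. This is not any poster's actual model; the angles and their range are invented purely to show the exponential suppression of the off-diagonal term as the environment grows.]

```python
import numpy as np

# Toy model: a system qubit in superposition interacts with n_env
# environment qubits. Each environment qubit ends up in a slightly
# different state depending on the system's branch, so the off-diagonal
# (interference) term of the system's reduced density matrix picks up
# one overlap factor cos(theta_k) per environment qubit and shrinks
# roughly exponentially with the number of degrees of freedom.
rng = np.random.default_rng(0)

def off_diagonal_after_decoherence(n_env):
    # Random per-qubit "which-path" angles; the overlap of the two
    # conditional environment states is the product of the cosines.
    thetas = rng.uniform(0.1, 0.5, size=n_env)
    return np.prod(np.cos(thetas))

for n in (1, 10, 100, 1000):
    c = abs(off_diagonal_after_decoherence(n))
    print(f"n_env={n:5d}  |off-diagonal| ~ {c:.3e}")
```

Each added degree of freedom multiplies the interference term by a factor slightly below 1, which is why a macroscopic observer (~10^23 atoms) suppresses it to practically nothing almost instantly.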
 
  • #119
Dmitry67 said:
"Clearly it is wrong"? Any details?

Decoherence depends on the number of degrees of freedom of the observer system, so the non-diagonal elements become very small after an interaction between a quantum system and an observer. How small depends on the number of atoms in the observer; it is about 1/(Avogadro's number).
But one can argue that any time there is any level of superposition rather than a statistical mixture, the notion of assigning "probabilities" to different outcomes becomes problematic (if we perform an imperfect measurement in the double-slit experiment which reduces the interference pattern on the screen to almost a non-interference pattern, but still with a little remaining interference, does it make sense to assign a "probability" to whether the particle went through the left slit or the right slit?). One of the most prominent MWI advocates, David Deutsch, takes this position; I'll quote him from p. 332 of https://www.amazon.com/dp/0684814811/?tag=pfamazon01-20:
From the point of view of the interpretation of quantum mechanics, I think decoherence is almost completely unimportant. That's because decoherence is a quantitative matter. The interference phenomena never completely vanish; they only decrease exponentially until you can't be bothered to measure them anymore. The question of what the [interference] terms mean is still there, even if the coefficient in front of them is very small. It's like being a little pregnant. Those terms, however small, raise the same problem. If the argument is supposed to be that superpositions occur at a microscopic level but not to macroscopic objects, that's a bit like saying that you believe your bank is honest at the level of pennies but is cheating you at the level of pounds. It just doesn't make sense. It can't be that there are multiple universes at the levels of atoms but only a single universe at the level of cats.
 
  • #120
The values of "probabilities", or "intensity of existence" as it is called in MWI, have exact values, as they are defined by the wavefunction. Instead of 0.5/0.5 you have, say, 0.5 - 10^-23, which leaves room for the interference terms. So it makes sense to assign a probability.

I agree that outcomes in MWI are never definite, and (although it is very unlikely) you can arrange particles in a way that even 'undoes' the measurement. Also, as the very definition of a "system" is fuzzy, the "outcome" is fuzzy as well.

Still, I fail to understand why Deutsch finds these 10^-23 terms important. These numbers are always below any noise created by macroscopic systems. In fact, if you observe something really weird, like unicorns, you can't tell whether it happened because of the non-zero non-diagonal terms or because the 2nd law of thermodynamics was violated.
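[Editor's note: the claim that such terms sit "below any noise" can be checked with illustrative numbers. The 1e-23 interference term and the 1e18-trial count are assumptions chosen only to set the scales; they come from the figures quoted in this thread, not from a calculation.]

```python
import numpy as np

# A branch weight of 0.5 corrected by a residual interference term of
# ~1e-23, compared with the statistical (shot) noise of estimating that
# probability from repeated measurements.
p_branch = 0.5
interference = 1e-23        # assumed residual off-diagonal contribution
p_corrected = p_branch + interference

n_trials = 1e18             # wildly optimistic number of repeated experiments
shot_noise = np.sqrt(p_branch * (1 - p_branch) / n_trials)

print(f"interference correction: {interference:.1e}")
print(f"achievable shot noise  : {shot_noise:.1e}")
```

Even with 10^18 trials the statistical noise floor is around 5e-10, many orders of magnitude above the interference correction, which is the quantitative sense in which the residual terms are unobservable.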
 
