Many-worlds: When does the universe split?

Summary
The discussion centers on the many-worlds interpretation (MWI) of quantum mechanics, specifically questioning when universe splitting occurs and the role of superposition. Two main alternatives are presented: one suggests superposition exists in a single universe until measurement causes a split, while the other posits that multiple universes exist from the start, revealing themselves upon measurement. Participants express skepticism about both alternatives, finding them implausible and questioning the explanatory power of MWI regarding observed probabilities. The conversation also highlights that decoherence is a measurable feature of quantum mechanics, not exclusive to MWI, and emphasizes the deterministic nature of the Schrödinger equation in the evolution of the wave function. Ultimately, the dialogue reflects a broader uncertainty about the implications and testability of MWI in explaining quantum phenomena.
  • #121
bhobba said:
Maui said:
The fact is you cannot divide physical lengths an infinite number of times.
You might find it illuminating to actually describe, define, or whatever words you want to use, what a length is without invoking the concepts of geometry and real numbers.
He meant that distances in the physical universe cannot be divided into arbitrarily smaller distances in any physically meaningful way. Eventually you reach the Planck length, and at that point you are dealing with distances too small to hold information. In the physical world, two points that are less than one Planck length apart have no Euclidean-like relationship to each other. For example, you cannot meaningfully say that one is closer to you than the other.
 
  • #122
craigi said:
In the terminology that I'm using, a random series of bits holds maximum information, whereas a series of 0's holds minimal information, which makes it synonymous with entropy. I don't doubt that the literature sometimes uses the term information in other contexts too.

Just being clear on terminology - got your meaning now.

I think that's the opposite of the meaning kith was using, though - I think he was using it in the sense of actually conveying information, i.e. something highly compressible, like the English language.

Thanks
Bill
 
  • #123
.Scott said:
He meant that distances in the physical universe cannot be divided into arbitrarily smaller distances in any physically meaningful way. Eventually you reach the Planck length, and at that point you are dealing with distances too small to hold information. In the physical world, two points that are less than one Planck length apart have no Euclidean-like relationship to each other. For example, you cannot meaningfully say that one is closer to you than the other.

We are talking about complex numbers here, as found in the definition of a Hilbert space - the concept of dividing a length was simply for pictorial vividness.

It is these properties that allow the continual splitting by decoherence.

BTW, the concept of length is a very general idea, well and truly divorced from Euclidean geometry - e.g. length is defined in Hilbert spaces and is tied up with inner products and all sorts of abstract ideas - and that's just one example.

My 'challenge' to define length was meant to get you actually thinking about this stuff - many of the greatest mathematicians have, and it is a very subtle concept. Just one result was the concept of measure - of great importance in the theory of Hilbert spaces, and in probability - it's a very general and powerful idea.

BTW, we have no proof one way or the other about whether the concept of length is meaningful at the Planck scale.

For your arguments about information to hold any water, we first need you to give your definition of information.

Thanks
Bill
 
  • #124
bhobba said:
For your arguments about information to hold any water, we first need you to give your definition of information.
This is from post #111:
.Scott said:
Apparently there is a direct conversion from entropy (S) to bits of information (I):
I = \frac{S}{k \ln 2}
where k is Boltzmann's constant (1.38 × 10^-23 J/K).
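Taken purely as arithmetic (whether the two entropies should be identified at all is questioned just below), a minimal Python sketch of what the quoted conversion says; the helper name is mine:

```python
import math

k = 1.38e-23  # Boltzmann's constant, J/K

def entropy_to_bits(S):
    """Convert a thermodynamic entropy S (in J/K) to bits,
    assuming the identification I = S / (k ln 2) holds."""
    return S / (k * math.log(2))

# One k*ln(2) of entropy corresponds to exactly one bit
# (the entropy cost of erasing a single bit, per Landauer).
print(entropy_to_bits(k * math.log(2)))  # 1.0
```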
 
  • #125
.Scott said:
Apparently there is a direct conversion from entropy (S) to bits of information (I):
I = \frac{S}{k \ln 2}

Errrr. Not quite.

The mathematical form of information entropy and Von Neumann entropy is the same - but they talk about different things.
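For reference, the two formulas side by side: Shannon's information entropy of a probability distribution \{p_i\} is H = -\sum_i p_i \log_2 p_i, while the von Neumann entropy of a density matrix \rho is S = -k \, \mathrm{Tr}(\rho \ln \rho), which reduces to -k \sum_i \lambda_i \ln \lambda_i in the eigenbasis of \rho. The forms match - a weighted logarithm over probabilities - but H counts abstract bits about a message source, while S carries thermodynamic units through k.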

I have to mention something though.

You are making all these wild claims based upon 'apparently' - not sure that is a good way to go about things.

Added later:

Did a bit of hunting around on this, and it seems that equating the two concepts is quite controversial:
http://arxiv.org/pdf/quant-ph/0006087v3.pdf

Thanks
Bill
 
Last edited:
  • #126
bhobba said:
Just being clear on terminology - got your meaning now.

I think that's the opposite of the meaning kith was using, though - I think he was using it in the sense of actually conveying information, i.e. something highly compressible, like the English language.

Thanks
Bill

This has been troubling me a bit too. There's a bit of a paradox here. It seems to me that although in common parlance randomness contains no meaningful information, any definition based upon this would be highly dependent upon context. On the contrary, we must conclude that randomness, in the absence of a specific context, contains maximum information, even if we don't know what it means.

Without wishing to drag the thread into a discussion about randomness, let me illustrate what I mean. Suppose you generate a series of random bits from quantum measurements. We would call them random and say that they contain no meaningful information. Now suppose you were to use those bits as a password and encrypt something with them. Decrypting or cracking would seek exactly that "information". We can go on to use that sequence of bits in all manner of ways, duplicating it and basing ever more complex systems on it, to the extent that no one would argue that it was random other than in a historical sense.

Without a specific closed context, we can't possibly say that the bits contain no information. If the context is left open, then the only meaningful definition must conclude that a random sequence of bits contains the maximum possible information. If we treat a sequence of bits as just that, nothing more than a sequence of bits, then we can only use such a definition.
 
Last edited:
  • #127
craigi said:
This has been troubling me a bit too. There's a bit of a paradox here. It seems to me that although in common parlance randomness contains no meaningful information, any definition based upon this would be highly dependent upon context. On the contrary, we must conclude that randomness, in the absence of a specific context, contains maximum information, even if we don't know what it means.

Without wishing to drag the thread into a discussion about randomness, let me illustrate what I mean. Suppose you generate a series of random bits from quantum measurements. We would call them random and say that they contain no meaningful information. Now suppose you were to use those bits as a password and encrypt something with them. Decrypting or cracking would seek exactly that "information". We can go on to use that sequence of bits in all manner of ways, duplicating it and basing ever more complex systems on it, to the extent that no one would argue that it was random other than in a historical sense.

Without a specific closed context, we can't possibly say that the bits contain no information. If the context is left open, then the only meaningful definition must conclude that a random sequence of bits contains the maximum possible information. If we treat a sequence of bits as just that, nothing more than a sequence of bits, then we can only use such a definition.
With the MWI, that "random" sequence that you generated is part of the "serial number" of the world you ended up in. All the other "random" sequences were also generated, but they're being read by other-worldly instances of you.

There really isn't any useful meaning to "meaningful data" in physics.
 
  • #128
.Scott said:
With the MWI, that "random" sequence that you generated is part of the "serial number" of the world you ended up in. All the other "random" sequences were also generated, but they're being read by other-worldly instances of you.

It's a bit off-kilter, but that is one way to look at it. As I've explained before, your hypothetical serial number is a product of decoherence and is just as relevant to the CI as to the MWI. The difference is that in the MWI the other serial numbers occur in other, inaccessible universes, whereas in the CI those other serial numbers don't occur. Now you're probably going to argue that in the CI you only need one serial number, but that just ignores the mechanism that creates it, namely decoherence. Just because you don't see a reason to assign a hypothetical serial number to your CI universe doesn't mean that the process which generated it has gone away.

Try to look at it this way: your serial number in the MWI distinguishes your universe from the others that branched off, which you could have been in. Your serial number in the CI distinguishes your universe from the others you could have been in. Both are purely hypothetical serial numbers, but we can associate the generation of both with exactly the same process and the same entropy (information) production.

Is this making sense to you?
 
Last edited:
  • #129
craigi said:
It's a bit off-kilter, but that is one way to look at it. As I've explained before, your hypothetical serial number is a product of decoherence and is just as relevant to the CI as to the MWI. The difference is that in the MWI the other serial numbers occur in other, inaccessible universes, whereas in the CI those other serial numbers don't occur. Now you're probably going to argue that in the CI you only need one serial number, but that just ignores the mechanism that creates it, namely decoherence. Just because you don't see a reason to assign a hypothetical serial number to your CI universe doesn't mean that the process which generated it has gone away.

Try to look at it this way: your serial number in the MWI distinguishes your universe from the others that branched off, which you could have been in. Your serial number in the CI distinguishes your universe from the others you could have been in. Both are purely hypothetical serial numbers, but we can associate the generation of both with exactly the same process and entropy production.
The difference between the CI and the MWI was more important to me before I was able to get general acknowledgment that decoherence causes an increase in information. I have never really been an advocate of the MWI - but I'll use it if I think it will make an explanation easier. No one has mentioned my little cut at the MWI in post #114.

It's like virtual photons. I know that there's math behind them that doesn't really correspond to a virtual photon, but that doesn't make the concept worthless. And if decoherence from a measurement ends up generating 2.863 worlds, I'm not going to throw out the MWI as worthless. Perhaps I'll just wonder if my world is only 86.3% complete.
 
  • #130
Actually, I shouldn't worry about there being fractional worlds, because decoherence is always a quantum event. I was thinking that where a photon lands in an interference pattern is a "random continuum", but it really isn't. The screen on which the photon lands is actually a large set of discrete targets, each one representing a world out of the many possibilities.

I wonder if all points on the screen represent equal increases in entropy. Probably not. A photon landing in a relatively dark area would be more informative than one landing in a bright area.
 
  • #131
.Scott said:
Actually, I shouldn't worry about there being fractional worlds, because decoherence is always a quantum event. I was thinking that where a photon lands in an interference pattern is a "random continuum", but it really isn't. The screen on which the photon lands is actually a large set of discrete targets, each one representing a world out of the many possibilities.

I wonder if all points on the screen represent equal increases in entropy. Probably not. A photon landing in a relatively dark area would be more informative than one landing in a bright area.

In a dark area, not only does the photon decohere, it also dissipates its energy within the screen, which involves a further entropy increase.

In a light area, the diffuse component gives a probability of decoherence and of reflecting photons at different wavelengths. The specular component gives a probability of reflecting the photon coherently, i.e. with no decoherence.

In reality, a surface would be a combination of the two.

Furthermore, the spatial probability distribution of the photon isn't going to be homogeneous across the screen.

Your simplified picture of information isn't going to be good enough to tackle this problem, since entropy deals with probabilities too.

Basically, your idea of enumerating states isn't enough to measure information. Entropy increases when, in your model, new states are enumerated, but that's as far as it goes. You'll need a proper understanding of the physical processes to work out how much the entropy increases by.
 
Last edited:
  • #132
Just a brief comment: in this thread, the following notions of information have been used so far:
1) subjective information which increases if we learn something we didn't know before
2) information content which corresponds to the entropy
3) information capacity which corresponds to the maximum of the entropy

During a measurement, some of them increase, some decrease and some remain constant.
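To put rough numbers on notions 2) and 3) (notion 1) is observer-relative, so it has no single formula), here is a minimal Python sketch for a classical two-outcome source; the biased distribution is just an illustrative assumption:

```python
import math

def shannon_entropy_bits(p):
    """Information content: Shannon entropy of a distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Illustrative biased source: two outcomes with probabilities 0.9 and 0.1.
p = [0.9, 0.1]

content = shannon_entropy_bits(p)   # notion 2): about 0.469 bits
capacity = math.log2(len(p))        # notion 3): the maximum entropy, 1 bit

print(f"content  = {content:.3f} bits")
print(f"capacity = {capacity:.3f} bits")
```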
 
  • #133
I'm starting to like MWI more and more.

One of my questions has an obvious answer that occurred to me last night - but with a cool exception that didn't occur to me until this morning. I doubt it's anything new - but I've never heard it addressed from the MWI outlook.

My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

I = -\frac{\ln p}{ \ln 2 } ... but only if you're sure of the time.

That caveat "only if you're sure of the time" describes the problem as the universe approaches heat death. As all your clocks fail and the best estimate of time becomes fuzzier and fuzzier, decoherence events will become less and less effective at generating additional information until there is no meaningful definition of either time or a change in entropy.

I don't think this is anything new, but with MWI, it's easy to see.

If your universe (or any isolated portion of it) has a finite radius beyond which it does not exchange energy, and a fixed and finite mass, then it will have a Bekenstein bound: a total information capacity. As time progresses, more and more information will be generated by decoherence events, until that capacity is challenged. At that point, it will become common for a decoherence event in one world to generate the same result as a different decoherence event in another world. This remerging will result in reduced entropy. With enough remerging, a decoherence event could actually result in a substantial decrease in entropy.

What this looks like in the resulting world is an inability to determine the history of the world. There would be multiple histories from different world lines, not all necessarily spanning the same amount of time. Objects in the world, and the world itself, would have a fuzzy age - and as the information capacity was approached and remerging became the norm, the age would become fuzzier and fuzzier.

So, when there is a Bekenstein bound:
I \leq -\frac{\ln p}{ \ln 2 }
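For what it's worth, the quantity described here is the standard surprisal (self-information) of an outcome, and the 1-in-32 example checks out. A minimal sketch (the helper name is mine):

```python
import math

def added_bits(p):
    """Surprisal of an outcome with probability p, in bits: log2(1/p) = -log2(p)."""
    return -math.log2(p)

print(added_bits(1 / 32))  # 5.0 -- the 1-in-32 example
print(added_bits(1 / 2))   # 1.0 -- a fair coin flip adds one bit
```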
 
  • #134
.Scott said:
My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

I = -\frac{\ln p}{ \ln 2 } ... but only if you're sure of the time.

I'm going to try to be brief here:

1) Your text doesn't match your maths.
2) How do you feel about your equation giving a non-integer number of bits?
3) Where did your logarithm come from? You seem to suggest it just came to you, rather than through any deductive reasoning. It looks like you've borrowed it from something you picked up elsewhere, without understanding it.
4) Wherever you got it from, you've missed the normalisation term.
5) The choice of base for the logarithm is arbitrary.
6) Why do you feel the need to invoke an observer's measurement of time to prevent your equation becoming invalid?
7) Would it not be easier to suggest that at heat death, coherence and hence decoherence cease to take place - statistically speaking, of course?
8) You still seem to think that this is relevant to the MWI, but this is just entropy generated at decoherence.
9) When you talk about Bekenstein bounds, you ignore other sources of entropy.
10) Entropy has an actual definition in physics, involving the Boltzmann constant.
11) My advice is to give the quantum physics a rest for a while and read a book on statistical mechanics. When you come back to it, this stuff should be a lot clearer. What you're doing here seems like trying to learn addition and subtraction by deconstructing calculus.
 
Last edited:
  • #135
craigi said:
I'm going to try to be brief here:
1) Your text doesn't match your maths.
It's regarding this:
.Scott said:
My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

I = -\frac{\ln p}{ \ln 2 } ... but only if you're sure of the time.
From the "1 in 32" example, p = 1/32, 1/p = 32, log base 2 of 32 is ln(32)/ln(2) = 5.
When I wrote the equation, I converted the log base 2 to "ln" just to make it better match up with the entropy/information conversion terms.
craigi said:
2) How do you feel about your equation giving a non-integer number of bits?
That was a misconception on my part. I retracted it at post #130.
craigi said:
3) Where did your logarithm come from? You seem to suggest it just came to you, rather than through any deductive reasoning. It looks like you've borrowed it from something you picked up elsewhere, without understanding it.
If you have "b" bits, you can describe 2^b states.
So if you have n states, you will need log base 2 of n bits to encode them - not worrying for the moment about whether that yields an integer number of bits.
Now if all of the states are of equal probability (1/n), that is as far as you can go. But what if there are three states with p = 50%, 25%, and 25%? Well, those 25% states are just like 1 of 4 choices and will require 2 bits, while the 50% state will require only 1. So what really matters is the probability, not the number of choices.
I could be more formal with the math, but it would be more work and I don't think it would make it more convincing.
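The 50%/25%/25% example can be checked directly, and the probability-weighted average of the per-outcome bit counts is just the Shannon entropy of the distribution. A quick sketch:

```python
import math

probs = [0.5, 0.25, 0.25]

# Per-outcome code lengths: -log2(p) gives 1, 2 and 2 bits.
lengths = [-math.log2(p) for p in probs]

# Expected bits per event: 0.5*1 + 0.25*2 + 0.25*2 = 1.5 (the Shannon entropy).
expected = sum(p * l for p, l in zip(probs, lengths))

print(lengths, expected)  # [1.0, 2.0, 2.0] 1.5
```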
craigi said:
4) Wherever you got it from, you've missed the normalisation term.
I don't understand.
craigi said:
5) The choice of base for the logarithm is arbitrary.
The Bekenstein bound is expressed in bits (base 2). An arbitrary choice, but I like it.
craigi said:
6) Why do you feel the need to invoke an observer's measurement of time to prevent your equation becoming invalid?
That's really the crux of the issue - and the part that I think is really cool for the MWI.
I don't feel it; for a universe with a Bekenstein bound, I know it.
It's really a two-part argument. But let me give you part one for now.

We can use the universe as a whole as our baseline. So when all the worlds are taken as a whole, we are at the "base information" or "no information" state. We will also presume a time zero (T0) for the universe, a time before any decoherence event has happened. At that moment, all possible worlds are possible, so we have "no information". That moment doesn't really need to exist, but it makes for a simpler picture. In fact, part 2 of the argument involves getting rid of T0.

So from T0, we allow time to run forward and entropy to steadily increase. As we do this, we get a more and more diverse collection of worlds. From the MWI point of view, because we are increasing the diversity of the worlds, the entropy is increasing. But as we approach maximum entropy, we start to run out of diversity. It's kind of like the lottery. If the jackpot is only $1M and you win, you will probably discover that your ticket is the only one with the winning number. But if the jackpot reaches $1B and you win, you will probably have to share it with other ticket holders who chose the same number. So when several worlds pick the same state as one of their "tickets", you have a new world with no unambiguous history. You also have less increase in entropy. In fact, if one of the contributors to your new shared world is part of your history, you will have, in effect, skipped back through time - losing entropy.

Now those are the characteristics of "heat death". But what does it look like from within the world experiencing it? How would you fight it? Well, you would try to keep good time - so that if you suddenly skipped back or forth through time, you'd have a clock that told you what happened. But clocks do not last forever. They eventually blend in with the heat death. All time-keeping mechanisms, every one of them, will eventually be gone. Then we will drift into worlds where decoherence has no meaningful consequence for entropy. All of the worlds' states will have been generated, and time will simply cause each world to move into one of several other already-existing worlds.

craigi said:
8) You still seem to think that this is relevant to the MWI, but this is just entropy generated at decoherence.
I'm not saying that the tail-off in entropy generation can't be explained with other interpretations. Only that I find MWI interesting because it makes it easy to see what happens as heat death approaches.
craigi said:
9) When you talk about Bekenstein bounds, you ignore other sources of entropy.
I'm not sure I understand. In all cases, I am presuming that we are constrained by a Bekenstein bound - although small leaks across the bound would not change the overall picture.
craigi said:
10) Entropy has an actual definition in physics, involving the Boltzmann constant.
Yup. And that is the one that I am talking about.
I am really saying that there can be world conditions where decoherence will not result in a net increase in entropy.
 
Last edited:
  • #136
I don't have time to go through your entire post, but you're still missing this error. You say:

.Scott said:
My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

Then you use this equation:

I = -\frac{\ln p}{\ln 2}

I think we both understand that this is equivalent to:

I = -\log_2 p

Where is the inverse of the probability that you talk about in the text?

The equation that you use bears some resemblance to actual definitions of information and entropy, but it doesn't match what you describe. I find it hard to imagine that it's a fluke. I think you've borrowed this part of an equation from actual definitions, but failed to interpret it in terms of your own view of information.
 
Last edited:
  • #137
craigi said:
I don't have time to go through your entire post, but you're still missing this error. You say:



Then you use this equation:

I = -\frac{\ln p}{\ln 2}

I think we both understand that this is equivalent to:

I = -\log_2 p

Where is the inverse of the probability that you talk about in the text?

The equation that you use bears some resemblance to actual definitions of information and entropy, but it doesn't match what you describe. I find it hard to imagine that it's a fluke. I think you've borrowed this part of an equation from actual definitions, but failed to interpret it in terms of your own view of information.
I = \log_2 (1/p) = -\log_2 p
 
  • #138
.Scott said:
I = log2 (1/p) = - log2 p

You're absolutely correct. I wrote my last post in a hurry and realized the mistake as soon as I left. Ignore everything I said in it. It's wrong.
 
  • #139
.Scott said:
There would be multiple histories from different world lines, not all necessarily spanning the same amount of time. Objects in the world, and the world itself, would have a fuzzy age - and as the information capacity was approached and remerging became the norm, the age would become fuzzier and fuzzier.
This idea of Penrose's might be interesting to you.

I still have some objections. Unfortunately, your non-standard terminology makes it hard to get to the core of the issue. For example, it doesn't make sense to call an event which reduces entropy a "decoherence event", although the underlying idea may well be valid.

The most obvious point is about the Bekenstein bound. The bound takes its maximum entropy value for a black hole. A black hole is not isolated from its environment: it absorbs matter and emits Hawking radiation. My understanding is that the bound arises in the first place because a region of a certain radius containing a certain amount of matter (resp. energy) cannot be isolated from its surroundings any better than a black hole. I don't see how it makes sense to apply this bound to the universe as a whole.

/edit: Also I think we need to keep in mind that we are not talking about the MWI here but about a speculative combination of the MWI and general relativity. As far as I know, the Bekenstein bound is derived from both GR and QM. We know that the simple combination of GR and QM is impossible at least in some cases. So the bound could be an expression of this incompatibility.
 
Last edited:
  • #140
Indeed. This is where it gets very complicated.

As Bob suggested, decoherence is just a probabilistic process, in the same way as other entropic events. In theory, we could consider a concept of recoherence, but this really brings in the question of what we mean by time.

We have time from the relativistic space-time continuum, time as a parameter in the Schrödinger equation, and time as increased entropy. I don't think we know how to unite them. There are even concepts of time being directed by the expansion of the universe, and suggestions that time and space may extend beyond the universe (observable or otherwise). The reason that we, as humans, observe time in the direction that we do is the entropic direction of time. Entropy is at the heart of all the chemical and biological processes that give rise to our consciousness. A clock behaves the same way. This is really a macroscopic definition of time. Is it the same macroscopic definition of time as the one from general relativity? It seems not.

If we talk of decoherence happening backwards, then perhaps we're actually talking about microscopic time reversing for the particle (or entangled particles, even), but then does this arrow of time match the direction given by the parameter in the Schrödinger equation? It would seem not.

Scott's idea of "remergence" is based upon his serial-number concept. In it, the same serial number emerges from different branches. I'm not even sure how that can occur, because as he describes his serial number, when a branch occurs he splits a bit to generate two new ones. Nevertheless, presuming that there's an extra mechanism in his serial-number generation, I don't see how this could map to a reversal of decoherence or entropy. I would expect it just to map to probabilities of occurrence of a particular macrostate, rather than anything more interesting.
 
Last edited by a moderator:
  • #141
kith said:
This idea of Penrose's might be interesting to you.
Just looking at the chapter titles in sections 1 and 2 of his book, it appears Penrose and I are on the same page.
This won't be the first time. I give Penrose kudos for being emphatic that consciousness, at its root, is an artifact of QM. I think most physicists would agree - but it's not an allowed topic on this board.

kith said:
I still have some objections. Unfortunately, your non-standard terminology makes it hard to get to the core of the issue. For example, it doesn't make sense to call an event which reduces entropy a "decoherence event", although the underlying idea may well be valid.
Given my background, and specifically my lack of a serious physics background, I am certainly open to challenges on my use of terminology. Although I was familiar with the term before using it on this board, my use of it here was based on how previous posts were using it.
Having reviewed the wiki article, the term "decoherence" clearly carries some luggage that is interesting but not fundamental to how I am describing the increase in information or entropy.

What is essential to my argument is that decoherence (or any other rose, by any other name) creates a set of outcomes with no possibility, even in principle, of knowing which one you will observe. The real problem is with the notion of "increasing entropy" or "adding information". I noticed that Penrose's section 2 title is "The oddly special nature of the Big Bang". It did create something special: an environment filled with clocks. We know the Big Bang happened about 13.8 billion years ago! In our current universe, there is no reasonable doubt about which world predates which. Whenever there is a decoherence event, all of the possible outcomes create worlds that have never occurred before, so our formula for added information is not challenged. However, if we are in a world where heat death has occurred, and the world has limited mass and space, then we can't presume that entropy has increased - even though it is the very same type of event.
For example, in the current world, a single photon leaves a flood lamp through a minuscule aperture and strikes one of a billion atoms. Each of those atoms represents the transition into a world that has never existed since the Big Bang. So if, for one particular atom, the chance of being struck is one in a billion, then the world entered when the photon hits that atom will hold an additional 30 bits relative to the pre-decoherence world (log base 2 of a billion is about 29.9 - call it 30).
But that same scenario in a world that has, in principle, lost track of time will add 30 bits but possibly land you in a world that has already been "created" - one with a shorter timeline than the world in which the photon had not yet decohered. The decoherence event is fundamentally the same. The probability is the same. You still have 30 "added" bits, but they're not really new bits, because they don't land you in a unique, never-before-seen world.
kith said:
The most obvious point is about the Bekenstein bound. The bound takes its maximum entropy value for a black hole. A black hole is not isolated from its environment: it absorbs matter and emits Hawking radiation. My understanding is that the bound arises in the first place because a region of a certain radius containing a certain amount of matter (resp. energy) cannot be isolated from its surroundings any better than a black hole. I don't see how it makes sense to apply this bound to the universe as a whole.

/edit: Also I think we need to keep in mind that we are not talking about the MWI here but about a speculative combination of the MWI and general relativity. As far as I know, the Bekenstein bound is derived from both GR and QM. We know that the simple combination of GR and QM is impossible at least in some cases. So the bound could be an expression of this incompatibility.
The critical part of the Bekenstein bound, for my argument, is that it puts a cap on the amount of information that can be held by any world. If there is no such limit, then there is no upper bound on entropy, we don't have to worry as much about heat death, and we don't have to worry at all about being, in principle, unable to track time.
 
  • #142
.Scott said:
For example, in the current world, a single photon leaves a flood lamp through a minuscule aperture and strikes one of a billion atoms. Each of those atoms represents the transition into a world that has never existed since the Big Bang. So if, for one particular atom, the chance of being struck is one in a billion, then the world entered when the photon hits that atom will hold an additional 30 bits relative to the pre-decoherence world (log base 2 of a billion is about 29.9 - call it 30).
But that same scenario in a world that has, in principle, lost track of time will add 30 bits but possibly land you in a world that has already been "created" - one with a shorter timeline than the world in which the photon had not yet decohered. The decoherence event is fundamentally the same. The probability is the same. You still have 30 "added" bits, but they're not really new bits, because they don't land you in a unique, never-before-seen world.

You misunderstand heat death. At heat death there are no flood lamps and there are no apertures. There are no atoms and there is no coherence to decohere.
 
  • #143
craigi said:
Indeed. This is where it gets very complicated.

As Bob suggested, decoherence is just a probabilistic process, in the same way as other entropic events. In theory, we could consider a concept of recoherence, but this really brings in the question of what we mean by time.

We have time from the relativistic space-time continuum, time as a parameter in the Schrödinger equation, and time as increased entropy.
I'm hoping you can see from my last post that I really was saying that decoherence can cause a reduction in entropy. The issue isn't with the local particles involved in the decoherence but with the "other-worldly" circumstances.
In the MWI, your space-time continuum is assembled from tiny instants of time, each leading to two or more others. The time part of the space-time continuum is what drives the laws of physics and transitions you from one world to the next. An increase in total entropy is what provides us with a sense of the long-term direction of time. If you were dropped into a world at heat death, you could still perceive time, but after a googol years you would die and blend in with the heat - and all of your time-keeping potential would be lost.
craigi said:
There are even concepts of time being directed by the expansion of the universe, and suggestions that time and space may extend beyond the universe (observable or otherwise). The reason that we, as humans, observe time in the direction that we do is the entropic direction of time.
Sounds good to me.

craigi said:
Entropy is at the heart of all the chemical and biological processes that give rise to our consciousness.
I would be more specific and say "our human consciousness", only because I believe that consciousness is a basic part of all physics.
craigi said:
If we talk of decoherence happening backwards, then perhaps we're actually talking about microscopic time reversing for the particle (or entangled particles, even), but then does this arrow of time match the direction given by the parameter in the Schrödinger equation? It would seem not.
I am not saying that time is reversed at the particle level. There is still an arrow of time. It is simply that heat death can lead to a situation where every step forward in time brings you to a world with less history.
craigi said:
Scott's idea of "remergence" is based upon his serial-number concept. In it, the same serial number emerges from different branches. I'm not even sure how that can occur, because as he describes his serial number, when a branch occurs he splits a bit to generate two new ones. Nevertheless, presuming that there's an extra mechanism in his serial-number generation, I don't see how this could map to a reversal of decoherence or entropy. I would expect it just to map to probabilities of occurrence of a particular macrostate, rather than anything more interesting.
You can only avoid remergence in a world with unlimited information capacity. That "extra mechanism" is the limit, such as the Bekenstein limit. If there is no limit, remergence can be avoided.
 
  • #144
.Scott said:
I'm hoping you can see from my last post that I really was saying that decoherence can cause a reduction in entropy. The issue isn't with the local particles involved in the decoherence but with the "other-worldly" circumstances.
In the MWI, your space-time continuum is assembled from tiny instants of time, each leading to two or more others. The time part of the space-time continuum is what drives the laws of physics and transitions you from one world to the next. An increase in total entropy is what provides us with a sense of the long-term direction of time. If you were dropped into a world at heat death, you could still perceive time, but after a googol years you would die and blend in with the heat - and all of your time-keeping potential would be lost.


Sorry man, none of that is physics. You'd be better off taking it somewhere like David Icke's site.

You seem to presume that I have a resilience to a heat-death scenario similar to a black hole's. I can assure you that I don't. There are few estimates you could make in physics that are as far out.

If you're interested in physics, it's time to take reading more seriously and put your homegrown theories and postmodern philosophies to bed for a while.
 
Last edited:
  • #145
.Scott said:
The critical part of the Bekenstein bound, for my argument, is that it puts a cap on the amount of information that can be held by any world.
The Bekenstein bound is about a region of space. You either have to show that the worlds are confinable to such regions of space, or that the Bekenstein bound applies to the universe as a whole. I don't think either of these statements is correct.

Also some points about terminology, as well as about relevant concepts (the first point is illustrated with a short numerical sketch after this list):
- In QM, the entropy of a subsystem can be higher than the entropy of the whole system (this is because of entanglement).
- Decoherence is simply the evolution from a state with zero entropy towards a state with maximal entropy with respect to some constraints.
- Decoherence occurs only if an environment is present, and fully decohered states are only approximately stable.
- Decoherence is not the cause of anything; it is the result of entanglement in the whole system.
- The same thing which leads to entanglement/decoherence can lead to disentanglement/recoherence.
- This thing is simply the Schrödinger equation for the whole system.
- The Schrödinger equation preserves entropy (note that the Schrödinger equation isn't valid for the subsystem).
- All of this can be observed in experiments, so none of it is specific to the MWI.
- Difficulties arise only if you ask about the relation between the QM system and the experimenter.
- There are no decoherence "events". If you talk about selecting outcomes, you are not talking about decoherence but about collapse.
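A minimal numpy sketch of the first point (the state and helper function here are illustrative assumptions, not anything from the thread): for a Bell pair, the two-qubit system as a whole is pure, with zero entropy, while either qubit on its own carries a full bit.

```python
import numpy as np

def vn_entropy_bits(rho):
    """Von Neumann entropy -Tr(rho log2 rho), computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Bell state (|00> + |11>)/sqrt(2): a pure state of two qubits.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)                  # density matrix of the whole system

# Reduced state of qubit A: partial trace over qubit B.
rho_A = np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))

print(vn_entropy_bits(rho))    # 0.0 -- the whole system is pure
print(vn_entropy_bits(rho_A))  # 1.0 -- the entangled subsystem holds one bit
```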
 
Last edited:
  • #146
kith said:
The Bekenstein bound is about a region of space. You either have to show that the worlds are confinable to such regions of space, or that the Bekenstein bound applies to the universe as a whole. I don't think either of these statements is correct.
I never presumed they were. If the universe is allowed either to increase in mass without limit or to increase spatially without limit, then there will always be a "clock".

kith said:
Also some points about terminology, as well as about relevant concepts:
- In QM, the entropy of a subsystem can be higher than the entropy of the whole system (this is because of entanglement).
...
- The same thing which leads to entanglement/decoherence can lead to disentanglement/recoherence.
- This thing is simply the Schrödinger equation for the whole system.
- The Schrödinger equation preserves entropy (note that the Schrödinger equation isn't valid for the subsystem).
...
- There are no decoherence "events". If you talk about selecting outcomes, you are not talking about decoherence but about collapse.
Thanks. I actually picked that term up from you. In post #2 you said "In the MWI, decoherence is what splits the worlds...". From what you're saying now, I suspect that collapse is what splits them in a more permanent fashion.
 
  • #147
.Scott said:
I never presumed they were. If the universe is allowed either to increase in mass without limit or to increase spatially without limit, then there will always be a "clock".
Even if this isn't the case: it is far from obvious that the Bekenstein bound should apply to the isolated system of the whole universe. As I said, the bound is reached for a black hole, which is an open system.

.Scott said:
Thanks. I actually picked that term up from you. In post #2 you said "In the MWI, decoherence is what splits the worlds...". From what you're saying now, I suspect that collapse is what splits them in a more permanent fashion.
Yes, this may have been a little misleading. In the MWI, full decoherence during an observation is interpreted as a splitting of worlds. In the Copenhagen interpretation, a fully decohered state is interpreted as a set of possibilities from which one "true" observation outcome is chosen by an additional process called collapse. In the MWI, this process is simply absent because all outcomes are equally "true".

The splitting of worlds gets its significance only from the inside perspective of an observer who performs a series of observations. If there were an entropy limit, there would be a configuration where all the subsystems you consider important are fully decohered, and recoherence would be inevitable. The worlds would remerge. But at this point the observer, who can only exist in a low-entropy state, would be gone. Once his inside perspective is lost, why should we continue to take such a viewpoint?
 
Last edited:
