Many-worlds: When does the universe split?

In summary, the many-worlds interpretation of quantum mechanics holds that the wave function of the universe branches when a measurement is performed, and when and how that splitting happens can be interpreted in a variety of ways.
  • #106
bhobba said:
That is WAY wrong. A real or complex valued function encodes an infinite amount of information. That is why the partitioning of it can continue indefinitely and each sub-partition contains an infinite amount of information.

QM is a MATHEMATICAL MODEL and as such is exactly that - it contains mathematical objects created by mathematicians with whacky counter-intuitive properties.

Thanks
Bill



That is because you are so deeply involved in your mathematical model that you seem to forget that:


1. QM is just a model
2. it's not the real world out there (the real world is actually not described by QM as it is - major flaw)
3. your considerations are practically wrong because of the Planck length limit. You cannot divide length further than the limit allows.

You are basically equating mathematics with physics, and in the case you were presenting, the experimental confirmation of your theory (that a line's length is infinitely divisible) fails at the Planck length, and that is all physics cares about.

You do understand that a real interval of any length contains exactly the same information as a real interval of any other length? Split any length in two and each part contains exactly the same information as before it was split?



If this quote was about quantum information and I misunderstood your idea (it's not obvious to me that it is so), I retract my statements about the practical limit on the amount of information that can be encoded in a given volume/area.
 
Last edited:
  • #107
An increase in entropy corresponds to a decrease of information. This is the opposite of .Scott's assertions.

This thread is not productive because he's focusing on the collapse part (which he calls "which world information") and ignores decoherence, while most of the others talk about decoherence only.
 
  • #108
kith said:
What are A and B?
They're two orthogonal measurements. One disappears when you measure the other.
 
  • #109
.Scott said:
They're two orthogonal measurements. One disappears when you measure the other.
Okay, so initially we have a superposition in the A direction and don't know whether the spin is A+ or A- but we know it is B+. After the measurement in the A direction, we know it is -say- A- but not whether it is B+ or B-. Where's the net information increase?
 
  • #110
kith said:
Okay, so initially we have a superposition in the A direction and don't know whether the spin is A+ or A- but we know it is B+. After the measurement in the A direction, we know it is -say- A- but not whether it is B+ or B-. Where's the net information increase?

It's in the measuring apparatus. You can't make a measurement without it. It's not the measured particle that stores the information increase but the macroscopic system that is used to measure it. The superposition terms are very quickly and irreversibly dissipated through the extensive coupling, causing a small amount of disorder, which takes an increased amount of information to represent.
 
Last edited:
  • #111
bhobba said:
.Scott said:
The information I am talking about is not specific to the particle. I am not saying that the particle itself holds more information. I am saying that once a measurement is made of any truly "random" event (which would include the selection of an MWI world), there is more total information in the world.
What we are saying is that is WRONG. The information encoded in the complex valued wavefunction (or more correctly state) of the universe is infinite. When that wavefunction is 'partitioned' by decoherence each world also contains infinite information. This can continue indefinitely without bound with no information being gained or lost.

Thanks
Bill

There is a Bekenstein bound that limits the amount of information based on energy (E) and radius (R):
[itex]S \leq \frac{2 \pi k R E}{\hbar c}[/itex] (thanx Wikipedia)
All of those other symbols are constants except for the E and R.

Apparently there is a direct conversion from Entropy (S) to Bits of information (I):
[itex] I = \frac{S}{k \ln 2 }[/itex]

So: [itex] I \leq \frac{2 \pi R E}{\hbar c \ln 2}[/itex]

What makes the universe different from a line is the Planck distance.
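As a quick numerical sketch of the bound above (the radius and energy below are purely illustrative assumptions of mine, not values from any post in this thread):

[code]
from math import pi, log

# Physical constants (SI)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m, energy_J):
    """Upper bound on information, in bits, for a sphere of radius R and energy E:
    I <= 2*pi*R*E / (hbar * c * ln 2)."""
    return 2 * pi * radius_m * energy_J / (hbar * c * log(2))

# Illustrative numbers only: 1 kg of mass-energy (E = m c^2) inside a 0.1 m radius.
print(f"{bekenstein_bits(0.1, 1.0 * c**2):.2e} bits")   # roughly 2.6e42 bits
[/code]

Even for everyday masses and sizes the bound is enormous, but it is finite - which is the only point that matters for the argument here.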

kith said:
An increase in entropy corresponds to a decrease of information. This is the opposite of .Scott's assertions.
Please note the equation above:
[itex] I = \frac{S}{k \ln 2 }[/itex]
 
Last edited:
  • #112
craigi said:
kith said:
Okay, so initially we have a superposition in the A direction and don't know whether the spin is A+ or A- but we know it is B+. After the measurement in the A direction, we know it is -say- A- but not whether it is B+ or B-. Where's the net information increase?
It's in the environment. You can't make a measurement without it. It's not the measured particle that stores the information increase but the macroscopic system that is used to measure it, together with its coupling to the external environment.
Yes!
 
  • #113
Maui said:
1. QM is just a model

And yet it makes predictions remarkably consistent with experiment.

Maui said:
2. it's not the real world out there (the real world is actually not described by QM as it is - major flaw)

The real world not described by QM? We have zero evidence for that.

Maui said:
3. your considerations are practically wrong because of the Planck length limit. You cannot divide length further than the limit allows.

And you know that's because QM is wrong exactly how? We know our current theories break down there, but whether that's because QM is wrong is far from certain. In fact the best current candidate for doing exactly that, String Theory, is a Quantum Theory.

Maui said:
You are basically equating mathematics with physics, and in the case you were presenting, the experimental confirmation of your theory (that a line's length is infinitely divisible) fails at the Planck length, and that is all physics cares about.

Our current theories break down at the Planck length. Whether that is because the continuity assumption fails is unknown.

BTW I am not saying that mathematics is physics. I am saying exactly what Feynman said:


Physics is not mathematics, but it is written in the language of mathematics. You can't pick and choose which aspects of that language to ignore.

Thanks
Bill
 
Last edited by a moderator:
  • #114
craigi said:
Are you aware of the Second Law of Thermodynamics?
Decoherence involves thermodynamically irreversible processes. This involves a loss of order and an increase in entropy as interference spreads to the surrounding environment and as such would require an increased amount of information to represent.

This entropy growth is nothing new. The understanding of it predates quantum mechanics by 100's of years.

The entropy growth involved in QM is a direct result of the formulation and is not interpretation dependent. It is the observer dependent reality that contains all relevant information. If you wish to consider the entropy in the MWI multiverse then I'd argue that it is zero and contains no information.
I wasn't sure about all of this, but I have always kept open the option of the information continuously increasing. The only tie to MWI is that MWI clearly creates new information whereas other views are consistent with no new information.

Correct me if I'm wrong, but we can calculate the increase in entropy for any decoherence. That being the case, we should be able to count the number of worlds created in such events:

S: increase in entropy
I: increase number of bits
W: number of worlds after event

[itex] I = \frac{S}{k \ln 2 }[/itex]

[itex] W = {2}^{I} [/itex]

Someone will have to tell me if this always works out to an integer. If it doesn't, I'm not bothered with fractional worlds if you're not.
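To see what those two equations give in practice, here is a tiny sketch (the entropy increase used is an arbitrary assumption on my part, just to show the arithmetic):

[code]
from math import log

k_B = 1.380649e-23   # Boltzmann constant, J/K

def bits_from_entropy(delta_S):
    """Convert an entropy increase (J/K) into bits: I = dS / (k_B * ln 2)."""
    return delta_S / (k_B * log(2))

# Illustrative only: assume a decoherence event raises the entropy by 10 k_B.
delta_S = 10 * k_B
I = bits_from_entropy(delta_S)
print(f"I = {I:.2f} bits, so W = 2**I ~ {2**I:.0f} 'worlds'")   # ~14.43 bits, ~22000 'worlds'
[/code]

For any macroscopic entropy increase (many multiples of k_B), W is of course astronomically large rather than a small integer.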

Also, what will stop entropy from increasing? I know a black hole does. Put simply, time stops at the event horizon, putting a cap on information inflation. But if the universe is not destined to freeze onto the surface of a black hole, will anything stop entropy?

If not, we have this continuous increase in entropy, perhaps requiring a continuous increase in either the mass or radius of the universe - so as not to exceed the Bekenstein bound.

Of course, as was pointed out earlier in this thread, it's possible that worlds will begin to merge as they run out of mass and room to remain different.
 
Last edited:
  • #115
craigi said:
You can't make a measurement without it.

For theories that use decoherence it seems generally assumed a measurement is when decoherence occurs. For example, a few stray photons are enough to decohere a dust particle and give it an effective position. That is generally considered a 'measurement' even though it's not associated with an actual observational apparatus. In fact the VAST majority of 'measurements' are like this rather than the formal apparatus type.

I also thought that formally Von Neumann entropy was the same as Shannon entropy and an increase in either represents an increase in randomness. Highly random 'messages' contain no information, so Kith is correct - information content decreases with increasing entropy. However, highly random 'messages' require the greatest amount of information to encode - so in that sense information increases. I am unsure which sense is meant here.

Thanks
Bill
 
Last edited:
  • #116
bhobba said:
The real world not described by QM? We have zero evidence for that.
If you mean that QM presents a non-contextual, objective, local and realistic world (as is known to scientific models and experience), you really couldn't be more wrong.
And you know that's because QM is wrong exactly how?
Where did I say that? I said that mathematics is not the world out there and QM does not represent the world as experienced (certainly not without the addition of fairy tales and personal beliefs).
Our current theories break down at the Planck length. Whether that is because the continuity assumption fails is unknown.
The fact is you cannot divide physical lengths an infinite number of times.
 
  • #117
.Scott said:
I wasn't sure about all of this, but I have always kept open the option of the information continuously increasing. The only tie to MWI is that MWI clearly creates new information whereas other views are consistent with no new information.

Correct me if I'm wrong, but we can calculate the increase in entropy for any decoherence. That being the case, we should be able to count the number of worlds created in such events:

S: increase in entropy
I: increase number of bits
W: number of worlds after event

[itex] I = \frac{S}{k \ln 2 }[/itex]

[itex] W = {2}^{I} [/itex]

Someone will have to tell me if this always works out to an integer. If it doesn't, I'm not bothered with fractional worlds if you're not.

Also, what will stop entropy from increasing? I know a black hole does. Put simply, time stops at the event horizon, putting a cap on information inflation. But if the universe is not destined to freeze onto the surface of a black hole, will anything stop entropy?

If not, we have this continuous increase in entropy, perhaps requiring a continuous increase in either the mass or radius of the universe - so as not to exceed the Bekenstein bound.

Of course, as was pointed out earlier in this thread, it's possible that worlds will begin to merge as they run out of mass and room to remain different.

So, since we know the increase, one thing we can do is count the number of worlds created in any such event. But this does lead us into other problems.

Nope. Decoherence isn't special to the MWI. It's part of the QM formulation. Entropy increases at 'measurement' events regardless of interpretation and by the same amount.

The other remaining issue, of the Bekenstein bound prohibiting the entropy growth, is explained by understanding that the entropy is dissipated spatially. If we have a localised particle that is subject to constant alternating measurements, it doesn't increase entropy at its location. The entropy is contained within the measuring device. Your next thought might be that even the measuring device can't store the entropy if the measurements continue indefinitely. You'd be right, but the measuring apparatus is coupled to its external environment and dissipates entropy to the rest of the universe.

Ultimately, one possibility is the heat death of the universe, where the universe would become so disordered that your measuring apparatus would no longer be able to function, statistically speaking.
 
Last edited:
  • #118
Maui said:
If you mean that QM presents a non-contextual, objective, local and realistic world(as is known to scientific models and experience) you really couldn't be more wrong.

Why you think I meant that has me beat. It's dead simple. We do not have a TOE - whether that theory is based on QM or not is unknown.

Maui said:
Where did I say that? I said that mathematics is not the world out there and qm does not represent the world as experienced(certainly not without the addition of fairy tales and personal beliefs).

Gee mate - so your issue is that we only describe the world and that is not the same as the world itself. Wow - what an earth-shattering observation.

Maui said:
The fact is you cannot divide physical lengths an infinite number of times.

You might find it illuminating to actually describe, define, or whatever words you want to use, what a length is without invoking the concepts of geometry and real numbers.

Thanks
Bill
 
  • #119
craigi said:
Ultimately, one possibility is the heat death of the universe where the universe would become so disordered that your measuring apparatus would no longer be able to function, statistically speaking.
Is there some feature of heat death that would stop occurrences of decoherence?
 
  • #120
bhobba said:
For theories that use decoherence it seems generally assumed a measurement is when decoherence occurs. For example, a few stray photons are enough to decohere a dust particle and give it an effective position. That is generally considered a 'measurement' even though it's not associated with an actual observational apparatus. In fact the VAST majority of 'measurements' are like this rather than the formal apparatus type.

I also thought that formally Von Neumann entropy was the same as Shannon entropy and an increase in either represents an increase in randomness. Highly random 'messages' contain no information, so Kith is correct - information content decreases with increasing entropy. However, highly random 'messages' require the greatest amount of information to encode - so in that sense information increases. I am unsure which sense is meant here.

Thanks
Bill

Sure, feel free to use 'measurement apparatus' and 'macroscopic system' interchangeably in what I was saying. The former seems easier to visualise, the latter a generalisation where it holds equally well.

Regarding randomness and information: in the terminology that I'm using, a random series of bits holds maximum information, whereas a series of 0's holds minimal information, which makes it synonymous with entropy. I don't doubt that the literature sometimes uses the term information in other contexts too. I can understand why we might say that randomness contains minimal (useful) information, but I can't see how we could argue that a series of all 0's could constitute maximum information.
 
Last edited:
  • #121
bhobba said:
Maui said:
The fact is you cannot divide physical lengths an infinite number of times.
You might find it illuminating to actually describe, define, or whatever words you want to use, what a length is without invoking the concepts of geometry and real numbers.
He meant that distances in the physical universe cannot be divided into arbitrarily smaller distances in any physically meaningful way. Eventually you reach the Planck length, and at that point you are dealing with distances too small to hold information. In the physical world, two points that are less than 1 Planck length apart have no Euclidean-like relationship to each other. For example, you cannot meaningfully say that one is closer to you than the other.
 
  • #122
craigi said:
In the terminology that I'm using, a random series of bits holds maximum information, whereas a series of 0's holds minimal information, which makes it synonymous with entropy. I don't doubt that the literature sometimes uses the term information in other contexts too.

Just being clear on terminology - got your meaning now.

I think that's the opposite meaning to what Kith was using though - I think he was using it in the sense of actually conveying information, i.e. it is highly compressible like the English language is.

Thanks
Bill
 
  • #123
.Scott said:
He meant that distances in the physical universe cannot be divided into arbitrarily smaller distances in any physically meaningful way. Eventually you reach the Planck length, and at that point you are dealing with distances too small to hold information. In the physical world, two points that are less than 1 Planck length apart have no Euclidean-like relationship to each other. For example, you cannot meaningfully say that one is closer to you than the other.

We are talking about complex numbers here as found in the definition of a Hilbert space - the concept of dividing a length was simply for pictorial vividness.

It is these properties that allow the continual splitting by decoherence.

BTW the concept of length is a very general idea well and truly divorced from Euclidean geometry - e.g. length is defined in Hilbert spaces and is tied up with inner products and all sorts of abstract ideas - and that's just one example.

My 'challenge' to define length was to get you to actually think about this stuff - many of the greatest mathematicians have, and it is a very subtle concept. Just one result was the concept of measure - of great importance in the theory of Hilbert spaces and probability - it's a very general and powerful idea.

BTW we have no proof one way or another that the concept of length is not meaningful at the Planck scale.

For your information arguments to hold any water we first need you to give your definition of information.

Thanks
Bill
 
  • #124
bhobba said:
For your information arguments to hold any water we first need you to give your definition of information.
This is from post #111:
.Scott said:
Apparently there is a direct conversion from Entropy (S) to Bits of information (I):
[itex] I = \frac{S}{k \ln 2 }[/itex]
where k is Boltzmann's constant (1.38 × 10^-23 J/K)
 
  • #125
.Scott said:
Apparently there is a direct conversion from Entropy (S) to Bits of information (I):
[itex] I = \frac{S}{k \ln 2 }[/itex]

Errrr. Not quite.

The mathematical form of information entropy and Von Neumann entropy is the same - but they talk about different things.

I have to mention something though.

You are making all these wild claims based upon 'apparently' - not sure that is a good way to go about things.

Added later:

Did a bit of hunting around about this and it seems this equating the two concepts is quite controversial:
http://arxiv.org/pdf/quant-ph/0006087v3.pdf

Thanks
Bill
 
Last edited:
  • #126
bhobba said:
Just being clear on terminology - got your meaning now.

I think that's the opposite meaning to what Kith was using though - I think he was using it in the sense of actually conveying information, i.e. it is highly compressible like the English language is.

Thanks
Bill

This has been troubling me a bit too. There's a bit of a paradox here. It seems to me that although, in common parlance, randomness contains no meaningful information, any definition based upon this would be highly dependent upon context. To the contrary, we must conclude that randomness, in the absence of specific context, contains maximum information, even if we don't know what it means.

Without wishing to drag the thread into a discussion about randomness, let me illustrate what I mean. Suppose you generate a series of random bits from quantum measurements. We would call them random and say that they contain no meaningful information. Now suppose you were to use those bits as a password and encrypt them. Decrypting or cracking would seek exactly that "information". We can go on to use that sequence of bits in all manner of ways, duplicating and basing ever more complex systems on them, to the extent that no one would argue that they were random other than in a historical context.

Without having a specific closed context, we can't possibly say that they contain no information. If the context is left open, then the only meaningful definition must have the result that a random sequence of bits contains the maximum possible information. If we treat a sequence of bits as just that, nothing more than a sequence of bits, then we can only use such a definition.
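The standard Shannon-entropy arithmetic behind that claim can be sketched in a few lines (my own illustration; the source probabilities are made up):

[code]
from math import log2

def entropy_per_symbol(p_one):
    """Shannon entropy, in bits per symbol, of a binary source emitting 1 with probability p_one."""
    h = 0.0
    for p in (p_one, 1.0 - p_one):
        if p > 0:
            h -= p * log2(p)
    return h

print(entropy_per_symbol(0.5))   # 1.0    - a maximally random source: one full bit per symbol
print(entropy_per_symbol(0.0))   # 0.0    - a source that only ever emits 0: no information
print(entropy_per_symbol(0.9))   # ~0.469 - a biased source sits in between
[/code]

In this sense a perfectly random sequence is the most information-dense one, which is the usage described here; the everyday sense of "meaningful information" is a different notion.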
 
Last edited:
  • #127
craigi said:
This has been troubling me a bit too. There's a bit of a paradox here. It seems to me that although, in common parlance, randomness contains no meaningful information, any definition based upon this would be highly dependent upon context. To the contrary, we must conclude that randomness, in the absence of specific context, contains maximum information, even if we don't know what it means.

Without wishing to drag the thread into a discussion about randomness, let me illustrate what I mean. Suppose you generate a series of random bits from quantum measurements. We would call them random and say that they contain no meaningful information. Now suppose you were to use those bits as a password and encrypt them. Decrypting or cracking would seek exactly that "information". We can go on to use that sequence of bits in all manner of ways, duplicating and basing ever more complex systems on them, to the extent that no one would argue that they were random other than in a historical context.

Without having a specific closed context, we can't possibly say that they contain no information. If the context is left open, then the only meaningful definition must have the result that a random sequence of bits contains the maximum possible information. If we treat a sequence of bits as just that, nothing more than a sequence of bits, then we can only use such a definition.
With the MWI, that "random" sequence that you generated is part of the "serial number" of the world you ended up in. All the other "random" sequences were also generated, but they're being read by other-worldly instances of you.

There really isn't any useful meaning to "meaningful data" in physics.
 
  • #128
.Scott said:
With the MWI, that "random" sequence that you generated is part of the "serial number" of the world you ended up in. All the other "random" sequences were also generated, but they're being read by other-worldly instances of you.

It's a bit off-kilter, but that is one way to look at it. As I've explained before, your hypothetical serial number is a product of decoherence and is just as relevant to the CI as the MWI. The difference is that in the MWI, the other serial numbers occur in other inaccessible universes, whereas in the CI, those other serial numbers don't occur. Now you're probably going to argue that in the CI you only need one serial number, but that is just ignoring the mechanism that creates it, namely decoherence. Just because you don't see a reason to assign a hypothetical serial number to your CI universe doesn't mean that the process which generated your hypothetical serial number has gone away.

Try to look at it this way: your serial number in the MWI distinguishes your universe from the others that branched off and that you could've been in. Your serial number in the CI distinguishes your universe from the others that you could've been in. Both are purely hypothetical serial numbers, but we can associate the generation of these serial numbers with exactly the same process and entropy (information) production.

Is this making sense to you?
 
Last edited:
  • #129
craigi said:
It's a bit off-kilter, but that is one way to look at it. As I've explained before, your hypothetical serial number is a product of decoherence and is just as relevant to the CI as the MWI. The difference is that in the MWI, the other serial numbers occur in other inaccessible universes, whereas in the CI, those other serial numbers don't occur. Now you're probably going to argue that in the CI you only need one serial number, but that is just ignoring the mechanism that creates it, namely decoherence. Just because you don't see a reason to assign a hypothetical serial number to your CI universe doesn't mean that the process which generated your hypothetical serial number has gone away.

Try to look at it this way: your serial number in the MWI distinguishes your universe from the others that branched off and that you could've been in. Your serial number in the CI distinguishes your universe from the others that you could've been in. Both are purely hypothetical serial numbers, but we can associate the generation of these serial numbers with exactly the same process and entropy production.
The difference between CI and MWI was more important to me before I was able to get general acknowledgment that decoherence caused an increase in information. I have never really been an advocate of MWI - but I'll use it if think it will make the explanation easier. No one has mentioned my little cut at MWI in post #114.

It's like the virtual photons. I know that there's math behind them that doesn't really correspond to a virtual photon, but that doesn't make the concept worthless. And if decoherence from a measurement ends up generating 2.863 worlds, I'm not going to throw out MWI as worthless. Perhaps I'll just wonder if my world is only 86.3% complete.
 
  • #130
Actually, I shouldn't worry about there being fractional worlds, because decoherence is always a quantum event. I was thinking that where a photon lands in an interference pattern is a "random continuum", but it really isn't. The screen on which the photon lands is actually a large set of discrete targets, each one representing a world out of the many possibilities.

I wonder if all points on the screen represent equal increases in entropy. Probably not. A photon landing in a relatively dark area would be more informative than one landing in a bright area.
 
  • #131
.Scott said:
Actually, I shouldn't worry about there being fractional worlds, because decoherence is always a quantum event. I was thinking that where a photon lands in an interference pattern is a "random continuum", but it really isn't. The screen on which the photon lands is actually a large set of discrete targets, each one representing a world out of the many possibilities.

I wonder if all points on the screen represent equal increases in entropy. Probably not. A photon landing in a relatively dark area would be more informative than one landing in a bright area.

In a dark area, not only does the photon decohere, it dissipates its energy within the screen, which also involves an entropy increase.

In a light area, the diffuse component gives rise to a probability of decoherence and of reflecting photons of different wavelengths. The specular component gives a probability of reflecting the photon coherently, i.e. no decoherence.

In reality, a surface would be a combination of the two.

Furthermore, the spatial probability distribution of the photon isn't going to be homogeneous at all points on the screen.

Your simplified picture of information isn't going to be good enough to tackle this problem, since entropy deals with probabilities too.

Basically, your idea of enumerating states isn't enough to measure information. Entropy increases when, in your model, new states are enumerated, but that's as far as it goes. You'll need to look to a proper understanding of the physical processes to understand how much the entropy increases by.
 
Last edited:
  • #132
Just a brief comment: in this thread, the following notions of information have been used so far:
1) subjective information which increases if we learn something we didn't know before
2) information content which corresponds to the entropy
3) information capacity which corresponds to the maximum of the entropy

During a measurement, some of them increase, some decrease and some remain constant.
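A toy example (illustrative numbers only, mine rather than kith's) of how a single measurement can move these notions differently:

[code]
from math import log2

def information_content(probabilities):
    """Notion 2: the entropy of the current probability distribution, in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

def information_capacity(n_states):
    """Notion 3: the maximum possible entropy for n distinguishable states, in bits."""
    return log2(n_states)

before = [0.25, 0.25, 0.25, 0.25]   # complete ignorance about a 4-state system
after = [1.0, 0.0, 0.0, 0.0]        # a sharp measurement outcome

print(information_content(before), information_capacity(4))   # 2.0 bits content, 2.0 bits capacity
print(information_content(after), information_capacity(4))    # 0.0 bits content, capacity unchanged
[/code]

Here the subjective information (notion 1) goes up because we learned the outcome, the content (notion 2) drops to zero, and the capacity (notion 3) stays put - exactly the "some increase, some decrease, some remain constant" point.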
 
  • #133
I'm starting to like MWI more and more.

One of my questions has an obvious answer that occurred to me last night - but with a cool exception that didn't occur to me until this morning. I doubt it's anything new - but I've never heard it addressed from the MWI outlook.

My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

[itex] I = -\frac{\ln p}{ \ln 2 }[/itex] ... but only if you're sure of the time.

That caveat "only if you're sure of the time" describes the problem as the universe approaches heat death. As all your clocks fail and the best estimate of time becomes fuzzier and fuzzier, decoherence events will become less and less effective at generating additional information until there is no meaningful definition of either time or a change in entropy.

I don't think this is anything new, but with MWI, it's easy to see.

If your universe (or any isolated portion of it) has a finite radius beyond which it does not exchange energy and a fixed and finite mass, then it will have a Bekenstein Bound, a total information capacity. As time progresses, more and more information will be generated due to decoherence events until that capacity is challenged. At that point, it will become common for a decoherence event in one world to generate the same result as a different decoherence event in another world. This remerging will result in reduced entropy. With enough remerging, a decoherence event could actually result in a substantial decrease in entropy.

What this looks like in the resulting world is an inability to determine the history of the world. There would be multiple histories from different world lines, not all necessarily spanning the same amount of time. Objects in the world and the world itself would have a fuzzy age - and as the information capacity was approached and remerge became the norm, the age would become fuzzier and fuzzier.

So, when there is a Bekenstein Bound:
[itex] I \leq -\frac{\ln p}{ \ln 2 }[/itex]
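For concreteness, the "log base 2 of the inverse probability" rule above reproduces the 1-in-32 example directly (a minimal sketch, with the other probabilities thrown in only for comparison):

[code]
from math import log2

def added_bits(p):
    """Information added by an outcome of probability p: I = log2(1/p) = -log2(p)."""
    return -log2(p)

for p in (0.5, 1/32, 0.999):
    print(f"p = {p}: {added_bits(p):.4f} bits")
# p = 0.5   -> 1 bit
# p = 1/32  -> 5 bits (the 1-in-32 example above)
# p = 0.999 -> ~0.0014 bits: a nearly certain outcome adds almost nothing
[/code]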
 
  • #134
.Scott said:
My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

[itex] I = -\frac{\ln p}{ \ln 2 }[/itex] ... but only if you're sure of the time.

I'm going to try to be brief here:

1) Your text doesn't match your maths.
2) How do you feel about your equation giving a non-integer for the number of bits?
3) Where did your logarithm come from? You seem to suggest it just came to you, rather than through any deductive reasoning. It looks like you've just borrowed it from something you picked up from elsewhere, without understanding it.
5) Wherever you got it from, you've missed the normalisation term.
6) The choice of base for the logarithm is arbitrary.
7) Why do you feel the need to invoke an observer measurement of time to prevent your equation becoming invalid?
8) Would it not be easier to suggest that at heat death coherence and hence decoherence cease to take place, statistically speaking, of course.
9) You still seem to think that this is relevant to the MWI, but this is just entropy generated at decoherence.
10) When you talk about Bekenstein bounds you ignore other sources of entropy.
11) Entropy has an actual definition in physics, involving the Boltzmann constant.
12) My advice is to give the quantum physics a rest for a while and read a book on statistical mechanics. When you come back to it, this stuff should be a lot clearer. What you're doing here seems like trying to learn addition and subtraction by deconstructing calculus.
 
Last edited:
  • #135
craigi said:
I'm going to try to be brief here:
1) Your text doesn't match your maths.
It's regarding this:
.Scott said:
My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

[itex] I = -\frac{\ln p}{ \ln 2 }[/itex] ... but only if you're sure of the time.
From the "1 in 32" example, p = 1/32, 1/p = 32, log base 2 of 32 is ln(32)/ln(2) = 5.
When I wrote the equation, I converted the log base 2 to "ln" just to make it better match up with the entropy/information conversion terms.
craigi said:
2) How do you feel about your equation giving a non-integer for the number of bits?
That was a misconception on my part. I retracted it in post #130.
craigi said:
3) Where did your logarithm come from? You seem to suggest it just came to you, rather than through any deductive reasoning. It looks like you've just borrowed it from something you picked up from elsewhere, without understanding it.
If you have "b" bits, you can describe 2^b states.
So if you have n states, you will need log base 2 of n bits to encode them - not worrying for the moment about whether that yields an integer number of bits.
Now if all of the states are of equal probability (1/n), that is as far as you can go. But what if there are three states with p = 50%, 25%, and 25%? Well, those 25% states are just like 1 of 4 choices and will require 2 bits each, while the 50% state will require only 1. So what really matters is the probability, not the number of choices.
I could be more formal in the math, but it would be more work and I don't think it would make it more convincing.
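Just to check the arithmetic in that example (a short sketch of my own, using nothing beyond the numbers already quoted):

[code]
from math import log2

# The three-state example above: p = 50%, 25%, 25%.
probs = [0.5, 0.25, 0.25]

bits_per_outcome = [-log2(p) for p in probs]                      # bits needed for each outcome
expected_bits = sum(p * b for p, b in zip(probs, bits_per_outcome))

print(bits_per_outcome)   # [1.0, 2.0, 2.0]
print(expected_bits)      # 1.5 bits on average, vs log2(3) ~ 1.585 if the three states were treated alike
[/code]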
craigi said:
5) Wherever you got it from, you've missed the normalisation term.
I don't understand.
craigi said:
6) The choice of base for the logarithm is arbitrary.
The Bekenstein bound is expressed in bits (base 2). An arbitrary choice, but I like it.
craigi said:
7) Why do you feel the need to invoke an observer measurement of time to prevent your equation becoming invalid?
That's really the crux of the issue - and the part that I think is really cool for MWI.
I don't just feel it; for a universe with a Bekenstein Bound, I know it.
It's really a two-part argument. But let me give you part one for now.

We can use the universe as a whole as our base line. So when all the worlds are taken as a whole, we are at the "base information" or "no information" state. We will also presume a time-zero (T0) for the universe, a time before any decoherence event has happened. At that moment, all possible worlds are possible so at that moment we have "no information". That moment doesn't really need to exist, but it makes for a simpler picture. In fact, part 2 of the argument involves getting rid of T0.

So from T0, we allow time to run forward and entropy to steadily increase. As we do this, we get a more and more diverse collection of worlds. From the MWI point of view, because we are increasing the diversity of the worlds, the entropy is increasing. But as we approach maximum entropy, we start to run out of diversity. It's kind of like the lottery. If the jackpot is only $1M and you win, you will probably discover that your ticket is the only one with the winning number. But if the jackpot reaches $1B and you win, you will probably have to share it with other ticket holders who chose the same number. So when several worlds pick the same state as one of their "tickets", you have a new world with no unambiguous history. You also have less increase in entropy. In fact, if one of the contributors to your new shared world is part of your history, you will have, in effect, skipped back through time - losing entropy.

Now those are the characteristics of "heat death". But what does it look like from within the world experiencing it? How would you fight it? Well, you would try to keep good time - so if you suddenly skipped back or forth through time you'd have a clock that told you what happened. But clocks do not last forever. They eventually blend in with the heat death. All time-keeping mechanisms, every one of them, will eventually be gone. Then we will drift into worlds where decoherence will have no meaningful consequence on entropy. All of the worlds' states will have been generated, and time will simply cause its world to move into one of several other already existing worlds.
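The lottery analogy can be put in toy numbers (a sketch of my own with arbitrary state and branch counts; it is just a collision count, not a physical model):

[code]
import random

def fraction_with_shared_outcome(n_states, n_branches, trials=1000, seed=0):
    """Fraction of trials in which at least two 'branches' land on the same 'state'."""
    rng = random.Random(seed)
    shared = 0
    for _ in range(trials):
        draws = [rng.randrange(n_states) for _ in range(n_branches)]
        if len(set(draws)) < n_branches:   # at least one shared outcome
            shared += 1
    return shared / trials

# Hypothetical pool of 1000 available states ('tickets').
for n_branches in (10, 100, 1000):
    print(n_branches, fraction_with_shared_outcome(1000, n_branches))
# With few branches, shared outcomes are rare; as the branch count approaches the
# number of available states, sharing (the 'remerging' described above) becomes the norm.
[/code]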

craigi said:
8) You still seem to think this is relevant to the MWI, but this is just entropy generated at decoherence.
I'm not saying that the tail-off in entropy generation can't be explained with other interpretations. Only that I find MWI interesting because it makes it easy to see what happens as heat death approaches.
craigi said:
9) When you talk about Bekenstein bounds you ignore other sources of entropy.
I'm not sure I understand. In all cases, I am presuming that we are constrained to a Bekenstein Bound - although small leaks that cross the bound would not change the overall picture.
craigi said:
10) Entropy has an actual definition in physics, involving the Boltzmann constant.
Yup. And that is the one that I am talking about.
I am really saying that there can be world conditions where decoherence will not result in a net increase in entropy.
 
Last edited:
  • #136
I don't have time to go through your entire post, but you're still missing this error. You say:

.Scott said:
My question was about how the different probabilities of an event affect the amount of information (or entropy) that is added to a world. With little thought, the answer is pretty clear. The number of new bits added to the world will be log base 2 of the inverse of the probability of the event. So, for example, if it's a 1 in 32 chance, the amount of added information will be 5 bits.

So here's the equation with the exception:

I: the information added to the world
p: the probability of the event that created the world

Then you use this equation:

I = − ln p / ln 2

I think we both understand that this is equivalent to:

I = - log2 p

Where is the inverse of the probability that you talk about in the text?

The equation that you use bears some resemblance to actual definitions of information and entropy, but it doesn't match what you describe. I find it hard to imagine that it's a fluke. I think you've borrowed this part of an equation from actual definitions, but failed to interpret it in terms of your own view of information.
 
Last edited:
  • #137
craigi said:
I don't have time to go through your entire post, but you're still missing this error. You say:



Then you use this equation:

I = − ln p / ln 2

I think we both understand that this is equivalent to:

I = - log2 p

Where is the inverse of the probability that you talk about in the text?

The equation that you use bears some resemblance to actual definitions of information and entropy, but it doesn't match what you describe. I find it hard to imagine that it's a fluke. I think you've borrowed this part of an equation from actual definitions, but failed to interpret it in terms of your own view of information.
I = log2 (1/p) = - log2 p
 
  • #138
.Scott said:
I = log2 (1/p) = - log2 p

You're absolutely correct. I wrote my last post in a hurry and realized the mistake as soon as I left. Ignore everything I said in it. It's wrong.
 
  • #139
.Scott said:
There would be multiple histories from different world lines, not all necessarily spanning the same amount of time. Objects in the world and the world itself would have a fuzzy age - and as the information capacity was approached and remerge became the norm, the age would become fuzzier and fuzzier.
This idea of Penrose might be interesting for you.

I still have some objections. Unfortunately, your non-standard terminology makes it hard to get to the core of the issue. For example, it doesn't make sense to call an event which reduces entropy a "decoherence event", although the underlying idea may well be valid.

The most obvious point is about the Bekenstein bound. The bound takes its maximum entropy value for a black hole. A black hole is not isolated from its environment: it absorbs matter and emits Hawking radiation. My understanding is that the bound occurs in the first place because a region of a certain radius which contains a certain amount of matter (resp. energy) cannot be isolated better from its surroundings than a black hole. I don't see how it makes sense to apply this bound to the universe as a whole.

/edit: Also I think we need to keep in mind that we are not talking about the MWI here but about a speculative combination of the MWI and general relativity. As far as I know, the Bekenstein bound is derived from both GR and QM. We know that the simple combination of GR and QM is impossible at least in some cases. So the bound could be an expression of this incompatibility.
 
Last edited:
  • #140
kith said:


Indeed. This is where it gets very complicated.

As Bob suggested, decoherence is just a probabilistic process, in the same way as for other entropic events. In theory, we could consider a concept of recoherence, but this really brings in the question of what we mean by time.

We have time from the relativistic space-time continuum, time as a parameter in the Schrodinger equation, and time as increased entropy. I don't think we know how to unite them. There are even concepts of time being directed by the expansion of the universe, and suggestions that time and space may extend beyond the universe (observable or otherwise). The reason that we, as humans, observe time in the direction that we do is due to the entropic direction of time. Entropy is at the heart of all the chemical and biological processes that give rise to our consciousness. A clock behaves the same way. This is really a macroscopic definition of time. Is it really the same macroscopic definition of time as from general relativity? It seems not.

If we talk of decoherence happening backwards, then perhaps we're actually talking about microscopic time reversing for the particle (or entangled particles even), but then does this arrow of time match the direction given from the parameter in the Schrodinger equation? It would seem not.

Scott's idea of "remergence" is based upon his serial number concept. In that, the same serial number emerges from different branches. I'm not even sure how it can occur, because as he describes his serial number, when a branch occurs he splits a bit to generate 2 new ones. Nevertheless, presuming that there's an extra mechanism to his serial number generation, I don't see how this could map to a reversal of decoherence or entropy. I would expect it just to map to probabilities of occurrence of a particular macrostate, rather than anything more interesting.
 
Last edited by a moderator:
