Shannon Entropy and Origins of the Universe

In summary, the big bang is considered to be a low Shannon entropy state because of its high degree of order and small number of degrees of freedom. However, there is some debate and uncertainty about the exact entropy of the big bang and about whether information is truly lost as entropy grows. Quantum gravity theories may offer a new perspective on the origin of the universe.
  • #1
mr_whisk
Hello Community,

I have a question that I'm struggling to get clarification on and I would greatly appreciate your thoughts.

Big bang theories describe an extremely low thermodynamic entropy (S) state of origin (very ordered).
Question: is the big bang considered to be a high or low Shannon entropy (H, information) state?

Here's why I question:

S and H are related: S = the amount of H needed to define the detailed microscopic state of the system, given its macroscopic description.
Thus: Gain in entropy always means loss of information.
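(To spell out the identification I have in mind, if I've understood the textbooks correctly:)

```latex
% H is the Shannon entropy, in bits, of the microstate distribution
% given the macrostate; the factor k_B ln 2 just converts bits to J/K.
S = k_B \ln 2 \cdot H ,
\qquad
H = -\sum_i p_i \log_2 p_i .
```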

2nd Law of Thermodynamics: S always tends to increase for closed systems.
Therefore: H is generally decreasing.

This suggests a very high Shannon entropy state at the big bang.

However, Shannon entropy is closely connected to the number of degrees of freedom.
That is: weren't there far fewer degrees of freedom around the time of the big bang? Also, wasn't the universe much smaller, so that, given a maximum information density, it had a much smaller maximum H?
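(By "maximum information density" I'm thinking of something like the Bekenstein bound, if I have it right, which caps the entropy of a region by its radius R and energy content E:)

```latex
S \le \frac{2\pi k_B R E}{\hbar c}
\qquad\Longleftrightarrow\qquad
H \le \frac{2\pi R E}{\hbar c \ln 2}\ \text{bits}.
```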

This suggests a very low Shannon entropy state at the big bang.

I'm confused, can't find anything on this anywhere and would love some input.

Blessings to you all :)
 
  • #2
Anyone out there?
 
  • #3
That is the kind of question that has led researchers to conclude we need a different playbook to fathom the origin of the universe. One that we have been fiddling with for most of the past century is called quantum gravity. For a deeper discussion of Shannon entropy, these may be helpful: http://micro.stanford.edu/~caiwei/me334/Chap7_Entropy_v04.pdf and http://arxiv.org/abs/0712.0029 (Entropy Growth in the Early Universe). There is some dissent on whether information is truly lost to entropy, as illustrated by the debates between Hawking and Susskind over information loss in black holes.
 
  • #4
mr_whisk said:
S and H are related: S = the amount of H needed to define the detailed microscopic state of the system, given its macroscopic description.
Thus: Gain in entropy always means loss of information.
Where have you seen H defined this way in a physical system? Shannon entropy deals with situations where you are communicating outcomes drawn from some probability distribution. If outcomes have different probabilities, H is minus one times the weighted average of the logarithm of each outcome's probability, weighted by that outcome's probability. In an optimal coding scheme, where more probable outcomes are assigned shorter bitstrings, this equals the average length of the bitstring you receive if someone repeatedly draws outcomes from the distribution and, on each trial, sends a string of bits that tells you that trial's outcome. If all outcomes are equally probable, the Shannon entropy reduces to just the logarithm of the number of possible outcomes.
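Here's a minimal sketch of that formula in Python (nothing physics-specific, just the information-theoretic definition; the example distributions are made up):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits.

    Outcomes with p == 0 contribute nothing (p*log p -> 0 as p -> 0).
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Unequal probabilities: H is the average optimal codeword length.
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits

# Equal probabilities: H reduces to log2 of the number of outcomes.
print(shannon_entropy([0.25] * 4))  # 2.0 bits == log2(4)
```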

The formulas for physical entropy look identical, apart from a factor of Boltzmann's constant k. In a situation where different microstates have different probabilities (my recollection is that this is usually a system connected to an external reservoir that it can trade energy, volume, or particles with, so that it may be found in different macrostates with different probabilities), the Gibbs entropy is -k times the weighted average of the logarithm of each microstate's probability, weighted by that microstate's probability. And the Boltzmann entropy of a given macrostate is k times the logarithm of the number of microstates associated with that macrostate.
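In symbols, with p_i the probability of microstate i and Omega the number of microstates in the macrostate, the formulas just described are:

```latex
S_{\text{Gibbs}} = -k \sum_i p_i \ln p_i ,
\qquad
S_{\text{Boltzmann}} = k \ln \Omega ,
```

with the Boltzmann form recovered from the Gibbs form when every microstate is equally likely, p_i = 1/Omega.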

To think about the relation between Shannon entropy and physical entropy, I think it's easier to consider Boltzmann entropy. Suppose someone is measuring the exact microstate of a system whose macrostate you both already know. Here all microstates associated with that macrostate are equally probable, so the Shannon entropy is just the logarithm of the number of possible microstates they might find. We could imagine a large set of trials where they repeatedly measure the microstate of systems prepared in that same known macrostate, and communicate the result to you each time--the lower the entropy of the macrostate, the shorter the bitstring they would need to communicate the microstate to you on each trial (for example, in an extremely low-entropy macrostate where there are only 2 possible microstates, a 1-bit message would be sufficient to tell you the microstate each time...but with 4 possible microstates they'd need 2 bits, with 8 possible microstates they'd need 3 bits, etc.) So it seems that the Shannon entropy of this sort of "outcome"--a measurement of a microstate given a known macrostate--is just the physical Boltzmann entropy of the macrostate with Boltzmann's constant taken out, meaning that a gain in physical entropy is a gain in Shannon entropy, not a loss as you suggested.
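A toy calculation along these lines, just to make the conversion explicit (the microstate counts are invented for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant in J/K

# Naming one of Omega equally likely microstates takes log2(Omega) bits,
# while the Boltzmann entropy of the macrostate is S = k_B * ln(Omega).
# Dividing S by k_B*ln(2) recovers the bit count exactly.
for omega in (2, 4, 8, 1024):
    bits = math.log2(omega)
    s = K_B * math.log(omega)
    print(f"Omega={omega:5d}  H={bits:6.1f} bits  "
          f"S={s:.3e} J/K  S/(k_B ln 2)={s / (K_B * math.log(2)):6.1f}")
```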

I guess an alternate situation would be if someone randomly drew microstates from the system's entire phase space of possible microstates and then sent you a message only about the macrostate. In that case, since high-entropy macrostates are far more probable, an optimal coding scheme would assign them shorter bitstrings, whereas low-entropy macrostates would require longer ones. But Shannon entropy is a weighted average over the lengths of all possible bitstrings you might receive in an ensemble of trials, not the length of any specific bitstring, so this still doesn't quite make sense of the claim that higher-entropy states involve smaller Shannon entropy. Perhaps what you mean is that low-entropy macrostates have higher self-information--the length of the bitstring required to communicate them in an optimal coding scheme, with Shannon entropy being the expectation value of the self-information--assuming you are communicating messages about the macrostate only, with the system's state drawn randomly from the set of all possible microstates.
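To illustrate that last distinction in code (the macrostate probabilities below are invented purely for illustration):

```python
import math

def self_information(p):
    """Surprisal of one outcome with probability p, in bits:
    the optimal codeword length for that outcome."""
    return -math.log2(p)

# Hypothetical macrostate probabilities: the high-entropy macrostate is
# far more likely, so it gets the short codeword (low self-information).
macrostate_probs = [0.7, 0.2, 0.09, 0.01]

for p in macrostate_probs:
    print(f"p={p:.2f}  self-information = {self_information(p):.2f} bits")

# Shannon entropy is the probability-weighted average of these values,
# not the length of any one bitstring.
H = sum(p * self_information(p) for p in macrostate_probs)
print(f"H = {H:.2f} bits")
```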
 
  • #5
Hello,

I can provide some clarification on this topic. First, it is important to understand that the concept of entropy, whether it is thermodynamic entropy (S) or Shannon entropy (H), is a measure of disorder or randomness in a system. In the context of the origins of the universe, the term "entropy" can be a bit misleading because it is often used to describe the state of the universe at a specific point in time, rather than a change over time.

In terms of Shannon entropy, a change in entropy corresponds to a change in the amount of information needed to specify the system's exact state, but that relation alone does not settle the question for the origins of the universe. At the time of the big bang, the universe was in a highly ordered and homogeneous state, with very little variation or complexity. This can be seen as a low Shannon entropy state, since there were fewer possible configurations and degrees of freedom.

However, whether the second law of thermodynamics applies to the universe as a whole is a subtle question. It is formulated for closed systems, and it is debated whether an expanding universe, in which gravitational degrees of freedom dominate the entropy bookkeeping, can be treated that way. The behavior of entropy in cosmology is therefore more complicated than a simple statement that it is always increasing or decreasing.

In conclusion, the big bang is generally considered to be a low Shannon entropy state because of its high degree of order and homogeneity, but this is only one part of the picture. The origins of the universe involve many other factors and processes that the concept of entropy alone cannot explain. I hope this helps to clarify your question.
 

1. What is Shannon Entropy?

Shannon entropy is a measure of the amount of uncertainty or randomness in a system. It was introduced by Claude Shannon in 1948 as a way to quantify information in a communication system.

2. How does Shannon Entropy relate to the origins of the universe?

In the context of the origins of the universe, Shannon Entropy can be used to understand the level of disorder or randomness in the early universe. It can also be applied to theories of cosmic inflation, which suggest that the early universe underwent a rapid expansion and increase in entropy.

3. Can Shannon Entropy help explain the beginning of the universe?

While Shannon Entropy can provide insights into the early stages of the universe, it is not able to fully explain the beginning or origin of the universe. It is just one aspect of a complex system and does not address the fundamental question of how the universe came into existence.

4. How is Shannon Entropy calculated?

Shannon entropy is calculated by summing, over all possible outcomes, each outcome's probability multiplied by the logarithm of that probability, and then negating the sum. When the base-2 logarithm is used, the result is the average uncertainty in bits; for equally likely outcomes it reduces to the logarithm of the number of outcomes.
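As a worked example, a fair coin toss:

```latex
H = -\sum_i p_i \log_2 p_i
  = -\left(\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2}\right)
  = 1 \text{ bit}.
```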

5. Is Shannon Entropy applicable to other scientific fields?

Yes, Shannon Entropy is widely used in various scientific fields, including physics, biology, and information theory. It can be applied to systems such as DNA sequences, thermodynamics, and communication networks to measure the amount of information or disorder present in these systems.
