Conservation of Information Law

Summary:
The discussion centers on the concept of a conservation of information law, exploring both speculative and experimental arguments. It highlights the relationship between information and entropy, noting that while entropy increases and is not conserved, defining information in terms of reversible systems may lead to a conservation law. The idea of complete information, where known and unknown aspects are indistinguishable, is proposed as a potential framework for conservation. However, this approach may sacrifice practical utility, as information entropy, which differentiates between known and unknown, is more applicable in real-world scenarios. Ultimately, while a conservation law for information could be defined, its practical usefulness remains questionable.
Jon_Trevathan
Is there a conservation of information law?

I am particularly interested in the arguments, both speculative and experimental, that may support a conservation of information law, but I need to know the arguments on both sides.
 
I think perhaps a different way to frame your question is: how can we define information such that it will be conserved? We like conservation laws in physics, so we often define quantities in such a way that they will be conserved. Information will require some special handling to get it to be conserved, because it is normally associated with entropy. Indeed, there is such a thing as "information entropy," which essentially distinguishes between what we know about a system and what we don't, and lumps everything we don't know into the "information entropy."

Entropy is not conserved; indeed, we have a law that it increases. But this just means that if we group systems based on what we know about them, and regard what we don't know as a set of equally likely substates, then the overall probability that the system will end up in one of our groups is simply proportional to the number of equally likely substates in that group. The information we need to assign the groups comes from what we know, the number of substates in each group comes from all the things we don't know, and the natural logarithm of that number of substates, S = ln W, is called the information entropy. In that light, we see that the second law of thermodynamics is simply the law of large numbers: systems involving many trials will tend toward the most likely outcomes, and the most likely outcome is that our system ends up in a group with more unknown substates than the group it started out in.
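To make that law-of-large-numbers picture concrete, here is a minimal sketch in Python (my own illustration, not from the post above) using N coin flips: the microstates are the 2^N equally likely outcomes, the "groups" are defined by the one thing we know (the number of heads), and ln W is the information entropy of each group.

```python
import math

# Microstates: all 2^N equally likely outcomes of N coin flips.
# Macrostates ("groups"): outcomes sharing the same number of heads.
N = 20

total = 2 ** N
for heads in range(N + 1):
    W = math.comb(N, heads)   # number of substates in this group
    S = math.log(W)           # information entropy, S = ln(W)
    p = W / total             # probability of landing in this group
    if heads % 5 == 0:        # print a few representative groups
        print(f"heads={heads:2d}  W={W:8d}  lnW={S:6.2f}  p={p:.4f}")
```

Running it shows the half-heads group dwarfs the others, so "entropy tends to increase" amounts to "a system randomized over its substates overwhelmingly wanders into the biggest group."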

So given all this, has someone defined information in a way that produces a conservation law? Well, the one situation in which entropy is conserved is in "reversible" systems, where whatever happens can also unhappen just as easily. So I think that would be the starting point for creating a law of conservation of information: think of information in the context of completely reversible occurrences. Since the basic laws of physics are all reversible (except in certain specialized circumstances), I would tend to say that defining information in terms of complete information, where you don't distinguish between what is known and what is not known, would yield a type of information that is conserved in that sense.

This is tantamount to the concept of absolute determinism, but it gets deeply into the philosophy of science to try to assert whether or not the universe is actually deterministic! So the way I'd put it is: it might be possible to define a type of information that is conserved, but it might not end up being a very useful way to think about information; too much utility is sacrificed to get that conservation law. Instead, information entropy, where we distinguish what we know from what we don't, is how we use information in practical applications, and that is not conserved. Systems tend to "get away from us" in terms of what we can know about them, and it is useful for us to recognize and respond to this effect.
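To see why reversibility is the key ingredient, here is a toy sketch (again my own illustration, with made-up update rules) that pushes a probability distribution over four states through two deterministic maps: a bijection, where every state has a unique past, and a many-to-one map, where distinct pasts merge. The Shannon entropy is exactly conserved by the bijection, while the many-to-one map changes it; here it drops, reflecting that the distinction between different initial states has been destroyed.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def evolve(p, rule):
    """Push distribution p through the deterministic map state -> rule[state]."""
    out = [0.0] * len(p)
    for state, prob in enumerate(p):
        out[rule[state]] += prob
    return out

p0 = [0.5, 0.25, 0.125, 0.125]   # some initial state of knowledge

reversible   = [1, 2, 3, 0]      # a permutation: every state has a unique past
irreversible = [0, 0, 2, 2]      # many-to-one: distinct pasts merge

print(shannon_entropy(p0))                        # 1.2130...
print(shannon_entropy(evolve(p0, reversible)))    # same: a bijection only relabels states
print(shannon_entropy(evolve(p0, irreversible)))  # smaller: merged histories are lost
```

Only the bijective rule lets you run the dynamics backward and recover the initial distribution, which is the toy-model version of "reversible laws conserve complete information."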
 
