Conservation of Information Law

1. May 19, 2012

Jon_Trevathan

Is there a conservation of information law?

I am particularly interested in the arguments (speculative and experimentally based) that may support a conservation of information law, but need to know the arguments on both sides.

2. May 19, 2012

Ken G

I think perhaps a different way to frame your question is: how can we define information such that it will be conserved? We like conservation laws in physics, so we often define quantities in such a way that they will be conserved. Information will require some special handling to get it to be conserved, because it is normally associated with entropy. Indeed, there is such a thing as "information entropy", which essentially distinguishes between what we know about a system and what we don't know about it, and lumps everything we don't know into the "information entropy".

Entropy is not conserved; in fact we have a law that says it increases. But this just means that if we group systems by what we know about them, and regard what we don't know as a set of equally likely substates, then the probability that a system ends up in one of our groups is simply proportional to the number of equally likely substates in that group. What we know supplies the information needed to assign the groups; everything we don't know determines the number of substates in each group; and the natural logarithm of that number of substates is called the information entropy. In that light, the second law of thermodynamics is simply the law of large numbers: systems involving many trials tend toward the most likely outcomes, and the most likely outcome is that our system ends up in a group with more unknown substates than the group it started out in.
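The grouping picture above can be made concrete with a toy example (my own construction, not from the post): take N fair coins and group outcomes by the one thing we "know", the number of heads k. Each group then contains C(N, k) equally likely unknown substates (the orderings), its information entropy is ln C(N, k), and its probability is proportional to that substate count, so the most likely group is exactly the one with the largest entropy:

```python
import math

# N fair coins, grouped by what we know: the number of heads k.
# Each group holds C(N, k) equally likely unknown substates (orderings).
N = 20
groups = []
for k in range(N + 1):
    W = math.comb(N, k)   # number of unknown substates in group k
    S = math.log(W)       # information entropy of the group, ln W
    p = W / 2**N          # probability the system lands in this group
    groups.append((k, W, S, p))

# The most likely group is the one with the most substates,
# i.e. the one with the largest information entropy:
most_likely = max(groups, key=lambda g: g[3])
print(most_likely[0])     # -> 10, the half-heads group
```

This is the "law of large numbers" reading of the second law: nothing forbids landing in a small group, it is just overwhelmingly more probable to land in the group with the most unknown substates.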

So given all this, has anyone defined information in a way that produces a conservation law? Well, the one situation in which entropy is conserved is a "reversible" system, where whatever happens can also unhappen just as easily. So I think that would be the starting point for a law of conservation of information: think of information in the context of completely reversible occurrences. Since the basic laws of physics are all reversible (except in certain specialized circumstances), I would tend to say that defining information in terms of complete information, where you don't distinguish between what is known and what is not known, would yield a type of information that is conserved in that sense.

This is tantamount to the concept of absolute determinism, but it gets deeply into the philosophy of science to try and assert whether or not the universe is actually deterministic! So the way I'd put it is: it might be possible to define a type of information that is conserved, but it might not end up being a very useful way to think about information; too much sacrifice in utility to get that conservation law. Instead, information entropy, where we distinguish what we know from what we don't know, is the way we use information in practical applications, and that is not conserved: systems tend to "get away from us" in terms of what we can know about them, and it is useful for us to recognize and respond to this effect.
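The reversibility point can be sketched numerically (again my own toy example, under the assumption that "reversible evolution" means a bijection on microstates): a bijection merely relabels the probabilities, so the information entropy is unchanged, whereas a many-to-one (irreversible) map merges microstates and destroys the information that distinguished them:

```python
import math

def info_entropy(p):
    """Information entropy in nats: -sum p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# What we know about a system with 4 microstates:
p = [0.5, 0.25, 0.125, 0.125]

# Reversible evolution: a permutation (state i evolves to state perm[i]).
# This can be undone, and the entropy is exactly conserved.
perm = [2, 0, 3, 1]
p_rev = [0.0] * 4
for i, j in enumerate(perm):
    p_rev[j] = p[i]

# Irreversible evolution: states 0,1 merge into one state and 2,3 into
# another.  Distinct initial states become indistinguishable, so the
# evolution cannot be undone and entropy is not conserved.
merge = [0, 0, 1, 1]
p_irr = [0.0] * 2
for i, j in enumerate(merge):
    p_irr[j] += p[i]
```

Here `info_entropy(p_rev)` equals `info_entropy(p)`, while `info_entropy(p_irr)` is strictly smaller: the merged distribution no longer carries the information needed to run the evolution backwards.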

Last edited: May 19, 2012