Proof that all information can be coded in binary?

Summary:
The discussion centers on the assertion from Pinter's "A Book of Abstract Algebra" that all information can be encoded in binary strings of 0s and 1s. The participant expresses interest in whether a rigorous proof exists for this claim, noting that while it seems intuitive, defining "information" is crucial for such a proof. They suggest that finite information can be represented by finite binary strings, but acknowledge the complexity of encoding all possible information, especially in terms of physical measurements. The conversation also touches on techniques like Gödel numbering as a method for encoding information. Ultimately, the topic raises questions about the limits of information representation in binary form.
jack476
I just got Pinter's book, "A Book of Abstract Algebra", for the modern algebra course that I'm taking. It's a very nice book, I'm enjoying reading through it so far.

What's especially interesting is the connections to computer science and controls, mostly because I switched to math and physics out of electrical engineering. Anyway, in its introduction chapter on groups, it makes the following statement:

Groups in Binary Codes
The most basic way of transmitting information is to code it into strings of 0s and 1s, such as 0010111, 1010011, etc. Such strings are called binary words, and the number of 0s and 1s in any binary word is called its length. All information may be coded in this fashion.

(Emphasis mine).

Out of curiosity, I am wondering whether a proof of this statement exists: that is, whether any piece of information in the universe, however complex or abstract, could be encoded as a string of binary digits, assuming one could access that information and had a storage device large enough.

Intuitively, I would say it's obvious: each digit can store a piece of information, and in theory we can always increase the capacity by adding digits. But I'm wondering whether a rigorous proof exists.
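That intuition can be made concrete with a short sketch (Python here; the helper names `to_binary` and `from_binary` are just illustrative): any finite piece of data that can be serialized to bytes maps to a string of 0s and 1s, and back, without loss.

```python
# Minimal sketch: any finite data that serializes to bytes maps to a
# binary string of 0s and 1s, and back, with no loss.

def to_binary(data: bytes) -> str:
    """Encode a byte sequence as a string of 0s and 1s."""
    return "".join(format(byte, "08b") for byte in data)

def from_binary(bits: str) -> bytes:
    """Decode a string of 0s and 1s back into the original bytes."""
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = "information".encode("utf-8")
encoded = to_binary(message)            # a string like "0110100101101110..."
assert from_binary(encoded) == message  # the round trip is lossless
```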

(Note: I put this in the abstract algebra section because it came up in an abstract algebra textbook; I will understand if the mods feel it is more appropriate in the computer science section.)
 
In order to prove it, first you'd have to define "information".
 
I think the statement is intended to convey that "information of any type may be coded in this fashion", or more rigorously "finite information of any type may be represented by a finite binary string", rather than "the set of all sets of information..."

You can't even measure the position and velocity of every proton in a drop of water, let alone encode it.
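The "finite information in a finite binary string" reading is easy to illustrate: give each symbol of a finite alphabet a fixed-width binary code. A minimal sketch, assuming an arbitrary finite symbol set (the four-letter alphabet below is just an example):

```python
import math

# Any finite alphabet of k symbols fits in fixed-width codes of
# ceil(log2(k)) bits, so any finite word over it becomes a binary string.
alphabet = ["A", "C", "G", "T"]  # example finite symbol set
width = max(1, math.ceil(math.log2(len(alphabet))))
code = {sym: format(i, f"0{width}b") for i, sym in enumerate(alphabet)}
decode = {bits: sym for sym, bits in code.items()}

word = "GATTACA"
bits = "".join(code[s] for s in word)
restored = "".join(decode[bits[i:i + width]] for i in range(0, len(bits), width))
assert restored == word  # the encoding is invertible
```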
 
You could, for example, use Gödel numbering to assign a number to every symbol, letter, or formula used in stating the "information", then write that number in binary notation.
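A toy version of that idea, as a sketch: give each symbol a positive integer code, encode a sequence of symbols as the product ##2^{c_1} \cdot 3^{c_2} \cdot 5^{c_3} \cdots##, then write that single number in binary. The six-symbol alphabet below is hypothetical; unique factorization makes the encoding invertible.

```python
import math

# Hypothetical alphabet: each symbol gets a positive integer code.
symbols = {"(": 1, ")": 2, "x": 3, "=": 4, "0": 5, "S": 6}

def primes(n: int) -> list:
    """First n primes by trial division (fine for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula: str) -> int:
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... over its symbol codes."""
    codes = [symbols[ch] for ch in formula]
    return math.prod(p ** c for p, c in zip(primes(len(codes)), codes))

def decode(n: int) -> str:
    """Recover the formula by factoring out consecutive primes."""
    inv = {v: k for k, v in symbols.items()}
    out = []
    for p in primes(32):  # enough primes for short formulas
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        if exp == 0:
            break
        out.append(inv[exp])
    return "".join(out)

n = godel_number("S0=x")
bits = bin(n)[2:]  # the formula, now a string of 0s and 1s
assert decode(n) == "S0=x"
```

Real Gödel numberings are more careful about which codes are legal, but the point stands: once the information is a single natural number, binary notation finishes the job.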
 
