How can we transmit data using only 2 bits?

Discussion Overview

The discussion explores the concept of transmitting data using only 2 bits, focusing on theoretical approaches to data transfer, the implications of timing and distance in encoding data, and potential applications or analogies. Participants examine the feasibility and limitations of such methods in practical scenarios.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant proposes a method where machine A sends two bits to machine B, which calculates the delay between the bits to determine the data sent, suggesting that this could theoretically transmit any amount of data given enough time.
  • Another participant notes that this method would be subject to the rules of special relativity, implying that the correct delay would only be observed if the machines are at rest in the same frame.
  • A different participant points out that the proposed method monopolizes the receiving computer's bandwidth, suggesting that a b-bit identifier could be sent instead to allow multiple transmissions, though this would increase the total bits sent.
  • One participant suggests a compromise approach to send n-bit strings in less than exponential time by splitting the data into groups and sending an initial bit to signal transmission, which would reduce the number of bits sent while still requiring significant wait times.
  • Another participant raises questions about the relationship between time and space in data encoding, suggesting that encoding data in time allows for the possibility of using fewer space bits.

Areas of Agreement / Disagreement

Participants express various viewpoints on the feasibility and implications of the proposed methods, with no consensus reached on the practicality or efficiency of the approaches discussed. Multiple competing views remain regarding the effectiveness of different strategies for data transmission.

Contextual Notes

Limitations include the dependence on specific conditions such as clock speeds, the effects of network delays, and the assumptions about the physical setup of the machines involved. The discussion does not resolve the mathematical complexities involved in the proposed methods.

-Job- (Science Advisor)
Since this is a Computer Science forum now, I thought I'd post this impractical yet interesting take on data transfer.

If machine A has to send n bits over to machine B, and A knows the clock speed of B, then rather than sending the n bits, A can instead always send just two bits, bit-1 and bit-2. B calculates the delay between the arrival of bit-1 and bit-2 (in B's clock cycles), which gives a number whose binary representation is the data that A sent.

So if A wanted to send a 5-bit string to B, for example 10011 (which is 19 in decimal), and knowing that B's clock speed is 1 GHz, A would send two bits, the second one sent 19 billionths of a second after the first. B receives the two bits, computes the delay (which amounts to 19 clock cycles) and knows that A sent the value 19, or 10011.
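The scheme above can be sketched as a toy simulation. This is illustrative only: the function names are my own, and B's clock is modeled as an idealized cycle counter with no network jitter.

```python
# Toy model of the delay-encoding scheme: A encodes an n-bit string
# as the number of receiver clock cycles between two marker bits.

def encode(bits: str) -> int:
    """A's side: the delay (in B's clock cycles) is simply the value
    of the bit string."""
    return int(bits, 2)

def decode(arrival_1: int, arrival_2: int, width: int) -> str:
    """B's side: the data is the cycle delay between the two arrivals,
    written back out as a zero-padded binary string."""
    delay = arrival_2 - arrival_1
    return format(delay, f"0{width}b")

# The example from the post: 10011 (19 decimal) becomes a 19-cycle delay.
delay = encode("10011")
assert delay == 19
assert decode(0, 0 + delay, 5) == "10011"
```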

This isn't practical for many reasons: networks have long and highly variable delays (bits can take different routes, through faster or slower routers and shared mediums). But it's interesting, I think, that we can transmit any quantity of data between two machines using only 2 bits, given enough time (exponential time, actually, with respect to the number of bits being sent, which makes this completely impractical: at 1 GHz, just 50 bits could take on the order of 13 days, and 64 bits several centuries).
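The exponential blow-up is easy to quantify: an n-bit string may require a delay of up to 2^n - 1 receiver cycles. A quick back-of-the-envelope calculation, assuming the 1 GHz receiver clock used in the example:

```python
# Worst-case transmission time for the 2-bit scheme: an n-bit string
# can require up to 2^n - 1 cycles of delay between the two bits.

def worst_case_seconds(n_bits: int, clock_hz: float = 1e9) -> float:
    return (2 ** n_bits - 1) / clock_hz

print(worst_case_seconds(20))                   # ~0.001 s: fine
print(worst_case_seconds(50) / 86400)           # ~13 days
print(worst_case_seconds(64) / (86400 * 365))   # ~585 years
```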

What's happening is actually that, if we were to look at the two bits in traffic, we'd see that they are separated by a distance d at all times. The value of d in binary is the data being sent. So we're not breaking any rules, physically there is a valid encoding of the data, it's encoded in a distance.

It's the same idea as encoding data in a long shoe lace, for example. We can take a long shoe lace and cut it to a size s where s in binary is the data we want to store. Or, instead of a shoe lace, we can take two baseballs and throw them into the air, one after the other. The amount of time between when the two baseballs hit the ground (in some time unit) gives the encoded data. By delaying the throw of the second ball, we can encode as much data as we want, without the need for a storage medium.

We can also choose to split the data we want to send into groups. For each group we would use the two-bit technique. If we do this we can obtain a process which enables us to send n bits from A to B by sending only 1 + n/log n physical bits and waiting a total of n^2/log n clock cycles.
 
Interesting.
It occurs to me that in this case data transmission would be subject to the rules of SR.
You would only observe the correct delay if the transmitter and receiver are at rest relative to each other in the same frame.

I would have to think a while about this, but I wonder if this might constitute a simplified explanation of how GPS works.
 
Of course in all that time, the receiving computer's bandwidth is monopolized by the sender, since any other transmission in the meantime will ruin the communication. Unless, that is, a b-bit identifier is sent instead of a single bit. In that case the bandwidth taken is 2b+k (for some small k, whatever the network requires, perhaps 2) and the protocol is limited to 2^b members communicating to a single computer.
 
It's possible to compromise and be able to send n-bit strings in less than exponential time at the cost of sending more bits.

For example, suppose we split the n bits into n/logn groups of logn bits. We send an initial bit that signals the start of transmission, then every subsequent bit establishes the delay which encodes one of the n/logn bit groups. This means that we send 1+n/logn bits.

For sending each bit group, we must wait for at most 2^(log n) = n cycles to elapse. Since there are n/log n groups, we must wait a total of n^2/log n cycles, which is feasible from an implementation perspective. This particular approach would only work for a one-time use, but it shows that we can strike a balance between bits sent and cycles elapsed.

So for example, if we have any 100-bit string and B is running at 1 GHz, then we can send all the data over by:
- sending over 1 + 100/6 ≈ 18 bits, and
- waiting a total of 100·100/6 ≈ 1667 billionths of a second.

So each time we can get away with sending fewer bits. This brings up interesting questions about the relationship between time and space: superficially, we might say that because we're encoding some of the data in time, we can get away with using fewer space bits.
 
