Low Bandwidth Networking

  1. Sep 6, 2008 #1

    -Job-

    Science Advisor

    Since this is a Computer Science forum now, I thought I'd post this impractical yet interesting take on data transfer.

    If machine A has to send n bits to machine B, and A knows B's clock speed, then rather than sending the n bits, A can always send just two bits, bit-1 and bit-2. B measures the delay between the arrival of bit-1 and bit-2 (in B's clock cycles), which gives a number whose binary representation is the data that A sent.

    So if A wanted to send B a 5-bit string, for example 10011 (which is 19 in decimal), and knows that B's clock speed is 1 GHz, A would send two bits, the second one sent 19 billionths of a second after the first. B receives the two bits, computes the delay (which amounts to 19 clock cycles) and knows that A sent the value 19, i.e. 10011.
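
    Here's a minimal Python sketch of the idea, treating B's 1 GHz clock as an ideal cycle counter (the function names are just for illustration; no real link could time bits this precisely):

    Code (Python):

        # Encode a bit string as a gap measured in receiver clock cycles, and
        # decode it back by counting the cycles between the two marker bits.
        # Assumes B's clock is 1 GHz, so one cycle is one nanosecond.

        RECEIVER_CLOCK_HZ = 1_000_000_000

        def encode_as_delay(bits):
            # "10011" -> 19: send bit-2 exactly 19 cycles after bit-1
            return int(bits, 2)

        def decode_from_delay(cycles, width):
            # 19 cycles, 5-bit message -> "10011"
            return format(cycles, "0{}b".format(width))

        delay = encode_as_delay("10011")
        print(delay)                          # 19
        print(decode_from_delay(delay, 5))    # 10011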

    This isn't practical for many reasons: networks have long and highly variable delays (bits could take different routes, through faster or slower routers and shared mediums). But it's interesting, I think, that we can transmit any quantity of data between two machines using only 2 bits, given enough time (exponential time, actually, with respect to the number of bits that need to be sent, which makes this completely impractical: the worst-case wait for just 50 bits is already on the order of days).
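
    To put numbers on "exponential", assuming B's 1 GHz clock again (one cycle per nanosecond), the worst-case wait for an n-bit message is 2^n - 1 cycles:

    Code (Python):

        # Worst-case transmission time for an n-bit message at a 1 GHz receiver:
        # the gap between the two bits can be up to 2^n - 1 cycles (nanoseconds).

        def worst_case_seconds(n_bits, clock_hz=1e9):
            return (2 ** n_bits - 1) / clock_hz

        for n in (10, 30, 50, 100):
            print(n, worst_case_seconds(n))
        # 10 bits  -> ~1 microsecond
        # 30 bits  -> ~1 second
        # 50 bits  -> ~13 days
        # 100 bits -> ~4e13 years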

    What's actually happening is that, if we were to look at the two bits in traffic, we'd see that they are separated by a distance d at all times. The value of d in binary is the data being sent. So we're not breaking any rules; physically there is a valid encoding of the data, and it's encoded in a distance.

    It's the same idea as encoding data in a long shoe lace, for example. We can take a long shoe lace and cut it to a size s where s in binary is the data we want to store. Or, instead of a shoe lace, we can take two baseballs and throw them into the air, one after the other. The amount of time between when the two baseballs hit the ground (in some time unit) gives the encoded data. By delaying the throw of the second ball, we can encode as much data as we want, without the need for a storage medium.

    We can also choose to split the data we want to send into groups. For each group we would use the two-bit technique. If we do this we can obtain a process which enables us to send n bits from A to B, by sending only (1+n/logn) physical bits and waiting a total of (n^2/logn) clock cycles.
     
  2. Sep 7, 2008 #2

    NoTime

    Science Advisor
    Homework Helper

    Interesting.
    It occurs to me that in this case data transmission would be subject to the rules of SR.
    You would only observe the correct delay if the transmitter and receiver are at rest relative to each other, in the same frame.

    I would have to think a while about this, but I wonder if this might constitute a simplified explanation of how GPS works.
     
  3. Sep 7, 2008 #3

    CRGreathouse

    Science Advisor
    Homework Helper

    Of course, in all that time the receiving computer's bandwidth is monopolized by the sender, since any other transmission in the meantime would ruin the communication. Unless, that is, a b-bit identifier is sent instead of a single bit. In that case the bandwidth used is 2b+k bits (for some small k, whatever the network requires, perhaps 2), and the protocol is limited to 2^b members communicating with a single computer.
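
    A rough Python sketch of how the receiver could demultiplex such tagged markers (the (cycle, sender_id) event format and the names here are just assumptions for illustration):

    Code (Python):

        from collections import defaultdict

        # Each marker bit carries a b-bit sender ID, so up to 2**b senders can
        # share the link. The receiver pairs the two markers with the same ID
        # and decodes each sender's value from the cycle gap between them.

        def demultiplex(events):
            arrivals = defaultdict(list)
            for cycle, sender in events:
                arrivals[sender].append(cycle)
            return {sender: times[1] - times[0]
                    for sender, times in arrivals.items() if len(times) == 2}

        # Two senders interleaved on one link: sender 0 encodes 19, sender 1 encodes 5.
        events = [(0, 0), (2, 1), (7, 1), (19, 0)]
        print(demultiplex(events))    # {0: 19, 1: 5}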
     
  4. Sep 7, 2008 #4

    -Job-

    Science Advisor

    It's possible to compromise and send n-bit strings in less than exponential time, at the cost of sending more bits.

    For example, suppose we split the n bits into n/log n groups of log n bits each. We send an initial bit that signals the start of transmission; then each subsequent bit's delay since the previous bit encodes one of the n/log n bit groups. This means that we send 1 + n/log n bits.

    To send each bit group, we must wait for up to 2^(log n) = n cycles to elapse. Since there are n/log n groups, we wait a total of n^2/log n cycles, which is feasible from an implementation perspective. This particular approach would only be for one-time use, but it shows that we can strike a balance between bits sent and elapsed cycles.

    So for example, if we have any 100-bit string and B is running at 1 GHz, then we can send all the data over by:
    - sending approximately 1 + 100/6 ≈ 18 bits.
    - waiting a total of about 100*100/6 ≈ 1667 billionths of a second (1667 cycles at 1 GHz).
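
    Here's a small simulation of the grouped scheme (the rounding of the group size and the extra cycle added to each gap are implementation choices made just for this sketch, not part of the scheme itself):

    Code (Python):

        import math

        # Simulate the grouped scheme on an ideal receiver clock: one start bit,
        # then one marker bit per group, where the gap (in cycles) before each
        # marker encodes that group's value. The +1 cycle per gap ensures a
        # group of value 0 still produces a nonzero gap.

        def send(bits):
            g = max(1, round(math.log2(len(bits))))          # group size ~ log n
            groups = [bits[i:i + g] for i in range(0, len(bits), g)]
            cycle = 0
            cycles = [cycle]                                 # start-of-transmission bit
            for grp in groups:
                cycle += int(grp, 2) + 1                     # gap encodes the group
                cycles.append(cycle)
            return cycles

        def receive(cycles, n):
            g = max(1, round(math.log2(n)))
            widths = [g] * (n // g) + ([n % g] if n % g else [])
            gaps = [b - a - 1 for a, b in zip(cycles, cycles[1:])]
            return "".join(format(v, "0{}b".format(w)) for v, w in zip(gaps, widths))

        msg = "1100101011"                                   # n = 10, groups of ~3 bits
        cycles = send(msg)
        print(len(cycles), "physical bits,", cycles[-1], "cycles")   # 5 physical bits, 18 cycles
        print(receive(cycles, len(msg)) == msg)              # True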

    So each time we can get away with sending fewer bits. This brings up interesting questions about the relationship between time and space. Superficially we might say that we're encoding some of the data in time, so we can get away with using fewer bits of space.
     