How Does a Computer Know When Data Transfer Has Ended?

AI Thread Summary
Computers determine the end of data transfer using file size specifications, escape characters, and synchronization patterns to prevent misinterpretation of a continuous stream of '0' bits. Digital signaling employs run-length limited transmission to maintain clock synchronization between transmitter and receiver, ensuring accurate data interpretation. Fluctuating transfer rates during file copying or downloading are managed through negotiation protocols that adjust the transfer rate as needed. Special codes are included in data packets to indicate when to switch transfer rates, minimizing errors caused by delays. Ultimately, the method of data transfer and synchronization can vary based on the type of storage device and bus protocol used.
Bararontok
Because digital signaling uses an almost instantaneous transition from a set power level to 0 W to represent the '1's and '0's of binary data, how does the computer determine when a file has finished being copied to its storage device? Even after the signal has stopped being transmitted, the power level drops to and stays at 0 W, so what prevents the computer from mistaking that 0 W level for a continuous stream of '0' bits and saving '0' bits to the storage device endlessly?
 
Generally, digital signaling involves run-length limited (RLL) transmission or something similar in order to keep the transmitter and receiver clocks in sync, by limiting the maximum amount of time with no transition in the digital stream. Similar to the "escape characters" mentioned by mfb, special patterns are used to synchronize the clocks and indicate the start of a message. If the length of messages is not fixed, then another special pattern can be used to indicate the end of a message. Wiki article:

http://en.wikipedia.org/wiki/Run_Length_Limited
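
To illustrate how a special pattern can mark the end of a message, here is a minimal sketch in Python of byte stuffing, the same idea as the "escape characters" mentioned above: a reserved flag byte marks the end of a frame, and an escape byte keeps any data byte that happens to equal the flag from being misread as the end of the stream. The byte values and function names are purely illustrative, not taken from any particular protocol.

Code:
FLAG = 0x7E  # reserved pattern meaning "end of frame" (illustrative value)
ESC = 0x7D   # escape byte: the byte after it is plain data, not a control byte

def frame(payload: bytes) -> bytes:
    """Wrap raw payload bytes so the receiver can find the end of the message."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):   # data byte collides with a control byte
            out.append(ESC)    # escape it so it is not misinterpreted
        out.append(b)
    out.append(FLAG)           # an unescaped FLAG marks the end of the frame
    return bytes(out)

def deframe(stream) -> bytes:
    """Read bytes one at a time until an unescaped FLAG byte is seen."""
    payload = bytearray()
    escaped = False
    for b in stream:
        if escaped:
            payload.append(b)  # the byte after ESC is always data
            escaped = False
        elif b == ESC:
            escaped = True
        elif b == FLAG:
            break              # end of message: stop saving bytes here
        else:
            payload.append(b)  # long runs of 0x00 are just ordinary data
    return bytes(payload)

# Round trip: data containing the flag value is still recovered intact.
msg = bytes([0x00, 0x7E, 0x41, 0x00, 0x00])
assert deframe(iter(frame(msg))) == msg

The receiver never has to interpret silence on the line; it keeps saving bytes only until the reserved end-of-frame pattern arrives.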
 
Alright, now here is another question. Since the transfer rates of copying or downloading files are not fixed, what prevents a computer from mistaking a slow, lower-frequency signal, in which each '1' or '0' occupies a longer period, for a higher-frequency signal with shorter periods carrying continuous streams of multiple '1's and '0's rather than single long '1's and '0's?

A possible explanation: maybe the reason transfer rates fluctuate is that, when downloading a file from the internet, the server has to alternate between requests from many computers, but the individual packets sent to a specific computer are of fixed length and fixed frequency. Likewise, when a file is copied from one local hard drive to another, the computer performing the copy has to alternate between the copy operation and the other processes that are running, so once again the packets of data being transferred are of fixed length and frequency. Perhaps, according to the article on RLL, the clocks of the transmitter and receiver are synchronized to a given frequency before the copying starts. To prevent the error of saving redundant '0' bits after a packet arrives while the copy process is stalled handling other requests, the file size, escape characters, and synchronization data are added to each packet, so that the save operation pauses if a packet is sent and the next packets are delayed. The same file size, escape characters, and synchronization data in the last packet then signal that the save operation should stop completely.
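
As a rough illustration of the idea that a declared size tells the receiver when to stop saving, here is a Python sketch of a receiver reading a length-prefixed transfer. The 8-byte length header and the stream interface are assumptions made up for this sketch, not the format of any real bus or network protocol; the point is only that the receiver counts bytes against the announced file size instead of waiting for the line to go quiet.

Code:
import io
import struct

def receive_file(stream) -> bytes:
    """Read one transfer whose total size is announced up front.

    The sender first writes an 8-byte big-endian length, then the payload.
    The receiver stops exactly when that many bytes have been saved, even if
    the link then sits idle (all zeros) or later packets are delayed.
    """
    (size,) = struct.unpack(">Q", stream.read(8))   # declared file size
    received = bytearray()
    while len(received) < size:                     # save only up to the declared size
        chunk = stream.read(min(4096, size - len(received)))
        if not chunk:                               # link stalled or closed early
            raise IOError("transfer ended early: got %d of %d bytes"
                          % (len(received), size))
        received.extend(chunk)
    return bytes(received)

# Usage sketch with an in-memory "link" that goes idle (zeros) after the payload:
payload = b"hello, disk"
link = io.BytesIO(struct.pack(">Q", len(payload)) + payload + b"\x00" * 100)
assert receive_file(link) == payload                # the 100 idle zero bytes are never saved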
 
Bararontok said:
Since the transfer rates of copying or downloading files are not fixed ...
If transfer rates are not fixed, then a special protocol is used to transition between transfer rates (called negotiation in some cases), generally starting at the slowest rate, then sending messages indicating what the transfer rate should switch to.
 
rcgldr said:
If transfer rates are not fixed, then a special protocol is used to transition between transfer rates (called negotiation in some cases), generally starting at the slowest rate, then sending messages indicating what the transfer rate should switch to.

So that means there should be an interval during which the transfer rate remains fixed, and then a code indicating what the transfer rate should switch to when the next segment of data is to be copied. Of course, for this to work without errors, the switching of transfer rates should not be too frequent or occur at random intervals.
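
A toy sketch of that kind of negotiation in Python: the link starts at the slowest rate, an explicit control message announces each switch before any data is sent at the new rate, and the receiver acknowledges it. The message names and rate values here are invented for illustration and do not correspond to any specific bus standard.

Code:
RATES = [1, 2, 4]   # rates both ends support, slowest first (arbitrary units)

class Receiver:
    def __init__(self):
        self.rate = RATES[0]            # always start at the slowest, known-good rate

    def handle(self, message):
        kind, value = message
        if kind == "SWITCH_RATE":
            # The new rate is announced before any data is sent at it, so the
            # receiver never has to guess whether a long pulse is one slow
            # bit or several fast bits.
            if value in RATES:
                self.rate = value
                return ("ACK", value)   # confirm before the sender speeds up
            return ("NAK", self.rate)   # unsupported rate: stay where we are
        elif kind == "DATA":
            return ("RECEIVED_AT", self.rate)

# Usage sketch:
rx = Receiver()
print(rx.handle(("DATA", b"...")))      # ('RECEIVED_AT', 1)  slowest rate by default
print(rx.handle(("SWITCH_RATE", 4)))    # ('ACK', 4)
print(rx.handle(("DATA", b"...")))      # ('RECEIVED_AT', 4)
print(rx.handle(("SWITCH_RATE", 8)))    # ('NAK', 4)  unsupported, rate unchanged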
 
Surely the answer to this question differs depending on the storage device type, bus protocol, etc. Storage devices can talk to a controller chip in all kinds of ways, and negotiating this would be the responsibility of the driver for that device (on a PC, at least).
 
Bararontok said:
the switching of transfer rates should not be too frequent or occur at random intervals.
In the case of SATA, the bus is allowed to go completely dormant (a sleep state for the bus). To reduce costs, only the SATA host has a very accurate clock, while the SATA device goes through a handshake sequence to synchronize its programmable oscillator with the host clock. There is also a handshake sequence to switch from the default 1.5 Gbps to 3.0 Gbps or 6.0 Gbps.
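
Purely as a sketch of that "handshake to pick a speed" idea, and not the actual SATA out-of-band signaling, here is how a host and device might settle on a common rate in Python: the host offers rates from fastest to slowest and the device accepts the first one it can lock its oscillator to. The rate values are the SATA generations mentioned above, but the handshake logic itself is invented for illustration.

Code:
HOST_RATES_GBPS = [6.0, 3.0, 1.5]   # host tries fastest first; 1.5 Gbps is the fallback

class Device:
    def __init__(self, max_rate_gbps):
        self.max_rate_gbps = max_rate_gbps

    def try_lock(self, rate_gbps):
        # Stand-in for the device tuning its programmable oscillator to the
        # host clock at this rate; it succeeds only if the device supports it.
        return rate_gbps <= self.max_rate_gbps

def negotiate(device):
    """Return the fastest rate both ends agree on, or None if none works."""
    for rate in HOST_RATES_GBPS:
        if device.try_lock(rate):
            return rate               # both sides now run the link at this rate
    return None

# Usage sketch:
print(negotiate(Device(3.0)))   # 3.0 - an older device settles below the host maximum
print(negotiate(Device(6.0)))   # 6.0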
 