How Does a Computer Know When Data Transfer Has Ended?

  • Thread starter: Bararontok
  • Tags: Data

Discussion Overview

The discussion revolves around how computers determine the end of data transfer during file copying or downloading. It explores concepts related to digital signaling, synchronization, transfer rates, and the protocols involved in data transmission. The scope includes technical explanations and conceptual clarifications regarding data transfer mechanisms.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant questions how a computer distinguishes the end of a file transfer when the power level drops to 0W, suggesting concerns about mistaking this for continuous '0' bits.
  • Another participant proposes that specifying file size or using escape characters can help indicate the end of a transfer.
  • Some participants discuss the role of run length limited transmission in maintaining synchronization between transmitter and receiver clocks, mentioning the use of special patterns to indicate the start and end of messages.
  • A participant raises a question about how to prevent confusion between slow and fast transfer rates, suggesting that synchronization data and file size information are crucial for accurate data handling.
  • There is a mention of negotiation protocols that allow for transitioning between different transfer rates, starting from the slowest rate and adjusting as needed.
  • Another participant notes that the approach may vary depending on the type of storage device and bus protocol, indicating that device drivers play a role in managing these interactions.
  • One participant highlights that in SATA, the bus can enter a sleep state, and synchronization involves a handshake sequence to align the device's clock with the host clock.

Areas of Agreement / Disagreement

Participants express various viewpoints on the mechanisms of data transfer and synchronization, indicating that multiple competing views remain. There is no consensus on a single method or explanation for how computers determine the end of data transfer.

Contextual Notes

Limitations include the dependence on specific protocols and device types, as well as the potential for unresolved details regarding synchronization and transfer rate negotiation.

Bararontok
Since digital signaling uses an almost instantaneous transition between a set power level and 0W to represent the '1's and '0's of binary data, how does the computer determine when a file has finished being copied to its storage device? After the signal stops being transmitted, the power level drops to and remains at 0W, so what prevents the computer from mistaking that 0W level for a continuous stream of '0' bits and saving '0' bits to the storage device endlessly?
 
Generally digital signaling involves run length limited transmission or something similar, in order to keep the transmitter and receiver clocks in sync, by limiting the maximum amount of time with no transition in the digital stream. Similar to the "escape characters" mentioned by mfb, special patterns are used to synchronize the clocks and indicate the start of a message. If the length of messages is not fixed, then another special pattern can be used to indicate the end of a message. Wiki article:

http://en.wikipedia.org/wiki/Run_Length_Limited
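A minimal sketch of the framing idea described above, assuming a hypothetical byte-oriented link: the payload length is announced up front, a reserved END byte marks the end of the message, and an escape (stuffing) byte guarantees the payload can never be mistaken for that marker. The constant values and function names are illustrative, not taken from any particular protocol.

```python
# Hypothetical framing sketch: length prefix plus a reserved end-of-message
# byte, with byte stuffing so payload bytes can never look like the marker.
END = 0x7E   # reserved "end of message" pattern
ESC = 0x7D   # escape character

def frame(payload: bytes) -> bytes:
    """Prefix the payload length, escape reserved bytes, append END."""
    out = bytearray(len(payload).to_bytes(4, "big"))
    for b in payload:
        if b in (END, ESC):
            out += bytes([ESC, b ^ 0x20])   # stuff: escape and flip a bit
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def deframe(stream: bytes) -> bytes:
    """Recover the payload: stop at END, undo the stuffing, check the length."""
    declared = int.from_bytes(stream[:4], "big")
    payload, i = bytearray(), 4
    while stream[i] != END:
        if stream[i] == ESC:
            i += 1
            payload.append(stream[i] ^ 0x20)
        else:
            payload.append(stream[i])
        i += 1
    assert len(payload) == declared, "length prefix and end marker disagree"
    return bytes(payload)

print(deframe(frame(b"file contents \x7e with a reserved byte")))
```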
 
Alright, now here is another question. Since the transfer rates of copying or downloading files are not fixed, what prevents a computer from mistaking a slow, lower-frequency signal, in which each '1' or '0' occupies a long period, for a higher-frequency signal carrying a continuous stream of repeated '1's and '0's in shorter periods?

A possible explanation: maybe transfer rates fluctuate because, when downloading a file from the internet, the server has to alternate between requests for data, yet the individual packets sent to a specific computer are of fixed length and fixed frequency. Likewise, when a file is copied from one local hard drive to another, the computer performing the copy has to alternate between the copy operation and the other processes that are running, so again the packets being transferred are of fixed length and frequency. Perhaps, as the RLL article suggests, the transmitter and receiver clocks are synchronized to a given frequency before the copying starts. To prevent redundant '0' bits from being saved after each packet while the copy is stalled handling other requests, the file size, escape characters and synchronization data could be added to each packet, so that the save operation pauses whenever the next packets are delayed. The same file size, escape characters and synchronization data in the last packet would then stop the save operation completely.
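One concrete way the "file size tells the receiver when to stop" idea can work is sketched below, assuming a simple stream protocol that announces the size up front (much as HTTP does with its Content-Length header). The socket loop and helper names are hypothetical; the point is only that the receiver counts bytes against the declared size, so gaps between delayed packets are just waiting time, never extra '0' bits.

```python
# Hypothetical receive loop: the sender announces the file size up front,
# so the receiver counts bytes and stops exactly at the declared size.
import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because recv() may return fewer."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise IOError(f"connection closed with {n - len(buf)} bytes missing")
        buf += chunk
    return bytes(buf)

def receive_file(sock: socket.socket, path: str) -> None:
    size = int.from_bytes(recv_exact(sock, 8), "big")   # declared file size
    remaining = size
    with open(path, "wb") as f:
        while remaining > 0:
            chunk = sock.recv(min(65536, remaining))    # blocks while packets are delayed
            if not chunk:                               # connection closed early
                raise IOError(f"transfer ended {remaining} bytes short")
            f.write(chunk)
            remaining -= len(chunk)
    # Exactly `size` bytes are saved; whatever idle time follows is ignored.
```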
 
Bararontok said:
Since the transfer rates of copying or downloading files are not fixed ...
If transfer rates are not fixed, then a special protocol is used to transition between transfer rates (called negotiation in some cases), generally starting at the slowest rate, then sending messages indicating what the transfer rate should switch to.
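A toy sketch of that negotiation idea, not modeled on any specific bus: both ends exchange the rates they support at a base rate every device is guaranteed to decode, agree on the fastest common rate, and only then switch. The rate values and function name are illustrative assumptions.

```python
# Hypothetical rate negotiation, not any specific bus protocol: the exchange
# itself happens at a slow base rate that both sides can always decode.
BASE_RATE = 1.5  # every device must at least understand this rate

def negotiate(host_rates: set[float], device_rates: set[float]) -> float:
    # Fastest rate both ends support; the base rate is always a fallback.
    common = (host_rates & device_rates) | {BASE_RATE}
    agreed = max(common)
    # A "switch to <agreed>" message would be sent at BASE_RATE before either
    # side changes its clock, so the transition point is unambiguous.
    return agreed

print(negotiate({1.5, 3.0, 6.0}, {1.5, 3.0}))   # -> 3.0
```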
 
rcgldr said:
If transfer rates are not fixed, then a special protocol is used to transition between transfer rates (called negotiation in some cases), generally starting at the slowest rate, then sending messages indicating what the transfer rate should switch to.

So that means there should be an interval during which the transfer rate remains fixed, followed by a code indicating what rate to switch to before the next segment of data is copied. Of course, for this to work without errors, the switching of transfer rates should not be too frequent or occur at random intervals.
 
Surely the answer to this question is different depending on storage device type, bus protocol etc. Storage devices could talk to a chip in all kinds of ways, and negotiating this would be the responsibility of the driver for that device (if on a PC).
 
Bararontok said:
the switching of transfer rates should not be too frequent or occur at random intervals.
In the case of SATA, the bus is allowed to go completely dormant (a sleep state for the bus). To reduce costs, only the SATA host has a very accurate clock, while the SATA device goes through a handshake sequence to synchronize its programmable oscillator with the host clock. There's also a handshake sequence to switch from the default 1.5 Gbit/s to 3.0 Gbit/s or 6.0 Gbit/s.
 
