How do disk operations affect CPU utilization?

Summary
The CPU's involvement in disk operations varies based on the technology used, such as PIO (Programmed Input/Output) and DMA (Direct Memory Access). In PIO mode, the CPU actively manages each byte transfer, leading to higher CPU utilization, while DMA allows the disk to transfer data independently, significantly reducing CPU load. During long disk operations like file copying, CPU utilization remains low, enabling users to perform other tasks, including CPU-intensive activities like gaming, without significant impact. However, multiple simultaneous disk operations can increase CPU demand due to shared memory bandwidth and the overhead of managing these transfers. The efficiency of data transfers is also influenced by factors such as memory latency, cache access times, and the need for the CPU to handle task switching and setup for each transfer. Understanding the relative speeds of CPU instructions, memory access, and disk transfer rates can provide insights into the overall performance and efficiency of disk operations and their impact on CPU usage.
es619512
How much is the CPU involved in disk operations? Let's say I have a long disk operation, such as copying a large file for several minutes. When I look at the CPU utilization on my computer (Windows XP), it's pretty low; basically zero. So how involved in such an operation is the CPU? Does it just tell the disk to do something and check back every so often to see if it's complete? Should I be able to do CPU-intensive tasks like playing games while a disk operation is occurring?

What if several disk operations are going on at the same time? Is that more CPU intensive?

Are lots of small CPU operations more intensive than one large one?

What about other types of I/O such as network and memory? How do those affect the CPU?
 
es619512 said:
How much is the CPU involved in disk operations?

It depends. For example, with the older IDE (parallel ATA) drives there were PIO (Programmed Input/Output) modes, where the processor was involved in reading each byte from the drive and storing it into memory. There were also DMA (Direct Memory Access) modes, where the processor would send commands telling the drive to start the transfer, and dedicated DMA hardware that wasn't really part of the CPU would handle moving the bytes into memory with essentially no CPU usage. The same is almost certainly true of the newer SATA drives. The PIO modes use far more of the CPU than DMA, and this is reflected in the speed of the transfer.
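The PIO-versus-DMA contrast can be sketched with a toy model. The per-operation costs below are invented placeholders, not measurements from any real controller; the point is only that PIO cost grows with transfer size while DMA cost does not:

```python
# Toy model contrasting CPU work in PIO and DMA transfers.
# All "costs" here are made-up illustrative units, not real cycle counts.

def pio_transfer(num_bytes):
    """In PIO mode the CPU runs a read/store loop for every byte (or word)."""
    cpu_ops = 0
    for _ in range(num_bytes):
        cpu_ops += 2          # one port read + one memory store per byte
    return cpu_ops

def dma_transfer(num_bytes, setup_ops=100, interrupt_ops=50):
    """With DMA the CPU only programs the controller and services one
    completion interrupt; the byte copying happens in dedicated hardware."""
    return setup_ops + interrupt_ops   # independent of num_bytes

if __name__ == "__main__":
    size = 64 * 1024   # a 64 KiB transfer
    print("PIO CPU ops:", pio_transfer(size))   # grows linearly with size
    print("DMA CPU ops:", dma_transfer(size))   # constant
```

Doubling the transfer size doubles the PIO cost but leaves the DMA cost unchanged, which is why DMA transfers leave the CPU nearly idle.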

You should be able to do other things while a file transfer is in progress; the few dozen megabytes per second coming from the drive will use some of the available memory bandwidth, but that probably won't be overwhelming.
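As a rough back-of-the-envelope check (both figures below are assumed round numbers, not measured values):

```python
# Estimate what fraction of memory bandwidth a disk transfer consumes.

def bandwidth_fraction(disk_mb_per_s, mem_mb_per_s):
    """Fraction of total memory bandwidth occupied by the disk stream."""
    return disk_mb_per_s / mem_mb_per_s

disk_rate = 100.0      # ~100 MB/s from a drive (assumed round number)
mem_rate = 10_000.0    # ~10 GB/s main-memory bandwidth (assumed round number)

print(f"{bandwidth_fraction(disk_rate, mem_rate):.1%} of memory bandwidth")
```

Even if the copied data crosses the memory bus two or three times (DMA in, CPU read, write back out), the disk stream still claims only a few percent of memory bandwidth, which is why other programs barely notice.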

The drive may not be able to do more than one transaction at a time.

Currently the CPU is much faster than external memory, so there is some delay in getting a memory transfer started. The cache tries to minimize that by providing more rapid access to repeatedly used data.

You might want to construct a table showing the relative speeds of: a single simple CPU instruction; memory latency to fetch a byte from a new location; latency to fetch a byte from cache; latency to fetch a whole page from each of those; transfer speed of a byte from a drive; time for the drive to rotate around to a particular sector; time to seek to a particular track on a drive; transfer speed of a byte over a network; and so on. Compare all of those and try to get a feel for the relative speeds. Remember that drivers and software can add large delays on top of all this, but gathering the information to build that table yourself may give you a better feel for it.
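As a starting point for such a table, here are commonly quoted ballpark figures; the exact values vary widely between hardware generations, so treat every number as an order-of-magnitude guess rather than a measurement:

```python
# Rough order-of-magnitude latencies, in nanoseconds.
# These are commonly quoted ballpark figures, not benchmarks of any machine.
latency_ns = {
    "one simple CPU instruction":     0.5,
    "L1 cache hit":                   1,
    "L2 cache hit":                   5,
    "main-memory access":             100,
    "transfer 4 KiB from RAM":        1_000,
    "SSD random read":                100_000,
    "network round trip (same LAN)":  500_000,
    "disk seek + rotational latency": 10_000_000,
}

# Print from fastest to slowest to make the spread visible.
for name, ns in sorted(latency_ns.items(), key=lambda kv: kv[1]):
    print(f"{ns:>14,.1f} ns  {name}")
```

The useful observation is the spread: a mechanical seek costs on the order of ten million instruction times, which is why a CPU that merely starts transfers and fields completion interrupts shows near-zero utilization during a long copy.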
 
Part of the overhead for a file copy is parsing the file system metadata to break the file up into contiguous fragments on disk, so that each fragment can be handled via a single large transfer. This involves some CPU overhead and task switching as the copy works through each fragment.
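That fragment-by-fragment loop can be sketched like this; the fragment sizes are invented for illustration, and in-memory buffers stand in for the real disk:

```python
# Sketch of a copy loop that issues one large transfer per contiguous
# fragment instead of many small ones. In-memory buffers stand in for files.
import io

def copy_by_fragments(src, dst, fragment_sizes):
    """Copy src to dst, issuing one read and one write per fragment.
    Returns the number of transfers issued."""
    transfers = 0
    for size in fragment_sizes:
        chunk = src.read(size)    # one large read per contiguous fragment
        dst.write(chunk)
        transfers += 1
    return transfers

src = io.BytesIO(b"x" * 300)
dst = io.BytesIO()
n = copy_by_fragments(src, dst, [100, 150, 50])   # three fragments (assumed)
print(n, len(dst.getvalue()))
```

Fewer, larger transfers mean fewer command setups and fewer completion interrupts, which is exactly the CPU overhead the post describes.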

Part of the overhead for each transfer is the shared usage of memory bandwidth, and part is the setup for each data transfer. On a system with virtual memory, each DMA or bus-mastered I/O operation requires that all the memory pages containing the I/O buffer be locked and their virtual addresses translated into physical addresses. The physical address and transfer size for each page (to handle partially filled pages) are programmed into "descriptors" in the I/O hardware. If there aren't enough descriptors for the entire transfer, then either multiple commands are used, or a single command with partial transfers is issued and an interrupt occurs after each partial transfer to reprogram the descriptors for the next one, until all of the data is transferred.
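A sketch of that descriptor setup, with an assumed 4 KiB page size, an assumed four-entry descriptor ring, and a fake address translation standing in for the real page tables:

```python
# Sketch of scatter-gather descriptor setup for a DMA transfer.
# PAGE_SIZE and MAX_DESCRIPTORS are assumed values; the "physical address"
# is a fake mapping standing in for a real page-table translation.
PAGE_SIZE = 4096
MAX_DESCRIPTORS = 4   # size of the hardware descriptor ring (assumed)

def build_descriptors(buf_start, length):
    """Split a virtually contiguous buffer into per-page (addr, size) pairs,
    handling a partially filled first and last page."""
    descriptors = []
    offset, remaining = buf_start, length
    while remaining > 0:
        page_room = PAGE_SIZE - (offset % PAGE_SIZE)  # bytes left in this page
        size = min(page_room, remaining)
        phys = offset ^ 0x8000_0000   # fake virtual-to-physical translation
        descriptors.append((phys, size))
        offset += size
        remaining -= size
    return descriptors

def program_transfer(buf_start, length):
    """Group descriptors into partial transfers of at most MAX_DESCRIPTORS;
    each group models one reprogramming of the ring after an interrupt."""
    descs = build_descriptors(buf_start, length)
    return [descs[i:i + MAX_DESCRIPTORS]
            for i in range(0, len(descs), MAX_DESCRIPTORS)]

parts = program_transfer(buf_start=100, length=20000)
# 20000 bytes starting mid-page spans 5 pages -> 5 descriptors,
# which exceed the 4-entry ring -> 2 partial transfers.
print(len(parts), sum(size for part in parts for _, size in part))
```

Each extra partial transfer costs one interrupt and one round of descriptor reprogramming, which is where the per-transfer CPU overhead comes from.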
 
