Is Displaying Photos on a Computer More Efficient with a Dedicated Video Card?

  • Thread starter: Alex_Sanders
SUMMARY

Displaying photos on a computer involves both the CPU and the GPU, leading to potential lag when dragging large images. While the GPU is capable of directly reading and displaying JPEG files from RAM or hard drives, current software architectures do not fully utilize this capability. The discussion highlights the ongoing debate in computer design regarding the allocation of tasks between CPUs and specialized hardware, suggesting that advancements in video card software could optimize image rendering processes.

PREREQUISITES
  • Understanding of CPU and GPU architecture
  • Knowledge of image file formats, specifically JPEG
  • Familiarity with hardware acceleration concepts
  • Basic principles of software development related to graphics rendering
NEXT STEPS
  • Research advancements in GPU software for image processing
  • Explore hardware acceleration techniques in graphics rendering
  • Learn about FPGA applications in image decoding
  • Investigate the historical shifts in CPU and GPU task allocation
USEFUL FOR

This discussion is beneficial for computer hardware designers, software developers focused on graphics applications, and anyone interested in optimizing image rendering performance on personal computers.

Alex_Sanders
If you drag a large photo around on a not-so-newly assembled computer, you'll notice there is a lag. That's because displaying a photo on your screen combines the work of both the CPU and the video card, as we know.
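To put a rough number on that lag, here is a back-of-the-envelope sketch (the 24-megapixel figure is just an illustrative assumption, not from the thread) of how much pixel data the CPU has to produce and hand over to the video card for one full-size redraw:

```python
# Rough size of the decoded pixel data for a hypothetical 24-megapixel
# photo (6000 x 4000, 8-bit RGB) -- this is what the CPU must decode
# and transfer to the video card before anything appears on screen.
width, height, bytes_per_pixel = 6000, 4000, 3
decoded_bytes = width * height * bytes_per_pixel
print(f"{decoded_bytes / 2**20:.1f} MiB")  # about 68.7 MiB per full-size redraw
```

Even though the JPEG file itself might be only a few megabytes, every redraw while dragging potentially touches tens of megabytes of decoded pixels, which is where the stutter comes from.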

I know some of the graphical work has been "outsourced" to the video card, but I'm really not sure whether displaying a picture or a photo has been tasked to the video card alone. I don't think there would be any technical difficulty. After all, having the CPU decode a JPEG file and then pass the processed data to the video card, telling it what to display, seems pretty redundant. It could be done this way instead: the CPU detects the request to display a JPEG file, then temporarily surrenders control of the main bus to the GPU; the GPU reads the file directly from RAM, or even from the hard drive, and displays it, with the decoding done in hardware by a codec stored inside an FPGA.
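For reference, the decode work being talked about here is mostly entropy decoding, dequantization, and an inverse DCT over 8x8 blocks. A minimal Python sketch of just the inverse-DCT stage, which is the arithmetic-heavy part that an FPGA or GPU codec would implement in hardware, might look like this (a naive O(n^4) formulation for clarity; real decoders use fast factored versions):

```python
import math

def idct_8x8(coeffs):
    """Inverse 8x8 DCT-II, the arithmetic core of baseline JPEG decoding.

    `coeffs` is an 8x8 list of dequantized frequency coefficients;
    returns the 8x8 block of spatial sample values.
    """
    out = [[0.0] * 8 for _ in range(8)]
    for x in range(8):
        for y in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    cu = 1 / math.sqrt(2) if u == 0 else 1.0
                    cv = 1 / math.sqrt(2) if v == 0 else 1.0
                    s += (cu * cv * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[x][y] = s / 4
    return out

# Sanity check: a block with only a DC coefficient decodes to a flat patch.
dc_only = [[0.0] * 8 for _ in range(8)]
dc_only[0][0] = 64.0
block = idct_8x8(dc_only)
```

A 24-megapixel photo contains hundreds of thousands of these blocks, which is why doing the transform in dedicated hardware instead of on the CPU is attractive.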

All we need is newly developed software that comes with the video card we buy.

And I know quite well that since things are not done this way, there must be reasons. What are they? Or have some of my ideas already been implemented, like the offloading of 3D graphics calculations?
It is analogous to smart terminals versus dumb terminals. Time and time again, designers move tasks from the CPU to specialized hardware. Time and again, other designers move them back. Hackers call it the cycle of reincarnation. The best way keeps shifting.
 
