If you try to drag a large photo around on a not-so-new computer, you'll notice a lag. As I understand it, that's because displaying a photo on screen involves both the CPU and the video card. I know some graphical work has been "outsourced" to the video card, but I'm not sure whether displaying a picture is handled by the video card alone.

I don't see any technical obstacle to doing it entirely on the video card. After all, having the CPU decode a JPEG file and then pass the processed data to the video card, telling it what to display, seems pretty redundant. It could instead work like this: the CPU detects the request to display a JPEG file and temporarily surrenders control of the main bus to the GPU; the GPU then reads the file directly from RAM, or even from the hard drive, and displays it, with the decoding done in hardware by a codec implemented in an FPGA. All we would need is new driver software shipped with the video card.

Since things aren't done this way, there must be reasons. What are they? Or have some of my ideas already been implemented, the way 3D graphics calculations have been?
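To show why the CPU-decode-then-upload path strikes me as wasteful, here is a back-of-the-envelope sketch. The numbers are assumptions for a typical 24-megapixel photo, not measurements: the CPU inflates a compressed file into a much larger raw bitmap, and it is that larger buffer that then has to cross the bus to the video card.

```python
# Rough, illustrative numbers (assumptions, not measurements).
width, height = 6000, 4000      # a 24-megapixel photo
bytes_per_pixel = 4             # RGBA, 8 bits per channel, uncompressed

# Size of the raw bitmap the CPU produces after decoding:
raw_size = width * height * bytes_per_pixel

# A plausible on-disk size for a JPEG of that photo (~8 MB):
jpeg_size = 8 * 1024 * 1024

print(raw_size)                 # 96000000 bytes (~96 MB of pixels)
print(raw_size / jpeg_size)     # the decoded data is roughly 11x larger
```

So under these assumptions, the machine decompresses ~8 MB into ~96 MB and ships the big version across the bus, instead of handing the small compressed file to the GPU and letting it decode.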