More than 2GB of VRAM for 1080p gaming?

  1. Nov 14, 2012 #1
    For 1920x1080 resolution, is it necessary to have more than 2GB of VRAM on your graphics card(s)?

    Someone told me that the only time when 3-4GB of video memory enter the equation is if you're using more than one monitor and/or running games on 2560x1600 resolution. Other than that, 2GB is more than sufficient for 1080p gaming.

    Can you confirm the above statement?
     
  3. Nov 14, 2012 #2
    Yes and no. It's very generalised to say you need 3-4 GB purely because of screen resolution. For example, I could play the original Unreal Tournament at 2560x1600 and be lucky to use 256 MB of graphics memory. It comes down to texture resolution rather than screen resolution; the rough guideline "the larger the resolution, the larger the memory usage" really applies to texture resolution.
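
    As a rough sketch of that point (all numbers below are made up for illustration, not taken from any real game): the framebuffer itself is small even at high screen resolutions, while the texture set dominates.

    ```python
    # Illustrative back-of-envelope: framebuffer cost at a screen resolution
    # versus the cost of loaded textures. Counts and sizes are hypothetical.

    def framebuffer_mib(width, height, bytes_per_pixel=4, buffers=3):
        """Front + back + depth buffer, 32 bits per pixel each."""
        return width * height * bytes_per_pixel * buffers / 2**20

    def texture_mib(size, count, bytes_per_texel=4):
        """`count` square textures of dimension size x size."""
        return size * size * bytes_per_texel * count / 2**20

    print(framebuffer_mib(2560, 1600))   # ~46.9 MiB even at 2560x1600
    print(texture_mib(2048, 200))        # 3200 MiB for 200 high-res textures
    print(texture_mib(256, 200))         # 50 MiB for 200 UT-era textures
    ```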

    The other generalisation is that the more RAM you have, the faster the graphics card will be. There are many graphics cards that have high volumes of RAM but extremely slow GPUs. For example, the GT 610 has 1024 MiB of RAM with an 810 MHz core and a 1620 MHz shader clock; this is an entry-level card suited for standard office use, with no (3D) gaming intended.

    So now on to the GT 640: 2048 MiB with a core clock of 797 MHz and a shader clock of 797 MHz. Which of the two will perform better?

    It would come down to the application; they both have weak areas.

    There is a massive leap from GDDR3 to GDDR5 in terms of performance, and there are multiple releases of the same graphics card with GDDR3 or GDDR5 and different memory sizes: 1024, 1536, 2048 and 3072 MiB. If you are looking at high-end graphics cards it's a similar story; it only comes down to whether you want to future-proof your rig.
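
    To see where that GDDR3-to-GDDR5 leap comes from, here is a hedged sketch of theoretical peak memory bandwidth; the clocks and the 128-bit bus below are illustrative values, not the specs of any particular card.

    ```python
    # Theoretical peak bandwidth = effective transfer rate x bus width in bytes.
    # GDDR3 transfers twice per memory clock; GDDR5 effectively four times.

    def bandwidth_gbs(memory_clock_mhz, transfers_per_clock, bus_width_bits):
        return memory_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

    print(bandwidth_gbs(900, 2, 128))    # GDDR3 at 900 MHz, 128-bit bus: 28.8 GB/s
    print(bandwidth_gbs(1250, 4, 128))   # GDDR5 at 1250 MHz, same bus:   80.0 GB/s
    ```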

    If you look to the near future, higher-resolution monitors will be readily available. Current graphics cards are limited to either 2560x1600, or 4096x2160 (4K) with the GTX 690, so if displays jump straight to 4K or 8K you would be wasting your money purchasing this current generation of graphics cards.

    But to each their own. I personally won't be buying this generation, as new consoles will be coming out, hopefully creating the option for PC gamers to push graphics further. Hopefully the Valve/Microsoft/Steam rumours are all untrue; judging by where Windows 8 is headed, I'll be turning to Linux as my gaming platform. Okay, getting off topic.

    2 GB of VRAM is currently enough to run current 1080p games. When those higher resolutions become mainstream, 2 GB cards won't be able to keep up, and you would need at least 4 GB as a minimum.
     
  4. Nov 14, 2012 #3

    All you need is a standard 1 GB of VRAM to play any modern game at 1080p; adding more is only useful at that screen resolution for increasing texture resolution. All games are not created equal, and increasing the textures can dramatically impact how much VRAM is required, anywhere up to 3.5 GB. There are a lot of different technical reasons for this, but suffice to say that if you want every game to look its best you need more VRAM, and developers are always trying to make new games look better than older ones. Right now, for high-end video cards like the GTX 680 or 7970, I'd recommend getting at least 2 GB.
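
    A small sketch of why cranking texture quality balloons VRAM use (the texture count of 150 is an assumption for illustration): doubling texture dimensions quadruples the cost, and a full mipmap chain adds roughly another third on top.

    ```python
    # Hypothetical texture budget: 150 unique textures, 32 bits per texel,
    # with a full mipmap chain (the 1 + 1/4 + 1/16 + ... geometric series
    # converges to 4/3 of the base size).

    def texture_budget_mib(size, count=150, bytes_per_texel=4, mipmaps=True):
        base = size * size * bytes_per_texel
        if mipmaps:
            base *= 4 / 3
        return base * count / 2**20

    for size in (512, 1024, 2048, 4096):
        print(f"{size}x{size}: {texture_budget_mib(size):.0f} MiB")
    ```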
     
  5. Nov 15, 2012 #4
    There are a few games, like the PC port of GTA 4, Metro 2033, Crysis 1 and 2, Skyrim, and some others, which take advantage of 2 GB of VRAM at 1920x1080. Third-party graphics mods for these games increase the load on the hardware even further.

    For example, it is impossible to fully max out GTA 4 unless you have at least 2 GB of VRAM. But then again, the PC version of GTA 4 is extremely poorly coded and optimized, which is why it runs so slowly even on high-end gaming rigs.

    2013 and 2014 games are going to make use of 2-4GB of VRAM. And in the near future, 2160p monitors are going to become standard.
     
  6. Nov 15, 2012 #5
    Which is pretty much what I've been saying. The entire history of video games has been about how to push more pixels onto the screen, faster and cheaper. The latest 3D displays likewise require more pixels to be processed, and running Metro 2033 maxed out in 3D at just 1080p requires the most powerful desktop rigs in the world today. Between now and when the next-generation consoles come out we'll see a steady increase in the number of games that can take advantage of more VRAM, but it won't be until the new consoles arrive that high-end desktop video cards adopt 2+ GB of VRAM as the new minimum standard.
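
    A quick worked number behind the 3D point (the 60 fps target is an assumed figure, not a benchmark): stereo rendering doubles the pixels that have to be shaded every second.

    ```python
    # Pixels shaded per second at 1080p, mono versus stereo 3D.

    def pixels_per_second(width, height, fps, views=1):
        return width * height * fps * views

    mono = pixels_per_second(1920, 1080, 60)              # ~124 million px/s
    stereo = pixels_per_second(1920, 1080, 60, views=2)   # ~249 million px/s
    print(f"mono: {mono/1e6:.0f} Mpx/s, stereo: {stereo/1e6:.0f} Mpx/s")
    ```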
     
  7. Nov 15, 2012 #6
    I doubt that the new consoles will outperform today's fastest possible desktops.

    The PlayStation 4 is supposed to use a highly modified version of the PS3 "Cell" processor, and at best we might see graphics performance on par with the GTX 590.

    Putting hardware that is many times faster than the fastest gaming desktops into such a small console enclosure would cause the chips to melt and catch fire.
     
  8. Nov 16, 2012 #7
    I kind of doubt it too, but it's impossible to say, because consoles use stripped-down operating systems and games designed to run only on their hardware. They also use custom hardware, so you might as well be comparing apples and oranges that aren't even on the market yet.
     
  9. Nov 16, 2012 #8
    According to what I've googled about the new consoles, the PlayStation 4 is going to use a custom processor based on the AMD A8-3850 chip, with the GPU (Radeon HD 7670 series) integrated into the processor. All of the chips (CPU/GPU/RAM) are permanently soldered to the motherboard and cannot be removed.

    Not sure about the XBOX 720, but I'm guessing it will probably use similar hardware.

    http://www.ign.com/articles/2012/04/04/sources-detail-the-playstation-4s-processing-specs
     
  10. Nov 16, 2012 #9
    Speculation has it that all the next consoles will also use interposers, that is, a thin silicon base on which the APU is surrounded by RAM to reduce latencies and the number of inputs and outputs. The biggest bone of contention among developers is how much system RAM it will have, with Crytek insisting they will need at least 8 GB and others suggesting they'll be lucky to get half of that. Whether you'll have the option of adding more RAM later is unknown, and whether the discrete GPU will include AMD's hardware acceleration for partially resident textures is also unknown. In other words, all the big performance unknowns involve the raw bandwidth potential of the system, which could make a huge difference.
     
  12. Nov 16, 2012 #11
    It isn't one or the other, because an APU can be CrossFired with a discrete GPU asynchronously. The APU is particularly good for GPU-compute functions, physics, AI, and ambient occlusion, but the idea is that exactly what it is used for can change from one moment to the next according to the demands of the game.
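
    A toy sketch of that idea, purely conceptual: real scheduling happens in the driver and runtime, and the task names here are made up. It just shows work items being dispatched to whichever unit suits them at the moment.

    ```python
    # Conceptual sketch: dispatching heterogeneous per-frame work between
    # an APU and a discrete GPU based on what each is better at.

    from concurrent.futures import ThreadPoolExecutor

    def run_on_apu(task):
        return f"APU handled {task}"

    def run_on_gpu(task):
        return f"GPU handled {task}"

    # Tasks the post calls APU-friendly, plus classic rendering work.
    frame_tasks = ["physics", "AI", "ambient occlusion", "geometry", "shading"]
    apu_friendly = {"physics", "AI", "ambient occlusion"}

    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_on_apu if t in apu_friendly else run_on_gpu, t)
                   for t in frame_tasks]
        for f in futures:
            print(f.result())
    ```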
     
  13. Nov 16, 2012 #12
    So the APU handles physics and AI while the discrete GPU renders the graphics?

    How come the APU doesn't work this way in PCs?
     
  14. Nov 17, 2012 #13

    They've been working their way up to it. These first APUs can only render graphics and can't do GPU-compute functions. Intel's plan is to integrate their Larrabee architecture for GPU-compute functions in 2014, and AMD has similar plans. There are a lot of technical reasons why they haven't done it yet, not least of all that they are working their way up to redesigning the entire PC architecture from the ground up to maximize bandwidth.

    Essentially, the slowest component in any PC for the last twenty years or so has been the motherboard itself. Signals can only be sent so fast from one part to another, and modern motherboards use complicated 16-phase power delivery and controllers to get around this limitation. At 2-5 GHz a signal can't even cross the board within a single clock cycle, so a CPU hooked directly up to the mobo would be broadcasting everything into space as microwaves before any signal could reach the next part of the board. Sending signals across the mobo using faster optical connections and whatnot is simply prohibitively expensive. The issue then has been how to squeeze more of the motherboard onto individual chips, and how to organize the different components so they can share information as efficiently as possible.
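
    The signal-timing point is easy to put numbers on; the ~0.6c trace velocity below is a typical assumption for copper PCB traces, not a measured figure.

    ```python
    # How far a signal can physically travel in one clock cycle.

    C = 3.0e8  # speed of light in vacuum, m/s

    def reach_per_cycle_cm(clock_ghz, velocity_factor=0.6):
        """Distance covered in one cycle, in centimetres."""
        return C * velocity_factor / (clock_ghz * 1e9) * 100

    for ghz in (2, 3, 5):
        print(f"{ghz} GHz: {reach_per_cycle_cm(ghz):.1f} cm per cycle")
    # A full-size ATX board is ~30 cm across, so at these clocks a signal
    # cannot reach the far side of the board inside a single cycle.
    ```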

    We're now approaching the next major revision in this process, with advances such as APUs, GPU-compute functions, chip stacking, and hardware-accelerated transactional memory. Optical interconnects are also making rapid progress, but what the next major revision will be is anyone's guess at this point.
     
    Last edited: Nov 17, 2012
  15. Nov 17, 2012 #14
    A dream... or maybe that was sarcastic? I'm pretty bad at picking up on sarcasm.
     
  16. Nov 17, 2012 #15
    Steam is porting to Linux, and Nvidia has just provided updated drivers. Now that we've got GPU-compute functions, and even OpenGL has caught up to DirectX, there's no reason Linux can't at least provide the next-generation operating system for cheap portable devices. Within five years you could see cheap gaming tablets capable of running Crysis. If they play their cards right and shoot for low-latency Linux systems made just for gaming, they could outdo Microsoft altogether and become the gaming platform of choice.
     
  17. Nov 17, 2012 #16
    So the APU is going to work in tandem with the discrete graphics card(s) to increase graphical performance? But the motherboard can't send signals between components through the buses fast enough?
     
  18. Nov 18, 2012 #17
    The mobo is like a series of copper pipes that can only move water so fast, and we have pumps and filters that can push the water through them so fast that they just spring leaks. You can add more pipes if you want, but eventually you have to redesign and rearrange the entire series of pumps, filters, and pipes to make the system as a whole work as efficiently and quickly as possible.

    The APU isn't just used for graphics but for whatever the programmers want it to do, and to some extent the computer will be able to decide for itself whether to use the APU or GPU for any given task. With all the pipes and water going in every direction at once, the system has to be able to decide for itself to some extent how to do things, or bottlenecks will occur frequently. That's the single biggest limitation with computers and programs as well: with millions of lines of code and gigabytes of data flying around, bottlenecks crop up at every turn.
     
  19. Nov 18, 2012 #18
    Are there ways to dramatically increase the bus speeds in motherboards?
     
  20. Nov 18, 2012 #19
    There are a number of ways, but the semiconductor industry is very brute-force oriented. Companies like Intel might have advanced technology they sit on for years while they figure out the cheapest way to implement it. That's just the way it goes when you're talking about companies with multiple factories costing billions of dollars each just to build. Money is what drives progress, because you can easily lose a fortune if you don't keep your eye on the bottom line.

    The general trend has been to eliminate the mobo altogether whenever possible. That's what the entire history of integrated circuits is all about: eliminating the mobo wherever possible and just putting everything on the chip. As a result, we now have cellphones with the power of what would have been a supercomputer 20 years ago. The latest trend is chip stacking, where you can stack chips side by side on the same piece of silicon and/or right on top of each other.
     
  21. Nov 18, 2012 #20
    With the physical limits of Moore's law fast approaching, what will chip makers like Intel and AMD do to further increase the performance and efficiency of their products without reducing the size of the transistors? 22 nm is pushing it as far as Moore's law goes.

    I'd like to see what AMD/Nvidia will do with 22 nm GPUs in the meantime. The next-gen HD 8xxx and GTX 7xx series are still going to be 32 nm from what I've read.
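
    For a sense of what a shrink buys (idealised first-order scaling; real process nodes gain less than this suggests), transistor density goes roughly as the inverse square of the feature size:

    ```python
    # Idealised density gain from a process shrink.

    def density_gain(old_nm, new_nm):
        return (old_nm / new_nm) ** 2

    print(f"32 nm -> 22 nm: {density_gain(32, 22):.1f}x the transistors per area")
    print(f"22 nm -> 14 nm: {density_gain(22, 14):.1f}x")
    ```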
     