More than 2GB of VRAM for 1080p gaming?

  • Thread starter Kutt
In summary, if you're playing games at 1920x1080 resolution or lower, 2GB of VRAM is more than sufficient. If you're playing games at higher resolutions, such as 2560x1600 or 3840x2160, then you will need more VRAM.
  • #1
Kutt
For 1920x1080 resolution, is it necessary to have more than 2GB of VRAM on your graphics card(s)?

Someone told me that the only time when 3-4GB of video memory enter the equation is if you're using more than one monitor and/or running games on 2560x1600 resolution. Other than that, 2GB is more than sufficient for 1080p gaming.

Can you confirm the above statement?
 
  • #2
Yes and no. It's very generalised to say you need 3-4 GB because of screen resolution; for example, I could play the original Unreal Tournament at 2560x1600 and be lucky to use 256 MB of graphics memory. It really comes down to texture resolution rather than screen resolution, so the rough guideline "the larger the resolution, the larger the memory usage" applies to texture resolution more than to the display.
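
To put rough numbers on that, here is a quick back-of-the-envelope Python sketch of what the framebuffer alone costs at different screen resolutions (assuming, purely for illustration, 32-bit colour, a front buffer, a back buffer and a 32-bit depth buffer; real figures vary by game and driver):

Code:
# Rough framebuffer footprint at different screen resolutions, assuming
# 32-bit colour, a front buffer, a back buffer and a 32-bit depth buffer.
# Texture data is deliberately left out.

def framebuffer_mib(width, height, bytes_per_pixel=4, buffers=3):
    """Framebuffer size in MiB for the given resolution."""
    return width * height * bytes_per_pixel * buffers / 1024 ** 2

for w, h in [(1920, 1080), (2560, 1600), (3840, 2160)]:
    print(f"{w}x{h}: ~{framebuffer_mib(w, h):.0f} MiB")

# 1920x1080: ~24 MiB, 2560x1600: ~47 MiB, 3840x2160: ~95 MiB --
# a tiny slice of a 2 GB card, which is why texture data rather than the
# display resolution dominates VRAM use.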

The other generalisation is that the more RAM you have, the faster the graphics card will be. There are many graphics cards that have large amounts of RAM but extremely slow GPUs; for example, the GTX 610 has 1024 MiB of RAM with an 810 MHz core and a 1620 MHz shader clock. It's an average entry-level card suited to standard office use, with no 3D gaming intended.

So now on to the GTX 640: 2048 MiB with a 797 MHz core and a 797 MHz shader clock. Which of the two will give better performance?

It would come down to the application; they both have weak areas.

There is a massive performance leap from GDDR3 to GDDR5, and there are multiple releases of the same graphics card with GDDR3 or GDDR5 and different memory sizes (1024, 1536, 2048 and 3072 MiB). If you are looking at high-end graphics cards it's a similar story; it only comes down to whether you want to future-proof your rig.

Looking to the near future, higher-resolution monitors will be readily available, while current graphics cards top out at either 2560x1600 or, in the case of the GTX 690, 4096x2160 (4K). So if you plan to jump straight to 4K or 8K, you would be wasting your money on the current generation of graphics cards.

But to each their own. I personally won't be buying this generation, since new consoles will be coming out and will hopefully push graphics options forward for PC gamers too. Hopefully the Valve/Microsoft/Steam rumours are all untrue; judging by how Windows 8 is headed, I'll be turning to Linux as my gaming platform. Okay, getting off topic.

2 GB of VRAM is currently enough to run today's 1080p games. When higher resolutions become mainstream, 2 GB cards won't be able to keep up and you would want at least 4 GB as a minimum.
 
  • #3
Kutt said:
For 1920x1080 resolution, is it necessary to have more than 2GB of VRAM on your graphics card(s)?

Someone told me that the only time when 3-4GB of video memory enter the equation is if you're using more than one monitor and/or running games on 2560x1600 resolution. Other than that, 2GB is more than sufficient for 1080p gaming.

Can you confirm the above statement?


All you need is the standard 1 GB of VRAM to play any modern game at 1080p; adding more at that screen resolution is only useful for increasing texture resolutions. Not all games are created equal, and raising the texture settings can dramatically increase how much VRAM is required, anywhere up to 3.5 GB. There are a lot of different technical reasons for this, but suffice it to say that if you want every game to look its best you need more VRAM, and developers are always trying to make new games look better than older ones. Right now, for high-end video cards like the GTX 680 or 7970, I'd recommend getting at least 2 GB.
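
As a rough illustration of the arithmetic behind those numbers (the texture count and sizes below are invented, and real games use compressed formats that shrink them considerably), here is a small sketch:

Code:
# Illustrative only: how a high-resolution texture pack inflates VRAM use.
# Textures are treated as uncompressed 32-bit RGBA, and a full mipmap chain
# adds roughly one third on top of the base size.

def texture_mib(size_px, bytes_per_texel=4, mipmapped=True):
    base = size_px * size_px * bytes_per_texel
    return base * (4 / 3 if mipmapped else 1) / 1024 ** 2

SCENE_TEXTURES = 150                          # hypothetical number of unique textures
stock  = SCENE_TEXTURES * texture_mib(1024)   # 1024x1024 stock textures
hd_mod = SCENE_TEXTURES * texture_mib(2048)   # 2048x2048 replacement pack

print(f"stock textures: ~{stock:.0f} MiB")    # ~800 MiB
print(f"HD texture mod: ~{hd_mod:.0f} MiB")   # ~3200 MiB at the same 1080p output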
 
  • #4
wuliheron said:
All you need is the standard 1 GB of VRAM to play any modern game at 1080p; adding more at that screen resolution is only useful for increasing texture resolutions. Not all games are created equal, and raising the texture settings can dramatically increase how much VRAM is required, anywhere up to 3.5 GB. There are a lot of different technical reasons for this, but suffice it to say that if you want every game to look its best you need more VRAM, and developers are always trying to make new games look better than older ones. Right now, for high-end video cards like the GTX 680 or 7970, I'd recommend getting at least 2 GB.

There are a few games, like the PC port of GTA 4, Metro 2033, Crysis 1 and 2, Skyrim, and some others, that take advantage of 2 GB of VRAM at 1920x1080. Third-party graphics mods for these games put an even greater load on the hardware.

For example, it is impossible to fully max out GTA 4 unless you have at least 2 GB of VRAM. But then again, the PC version of GTA 4 is extremely poorly coded and optimized, which is why it runs so slowly even on high-end gaming rigs.

2013 and 2014 games are going to make use of 2-4 GB of VRAM, and in the near future 2160p monitors are going to become standard.
 
  • #5
Kutt said:
There are a few games, like the PC port of GTA 4, Metro 2033, Crysis 1 and 2, Skyrim, and some others, that take advantage of 2 GB of VRAM at 1920x1080. Third-party graphics mods for these games put an even greater load on the hardware.

For example, it is impossible to fully max out GTA 4 unless you have at least 2 GB of VRAM. But then again, the PC version of GTA 4 is extremely poorly coded and optimized, which is why it runs so slowly even on high-end gaming rigs.

2013 and 2014 games are going to make use of 2-4 GB of VRAM, and in the near future 2160p monitors are going to become standard.

Which is pretty much what I've been saying. The entire history of video games has been about pushing more pixels onto the screen faster and more cheaply. The latest 3D displays also require more pixels to be processed, and running Metro 2033 maxed out in 3D, even just at 1080p, requires the most powerful desktop rigs in the world today. Between now and when the next-generation consoles come out we'll see a steady increase in the number of games that can take advantage of more VRAM, but it won't be until the new consoles arrive that high-end desktop video cards adopt 2+ GB of VRAM as the new minimum standard.
 
  • #6
wuliheron said:
Which is pretty much what I've been saying. The entire history of video games has been about pushing more pixels onto the screen faster and more cheaply. The latest 3D displays also require more pixels to be processed, and running Metro 2033 maxed out in 3D, even just at 1080p, requires the most powerful desktop rigs in the world today. Between now and when the next-generation consoles come out we'll see a steady increase in the number of games that can take advantage of more VRAM, but it won't be until the new consoles arrive that high-end desktop video cards adopt 2+ GB of VRAM as the new minimum standard.

I doubt that the new consoles will outperform today's fastest possible desktops.

The PlayStation 4 is supposed to use a highly modified version of the PS3 "Cell" processor, and at best we might see graphics performance on par with the GTX 590.

Putting hardware that is many times faster than the fastest gaming desktops into such a small console enclosure would cause the chips to melt and catch fire.
 
  • #7
Kutt said:
I doubt that the new consoles will outperform today's fastest possible desktops.

The PlayStation 4 is supposed to use a highly modified version of the PS3 "Cell" processor, and at best we might see graphics performance on par with the GTX 590.

Putting hardware that is many times faster than the fastest gaming desktops into such a small console enclosure would cause the chips to melt and catch fire.

I kinda doubt it too, but it's impossible to say, because consoles use stripped-down operating systems and games designed to run only on their hardware. They also use custom hardware, so you might as well be comparing apples and oranges that aren't even on the market yet.
 
  • #8
According to what I've googled about the new consoles, the PlayStation 4 is going to use a custom processor based on the AMD A8-3850, with a Radeon HD 7670-series GPU integrated into the chip. All of the chips (CPU/GPU/RAM) are permanently soldered onto the console's motherboard and cannot be removed.

Not sure about the Xbox 720, but I'm guessing it will probably use similar hardware.

http://www.ign.com/articles/2012/04/04/sources-detail-the-playstation-4s-processing-specs
 
  • #9
Kutt said:
According to what I've googled about the new consoles, the PlayStation 4 is going to use a custom processor based on the AMD A8-3850, with a Radeon HD 7670-series GPU integrated into the chip. All of the chips (CPU/GPU/RAM) are permanently soldered onto the console's motherboard and cannot be removed.

Not sure about the Xbox 720, but I'm guessing it will probably use similar hardware.

http://www.ign.com/articles/2012/04/04/sources-detail-the-playstation-4s-processing-specs

Speculation has it that all the next consoles will also use interposers, that is, a thin silicon base on which the APU is surrounded by RAM to reduce latencies and the number of inputs and outputs. The biggest bone of contention among developers is how much system RAM they will have, with Crytek insisting they will need at least 8 GB and others suggesting they'll be lucky to get half of that. Whether you'll have the option of adding more RAM later is unknown, as is whether the discrete GPU will include AMD's hardware acceleration for partially resident textures. In other words, all the big unknowns for performance involve the raw bandwidth potential of the system, which could make a huge difference.
 
  • #10
Ah, the Xbox 720 is going to use one of AMD's APU processors as its GPU.

http://www.conceivablytech.com/4414/products/xbox-720-to-get-amd-fusion-processor
 
  • #11
Kutt said:
Ah, the Xbox 720 is going to use one of AMD's APU processors as its GPU.

http://www.conceivablytech.com/4414/products/xbox-720-to-get-amd-fusion-processor

It isn't one or the other, because an APU can be crossfired with a discrete GPU asynchronously. The APU is particularly good for GPU compute work such as physics, AI, and ambient occlusion, but the idea is that exactly what it is used for can change from one moment to the next according to the demands of the game.
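
Purely as an illustration of that idea (the task names, costs, and the greedy split below are invented for this sketch; real engines and drivers schedule work very differently), a toy per-frame scheduler might look like this:

Code:
# Toy illustration of "what the APU does can change from moment to moment".
# The heaviest work stays on the discrete GPU until its per-frame budget is
# used up; everything else spills over to the APU.

FRAME_TASKS = {"rendering": 9, "ambient_occlusion": 4, "physics": 3, "ai": 2}

def split_work(tasks, gpu_budget=10):
    gpu, apu, used = [], [], 0
    for name, cost in sorted(tasks.items(), key=lambda t: -t[1]):
        if used + cost <= gpu_budget:
            gpu.append(name)
            used += cost
        else:
            apu.append(name)
    return gpu, apu

gpu, apu = split_work(FRAME_TASKS)
print("discrete GPU:", gpu)   # ['rendering']
print("APU:", apu)            # ['ambient_occlusion', 'physics', 'ai']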
 
  • #12
wuliheron said:
It isn't one or the other, because an APU can be crossfired with a discrete GPU asynchronously. The APU is particularly good for GPU compute work such as physics, AI, and ambient occlusion, but the idea is that exactly what it is used for can change from one moment to the next according to the demands of the game.

So the APU renders physics and AI while the discrete GPU renders the graphics?

How come the APU doesn't work this way in PCs?
 
  • #13
Kutt said:
So the APU renders physics and AI while the discrete GPU renders the graphics?

How come the APU doesn't work this way in PCs?
They've been working their way up to it. These first APUs can only render graphics and can't do GPU compute functions. Intel's plan is to integrate its Larrabee architecture for GPU compute in 2014, and AMD has similar plans. There are a lot of technical reasons why they haven't done it yet, not least that they are working their way up to redesigning the entire PC architecture from the ground up to maximize bandwidth.

Essentially, the slowest component in any PC for the last twenty years or so has been the motherboard itself. Signals can only be sent so fast from one part to another, and modern motherboards use complicated 16-phase power delivery and controllers to work around this limitation. At 2-5 GHz, if you hooked a CPU directly up to the mobo it would broadcast everything into space as microwaves before the signal could reach the next part of the board. Sending signals across the mobo with faster optical connections and the like is simply prohibitively expensive. The issue, then, has been how to squeeze more of the motherboard onto individual chips and how to organize the different components so they can share information as efficiently as possible.
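
To put a number on that, here is a rough calculation of how far a signal can travel per clock cycle, assuming (as a rule of thumb, not a measured figure) that it propagates along a copper trace at about half the speed of light:

Code:
# How far can a signal travel in one clock cycle?

C = 3.0e8              # speed of light in vacuum, m/s
TRACE_SPEED = 0.5 * C  # assumed propagation speed along a PCB trace

for ghz in (2, 3, 5):
    cycle_s = 1 / (ghz * 1e9)
    print(f"{ghz} GHz: ~{TRACE_SPEED * cycle_s * 100:.1f} cm per cycle")

# 2 GHz: ~7.5 cm, 3 GHz: ~5.0 cm, 5 GHz: ~3.0 cm -- comparable to the spacing
# between parts on an ATX board, which is why multi-GHz signalling across the
# whole motherboard is so hard.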

We're now approaching the next major revision in this process with advances such as APUs, GPU compute functions, chip stacking, and hardware-accelerated transactional memory. Optical interconnects are also making rapid progress, but what the next major revision will look like is anyone's guess at this point.
 
  • #14
r4z0r84 said:
... I'll be turning to Linux as my gaming platform ...

A dream... or maybe that was sarcasm? I'm pretty bad at picking up sarcasm.
 
  • #15
mishrashubham said:
A dream... or maybe that was sarcasm? I'm pretty bad at picking up sarcasm.

Steam is porting to Linux, and Nvidia has just provided updated drivers. Now that we've got GPU compute functions and even OpenGL has caught up to DirectX, there's no reason Linux can't at least provide the next-generation operating system for cheap portable devices. Within five years you could see cheap gaming tablets capable of running Crysis. If they play their cards right and shoot for low-latency Linux systems built just for gaming, they could outdo Microsoft altogether and become the gaming platform of choice.
 
  • #16
So the APU is going to work in tandem with the discrete graphics card(s) to increase graphical performance? But the motherboard can't send signals between components through the buses fast enough?
 
  • #17
Kutt said:
So the APU is going to work in tandem with the discrete graphics card(s) to increase graphical performance? But the motherboard can't send signals between components through the buses fast enough?

The mobo is like a series of copper pipes that can only move water so fast, and we now have pumps and filters that can push water through them so hard that they spring leaks. You can add more pipes if you want, but eventually you have to redesign and rearrange the entire series of pumps, filters, and pipes to make the system as a whole work as efficiently and quickly as possible.

The APU isn't just used for graphics but for whatever the programmers want it to do, and to some extent the computer will be able to decide for itself whether to use the APU or the discrete GPU for any given task. With all the pipes and water going in every direction at once, the system has to be able to decide for itself, to some extent, how to do things, or bottlenecks will occur frequently. That's the single biggest limitation with computers and programs: with millions of lines of code and gigabytes of data flying around, bottlenecks crop up at every turn.
 
  • #18
wuliheron said:
The mobo is like a series of copper pipes that can only move water so fast, and we now have pumps and filters that can push water through them so hard that they spring leaks. You can add more pipes if you want, but eventually you have to redesign and rearrange the entire series of pumps, filters, and pipes to make the system as a whole work as efficiently and quickly as possible.

The APU isn't just used for graphics but for whatever the programmers want it to do, and to some extent the computer will be able to decide for itself whether to use the APU or the discrete GPU for any given task. With all the pipes and water going in every direction at once, the system has to be able to decide for itself, to some extent, how to do things, or bottlenecks will occur frequently. That's the single biggest limitation with computers and programs: with millions of lines of code and gigabytes of data flying around, bottlenecks crop up at every turn.

Are there ways to dramatically increase the bus speeds in motherboards?
 
  • #19
Kutt said:
Are there ways to dramatically increase the bus speeds in motherboards?

There are a number of ways, but the semiconductor industry is very brute-force oriented. Companies like Intel might sit on advanced technology for years while they figure out the cheapest way to implement it. That's just the way it goes when you're talking about companies whose factories cost billions of dollars each just to build. Money is what drives progress, because you can easily lose a fortune if you don't keep your eye on the bottom line.

The general trend has been to eliminate the mobo altogether whenever possible. The entire history of integrated circuits is about taking things off the motherboard and putting them on the chip. As a result, we now have cellphones with the power of what would have been a supercomputer 20 years ago. The latest trend is chip stacking, where you can place chips side by side on the same piece of silicon and/or right on top of each other.
 
  • #20
wuliheron said:
There are a number of ways, but the semiconductor industry is very brute-force oriented. Companies like Intel might sit on advanced technology for years while they figure out the cheapest way to implement it. That's just the way it goes when you're talking about companies whose factories cost billions of dollars each just to build. Money is what drives progress, because you can easily lose a fortune if you don't keep your eye on the bottom line.

The general trend has been to eliminate the mobo altogether whenever possible. The entire history of integrated circuits is about taking things off the motherboard and putting them on the chip. As a result, we now have cellphones with the power of what would have been a supercomputer 20 years ago. The latest trend is chip stacking, where you can place chips side by side on the same piece of silicon and/or right on top of each other.

With the physical limits of Moore's law fast approaching, what will chip makers like Intel and AMD do to further increase the performance and efficiency of their products without shrinking the transistors? 22 nm is pushing it as far as Moore's law goes.

I'd like to see what AMD/Nvidia will do with 22 nm GPUs in the meantime. The next-gen HD 8xxx and GTX 7xx series are still going to be 32 nm, from what I've read.
 
  • #21
Kutt said:
With the physical limits of Moore's law fast approaching, what will chip makers like Intel and AMD do to further increase the performance and efficiency of their products without shrinking the transistors? 22 nm is pushing it as far as Moore's law goes.

I'd like to see what AMD/Nvidia will do with 22 nm GPUs in the meantime. The next-gen HD 8xxx and GTX 7xx series are still going to be 32 nm, from what I've read.

Around 1 nm is the molecular scale, and there are carbon nanotubes that function extremely well at those sizes; however, Moore's law has already given way to Koomey's law. Devices are getting so small that energy efficiency and reducing waste heat have become the bigger issues. With advances like chip stacking and 3D circuitry you could theoretically stack a hundred chips right on top of each other and fit the equivalent of a basketball-court-sized supercomputer in a walnut, but only if you can overcome physical limitations such as waste heat. Spintronics and quantum computing offer the best energy efficiency known to date, with quantum computers theoretically even capable of absorbing their own excess heat and using it to power themselves, and of being compatible with spintronics.
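
For reference, Koomey's observation is that computations per joule of energy have doubled roughly every 1.57 years. A quick extrapolation of that trend (purely illustrative, not a prediction) looks like this:

Code:
# Koomey's law, as commonly quoted: computations per joule double
# roughly every 1.57 years.

DOUBLING_YEARS = 1.57

def efficiency_gain(years):
    """Factor by which computations-per-joule grows over `years`."""
    return 2 ** (years / DOUBLING_YEARS)

for years in (5, 10, 20):
    print(f"after {years} years: ~{efficiency_gain(years):,.0f}x more work per joule")

# roughly 9x after 5 years, 83x after 10, and ~6,800x after 20 -- if the trend held.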

Another trend is towards reconfigurable computing. Instead of shrinking parts or packing more chips closer together, these designs use fewer parts to do the same amount of work. For example, IBM's neuromorphic 4 nm memristor chip can literally turn a transistor into memory and vice versa on the fly as needed. Their goal within ten years is to put nothing less than the equivalent of the neurons of a cat or human brain on a single chip, which gives you some idea of how compact such circuitry can be.
 
  • #22
wuliheron said:
Around 1 nm is the molecular scale, and there are carbon nanotubes that function extremely well at those sizes; however, Moore's law has already given way to Koomey's law. Devices are getting so small that energy efficiency and reducing waste heat have become the bigger issues. With advances like chip stacking and 3D circuitry you could theoretically stack a hundred chips right on top of each other and fit the equivalent of a basketball-court-sized supercomputer in a walnut, but only if you can overcome physical limitations such as waste heat. Spintronics and quantum computing offer the best energy efficiency known to date, with quantum computers theoretically even capable of absorbing their own excess heat and using it to power themselves, and of being compatible with spintronics.

Another trend is towards reconfigurable computing. Instead of shrinking parts or packing more chips closer together, these designs use fewer parts to do the same amount of work. For example, IBM's neuromorphic 4 nm memristor chip can literally turn a transistor into memory and vice versa on the fly as needed. Their goal within ten years is to put nothing less than the equivalent of the neurons of a cat or human brain on a single chip, which gives you some idea of how compact such circuitry can be.

These chips sound incomprehensibly complex, and even more difficult to write programs for. The human brain is the most complex known object in the universe; I can't imagine a computer chip exceeding that kind of complexity.

I remember reading something about "optical transistors", which use tiny bursts of light (photons) instead of electrons to switch on and off. Theoretically, these chips could run at terahertz speeds and produce very little heat.
 
  • #23
Kutt said:
These chips sound incomprehensibly complex, and even more difficult to write programs for. The human brain is the most complex known object in the universe; I can't imagine a computer chip exceeding that kind of complexity.

I remember reading something about "optical transistors", which use tiny bursts of light (photons) instead of electrons to switch on and off. Theoretically, these chips could run at terahertz speeds and produce very little heat.

"A foolish consistency is the hobgoblin of little minds." - Ralph Waldo Emerson

The complexity of modern technology would have been considered unthinkable just a few centuries ago. There are already attempts to create the first super-von Neumann architectures, as well as the neuromorphic IBM chip I mentioned. These are disruptive technologies with potentials nobody can predict. A full scale quantum computer of 128 qubits could shake the very foundations of the sciences themselves. All I can say is be prepared to be amazed because this roller coaster ride only gets faster and more interesting from this point on.
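
To give a sense of the scale behind "128 qubits": simulating the full state of n qubits on a classical machine requires tracking 2^n complex amplitudes, so the memory needed explodes fast. Assuming (for illustration) 16 bytes per amplitude:

Code:
# Classical memory needed to hold the full state vector of n qubits.

def classical_state_bytes(n_qubits, bytes_per_amplitude=16):
    return 2 ** n_qubits * bytes_per_amplitude

for n in (30, 50, 128):
    print(f"{n} qubits: 2**{n} amplitudes, ~{classical_state_bytes(n):.3e} bytes")

# 30 qubits already needs ~17 GB, 50 qubits ~18 PB, and 128 qubits ~5.4e39
# bytes -- far more storage than exists on Earth.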
 
  • #24
wuliheron said:
These are disruptive technologies with potentials nobody can predict. A full scale quantum computer of 128 qubits could shake the very foundations of the sciences themselves. All I can say is be prepared to be amazed because this roller coaster ride only gets faster and more interesting from this point on.

Don't worry about what to do with all that computing power. Somebody will invent an even more bloated user interface that needs 128 qubits just to display a 3D holographic desktop ... :devil:
 
  • #25
AlephZero said:
Don't worry about what to do with all that computing power. Somebody will invent an even more bloated user interface that needs 128 qubits just to display a 3D holographic desktop ... :devil:

LOL, it's the drivers I'm worried about.
 
  • #26
wuliheron said:
LOL, it's the drivers I'm worried about.

Yeah, as if drivers weren't already frustratingly hard enough to program. :rofl:
 
  • #27
AlephZero said:
Don't worry about what to do with all that computing power. Somebody will invent an even more bloated user interface that needs 128 qubits just to display a 3D holographic desktop ... :devil:

Compiz 4D lol
 

1. What is VRAM and why is it important for gaming?

VRAM (Video Random Access Memory) is a type of memory used to store graphics data on a graphics card. It is important for gaming because it allows the graphics card to quickly access and process large amounts of data, resulting in smoother and more detailed graphics.

2. How much VRAM do I need for 1080p gaming?

For 1080p gaming, 2GB of VRAM is typically enough to run most games at high settings. However, newer and more demanding games may require more VRAM for optimal performance.

3. What happens if I have more than 2GB of VRAM for 1080p gaming?

If you have more than 2GB of VRAM for 1080p gaming, it means your graphics card has more memory available to store and process graphics data. This can result in better performance and the ability to run games at even higher settings.

4. Can I use more than 2GB of VRAM for 1080p gaming on a budget graphics card?

It is possible to use more than 2GB of VRAM for 1080p gaming on a budget graphics card, but it may not result in significant performance improvements. This is because budget graphics cards may not have the processing power to fully utilize the additional VRAM.

5. Is it worth investing in a graphics card with more than 2GB of VRAM for 1080p gaming?

It depends on your gaming needs and budget. If you primarily play newer and more demanding games, investing in a graphics card with more than 2GB of VRAM can result in better performance. However, if you mostly play older or less graphically intensive games, 2GB of VRAM may be sufficient.
