GTX 680 vs GTX 980 performance difference?

SUMMARY

The discussion centers on the performance comparison between Nvidia GeForce GTX 680 and GTX 980 graphics cards, specifically in SLI configurations. The GTX 980, built on the Maxwell architecture, offers more than double the performance of the Kepler-based GTX 680 and carries a compute capability of 5.2 versus 3.0. Users raise concerns about potential CPU bottlenecks, particularly with an Intel Core i7 3770K, and whether a PCIe 3.0 x16 slot is necessary for optimal performance. Recommendations include considering GTX 970 SLI for better cost-effectiveness, as the performance difference between the GTX 970 and GTX 980 is only about 10%.

PREREQUISITES
  • Understanding of Nvidia GPU architectures: Kepler and Maxwell
  • Familiarity with compute capability metrics for GPUs
  • Knowledge of SLI configurations and their performance implications
  • Basic understanding of CPU and motherboard compatibility with GPUs
NEXT STEPS
  • Research Nvidia GTX 970 SLI performance benchmarks
  • Learn about PCIe 3.0 x16 specifications and benefits
  • Explore CUDA programming fundamentals and capabilities
  • Investigate the differences in cooling solutions among GTX 970 brands
USEFUL FOR

Gamers, PC builders, and hardware enthusiasts looking to optimize graphics performance and understand the implications of upgrading GPU configurations.

ElliotSmith
I have two Nvidia GeForce GTX 680s in SLI and was considering saving up some money to buy two GTX 980s and put them in SLI.

How much more performance should I expect to see?

Based on what I've read, the 980 is more than twice as fast as the Kepler-based 680. And would an Intel Core i7 3770K (Ivy Bridge) @ 4.0 GHz with 16 GB of RAM bottleneck those two GPUs?

Do I really need to upgrade my CPU and motherboard to the latest iteration of the Core i7 processor to handle the massive throughput of these monster GPUs? Is it absolutely imperative to have a PCIe 3.0 x16 slot to properly run these cards?
 
Just out of curiosity, what games are you trying to max out settings for?
 
ElliotSmith said:
I have two Nvidia GeForce GTX 680s in SLI and was considering saving up some money to buy two GTX 980s and put them in SLI.

How much more performance should I expect to see?

Based on what I've read, the 980 is more than twice as fast as the Kepler-based 680.
Nvidia publishes a "compute capability" measure for their GPUs (https://developer.nvidia.com/cuda-gpus). The GTX 680s you have are shown as having a 3.0 compute capability, which corresponds to the Kepler architecture. The 980 is shown as having a 5.2 compute capability, and uses, I'm pretty sure, the Maxwell architecture, the latest one they've produced.

Here are some of the per-multiprocessor specs for the Kepler architecture (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#architecture-3-0):
192 CUDA cores for arithmetic operations
32 special function units for single-precision floating-point transcendental functions
4 warp schedulers

For the Maxwell architecture:
Same as above except that each multiprocessor has 128 CUDA cores. Note that compute capability is a feature-set version rather than a speed rating; Maxwell's performance gains come mostly from a more efficient multiprocessor design rather than from much higher clock speeds.
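
If you're curious what your own cards report, here's a minimal sketch using the CUDA runtime API (assuming the CUDA toolkit is installed; compile with nvcc):

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // major.minor is the compute capability, e.g. 3.0 for a GTX 680
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
               i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}

In an SLI setup both 680s should show up as separate CUDA devices.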
ElliotSmith said:
And would an Intel Core i7 3770K (Ivy Bridge) @ 4.0 GHz with 16 GB of RAM bottleneck those two GPUs?
Don't know. However, if the games are well-written, most of the work should be done on the GPUs, with the CPU acting as host to start the ball rolling.
ElliotSmith said:
Do I really need to upgrade my CPU and motherboard to the latest iteration of the Core i7 processor to handle the massive throughput of these monster GPUs? Is it absolutely imperative to have a PCIe 3.0 x16 slot to properly run these cards?
You're running an Ivy Bridge 3770K, which is still a fairly modern CPU, so I wouldn't think so; I don't believe there is all that much difference between it and the latest Core i7s for keeping the GPUs fed.

BTW, I just bought a GeForce GTX 750, which has a compute capability of 5.0 and set me back only about $100. I'm not at all interested in gaming, but I am very much interested in getting involved with CUDA programming to write code that uses the parallel capabilities of the GPU.
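
For anyone else starting out, the canonical first CUDA program is a vector add with one thread per element. Here's a minimal sketch (error checking omitted for brevity; cudaMallocManaged needs compute capability 3.0 or higher):

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the example short; explicit cudaMemcpy works too.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // round up to cover all n elements
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expect 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

The <<<blocks, threads>>> launch syntax is what spreads the million additions across the GPU's cores.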
 
Greg Bernhardt said:
Just out of curiosity, what games are you trying to max out settings for?

All of the graphics-intensive games like Battlefield 4 and Battlefield Hardline, Crysis 3, and upcoming titles like Grand Theft Auto 5, Alien Isolation, Doom 4, etc.

Someone told me that GTX 970 SLI gives you the biggest bang for your buck and saves you about $400 compared to getting two GTX 980s.

There is only about a 10% performance difference between the 970 and the 980.
 
If you're doing CUDA, remember Nvidia cripples double-precision math on all but a few of its gaming cards.
 
vociferous said:
If you're doing CUDA, remember Nvidia cripples double-precision math on all but a few of its gaming cards.
This is where it's helpful to know the compute capability of the card in question. A double-precision floating point unit is available only on devices with compute capability of 1.3 or above. This page, https://developer.nvidia.com/cuda-gpus, lists the compute capabilities of the various NVIDIA GPUs.
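
If you want a program to check this at run time before committing to doubles, a minimal sketch:

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Double-precision hardware requires compute capability 1.3 or above.
    bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("%s: compute capability %d.%d -> double precision %s\n",
           prop.name, prop.major, prop.minor,
           hasDouble ? "supported" : "not supported");
    return 0;
}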
 
Mark44 said:
This is where it's helpful to know the compute capability of the card in question. A double-precision floating point unit is available only on devices with compute capability of 1.3 or above. This page, https://developer.nvidia.com/cuda-gpus, lists the compute capabilities of the various NVIDIA GPUs.

This page lists the specs:

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

You'll notice that Nvidia has so far crippled double-precision floating point math on all but three of its GPUs (GTX Titan, Titan Black, Titan Z) in order to stifle competition with its Tesla and Quadro series. Still, if you've priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is still quite a deal compared to the "professional" solution Nvidia is pushing for CUDA.
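
You can see the gap yourself with a crude timing sketch: the same chain of dependent fused multiply-adds run once in float and once in double (the absolute times are rough; the float-to-double ratio is what matters):

Code:
#include <cstdio>
#include <cuda_runtime.h>

// A serially dependent multiply-add chain keeps the arithmetic units busy.
template <typename T>
__global__ void fmaChain(T *out, int iters) {
    T x = (T)threadIdx.x * (T)1e-6;
    for (int i = 0; i < iters; ++i)
        x = x * (T)1.000001 + (T)0.000001;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;  // store so the loop isn't optimized away
}

template <typename T>
float timeKernel(T *out, int iters) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    fmaChain<T><<<256, 256>>>(out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int iters = 1 << 20;
    float *fout;
    double *dout;
    cudaMalloc(&fout, 256 * 256 * sizeof(float));
    cudaMalloc(&dout, 256 * 256 * sizeof(double));
    float fms = timeKernel(fout, iters);
    float dms = timeKernel(dout, iters);
    printf("float: %.1f ms, double: %.1f ms, ratio %.1fx\n", fms, dms, dms / fms);
    cudaFree(fout);
    cudaFree(dout);
    return 0;
}

On a gaming Kepler or Maxwell card the double version should come out many times slower; on a Titan or Tesla the gap is much smaller.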
 
vociferous said:
This page lists the specs:

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

You'll notice that Nvidia has so far crippled double-precision floating point math on all but three of its GPUs (GTX Titan, Titan Black, Titan Z) in order to stifle competition with its Tesla and Quadro series. Still, if you've priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is still quite a deal compared to the "professional" solution Nvidia is pushing for CUDA.
What I did was look for the GPUs with the highest compute capability, which turns out to be the Maxwell architecture, the most recent. I picked the GeForce GTX 750, which I got for about $100 US.

Again, my interest is getting up to speed in CUDA programming. I have no interest in gaming, other than to play Solitaire :D.
 
vociferous said:
This page lists the specs:

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

You'll notice that Nvidia has so far crippled double-precision floating point math on all but three of its GPUs (GTX Titan, Titan Black, Titan Z) in order to stifle competition with its Tesla and Quadro series. Still, if you've priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is still quite a deal compared to the "professional" solution Nvidia is pushing for CUDA.

No, I won't be doing any CUDA computing, just hardcore gaming.

I haven't decided which brand of GTX 970 I should buy. Someone told me that the EVGA ACX cooler has one of its heat pipes misaligned so that it doesn't make contact with the GPU. However, the ASUS STRIX and the MSI Twin Frozr are two other very good options.

Off-topic, but before I quit playing a few months ago (because of health concerns) I was ranked as one of the top 5 players in the world in the PC version of Battlefield 4. That's how much of a gaming enthusiast I am.
 
IMO, go for the MSI Twin Frozr. I've always been an ASUS fanboy, but the Twin Frozrs are hard to beat for the price/performance.
 
MSI is known for over-engineering their cards, especially their flagship brands.
 
I've been playing Wolfenstein: The New Order on medium/high with an old GeForce 560M and it's smooth. It's hard for me to imagine that two 980s are going to make enough difference to justify the price.
 
