GTX 680 vs GTX 980 performance difference?

  1. Sep 26, 2014 #1
    I have two Nvidia GeForce GTX 680's in SLI and was considering saving up some money to buy two GTX 980's and put them in SLI.

    How much more performance should I expect to see?

    Based on what I've read, the 980 is more than twice as fast as the Kepler-based 680. And would an Intel Core i7 3770K (Ivy Bridge) @ 4.0 GHz with 16 GB of RAM bottleneck those two GPUs?

    Do I really need to upgrade my CPU and motherboard to the latest iteration of the Core i7 processor to handle the massive throughput of these monster GPUs? Is it absolutely imperative to have PCIe 3.0 x16 to properly run these cards?
     
  3. Sep 28, 2014 #2
    Just out of curiosity, what games are you trying to max out settings for?
     
  4. Sep 29, 2014 #3

    Mark44

    Staff: Mentor

    NVIDIA publishes a "compute capability" measure for its GPUs (https://developer.nvidia.com/cuda-gpus). The GTX 680's you have are listed with a 3.0 compute capability, which corresponds to the Kepler architecture. The 980 is listed with a 5.2 compute capability and uses, I'm pretty sure, the Maxwell architecture, the latest one they've produced.

    Here are some of the specs for the Kepler (compute capability 3.0) architecture (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#architecture-3-0). Per streaming multiprocessor (SMX):
    192 CUDA cores for arithmetic operations
    32 special function units for single-precision floating-point transcendental functions
    4 warp schedulers

    For the Maxwell architecture:
    Same as above, except that each multiprocessor has 128 CUDA cores. Keep in mind that the compute capability number is a feature-set version rather than a speed rating, so the jump from 3.0 to 5.2 reflects the newer architecture rather than higher clock speeds.
    As for whether the CPU will bottleneck the GPUs: I don't know. However, if the games are well written, most of the work should be done on the GPUs, with the CPU acting as host to start the ball rolling.
    As for upgrading the CPU and motherboard: with a fairly modern CPU such as your Ivy Bridge i7 (or even a Sandy Bridge), I wouldn't think so, as I don't believe there is all that much difference between those generations and the latest one.

    BTW, I just bought a GeForce GTX 750, which has a compute capability of 5.0 and set me back only about $100. I'm not at all interested in gaming, but I am very much interested in getting involved with CUDA programming, to write code that uses the parallel capabilities of the GPU.
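
    In case it's useful, here is a minimal device-query sketch (just my own illustration, using the standard cudaGetDeviceProperties call from the CUDA runtime) that prints the compute capability and multiprocessor count of each GPU in the system:

    Code:
    // Minimal CUDA device-query sketch: prints the compute capability and
    // multiprocessor count of each GPU in the system.
    // Build with: nvcc -o devquery devquery.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::printf("No CUDA-capable devices found.\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("Device %d: %s\n", i, prop.name);
            std::printf("  Compute capability : %d.%d\n", prop.major, prop.minor);
            std::printf("  Multiprocessors    : %d\n", prop.multiProcessorCount);
            std::printf("  Core clock         : %.0f MHz\n", prop.clockRate / 1000.0);
            std::printf("  Global memory      : %.0f MB\n",
                        prop.totalGlobalMem / (1024.0 * 1024.0));
        }
        return 0;
    }

    On my GTX 750 this reports compute capability 5.0; your 680's should report 3.0.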
     
  5. Sep 30, 2014 #4
    All of the graphics-intensive games like Battlefield 4 and Battlefield Hardline, Crysis 3, and future titles like Grand Theft Auto 5, Alien Isolation, Doom 4, etc...

    Someone told me that GTX 970's in SLI give you the biggest bang for your buck and save you $400 compared to getting two GTX 980's.

    There is only about a 10% performance difference between the 970 and the 980.
     
  6. Oct 4, 2014 #5
    If you're doing CUDA, remember Nvidia cripples double-precision math on all but a few of its gaming cards.
     
  7. Oct 5, 2014 #6

    Mark44

    Staff: Mentor

    This is where it's helpful to know the compute capability of the card in question. A double-precision floating point unit is available only on devices with compute capability of 1.3 or above. This page, https://developer.nvidia.com/cuda-gpus, lists the compute capabilities of the various NVIDIA GPUs.
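
    If you'd rather check from code than look it up in that table, here is a minimal sketch (again, just an illustration; it reads the major/minor fields returned by cudaGetDeviceProperties and applies the 1.3 rule above):

    Code:
    // Does device 0 have hardware double precision?
    // Per the CUDA programming guide, that requires compute capability 1.3 or above.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        bool has_fp64 = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        std::printf("%s: compute capability %d.%d -> double precision %s\n",
                    prop.name, prop.major, prop.minor,
                    has_fp64 ? "supported" : "not supported");
        return 0;
    }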
     
  8. Oct 5, 2014 #7
    This page lists the specs:

    http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

    You'll notice that Nvidia has so far crippled double-precision floating-point math on all but three of its consumer GPUs (the GTX Titan, Titan Black, and Titan Z), in order to keep them from competing with its Tesla and Quadro lines. Still, if you have priced a comparable Tesla, you'll understand that even the dramatically overpriced Titan series is quite a deal next to the "professional" solution Nvidia is pushing for CUDA.
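
    If you want to see the effect for yourself, here is a rough microbenchmark sketch (the fma_loop kernel and time_kernel helper are names I made up for illustration; the timing uses the standard CUDA event API). It times a long chain of multiply-adds in float and then in double; on a GeForce card the double run should come out far more than 2x slower, which is the crippling in action:

    Code:
    // Rough FP32 vs FP64 throughput comparison -- a sketch, not a rigorous benchmark.
    // Build with: nvcc -O2 -o fp64test fp64test.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void fma_loop(T *out, int iters) {
        T a = (T)1.0000001, b = (T)0.9999999, c = (T)0.0;
        for (int i = 0; i < iters; ++i) {
            c = a * c + b;  // one dependent multiply-add per iteration
        }
        // Store the result so the compiler cannot optimize the loop away.
        out[blockIdx.x * blockDim.x + threadIdx.x] = c;
    }

    template <typename T>
    float time_kernel(int blocks, int threads, int iters) {
        T *out;
        cudaMalloc(&out, (size_t)blocks * threads * sizeof(T));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        fma_loop<T><<<blocks, threads>>>(out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(out);
        return ms;
    }

    int main() {
        const int blocks = 256, threads = 256, iters = 1 << 17;
        time_kernel<float>(blocks, threads, iters);           // warm-up, not timed
        float ms32 = time_kernel<float>(blocks, threads, iters);
        float ms64 = time_kernel<double>(blocks, threads, iters);
        std::printf("float  : %8.2f ms\n", ms32);
        std::printf("double : %8.2f ms\n", ms64);
        std::printf("double/float slowdown: %.1fx\n", ms64 / ms32);
        return 0;
    }

    On a Titan or a Tesla you would expect a much smaller slowdown, since those parts keep far more of their FP64 units enabled.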
     
  9. Oct 5, 2014 #8

    Mark44

    Staff: Mentor

    What I did was look for the GPUs with the highest compute capability, which turns out to be the Maxwell architecture, the most recent. I picked the GeForce GTX 750, which I got for about $100 US.

    Again, my interest is getting up to speed in CUDA programming. I have no interest in gaming, other than to play Solitaire :D.
     
  10. Oct 6, 2014 #9
    No, I won't be doing any CUDA computing, just hardcore gaming.

    I haven't decided which brand of GTX 970 I should buy. Someone told me that one of the heat pipes on the EVGA ACX cooler is misaligned and does not make contact with the GPU die. The ASUS STRIX and MSI Twin Frozr are also two very good options.

    Off-topic, but before I quit playing a few months ago (because of health concerns), I was ranked as one of the top 5 players in the world for the PC version of Battlefield 4. That's how much of a gaming enthusiast I am.
     
  11. Nov 14, 2014 #10
    IMO, go for the MSI Twin Frozr. I've always been an ASUS fanboy, but the Twin Frozrs are hard to beat for price/performance.
     
  12. Nov 15, 2014 #11
    MSI is known for over-engineering its cards, especially its flagship lines.
     
  13. Nov 15, 2014 #12
    I've been playing Wolfenstein: The New Order on medium/high with an old GeForce 560M and it's smooth. It's hard for me to imagine that two 980s are going to make enough of a difference to justify the price.
     