
A workstation for physicists, and programming on GPUs

  1. Aug 1, 2017 #1
    Probably everyone is aware of the new CPUs by Intel and AMD, which will have many cores. The Threadripper will launch on August 10 with up to 16 cores. I am planning to build a workstation; the idea is to write parallel code, so the more cores, the better. I also want to work with CUDA or OpenCL, i.e. to program GPUs in parallel.

    AMD has released a new GPU called Vega, which is aimed at scientists (ideal for people like me, I suppose). However, I have never programmed a GPU, and I want to learn a little more about this. In particular, I have heard that GPUs work in single precision. Does anyone know if this is true of GPUs in general? Are the professional GPUs by Nvidia also limited to single precision? What about Vega?

    What are the pros and cons of CUDA vs OpenCL? Which should I choose?

    I would also like to use this topic to ask which would be the ideal desktop computer as a workstation for physicists. Which CPU: AMD or Intel? Threadripper? Which motherboard? Which RAM, and how much of it? What about the hard disk, or an SSD? Which GPU?

    All opinions are welcome.

    Thanks in advance.
  3. Aug 2, 2017 #2



  4. Aug 2, 2017 #3

    Vanadium 50


    Then I wouldn't run out and buy an expensive card. I'd buy a $40 card, stick it in whatever PC you are using now, and learn to program one first.
  5. Aug 2, 2017 #4
    There may be a better solution. Most universities and research institutes have some kind of computing cluster that includes GPUs, so that may be enough for testing.
    Not all cards are equally suitable. What Telemachus needs is apparently general-purpose GPU computing (GPGPU), and I think not every GPU is capable of that.

    I used CUDA, and I think there is a list somewhere of which Nvidia GPUs are supported.

    Unfortunately there is no single answer to the question of which computer is best for a physicist. The more power, the better, really. Many physicists run their expensive calculations remotely on supercomputers, or they share a workstation, because those machines never need to run all the time anyway.

    You mostly have to think about how demanding your task is, and how amenable it is to simple parallelization.
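    As an illustration of a task that is "amenable to simple parallelization", here is a minimal Python sketch (the function name and the toy workload are invented for illustration): each work item is independent, so it spreads across however many CPU cores you have — the same structure a GPU exploits at a much larger scale.

    ```python
    # Hedged sketch: an "embarrassingly parallel" workload scales almost
    # linearly with cores because no work item depends on any other.
    from concurrent.futures import ProcessPoolExecutor

    def simulate(seed):
        """Stand-in for an independent, CPU-bound physics computation."""
        x = seed
        for _ in range(1000):
            x = (1103515245 * x + 12345) % 2**31  # toy LCG busywork
        return x

    def run_parallel(seeds):
        # Each seed runs on its own core; there is no communication between
        # tasks, which is exactly the shape that maps well onto many cores
        # (or onto the thousands of lanes of a GPU).
        with ProcessPoolExecutor() as pool:
            return list(pool.map(simulate, seeds))

    if __name__ == "__main__":
        print(run_parallel(range(8)))
    ```

    A task with heavy communication between steps (e.g. a long sequential recurrence) would not gain from this structure, which is why the question of how parallelizable your problem is comes before any hardware decision.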
  6. Aug 2, 2017 #5



    CUDA is a registered trademark of NVIDIA Corporation.

    OpenCL supports a wider range of manufacturers, including NVIDIA. If you want support for the widest range of Intel or AMD multi-core CPUs and NVIDIA or AMD Radeon Pro GPUs, then consider OpenCL. It is more open, and closer to the hardware than CUDA when it comes to optimisations. Maybe one manufacturer will eventually dominate, but we cannot expect either CUDA or OpenCL to die.

    I gave earlier the example of the Pascal-architecture GTX 1070. If you dedicate a card to GPGPU work as a parallel computing coprocessor, you will want an integrated display chip on the motherboard, which frees the card and will hopefully give you a few more MFLOPS. Next year there may be a better way to get more bang for your buck.

    So long as AMD and NVIDIA are competing, GPGPUs will continue to improve rapidly, with each generation taking only a couple of years. We have had NVIDIA's Pascal architecture since 2016, and shipments of the Volta architecture will apparently arrive in 2018.

    Most computational software packages now support GPUs; check which packages you use and whether they have both CUDA and OpenCL libraries. If you are using a supercomputer built from many NVIDIA GPGPUs, you will obviously do better with CUDA.

    A free copy of OpenCL will almost certainly work on the multi-core CPU and graphics chip in your existing machine. So delay your purchase as long as possible, then make the hardware decision once you have climbed the parallel-software learning curve. Only then will you know what hardware will work best for you.
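    To see what your existing machine already exposes, here is a short device-enumeration sketch. It assumes the third-party `pyopencl` binding (not part of the standard library); it simply returns an empty list if the binding or an OpenCL runtime is absent, so it is safe to try on any box.

    ```python
    # Hedged sketch: list every OpenCL platform/device pair visible on this
    # machine, using the pyopencl binding (pip install pyopencl).
    def list_opencl_devices():
        """Return (platform_name, device_name) pairs, or [] if OpenCL
        is unavailable on this machine."""
        try:
            import pyopencl as cl  # third-party binding, may not be installed
        except ImportError:
            return []
        pairs = []
        for platform in cl.get_platforms():      # e.g. "Intel", "NVIDIA CUDA"
            for device in platform.get_devices():  # CPUs and GPUs alike
                pairs.append((platform.name, device.name))
        return pairs

    if __name__ == "__main__":
        for plat, dev in list_opencl_devices():
            print(f"{plat}: {dev}")
    ```

    On a typical laptop this will often show the CPU itself as an OpenCL device, which is what makes it possible to climb the learning curve before buying any hardware.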
  7. Aug 2, 2017 #6
    Do you know if the new cards by AMD can also work in double precision? I ask about AMD because it is much cheaper than Nvidia and Intel.

    I looked at the Wikipedia page for Vega and it seems it can: https://en.wikipedia.org/wiki/AMD_RX_Vega_series
    Last edited: Aug 2, 2017
  8. Aug 2, 2017 #7
    I'm pretty sure that only the Nvidia Tesla series has a significant amount of double-precision GPU compute. In most cases it is possible to use single precision in physics, but you need to know your stuff when it comes to precision loss in subtraction, so you may have considerably more programming headaches.
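    The "precision loss in subtraction" mentioned above can be demonstrated without any GPU, by round-tripping Python doubles through IEEE-754 single precision with the standard `struct` module:

    ```python
    import struct

    def to_f32(x):
        """Round a Python double to the nearest IEEE-754 single-precision value."""
        return struct.unpack('f', struct.pack('f', x))[0]

    a = 1.0
    b = 1.0 + 1e-9   # differs from a only at the 9th decimal place

    diff64 = b - a                   # double precision resolves the difference
    diff32 = to_f32(b) - to_f32(a)   # 1e-9 is below f32's ~1.2e-7 epsilon near 1.0

    print(diff64)  # tiny but nonzero, roughly 1e-9
    print(diff32)  # exactly 0.0 -- the difference was rounded away
    ```

    Single precision carries only about 7 significant decimal digits, so any code that subtracts nearly equal quantities must be restructured (or kept in double precision) before it is moved to a single-precision GPU.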
  9. Aug 2, 2017 #8



    @Telemachus, I am going through the same process as you, so I have been studying the form for the past lunar cycle. The AMD Radeon option is the lower-cost GPU, but with any GPU it is hard to tell what you are getting from any manufacturer; the details of the internal architecture are very well hidden.

    There is double-precision support in the AMD Radeon HD 69xx, and from the Radeon HD 7730 onwards to the Radeon Pro WX x100. Take a look at the bottom half of the table here …