RISC gaming

  1. May 28, 2007 #1
    RISC processors dominate the embedded market and gaming consoles. Even Microsoft ditched the x86 processor in favor of PowerPC for the Xbox 360. In high performance computing, 4 of the top 5 fastest supercomputers are based on the PowerPC.

    So why has there been this aversion to RISC processors in the general market? Even Apple went to Intel. What makes RISC so attractive for gaming and yet a poor fit for our world of word processors and email?
     
  3. May 29, 2007 #2
    I wonder about this as well.

    I don't see why a PowerPC processor with 8 cores and 128-bit vector units, all running at 3.2 GHz, wouldn't be better than the x86 processors we're using now. Is there something I'm missing?
     
  4. May 29, 2007 #3

    NoTime

    Science Advisor
    Homework Helper

    Both multi-core and RISC require intelligent optimization in order to make effective use of the capability.

    My opinion:
    The "advantage" of RISC is that all instructions are single-cycle, as opposed to the complex multi-cycle instructions on a "standard" microprocessor.
    Not really an advantage at all: once that complex instruction has to be synthesized by the compiler out of simple ones, the resulting sequence burns more machine cycles than the hardware complex instruction did.
    With the advent of single-cycle "complex" instructions in the current generation of hardware, I would say the concept of RISC is well past its prime.

    Multi-core is still a meaningful concept, but making any meaningful use of it is well beyond the capabilities of the current generation of compilers.
    OSs can make some use of this feature.
    The major difference here is true parallel execution of a linear job stream as opposed to time slicing.
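
    As a rough sketch of the trade-off being described (assuming a plain byte copy as the example operation):

    Code:
    #include <stddef.h>

    /* A block copy written as an ordinary C loop. */
    void copy_bytes(char *dst, const char *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];        /* one load + one store per byte */
    }

    /*
     * On a CISC machine such as x86 the whole loop can be expressed as a
     * single complex instruction (REP MOVSB) that the hardware iterates
     * internally.  On a pure RISC machine the same work compiles to a loop
     * of simple loads, stores, increments and a branch, so the "complex
     * instruction" ends up in the generated code instead of the silicon.
     */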
     
  5. May 29, 2007 #4
    That's interesting, because I consistently saw RISC processors with lower clock speeds compared to CISC processors with higher clock speeds. And yet you say RISC uses more clock cycles to do the same work. Have I been duped this whole time?
     
  6. May 29, 2007 #5

    AlephZero

    Science Advisor
    Homework Helper

    I would dispute that. The makers of parallel machines with large numbers of processors have had compilers that can automatically optimize code over multiple CPUs for 20 years now (Sun + Cray Research + Silicon Graphics have developed a pretty good compiler system - though some of their competitors managed to FUBAR their own attempts at writing one).

    IMO the big problem with the "multi core" concept is that the memory bandwidth to the chip is unlikely to keep pace with sticking more cores in the chip. Also there are considerations like "cache thrashing", where multiple copies of memory contents are cached for different processors and then one processor updates the value. It's (fairly) easy to design something that handles that and works, but not so easy to design something that still works fast. That's part of the reason why a 32-processor SGI box costs rather a lot of money, but it really can run a single "real-world" application (not an artificial benchmark) on 32 processors, 31.9 times as fast as on a single processor.
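
    A small illustration of that coherence cost (assuming OpenMP and an array of per-thread counters, chosen purely for the example):

    Code:
    #include <omp.h>
    #include <stdio.h>

    #define THREADS 4
    #define ITERS   100000000L

    /* The counters are adjacent in memory, so they share a cache line. */
    long counters[THREADS];

    int main(void)
    {
        double t0 = omp_get_wtime();
    #pragma omp parallel num_threads(THREADS)
        {
            int id = omp_get_thread_num();
            for (long i = 0; i < ITERS; i++)
                counters[id]++;   /* each write invalidates the other cores' cached copies */
        }
        printf("elapsed: %.2f s\n", omp_get_wtime() - t0);
        return 0;
    }

    Padding each counter out to its own cache line removes the contention and typically makes the same loop run several times faster.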

    The other problems are that not all applications are suitable for multi-processor optimization, and - the biggest problem - the proportion of working programmers and system designers who have real practical experience of this is very small. From my own experience, you can't become an expert by going on a 2-day appreciation course; it takes more like 2 years to really get your head around it.

    As you say, the easy way to use multiple processors is to run multiple independent tasks - but for most "personal" computing, that's not a useful option. Re-inventing the mainframe doesn't seem a very sensible way to make progress.
     
  7. May 29, 2007 #6

    chroot

    Staff Emeritus
    Science Advisor
    Gold Member

    AlephZero is correct. SGI, for example, has compilers that will automatically parallelize anything you throw at them. You don't need to learn how the architecture works -- just write normal, general-purpose C code, and the SGI compilers will parallelize it for you. I'm not qualified to say precisely how well this parallelization is performed, but it's hardly a new concept. Making good use of multiple processors is absolutely not "well beyond the capabilities" of current compilers... unless maybe you've only had experience with Visual Studio or something.
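
    Roughly speaking, the difference is between loops the compiler can prove independent and loops it can't. A sketch (my own example, not SGI code):

    Code:
    /* Iterations are independent, so an auto-parallelizing compiler can
       split this loop across CPUs with no help from the programmer. */
    void scale(int n, double *x, double a)
    {
        for (int i = 0; i < n; i++)
            x[i] = a * x[i];
    }

    /* Each iteration depends on the previous one (a loop-carried
       dependency), so this stays serial unless the algorithm itself is
       redesigned. */
    void prefix_product(int n, const double *x, double *out)
    {
        double p = 1.0;
        for (int i = 0; i < n; i++) {
            p *= x[i];
            out[i] = p;
        }
    }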

    Personally, I'd be thrilled to see technologies like SGI's cache-coherent non-uniform memory access (ccNUMA) architecture make it onto desktop personal computers.

    - Warren
     
  8. May 29, 2007 #7

    AlephZero

    Science Advisor
    Homework Helper

    Been there, done that.... don't believe everything you read in the marketing brochures though. There is a good Freudian reason why "parallelize" is often pronounced "paralyze" :rolleyes:

    As for how simple (or not) this technology really is, I've seen the source code of an SGI library routine to multiply two matrices stored in rectangular arrays. With all the options for different ways of optimizing the code depending on the size and shape of the matrices and the number of parallel threads, it was about 700 lines of source code in total - and I mean 700 lines of code, not 10 lines of code and 690 lines of comments. There's no way your average Joe Programmer is ever going to use this sort of hardware efficiently without that level of assistance.
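
    For comparison, the "10 lines of code" core of such a routine is basically a triple loop; this is only a sketch, and the several hundred extra lines of a tuned library version go into cache blocking, loop ordering and per-thread scheduling:

    Code:
    /* Naive parallel matrix multiply, C = A * B, with n x n matrices
       stored row-major in flat arrays.  The outer loop is split across
       threads; everything cache-related is ignored. */
    void matmul(int n, const double *a, const double *b, double *c)
    {
    #pragma omp parallel for
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += a[i*n + k] * b[k*n + j];
                c[i*n + j] = sum;
            }
    }

    Even this version parallelizes, but on large matrices it will be far slower than the hand-tuned library code precisely because of what those other ~690 lines take care of.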
     
  9. May 29, 2007 #8

    NoTime

    Science Advisor
    Homework Helper

    While I haven't worked with something like SGI, I have worked on both multitasking OS code and parallel hardware processing.
    My understanding of these compilers is that they are limited primarily to array processing without programmer assistance.
    Are you saying they are capable of more than that?
    If so, I'm suitably impressed.
    AFAIK it just won't work with the mix of instructions found in the typical business/personal application program.
    That may be a moot point anyway since the typical application is normally I/O bound and the only thing that will speed it up is an increase in I/O channel speed.

    Yep! There is a tremendous amount of headwork involved in this. You can't simply throw code at the problem and hope some of it sticks. It takes a thorough understanding of your intended results and how to segment the problem into pieces that can be completed independently.
    The people who can do this are few and far between.
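
    A toy example of that kind of segmentation (assuming POSIX threads and a simple array sum, purely for illustration): the programmer, not the compiler, cuts the data into independent ranges and combines the results only at the end.

    Code:
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N        (1 << 22)

    static double data[N];

    struct chunk { int lo, hi; double partial; };

    static void *sum_chunk(void *arg)
    {
        struct chunk *c = arg;
        double s = 0.0;
        for (int i = c->lo; i < c->hi; i++)   /* touches only its own range */
            s += data[i];
        c->partial = s;
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t tid[NTHREADS];
        struct chunk chunks[NTHREADS];
        int step = N / NTHREADS;

        for (int t = 0; t < NTHREADS; t++) {
            chunks[t].lo = t * step;
            chunks[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * step;
            pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {  /* combine only at the end */
            pthread_join(tid[t], NULL);
            total += chunks[t].partial;
        }
        printf("sum = %.0f\n", total);
        return 0;
    }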

    The only two "personal" computing concepts that I can think of that can make real use of multi-core are relational databases and video rendering. Four if you count sorting and plain table lookup.
     
  10. May 29, 2007 #9

    NoTime

    Science Advisor
    Homework Helper

    Frankly, I'll believe that when I see it.

    Maybe not, when cracking RSA keys becomes trivial :wink:
     
  11. May 29, 2007 #10

    graphic7

    Gold Member

    The MIPSpro compilers and whatever they use for their Altix systems are by no means the only compilers today that do auto-parallelization of code. The Intel and Sun compilers both support OpenMP, and the Sun compilers (Sun Studio) support auto-parallelization. Sun Studio is available for free for both Solaris and Linux.
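
    For reference, a minimal OpenMP example of the kind those compilers accept (the exact compile flags differ by compiler and version - something like "icc -openmp prog.c" or Sun Studio's "cc -xopenmp prog.c" - so treat those commands as assumptions):

    Code:
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* One team of threads, each printing its own id. */
    #pragma omp parallel
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());

        /* Work-sharing: loop iterations are divided among the threads and
           the partial sums are combined by the reduction clause. */
        double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000; i++)
            sum += 1.0 / i;
        printf("harmonic(1000) = %f\n", sum);
        return 0;
    }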
     
  12. May 29, 2007 #11

    graphic7

    Gold Member

    Small ccNUMA systems have been around for quite some time. Any system from 2000 on that Sun sells that has a Fireplane interconnect is considered a ccNUMA system. My Sun Blade 1000 has a Fireplane interconnect and is thus a ccNUMA machine, but it has no chance of ever cracking RSA keys in a trivial manner.
     
  13. May 30, 2007 #12

    AlephZero

    Science Advisor
    Homework Helper

    Actually Sun wrote the parallelization code for the Cray Unix-based compilers, which was then ported to SGI when SGI bought Cray. The SGI and Solaris compilers are pretty much identical, which is good news for software developers, because fighting with different compilers when porting high performance code to different systems is a pain in the ****.
     
  14. May 30, 2007 #13

    AlephZero

    Science Advisor
    Homework Helper

    I guess it depends on what you define as "array processing". They do quite a lot more than just "numerical operations on matrices". But you are right, the real key to getting good performance is designing the algorithm, not tweaking the code. That applies equally to parallel and non-parallel, though parallel makes a whole new set of options available.

    Re business applications, database searching is definitely on the parallelizable applications list.

    I agree it's fairly irrelevant for most personal applications (except games).

    The SGI architecture is pretty good for I/O speed as well. The I/O bandwidth scales with the number of processors, and the OS doesn't force all I/O requests to squeeze through one serialized bottleneck. You don't get either of those by just putting multiple cores on one CPU chip.
     
    Last edited: May 30, 2007
  15. May 30, 2007 #14

    NoTime

    Science Advisor
    Homework Helper

    Wait another 10 years :biggrin:
     