Why is pipelining easier on a RISC vs a CISC?

  1. Aug 4, 2017 #1
    I know very basically what pipe-lining is. My understanding is that the output from one "computing element" (usually a small set of instructions solving a simple task, such as, say, finding a geometric mean) should flow immediately into the input of the following "computing element" where it is used, instead of being stored somewhere in RAM/cache and then retrieved again later.

    I've read that pipe-lining is easier to achieve (generally) with a RISC rather than a CISC.

    I regurgitated this to a friend of mine recently, and he asked why this would be so, and I couldn't give an answer.

    It feels right intuitively that RISC would be easier to do pipe-lining for than CISC...but in all honesty, I don't understand why.

    Could someone help me understand why?

    I already have a basic working understanding of the von Neumann architecture of CPU (ALU, PC, registers, control logic), but no real knowledge of real systems.
     
  3. Aug 5, 2017 #2
    The "ISC" part means the same thing in both, so do you know what the difference between CISC and RISC is?
     
  4. Aug 5, 2017 #3
    Reduced Instruction Set vs. Complex Instruction Set for the CPU control logic.

    In the first half of the nand2tetris course, a series of projects led me through describing a very basic CPU from the NAND gates up in a basic custom HDL.

    The CPU I eventually described was actually a Harvard architecture, not strict von Neumann, in so far as the instructions for the ALU, jump conditions and destination registers were stored in the ROM separately from the constant or memory-address values, via its A-instructions and C-instructions. They said that in most CPUs those addresses/constant values get stored in the same instructions as the ALU, jump and destination-register fields. However, their Hack CPU was only 16-bit.

    But I am not familiar with x86 or ARM for instance.

    What is it about complex instruction sets that makes them less amenable to pipe-lining? Or conversely, what is it about reduced instruction sets that makes them better for pipe-lining?
     
  5. Aug 5, 2017 #4
    Pipelining is used in two ways in processors:
    There is pipelining for the actual computations. A floating point multiply unit might need 5 clock cycles to produce an answer from the inputs, but another computation can be started after 1 or 2 clock cycles. Fetching data from memory or the cache is also pipelined. For this kind of pipelining it does not really matter if you are on a RISC or CISC.
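    A toy sketch of that throughput difference (the 5-cycle latency and the 1-cycle issue rate are just the numbers from the paragraph above, not any real FPU):

    ```python
    # Toy model of a pipelined multiply unit: 5-cycle latency, but a
    # new multiply can be started every cycle (initiation interval 1).
    LATENCY = 5

    def unpipelined_cycles(n_ops):
        # Each multiply must fully finish before the next one starts.
        return n_ops * LATENCY

    def pipelined_cycles(n_ops):
        # First result appears after LATENCY cycles, then one per cycle.
        return LATENCY + (n_ops - 1)

    print(unpipelined_cycles(100))  # 500
    print(pipelined_cycles(100))    # 104
    ```

    So for a long stream of independent multiplies, the pipelined unit approaches one result per cycle regardless of the latency of any single multiply.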

    Pipelining is also used for instruction fetching and decoding. This is much harder on a CISC than on a RISC because there are so many more instructions, the instructions have different lengths (which is also a problem if you want to decode and execute more than 1 instruction at once), and it's much harder to find out what the inputs and outputs of instructions are, and to see whether instructions depend on each other.
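    A hedged illustration of the variable-length problem (the one-byte length prefix is an invented stand-in for x86's variable encodings, not a real format):

    ```python
    # Hypothetical instruction stream where the first byte of each
    # instruction encodes its total length in bytes.
    stream = bytes([1, 3, 0, 0, 2, 0, 4, 0, 0, 0])

    def cisc_boundaries(stream):
        # Variable lengths: the stream must be walked serially, since
        # each instruction's start depends on decoding the one before it.
        offs, i = [], 0
        while i < len(stream):
            offs.append(i)
            i += stream[i]
        return offs

    def risc_boundaries(stream, width=4):
        # Fixed-width instructions: every boundary is known up front,
        # so several decoders can work on them in parallel.
        return list(range(0, len(stream), width))

    print(cisc_boundaries(stream))  # [0, 1, 4, 6]
    ```

    The serial dependence in `cisc_boundaries` is exactly what makes wide, parallel decode expensive on a variable-length ISA.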

    This problem has been mostly solved by throwing transistors at it.
    All modern CISC processors have a longer fetch/decode pipeline and they will convert the CISC instructions into RISC-like micro-ops in the pipeline. The longer pipeline gives a larger wait if the pipeline needs to be flushed because of a mispredicted jump, but for most repetitive loops, like matrix multiplication, the performance will only depend on the instruction units and memory/caches, and instruction decoding won't be a factor, nor RISC/CISC.
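    A sketch of that micro-op conversion (the instruction names and the `crack` helper are made up for illustration; real x86 decoders are vastly more involved):

    ```python
    # Splitting a memory-to-register CISC-style instruction into
    # RISC-like micro-ops, roughly in the spirit of modern x86 decoders.
    def crack(instr):
        op, dst, src = instr
        if op == "add_mem":                  # e.g. ADD reg, [addr]
            return [("load", "tmp", src),    # micro-op 1: load from memory
                    ("add", dst, "tmp")]     # micro-op 2: register-only add
        return [instr]                       # simple instructions pass through

    print(crack(("add_mem", "eax", 0x1000)))
    # [('load', 'tmp', 4096), ('add', 'eax', 'tmp')]
    ```

    Once cracked, each micro-op looks like a fixed-format RISC instruction, so the rest of the pipeline can schedule them the same way a RISC core would.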
     
  6. Aug 7, 2017 #5
    Modern processors are pretty much all RISC internally. Even CISC instruction sets (x86-64) are translated to RISC-like micro-ops on chip prior to execution. ... RISC CPUs generally run at faster clock speeds than CISC because the minimum clock period is dictated by the slowest stage of the pipeline (more complex instructions make for slower stages).
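    To make that concrete (the stage names and delays below are invented round numbers, not measurements of any real chip):

    ```python
    # The cycle time is set by the slowest pipeline stage: the clock
    # can tick no faster than the longest stage takes to settle.
    stage_delays_ns = {"fetch": 0.3, "decode": 0.25, "execute": 0.5,
                       "mem": 0.45, "writeback": 0.2}

    cycle_ns = max(stage_delays_ns.values())
    print(cycle_ns)        # 0.5 ns per cycle
    print(1.0 / cycle_ns)  # 2.0 (GHz)
    ```

    Splitting a complex "execute" stage into simpler sub-stages shrinks the maximum, which is one way RISC-style simplicity buys clock speed.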

     
    Last edited by a moderator: Aug 7, 2017
  7. Aug 10, 2017 #6
    For pipe-lining to be effective, you need predictable instruction sizes and a quick way of determining how consecutive instructions interact.
    RISC often provides instructions of a single length - so finding the start of the next few instructions is pretty simple. Often, what is "reduced" in a RISC is not just the total number of instructions, but the total number of types of instructions - so there is a limited number of instruction formats. This allows memory and register dependencies to be determined in very few gate-delays - so if the pipeline needs to be throttled (to wait for a result), it can be determined quickly.
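    A toy version of that dependency check (the four-field instruction tuple is an assumed fixed format for illustration; in hardware this is a handful of comparators on fixed bit positions):

    ```python
    # Read-after-write hazard check, assuming a RISC-like fixed format
    # where every instruction is (opcode, dest_reg, src1, src2) and the
    # register fields always sit in the same place.
    def needs_stall(prev, curr):
        _, prev_dest, *_ = prev
        _, _, src1, src2 = curr
        return prev_dest in (src1, src2)   # curr reads what prev writes

    a = ("add", "r1", "r2", "r3")
    b = ("sub", "r4", "r1", "r5")   # reads r1, which a writes
    print(needs_stall(a, b))  # True
    ```

    Because the fields never move, the hardware can run this comparison on several instructions at once in very few gate-delays, which is exactly what the pipeline control needs.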

    In the best case, the instruction set is designed for being pipe-lined - commonly with rules that favor a specific sequence of instruction types: A, then B, then C, then A. So as long as you code (or the compiler codes) A,B,C,A,B,C,..., the pipeline will operate at its maximum pace.
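    A toy model of that kind of scheduling rule (the 3-cycle unit-recovery time is an assumption picked to match the A, B, C rotation above):

    ```python
    # Each instruction type's unit is busy for 3 cycles after issue, so
    # rotating A,B,C,A,B,C,... issues one instruction per cycle with no
    # stalls, while repeating one type stalls on every instruction.
    def cycles_to_issue(program, recovery=3):
        busy_until = {}   # unit type -> first cycle it is free again
        cycle = 0
        for unit in program:
            cycle = max(cycle, busy_until.get(unit, 0))  # stall if busy
            busy_until[unit] = cycle + recovery
            cycle += 1    # one issue slot per cycle
        return cycle

    print(cycles_to_issue("ABCABC"))  # 6  (no stalls)
    print(cycles_to_issue("AAAAAA"))  # 16 (stalls dominate)
    ```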
     
  8. Aug 12, 2017 #7
    In the 1980s Amdahl mainframes were pipelined and IBM mainframes were not.

    They both ran exactly the same non-RISC instruction set.

    For some workloads the pipelining produced speed advantages, but self-modifying code (a program that modified upcoming instructions close enough ahead to have already been prefetched) put the pipelined approach at a disadvantage.
     