How exactly do computers execute code?

  1. Sep 15, 2018 #1
    One thing that has always mystified me is how the hardware carries out the coded instructions. The code is 1's and 0's, and if I have understood anything, 1's and 0's are on and off. But how is the computer flipping the circuits on and off? Something must tell it to do that. But then what is telling it to follow those instructions, and so on?

    I am unfamiliar with this myself, but I have friends who took circuits or logic circuits or something like that, so I guess at the most basic level, logic circuits are used in a seemingly magical way to control everything. Does all code carry with it instructions that activate those circuits?

    I don't know. It's weird to me, almost as if there must be a brain in there deciphering the code and carrying out the orders.
  3. Sep 15, 2018 #2



    Staff: Mentor

    What reading have you been doing so far on this? Seeing what you've been reading and not understanding will help us to answer your question the best that we can. Thanks. :smile:
  4. Sep 15, 2018 #3


    Staff: Mentor

    A tutorial video like this might help. There are others on Youtube.

    There is no magic. It is just Boolean logic. I'm not sure what level of understanding you seek. Do you need a description of the CPU down to the level of each transistor?
  5. Sep 16, 2018 #4
    I know there's no magic. I don't think that video is what I'm looking for, but that's probably the right level: how the CPU works at the transistor level. Not just the generic explanation you get in any course, but how exactly, physically, the CPU does its processing.

    I can understand how a computer worked back in the days when human involvement was required: manually plugging and unplugging cables as code, or with the punch cards. When computers required human intercession, the software was physically altering the system in a way you could see. What I don't understand is how a CPU does that.
  6. Sep 16, 2018 #5

    jim hardy

    Science Advisor
    Gold Member
    2018 Award

    i think you'll have to build one.
    I learned by troubleshooting them. Hung a 'scope on every IC in them.
    Of course that was in the days of 7400 logic ICs, when a CPU occupied three boards seventeen inches square and you could do that.
    By 1974 they'd progressed to the point that a CPU was one IC the size of your thumb.

    Hobbyists are building them for the exact reason you state.

    good luck
    old jim
  7. Sep 16, 2018 #6


    Staff: Mentor

    If you are curious enough to study, I recommend choosing the famous 6502 chip as your object.

    A simple Google search for 6502 will point you to numerous books, articles, emulators, circuit diagrams, and even fan clubs of people who study and celebrate the 6502. The 6502 was a very capable microprocessor, but much, much simpler than today's CPUs. Those 6502 groups and forums are filled with people who share your interests. They can probably help you with very specific suggestions.

    Once upon a time, I had all the 6502 op codes memorized, and I could read code in binary. Those were fun times. :biggrin:
  8. Sep 16, 2018 #7


    Science Advisor
    Gold Member
    2018 Award

  9. Sep 16, 2018 #8


    Homework Helper

    Typically the opcode and/or its options are used to index the equivalent of tables, which in turn result in running a sequence of micro-operations, and/or more table lookups. In some simple processors, there's a single table lookup with a large number of bits per entry, one bit for every possible operation. I recall a 16-bit micro based on the AMD 2900 family of chips that had just under 80 possible combinations of operations, so it used a table that was 80 bits wide, with only one bit set per entry.
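    The one-bit-per-operation decode table described above can be sketched in a few lines of Python. This is a hypothetical illustration (the opcodes and control-line names are invented, not the AMD 2900's), showing how each opcode selects a table entry with exactly one bit set:

```python
# Hypothetical one-hot decode table: each opcode indexes an entry
# in which exactly one control bit is set; that bit names the
# micro-operation to run. Names and encodings are invented.
CONTROL_LINES = ["LOAD", "STORE", "ADD", "SUB", "JUMP"]

# Decode ROM: opcode -> one-hot control word (one bit per operation).
DECODE_ROM = {
    0b000: 0b10000,  # LOAD
    0b001: 0b01000,  # STORE
    0b010: 0b00100,  # ADD
    0b011: 0b00010,  # SUB
    0b100: 0b00001,  # JUMP
}

def decode(opcode):
    """Return the name of the single asserted control line."""
    word = DECODE_ROM[opcode]
    for i, name in enumerate(CONTROL_LINES):
        if word & (1 << (len(CONTROL_LINES) - 1 - i)):
            return name

print(decode(0b010))  # ADD
```

    In hardware the "dictionary lookup" is just a ROM addressed by the opcode bits, and the one asserted output wire directly enables the corresponding circuitry.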
  10. Sep 16, 2018 #9


    Staff: Mentor

    CPUs with microcode add another level of abstraction. That's why I recommended the 6502: no microcode.
  11. Sep 16, 2018 #10


    Staff Emeritus
    Science Advisor
    2018 Award

    You can start at the transistor level and work up from there by looking into Digital Logic. In my undergrad Digital Logic class we actually designed a very, very basic CPU. Unfortunately you just won't be able to really understand this stuff without spending a significant amount of time and effort with a good textbook on the subject. You can watch as many videos as you want, but until you get down and dirty solving digital logic problems you won't truly understand.
  12. Sep 16, 2018 #11
    The punch card is just in memory now, as a bunch of on/off bits. The clock is cycling, fetching, decoding, executing; it's built in. You have your program's instructions loaded in memory, and you have a number pointing you to where the next one is. That number increments, or an instruction can change it.

    It seems highly complex, but all of the components individually are fairly simple. How they are combined to make more complex machinery is also relatively simple, at each level. The key to managing the complexity is abstraction.
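    The fetch-increment-execute cycle described above can be sketched as a short loop. This is a toy, not any real CPU: the three-instruction machine, its names, and the (op, arg) encoding are invented for illustration.

```python
# Toy fetch-decode-execute loop for an invented 3-instruction machine.
# "Memory" is a list of (op, arg) pairs; pc is the number pointing to
# the next instruction; a JUMP simply overwrites that number.
def run(program):
    acc = 0   # single accumulator register
    pc = 0    # program counter: index of the next instruction
    while pc < len(program):
        op, arg = program[pc]    # fetch
        pc += 1                  # increment, unless an instruction changes it
        if op == "LOAD":         # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP":
            pc = arg             # the instruction changes the pointer
        elif op == "HALT":
            break
    return acc

# LOAD 2; ADD 3; HALT
print(run([("LOAD", 2), ("ADD", 3), ("HALT", 0)]))  # 5
```

    In real hardware this loop isn't software at all; it is wired in, driven by the clock.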
  13. Sep 17, 2018 #12
    Exactly. Many things you just can't understand from a book will become common and trivial once you start using them. Don't try to expand your understanding without any practical experience.
  14. Sep 21, 2018 #13
    It seems that way because there IS a brain in there deciphering the code and carrying out the orders.

    A processor has a set of instruction codes it is designed to understand. A common design feature is also to have "registers", which are a few memory locations right on the chip. Things done on registers are very fast. Let's say you have four registers A, B, C and D. Then a built-in instruction might be ADD A, B, C (encoded in some binary format) which the processor is wired to understand as "Add what's in register A to what's in register B and store the result in register C". When the processor is executing that instruction, it invokes special addition circuitry which is also right on the chip, as well as the circuitry which reads and writes things to the desired places.

    Then it gets the next instruction and does whatever that says. And the next. And the next.

    You might want to pick up a book on assembly language for some processor, any processor, just to see what those fundamental instructions look like. Assembly language is the text version of the commands. The processor of course encodes them as 1's and 0's. Maybe ADD is instruction 0101 and registers A, B, C, D are written as binary 00, 01, 10, 11 so ADD A, B, C to the computer really looks like 0101 00 01 10 for example.
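    The example encoding above (ADD = 0101, registers A through D as two-bit codes) can be decoded mechanically with shifts and masks; that is essentially what the processor's decode circuitry does with wires instead of arithmetic. The encoding is the poster's invented illustration, not a real instruction set:

```python
# Decode the invented 10-bit instruction "ADD A, B, C" = 0101 00 01 10:
# 4-bit opcode, then three 2-bit register fields (src1, src2, dst).
regs = {"A": 7, "B": 5, "C": 0, "D": 0}
NAMES = ["A", "B", "C", "D"]

word = 0b0101_00_01_10
opcode = (word >> 6) & 0b1111     # top 4 bits: 0101 = ADD
src1 = NAMES[(word >> 4) & 0b11]  # 00 -> A
src2 = NAMES[(word >> 2) & 0b11]  # 01 -> B
dst  = NAMES[word & 0b11]         # 10 -> C

if opcode == 0b0101:              # ADD: dst = src1 + src2
    regs[dst] = regs[src1] + regs[src2]

print(regs["C"])  # 12, i.e. 7 + 5
```

    The chip does the same field-splitting with its wiring: the opcode bits steer the adder's output to the register selected by the destination bits.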

    The binary language, what the CPU actually understands, is called "machine language". There isn't really a good reason to learn or understand that unless you're really curious. And these days there is very little call for people to work in the slightly more-readable assembly language either.
  15. Sep 21, 2018 #14


    Gold Member

    It's good to have at least a passing understanding of assembly. If you ever get into trouble debugging, you might get dumped into an assembly view. Also, if your code is not working, you can compare the assembly to the C and it will show you what part of the C language you violated.

  16. Sep 23, 2018 #15


    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    You might want to look into what's called a finite state machine. Also, you should probably concentrate on the gate level, rather than the transistor level.
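    A finite state machine is exactly what a CPU's control unit is: a circuit that sits in one of a fixed set of states and moves to the next state on every clock tick. A minimal sketch, using the classic fetch/decode/execute states as a hypothetical example:

```python
# Minimal finite state machine: the control unit steps through a fixed
# cycle of states, one transition per clock tick. States and the cycle
# are a textbook-style illustration, not any particular chip's design.
TRANSITIONS = {
    "FETCH": "DECODE",
    "DECODE": "EXECUTE",
    "EXECUTE": "FETCH",   # back to the top for the next instruction
}

def tick(state):
    """One clock cycle: move to the next control state."""
    return TRANSITIONS[state]

state = "FETCH"
for _ in range(4):
    state = tick(state)
print(state)  # DECODE: four ticks from FETCH
```

    In hardware the state is held in a few flip-flops, and the transition table is combinational logic built from the gates discussed above.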
  17. Oct 2, 2018 #16


    Gold Member

  18. Dec 7, 2018 #17
    I preferred the 6809. Ahh, assembly language programming on the Color Computer....
  19. Dec 8, 2018 #18


    Science Advisor


    For a reasonably basic look starting at the transistor level, diagrams and much more can be found with: https://www.google.com/search?&q=logic+gate+circuit+diagram

    An early bible for computer operation (and more) is THE ART OF COMPUTER PROGRAMMING, Volume 1: Fundamental Algorithms, by Donald E. Knuth, Addison-Wesley Publishing Company, ISBN 0-201-03809-9. (That's the second edition; I believe there are later ones.)

    Digital Logic is based on signals that are either On or Off (One or Zero, True or False); just like your refrigerator is either On or Off.

    Then realize that any logical operation (or mathematical one, for that matter) can be created from the three logical operations AND, OR, and NOT. (It can also be done with other operations, but these are the most used and simple enough to keep straight.) That's the field called Boolean algebra, quite useful when you get in a bit deeper.

    AND = output is True if all inputs are True
    OR = output is True if any one or more inputs are True
    NOT = output is True if input is False; output is False if input is True (i.e. the output is the complement of the input)
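    The three gates above are enough to build everything else. As a small demonstration, here they are as functions, composed into XOR and then a half adder (the standard textbook construction for adding two one-bit numbers):

```python
# The three basic gates, as 1-bit functions.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # a XOR b = (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two 1-bit numbers: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

    Chain half adders together (with carry handling) and you have the adder inside the CPU; that is the sense in which everything reduces to AND, OR, and NOT.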

    Hope this helps.