
How does the computer understand computer code

  1. Jun 5, 2014 #1
    I've been programming on and off for a few years and just recently started taking it seriously. Something I've always wondered about is how the computer understands the commands I type in. Is it essentially binary? And if so, how could 1's and 0's possibly create something as complex as a video game or an operating system?
     
  3. Jun 5, 2014 #2

    Borek

    User Avatar

    Staff: Mentor

    If you know a little bit about high-level programming languages, there are at least two additional levels on which this question can be answered. One level is machine code (assembly) - that's the form in which your commands are analyzed, interpreted and executed. The other level is that of logic gates, where the zeros and ones are a way of changing the logical states of other gates, which in turn makes it possible for the processor to execute the machine code.
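    A rough way to picture that lowest level in software is to treat each gate as a little boolean function. This is only a sketch (real gates are transistor circuits), but it shows how everything can be built up from a single primitive such as NAND:
    Code (Text):

    #include <iostream>

    // NAND is a universal gate: every other gate can be built from it.
    bool NAND(bool a, bool b) { return !(a && b); }

    bool NOT(bool x)         { return NAND(x, x); }
    bool AND(bool a, bool b) { return NOT(NAND(a, b)); }
    bool OR(bool a, bool b)  { return NAND(NOT(a), NOT(b)); }

    int main() {
        std::cout << AND(true, true) << " " << OR(false, false) << "\n";  // prints 1 0
    }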
     
  4. Jun 5, 2014 #3

    phinds

    User Avatar
    Gold Member
    2016 Award

    Yes

    Since you obviously know that they DO, why do you pose the question as though it seems unlikely to you that they could?

    Just study some basic computer architecture. As Borek said, the lowest level of understanding is logic gates and above that is groups of logic gates controlled by machine code (1's and 0's) and above that are high level languages. Each of these things builds from the bottom up.

    As you are aware, by the time you get up to high-level languages, you are not actually programming the machine, you are programming a program (a compiler) which takes your statements back down to machine code.
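    If you want to watch that translation happen, most compilers will show you their output. Assuming a system with g++ installed, put something small in a file (the name add.cpp here is just an example):
    Code (Text):

    // add.cpp - a deliberately tiny function to compile
    int add(int b, int c) {
        return b + c;
    }

    Then g++ -S add.cpp writes the generated assembly to add.s, and g++ -c add.cpp followed by objdump -d add.o shows the actual machine-code bytes next to each disassembled instruction - the same layers described above.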
     
  5. Jun 5, 2014 #4
    How could a bunch of simple protons, neutrons and electrons possibly create something as complex as the human body? See what I did there? :)
     
  6. Jun 5, 2014 #5

    SixNein

    User Avatar
    Gold Member

    I think others have pointed this out, but there are several different layers of abstraction between high level languages and what is going on in the hardware.

    For example, take the following C++ statement:

    Code (Text):

    a = b+c; // a,b,c are integers
     
    Go down one level of abstraction, and you end up with assembly language:
    Code (Text):

    mov eax, b      // fetch variable b from memory.
    mov ebx, c      // fetch variable c from memory.
    add eax, ebx   // add b + c.
    mov a, eax     // store the result in variable a.
     
    The next layer of abstraction is to turn the assembly code into opcodes, so each of the instructions above is translated into a different binary number. In addition, opcodes can have variable sizes on some platforms, so there is a table that maps each instruction name to the opcode format it needs. Generally, the more common instructions get the smaller encodings, but again it's platform dependent. ARM processors, for example, use fixed-size opcodes.

    Output of a .lst (listing) file:
    Code (Text):

    0000002A  A1 00000008 R     mov eax, b
    0000002F  8B 1D 0000000C R      mov ebx, c
    00000035  03 C3                 add eax, ebx
    00000037  A3 00000004 R     mov a, eax
     
    The first column is the offset in the code. The second column is the opcode. The third column is the offset of the variable. Notice that some instructions generate more than one byte of opcode.
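    In software terms, the assembler's job at this layer is essentially a table lookup from instruction form to opcode bytes. Here is a toy sketch of that idea in C++, reusing the byte values from the listing above (a real assembler also has to encode operands, addressing modes and so on):
    Code (Text):

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // Toy "opcode table": instruction form -> opcode bytes from the listing.
        std::map<std::string, std::vector<std::uint8_t>> opcodes = {
            {"mov eax, [mem]", {0xA1}},        // load eax from a memory address
            {"mov ebx, [mem]", {0x8B, 0x1D}},  // load ebx (extra ModRM byte)
            {"add eax, ebx",   {0x03, 0xC3}},  // register-to-register add
            {"mov [mem], eax", {0xA3}},        // store eax to a memory address
        };
        for (const auto& entry : opcodes) {
            std::cout << entry.first << " -> ";
            for (unsigned byte : entry.second)
                std::cout << std::hex << std::uppercase << byte << ' ';
            std::cout << '\n';
        }
    }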


    The next step down is to look at the architecture itself. For example, the system will most likely use a pipeline, which breaks the process of fetching, decoding, and executing instructions into multiple overlapping stages.
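    The point of a pipeline is that several instructions are in flight at once. A toy simulation (real hardware would be described in an HDL, not C++) of a three-stage pipeline running the four instructions above:
    Code (Text):

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // The four instructions from the assembly example above.
        std::vector<std::string> program = {"mov eax, b", "mov ebx, c",
                                            "add eax, ebx", "mov a, eax"};
        const char* stage_name[3] = {"fetch", "decode", "execute"};
        const int stages = 3;

        // Instruction i is fetched in cycle i, decoded in cycle i+1 and
        // executed in cycle i+2, so the stages overlap.
        int total_cycles = int(program.size()) + stages - 1;
        for (int cycle = 0; cycle < total_cycles; ++cycle) {
            std::cout << "cycle " << cycle << ":";
            for (int s = 0; s < stages; ++s) {
                int i = cycle - s;   // which instruction occupies stage s now
                if (i >= 0 && i < int(program.size()))
                    std::cout << "  " << stage_name[s] << "(" << program[i] << ")";
            }
            std::cout << "\n";
        }
    }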

    The next step down is to look at digital circuit design.

    An adder is a good, simple example of circuit design to start with.
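    For instance, a one-bit full adder is just a handful of gates, and chaining eight of them gives an 8-bit adder. Here is that structure sketched in C++ (in hardware the same thing is wired up out of transistors):
    Code (Text):

    #include <cstdint>
    #include <iostream>

    // One-bit full adder built from AND, OR and XOR gates:
    //   sum       = a XOR b XOR carry_in
    //   carry_out = (a AND b) OR (carry_in AND (a XOR b))
    void full_adder(bool a, bool b, bool carry_in, bool& sum, bool& carry_out) {
        bool a_xor_b = a != b;                          // XOR gate
        sum       = a_xor_b != carry_in;                // XOR gate
        carry_out = (a && b) || (a_xor_b && carry_in);  // two ANDs and an OR
    }

    // Chain 8 full adders (a ripple-carry adder) to add two bytes.
    std::uint8_t add8(std::uint8_t x, std::uint8_t y) {
        std::uint8_t result = 0;
        bool carry = false;
        for (int bit = 0; bit < 8; ++bit) {
            bool sum;
            full_adder((x >> bit) & 1, (y >> bit) & 1, carry, sum, carry);
            if (sum)
                result |= std::uint8_t(1u << bit);
        }
        return result;
    }

    int main() {
        std::cout << int(add8(23, 42)) << "\n";   // prints 65
    }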
     
  7. Jun 18, 2014 #6

    harborsparrow

    User Avatar
    Gold Member

  8. Jun 18, 2014 #7

    rcgldr

    User Avatar
    Homework Helper

    Once at machine language, there are various methods to decode and execute instructions. One method used on some old 16-bit minicomputers was to use some number of bits from the instruction data to index into a "bit per function" table. Say the processor has 79 possible operations and 49 nops, represented in 7 bits of an instruction. The processor reads an instruction and uses the 7 bits to index into a table that is 80 bits wide. Only a single bit in each table entry is set, and the bit that is set triggers a specific operation (load, add, subtract, nop, ... ). A similar decoding process is used for most processors, using combinations of multiple table lookups and/or decoders and/or demultiplexers and/or ... .
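    That scheme is easy to mock up in software. A sketch of the one-hot decode table in C++ (the particular bit assignments here are made up; the shape of the table is what matters):
    Code (Text):

    #include <bitset>
    #include <cstdint>
    #include <iostream>

    const int kNumOps = 80;                    // 79 real operations + nop
    std::bitset<kNumOps> decode_table[128];    // one entry per 7-bit pattern

    void build_table() {
        // Made-up assignment: patterns 0..78 are the real operations,
        // patterns 79..127 (the 49 unused ones) all map to the nop bit.
        for (int pattern = 0; pattern < 128; ++pattern)
            decode_table[pattern].set(pattern < 79 ? pattern : 79);
    }

    void execute(std::uint8_t instruction) {
        const std::bitset<kNumOps>& entry = decode_table[instruction & 0x7F];
        // Exactly one of these fires - in hardware, one control line per bit.
        if (entry[0])  std::cout << "load\n";
        if (entry[1])  std::cout << "add\n";
        if (entry[2])  std::cout << "subtract\n";
        if (entry[79]) std::cout << "nop\n";
    }

    int main() {
        build_table();
        execute(1);     // prints "add"
        execute(100);   // prints "nop"
    }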
     