
How programming interacts with transistors

  1. Dec 12, 2014 #1
    Forgive the broadness of the question. I know this is a huge concept that entire books have been written about. I am an amateur computer programmer entering my first object-oriented class soon and want to get some things out of the way. When I write lines of code, how does the computer know what it all means? Essentially it's a series of ones and zeros interacting with millions of transistors, right? But how is it possible for ones and zeros to create something as complex as, say, Photoshop, or a video game such as Minecraft or RollerCoaster Tycoon? Things I've been able to make are simple calculators or simple data programs, and even with those I don't fully understand what's going on at the fundamental level of the computer. How is it possible that adding and subtracting ones and zeros can create the complexity we see in front of us?

    Thank you
  3. Dec 12, 2014 #2


    What you see is the cumulative effort of previous software that was written to handle very simple tasks such as "light pixel at point x y with the color green".
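    To make that concrete, here is a toy sketch (in Python, with invented names, not any real graphics API) of what a primitive like "light pixel at x, y with the color green" can look like one layer down: the screen is just a flat block of memory, and the helper computes the right offset into it.

```python
# A toy "framebuffer": the screen is a flat block of bytes, 3 per pixel
# (R, G, B). "Light pixel at (x, y) with a color" is just arithmetic on
# an offset into that memory. All names here are illustrative.

WIDTH, HEIGHT = 4, 3
framebuffer = bytearray(WIDTH * HEIGHT * 3)

def set_pixel(x, y, r, g, b):
    offset = (y * WIDTH + x) * 3
    framebuffer[offset:offset + 3] = bytes([r, g, b])

set_pixel(2, 1, 0, 255, 0)  # "light pixel at (2, 1) green"
print(framebuffer[(1 * WIDTH + 2) * 3 + 1])  # the G byte -> 255
```

    Every layer above this (drawing a line, a window, a whole game) is built by calling simpler routines like this one, many times over.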
  4. Dec 12, 2014 #3
    Hmmm, I've heard that. Could you describe the foundation it's built upon?
  5. Dec 12, 2014 #4



    A computer in its simplest form is composed of hardware and software. The transistors, hard drives, monitors, etc. are the hardware. To interact with a piece of hardware, there is typically driver software that ties the hardware to common application programming interfaces (APIs). The APIs can then be used by the OS. Object-oriented code in Java, C++, etc. can then interact with the APIs that the OS provides.

    Note also that the OO code you write isn't executed as-is. When you compile it, it is translated into a lower-level form (bytecode or native machine code, i.e. 1s and 0s) that other software, and ultimately the hardware, can execute. Your program's requests then reach the hardware by way of the OS APIs and the driver software.
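    You can actually peek one layer down yourself. As an illustration (using Python here, though the same idea applies to Java bytecode or C++ machine code), the standard-library `dis` module prints the lower-level instructions a line of source code compiles to:

```python
# Python compiles source to bytecode instructions before running it.
# The dis module from the standard library prints them out.
import dis

def add_one(x):
    return x + 1

dis.dis(add_one)
# Prints instructions such as LOAD_FAST and RETURN_VALUE (the exact
# names vary a little between Python versions) -- one step closer to
# what the machine actually executes.
```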
    Last edited: Dec 12, 2014
  6. Dec 12, 2014 #5
    Interesting. When the computer is compiling the code, what form is it in while it interacts/communicates with the API or OS interface components? I still don't seem to comprehend how the parts of a computer can create what I see in front of my eyes.
  7. Dec 12, 2014 #6
    You're asking a question that's really important and formative for a computer scientist to ask. On the lowest level are a few layers of polysilicon and insulating material (CMOS), forming very tiny logic gates. With enough of these in a digital circuit, you can calculate anything, to any precision (depending on how many gates you wish to use). It's fun and educational to design logic circuits in a circuit simulator to perform useful computational tasks. Look up 'ripple-carry adder' and 'two's complement' at your leisure.
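    To illustrate the ripple-carry adder just mentioned, here is a small sketch in Python: each "gate" is a one-line Boolean function, and an n-bit adder is just full adders chained so the carry ripples from the lowest bit to the highest.

```python
# A ripple-carry adder in Python. Each gate operation (^ is XOR,
# & is AND, | is OR) corresponds to a real CMOS logic gate.

def full_adder(a, b, carry_in):
    # sum bit = a XOR b XOR carry_in;
    # carry out = 1 when at least two of the inputs are 1
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists, least-significant bit first."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 5 + 3, as LSB-first bit lists: 101 -> [1, 0, 1], 011 -> [1, 1, 0]
print(ripple_carry_add([1, 0, 1], [1, 1, 0]))  # ([0, 0, 0], 1), i.e. 8
```

    The final carry bit popping out is exactly the overflow 'flag' real hardware reports when a result doesn't fit.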

    The main problem is that once you build a circuit in reality, you cannot change its function without starting from scratch. So you build one big digital logic circuit (mathematically, a set of Boolean equations) and have it do a few important, fundamental operations (addition, multiplication, square root, division) on a bit-size you pre-define. This is an ALU, an arithmetic logic unit.

    Assume you're only interested in calculating with 8-bit numbers. You specify 2 bits to choose a function (addition, subtraction, multiplication, or division) and 16 bits to input the two operands. You get an output of 8 bits. (Optionally, you can specify 'flags': extra bits of output that tell you, for example, whether the result was too big to fit in 8 bits.)
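    That ALU can be sketched in a few lines of Python (behaviorally only; details like signed numbers and division by zero are glossed over here):

```python
# A behavioral sketch of the 8-bit ALU described above: a 2-bit opcode
# selects one of four operations, operands and result are 8 bits, and an
# overflow "flag" reports when the true result did not fit.

def alu(opcode, a, b):
    """opcode: 0=add, 1=subtract, 2=multiply, 3=integer-divide."""
    if opcode == 0:
        full = a + b
    elif opcode == 1:
        full = a - b
    elif opcode == 2:
        full = a * b
    else:
        full = a // b  # division by zero is left undefined, as in hardware
    result = full & 0xFF              # keep only the low 8 bits
    overflow = not (0 <= full <= 0xFF)
    return result, overflow

print(alu(0, 200, 100))  # (44, True): 300 doesn't fit in 8 bits
print(alu(2, 12, 10))    # (120, False)
```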

    You now have a calculator, made entirely out of little atoms of semiconducting material. The problem is, it's very tedious to use for large formulas with many operations. You want to be able to specify a series of bits that represents a mathematical function in terms of your four fundamental operations, and have your circuit produce the output for a specified input.

    For the sake of concreteness, we will decide to implement the function f(x,y) = (x+1)(y-1)

    Now, remember what we're doing. We're effectively going to input the function as a series of bits, along with the two 8-bit operands, and we'll be returned 8 bits that tell us the result of our calculation. The series of bits that defines our function will be in a convenient form: in terms of the operations we would ask the ALU to perform.

    Consider how you would calculate f(x, y) on a simple four-function calculator. Here are the likely steps you would take:
    1. input x, whatever it may be
    2. press +
    3. input 1
    4. press = and write down the result (or remember it somehow)
    5. input y
    6. press -
    7. input 1
    8. press *
    9. input the earlier result
    10. press = to get the final answer

    You see, any simple equation, and most complex ones, can be expressed step by step in terms of simpler operations. We decide on a certain encoding of bits to define a function, and build complicated infrastructure to handle it.
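    The calculator steps above can be written out as exactly such an encoding. Here is a sketch in Python: the "program" is just data, a list of (operation, destination, source, source) tuples over named scratch registers. (This particular encoding is invented here purely for illustration.)

```python
# f(x, y) = (x + 1)(y - 1), encoded as a list of simple instructions.
# Each tuple is (operation, destination, source1, source2).

PROGRAM = [
    ("add", "t1", "x", "one"),   # t1 = x + 1
    ("sub", "t2", "y", "one"),   # t2 = y - 1
    ("mul", "out", "t1", "t2"),  # out = t1 * t2
]

def run_program(program, x, y):
    regs = {"x": x, "y": y, "one": 1}   # named scratch "registers"
    ops = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}
    for op, dest, s1, s2 in program:
        regs[dest] = ops[op](regs[s1], regs[s2])
    return regs["out"]

print(run_program(PROGRAM, 5, 3))  # (5+1)*(3-1) = 12
```

    Replace the string names with fixed-width bit patterns and you essentially have machine code.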

    I'm hand-waving a bit here, because it's difficult to explain the operation of CPUs in a simple and elegant manner, and I don't know everything there is to know. There are register banks, data buses, the ALU, the control unit. Multiple modules interlock to perform the task in a loop. And there are different ways of encoding functions into ALU operations and storing the results: some designs use a stack, some use registers.

    Let me note, though: It's not _that_ complex. People have designed CPUs in Conway's Game of Life.

    Anyway, after quite a bit of work, we have a circuit such that you can define a series of operations and the input to that function, and get back an output. To expand the set of functions we can compute, we include a "JMP" operation. When the circuit reaches this, it doesn't use the ALU; it goes back to wherever the JMP specifies and continues execution. There are also JNE and JG, conditional jumps: "jump if not equal" and "jump if greater than" respectively. How many sorts of conditional jumps you implement is up to you.
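    Here is a minimal sketch (again in Python, with a made-up three-instruction machine) of what a conditional jump buys you: a loop. A straight-line program without jumps could not count down from an arbitrary n.

```python
# A toy machine with a conditional jump. "pc" is the program counter:
# the index of the next instruction. JNE jumps back while acc != 0.

def run_countdown(n):
    acc = n
    subs = 0  # how many times the loop body ran
    pc = 0
    program = [
        ("SUB", 1),      # 0: acc = acc - 1
        ("JNE", 0),      # 1: if acc != 0, jump back to instruction 0
        ("HALT", None),  # 2: stop
    ]
    while True:
        op, arg = program[pc]
        if op == "SUB":
            acc -= arg
            subs += 1
            pc += 1
        elif op == "JNE":
            pc = arg if acc != 0 else pc + 1
        elif op == "HALT":
            return subs

print(run_countdown(4))  # 4: the SUB instruction executed four times
```

    Note that the length of the computation now depends on the input, which a fixed chain of ALU calls could never do.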

    Suddenly, with the addition of conditional jumps, we have an immensely rich class of functions that modern mathematics has not completely characterized. To tell, in general, whether a given series of instructions will enter an infinite loop is provably impossible (the halting problem). We can also write functions which do halt, but only after longer than the life of the universe (see the Ackermann function).
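    The Ackermann function just mentioned fits in a few lines, which makes its explosive growth all the more striking:

```python
# The Ackermann function: always terminates, but grows so fast that
# even tiny inputs are far beyond any physical computation.

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19,729 decimal digits; don't try (4, 3).
```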

    We now have an immensely powerful mathematical tool, but we also have a problem: only a specialist with weeks of instruction could use it, let alone understand it. We likely have to specify the input via switches, writing programs in the harshest possible way, and read the output as a row of binary lights.

    We could, at least, make the output easier to read. We can devise a small circuit to translate an 8-bit number into three 7-segment displays, thus outputting the number in decimal rather than binary. (Constructing such a circuit is easy for small numbers, but grows complex for 32-bit numbers.)
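    Behaviorally, that translation circuit does something like the following sketch: split the value into decimal digits, then look each digit up in a table of segment patterns (one bit per segment a through g; the patterns below are the conventional ones).

```python
# 8-bit value -> three 7-segment patterns. Bit order in each pattern is
# gfedcba: e.g. 0b0111111 lights segments a-f, which draws a "0".

SEGMENTS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def to_display(value):
    """Return segment patterns for (hundreds, tens, ones). Since an
    8-bit value is at most 255, three digits always suffice."""
    hundreds, rest = divmod(value, 100)
    tens, ones = divmod(rest, 10)
    return [SEGMENTS[hundreds], SEGMENTS[tens], SEGMENTS[ones]]

print(to_display(255))  # patterns for the digits 2, 5, 5
```

    In hardware this lookup table would itself be a small combinational circuit (a "BCD to 7-segment decoder").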

    This kicks off a development of peripherals, culminating in character terminals (screens only capable of outputting specified characters on a grid), monochrome pixel-based displays, RGB displays, mice, etc. Each of these devices has hardware and/or software to support its function. Programs and drivers get so numerous that operating systems are developed.

    These allow multiple programs to run on the same processor, independent of each other. This is quite a feat, and depends on assigning each program its own 'virtual address space'; operating systems design is a monolithic subject in itself. Suffice it to say, the operating system makes a certain set of functionality available through an API (Application Programming Interface). This is still true today: programmers with good knowledge of the Windows API can perform feats not possible from high-level code alone.

    Programs now are stored on hard drives and aren't simply a line of ALU instructions. They have metadata defining when the program was made, what operating system it is intended for, what libraries (third-party programs) it requires, etc. Even reading the hex dump of a Windows program today is a rare skill; writing one manually, even more so.

    And this finally raises the question: if programs are just lines of instructions, why the 100+ programming languages? Originally they arose from attempts to answer questions like "How do we write a program that writes a program?" and "How can a program modify another program?" The result was compilers and their languages: FORTRAN, LISP, etc.

    From then on, however, the focus of programming languages and compilers has been to make programming easier (at the cost of efficiency and low-level understanding). We get 'object-oriented programming' and the like. As we go on, our distance from the true reality of the electrons flowing through those semiconductor sheets grows larger and larger.

    It's funny you mention RollerCoaster Tycoon. It was programmed almost entirely by one person, in assembly code. It had to be, for such a revolutionary (for its time) game to run on the computers of the day.

    I hope I answered your question.

    TL;DR: Asking 'how the computer understands my lines of code' is like asking 'How does a clock know what time it is?'
  8. Dec 14, 2014 #7