How programming interacts with transistors

In summary: we break a formula down into the simple operations of addition, subtraction, and multiplication, feed them to the calculator one keypress at a time, and carry each intermediate result forward until we have the final answer. This is the basic idea of how a computer works: a program is a series of bits telling a fixed circuit which of its few fundamental operations to perform, in what order, and on what data. Layer upon layer of such encodings (instructions, drivers, operating systems, compilers) is what turns 1s and 0s into software as complex as Photoshop or a video game.
  • #1
Niaboc67
Forgive the broadness of the question. I know this is a huge concept which entire books have been written about. I am an amateur computer programmer entering my first object-oriented class soon and want to get some things out of the way. When I write lines of code, how does the computer know what it all means? Essentially it's a series of ones and zeros interacting with millions of transistors, right? But how is it possible for ones and zeros to create something as complex as, say, Photoshop or a video game such as Minecraft or RollerCoaster Tycoon? Things I've been able to make are simple calculators or just simple data programs, and even with those I don't fully understand what's going on at the fundamental level of the computer. How is it possible that adding and subtracting ones and zeros can create the complexity we see in front of us?

Thank you
 
  • #2
What you see is the cumulative effort of previous software that was written to handle very simple tasks such as "light the pixel at point (x, y) with the color green".
 
  • #3
Hmmm, I've heard that. Could you describe the foundation it's built upon?
 
  • #4
A computer in its simplest form is composed of hardware and software. The transistors, hard drives, monitors, etc. are the hardware. In order to interact with a piece of hardware, there will typically be driver software that ties the hardware to common application programming interfaces (APIs). The APIs can then be used by the OS. Object-oriented code like Java, C++, etc. can then interact with the APIs that the OS provides.

Note also that the OO code you write isn't in 0s-and-1s form. When you compile the code, it's in a form that other software can use to talk to the OS API interfaces. Your code doesn't actually end up in a 1s-and-0s format until it gets past the driver software.
 
  • #5
Interesting. When the computer is compiling the code, what form is it in to interact/communicate with the API or OS interface components? I still don't seem to be comprehending how the parts of a computer can create what I see in front of my eyes.
 
  • #6
You're asking a question that's really important and formative for a computer scientist to ask. At the lowest level are a few layers of polysilicon and insulating material (CMOS), forming very tiny logic gates. With enough of these in a digital circuit, you can calculate anything, to any precision (depending on how many gates you wish to use). It's fun and educational to design logic circuits using a circuit simulator to perform useful computational tasks. Look up 'ripple-carry adder' and 'two's complement' at your leisure.
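If it helps to see the gate level in code rather than silicon, here is a minimal sketch in Python (the function names are mine, purely for illustration) of a ripple-carry adder built only from AND/OR/XOR, the same Boolean operations the physical gates implement:

```python
# A minimal sketch, not any particular chip: an 8-bit ripple-carry adder
# wired from the Boolean gates AND, OR, XOR.

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in                         # XOR gives the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # how the carry propagates
    return s, carry_out

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry  # the final carry doubles as an overflow flag

# Example: 5 + 3 as 8-bit numbers, least significant bit first
five  = [1, 0, 1, 0, 0, 0, 0, 0]
three = [1, 1, 0, 0, 0, 0, 0, 0]
print(ripple_carry_add(five, three))  # ([0, 0, 0, 1, 0, 0, 0, 0], 0) i.e. 8
```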

The main problem with that is that once you build a circuit physically, you cannot change its function without starting from scratch. So you build one big digital logic circuit (a set of Boolean equations, mathematically), and have it do a few important, fundamental operations (addition, multiplication, square root, division) on a bit size you pre-define. This is an ALU, an arithmetic logic unit.

Assume you're only interested in calculating with 8-bit numbers. You specify 2 bits to choose a function (addition, subtraction, multiplication, or division), and 16 bits to input the two operands. You get an output of 8 bits. (Optionally, you can specify 'flags': extra bits of output that tell you if the result was too big to fit in 8 bits.)
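As a rough software model of that ALU (the opcode assignments below are arbitrary choices for illustration, not from any real chip), you could picture something like:

```python
# Sketch of the 8-bit ALU described above: a 2-bit function code selects
# one of four operations on two 8-bit operands, and a flag reports when
# the true result did not fit in 8 bits.

def alu8(op, a, b):
    """op: 2-bit function code; a, b: operands in range 0..255.
    Returns (8-bit result, overflow_flag)."""
    if op == 0b00:
        full = a + b
    elif op == 0b01:
        full = a - b
    elif op == 0b10:
        full = a * b
    else:  # 0b11
        full = a // b if b != 0 else 0   # real hardware would raise a flag here
    result = full & 0xFF                 # keep only the low 8 bits
    overflow = 1 if full != result else 0
    return result, overflow

print(alu8(0b10, 20, 13))  # (4, 1): 260 doesn't fit in 8 bits
print(alu8(0b00, 20, 13))  # (33, 0)
```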

You now have a calculator, completely made out of little atoms of semiconducting material. The problem is, it's very tedious to use it for large formulas, with many operations. You want to be able to specify a series of bits that represent a mathematical function in terms of your four fundamental operations, and have your circuit give the output given a specified input.

For the sake of concreteness, we will decide to implement the function f(x,y) = (x+1)(y-1)

Now, remember what we're doing. We're going to be effectively inputting the function as a series of bits, as well as inputting the two 8-bit operands. And we will be returned 8 bits which tell us the result of our calculation. We will have the series of bits which define our function be in a convenient form: in terms of the operations we would call the ALU for.

Consider how you would calculate f(x, y) on a simple four-function calculator. Here are the likely steps you would take:
1. input x into the calculator, whatever it may be
2. press +
3. input 1
4. press = and write down the result (or remember it somehow)
5. input y
6. press -
7. input 1
8. press *
9. input the earlier result
10. press = and read off the answer

You see, any simple equation, and most complex ones, can be put iteratively in terms of simpler functions. We decide on a certain encoding of bits to define a function, and build complicated infrastructure to handle that.
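To make that encoding idea concrete, here is a toy sketch in Python - the instruction format and register names are invented purely for illustration - in which f(x, y) = (x + 1)(y - 1) becomes a short list of fundamental operations executed one after another:

```python
# A toy encoding: each instruction is (function_code, source_a, source_b,
# destination), and the "registers" are just named 8-bit storage slots.

ADD, SUB, MUL = 0b00, 0b01, 0b10
OPS = {ADD: lambda a, b: a + b,
       SUB: lambda a, b: a - b,
       MUL: lambda a, b: a * b}

# f(x, y) = (x + 1) * (y - 1), broken into fundamental operations
program = [
    (ADD, 'x', 'one', 't1'),   # t1 = x + 1
    (SUB, 'y', 'one', 't2'),   # t2 = y - 1
    (MUL, 't1', 't2', 'out'),  # out = t1 * t2
]

def run(program, x, y):
    regs = {'x': x, 'y': y, 'one': 1, 't1': 0, 't2': 0, 'out': 0}
    for code, a, b, dest in program:
        regs[dest] = OPS[code](regs[a], regs[b]) & 0xFF  # keep 8 bits
    return regs['out']

print(run(program, 4, 3))  # (4 + 1) * (3 - 1) = 10
```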

I'm hand-waving a bit here, because it's difficult to explain the operation of CPUs in a simple and elegant manner, and I don't know everything there is to know. There are register banks, data buses, the ALU, the control unit. There are multiple modules which interlock to perform the task in a loop. And there are different ways of encoding functions into ALU operations and storing the results. Some models use a stack, some use registers.

Let me note, though: It's not _that_ complex. People have designed CPUs in Conway's Game of Life.

Anyway, after doing quite a bit of work, we have a circuit such that you can define a series of operations and the input to that function, and get returned an output. To expand the range of functions we can calculate, we include the "JMP" operation. When the circuit gets to this, it doesn't use the ALU. It goes back to where the JMP specifies it should go, and continues execution. There are also JNE and JG, conditional jumps: "jump if not equal" and "jump if greater than" respectively. How many sorts of conditional jumps you implement is up to you.
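Extending the same toy machine (again, an invented encoding, not a real instruction set), a JNE-style conditional jump is enough to express a loop, for example summing 1 through n:

```python
# Toy machine with jumps: ('JMP', target) jumps unconditionally;
# ('JNE', a, b, target) jumps to instruction index `target` if
# register a != register b; anything else is an ALU operation.

def run_with_jumps(program, regs):
    pc = 0                                  # program counter
    while pc < len(program):
        instr = program[pc]
        if instr[0] == 'JMP':
            pc = instr[1]
        elif instr[0] == 'JNE':
            _, a, b, target = instr
            pc = target if regs[a] != regs[b] else pc + 1
        else:                               # ('ADD'/'SUB', a, b, dest)
            op, a, b, dest = instr
            fn = {'ADD': lambda x, y: x + y, 'SUB': lambda x, y: x - y}[op]
            regs[dest] = fn(regs[a], regs[b]) & 0xFF
            pc += 1
    return regs

# Sum 1 + 2 + ... + n by looping until the counter reaches n
program = [
    ('ADD', 'i', 'one', 'i'),      # 0: i = i + 1
    ('ADD', 'sum', 'i', 'sum'),    # 1: sum = sum + i
    ('JNE', 'i', 'n', 0),          # 2: if i != n, jump back to 0
]
print(run_with_jumps(program, {'i': 0, 'one': 1, 'sum': 0, 'n': 5})['sum'])  # 15
```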

Suddenly, with the addition of conditional jumps, we have an immensely rich class of functions that modern mathematics has not completely characterized. In general, it is literally impossible to tell whether a given series of instructions will enter an infinite loop (this is the halting problem). We can also write functions which do halt, but would finish executing only after the lifetime of the universe (see the Ackermann function).
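For a taste of how innocently such a function can be written down, here is the Ackermann function in Python; it always halts, but its values explode so fast that anything much beyond the inputs shown is hopeless to compute directly:

```python
# The Ackermann function: total (it always halts), yet it grows faster
# than any function built from ordinary loops of fixed nesting depth.

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19,729 decimal digits; don't try it naively.
```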

We now have an immensely powerful mathematical tool, but we also have a problem: only a specialized person with weeks of instruction could use it, let alone understand it. We likely have to specify the input via switches, requiring us to write programs in the most painstaking way possible. We likely read the output as a binary series of lights.

We could, at least, make it easier to read the output. We can devise a small circuit to translate an 8-bit number into three digits in 7-segment-display format, thus outputting the number in decimal rather than binary. (The construction of such a circuit is easy for small numbers, but grows complex for 32-bit numbers.)
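In software terms (the real thing would be pure combinational logic, and the segment patterns below are just the common convention), that decoder amounts to splitting the value into decimal digits and looking up which segments to light:

```python
# Sketch of the display decoder: split an 8-bit value (0..255) into three
# decimal digits, then look up the 7-bit "abcdefg" segment pattern for each.

SEGMENTS = {
    0: 0b1111110, 1: 0b0110000, 2: 0b1101101, 3: 0b1111001, 4: 0b0110011,
    5: 0b1011011, 6: 0b1011111, 7: 0b1110000, 8: 0b1111111, 9: 0b1111011,
}

def decode_to_display(value):
    """Return the three segment patterns (hundreds, tens, ones)."""
    hundreds, rest = divmod(value, 100)
    tens, ones = divmod(rest, 10)
    return [SEGMENTS[d] for d in (hundreds, tens, ones)]

print([bin(p) for p in decode_to_display(173)])  # digits 1, 7, 3
```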

This kicks off a development of peripherals, culminating in character terminals (screens that are only capable of outputting a specified character on a grid), monochrome pixel-based displays, RGB displays, mice, etc. Each of these devices has hardware and/or software to support its function. Programs and drivers get so numerous that operating systems are developed.

These allow multiple programs to run on the same processor, independent of each other. This is quite a feat, and depends on assigning each program its own 'virtual address space' - but operating systems design is a monolithic subject in itself. Suffice it to say, the operating system makes available a certain set of functionality exposed through an API (Application Programming Interface). This is still true today - programmers who have good knowledge of the Windows API can perform amazing feats not possible with high-level programming alone.

Programs now are stored on hard drives, and aren't simply a line of ALU instructions. They have metadata defining when the program was made, what operating system it is intended for, what libraries (third-party programs) it requires, etc. Even understanding the hex dump of a Windows program today is a rare skill. Writing one manually, even more so.

And this finally brings us to the question: if programs are just lines of instructions, why the 100+ programming languages? Originally, they were made in the attempt to answer the questions "How do we write a program that writes a program?" and "How can a program modify another program?" The result was compilers, and their languages: FORTRAN, LISP, etc.

From then on, however, the focus of programming languages and compilers has been to make programming easier (at the cost of efficiency and understanding). We get paradigms like 'object-oriented programming', and things like that. As we go on, our distance from the true reality of the electrons flowing through those semiconductor sheets grows larger and larger.

It's funny you mention RollerCoaster Tycoon. It was programmed entirely by one person in assembly code. It had to be, for such a revolutionary (for its time) game to run on the computers of the day.

I hope I answered your question.

TL;DR: Asking 'how the computer understands my lines of code' is like asking 'How does a clock know what time it is?'
 
  • #7
ellipsis said:
TL;DR: Asking 'how the computer understands my lines of code' is like asking 'How does a clock know what time it is?'

:rolleyes:
 

1. How do transistors work in programming?

Transistors act as switches in programming, allowing for the manipulation of electrical signals and the execution of logical operations. They can be turned on or off, representing the binary values of 0 and 1, which are the building blocks of all computer programs.

2. What is the relationship between programming and transistors?

Programming and transistors have a symbiotic relationship - programming uses transistors to execute instructions and manipulate data, while transistors rely on programming to control their behavior and perform specific tasks.

3. How does the size of a transistor affect programming?

The size of a transistor is crucial in programming as it determines how many transistors can fit on a single chip, and therefore how powerful and efficient a computer can be. Smaller transistors allow for more compact and powerful devices, enabling more advanced programming capabilities.

4. Can programming affect the lifespan of transistors?

Indirectly, yes. Transistors degrade over time through heat and electrical stress rather than a fixed count of switching cycles, and complex or inefficient programs that keep a chip running hot and busy can accelerate that wear.

5. How has the evolution of transistors affected programming?

The evolution of transistors has greatly impacted programming, as advancements in transistor technology have allowed for faster and more efficient computing. This has led to the development of more complex and sophisticated programming languages and systems, enabling the creation of advanced software and applications.
