AdamF said:Is this thread appropriate for some basic computer science hardware and architecture questions, or should the posts in here be strictly related to specific programming languages?
Mark44 said:If the question is computer-sciencey, this section is probably as good as any other.
BINGO!

AdamF said:The way that I understand computer science at this point is that there are really two segments:
1 - How do I make the computer do what I want it to do?
2 - How would the computer be able to do what I want to make it do?
After you learn the syntax of some programming language, you essentially "explain" to the computer what you want it to do.

AdamF said:1 - How do I make the computer do what I want it to do?
Now you're getting into the architecture of the computer -- its CPU (central processing unit), memory (registers, RAM, cache, and disk storage), I/O, and so on. Computer languages vary a lot in how much or how little they insulate you from the hardware itself. At the lowest levels are assembly languages, with different languages for different architectures. A bit higher level is C, and higher yet are C++, Java, C#, Python, and many others.

AdamF said:2 - How would the computer be able to do what I want to make it do?
This is somewhat confused. The term "register" refers to a limited number of named memory locations inside the CPU. The term "registry" is specific to Windows (I believe), and refers to a block of memory where lots of internal stuff is kept track of.

AdamF said:Okay, so here's some of what I'm trying to understand at the moment:
Let's say you want to command the computer to "Move the contents of Registry address N to Registry address M" and the way that you can do this is by typing the command directly into the keyboard.
Well, no. It's only going to carry out the kinds of commands that it understands.

AdamF said:You turn the computer on, it goes to a blank (black) screen with a cursor ready for you to type in your command. You type in the command above, and the computer obeys.
There are "commands" (normally called instructions) that can read the contents of memory locations, copy the contents between registers, copy the contents between registers and memory, read values from an input device or send values to an output device, and lots of other operations. All data on a computer is stored in binary form, as strings of 0s and 1s. This includes numbers, characters, images, whatever.

AdamF said:Here's how I currently understand what happens in that situation -- I'm hoping somebody can point out whether this is accurate or where it isn't accurate:
- The "contents" of Registry Address N are sitting there, existing as some kind of "state" of the computer hardware; this "state" is related to the output of a series of circuits which use various logic gates to produce a binary encoding. (I'm not sure if "encoding" is the right word to describe what the binary is doing, but the circuits are basically creating the output of 0's and 1's which somehow encapsulate the information.)
AdamF said:- Next, I type my command on the screen. This is where I'm a little less clear on what happens -- Did the people who made the computer literally wire the circuitry from the keyboard to the registry such that the combination of symbols in the command "Move the contents of Registry address N to Registry address M" would use the wiring to communicate with the registry electronically?
No, that's not at all how things work. Things are complicated enough that I can't explain all of what you're asking here in an internet forum. Take a look at any of the links that @jedishrfu provided, which should help with some of your misconceptions.

AdamF said:In other words, the people who made the computer used the laws of physics to rig the whole thing up this way from the beginning, and then later just tacked on things like the letters and numbers of the keyboard and the monitor to make it make sense to me as the user?
The computer maker basically had to take into account all of the things they thought it would be reasonable for me to expect to do, and then build the machine with circuits and tech in order for me to be able to do those things in a way that makes sense to me?
AdamF said:The place where I'm kind of getting stuck is where the interface between the user commands and the actual technology really is, and what it means for the information to be "encoded in 1's and 0's", but I'm starting to kind of get it.
Is this all kind of a way more complex version of how a tribe would "encode" information in smoke signals by getting together and deciding "Okay, three smoke rings means X and five smoke rings means Y" and then it fell back on the user to interpret the smoke when they saw it? (In this case, the "smoke signals" would be the output in the form of whatever appears on the GUI.)
It's not "trained" to understand commands -- it's designed to execute them. At the lowest levels, the commands it responds to are those to move data here and there, load a register with a value from memory, store the value in a register in memory, add or subtract two values, and a lot more.

AdamF said:-- I guess I'm having the most trouble understanding first how the computer is trained to understand various commands (and then from there how it executes various commands from my own head to my fingers/voice and then to the computer and then back to my eyes/ears, but I don't want to get ahead of myself...)
I think your idea is accurate, but I'm not sure that it's helpful. In addition to the hardware in a computer (corresponding to the beads, frame, and wires of an abacus), there is the software, which exists at several levels: from the BIOS (basic input/output system) that lives in ROM chips, to the operating system, which also exists in multiple levels, to driver software, to user programs. This is an overly simplistic explanation, but I hope you get the idea.

AdamF said:The hardware is the beads, the frame, the string, and the other physical components. (Okay, I'm confident this part is correct.)
The software would be (example) the "addition" program which uses the proper arithmetic algorithm, and the algorithm is performed using the hardware for the purpose of adding two numbers together, so that in this case the addition program/"software" for the Abacus actually lives in the user's head, and not inside the computer itself?
jedishrfu said:In order to understand a computer at its most basic level, you really need to take a course or even several courses on how it works. The concepts are easy but it’s even easier to get fooled into believing it works in some other fashion.
There’s the CPU and its opcodes. The CPU is an incredibly complex chip composed of hundreds of thousands of gates. Each gate can represent different kinds of logic operations. One such example is a flip-flop, which can be used to store a bit or toggle a bit.
Start with the Khan Academy and then start asking questions; don’t try to theorize things until you’ve almost got it.
https://www.khanacademy.org/computi...my-and-codeorg-introducing-how-computers-work
The simplest computer is a Turing machine and there is an excellent working model in LEGO on YouTube if you search for it. Watch how it reads ops from its tape and then modifies its tape. The tape is the memory and the machinery around it is the CPU.
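The tape-and-head mechanism jedishrfu describes can be sketched in a few lines of Python. This is a toy machine of my own devising (not the LEGO build): the rule table plays the role of the CPU's internal wiring, and a dictionary plays the role of the tape. This particular machine increments a binary number written on the tape.

```python
# A minimal Turing machine: tape = memory, rule table + head = "CPU".
# The rule table below (my own, for illustration) increments a binary number.

def run_turing_machine(tape, rules, state="start", head=0, blank="_", halt="halt", max_steps=1000):
    """Repeatedly: read the symbol under the head, look up (state, symbol)
    in the rule table, write a symbol, move the head, change state."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    # Read the tape back as a string, trimming blank cells.
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Walk right to the end of the number, then carry 1s to 0s leftward
# until a 0 (or a blank) absorbs the carry.
RULES = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", RULES))  # prints 1100 (11 + 1 = 12)
```

Watching the LEGO machine after tracing this by hand makes the "the machinery around the tape is the CPU" remark very concrete.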
Here's a figure that represents the logic gates that might be involved in ANDing two bits.

AdamF said:Data types and programming languages seem far more intuitive to me -- it's understanding how the commands interact on a physical level where most of my questions are at this point; it's probably because I need to learn about circuits and electricity.
A better analogy is a bank of light switches, with some up (on) and some down (off). A switch in the on position could indicate 1 and a switch in the off position could indicate 0. The computer's memory, these days, consists of billions of these switches.

AdamF said:Is it accurate to say that when I hear that information is encoded in transistors and capacitors, this is kind of like how the gears of a mechanical watch encode the time, as in we design the machine using the laws of physics to give us back information in such a way that our human brains can make meaning out of the output from the machine?
Mark44 said:Here's a figure that represents the logic gates that might be involved in ANDing two bits.
[Figure (attachment 240986): logic gates computing either the AND or the OR of two bits, selected by an Operation line.]
The two bits come in on the lines marked a and b. A signal comes in on the line marked Operation that controls which of the two operations to perform -- an AND of the two bits or an OR of the two bits. If the Operation signal is AND and both bits are 1, the result is 1. If either or both bits are 0, the result is 0. The figure also contains the logic gates to perform the OR of two bits.
A CPU consists of a large number of logic gates and controller hardware.
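The circuit Mark44 describes can also be expressed in software. In the sketch below (function names are mine, not labels from the figure), AND, OR, and NOT are primitive gates, and a gate-built multiplexer lets the Operation signal select which gate's output becomes the Result -- the same idea as one "slice" of an ALU.

```python
# Primitive gates, modeled on bits 0/1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def mux(sel, x, y):
    """2-to-1 multiplexer built only from gates: picks x when sel=0, y when sel=1."""
    return OR(AND(NOT(sel), x), AND(sel, y))

def alu_bit(a, b, operation):
    """One ALU slice: operation=0 -> a AND b, operation=1 -> a OR b.
    Both gates always compute; the mux merely selects one result,
    just as in the figure's Operation line."""
    return mux(operation, AND(a, b), OR(a, b))

# Print the full truth table for both operations.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", alu_bit(a, b, 0), "OR:", alu_bit(a, b, 1))
```

Chaining 32 or 64 such slices side by side (plus carry logic for addition) is, roughly, how a real ALU handles whole words at once.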
AdamF said:how does the computer take electricity and use some of that electricity to store numbers, some to store sound, some to store images, etc...?
It stores the 'voltage' that represents the result of the logic gates on capacitors. The main memory is composed of very tiny capacitors and a whole bunch of transistors; the transistors in main memory implement three different functions.

AdamF said:For example, how does the computer take electricity and use some of that electricity to store numbers, some to store sound, some to store images, etc...?
Right, I phrased my question poorly. The computer translates all data sources to bits, and then spits them back out again in other forms (as per the design that the human programmer has in mind), as far as I understand. What I was meaning to ask was "How are various sources of information translated to one form which the machine understands, and then back again?"

PeterDonis said:These are all one question, because the computer does not store numbers, sound, or images; it just stores bits. Whether those bits encode numbers, sound, or images depends on how those bits interact with other bits (the bits in the programs that process numbers or sound or images), which ultimately depends on the interpretation that human programmers wanted to put on particular bits. But they're all just bits to the computer, and it stores them all the same way.
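PeterDonis's point -- the computer stores only bits, and "number", "text", or "image" are interpretations imposed by programs -- can be demonstrated directly. The four bytes below are stored identically no matter which way a program later chooses to read them.

```python
# The same 32 bits, read three different ways.
bits = bytes([72, 105, 33, 0])

# Interpretation 1: text (ASCII decoding of the first three bytes).
print(bits[:3].decode("ascii"))        # prints Hi!

# Interpretation 2: one 32-bit unsigned integer, little-endian.
print(int.from_bytes(bits, "little"))  # prints 2189640

# Interpretation 3: four grayscale pixel brightness values.
print(list(bits))                      # prints [72, 105, 33, 0]
```

Nothing in the stored bytes says which interpretation is "right"; only the program reading them decides.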
AdamF said:I really don't understand what happens between the time that I type the thought in and the time that the text is reflected back to me on the screen.
AdamF said:Is this where things like the way that the keyboard is wired into the memory and the memory is wired into the computer monitor come in?
Well, that depends on how deep a scratch you want to dig.

AdamF said:Okay, so to really learn about this to the point where I can understand why every single decision on a particular piece of architecture was made (or the entire machine, for that matter) and exactly how I'd go about reconstructing each piece from scratch, I'll need to become extremely knowledgeable on the relevant aspects of Classical E-M and some Quantum Chemistry?
PeterDonis said:Ah, ok. See below.
Yes, but the "wiring" is a lot more complicated, at least in all modern computers (where "modern" here means "since about 1970 or so").
As far as "translating" things to and from bits is concerned, though, that happens very close to your typing and seeing things on your screen. Your keyboard generates a sequence of bits for each key you type and sends them to the computer; and your computer's monitor translates sequences of bits into what appears on your screen. Everything else in between is bits, and you can think about them abstractly without even having to know the details of how they are physically represented in things like transistors and capacitors and currents in wires.

And it's virtually impossible for a single human mind to grasp what's going on in a modern computer without thinking about the bits abstractly; things at the level of transistors and capacitors and currents in wires are way, way too complicated to be able to comprehend at that level while also comprehending how all those things connect to the keystrokes you type and the things you see on your screen. You have to abstract away the actual physical hardware and focus on the bits (and indeed on things at even higher levels of abstraction than that) if you want to understand what's going on at the level of keystrokes and images on screens.
Tom.G said:Well, that depends on how deep a scratch you want to dig.
You don't really need to get Quantum deep if you can accept that:
- a transistor conducts current proportional to its input <current or voltage> (depends on whether it is a bipolar or field-effect transistor, usually field-effect these days)
- a capacitor can hold a charge (as evidenced by the fact that you can measure a voltage on it)
A rough concept of a resistor and a diode would be useful if you get down to the transistor circuit level.
Cheers,
Tom
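Tom.G's two facts are enough to build a toy model of a single DRAM cell: a capacitor holds a readable voltage, and a transistor acts as the switch that connects it to the rest of the circuit. The numbers below (leak rate, read threshold) are invented for illustration, but the behavior they model is real: charge leaks away, which is why DRAM has to be refreshed periodically.

```python
# Toy model of one DRAM memory cell: a tiny capacitor behind a
# transistor switch. Leak rate and threshold are made-up numbers.

class DramCell:
    THRESHOLD = 0.5  # read as 1 above this voltage, 0 below (arbitrary)

    def __init__(self):
        self.voltage = 0.0  # charge held on the tiny capacitor

    def write(self, bit):
        # The access transistor switches on and the bit line
        # charges (or drains) the capacitor.
        self.voltage = 1.0 if bit else 0.0

    def leak(self, steps=1):
        # Between refreshes, charge slowly bleeds away.
        for _ in range(steps):
            self.voltage *= 0.9

    def read(self):
        bit = 1 if self.voltage > self.THRESHOLD else 0
        # Real DRAM reads drain the capacitor and the sense amplifier
        # writes the value back; model that by rewriting the bit we read.
        self.write(bit)
        return bit

cell = DramCell()
cell.write(1)
cell.leak(3)        # 1.0 -> 0.729, still above the threshold
print(cell.read())  # prints 1 (and the read restored full charge)
cell.leak(10)       # but go too long without a refresh...
print(cell.read())  # prints 0 -- the stored 1 has leaked away
```

The refresh controller in a real memory system exists precisely to run that read-and-rewrite cycle on every cell, many times per second.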
AdamF said:The specific sequence of bits which are generated by the keyboard are somehow related to the ASCII and Unicode stuff that I've been reading about in my Intro to Computer Science text?
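They are related, though there is a step in between: the keyboard hardware actually sends scancodes identifying which keys were pressed, and the operating system maps those to characters. Once mapped, each character is just a number, and ASCII/Unicode are the shared agreements about which number means which character -- the "smoke signal" convention from earlier in the thread. A quick sketch of that character-to-bits mapping:

```python
# Each character corresponds to a number (its code point), and that
# number is what the machine actually stores, as bits.
for ch in "Hi!":
    code = ord(ch)                      # Unicode code point (same as ASCII here)
    print(ch, code, format(code, "08b"))

# The display side runs the convention in reverse: bits back to a character.
print(chr(0b01001000))  # prints H
```

So 'H' is 72 is 01001000 -- three names for the same stored pattern, under one agreed-upon convention.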
AdamF said:The bits, which are represented by voltage and subsequent current through the wiring, somehow reach the monitor in such a way that utilizes the design of all of this to interact with materials which display light in the appropriate place when the electricity hits it, or something of this nature?
AdamF said:My objective is to reach the point where I can de-construct today's PC and understand exactly why every decision was made the way it was on the level of exactly what the creator was thinking when they made that decision, and then understand how to construct it myself if I needed to, from sourcing the appropriate materials, to being able to look at any line in the code of the Operating System and understand what it's doing, to understanding why the circuits are built the way they are, etc...

As @PeterDonis implied, this isn't a practical goal, as it would take more than a lifetime to obtain this knowledge. A better strategy, IMO, would be to focus on a much simpler processor (someone suggested an Intel 8080) and delve into things at either the hardware level (electrical engineering) or at the software level (computer or software engineering). For myself, I have only a little interest in what goes on at the level of transistors, but am much more interested in what happens in the CPU, from both assembly language and higher-level language perspectives.
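The suggestion to study a simple processor at the assembly-language level can be previewed with a toy interpreter. The instruction names below are invented for illustration (they are not real Intel 8080 mnemonics), but the fetch-decode-execute shape is the real idea, and MOV is exactly the "move the contents of register N to register M" operation asked about earlier in the thread.

```python
# A toy "CPU": four registers, a program counter, and a tiny instruction set.

def run(program):
    regs = [0] * 4   # four general-purpose registers, R0..R3
    pc = 0           # program counter: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]          # fetch and decode
        if op == "LOAD":                 # LOAD dst, value
            regs[args[0]] = args[1]
        elif op == "MOV":                # MOV dst, src -- copy register to register
            regs[args[0]] = regs[args[1]]
        elif op == "ADD":                # ADD dst, src -- dst = dst + src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        pc += 1                          # advance to the next instruction
    return regs

program = [
    ("LOAD", 0, 5),   # R0 = 5
    ("LOAD", 1, 7),   # R1 = 7
    ("ADD", 0, 1),    # R0 = R0 + R1 = 12
    ("MOV", 2, 0),    # R2 = R0  (the "move N to M" from earlier)
    ("HALT",),
]
print(run(program))  # prints [12, 7, 12, 0]
```

A real processor does the same loop in hardware, with the instructions encoded as bit patterns rather than tuples -- which is why a simple chip like the 8080 is such a good object of study.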
Hardware architecture refers to the design and organization of physical components in a computer system, including the central processing unit (CPU), memory, storage, input/output devices, and other components. It determines how these components work together to process and store data.
The main components of hardware architecture include the central processing unit (CPU), memory, storage, input/output devices, and buses. The CPU is responsible for executing instructions and performing calculations, while memory stores data and instructions temporarily. Storage devices, such as hard drives and solid-state drives, store data permanently. Input/output devices allow communication between the computer and its external environment. Buses are pathways that connect all the components and enable data transfer.
The hardware architecture of a computer significantly impacts its performance. A well-designed architecture can improve the speed and efficiency of data processing, while a poorly designed one can lead to bottlenecks and slow down the system. The choice of components, their arrangement, and the design of the bus system all play a role in determining the performance of a computer.
Von Neumann architecture, also known as the stored-program concept, is a computer architecture that uses a single bus to transfer data and instructions between the CPU, memory, and input/output devices. In contrast, Harvard architecture uses separate buses for data and instructions, allowing them to be accessed simultaneously. This can improve performance but also increases complexity and cost.
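The stored-program idea can be made concrete with a toy: one flat memory holds both the instructions and the data they operate on, and the CPU fetches both from it. The encoding below is invented for illustration; a Harvard machine would split `memory` into two separate arrays with their own access paths.

```python
# Von Neumann in miniature: instructions and data share one memory.
memory = [
    ("LOAD", 8),     # 0: acc = memory[8]
    ("ADD", 9),      # 1: acc = acc + memory[9]
    ("STORE", 8),    # 2: memory[8] = acc
    ("HALT",),       # 3:
    None, None, None, None,
    30,              # 8: data
    12,              # 9: data
]

pc, acc = 0, 0
while True:
    op, *operand = memory[pc]      # instructions are fetched from memory...
    pc += 1
    if op == "LOAD":
        acc = memory[operand[0]]
    elif op == "ADD":
        acc += memory[operand[0]]  # ...and data comes from the same memory
    elif op == "STORE":
        memory[operand[0]] = acc
    elif op == "HALT":
        break

print(memory[8])  # prints 42
```

Because code and data share one memory, a program can in principle even modify its own instructions -- a possibility the Harvard layout rules out by construction.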
The hardware architecture of a computer can have a significant impact on software development. Developers need to consider the capabilities and limitations of the hardware when designing software. For example, a software program may be optimized for a specific type of CPU or require a certain amount of memory to run efficiently. Understanding the hardware architecture can also help developers identify and troubleshoot performance issues in their software.