Basic Hardware Architecture Questions

In summary: The user types the symbols that make up a command on the keyboard; the keyboard sends those symbols to the computer, and the monitor displays them so that the user can see them. Did the computer's designers use the laws of physics to rig the whole thing up this way from the beginning and then later just tack on things like the letters and numbers of the keyboard... or am I misunderstanding something?
  • #1
AdamF
Is this thread appropriate for some basic computer science hardware and architecture questions or are the posts in here to be strictly related to specific Programming Languages?
 
  • #3
If the question is computer-sciencey, this section is probably as good as any other.
 
  • #4
Mark44 said:
If the question is computer-sciencey, this section is probably as good as any other.

My questions are mostly very basic on encoding logic in circuits and the physics of how electricity is used to store information, as well as things like how, for example, a mouse cursor actually appears on the screen. Is this the right place for that kind of stuff?
 
  • #5
Go with what Mark44 said; your confusion is our confusion, but we'll sort it out if necessary. Try to keep to one topic per thread, though, as multiple topics can spin out of control as other folks answer and debate.
 
  • #7
Started reading the second link; it looks like a great jumping-in point, thank you.

The way that I understand computer science at this point is that there are really two segments:

1 - How do I make the computer do what I want it to do?

2 - How would the computer be able to do what I want to make it do?
 
  • #8
AdamF said:
The way that I understand computer science at this point is that there are really two segments:

1 - How do I make the computer do what I want it to do?

2 - How would the computer be able to do what I want to make it do?
BINGO!
 
  • #9
AdamF said:
1 - How do I make the computer do what I want it to do?
After you learn the syntax of some programming language, you essentially "explain" to the computer what you want it to do.
AdamF said:
2 - How would the computer be able to do what I want to make it do?
Now you're getting into the architecture of the computer -- its CPU (central processing unit), memory (registers, RAM, cache, and disk storage), I/O, and so on. Computer languages vary a lot in how much or how little they insulate you from the hardware itself. At the lowest levels are assembly languages, with different languages for different architectures. A bit higher level is C, and higher yet are C++, Java, C#, Python, and many others.
 
  • #10
Mark44 said:
After you learn the syntax of some programming language, you essentially "explain" to the computer what you want it to do.
Now you're getting into the architecture of the computer -- its CPU (central processing unit), memory (registers, RAM, cache, and disk storage), I/O, and so on. Computer languages vary a lot in how much or how little they insulate you from the hardware itself. At the lowest levels are assembly languages, with different languages for different architectures. A bit higher level is C, and higher yet are C++, Java, C#, Python, and many others.

Okay, so here's some of what I'm trying to understand at the moment;

Let's say you want to command the computer to "Move the contents of Registry address N to Registry address M" and the way that you can do this is by typing the command directly into the keyboard.

You turn the computer on, it goes to a blank (black) screen with a cursor ready for you to type in your command. You type in the command above, and the computer obeys.

Here's how I currently understand what happens in that situation -- I'm hoping somebody can point out whether this is accurate or where it isn't accurate:

- The "contents" of Registry Address N are sitting there existing in a way that they are expressed as some kind of "state" of the computer hardware; this "state" is the related to the output of a series of circuits which use various logic gates to use binary encoding. (I'm not sure if "encoding" is the right word to describe what the binary is doing, but the circuits are basically creating the output of 0's and 1's which somehow encapsulate the information.)

- Next, I type my command on the screen. This is where I'm a little less clear on what happens -- Did the people who made the computer literally wire the circuitry from the keyboard to the registry such that the combination of symbols in the command "Move the contents of Registry address N to Registry address M" would use the wiring to communicate with the registry electronically?

In other words, the computer used the laws of Physics to rig the whole thing up this way from the beginning and then later just tacked on the things like the letters and numbers of the keyboard and the monitor to make it make sense to me as the user?

The computer maker basically had to take into account all of the things they thought it would be reasonable for me to expect to do, and then build the machine with circuits and tech in order for me to be able to do those things in a way that makes sense to me?

The place where I'm kind of getting stuck is where the interface between the user commands and the actual technology really is, and what it means for the information to be "encoded in 1's and 0's", but I'm starting to kind of get it.

Is this all kind of a way more complex version of how a tribe would "encode" information in smoke signals by getting together and deciding "Okay, three smoke rings means X and five smoke rings means Y" and then it fell back on the user to interpret the smoke when they saw it? (In this case, the "smoke signals" would be the output in the form of whatever appears on the GUI.)
 
  • #11
AdamF said:
Okay, so here's some of what I'm trying to understand at the moment;

Let's say you want to command the computer to "Move the contents of Registry address N to Registry address M" and the way that you can do this is by typing the command directly into the keyboard.
This is somewhat confused. The term "register" refers to a limited number of named memory locations inside the CPU. The term "registry" is specific to Windows (I believe), and refers to a block of memory where lots of internal stuff is kept track of.

If you mean "move the contents of memory address N to memory address M," this is not something that can be done by merely typing a command. To do something like this you will need to write a program, and then use some other program to compile, interpret, or assemble the program you wrote, and then run the program. What you're trying to do here is a low-level operation that isn't something that's available to casual users of an operating system. The kinds of operations that are available are things like copy a file to a different directory, delete a file, list the contents of a file, and so on.
AdamF said:
You turn the computer on, it goes to a blank (black) screen with a cursor ready for you to type in your command. You type in the command above, and the computer obeys.
Well, no. It's only going to carry out the kinds of commands that it understands.
AdamF said:
Here's how I currently understand what happens in that situation -- I'm hoping somebody can point out whether this is accurate or where it isn't accurate:

- The "contents" of Registry Address N are sitting there existing in a way that they are expressed as some kind of "state" of the computer hardware; this "state" is the related to the output of a series of circuits which use various logic gates to use binary encoding. (I'm not sure if "encoding" is the right word to describe what the binary is doing, but the circuits are basically creating the output of 0's and 1's which somehow encapsulate the information.)
There are "commands" (normally called instructions) that can read the contents of memory locations, copy the contents between registers, copy the contents between registers and memory, read values from an input device or send values to an output device, and lots of other operations. All data on a computer is stored in binary form, strings 0s and 1s. This includes numbers, characters, images, whatever.

AdamF said:
- Next, I type my command on the screen. This is where I'm a little less clear on what happens -- Did the people who made the computer literally wire the circuitry from the keyboard to the registry such that the combination of symbols in the command "Move the contents of Registry address N to Registry address M" would use the wiring to communicate with the registry electronically?
Well, no. It's only going to carry out the kinds of commands that it understands.
AdamF said:
In other words, the computer used the laws of Physics to rig the whole thing up this way from the beginning and then later just tacked on the things like the letters and numbers of the keyboard and the monitor to make it make sense to me as the user?

The computer maker basically had to take into account all of the things they thought it would be reasonable for me to expect to do, and then build the machine with circuits and tech in order for me to be able to do those things in a way that makes sense to me?
No, that's not at all how things work. Things are complicated enough that I can't explain all of what you're asking here in an internet forum. Take a look at any of the links that @jedishrfu provided, which should help with some of your misconceptions.
AdamF said:
The place where I'm kind of getting stuck is where the interface between the user commands and the actual technology really is, and what it means for the information to be "encoded in 1's and 0's", but I'm starting to kind of get it.

Is this all kind of a way more complex version of how a tribe would "encode" information in smoke signals by getting together and deciding "Okay, three smoke rings means X and five smoke rings means Y" and then it fell back on the user to interpret the smoke when they saw it? (In this case, the "smoke signals" would be the output in the form of whatever appears on the GUI.)
 
  • #12
I'm reading a lot from the links and more. Regarding this:

"It's only going to carry out the kinds of commands that it understands."

-- I guess I'm having the most trouble understanding first how the computer is trained to understand various commands (and then from there how it executes various commands from my own head to my fingers/voice and then to the computer and then back to my eyes/ears, but I don't want to get ahead of myself...)

I'll keep reading and asking questions.

Regarding a much simpler model, let's take the 5+2 Abacus:

Is it accurate to break the model down as follows?

The hardware is the beads, the frame, the string, and the other physical components. (Okay, I'm confident this part is correct.)

The software would be (for example) the "addition" program, which uses the proper arithmetic algorithm; the algorithm is performed using the hardware for the purpose of adding two numbers together. So in this case the addition program/"software" for the abacus actually lives in the user's head, and not inside the computer itself?
 
  • #13
AdamF said:
-- I guess I'm having the most trouble understanding first how the computer is trained to understand various commands (and then from there how it executes various commands from my own head to my fingers/voice and then to the computer and then back to my eyes/ears, but I don't want to get ahead of myself...)
It's not "trained" to understand commands -- it's designed to execute them. At the lowest levels, the commands it responds to are those to move data here and there, load a register with a value from memory, store the value in a register in memory, add or subtract two values, and a lot more.

Higher-level languages can perform more complex operations, but these operations boil down to some sequence of lower-level instructions.
AdamF said:
The hardware is the beads, the frame, the string, and the other physical components. (Okay, I'm confident this part is correct.)

The software would be (for example) the "addition" program, which uses the proper arithmetic algorithm; the algorithm is performed using the hardware for the purpose of adding two numbers together. So in this case the addition program/"software" for the abacus actually lives in the user's head, and not inside the computer itself?
I think your idea is accurate, but I'm not sure that it's helpful. In addition to the hardware in a computer (corresponding to the beads, frame, and wires of an abacus), there is the software, which exists at several levels, from the BIOS (Basic Input/Output System) that lives in ROM chips, to the operating system, which also exists in multiple levels, to driver software, to user programs. This is an overly simplistic explanation, but I hope you get the idea.
 
  • #14
In order to understand a computer at its most basic level, you really need to take a course or even several courses on how it works. The concepts are easy but it’s even easier to get fooled into believing it works in some other fashion.

There’s the CPU and its opcodes. The CPU is an incredibly complex chip composed of hundreds of thousands of gates. Each gate can represent different kinds of logic operations. One such example is a flip-flop, which can be used to store a bit or toggle a bit.

Start with the Khan Academy and then start asking questions; don’t try to theorize things until you’ve almost got it.

https://www.khanacademy.org/computi...my-and-codeorg-introducing-how-computers-work
The simplest computer is a Turing machine, and there is an excellent working model in LEGO on YouTube if you search for it. Watch how it reads ops from its tape and then modifies its tape. The tape is the memory and the machinery around it is the CPU.

View attachment 240985
 
  • #15
Mark44 said:
It's not "trained" to understand commands -- it's designed to execute them. At the lowest levels, the commands it responds to are those to move data here and there, load a register with a value from memory, store the value in a register in memory, add or subtract two values, and a lot more.

Okay, I see how I pretty much had it backwards.

Data types and programming languages seem far more intuitive to me -- it's understanding how the commands interact on a physical level where most of my questions are at this point; that's probably because I need to learn about circuits and electricity.

Is it accurate to say that when I hear that information is encoded in transistors and capacitors, this is kind of like how the gears of a mechanical watch encode the time, as in we design the machine using the laws of physics to give us back information in such a way that our human brains can make meaning out of the output from the machine?
 
  • #16
jedishrfu said:
In order to understand a computer at its most basic level, you really need to take a course or even several courses on how it works. The concepts are easy but it’s even easier to get fooled into believing it works in some other fashion.

There’s the CPU and its opcodes. The CPU is an incredibly complex chip composed of hundreds of thousands of gates. Each gate can represent different kinds of logic operations. One such example is a flip-flop, which can be used to store a bit or toggle a bit.

Start with the Khan Academy and then start asking questions; don’t try to theorize things until you’ve almost got it.

https://www.khanacademy.org/computi...my-and-codeorg-introducing-how-computers-work
The simplest computer is a Turing machine, and there is an excellent working model in LEGO on YouTube if you search for it. Watch how it reads ops from its tape and then modifies its tape. The tape is the memory and the machinery around it is the CPU.

View attachment 240985

Thank you so much for pointing me to the Turing Machine, I was trying to find something in between the Abacus and the super-complex thing that I'm typing this on!

I'm looking into where I can buy my own kit w/ the LEGOs to build one right now.

My questions have been coming from going through the following book (below), but I will start back w/ the Khan course as you recommended:

https://www.amazon.com/dp/013487546X/?tag=pfamazon01-20
 
  • #17
View attachment 240987
 
  • #18
AdamF said:
Data types and programming languages seem far more intuitive to me -- it's understanding how the commands interact on a physical level where most of my questions are at this point; that's probably because I need to learn about circuits and electricity.
Here's a figure that represents the logic gates that might be involved in ANDing two bits.
View attachment 240986

The two bits come in on the lines marked a and b. A signal comes in on the line marked Operation, which controls which of the two operations to perform -- an AND of the two bits or an OR of the two bits. If the Operation signal is AND, and both bits are 1, the result is 1. If either or both bits are 0, the result is 0. The figure also includes logic gates to perform the OR of two bits.

A CPU consists of a large number of logic gates and controller hardware.
AdamF said:
Is it accurate to say that when I hear that information is encoded in transistors and capacitors, this is kind of like how the gears of a mechanical watch encode the time, as in we design the machine using the laws of physics to give us back information in such a way that our human brains can make meaning out of the output from the machine?
A better analogy is a bank of light switches, with some up (on) and some down (off). A switch in the on position could indicate 1 and a switch in the off position could indicate 0. The computer's memory, these days, consists of billions of these switches.
 
  • #19
Mark44 said:
Here's a figure that represents the logic gates that might be involved in ANDing two bits.
View attachment 240986
The two bits come in on the lines marked a and b. A signal comes in on the line marked Operation, which controls which of the two operations to perform -- an AND of the two bits or an OR of the two bits. If the Operation signal is AND, and both bits are 1, the result is 1. If either or both bits are 0, the result is 0. The figure also includes logic gates to perform the OR of two bits.

A CPU consists of a large number of logic gates and controller hardware.
A better analogy is a bank of light switches, with some up (on) and some down (off). A switch in the on position could indicate 1 and a switch in the off position could indicate 0. The computer's memory, these days, consists of billions of these switches.

The logic diagrams and gates are the one part that I understand more easily than the rest, but what I don't understand is where the signal comes from and how it originates and then how it's organized in all of the gates, and how using those gives the electronic computer all of the capabilities that it has, etc... but I'll keep reading.

For example, how does the computer take electricity and use some of that electricity to store numbers, some to store sound, some to store images, etc...?

Is the mechanical watch analogy somewhat accurate for getting my foot in the door to understand this?

-- Is it accurate to say that when I hear that information is encoded in transistors and capacitors, this is kind of like how the gears of a mechanical watch encode the time, as in we design the machine using the laws of physics to give us back information in such a way that our human brains can make meaning out of the output from the machine?
 
  • #20
As a more concrete example, take the calculator on a computer:

I understand the binary number system and how it maps to using Boolean logic (I have a bit of math background), but I have trouble when it comes to actually implementing a series of circuits to perform a calculation and store the result -- I think this is mostly due to me needing to learn a lot more about electronics, though, so I want to get some circuit components to construct the process for myself on a more elementary level.

Do you have any recommendations for what I'd want to pick up in order to construct an elementary electronic adding machine which can store a single result?

-- Just assume that I literally know nothing about circuits other than they have wires and you can use them to make a diode light up.
 
  • #21
AdamF said:
how does the computer take electricity and use some of that electricity to store numbers, some to store sound, some to store images, etc...?

These are all one question, because the computer does not store numbers, sound, or images; it just stores bits. Whether those bits encode numbers, sound, or images, depends on how those bits interact with other bits (the bits in the programs that process numbers or sound or images), which ultimately depends on the interpretation that human programmers wanted to put on particular bits. But they're all just bits to the computer, and it stores them all the same way.
 
  • #22
AdamF said:
For example, how does the computer take electricity and use some of that electricity to store numbers, some to store sound, some to store images, etc...?
It stores the 'voltage' that represents the result of the logic gates on capacitors. The main memory is composed of very tiny capacitors and a whole bunch of transistors. The transistors in main memory implement three different functions:
  1. decoding which capacitor to use based on the true/false signals on the Address lines
  2. writing to (applying a voltage, or charge, to) the individual capacitors
  3. reading the voltage of the individual capacitors

Hope it helps!

Cheers,
Tom
 
  • #23
PeterDonis said:
These are all one question, because the computer does not store numbers, sound, or images; it just stores bits. Whether those bits encode numbers, sound, or images, depends on how those bits interact with other bits (the bits in the programs that process numbers or sound or images), which ultimately depends on the interpretation that human programmers wanted to put on particular bits. But they're all just bits to the computer, and it stores them all the same way.
Right, I phrased my question poorly; the computer translates all data sources to bits, and then spits them back out again in other forms (as per the design that the human programmer has in mind), as far as I understand. What I meant to ask was "How are various sources of information translated to one form which the machine understands, and then back again?"

For example, take a string of text.

I have a thought, I use the keyboard to type in the thought, and then I really don't understand what happens between the time that I type the thought in and the time that the text is reflected back to me on the screen. Is this where things like the way that the keyboard is wired into the memory and the memory is wired into the computer monitor come in?
 
  • #24
AdamF said:
I really don't understand what happens between the time that I type the thought in and the time that the text is reflected back to me on the screen.

Ah, ok. See below.

AdamF said:
Is this where things like the way that the keyboard is wired into the memory and the memory is wired into the computer monitor come in?

Yes, but the "wiring" is a lot more complicated, at least in all modern computers (where "modern" here means "since about 1970 or so" :wink:).

As far as "translating" things to and from bits is concerned, though, that happens very close to your typing and seeing things on your screen. Your keyboard generates a sequence of bits for each key you type and sends them to the computer; and your computer's monitor translates sequences of bits into what appears on your screen. Everything else in between is bits, and you can think about them abstractly without even having to know the details of how they are physically represented in things like transistors and capacitors and currents in wires. And it's virtually impossible for a single human mind to grasp what's going on in a modern computer without thinking about the bits abstractly; things at the level of transistors and capacitors and currents in wires are way, way too complicated to be able to comprehend at that level while also comprehending how all those things connect to the keystrokes you type and the things you see on your screen. You have to abstract away the actual physical hardware and focus on the bits (and indeed on things at even higher levels of abstraction than that) if you want to understand what's going on at the level of keystrokes and images on screens.
 
  • #25
AdamF said:
Okay, so to really learn about this to the point where I can understand why every single decision on a particular piece of architecture was made (or the entire machine for that matter) and exactly how I'd go about reconstructing each piece from scratch, I'll need to become extremely knowledgeable on the relevant aspects of Classical E-M and some Quantum Chemistry?
Well, that depends on how deep a scratch you want to dig. :oldwink:
You don't really need to get Quantum deep if you can accept that:
  • a transistor conducts current proportional to its input <current or voltage> (depends on whether it is a bipolar or field-effect transistor, usually field-effect these days)
  • a capacitor can hold a charge (as evidenced that you can measure a voltage on it)

A rough concept of a resistor and a diode would be useful if you get down to the transistor circuit level.

Cheers,
Tom
 
  • #26
PeterDonis said:
Ah, ok. See below.
Yes, but the "wiring" is a lot more complicated, at least in all modern computers (where "modern" here means "since about 1970 or so" :wink:).

As far as "translating" things to and from bits is concerned, though, that happens very close to your typing and seeing things on your screen. Your keyboard generates a sequence of bits for each key you type and sends them to the computer; and your computer's monitor translates sequences of bits into what appears on your screen. Everything else in between is bits, and you can think about them abstractly without even having to know the details of how they are physically represented in things like transistors and capacitors and currents in wires. And it's virtually impossible for a single human mind to grasp what's going on in a modern computer without thinking about the bits abstractly; things at the level of transistors and capacitors and currents in wires are way, way too complicated to be able to comprehend at that level while also comprehending how all those things connect to the keystrokes you type and the things you see on your screen. You have to abstract away the actual physical hardware and focus on the bits (and indeed on things at even higher levels of abstraction than that) if you want to understand what's going on at the level of keystrokes and images on screens.

Okay, this makes a bit more sense (no pun intended):
The specific sequences of bits which are generated by the keyboard are somehow related to the ASCII and Unicode stuff that I've been reading about in my Intro to Computer Science text? These are the specific standardized codes for each symbol or something, right?

The bits, which are represented by voltage and subsequent current through the wiring, somehow reach the monitor in such a way that utilizes the design of all of this to interact with materials which display light in the appropriate place when the electricity hits it, or something of this nature?
 
  • #27
Tom.G said:
Well, that depends on how deep a scratch you want to dig. :oldwink:
You don't really need to get Quantum deep if you can accept that:
  • a transistor conducts current proportional to its input <current or voltage> (depends on whether it is a bipolar or field-effect transistor, usually field-effect these days)
  • a capacitor can hold a charge (as evidenced that you can measure a voltage on it)
A rough concept of a resistor and a diode would be useful if you get down to the transistor circuit level.

Cheers,
Tom

Well, my definition of understanding something is really "What I cannot create, I do not understand."

My objective is to reach the point where I can de-construct today's PC and understand exactly why every decision was made the way it was on the level of exactly what the creator was thinking when they made that decision, and then understand how to construct it myself if I needed to, from sourcing the appropriate materials, to being able to look at any line in the code of the Operating System and understand what it's doing, to understanding why the circuits are built the way they are, etc...

 
  • #28
AdamF said:
The specific sequences of bits which are generated by the keyboard are somehow related to the ASCII and Unicode stuff that I've been reading about in my Intro to Computer Science text?

Not directly, no. Keyboards generate scan codes--sequences of bits that tell the computer what keys were pressed and in what order. ASCII and Unicode are interpretations of sequences of bits as text. It so happens that, if you press an ordinary alphabetic key on the keyboard, the scan code (sequence of bits) that gets sent to your computer corresponds to the ASCII code (sequence of bits) for the letter that's on the key. But there's nothing that requires that to be the case; it was just a convenient way to simplify keyboard processing programs in early computers where ASCII was the only kind of text that was going to be used (this was way before Unicode was even invented).

AdamF said:
The bits, which are represented by voltage and subsequent current through the wiring, somehow reach the monitor in such a way that utilizes the design of all of this to interact with materials which display light in the appropriate place when the electricity hit it, or something of this nature?

Eventually, but not necessarily the same bits that get sent by the keyboard as you type, and not without a lot happening in between.

AdamF said:
My objective is to reach the point where I can de-construct today's PC and understand exactly why every decision was made the way it was on the level of exactly what the creator was thinking when they made that decision, and then understand how to construct it myself if I needed to, from sourcing the appropriate materials, to being able to look at any line in the code of the Operating System and understand what it's doing, to understanding why the circuits are built the way they are, etc...

This will take the rest of your lifetime, and you will need to take very good care of yourself so that you live to be 150 or so. :wink:

However, a good starting point might be to start with a simpler PC and operating system than today's. For example, you could try to de-construct the original IBM PC running DOS. These will give you an overview of the architecture and some references to dig deeper:

https://en.wikipedia.org/wiki/IBM_Personal_Computer#Technology
https://en.wikipedia.org/wiki/DOS#Design
 
  • #29
Or for an even easier start in computing, look at the Intel 8080 processor, circa 1974. I haven't looked recently, but there used to be both on-line and downloadable simulators that showed the internal workings at the register level. (That's about two levels up from logic gates.)

The 8080 was the basis of the original Altair 8800 computer from the company MITS. That came out in 1975 (yes, it predated the Apple), and since it was a kit, there was a wealth of detailed documentation. You may be able to find some of it on-line with a bit of digging.

In a PM, the OP asked the following question:

Thank you for your help.

I mean, am I right in assuming that this whole thing would be a hell of a lot easier for me to understand if I actually knew how voltage behaved and how charge moves in various environments, etc...?


Here is my response:

Probably. Trying to do that here is not practical though.

I did an on-line search and managed to find a free download of "Basic Radio", all six volumes! The whole series runs 800 pages but most of what you need is in the first half of the first volume, with about 30 pages about "CAPACITORS AND CAPACITANCE" in the second volume.

https://the-eye.eu/public/Books/Electronic%20Archive/Basic_Radio_Vol_1-6_-_A_Rider%201961_text.pdf

It's a big file, about 27MB, so be patient if you are on a slow connection.

Cheers,
Tom
 
  • #30
Downloaded, ty.
 
  • #31
Focusing on startup: the power is switched on, so all chips get energized. A computer will have a boot program installed on a ROM. The boot program is preprogrammed for the CPU and is retained even when the computer is off.

Once power is switched on, the CPU initializes itself, i.e., zeros out its registers, and then fetches a memory address at a predetermined location in the ROM and begins the arduous journey of loading your OS and writing to the screen, or beeping at you when things go wrong, like bad memory (aka beep codes).

Here's a more detailed description:
https://www.techwalla.com/articles/the-five-steps-of-the-boot-sequence
 
  • #33
It's quite a problem that 'computers' as they are now have roughly 5-10 abstraction layers (I didn't care to try counting them) between any useful work and the transistors down there. Without methodically building up your knowledge of the specific layers, it's easy to miss the important parts.

The 6502 is a good CPU, but I would rather recommend one that's still available instead - even if you get an old Commodore or the like, it'll still have a few abstraction layers, and you will have to fight the missing documentation too...
 
  • #34
AdamF said:
My objective is to reach the point where I can de-construct today's PC and understand exactly why every decision was made the way it was on the level of exactly what the creator was thinking when they made that decision, and then understand how to construct it myself if I needed to, from sourcing the appropriate materials, to being able to look at any line in the code of the Operating System and understand what it's doing, to understanding why the circuits are built the way they are, etc...
As @PeterDonis implied, this isn't a practical goal, as it would take more than a lifetime to obtain this knowledge. A better strategy, IMO, would be to focus on a much simpler processor (someone suggested an Intel 8080) and delve into things at either the hardware level (electrical engineering) or at the software level (computer or software engineering). For myself, I have only a little interest in what goes on at the level of transistors, but am much more interested in what happens in the CPU, from both assembly language and higher-level language perspectives.
 
  • #35
Hence, my initial interest in models which do not even involve electricity.
 

1. What is hardware architecture?

Hardware architecture refers to the design and organization of physical components in a computer system, including the central processing unit (CPU), memory, storage, input/output devices, and other components. It determines how these components work together to process and store data.

2. What are the main components of hardware architecture?

The main components of hardware architecture include the central processing unit (CPU), memory, storage, input/output devices, and buses. The CPU is responsible for executing instructions and performing calculations, while memory stores data and instructions temporarily. Storage devices, such as hard drives and solid-state drives, store data permanently. Input/output devices allow communication between the computer and its external environment. Buses are pathways that connect all the components and enable data transfer.

3. How does hardware architecture affect computer performance?

The hardware architecture of a computer significantly impacts its performance. A well-designed architecture can improve the speed and efficiency of data processing, while a poorly designed one can lead to bottlenecks and slow down the system. The choice of components, their arrangement, and the design of the bus system all play a role in determining the performance of a computer.

4. What is the difference between von Neumann and Harvard architecture?

Von Neumann architecture, also known as the stored-program concept, is a computer architecture that uses a single bus to transfer data and instructions between the CPU, memory, and input/output devices. In contrast, Harvard architecture uses separate buses for data and instructions, allowing them to be accessed simultaneously. This can improve performance but also increases complexity and cost.

5. How does hardware architecture impact software development?

The hardware architecture of a computer can have a significant impact on software development. Developers need to consider the capabilities and limitations of the hardware when designing software. For example, a software program may be optimized for a specific type of CPU or require a certain amount of memory to run efficiently. Understanding the hardware architecture can also help developers identify and troubleshoot performance issues in their software.
