Why not hardwire software onto CPU chips for improved efficiency?

  • Tags
    Software
In summary, the conversation discusses the idea of hardwiring large chunks of software, such as a Linux kernel and office programs, directly onto CPU chips. This is not a feasible solution because of the high cost and the potential for bugs being frozen into silicon, and because these programs are constantly evolving, which makes hard coding them impractical. As for the most widely used programming language, it is likely C, given its use in enormous codebases such as Microsoft's products and Linux, rather than assembly, which is mostly confined to smaller systems like microcontrollers.
  • #1
nameta9
Thinking about various threads here and there talking about inefficient modern software, has anyone ever thought about hardwiring large chunks of software directly onto the CPU chips? After all, chips today contain millions of transistors, so why not hardwire a Linux kernel, an office word processor and Internet Explorer directly into hardware?

Would it be so difficult? After all, these chunks of programs are quite standard and stable; maybe introducing 5 or 6 megachips that carry a large piece of standard software, implemented entirely in hardware, could really improve things!

Does anyone think there are real technical limitations to what I imagine?
 
  • #2
For the same reason you don't hard-code values ("magic numbers") into software. You make the program read configuration files. Then changing, say, a directory location is as easy as shutting down the program, editing the text config file, and restarting the program. Otherwise you'd have to recompile.
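
A minimal C sketch of that difference (the file name "myapp.conf" and the "log_dir" key are hypothetical, purely for illustration):
Code:
#include <stdio.h>

/* Hard-coded ("magic") value: changing it means recompiling.        */
/* const char *log_dir = "/var/log/myapp";                           */

/* The same value read from a config file: changing it only means    */
/* editing the file and restarting the program.                      */
int read_log_dir(char out[256])
{
    FILE *f = fopen("myapp.conf", "r");   /* hypothetical config file */
    char line[256];

    if (!f)
        return -1;                        /* no config file found     */
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, " log_dir = %255s", out) == 1) {
            fclose(f);
            return 0;                     /* found and parsed the key */
        }
    }
    fclose(f);
    return -1;                            /* key not present          */
}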

In your plan, what would you do when you needed to change the program? I'm not sure I'd want IE's security flaws to be permanently stored in a chip. At least now, MS can send out an update once in a while. (Actually, I use Firefox, but you get my point.)

Actually, this is done to some degree. Programs that control electronic devices are often put into chips and called "firmware." It can be changed, but the operation is not trivial, you are limited in how much you can put in there, and the programs are generally very small. This is done basically to eliminate any notion of 'installing' the software your microwave oven, for example, needs to operate. This technique was also one of the main reasons for the Y2K panic: if software in, say, the circuits that operate nuclear missile launchers were to run differently due to misreading the date, it could be a disaster (or so the logic went).

This works reasonably well for hardware that really never needs to change. Basically, software changes too much for this to be practical.
 
  • #3
1) Chips are expensive. Microprocessors already cost hundreds of dollars per unit. Microprocessor chip "real-estate" is one of the most valuable things on the planet, in fact. The cost of the chip goes up exponentially with its size.

2) The earliest computers WERE hardcoded. The reason people started making general-purpose computers and using software to control them was specifically so the computers could be cheap, and their functionality changeable. What you suggest is effectively doing away with the last 60 years of progress, making computers so expensive that only government institutions could own them.

3) As infidel pointed out, you probably don't want a bunch of bugs hardcoded into your CPU. The reason CPUs generally don't have many bugs is because they are kept simple, symmetric, and testable, with a finite number of input vectors and output vectors. Once you introduce a block like IE into your chip, you lose all of those advantages.

4) infidel: firmware is really just software that's stored in non-volatile memory.

- Warren
 
  • #4
chroot said:
4) infidel: firmware is really just software that's stored in non-volatile memory.
Thanks for that, Warren. The day is almost over and I hadn't learned anything new yet. :biggrin:
 
  • #5
Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least the most common and standard things. Why not just hard code those parts?

Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?
 
  • #6
oldtobor said:
Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least the most common and standard things. Why not just hard code those parts?

For the reasons already mentioned: $$$$$ and bugs. If an AMD chip with the Linux kernel hard-coded in cost $1.37, then you'd see the Linux kernel come hard coded. The cost of hard coding something like that would be great, to say the least, so it's not going to happen anytime soon. Moreover, the programs you mentioned are constantly changing. Kernels are always being patched. Office software is constantly being updated. Hard coding it would be an exercise in futility.

oldtobor said:
Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?

100-line AVR/PIC programs are dwarfed by billion-line MS/KDE/GNOME/whatever programs written mostly in C (or, to a lesser extent, C++). ASM is still used in the above, but I don't think it is the most widely used language on the planet. My money would be on C if I were a betting man.
 
  • #7
oldtobor said:
Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least the most common and standard things. Why not just hard code those parts?

What world have you been living on? All of those things have new releases regularly, semi-regularly and when MS says, respectively.

Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?

Uh... no. Cell phones and TVs are rarely programmed in assembly language, AFAIK. Many cell phones run on either Windows CE or Linux, as do many other embedded systems, like portable music players, PDAs, etc.
 
  • #8
The majority of the world's microcontroller code is in C. Assembly on chips like the PIC, however, is really very easy, since there are only (IIRC) 33 instructions anyway.

And, as others have mentioned, oldtobor, the linux kernel changes DAILY, as do the kernels for other platforms.

- Warren
 
  • #9
nameta9 said:
Thinking about various threads here and there talking about inefficient modern software, has anyone ever thought about hardwiring large chunks of software directly onto the CPU chips? After all, chips today contain millions of transistors, so why not hardwire a Linux kernel, an office word processor and Internet Explorer directly into hardware?

Would it be so difficult? After all, these chunks of programs are quite standard and stable; maybe introducing 5 or 6 megachips that carry a large piece of standard software, implemented entirely in hardware, could really improve things!

Does anyone think there are real technical limitations to what I imagine?
Consider the BIOS which starts your computer, the old-style game cartridges (CD/DVD is just cheaper) and a lot of OEM-type ROMs (I have a plug-in ROM that turns an iPAQ into an electronic level).

Your innovation is not uncommon; it is done all the time in many different ways.
 
  • #10
nameta9 said:
Does anyone think there are real technical limitations to what I imagine?

To describe the various trade-offs involved would take a whole book.

Also, don't underestimate the size of modern software. A suite of software can take hundreds of megabytes. You could use the entire die just holding the thing.

However, engineers do translate specific software algorithms into transistors all the time, though they usually only choose algorithms that are highly parallelizable and compute-bound [*]. Most user software, which is what I think you were describing, is I/O-bound to the user (obviously), the network, or the disk, so there is not a lot to be gained. The algorithm just ends up sleeping while waiting for more data (check the amount of time your computer spends in the idle process).
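
As a rough illustration of that distinction (both routines are hypothetical, not taken from any particular program), a compute-bound and an I/O-bound loop in C might look like:
Code:
#include <stdio.h>

/* Compute-bound: the CPU is busy the whole time, so turning the loop
   into dedicated hardware could conceivably pay off.                 */
double compute_bound(void)
{
    double sum = 0.0;
    for (long i = 1; i <= 10000000L; i++)
        sum += 1.0 / (double)i;               /* pure arithmetic, no I/O */
    return sum;
}

/* I/O-bound: almost all the time is spent blocked waiting for the
   user, disk or network, so faster logic gains almost nothing.       */
void io_bound(void)
{
    char line[256];
    while (fgets(line, sizeof line, stdin))   /* blocks until data arrives */
        fputs(line, stdout);
}

int main(void)
{
    printf("harmonic sum: %f\n", compute_bound());
    io_bound();
    return 0;
}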

Also, as previously discussed, once you commit to an algorithm in hardware, which is a huge investment, you are stuck. So you had better be sure the algorithm is not going to change any time soon. This is why most companies build "general purpose" hardware: it ensures the product is an option for the largest customer base for the longest time.

* The most common examples of software in hardware are:
GPUs
DSPs
SSL chips for VPNs on some network cards, switches and routers

This is not so common but...
There are even CPUs that speak high level languages like LISP as described in
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf
 
Last edited:
  • #11
It comes down to two engineering decisions...

1. The cost to upgrade.
2. The frequency in which you need to release an upgrade.

Examples:
1. How can you upgrade the chip if the firmware needs an upgrade? A typical example is a DVD player with an MPEG board in it... no need to upgrade; everybody agrees on the MPEG structure, so we can take the risk of putting it in the hardware itself...

2. There is a Java microprocessor, running a JVM... and Java patches come out every 2-3 months... so what is the use? :grumpy:

And that explains the *real* reason.
 
  • #12
hardcoding, microsoft

Isn't Microsoft going to be hardcoding stuff into the 64-bit chips, like for spam and viruses, for Longhorn?
 
  • #13
Actually I'm starting to be convinced that it is a good idea! Take the awk95 programming language. It is only 200K in size, which would mean 1,600,000 bits or, considering all the overhead, maybe 20 million transistors. Pentiums can have 100 million transistors, so even a hardwired awk on chip would only occupy 20% of the chip. But then, afterwards, all the software could be directly coded in awk, completely bypassing assembly language! Now that would be interesting. In the early 80s they had BASIC in ROMs of only 4K or 8K, so it is conceivable to just hardwire the whole language, directly program the CPU in a high-level language, and build up all the complex applications starting from a higher level!
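
Redoing that back-of-envelope arithmetic (the 8 bits per byte is exact; the ~12 transistors per stored bit is my own assumption for "overhead", not a figure from the post):
Code:
#include <stdio.h>

int main(void)
{
    long size_bytes  = 200L * 1024L;      /* ~200K awk binary              */
    long bits        = size_bytes * 8L;   /* = 1,638,400 bits              */
    long transistors = bits * 12L;        /* assume ~12 transistors/bit    */
    long pentium     = 100000000L;        /* ~100M-transistor Pentium      */

    printf("bits:        %ld\n", bits);
    printf("transistors: %ld (%.0f%% of the chip)\n",
           transistors, 100.0 * transistors / pentium);
    return 0;
}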
 
  • #14
oldtobor said:
Actually I'm starting to be convinced that it is a good idea! Take the awk95 programming language. It is only 200K in size, which would mean 1,600,000 bits or, considering all the overhead, maybe 20 million transistors. Pentiums can have 100 million transistors, so even a hardwired awk on chip would only occupy 20% of the chip. But then, afterwards, all the software could be directly coded in awk, completely bypassing assembly language! Now that would be interesting. In the early 80s they had BASIC in ROMs of only 4K or 8K, so it is conceivable to just hardwire the whole language, directly program the CPU in a high-level language, and build up all the complex applications starting from a higher level!

Since the 200K binary still requires a CPU to execute, you would in no way be bypassing the CPU. And sticking the program data directly on the die doesn't change that fact.

And what is the advantage? How is running awk directly in hardware better than running awk in software on top of another piece of hardware? Running a program on a physical machine isn't automatically better than running it on a virtual machine (it isn't even guaranteed to be faster on a physical machine).
 
  • #15
jlorino said:
Isn't Microsoft going to be hardcoding stuff into the 64-bit chips, like for spam and viruses, for Longhorn?
Cause you know, you can hardcode that stuff in effectively. Cause like, Gates says so.
 
  • #16
CPUs ultimately process assembly language code. It is the same program you write in BASIC, only translated into microcode and executed (actually machine language). To program a CPU you need to establish the order of the microcodes, but to do this you need to PROGRAM in assembler. If a language is hardwired, the architecture of the CPU is already optimized for the language. You don't bypass the CPU, you just speed everything up 100 fold! I would hardwire the following languages:

1) C
2) C++
3) FORTRAN
4) COBOL
5) BASIC/VISUAL BASIC

so you don't have to rewrite all the code of these languages.
Actually, Gates was quite clever when he squeezed a BASIC interpreter into a 4K ROM. Those are the kinds of things that they should try to do today, only implementing it directly in hardware!
 
  • #17
nameta9 said:
If a language is hardwired, the architecture of the CPU is already optimized for the language. You don't bypass the CPU, you just speed everything up 100 fold!
This is false, as has already been explained. Perhaps you should look into educating yourself about microarchitecture before making such statements?

- Warren
 
  • #18
No, what I wanted to say is that the architecture of the CPU would be "designed" to be optimized for the given language.

Anyways, a CPU that had a hardwired language on it would be an interesting experiment. I am not sure if any companies ever tried it out. It could be the equivalent of the 4K ROM BASIC, after which thousands of applications were written for the language.
 
  • #19
Whoa! This has been a retro trip. Mr. von Neumann would have ignored this thread for some reason.

By the way,
I would hardwire the following languages:

1) C
2) C++
3) FORTRAN
4) COBOL
5) BASIC/VISUAL BASIC

First of all, how do you hardwire a language?
I believe you mean hardwiring the compiler program. If so, let's assume that this is actually feasible. What would this achieve? Faster compile times, I believe. But does it speed up the final program? Nope!

Now if, instead of compiled languages, you were talking about interpreted languages, I would have given your idea my 2 cents of thought, but your list doesn't include any interpreted languages. Personally, "hardwiring" interpreters isn't a good idea either. Why should I give up flexibility and changeability for a few ounces of extra speed? This point has been made repeatedly in earlier posts; you may like to re-read the entire thread.

-- AI
P.S. -> You may like to know that people do develop ASICs (Application-Specific ICs), for example a simple microcontroller on a temperature-controlling device. When you know that some software isn't going to be changed for a long time to come, it's OK for it to be hardcoded, but when something changes as fast as our current software... nope, it's completely infeasible.
 
  • #20
Yes, there are A LOT OF DIFFERENT WAYS to "hardwire" a language. What I had in mind was a completely NEW CPU design that does away with the assembly-language op-code design and just implements all the language constructs directly in the chip. Therefore NO COMPILER, NO INTERPRETER, NO OP CODES, only a pure ideas machine. Like, one register would have the FOR function control, another would have the NEXT, etc. The best way to start is to implement a small BASIC-language CPU. An ASIC design could be fine. Anyone who has some time and has a copy of some small Tiny BASIC or 8K BASIC and knows VHDL can try to do it for fun.
 
  • #21
(Oldtobor/nameta9), you obviously don't know what opcodes are. If you did, you would know they are essential. There is no way to get around it. You need to associate each instruction or "keyword" with a unique number. Without this, how do you expect the CPU to interpret which operation you want to execute?

For example:

Code:
opcode  instruction
0000    mov
0001    add
0010    sub
0011    jmp

One of the jobs that the assembler has is to convert instructions into their opcode.
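
To make that concrete, here is a toy fetch-decode-execute loop in C (the opcodes and the three-field instruction format are invented for illustration; no real CPU works exactly like this). The opcode field is the only thing that tells the machine which operation to perform:
Code:
#include <stdio.h>

enum { OP_MOV, OP_ADD, OP_SUB, OP_HALT };     /* toy opcodes               */

int main(void)
{
    /* Each instruction: { opcode, destination register, operand }.        */
    int program[][3] = {
        { OP_MOV,  0, 7 },                    /* r0 = 7                    */
        { OP_ADD,  0, 5 },                    /* r0 = r0 + 5               */
        { OP_SUB,  0, 2 },                    /* r0 = r0 - 2               */
        { OP_HALT, 0, 0 },
    };
    int reg[4] = { 0 };
    int pc = 0;

    for (;;) {
        int op  = program[pc][0];             /* fetch                     */
        int r   = program[pc][1];
        int val = program[pc][2];
        switch (op) {                         /* decode by opcode          */
        case OP_MOV:  reg[r]  = val; break;   /* execute                   */
        case OP_ADD:  reg[r] += val; break;
        case OP_SUB:  reg[r] -= val; break;
        case OP_HALT: printf("r0 = %d\n", reg[0]); return 0;
        }
        pc++;
    }
}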
 
  • #22
oldtobor said:
Therefore NO COMPILER, NO INTERPRETER, NO OP CODES, only a pure ideas machine. Like, one register would have the FOR function control, another would have the NEXT, etc.
Then FOR and NEXT are your non-existent opcodes. As has been said, there is no way to build a digital machine that does not have some form of opcode, because you have to distinguish one instruction from another.

- Warren
 
  • #23
Must be very inventive and creative. It is more a RESEARCH idea than anything. After all, in 1975 who would have ever thought of implementing Bill Gates' BASIC interpreter in a 4K ROM? I think that we MAY not be using the millions of transistors on CPUs in the best possible way. A lot of research has probably been done towards studying alternative CPU designs. It is really just intriguing that 30 years ago we could put an interpreter in 4K, so maybe with a few million transistors we could possibly organize chips to directly understand even a simple BASIC-like language.
 
  • #24
nameta9 said:
Must be very inventive and creative. It is more a RESEARCH idea than anything. After all, in 1975 who would have ever thought of implementing Bill Gates' BASIC interpreter in a 4K ROM?
I think you may be missing the point that putting an interpreter in ROM is NOT the same thing as hardwiring an interpreter in logic. Do you understand the difference?
I think that we MAY not be using the millions of transistors on CPUs in the best possible way.
Sorry, but you really have no idea what you're talking about. Do you have any idea how microprocessors are designed? Can you tell me what cache-coherency means? Can you tell me what the terms 'superscalar' and 'branch prediction' and 'pipeline' mean? Do you really not think it's a bit heady of you to denounce the work of hundreds of thousands of people who know more than you about the topic?

- Warren
 
  • #25
RISC and CISC are keywords you would want to look at as well (Reduced / Complex Instruction Set Computer). You seem to be describing an elaborate CISC.

If I recall correctly, modern CISC processors translate the complex instruction set into an internal reduced instruction set (micro-ops) at runtime anyway.
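
A very rough C sketch of that translation idea (the micro-op names and the "add memory, register" example are invented for illustration; real decoders are far more involved):
Code:
#include <stdio.h>

/* A complex instruction such as "add [mem], reg" gets split into
   simpler internal micro-ops before execution.                      */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop;

static int decode_add_mem_reg(uop out[])
{
    out[0] = UOP_LOAD;     /* load the memory operand into a temporary */
    out[1] = UOP_ADD;      /* add the register to the temporary        */
    out[2] = UOP_STORE;    /* write the result back to memory          */
    return 3;              /* number of micro-ops produced             */
}

int main(void)
{
    static const char *name[] = { "load", "add", "store" };
    uop uops[3];
    int n = decode_add_mem_reg(uops);

    for (int i = 0; i < n; i++)
        printf("uop %d: %s\n", i, name[uops[i]]);
    return 0;
}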
 
  • #26
Geez, I had this funny feeling of reading the theory development forum. I double-checked; it says "Software". *phew*

-- AI
 
  • #27
TenaliRaman said:
Geez, I had this funny feeling of reading the theory development forum. I double-checked; it says "Software". *phew*

-- AI

This thread is like something out of the Twilight Zone. TD transposed into a real subforum...creepy.
 
  • #28
Before anyone says that this idea will not work, they should read this again:
Anyone who has some time and has a copy of some small Tiny BASIC or 8K BASIC and knows VHDL can try to do it for fun.
It will result in a massive speed increase because of the large amount of added parallelism; the idea is perfectly fine.

What you lose is flexibility, since you will have problems running other languages efficiently, and it is much harder to design a complex chip than it is to write a compiler or an interpreter for a language.
 
  • #29
I do have experience with VHDL, and I can say that it's not as simple as you're making it seem.

(1) You don't get massive parallelism for free -- a direct port of BASIC to a chip will not be parallelized at all. To get any parallelism out of it at all, it would take a fairly sophisticated design.

(2) You can't get much parallelism out of it anyways -- while you might be able to optimize the BASIC interpreter itself, that's all the parallelism you get. It can't make your arbitrary, run-of-the-mill BASIC program massively parallelized. To get massive parallelism, the program must be designed for and placed on the chip.

(3) You're not even guaranteed to run faster. :tongue2: CPUs are very well optimized devices -- I would not expect your result to be any better than simply compiling the program to machine language to run on the CPU. If you're using a FPGA for reconfigurability, instead of an ASIC, the speed discrepancy will be even greater!
 
  • #30
If you're using a FPGA for reconfigurability, instead of an ASIC, the speed discrepancy will be even greater!
I have an FPGA on my desk; even running at 50 MHz it can give a 2 GHz P4 a good fight when a program is written in VHDL instead of assembly or a high-level language. For example, a specific LFSR that is often used in pseudo-random generators ran at the same speed on the FPGA as it did in Visual C++. Hardwired on the same silicon as the P4, it would easily run at 100 times the speed of a program.
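
For context, a software LFSR of the kind being compared is only a few lines of C. This is a generic 16-bit Fibonacci LFSR; the width and polynomial actually used above are not stated, so these taps are just a common maximal-length choice:
Code:
#include <stdint.h>
#include <stdio.h>

/* 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11,
   a common maximal-length polynomial.                          */
static uint16_t lfsr_step(uint16_t s)
{
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

int main(void)
{
    uint16_t s = 0xACE1u;                 /* any nonzero seed          */
    for (int i = 0; i < 8; i++) {
        s = lfsr_step(s);
        printf("%04x\n", s);              /* a few pseudo-random words */
    }
    return 0;
}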

This is the FPGA I got:
http://www.digilentinc.com/info/D2SB.cfm

All the functions of BASIC can be translated to VHDL and get a massive increase in speed. If each function gets a huge speed increase, then the BASIC program will inherit the exact same speed increase without any change to the program. It is assembly that is hard to make faster, not higher-level languages.

You're not even guaranteed to run faster.
On the same silicon process and the same die size there will be a large performance increase. Even just making a new CPU with special instructions specific to the needs of the language would give a nice speed increase.
 
Last edited by a moderator:
  • #31
Hurkyl said:
(3) You're not even guaranteed to run faster. :tongue2: CPUs are very well optimized devices -- I would not expect your result to be any better than simply compiling the program to machine language to run on the CPU. If you're using a FPGA for reconfigurability, instead of an ASIC, the speed discrepancy will be even greater!

Actually, if you translate a CPU-bound program into VHDL, you probably will get a speed increase. But translating a program into VHDL can be a very non-trivial task, and it's a waste of time if your program is I/O-bound anyway.
 
  • #32
The goal is not speed but simplifying software. A CPU that can only be programmed directly in a BASIC variant simplifies everything: there are no longer compilers, and it is easy to debug. I would add all those funky features of Perl like associative arrays, regular expressions, etc.
 
  • #33
oldtobor said:
The goal is not speed but simplifying software. A CPU that can only be programmed directly in a BASIC variant simplifies everything: there are no longer compilers, and it is easy to debug. I would add all those funky features of Perl like associative arrays, regular expressions, etc.

No, compilers are still there. You just have to compile into BASIC instead of assembly language.

A BASIC chip wouldn't make it any easier to write programs using BASIC, and it definitely wouldn't make it easier to write programs in other languages. The only possible advantage of using a physical machine instead of a virtual one is speed.
 
  • #34
No way, José. The compiler exists because it has to convert a high-level language into op codes. In our CPU there are no longer opcodes but direct high-level instructions. The logic circuits take care of understanding them and activating registers and counters, etc. It is a true IDEAS machine that bypasses all we have always taken for granted in CPU design. With millions of transistors available, I think it is feasible. Then we have only ONE FUNKY HIGH-LEVEL LANGUAGE that takes care of everything, and all software is built up starting from a higher level.

You have a register group that takes care of the FOR instruction, another for the NEXT, another for the GOTO, etc. You just write the program; the chip reads it from RAM and immediately executes it. No more debugging nightmares or incompatible software. Of course, industry and academia may not really want to simplify software, for "cultural-economical" reasons...
 
  • #35
Bjørn Bæverfjord said:
All the functions of BASIC can be translated to VHDL and get a massive increase in speed. If each function gets a huge speed increase, then the BASIC program will inherit the exact same speed increase without any change to the program. It is assembly that is hard to make faster, not higher-level languages.
Microarchitecture, like all engineering pursuits, is about trade-offs. Sure, you can build a small algorithm like an LFSR into an FPGA and run it at the maximum toggle-rate of the FPGA, and it will likely be faster than the equivalent algorithm running on a general-purpose computer which requires many instructions. However, as the complexity of your algorithm goes up (say, all the way to an algorithm that will interpret Perl), the advantages disappear. At some level of complexity, it will no longer be able to compete with the common opcoded CPU architecture.

This is the reason modern microarchitecture is leaning more and more toward simpler hardware. First, there were CISC (complex instruction-set computing) chips, programmed mostly by hand. Today, RISC (reduced instruction-set computing) chips have center stage; they have simpler control paths and more function units per unit die area. Next, VLIW (very long instruction word) CPUs will take off. VLIW eliminates most of the control path, relying on sophisticated compilers to generate very long continuous runs of instructions that directly control the chip's function units. Eventually TTA (transport-triggered architecture) might take over, which eliminates the control path completely.

What this means, of course, is that complexity is being moved out of the chip, and into the compiler. The advantages of this are numerous: the biggest stumbling block for today's microprocessors is die size; it takes a long time to send signals back and forth across a very large chip. In the early days of microprocessors, the control path used to dominate the chip's area, but why have a control path when you don't actually need one? Why not use that chip area for more function units to actually get things done? It eliminates much of the cross-chip communication that limits clock speeds, and uses the die area more effectively.

Furthermore, putting the complexity in the compiler rather than in the chip means that the arduous task of scheduling instructions and doing branch predictions happens at compile-time rather than at run-time. What that means is simple: it will take longer to compile your program, but much less time to execute it. Since programs are compiled once and run many times, this is certainly the way to go.
On the same silicon process and the same die size there will be a large performance increase. Even just making a new CPU with special instructions specific to the needs of the language would give a nice speed increase.
This is true, but the answer is not to make a billion specialized instructions for every possible need; the trade-off is that your bloated chip now runs at 100 kHz.

- Warren
 
Last edited:
