# Why not hardwire software?

1. Apr 30, 2005

### nameta9

Thinking about various threads here and there talking about inefficient modern software, has anyone ever thought about hardwiring large chunks of software directly onto the CPU chip? After all, chips today contain millions of transistors, so why not hardwire a Linux kernel, Office Word, and Internet Explorer directly into hardware?

Would it be so difficult? After all, these chunks of programs are quite standard and stable. Maybe introducing 5 or 6 megachips that each carry a large piece of standard software implemented entirely in hardware could really improve things!

Does anyone think there are real technical limitations to what I imagine?

2. Apr 30, 2005

### infidel

For the same reason you don't hard-code values ("magic numbers") into software: you make the program read configuration files. Then changing, say, a directory location is as easy as shutting down the program, editing the text config file, and restarting it. Otherwise you'd have to recompile.
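infidel's point about magic numbers can be sketched in a few lines (Python purely for illustration; the directory path, section name, and file handling here are all made up for the example):

```python
import configparser
import os
import tempfile

# Hard-coded "magic number" approach: changing this value means
# editing the source and redeploying the program.
LOG_DIR_HARDCODED = "/var/log/myapp"

# Config-file approach: the same value lives in an external file
# and can be changed with a text editor and a restart.
config_text = "[paths]\nlog_dir = /var/log/myapp\n"

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(config_text)
    config_path = f.name

config = configparser.ConfigParser()
config.read(config_path)
log_dir = config["paths"]["log_dir"]
os.unlink(config_path)

print(log_dir)
```

The hardwired-software proposal is the magic-number problem taken to its extreme: the "value" baked in is the entire program, and the "recompile" becomes a chip respin.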

In your plan, what would you do when you needed to change the program? I'm not sure I'd want IE's security flaws to be permanently stored in a chip. At least now, MS can send out an update once in a while. (Actually, I use Firefox, but you get my point.)

Actually, this is done to some degree. Programs that control electronic devices are often put into chips and called "firmware." It can be changed, but the operation is not trivial, you are limited in how much you can put in there, and the programs are generally very small. This is done basically to eliminate any notion of 'installing' the software that your microwave oven, for example, needs to operate. This technique was also one of the main reasons for the Y2K panic: if firmware in, say, the circuits that operate nuclear missile launchers were to run differently due to misreading the date, it could be a disaster (or so the logic went).

It works reasonably well for hardware whose function really never needs to change. But most software changes too much for this to be practical.

3. Apr 30, 2005

### chroot

Staff Emeritus
1) Chips are expensive. Microprocessors already cost hundreds of dollars per unit. Microprocessor chip "real-estate" is one of the most valuable things on the planet, in fact. The cost of the chip goes up exponentially with its size.

2) The earliest computers WERE hardcoded. The reason people started making general-purpose computers and using software to control them was specifically so the computers could be cheap, and their functionality changeable. What you suggest is effectively doing away with the last 60 years of progress, making computers so expensive that only government institutions could own them.

3) As infidel pointed out, you probably don't want a bunch of bugs hardcoded into your CPU. The reason CPUs generally don't have many bugs is because they are kept simple, symmetric, and testable, with a finite number of input vectors and output vectors. Once you introduce a block like IE into your chip, you lose all of those advantages.

4) infidel: firmware is really just software that's stored in non-volatile memory.

- Warren

4. Apr 30, 2005

### infidel

Thanks for that, Warren. The day is almost over and I hadn't learned anything new yet.

5. May 1, 2005

### oldtobor

Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least for the most common and standard things. Why not just hard-code those parts?

Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs, etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?

6. May 1, 2005

### faust9

For the reasons already mentioned: $ and bugs. If an AMD chip with the Linux kernel hard-coded in cost $1.37, then you'd see the Linux kernel come hard-coded. The cost of hard-coding something like that would be great, to say the least, so it's not going to happen anytime soon. Moreover, the programs you mentioned are constantly changing. Kernels are always being patched. Office software is constantly being updated. Hard-coding it would be an exercise in futility, to say the least.

> Also since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs etc), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?

100-line AVR/PIC programs are dwarfed by billion-line MS/KDE/GNOME/whatever programs written mostly in C (or, to a lesser extent, C++). ASM is still used in the above, but I don't think it is the most widely used language on the planet. My money would be on C if I were a betting man.

7. May 1, 2005

### franznietzsche

What world have you been living on? All of those things have new releases regularly, semi-regularly and when MS says, respectively.

Uh... no. Cell phones and TVs are rarely programmed in assembly language, AFAIK. Many cell phones run on either Windows CE or Linux, as do many other embedded systems, like portable music players, PDAs, etc.

8. May 1, 2005

### chroot

Staff Emeritus
The majority of the world's microcontroller code is in C. Assembly on chips like the PIC, however, is really very easy, since there are only (IIRC) 33 instructions anyway.

And, as others have mentioned, oldtobor, the linux kernel changes DAILY, as do the kernels for other platforms.

- Warren

9. May 1, 2005

### Integral

Staff Emeritus
Consider the BIOS which starts your computer, the old-style game cartridges (CD/DVD is just cheaper), and a lot of OEM-type ROMs (I have a plug-in ROM that turns an iPaq into an electronic level).

Your innovation is not uncommon; it is done all the time in many different ways.

10. May 2, 2005

### egsmith

To describe the various trade-offs involved would take a whole book.

Also, don't underestimate the size of modern software. A suite of software can take hundreds of megabytes. You could use the entire die just holding the thing.

However, engineers do translate specific software algorithms into transistors all the time, though they usually only choose algorithms that are highly parallelizable and compute-bound [*]. Most user software, which I think is what you were describing, is I/O-bound on the user (obviously), the network, or the disk, so there is not a lot to be gained. The algorithm just ends up sleeping while waiting for more data (check the amount of time your computer spends in the idle process).
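The I/O-bound argument can be illustrated with a toy measurement (Python purely for illustration; the wait is simulated with `time.sleep` and the durations are made up): even an infinitely fast, hardwired CPU would not shorten the wait, so accelerating the compute part buys almost nothing.

```python
import time

def io_bound_task():
    # Simulate waiting on the user, network, or disk; a faster
    # (or hardwired) CPU would not shorten this wait at all.
    time.sleep(0.2)
    # Trivial compute portion, negligible next to the wait.
    return sum(range(1000))

start = time.perf_counter()
result = io_bound_task()
elapsed = time.perf_counter() - start
# Nearly all of the wall-clock time is spent waiting, not computing.
print(result, elapsed >= 0.2)
```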

Also, as previously discussed, once you commit to an algorithm in hardware, which is a huge investment, you are stuck. So you had better be sure the algorithm is not going to change any time soon. This is why most companies build "general purpose" hardware: it ensures the product is an option for the largest customer base for the longest time.

* The most common examples of software in hardware are:
- GPUs
- DSPs
- SSL chips for VPNs on some network cards, switches, and routers

This is not so common but...
There are even CPUs that speak high-level languages like LISP, as described in
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf

Last edited: May 2, 2005
11. May 2, 2005

### nmondal

It comes down to two engineering decisions:

1. The cost to upgrade.
2. The frequency in which you need to release an upgrade.

Examples:
1. How can you upgrade the chip if the firmware needs an upgrade? A typical example is a DVD player with an MPEG board in it... no need to upgrade; everybody agrees on the MPEG structure, so we can take the risk of making it in the HW itself.

2. There is a Java microprocessor running a JVM... and Java patches come out every 2-3 months... so what is the use? :grumpy:

And that explains the *real* reason.

12. May 2, 2005

### jlorino

Hardcoding, Microsoft

Isn't Microsoft going to be hardcoding stuff like spam and virus protection into the 64-bit chips for Longhorn?

13. May 3, 2005

### oldtobor

Actually I'm starting to be convinced that it is a good idea! Take the awk95 programming language. It is only 200K in size, which would mean 1,600,000 bits, or, considering all the overhead, maybe 20 million transistors. Pentiums can have 100 million transistors, so even a hardwired awk on a chip would only occupy 20% of the chip. But then all the software could be directly coded in awk, completely bypassing assembly language! Now that would be interesting. In the early 80s they had BASIC in ROMs of only 4K or 8K, so it is conceivable to just hardwire the whole language, directly program the CPU in a high-level language, and build up all the complex applications starting from a higher level!

14. May 3, 2005

### master_coda

Since the 200K binary still requires a CPU to execute, you would in no way be bypassing the CPU. And sticking the program data directly on the die doesn't change that fact.

And what is the advantage? How is running awk directly in hardware better than running awk in software on top of another piece of hardware? Running a program on a physical machine isn't automatically better than running it on a virtual machine (it isn't even guaranteed to be faster on a physical machine).

15. May 3, 2005

### franznietzsche

Cause you know, you can hardcode that stuff in effectively. Cause like, Gates says so.

16. May 4, 2005

### nameta9

CPUs ultimately process assembly language code. It is the same program you write in BASIC, only translated into microcode and executed (actually machine language). To program a CPU you need to establish the order of the microcode, but to do this you need to PROGRAM in assembler. If a language is hardwired, the architecture of the CPU is already optimized for the language. You don't bypass the CPU, you just speed everything up 100-fold! I would hardwire the following languages:

1) C
2) C++
3) FORTRAN
4) COBOL
5) BASIC/VISUAL BASIC

so you don't have to rewrite all the code in these languages.
Actually Gates was quite clever when he squeezed a BASIC interpreter into 4K of ROM. Those are the kind of things they should try to do today, only implementing it directly in hardware!

17. May 4, 2005

### chroot

Staff Emeritus
This is false, as has already been explained. Perhaps you should look into educating yourself about microarchitecture before making such statements?

- Warren

18. May 4, 2005

### nameta9

No, what I wanted to say is that the architecture of the CPU would be "designed" to be optimized for the given language.

Anyway, a CPU that had a hardwired language on it would be an interesting experiment. I am not sure if any companies ever tried it out. It could be the equivalent of the 4K ROM BASIC, after which thousands of applications were written for the language.

19. May 4, 2005

### TenaliRaman

Whoa!! This thread has been quite a retro trip. Mr. von Neumann would have ignored this thread for some reason.

By the way,
First of all, how do you hardwire a language?
I believe you mean hardwiring the compiler program. If so, let's assume that this is actually feasible. What would that achieve? Faster compile times, I believe. But does it speed up the final program? Nope!

Now, if instead of compiled languages you had been talking about interpreted languages, I would have given your idea my 2 cents of thought, but your list doesn't include any interpreted languages. Personally, "hardwiring" interpreters isn't a good idea either. Why should I give up flexibility and changeability for a small increase in speed? This point has been made repeatedly in earlier posts; you may like to re-read the entire thread.

-- AI
P.S. You may like to know that people do develop ASICs (Application-Specific ICs), for example a simple microcontroller in a temperature-controlling device. When you know that some software isn't going to change for a long time to come, it's OK for it to be hardcoded, but when something changes as fast as our current software... nope, it's completely infeasible.

20. May 4, 2005

### oldtobor

Yes, there are A LOT of different ways to "hardwire" a language. What I had in mind was a completely NEW CPU design that does away with the assembler-language op-code design entirely and just implements all the language constructs directly in the chip. Therefore NO COMPILER, NO INTERPRETER, NO OP CODES, only a pure ideas machine. Like, one register would control the FOR function, another would handle the NEXT, etc. The best way to start would be to implement a small BASIC-language CPU. An ASIC design could be fine. Anyone who has some time, a copy of some small Tiny BASIC or 8K BASIC, and knows VHDL could try it for fun.
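To get a feel for what "one register for FOR, another for NEXT" actually entails, here is a toy software model of the required state machine (Python used as executable pseudocode; the instruction format and all names are invented for illustration). A VHDL version would have to hold the same state in registers: a program counter, a variable store, and a loop stack tracking the loop variable, its limit, and the body's address.

```python
# Toy FOR/NEXT interpreter. The three pieces of state below are
# exactly what a hardware implementation would keep in registers.
program = [
    ("FOR", "I", 1, 3),   # FOR I = 1 TO 3
    ("PRINT", "I"),
    ("NEXT", "I"),
    ("END",),
]

def run(program):
    pc = 0            # program counter register
    variables = {}    # variable store
    loop_stack = []   # (variable, limit, body address): the "FOR state"
    output = []
    while True:
        op = program[pc]
        if op[0] == "FOR":
            _, var, start, limit = op
            variables[var] = start
            loop_stack.append((var, limit, pc + 1))
            pc += 1
        elif op[0] == "PRINT":
            output.append(variables[op[1]])
            pc += 1
        elif op[0] == "NEXT":
            var, limit, body = loop_stack[-1]
            variables[var] += 1
            if variables[var] <= limit:
                pc = body          # jump back to the loop body
            else:
                loop_stack.pop()   # loop done, fall through
                pc += 1
        elif op[0] == "END":
            return output

print(run(program))  # [1, 2, 3]
```

Even this minimal sketch still fetches, decodes, and sequences operations, which is to say it is still a CPU with op-codes, just ones named after BASIC keywords. That is essentially the objection the earlier posts raised.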