Why not hardwire software onto CPU chips for improved efficiency?

  • Thread starter: nameta9
  • Tags: Software
AI Thread Summary
The discussion centers on the feasibility of hardwiring software, such as operating systems and applications, directly onto CPU chips to enhance efficiency. Participants express skepticism, citing the high costs and technical challenges associated with hardcoding software, especially given the rapid evolution and frequent updates of modern applications. The conversation highlights that while some firmware is already embedded in chips, most user software is too complex and variable to be effectively hardwired. Concerns about security vulnerabilities and the implications of permanently embedding potentially flawed software are also raised. Ultimately, the consensus is that hardcoding software into CPUs is impractical due to the dynamic nature of software development and the significant costs involved.
nameta9
Thinking about various threads here and there talking about inefficient modern software, has anyone ever thought about hardwiring large chunks of software directly onto the CPU chips? After all, chips today contain millions of transistors, so why not hardwire a Linux kernel, Office Word, and Internet Explorer directly into hardware?

Would it be so difficult? After all, these chunks of programs are quite standard and stable. Maybe introducing 5 or 6 megachips that each have a large piece of standard software made entirely in hardware could really improve things!

Does anyone think there are real technical limitations to what I imagine?
 
For the same reason you don't hard-code values ("magic numbers") into software. You make the program read configuration files. Then changing, say, a directory location is as easy as shutting down the program, editing the text config file, and restarting the program. Otherwise you'd have to recompile.
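To make the analogy concrete, here is a minimal C sketch of the config-file approach; the file name "app.conf" and the "datadir=" key are made up for illustration:

Code:
/* Minimal sketch: read a directory location from a text config file
   instead of hardcoding it. "app.conf" and "datadir=" are invented
   names for illustration. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char datadir[256] = "/var/data";            /* fallback default */
    char line[512];
    FILE *cfg = fopen("app.conf", "r");

    if (cfg) {
        while (fgets(line, sizeof line, cfg)) {
            if (strncmp(line, "datadir=", 8) == 0) {
                strncpy(datadir, line + 8, sizeof datadir - 1);
                datadir[strcspn(datadir, "\n")] = '\0';   /* strip newline */
            }
        }
        fclose(cfg);
    }
    printf("using data directory: %s\n", datadir);
    return 0;
}

Changing the directory then means editing app.conf and restarting, rather than recompiling.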

In your plan, what would you do when you needed to change the program? I'm not sure I'd want IE's security flaws to be permanently stored in a chip. At least now, MS can send out an update once in a while. (Actually, I use Firefox, but you get my point.)

Actually, this is done to some degree. Programs that control electronic devices are often put into chips and called "firmware." It can be changed, but the operation is not trivial, you are limited in how much you can put in there, and the programs are generally very small. This is done basically to eliminate any notion of 'installing' the software your microwave oven, for example, needs to operate. Actually, this technique was one of the main reasons for the Y2K panic. If software in, say, the circuits that operate nuclear missile launchers were to run differently due to misreading the date, it could be a disaster (or so the logic went).

This works somewhat with hardware that really never needs to change. Basically, software changes too much for this to be practical.
 
1) Chips are expensive. Microprocessors already cost hundreds of dollars per unit. Microprocessor chip "real-estate" is one of the most valuable things on the planet, in fact. The cost of the chip goes up exponentially with its size.

2) The earliest computers WERE hardcoded. The reason people started making general-purpose computers and using software to control them was specifically so the computers could be cheap, and their functionality changeable. What you suggest is effectively doing away with the last 60 years of progress, making computers so expensive that only government institutions could own them.

3) As infidel pointed out, you probably don't want a bunch of bugs hardcoded into your CPU. The reason CPUs generally don't have many bugs is because they are kept simple, symmetric, and testable, with a finite number of input vectors and output vectors. Once you introduce a block like IE into your chip, you lose all of those advantages.

4) infidel: firmware is really just software that's stored in non-volatile memory.

- Warren
 
chroot said:
4) infidel: firmware is really just software that's stored in non-volatile memory.
Thanks for that, Warren. The day is almost over and I hadn't learned anything new yet. :biggrin:
 
Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least the most common and standard things. Why not just hard-code those parts?

Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs, etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?
 
oldtobor said:
Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least the most common and standard things. Why not just hard-code those parts?

For the reasons already mentioned: $$$$$ and bugs. If an AMD chip with the Linux kernel hard-coded in cost $1.37, then you'd see the Linux kernel come hard-coded. The cost of hard-coding something like that would be great, to say the least, so it's not going to happen anytime soon. Moreover, the programs you mentioned are constantly changing. Kernels are always being patched. Office software is constantly being updated. Hard-coding it would be an exercise in futility, to say the least.

oldtobor said:
Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs, etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?

100-line AVR/PIC programs are dwarfed by billion-line MS/KDE/GNOME/whatever programs written mostly in C (or to a lesser extent C++). ASM is still used in the above, but I don't think it is the most widely used language on the planet. My money would be on C if I were a betting man.
 
oldtobor said:
Thanks for the knowledge! What I was thinking about is the basic, fundamental parts of code that don't change. The basic Unix/Linux kernel has remained the same for years, as has the basic Perl interpreter/compiler or Word/Excel, at least the most common and standard things. Why not just hard-code those parts?

What world have you been living on? All of those things have new releases regularly, semi-regularly and when MS says, respectively.

Also, since microcontrollers outnumber PCs maybe 10 to 1 (think of all the cell phones, TVs, etc.), would that make ASSEMBLER the most used programming language? Or at least the language with the largest body of code worldwide?

Uh...no. Cell phones and TVs are rarely programmed in assembly language, AFAIK. Many cell phones run on either Windows CE or Linux, as do many other embedded systems, like portable music players, PDAs, etc.
 
The majority of the world's microcontroller code is in C. Assembly on chips like the PIC, however, is really very easy, since there are only (IIRC) 33 instructions anyway.

And, as others have mentioned, oldtobor, the linux kernel changes DAILY, as do the kernels for other platforms.

- Warren
 
nameta9 said:
Thinking about various threads here and there talking about inefficient modern software, has anyone ever thought about hardwiring large chunks of software directly onto the CPU chips? After all, chips today contain millions of transistors, so why not hardwire a Linux kernel, Office Word, and Internet Explorer directly into hardware?

Would it be so difficult? After all, these chunks of programs are quite standard and stable. Maybe introducing 5 or 6 megachips that each have a large piece of standard software made entirely in hardware could really improve things!

Does anyone think there are real technical limitations to what I imagine?
Consider the BIOS which starts your computer, the old-style game cartridges (CD/DVD is just cheaper), and a lot of OEM-type ROMs (I have a plug-in ROM that turns an iPAQ into an electronic level).

Your innovation is not uncommon; it is done all the time in many different ways.
 
  • #10
nameta9 said:
Does anyone think there are real technical limitations to what I imagine?

To describe the various trade-offs involved would take a whole book.

Also, don't underestimate the size of modern software. A suite of software can take hundreds of megabytes. You could use the entire die just holding the thing.

However, engineers do translate specific software algorithms into transistors all the time, though they usually only choose software algorithms that are highly parallelizable and compute-bound [*]. Most user software, which is what I think you were describing, is I/O-bound to the user (obviously), the network, or the disk, so there is not a lot to be gained. The algorithm just ends up sleeping while waiting for more data (check the amount of time your computer spends in the idle process).
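As a rough back-of-envelope on that point (standard Amdahl's-law reasoning, not a measurement of any real program): if a fraction p of the run time is compute and the rest is spent waiting on the user, network, or disk, then speeding the compute part up by a factor s gives an overall speedup of 1 / ((1 - p) + p/s). For a heavily I/O-bound program with p = 0.05, even s → ∞ caps the gain at 1 / 0.95 ≈ 1.05, about 5%.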

Also, as previously discussed, once you commit to an algorithm in hardware, which is a huge investment, you are stuck. So you had better be sure the algorithm is not going to change any time soon. This is why most companies build "general purpose" hardware. It ensures the product is an option to the largest customer base for the longest time.

* The most common examples of software in hardware are:
GPUs
DSPs
SSL chips for VPNs on some network cards, switches and routers

This is not so common but...
There are even CPUs that speak high level languages like LISP as described in
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf
 
  • #11
It comes down to two engineering decisions:

1. The cost to upgrade.
2. The frequency with which you need to release an upgrade.

Examples:
1. How can you upgrade the chip if the firmware needs an upgrade? A typical example is a DVD player with an MPEG board in it... no need to upgrade; everybody agrees on the MPEG structure, so we can take the risk of making it in the HW itself...

2. There is a Java microprocessor, running a JVM... and Java patches come every 2-3 months... so what is the use?

And that explains the *real* reason.
 
  • #12
hardcoding, microsoft

Isn't Microsoft going to be hardcoding stuff into the 64-bit chips, like for spam and viruses, for Longhorn?
 
  • #13
Actually I'm starting to be convinced that it is a good idea! Take the awk95 programming language. It is only 200K in size, which would mean 1,600,000 bits, or considering all the overhead maybe 20 million transistors. Pentiums can have 100 million transistors, so even a hardwired awk on the chip would only occupy 20% of the chip. But then all the software could be coded directly in awk, completely bypassing assembly language! Now that would be interesting. In the early 80s they had BASIC in ROMs of only 4K or 8K, so it is conceivable to just hardwire the whole language and directly program the CPU in a high-level language, building up all the complex applications starting from a higher level!
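As a hedged back-of-envelope on the storage alone (the per-bit transistor counts here are rough textbook rules of thumb, and decoders and wiring are ignored):

Code:
/* Back-of-envelope only: ~1 transistor per mask-ROM bit and ~6 per SRAM
   bit are rough textbook figures; decoders and wiring are ignored. */
#include <stdio.h>

int main(void) {
    const long bits = 200000L * 8;    /* a ~200K binary => 1,600,000 bits */
    printf("as mask ROM (~1 T/bit): %ld transistors\n", bits);
    printf("as SRAM     (~6 T/bit): %ld transistors\n", bits * 6);
    return 0;
}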
 
  • #14
oldtobor said:
Actually I'm starting to be convinced that it is a good idea! Take the awk95 programming language. It is only 200K in size, which would mean 1,600,000 bits, or considering all the overhead maybe 20 million transistors. Pentiums can have 100 million transistors, so even a hardwired awk on the chip would only occupy 20% of the chip. But then all the software could be coded directly in awk, completely bypassing assembly language! Now that would be interesting. In the early 80s they had BASIC in ROMs of only 4K or 8K, so it is conceivable to just hardwire the whole language and directly program the CPU in a high-level language, building up all the complex applications starting from a higher level!

Since the 200K binary still requires a CPU to execute, you would in no way be bypassing the CPU. And sticking the program data directly on the die doesn't change that fact.

And what is the advantage? How is running awk directly in hardware better than running awk in software on top of another piece of hardware? Running a program on a physical machine isn't automatically better than running it on a virtual machine (it isn't even guaranteed to be faster on a physical machine).
 
  • #15
jlorino said:
Isn't Microsoft going to be hardcoding stuff into the 64-bit chips, like for spam and viruses, for Longhorn?
Cause you know, you can hardcode that stuff in effectively. Cause like, Gates says so.
 
  • #16
CPUs ultimately process machine language code. It is the same program you write in BASIC, only translated into machine language and executed. To program a CPU you need to establish the order of the instructions, but to do this you need to PROGRAM in assembler. If a language is hardwired, the architecture of the CPU is already optimized for the language. You don't bypass the CPU, you just speed everything up 100 fold! I would hardwire the following languages:

1) C
2) C++
3) FORTRAN
4) COBOL
5) BASIC/VISUAL BASIC

so you don't have to rewrite all the code of these languages.
Actually, Gates was quite clever when he squeezed a BASIC interpreter into 4K of ROM. Those are the kinds of things that they should try to do today, only implementing them directly in hardware!
 
  • #17
nameta9 said:
If a language is hardwired, the architecture of the CPU is already optimized for the language. You don't bypass the CPU, you just speed everything up 100 fold!
This is false, as has already been explained. Perhaps you should look into educating yourself about microarchitecture before making such statements?

- Warren
 
  • #18
No, what I wanted to say is that the architecture of the CPU would be "designed" to be optimized for the given language.

Anyways a CPU that had a hardwired language on it would be an interesting experiment. I am not sure if any companies ever tried it out. It could be the equivalent of the 4k ROM Basic, after which thousands of applications were written for the language.
 
  • #19
Whoa! This has been retro. Mr. von Neumann would have ignored this thread for some reason.

By the way,
I would hardwire the following languages:

1) C
2) C++
3) FORTRAN
4) COBOL
5) BASIC/VISUAL BASIC

First of all, how do you hardwire a language?
I believe you mean hardwiring the compiler program. If so, let's assume that this is actually feasible. What would this achieve for us? Faster compiling time, I believe. But does it speed up the final program? Nope!

Now if, instead of talking about compiled languages, you were talking about interpreted languages, I would have given your idea my 2 cents of thought, but in your list you don't include any of the interpreted languages. Personally, "hardwiring" interpreters isn't a good idea either. Why should I give up flexibility and change for a few ounces of increase in speed? This point has been repeatedly brought up even in earlier posts; you may like to re-read the entire thread.

-- AI
P.S. -> You may like to know that people do develop ASICs (Application-Specific ICs). For example, a simple microcontroller on a temperature-controlling device. When you know that some software isn't going to be changed for a long time to come, it's OK for it to be hardcoded, but when something changes as fast as our current software... nope, it's completely infeasible.
 
  • #20
Yes, there are A LOT OF DIFFERENT WAYS to "hardwire" a language. What I had in mind was a completely NEW CPU design that completely does away with the assembler-language opcode design and just implements all the language constructs directly in the chip. Therefore NO COMPILER, NO INTERPRETER, NO OPCODES, only a pure ideas machine. Like, one register group would have the FOR control function, another would have the NEXT, etc. The best way to start is to implement a small BASIC-language CPU. An ASIC design could be fine. Anyone who has some time, has a copy of some small Tiny BASIC or 8K BASIC, and knows VHDL can try to do it for fun.
 
  • #21
(Oldtobor/nameta9), you obviously don't know what opcodes are. If you did, you would know they are essential. There is no way to get around it. You need to associate each instruction or "keyword" with a unique number. Without this how do you expect the CPU to interpret what operation you want to execute?

For example:

Code:
opcode  instruction
0000     mov
0001     add
0002     sub
0003     jmp

One of the jobs that the assembler has is to convert instructions into their opcode.
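A minimal C sketch of that lookup step, using the made-up opcode table above rather than any real instruction set:

Code:
/* Toy assembler step: map a mnemonic to its opcode. The mnemonics and
   opcode numbers are the illustrative ones from the table above, not a
   real ISA. */
#include <stdio.h>
#include <string.h>

struct entry { const char *mnemonic; unsigned char opcode; };

static const struct entry table[] = {
    { "mov", 0 }, { "add", 1 }, { "sub", 2 }, { "jmp", 3 },
};

static int lookup(const char *mnemonic) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return table[i].opcode;
    return -1;  /* unknown instruction */
}

int main(void) {
    printf("add -> opcode %d\n", lookup("add"));
    return 0;
}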
 
  • #22
oldtobor said:
Therefore NO COMPILER, NO INTERPRETER, NO OPCODES, only a pure ideas machine. Like, one register group would have the FOR control function, another would have the NEXT, etc.
Then FOR and NEXT are your non-existent opcodes. As has been said, there is no way to build a digital machine that does not have some form of opcode, because you have to distinguish one instruction from another.

- Warren
 
  • #23
One must be very inventive and creative. It is more a RESEARCH idea than anything. After all, in 1975 who would have ever thought of implementing Bill Gates' BASIC interpreter in 4K of ROM? I think that we MAY not be using the millions of transistors on CPUs in the best possible way. A lot of research has probably been done towards studying alternative CPU designs. It is really just intriguing that 30 years ago we could put an interpreter in 4K, so maybe with a few million transistors we could possibly organize chips to directly understand even a simple BASIC-like language.
 
  • #24
nameta9 said:
One must be very inventive and creative. It is more a RESEARCH idea than anything. After all, in 1975 who would have ever thought of implementing Bill Gates' BASIC interpreter in 4K of ROM?
I think you may be missing the point that putting an interpreter in ROM is NOT the same thing as hardwiring an interpreter in logic. Do you understand the difference?
I think that we MAY not be using the millions of transistors on CPUs in the best possible way.
Sorry, but you really have no idea what you're talking about. Do you have any idea how microprocessors are designed? Can you tell me what cache-coherency means? Can you tell me what the terms 'superscalar' and 'branch prediction' and 'pipeline' mean? Do you really not think it's a bit heady of you to denounce the work of hundreds of thousands of people who know more than you about the topic?

- Warren
 
  • #25
RISC and CISC are keywords you would want to look at as well (Reduced / Complex Instruction Set Computer). You seem to be describing an elaborate CISC.

If I recall correctly, modern CISC processors translate the complex instruction set into an internal reduced instruction set at runtime anyways.
 
  • #26
Geez, I had this funny feeling of reading the Theory Development forum. I double-checked; it says "Software". *phew*

-- AI
 
  • #27
TenaliRaman said:
Geez, I had this funny feeling of reading the Theory Development forum. I double-checked; it says "Software". *phew*

-- AI

This thread is like something out of the Twilight Zone. TD transposed into a real subforum...creepy.
 
  • #28
Before anyone says that this idea will not work, they should read this again:
Anyone who has some time, has a copy of some small Tiny BASIC or 8K BASIC, and knows VHDL can try to do it for fun.
It will result in a massive speed increase because of large amounts of increased parallelism; the idea is perfectly fine.

What you lose is flexibility, since you will have problems running other languages efficiently, and it is much harder to design a complex chip than it is to make a compiler or an interpreter for a language.
 
  • #29
I do have experience with VHDL, and I can say that it's not as simple as you're making it seem.

(1) You don't get massive parallelism for free -- a direct port of BASIC to a chip will not be parallelized at all. To get any parallelism out of it at all, it would take a fairly sophisticated design.

(2) You can't get much parallelism out of it anyways -- while you might be able to optimize the BASIC interpreter, that's all the parallelism you get. It can't make your arbitrary, run of the mill, BASIC program massively parallelized. To get massive parallelism, the program must be designed for and placed on the chip.

(3) You're not even guaranteed to run faster. :-p CPUs are very well optimized devices -- I would not expect your result to be any better than simply compiling the program to machine language to run on the CPU. If you're using a FPGA for reconfigurability, instead of an ASIC, the speed discrepancy will be even greater!
 
  • #30
If you're using a FPGA for reconfigurability, instead of an ASIC, the speed discrepancy will be even greater!
I have an FPGA on my desk; even running at 50 MHz it can give a 2 GHz P4 a good fight when a program is written in VHDL instead of assembly or a high-level language. For example, a specific LFSR that is often used in pseudo-random generators ran at the same speed on the FPGA as it did in Visual C++. Hardwired on the same silicon as the P4, it would easily run at 100 times the speed of a program.
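For scale, an LFSR really is only a handful of gates. Here is a generic 32-bit Galois LFSR in C, using a commonly quoted maximal-length tap set; this is not necessarily the specific LFSR Bjørn benchmarked:

Code:
/* Generic 32-bit Galois LFSR with taps 32,22,2,1 (a commonly quoted
   maximal-length polynomial). In hardware the whole update is a shift
   plus a few XORs, i.e. roughly one clock cycle. Not necessarily the
   LFSR tested above. */
#include <stdint.h>
#include <stdio.h>

static uint32_t lfsr_step(uint32_t state) {
    uint32_t lsb = state & 1u;        /* output bit */
    state >>= 1;
    if (lsb)
        state ^= 0x80200003u;         /* feedback tap mask */
    return state;
}

int main(void) {
    uint32_t s = 0xACE1u;             /* any nonzero seed */
    for (int i = 0; i < 5; i++) {
        s = lfsr_step(s);
        printf("%08x\n", s);
    }
    return 0;
}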

This is the FPGA I got:
http://www.digilentinc.com/info/D2SB.cfm

All the functions of BASIC can be translated to VHDL and get a massive increase in speed. If each function gets a huge speed increase, then the BASIC program will inherit the exact same speed increase without any change to the program. It is assembly that is hard to make faster, not higher-level languages.

You're not even guaranteed to run faster.
On the same silicon process and the same die size there will be a large performance increase. Even just making a new CPU that has special instructions specific to the needs of the language will give a nice speed increase.
 
  • #31
Hurkyl said:
(3) You're not even guaranteed to run faster. :-p CPUs are very well optimized devices -- I would not expect your result to be any better than simply compiling the program to machine language to run on the CPU. If you're using a FPGA for reconfigurability, instead of an ASIC, the speed discrepancy will be even greater!

Actually, if you translate a CPU bound program into VHDL, you probably will get a speed increase. But translating a program into VHDL can be a very non-trivial task; and it's a waste of time if your program is I/O bound anyway.
 
  • #32
The goal is not speed but simplifying software. A CPU that can only be programmed directly in a BASIC variant simplifies everything: there are no longer compilers, and it is easy to debug. I would add all those funky features of PERL, like associative arrays, regular expressions, etc.
 
  • #33
oldtobor said:
The goal is not speed but simplifying software. A CPU that can only be programmed directly in a BASIC variant simplifies everything: there are no longer compilers, and it is easy to debug. I would add all those funky features of PERL, like associative arrays, regular expressions, etc.

No, compilers are still there. You just have to compile into BASIC instead of assembly language.

A BASIC chip wouldn't make it any easier to write programs using BASIC, and it definitely wouldn't make it easier to write programs in other languages. The only possible advantage of using a physical machine instead of a virtual one is speed.
 
  • #34
No way, José. The compiler exists because it has to convert a high-level language to opcodes. In our CPU there are no longer opcodes but direct high level instructions. The logic circuits take care of understanding them and activating registers and counters, etc. It is a true IDEAS machine that bypasses all we have always taken for granted in CPU design. With millions of transistors available I think it is feasible. Then we only have ONE FUNKY HIGH-LEVEL LANGUAGE that takes care of it all; all software is built up starting from a higher level.

You have a register group that takes care of the FOR instruction, another for the NEXT, another for the GOTO, etc. You just write the program, the chip reads it from RAM and immediately executes it. No more debugging nightmares or incompatible software. Of course, industry and academia may not really want to simplify software for "cultural-economical" reasons...
 
  • #35
Bjørn Bæverfjord said:
All the functions of BASIC can be translated to VHDL and get a massive increase in speed. If each function gets a huge speed increase, then the BASIC program will inherit the exact same speed increase without any change to the program. It is assembly that is hard to make faster, not higher-level languages.
Microarchitecture, like all engineering pursuits, is about trade-offs. Sure, you can build a small algorithm like an LFSR into an FPGA and run it at the maximum toggle-rate of the FPGA, and it will likely be faster than the equivalent algorithm running on a general-purpose computer which requires many instructions. However, as the complexity of your algorithm goes up (say, all the way to an algorithm that will interpret Perl), the advantages disappear. At some level of complexity, it will no longer be able to compete with the common opcoded CPU architecture.

This is the reason modern microarchitecture is leaning more and more toward simpler hardware. First, there were CISC (complex instruction-set computing) chips, programmed mostly by hand. Today, RISC (reduced instruction-set computing) chips have center stage; they have simpler control paths and more function units per unit die area. Next, VLIW (very long instruction word) CPUs will take off. VLIW eliminates most of the control path, relying on sophisticated compilers to generate very long continuous runs of instructions that directly control the chip's function units. Eventually TTA (transport-triggered architecture) might take over, which eliminates the control path completely.

What this means, of course, is that complexity is being moved out of the chip, and into the compiler. The advantages of this are numerous: the biggest stumbling block for today's microprocessors is die size; it takes a long time to send signals back and forth across a very large chip. In the early days of microprocessors, the control path used to dominate the chip's area, but why have a control path when you don't actually need one? Why not use that chip area for more function units to actually get things done? It eliminates much of the cross-chip communication that limits clock speeds, and uses the die area more effectively.

Furthermore, putting the complexity in the compiler rather than in the chip means that the arduous task of scheduling instructions and doing branch predictions happens at compile-time rather than at run-time. What that means is simple: it will take longer to compile your program, but much less time to execute it. Since programs are compiled once and run many times, this is certainly the way to go.
On the same silicon process and the same die size there will be a large performance increase. Even just making a new CPU that has special instructions specific to the needs of the language will give a nice speed increase.
This is true, but the answer is not to make a billion specialized instructions for every possible need; the trade-off is that your bloated chip now runs at 100 kHz.

- Warren
 
  • #36
oldtobor said:
In our CPU there are no longer opcodes but direct high level instructions.
And the direct, high level instructions are called 'opcodes.' Apparently, you just don't like the word opcode; but any atomic operation on a processor is called an opcode.

Your hardwired BASIC interpreter will still run a fetch-decode-execute cycle; it just happens that you've built in special opcodes that facilitate BASIC. It doesn't even mean your processor will run any faster; it just means it will be easier to program, and consequently have a larger control path. As I've already explained, this is an engineering trade-off: either it's easy to program by hand, or it runs fast. You really cannot have both! You cannot increase the complexity of the control path and not take a hit in speed.
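To make that concrete, here is a toy fetch-decode-execute loop in C with invented, BASIC-flavored opcodes; even "direct high-level instructions" still have to be numbers the control logic can decode and dispatch on:

Code:
/* Toy machine: even "high-level" instructions end up as opcodes that a
   fetch-decode-execute loop dispatches on. Everything here is invented
   for illustration. */
#include <stdio.h>

enum { OP_FOR, OP_PRINT, OP_NEXT, OP_HALT };   /* the opcodes */

int main(void) {
    /* tiny program: FOR i = 0 TO 2 : PRINT i : NEXT */
    int program[] = { OP_FOR, 0, 3, OP_PRINT, OP_NEXT, OP_HALT };
    int i = 0, loop_top = 0, limit = 0, pc = 0;

    for (;;) {
        int op = program[pc++];          /* fetch + decode */
        switch (op) {                    /* execute */
        case OP_FOR:   i = program[pc++]; limit = program[pc++]; loop_top = pc; break;
        case OP_PRINT: printf("%d\n", i); break;
        case OP_NEXT:  if (++i < limit) pc = loop_top; break;
        case OP_HALT:  return 0;
        }
    }
}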
The logic circuits take care of understanding them and activating registers and counters etc. It is a true IDEAS machine that bypasses all we have always taken for granted in CPU design. With millions of transistors available I think it is feasable. Then we only have ONE FUNKY HIGH LEVEL LANGUAGE that takes care of all, all software is built up starting from a higher level.
The reason processors are getting 'dumber' (i.e. moving from CISC to RISC to VLIW to TTA) is not because programmers enjoy programming dumb chips; the reason is that dumb chips are fast. The bottom line is simple: there are 10,000 users for every programmer. Making the programmer's job a little more difficult is of no consequence; what matters is that the user's machine runs faster. You seem to be missing this.
No more debugging nightmares or incompatible software. Of course industry and academia may not really want to simplify software for "cultural - economical" reasons...
I fail to see how a hardwired BASIC interpreter would eliminate debugging. Are you suggesting I could not write an algorithm that wouldn't work on such a chip? :smile:

And how would it eliminate incompatible software? Most software incompatibilities lie in data structures. Are you suggesting data structures will no longer exist in your world? Or that the only data structures anyone will ever be able to use are arrays? :smile:

- Warren
 
  • #37
chroot said:
The reason processors are getting 'dumber' (i.e. moving from CISC to RISC to VLIW to TTA) is not because programmers enjoy programming dumb chips; the reason is that dumb chips are fast. The bottom line is simple: there are 10,000 users for every programmer. Making the programmer's job a little more difficult is of no consequence; what matters is that the user's machine runs faster. You seem to be missing this.

In fact, I don't even agree that chips getting "dumber" makes things harder for programmers. It just makes things (slightly) harder for the compilers/interpreters.

I suppose it might make things harder for compiler/interpreter writers as well; but it's a lot easier to build a smarter compiler than it is to build a smarter chip.
 
  • #38
master_coda said:
I suppose it might make things harder for compiler/interpreter writers as well; but it's a lot easier to build a smarter compiler than it is to build a smarter chip.
Exactly. I just wanted to elucidate the trade-off for those reading the thread. The trade-off, of course, is a no-brainer.

- Warren
 
  • #39
However, as the complexity of your algorithm goes up (say, all the way to an algorithm that will interpret Perl), the advantages disappear.
So then we can just divide it into optimal steps that are each 100 times faster than the original. Since each step is 100 times faster, the total result will be 100 times faster. The point is to make it optimal, not to make it as slow as possible to support your view.

For example, the original Microsoft BASIC has a floating-point format that is not compatible with any modern CPU. A Pentium 4 would use 50 instructions to do something that can be done in a single clock cycle. The clock frequency would be the same because the job is not more complicated than what the P4 does; it is just different. You will find this speedup everywhere a program does something that is not directly supported by the CPU.
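For the curious, the incompatibility is concrete. Below is a sketch of converting 32-bit MBF (the Microsoft Binary Format used by those BASICs) to IEEE 754 single precision, assuming the usual MBF layout (exponent byte biased by 128, sign in the top mantissa bit, implicit leading 1); treat the details as a sketch rather than a reference:

Code:
/* Sketch of converting 32-bit MBF to IEEE 754 single precision,
   assuming the usual MBF layout: top byte = exponent (bias 128, 0 means
   the value is zero), bit 23 = sign, bits 0-22 = mantissa with an
   implicit leading 1. Edge cases ignored; not production code. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static float mbf_to_float(uint32_t mbf) {
    uint32_t exponent = mbf >> 24;
    if (exponent == 0)
        return 0.0f;                       /* MBF exponent 0 encodes zero */
    uint32_t sign     = (mbf >> 23) & 1u;
    uint32_t mantissa = mbf & 0x007FFFFFu;
    /* MBF value = 0.1mmm * 2^(exp-128)  ->  IEEE exponent field = exp - 2 */
    uint32_t ieee = (sign << 31) | ((exponent - 2) << 23) | mantissa;
    float out;
    memcpy(&out, &ieee, sizeof out);
    return out;
}

int main(void) {
    /* 1.0 in MBF: exponent 0x81, sign 0, mantissa 0 -> 0x81000000 */
    printf("%f\n", mbf_to_float(0x81000000u));
    return 0;
}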
 
  • #40
Bjørn Bæverfjord said:
So then we can just divide it into optimal steps that are each 100 times faster than the original. Since each step is 100 times faster, the total result will be 100 times faster. The point is to make it optimal, not to make it as slow as possible to support your view.

For example, the original Microsoft BASIC has a floating-point format that is not compatible with any modern CPU. A Pentium 4 would use 50 instructions to do something that can be done in a single clock cycle. The clock frequency would be the same because the job is not more complicated than what the P4 does; it is just different. You will find this speedup everywhere a program does something that is not directly supported by the CPU.

So how will you increase the speed of the split function in Perl by 100 times? That's a relatively simple function in Perl.
 
  • #41
I don't know Perl and I only have some basics of hardware design, but looking at the split function, that doesn't look complicated to do. You could read the string into a bunch of parallel logic circuits, each circuit primed with the delimiter character. Each circuit would get one character of the string and the address of that character. It would look at the input character and see if it is the same as the delimiter character; if it is, it writes a reference to the subsequent character into the memory for the function's output. Also, a reference to the first character in the string would go directly to memory. In the case of multiple delimiter characters in a row, it would only take a few levels of combinational circuits to write only the output of the last delimiter processor in the row of those trying to write. The speedup would probably not be 100x, but it would be faster.
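A sequential C model of what those parallel comparators would compute may help pin the idea down; in hardware the whole first loop would happen in a single step, one comparator per character (the string and delimiter here are just sample values):

Code:
/* Software model of the parallel-compare idea: step 1 marks every
   delimiter position (in hardware, one comparator per character, all at
   once); step 2 records a reference to the start of each field, letting
   only the last delimiter in a run assert a write. Sequential here only
   because C is sequential; the sample string fits the fixed arrays. */
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *s = "foo  bar baz";
    const char delim = ' ';
    size_t n = strlen(s);
    int match[64] = {0};                 /* "write assert" bits */
    size_t starts[64];
    size_t nfields = 0;

    for (size_t i = 0; i < n; i++)       /* step 1: parallel in hardware */
        match[i] = (s[i] == delim);

    starts[nfields++] = 0;               /* reference to the first character */
    for (size_t i = 0; i + 1 < n; i++)   /* step 2: last delimiter in a run */
        if (match[i] && !match[i + 1])
            starts[nfields++] = i + 1;

    for (size_t k = 0; k < nfields; k++)
        printf("field %zu starts at offset %zu\n", k, starts[k]);
    return 0;
}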
 
  • #42
BicycleTree said:
I don't know Perl and I only have some basics of hardware design, but looking at the split function, that doesn't look complicated to do. You could read the string into a bunch of parallel logic circuits, each circuit primed with the delimiter character. Each circuit would get one character of the string and the address of that character. It would look at the input character and see if it is the same as the delimiter character; if it is, it writes a reference to the subsequent character into the memory for the function's output. Also, a reference to the first character in the string would go directly to memory. In the case of multiple delimiter characters in a row, it would only take a few levels of combinational circuits to write only the output of the last delimiter processor in the row of those trying to write. The speedup would probably not be 100x, but it would be faster.

This description is no good. All of the important parts of the algorithm are glossed over by "writes a reference ... into the memory for the function's output" and "a few levels of combinational circuits..."; these are non-trivial parts of the algorithm. Without more details, I can't decide if your algorithm even works; I certainly can't determine whether this algorithm is faster, or how much real estate this circuit is going to eat up.
 
  • #43
Well, I said I do not know more than basic hardware design. One course over 2 semesters is all I have. So I don't know exactly how writing the references would operate; it should be in parallel somehow or it would be a bottleneck, but I don't know exactly how that would work. Also the reading of the string would have to be in parallel for each character or there would probably be no advantage. You'd probably have to design a special kind of memory for the reading and writing.

The combinational circuits would be based on the write assert bits from the character processors. I guess the tentative write assert bit for each character processor (processor X) would be XOR'd with the tentative write assert bit from processor X+1, and the result would be the write assert for processor X. So that would only be 1 level of combinational circuits.

Each processor would basically be an adder to test for equality, which is very fast.

The part of this process I do not fully understand is the memory; the other parts definitely could be sped up. But I don't know exactly how modern memory is designed and whether these things I am saying are possible. The only kind of memory that I understand has a very simple schematic design, which would not work for this.
 
  • #44
Actually, since the processors are going to have a dedicated function, they could just be a level of XNOR gates, one for each bit of the character, followed by two levels of AND gates, with the output of the second level being the write assert bit.
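Modeled in C for concreteness (bitwise XNOR of the character against the delimiter, then a check that every bit matched, which is what the AND levels would compute):

Code:
/* Model of the comparator: XNOR each bit of the character against the
   delimiter, then AND the eight results together. The final all-bits
   check is what the two levels of AND gates would compute in hardware. */
#include <stdio.h>
#include <stdint.h>

static int is_delimiter(uint8_t c, uint8_t delim) {
    uint8_t xnor = (uint8_t)~(c ^ delim);   /* bit i is 1 iff bit i matches */
    return xnor == 0xFF;                    /* all bits match -> write assert */
}

int main(void) {
    printf("%d %d\n", is_delimiter(' ', ' '), is_delimiter('x', ' '));
    return 0;
}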
 
  • #45
Well if anything, I see that the old robot's proposal is interesting at the least. I think I would approach the problem in a much simpler way by simply implementing a high level instruction set for the CPU. So as you have a typical subtract instruction that is composed of an opcode and 2 operands, you could have a FOR NEXT instruction that is made up of an opcode and 2 operands saying the start (I=0) and end (TO 50) (and maybe a 3rd saying the step). Or you can have a string accumulator like PERL $_ and regular expression instructions, and maybe SQR and RND assembler level instructions etc. You could end up having an almost 1 to 1 correspondence between a high level language and its underlying assembler translation. Anyways in this research I would start out simple, implementing an equivalent 4K ROM BASIC like instruction set for the CPU and then extend it. Start with VHDL and ASICs. You might want to also look up to see if a SOFTWARE TO HARDWARE CONVERTER exists or could be designed. Take any small program and the converter could convert the entire program into a bunch of combinational circuits.
 
  • #46
nameta9 said:
Well if anything, I see that the old robot's proposal is interesting at the least. I think I would approach the problem in a much simpler way by simply implementing a high level instruction set for the CPU. So as you have a typical subtract instruction that is composed of an opcode and 2 operands, you could have a FOR NEXT instruction that is made up of an opcode and 2 operands saying the start (I=0) and end (TO 50) (and maybe a 3rd saying the step). Or you can have a string accumulator like PERL $_ and regular expression instructions, and maybe SQR and RND assembler level instructions etc. You could end up having an almost 1 to 1 correspondence between a high level language and its underlying assembler translation. Anyways in this research I would start out simple, implementing an equivalent 4K ROM BASIC like instruction set for the CPU and then extend it. Start with VHDL and ASICs. You might want to also look up to see if a SOFTWARE TO HARDWARE CONVERTER exists or could be designed. Take any small program and the converter could convert the entire program into a bunch of combinational circuits.

We already have instructions for computing values that can easily be implemented in hardware: square roots, trig functions, etc. Even a FOR-NEXT instruction isn't really useful; existing instructions like LOOP provide the same functionality.

However, other instructions just can't be practically implemented. For example, almost any complex string operations would be a waste of time; the algorithm you're implementing will almost certainly spend most of its time loading the string from memory. You can try to keep as much of the string on the CPU (or at least as close to it as possible), which is the whole point of things like cache, but that approach doesn't scale; you can't just stick an arbitrarily large chunk of memory onto a CPU.


Still, if you want to implement a chip with a complete high-level instruction set, go right ahead; if you're into hardware design, you might enjoy it. It's just not a particularly useful idea.
 
  • #47
Ah, there was an error in my earlier suggestion. The tentative write enable bit from processor X should not be XOR'd with the tentative write enable bit from processor X+1; the correct operation is (tentative WE for X) AND NOT(tentative WE for X+1).

The question is whether a single memory can be written to and read from in parallel, practically speaking. You can't just say that it definitely would be a bottleneck. It might be possible and practical, and then a string operation could be performed requiring no more time than an add operation.

Do graphics processors have memory that is accessed 1 word at a time or are they set up in parallel? That would be the most similar application I can think of.
 
  • #49
BicycleTree said:
Ah, there was an error in my earlier suggestion. The tentative write enable bit from processor X should not be XOR'd with the tentative write enable bit from processor X+1; the correct operation is (tentative WE for X) AND NOT(tentative WE for X+1).

The question is whether a single memory can be written to and read from in parallel, practically speaking. You can't just say that it definitely would be a bottleneck. It might be possible and practical, and then a string operation could be performed requiring no more time than an add operation.

Do graphics processors have memory that is accessed 1 word at a time or are they set up in parallel? That would be the most similar application I can think of.

Memory access definitely would be a bottleneck. You can write blocks of memory in parallel, but there's always an upper bound on the amount you can write at once. And there's also always an upper bound on the amount of a string your chip can process at one time.

Add operations are fast because the numbers you're adding have a fixed size. Working with strings is more like working with arbitrary-precision integers: operations that you can't perform in constant time.
 
  • #50
How are you equating parallel memory to your parallel process? You could have done a search for parallel processing and come up with more applicable information. Heck, the Amiga I had in 1989 utilized limited parallel processing by offloading most functions to support chips, all coordinated by the main 68K processor. That would be a more analogous situation to your proposition than parallel memory, IMHO.

Now, I have to ask the question again: "Why bother?" Why try to hard-code even something like a programming language? Why spend time and money trying to develop a transistor array on some die to do what computer manufacturers stopped doing in the late 80s? I don't see a way (this is me, mind you) that one could implement a higher-level language that operates faster than directly accessing the hundreds (or fewer, depending on the processor) of instructions already placed on current chips. Simply saying "Well, remove the opcodes and replace them with a direct HLL" minimizes the fact that current computer technology relies on only two states. You can build computers with more states; you can even build analog computers if you want---but those cost more with minimal improvements. Analog computers used to be commonplace in engine control units; however, a 20 MHz processor and a SAR ADC are a more than sufficient replacement and cost less in the end.

I see a lot of minimizing of difficulties here---sure, BASIC can run from a ROM. BASIC is easy. BASIC didn't include much of the functionality of modern HLLs. BASIC was slow (assembly or C programs STILL ran faster on Apple IIs and the Commodore C64 than BASIC programs). The use of this as an example is a canard at best, because ROM BASIC went by the wayside due to its cost and relative performance. Software BASIC was and is faster.

So, why bother? Why bother locking yourself into a project that is not easily user-updatable---ever flashed the BIOS on a PC? Most users will never, ever update their BIOS, and that says a lot about this idea. Forcing such an inconvenience to fill a hole in Word or to upgrade one's on-die interpreter is less than plausible, IMHO. Why bother? You won't see a speed improvement (I reiterate the RISC mantra, that being that more is most definitely not better) from adding an HLL to your die. You might see a speed improvement from placing a program in Flash, but the speed improvement will only show itself as a load-time improvement, not a runtime improvement.

Those are my thoughts.
 