fredreload
If I graphically simulate a virtual computer inside a real computer, would it take more resources, or would it work like before? This is different from cloud computing; I'm simulating a computer graphically.
Jeff Rosenbury said: Back when I was in school, we built a graphical simulator of the micro-coding of an 80x86 for teaching. It wasn't that hard (getting the artwork right was the hardest part). But it was a simulator, not any sort of emulator intended to run code.
anorlunda said: Enlighten me please, Jeff. Graphical? I don't know what you mean.
- You used an interactive circuit builder in Spice?
- You sketched artwork for a chip etching mask or a printed circuit mask?
Yes. The program displayed the CPU in block diagram form. It would execute commands by loading the registers, running the ALU, setting the flags, etc. Basically, it displayed the machine code and how the µcode drove the machine code.
analogdesign said: SPICE is far, far too slow to use as an interactive microcode simulator.
I'm sure Jeff meant he had a program that allowed the various registers to be printed to the screen as the simulation progressed. I've done similar things, but I've always just printed them to stdout or redirected them to a log file.
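A minimal sketch of that kind of register trace might look like the following Python snippet. The machine, its three instructions, and the register names are all invented here for illustration; a real simulator would model an actual instruction set.

```python
# Minimal sketch of a register-level trace: a hypothetical three-instruction
# machine whose registers and flags are printed to stdout after every step.
REGS = {"A": 0, "X": 0}
FLAGS = {"Z": False}

def step(instr):
    op, *args = instr
    if op == "LDA":        # load an immediate value into A
        REGS["A"] = args[0]
    elif op == "ADX":      # add A into X
        REGS["X"] = (REGS["X"] + REGS["A"]) & 0xFF
    elif op == "DEX":      # decrement X
        REGS["X"] = (REGS["X"] - 1) & 0xFF
    FLAGS["Z"] = REGS["X"] == 0     # zero flag tracks X
    print(f"{op:4} A={REGS['A']:3} X={REGS['X']:3} Z={FLAGS['Z']}")

for instr in [("LDA", 2), ("ADX",), ("DEX",), ("DEX",)]:
    step(instr)
```

A graphical version like Jeff describes would just draw the same state into a block diagram instead of printing it.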
fredreload said: But Wikipedia says it is possible to emulate an IBM PC with a Commodore 64 here.
fredreload said: Can I emulate more CPU power, performing a greater number of calculations than the CPU I use itself?
fredreload said: I thought simulating a computer would create a faster one.
It cannot - you cannot simulate more than one operation with one operation. Actual simulations of different processor architectures are way slower than those processors, as they need much more complex high-level operations to decide what each individual component in the simulated system would do. Simulating systems is a massive performance loss. This can be acceptable if the simulated system itself does not exist yet (you want to design something new), or does not exist any more (you want to see how some C64 software worked but you don't have a C64); otherwise it is a huge waste of resources.
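To make that overhead concrete, here is a toy fetch-decode-execute loop in Python (the guest instruction set is hypothetical, invented purely for illustration). Emulating a single guest ADD costs the host a fetch, a decode, branching, indexing, masking, and program-counter bookkeeping - many host operations per simulated operation:

```python
# Toy fetch-decode-execute loop. Emulating ONE guest "ADD" costs the host
# dozens of operations: fetch, decode, operand lookup, masking, PC update.
memory = [("ADD", 0, 1), ("ADD", 0, 0), ("HALT",)]   # guest program
regs = [1, 2, 0, 0]                                   # guest registers
pc = 0
while True:
    instr = memory[pc]            # fetch
    pc += 1
    op = instr[0]                 # decode
    if op == "ADD":               # execute: regs[a] += regs[b]
        a, b = instr[1], instr[2]
        regs[a] = (regs[a] + regs[b]) & 0xFFFF
    elif op == "HALT":
        break
print(regs)   # -> [6, 2, 0, 0]
```

Real emulators shrink this overhead with tricks like dynamic binary translation, but they still cannot beat the host doing the same work natively.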
mfb said: I suggest you look up how a CPU works before you continue this thread. It will help a lot to understand what limits computation speeds.
Yes, it seems that someone who is asking the questions is pretty bent on giving the answers.
fredreload said: I don't think you can make the clock go any faster.
CPU clock speeds have increased by orders of magnitude in the last decades.
fredreload said: How does the number of transistors factor into this?
More transistors make it possible to do more things in parallel - if the software allows it. If every calculation step depends on the result of the previous step, a computer is very slow. An example:
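As a hypothetical stand-in sketch of that difference (not the example from the original post), compare a dependency chain with independent work:

```python
# A dependency chain: each step needs the previous result, so no amount of
# extra hardware (transistors) can execute these steps at the same time.
x = 1
for _ in range(4):
    x = x * 3 + 1        # step N depends on step N-1

# Independent work: the four sums below share no data, so four ALUs
# (or four cores) could in principle compute them simultaneously.
a = 1 + 2
b = 3 + 4
c = 5 + 6
d = 7 + 8
print(x, a, b, c, d)
```

Extra transistors help the second pattern a lot and the first pattern hardly at all.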
1. Let's see: first we need a clock, then we link multiple CPU designs onto this one clock, then we create a CPU emulator software from this. The CPU designs repeat infinitely, like a skyscraper, and each floor receives 4 bits of input, 0000 to 1111. Now assume I have 16 bits of information, 1011 0100 1100 0011. I would need 4 repeated floors of this skyscraper in parallel, but is there a way for me to truncate it down to just 1 floor and pass all 16 bits in one go? Like stack them or something.
mfb said: Note: The last few posts were merged into this thread; they were a separate thread before.
fredreload said: 1. Let's see: first we need a clock, then we link multiple CPU designs onto this one clock... is there a way for me to truncate it down to just 1 floor and pass all 16 bits in one go?
That does not make sense at all.
fredreload said: Precisely, I am referring to how the CPU works here. I don't think you can make the clock go any faster. The thing that bothers me is the number of transistors you need. Since you can't make the clock go faster, you can only increase the amount of data being calculated. How does the number of transistors factor into this? Does anyone know?
The performance per transistor is actually very low in modern CPUs. For example, imagine you recreated a 6502 processor (similar to the one used in the C64) with a modern 22 nm manufacturing process and ran it at 4 GHz. That thing only has about 3,500 transistors, and its performance for most types of software would be about 50 to 100 times lower than that of a quad-core i7 with more than 1 billion transistors. So you put in 285,000 times as many transistors but only get 100 times the processing power.
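The arithmetic behind those ratios is easy to check (the transistor counts below are the rounded figures assumed in the post, not exact datasheet numbers):

```python
# Back-of-the-envelope check of the ratios quoted above.
t_6502 = 3_500            # approximate transistor count of a 6502
t_i7 = 1_000_000_000      # rough transistor count of a quad-core i7
speedup = 100             # assumed i7 advantage for typical software

ratio = t_i7 / t_6502
print(ratio)              # ~285714 -> "285,000 times as many transistors"
print(ratio / speedup)    # ~2857 -> the 6502 does ~2,857x more work per transistor
```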