Simulate computer inside a computer

In summary: It's possible, but it's not easy. You would need to find an emulator that is accurate enough and recompile your code.
  • #36
meBigGuy said:
It's a total waste to build a faster HOST computer to use multiple cycles to simulate a TARGET doing something. Just do it with the HOST as efficiently as possible.
If the TARGET has a better architecture, then build a faster computer using the TARGET architecture.

A common way to get peak performance from a machine is to hand code in assembly language. In many cases a human can do a better job of optimizing than the compiler (but that isn't always true). A prime example where humans do well is in tightly coded pipelined DSP arithmetic loops. A human can sniff out tricks and shuffle code to cut cycles from the loops that an optimizing compiler could never find.
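The kind of hand optimization described above can be sketched even in a high-level language. This Python example is illustrative only (real hand optimization of this sort happens in assembly); it shows loop-invariant hoisting, one of the tricks an optimizer, human or compiler, hunts for in tight loops:

```python
def scale_naive(xs, k):
    # Recomputes the loop-invariant factor 2**k on every iteration --
    # exactly the kind of wasted work a human (or optimizing compiler)
    # looks for in a tight loop.
    out = []
    for x in xs:
        out.append(x * 2 ** k)
    return out

def scale_hoisted(xs, k):
    # Hoist the invariant out of the loop: same result, fewer
    # operations per iteration.
    factor = 2 ** k
    return [x * factor for x in xs]

assert scale_naive([1, 2, 3], 3) == scale_hoisted([1, 2, 3], 3) == [8, 16, 24]
```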

The next level of optimization is to build dedicated hardware coprocessors to do certain operations more efficiently, like a graphics GPU, or a custom coprocessor to do a square root or an FFT. (or even just use multiple CPU's to execute parallel threads)

The only way to efficiently emulate a TARGET computer is to build dedicated hardware (that's a joke, BTW, since that would be the TARGET computer itself).
Well, but let's say I emulate 10, 20, or even 100 of these software CPU emulators. Will that dramatically boost the number of calculations I can do in a second? I am not sure how much data I can process; someone has to enlighten me.
 
  • #37
fredreload said:
Well, but let's say I emulate 10, 20, or even 100 of these software CPU emulators. Will that dramatically boost the number of calculations I can do in a second? I am not sure how much data I can process; someone has to enlighten me.
What you seem to want to do is break a task into parts and do the parts at the same time. Yes, this works.

But the tricky part of this is to break a problem into parts so that one part doesn't depend on the result of other parts. Usually this is handled logically (in software). It is a very difficult problem. One could do it in hardware instead, but software costs only the time of the programmer, while hardware costs on the order of $100,000 per chip run plus the cost of a Cadence seat.
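A minimal sketch of that decomposition in Python, using the standard concurrent.futures module (the function names are made up for illustration). Threads show the structure; for CPU-bound pure-Python work you would use processes instead, but the decomposition into independent parts is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    # Each worker sums its own half-open range [lo, hi); no part
    # depends on another part's result, so the parts can run
    # concurrently.
    return sum(range(lo, hi))

def split_sum(n, parts=4):
    # Split [0, n) into independent chunks and combine the results.
    step = n // parts
    bounds = [(i * step, (i + 1) * step) for i in range(parts)]
    bounds[-1] = (bounds[-1][0], n)  # last chunk absorbs any remainder
    with ThreadPoolExecutor(max_workers=parts) as pool:
        return sum(pool.map(lambda b: partial_sum(*b), bounds))

assert split_sum(1000) == sum(range(1000))
```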
 
  • #38
Simulating old 8-bit or 16-bit computers on a much faster PC has been done already. I'm not sure how much slower a virtual PC is on Windows, but I suspect CPU operations run at about the same speed while I/O emulation is much slower. Mainframes like IBM Z systems can run in multiple modes (trimodal addressing) for backwards-compatibility support at full speed. In addition to current and classic IBM OS support, the series also supports a UNIX API, and it includes computers with 80 to 100+ central processors (like having 80 to 100+ cores on a PC).

http://en.wikipedia.org/wiki/IBM_System_z

http://en.wikipedia.org/wiki/Z/OS
 
  • #39
Detailed simulations of the exact workings and timing of a computer can take hours just to simulate a second of actual computer operation. That is what others meant when they said that SPICE is slow. It is possible to have simplified models of computers that run much faster than the actual computers would. A simplified model would be useful for checking whether a computer system is adequate for some task, but it would not actually perform the same calculations as the computer being simulated. For instance, I might model an entire computer calculation as a simple delay along a signal path, just to see if the delay would be too great for the system to work right.
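A simplified model of that kind can be almost trivially small. Here is a sketch in Python with made-up delay figures, in which an entire computation is reduced to a single delay number, just enough to check a latency budget:

```python
# Model each stage of a signal path as a fixed delay (in microseconds).
# The figures are invented for illustration; a real model would use
# measured or datasheet numbers.
stage_delays_us = {
    "sensor_read": 120.0,
    "adc_conversion": 8.0,
    "cpu_processing": 450.0,   # stands in for an entire computation
    "actuator_update": 60.0,
}

def path_delay(stages):
    # Total latency is just the sum of the per-stage delays.
    return sum(stage_delays_us[s] for s in stages)

budget_us = 1000.0
total = path_delay(["sensor_read", "adc_conversion",
                    "cpu_processing", "actuator_update"])
assert total <= budget_us  # the modeled system meets its latency budget
```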
 
  • #40
I don't know what is behind your thinking that emulating something will lead to better performance. You are starting with that wrong assumption and trying to somehow fit it into reality without doing any real thinking. I challenge you to give one detailed example. I think part of the problem is that you really don't understand how computers actually work. What level of programming skills do you have? Try to write code that emulates a 6502 microprocessor. You will see very quickly what others are telling you.

fredreload said:
Well, but let's say I emulate 10, 20, or even 100 of these software CPU emulators,

Let's say I emulate a TARGET that runs a loop doing an ADD of 1 each time. If I emulate 1000 of these loops on my HOST, it doesn't go any faster, since my HOST has only so much performance to divide among them. But if I execute the add natively on my HOST, it executes in one cycle.
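A toy interpreter makes that overhead concrete. This is not a real emulator of any actual CPU; the opcodes are invented for the sketch. It runs the add-1-in-a-loop program described above and counts how many host-level fetch-decode-execute steps the emulated version needs for the same answer:

```python
# Toy target program: add 1 to register "a" N times, then halt.
# Opcodes are made up for this sketch.
INC, DEC_JNZ, HALT = 0, 1, 2

def emulate(n):
    program = [
        (INC, "a"),            # a += 1
        (DEC_JNZ, "c", 0),     # c -= 1; jump to instruction 0 while c != 0
        (HALT,),
    ]
    regs = {"a": 0, "c": n}
    pc = 0
    host_steps = 0
    while True:
        op = program[pc]           # fetch
        host_steps += 1
        if op[0] == INC:           # decode + execute: several host
            regs[op[1]] += 1       # operations per target instruction
            pc += 1
        elif op[0] == DEC_JNZ:
            regs[op[1]] -= 1
            pc = op[2] if regs[op[1]] != 0 else pc + 1
        else:                      # HALT
            return regs["a"], host_steps

result, steps = emulate(1000)
assert result == 1000       # same answer the HOST gets natively
assert steps > 2 * 1000     # but at more than two host steps per target add
```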

How many times will I have to say this: Your idea of emulating another computer to achieve performance is totally off base. There is no way. Forcing a HOST to behave like a TARGET will always take extra cycles and will be slower.

I already outlined what is done to achieve extra performance.
1. Better algorithms (more efficient code)
2. Coprocessors
3. Multiple CPUs
4. Enhanced architecture (faster clock, better cache, better instructions, etc.)

The only thing emulation will do is slow you down.
 
  • #41
fredreload said:
I was thinking about it too, but doesn't the CPU time it takes to simulate such a computer outweigh the computer itself? Or is it possible to build one with even better performance?

You keep asking that. It's still impossible. (See my post #19)
 
  • #42
Right, I thought about it, and the CPU emulator really does run on the original hardware. If we run the hardware design in parallel, all we need is a single clock to drive the whole thing; I suppose you need current running through it to drive the design, something I never really thought about. I've been interested in how supercomputers work, especially the K supercomputer and brain computation. Feel free to leave any comments about how supercomputers or exascale computers can be improved to get more calculations per second out of their CPUs.
 
  • #43
fredreload said:
If we run the hardware design in parallel all we need is a single clock to drive the whole thing,

What you say above makes no sense in the context of what you are asking.

You are talking about designing a different architecture to do things more efficiently. It is called computer science.

Have you ever written a computer program of any sort? Do you have any idea what machine language is? Do you have any concept of the way in which hardware controls program sequencing? Do you know what an ALU is? Do you understand the structure of cache memories and the algorithms that drive them? Do you know the difference between von Neumann and Harvard architectures?

There is no way you can even begin to understand enough to ask intelligent questions about computer performance until you start studying computer science and computer architecture.

I don't say this to discourage you from asking questions, but rather to motivate you to read about how a CPU operates and the part it plays in a computer.

Previously posted link: http://www.hardwaresecrets.com/inside-pentium-m-architecture/4/ explains, for example, the innards of the Pentium M CPU. That is probably too advanced, but it serves as an introduction to the richness of CPU architecture as an engineering discipline.

One of the simplest and most common architectures is the Intel 8051. Much has been written about its hardware architecture. Here is a reference manual.
http://web.mit.edu/6.115/www/document/8051.pdf You should read about how it actually operates.

Here is an introduction to what a CPU actually is. You especially need to read through the Operation and Structure sections (several times, BTW):
https://en.wikipedia.org/wiki/Central_processing_unit

Without understanding what a CPU is and what a computer is (and how they are different) you will not be able to understand how performance is improved and how different architectures are suited to different tasks.

It would be like asking whether hooking together 100 Volkswagen engines would yield a Formula A race car. If you understood how an engine works, and what comprises a complete high-performance automobile, that is not the question you would be asking.

Again, I'm not discouraging questions, just pointing out you need to do a bit more of your own home study. The fact that you are asking these questions shows an inquisitive nature. Focus for a bit on the fundamentals, and you will never regret it.
 
  • #44
If I am running software like NEST here, is it better to use a video card or just the CPU for computation?
 
  • #45
If you have software that uses the graphics card in a meaningful way, it can speed up calculations; see the video in post 26. The speedup you get will depend on the task and on the implementation in software.
 
  • #46
Thanks, I was looking for that video. Also, here is a note on Nvidia's new supercomputer, which supposedly helps out the Google Brain project.
 
