Is it possible to simulate a good computer in graphics? (myth)

  • Thread starter fredreload
  • #1
fredreload
So I had this question in mind. I made a thread previously asking whether it is possible to make a faster CPU simulator out of an existing computer. The answer turned out to be no, because it takes extra instruction cycles just to change the 0s and 1s. Now I am thinking of it in terms of computer graphics: instead of simulating the bits as 0s and 1s, we simulate the current and the flip-flops using collision detection. I am not really sure about this. I mean, we can simulate, say, a billion particles with collision, and there are calculators built in Minecraft. We can even run a computer with a water simulation. But can a computer simulated on a GPU outrun an actual computer? Any idea is welcome.
 

Answers and Replies

  • #2
phinds
A GPU IS an actual computer, just a somewhat specialized one, so I really don't get your point. Do you mean a simulation of a computer that runs as fast as that computer? If so, then you just need a much faster computer to do the simulation and then what's the point?
 
  • #3
fredreload
Right, of course we want to build a faster computer than the existing one, using particles.
 
  • #4
phinds
Right, of course we want to build a faster computer than the existing one, using particles.
I have no idea what you are talking about.
 
  • #5
fredreload
You use the GPU to build actual computer hardware, like you see in the video here
 
  • #6
phinds
I still have no idea what you are talking about. How does that video relate to your original question? Why is the word myth in the question? What is it that you are trying to do? In the second post you ask about simulating a computer using particles. This makes no sense. Please be very clear and specific about what you are trying to figure out.
 
  • #7
phinds
@fredreload I just looked at your other thread about computer simulations. I can only repeat the advice you were given there. Learn the basics of computer architecture before going any further. Your questions suggest that you do not understand the fundamentals but are trying to jump into high-level design without knowing the basics. This is a serious mistake and you are wasting your time. You are not going to get anywhere this way, no matter how much you wish it were otherwise.
 
  • #8
FactChecker
If you are talking about using the GPU as the main processor of a computer, that is a reasonable question. A GPU has massive parallel processing capability. The current capabilities of a GPU are impressive and interesting, but it is very specialized. If you tried to use it for general-purpose computing, you would probably be better off starting with a CPU and trying to add parallel processing to that. That is actually what is happening these days.
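As a rough illustration of the contrast (my own sketch, using NumPy's vectorized operations as a stand-in for data-parallel, GPU-style execution and a plain Python loop as the one-at-a-time case):

Code:
# Sketch: data-parallel (GPU-style) work vs. serial (one-at-a-time) work.
# NumPy's vectorized ops stand in for SIMD/GPU throughput; the Python loop
# stands in for general-purpose, instruction-by-instruction execution.
import time
import numpy as np

a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

t0 = time.perf_counter()
c = a * b + 1.0                              # one data-parallel pass over everything
t1 = time.perf_counter()

total = 0.0
for x, y in zip(a[:100_000], b[:100_000]):   # serial loop over just a 1% slice
    total += x * y + 1.0
t2 = time.perf_counter()

print(f"vectorized (full array): {t1 - t0:.4f} s")
print(f"python loop (1% slice):  {t2 - t1:.4f} s")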
 
  • #9
I have heard that it is possible to build a computer using Conway's Game of Life.
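For anyone curious, the rules themselves are tiny; the "computer" part comes from arranging gliders, glider guns, and eaters into logic gates on top of them. A minimal sketch of one update step in Python, assuming a simple dense-grid representation:

Code:
# Minimal sketch of one Conway's Game of Life step on a dense grid.
# The "computer" built in Life comes from arranging gliders and glider guns
# into logic gates on top of exactly these rules.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count the 8 neighbours, treating everything off-grid as dead
            n = sum(grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                    if (rr, cc) != (r, c))
            new[r][c] = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
    return new

# a glider, the usual signal carrier in Life "circuits"
glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
for _ in range(4):
    glider = life_step(glider)   # after 4 steps it has moved one cell diagonally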
 
  • #10
phinds
I have heard that it is possible to build a computer using Conway's Game of Life.
Hard to imagine how.
 
  • #12
eachus
This is mixing up a very important theoretical property of computers with a very practical aspect of building new (and faster) computers, and getting confused. The Church-Turing thesis is the theory that anything that can be computed by one computer can be computed by any other computer, the only difference being the time required to do the computation. A lot of work is being done today on quantum computers. The hope is that mature quantum computers will be able to solve problems which are infeasible on current digital computers, problems that would take them millions of years to solve. (I'm brushing aside the question of whether P=NP here...)

Obviously the way you demonstrate that a new type of computer is subject to the Church-Turing thesis is to show that it can be simulated by a computer known to be subject to the Church-Turing thesis, and vice versa. There are lots of such proofs floating around, but they are usually of little use other than that. What is useful, and important, is to model new computer designs. These models may run hundreds to hundreds of thousands of times slower than real time. But they save billions? trillions? of dollars no one has. Computer manufacturers (including both CPU and GPU designers) use these simulations to improve the design of new computers and computer chips before they are built. In fact, there are multiple levels of such simulations, from simulating the individual transistors in a chip up to simulations of systems consisting of millions of chips. Improving the design at, say, the register level may reveal problems at the gate level, and designers will work back and forth to get a balanced and efficient design.
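To make the "multiple levels of simulation" point concrete, here is a toy gate-level model in Python (nothing like a real EDA flow; the NAND-only construction is my own illustration of checking a design in software before anyone commits it to silicon):

Code:
# Toy gate-level model: a 1-bit full adder built from NAND gates only,
# checked exhaustively in software before it would ever be built.
def nand(a, b):
    return 1 - (a & b)

def xor(a, b):                 # XOR built from 4 NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, cin):
    s1 = xor(a, b)
    total = xor(s1, cin)
    # OR of the two partial carries, expressed as a NAND of NANDs
    carry = nand(nand(a, b), nand(s1, cin))
    return total, carry

# exhaustive check against the arithmetic the gates are supposed to implement
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder matches a + b + cin for all inputs")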

One last point, which not even some computer engineers understand. The design at each of these levels is expressed in a language. Let's talk about the x86 ISA (instruction set architecture). It describes a model in which computer programs can be written and communicated to a computer chip. Once upon a time, there was a one-to-one correspondence between the ISA and the computer chip state. That doesn't happen anymore. The computer chip can be stopped, and when it stops it will present a single x86 ISA state to the user. But it will have millions of registers (including cache memory) which are not part of that state, and when operating the chip will seldom if ever be in a state which corresponds to some part of the x86 ISA. This is how "superscalar" chips manage to execute several (x86) ISA instructions in one clock cycle. The decomposition of CISC instructions into RISC instructions actually increases the number of instructions. The superscalar part of the chip assembles these pieces in a much different order to extract parallelism from the (compiled) program.
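As a cartoon of that CISC-to-micro-op decomposition (not any real chip's micro-op format; the instruction names and the rename register are made up for illustration):

Code:
# Cartoon of CISC -> micro-op decomposition. An x86-style "ADD [addr], reg"
# is architecturally one instruction, but the core issues several simpler
# operations that it is then free to reorder and overlap with other work.
def decompose(instr):
    op, dst, src = instr
    if op == "ADD_MEM_REG":                       # ADD [dst], src
        return [("LOAD",  "tmp0", dst),           # read memory into a rename register
                ("ADD",   "tmp0", src),           # do the arithmetic
                ("STORE", dst,    "tmp0")]        # write the result back
    return [instr]                                # already "RISC-like"

program = [("ADD_MEM_REG", "[0x1000]", "eax"),
           ("MOV_REG_REG", "ebx", "ecx")]

micro_ops = [u for ins in program for u in decompose(ins)]
print(len(program), "ISA instructions ->", len(micro_ops), "micro-ops")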

The Church-Turing thesis tells us (because of how these chips were designed) that they can only compute things which other computers can compute. But the actual chip is a huge distance away from the finite state machine it models.
 
  • Like
Likes Silicon Waffle and nsaspook
  • #13
246
6
I dunno, sorry for not reading your whole write-up, eachus (I'll catch up on it later!). I thought building a GPU or graphics card in a virtual environment was a good idea; for one thing, there are no worries about the space it takes. The thing is, the hardware you are able to build in this virtual graphics space is limited by the performance of the actual computer hardware you have. So I broke it down to the basics of what is needed to have a hardware design in a graphics space. You need a way of simulating the electric current, as well as some type of collision detection, so the current can flip the switches in the hardware. If you can simulate electric current and collision detection with minimal loss of GPU computational power through parallelization, then the simulated computer can be huge (well, my knowledge of this isn't enough; if someone knows GPUs and graphics simulation better than me, feel free to tell me how it works). I'm not even sure if building hardware in a virtual simulated environment is a good idea; this is based on my knowledge of Unity3D. Once you know this is doable, the rest is designing the hardware in graphics using for loops and if/else statements.
 
  • Like
Likes Silicon Waffle
  • #14
We simulate natural phenomena all the time using partial simulation and approximation (in order to make an infeasible problem feasible). We can (and do) do the same thing to simulate a computer; for example, we simulate supercomputers to approximate things like data flow/communication, load factor, etc. If we only care about the answers to some questions, we can approximate just those answers in the simulation. This means that the simulated machine can substantially outrun the simulator, even when the simulator is simulating itself, but you have to make some sacrifices.
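A crude sketch of what "approximate just the answers you care about" can look like: estimating the communication time of a made-up cluster analytically instead of simulating any of its instructions (the ring all-reduce cost model and every number here are assumptions for illustration):

Code:
# Analytic model of a ring all-reduce on a made-up cluster: we estimate the
# communication cost per step instead of simulating instructions or packets.
def allreduce_time(nodes, message_bytes, latency_s=2e-6, bandwidth_Bps=10e9):
    steps = 2 * (nodes - 1)                         # ring all-reduce step count
    per_step = latency_s + (message_bytes / nodes) / bandwidth_Bps
    return steps * per_step

for n in (8, 64, 512):
    t = allreduce_time(n, message_bytes=100 * 2**20)   # reduce 100 MiB
    print(f"{n:4d} nodes -> ~{t * 1e3:.2f} ms per all-reduce")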

But I don't think that simulating it 'graphically', or based on particles, would be effective. That's only going to make it more difficult and slower.
 
  • #15
fredreload
Well, it is the only way to construct hardware virtually within a 3D environment. The idea is to simulate the hardware architecture instead of using 0-and-1 operations, which take up instruction cycles (more instructions are needed to perform a single simulated instruction); that is what most hardware and software simulators do. A GPU differs from a CPU in that it can effectively run things in parallel. But whether you can simulate multiple GPUs with a single GPU card, I'm not sure. I don't know if there is another way of simulating it without showing it graphically; if there is, I would be glad to hear the idea.

P.S. So one approach runs a collision detection algorithm and the other turns 0s into 1s with instructions; which one is faster?
 
  • #16
Well, it is the only way to construct hardware virtually within a 3D environment. The idea is to simulate the hardware architecture instead of using 0-and-1 operations, which take up instruction cycles (more instructions are needed to perform a single simulated instruction); that is what most hardware and software simulators do. A GPU differs from a CPU in that it can effectively run things in parallel. But whether you can simulate multiple GPUs with a single GPU card, I'm not sure. I don't know if there is another way of simulating it without showing it graphically; if there is, I would be glad to hear the idea.
Simulating the hardware itself is a much more computationally expensive task. The closer you get to the raw hardware dynamics, the more expensive it gets. At the most atomic level, a simulation of a few seconds could take many days to run.

Parallelism isn't magic. You are still bound by your computational resources. If you have 10 minutes of work, sure, 10 people can do it in 1 minute, but not in 1/10 of a minute.
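That limit, plus the further limit once any part of the work is serial, is usually stated as Amdahl's law. A quick worked sketch with made-up numbers:

Code:
# Amdahl's law: if a fraction p of the job parallelizes and the rest is
# serial, N workers give a speedup of 1 / ((1 - p) + p / N).
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 10, 100, 1000):
    # with 90% parallel work, even unlimited workers never beat 10x
    print(f"N={n:5d}  speedup={speedup(0.9, n):6.2f}x")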

By 'graphically' do you mean visually, or calculations performed on a GPU? You don't need to see it for it to happen. Showing it visually can help a person understand it, but it doesn't help the computer.

Every operation is done through instructions which change zeros to ones. Running the collision detection algorithm means running millions to trillions of instructions, each to switch ones and zeros.
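A back-of-the-envelope way to see the asymmetry asked about in the P.S. above (a sketch, not a benchmark of any real GPU): flipping a simulated bit directly is one operation, while deciding the same flip through a collision test costs several operations per pair.

Code:
# Sketch of the cost asymmetry: toggling a stored bit directly versus
# deciding the same toggle by testing whether two simulated "particles"
# overlap. The collision route does strictly more work per bit.
import random
import time

N = 200_000
bits = [0] * N
particles = [(random.random(), random.random()) for _ in range(N)]
targets   = [(random.random(), random.random()) for _ in range(N)]

t0 = time.perf_counter()
for i in range(N):
    bits[i] ^= 1                       # direct flip: one XOR per bit
t1 = time.perf_counter()

r2 = 0.01 ** 2
for i in range(N):
    dx = particles[i][0] - targets[i][0]
    dy = particles[i][1] - targets[i][1]
    if dx * dx + dy * dy < r2:         # collision test just to decide one flip
        bits[i] ^= 1
t2 = time.perf_counter()

print(f"direct flips:        {t1 - t0:.4f} s")
print(f"flips via collision: {t2 - t1:.4f} s")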
 
  • #17
fredreload
But computer simulation is fast. If speed does not matter, then what matters is the amount of data being processed, and you can easily handle that with more RAM. Right, let me think about this for a while; we are not doing it at the atomic level.

P.S. But you can make a really big CPU. Then again, you can't create more parallelism than your resources allow, as mentioned before. All you need is a clock and "magic current", jk.
 
  • #18
eachus
P.S. But you can make a really big CPU. Then again, you can't create more parallelism than your resources allow, as mentioned before. All you need is a clock and "magic current", jk.
Sigh! I've been in the software industry since before it even existed. (I'm retired now.) Rear Admiral Grace Murray Hopper was older and got started even earlier. She used to carry around wires one foot long and call them nanoseconds. (In practice light travels about one foot per nanosecond in a vacuum, but getting even 75% of light speed for signals in a wire or cable was tough.) Anyway, the point, and all computing pioneers before, say, 1960 had our noses constantly rubbed in it by the real world, was that to make a computer fast, you had to make it small.

Today, we have computers with maximum dimensions measured in millimeters. (The chip, not the power supplies, I/O devices, and other cruft.) Even high-end CPU and GPU chips are about one inch square, and experts have spent years on distributing the clock to all areas of the chip so that it gets there at the correct time (phase) for what is going on. If the clock speed is 4 GHz (some high-end computers are higher), everything that has to be done in one clock cycle has to be done in the time that light travels about three inches, and that is in vacuum; in silicon it is much slower. There are tricks like pipelining and letting the clock slew across the chip, but that is the main fight. (There have been asynchronous computers, and some even made it to market. But the extra traces for an asynchronous computer cost more, in chip space, than distributing a clock.)
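The arithmetic behind the "nanosecond" wires and the three-inch figure, for anyone who wants to check it:

Code:
# Grace Hopper's "nanosecond" wire and the clock-distribution arithmetic.
c = 299_792_458                 # speed of light in vacuum, m/s

one_ns_in_feet = c * 1e-9 / 0.3048
print(f"light travels {one_ns_in_feet:.2f} ft per nanosecond")        # ~0.98 ft

clock_hz = 4e9
cycle_s = 1 / clock_hz
inches_per_cycle = c * cycle_s / 0.0254
print(f"at 4 GHz one cycle is {cycle_s * 1e12:.0f} ps, "
      f"about {inches_per_cycle:.1f} in of light travel in vacuum")   # ~3 in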

Where computers are going today is into the third dimension. AMD is shipping GPUs with stacked memory chips (HBM), which allows the memory chips and the GPU chip to be closer together. nVidia will be doing the same before the year is out, and HBM2 will run twice as fast, cramming more memory into the same volume. The other problem is heat. Chips have been shifting to lower and lower voltages to keep temperatures under control, but the move to 3D is going to make that harder. That's why you see so much talk about spintronics. It isn't faster than current transistors, but it doesn't have a minimum energy requirement to flip a bit. (Technically (QM), it is erasing a bit that consumes energy, but it would be wonderful if we could get within a factor of a hundred of that ultimate limit.)
 
  • #19
Vanadium 50
I dunno, sorry for not reading your whole write-up, eachus
You should. You really should. Instead you repeat the same misconception over and over and over and over, ignoring everyone who is trying to help you. This behavior is not very nice, and you really should stop it.
 
  • #20
phinds
You should. You really should. Instead you repeat the same misconception over and over and over and over, ignoring everyone who is trying to help you. This behavior is not very nice, and you really should stop it.
I agree.
 
  • #21
fredreload
Well, is there a way for collision detection not to take up extra computing power, like in the real world? I mean, in the real world you don't need to keep track of collisions, but in a computer you need to program that in, like some type of hack for implementing it. I dunno, I just want to discuss this; it seems interesting.
 
  • #22
Well, is there a way for collision detection not to take up extra computing power, like in the real world? I mean, in the real world you don't need to keep track of collisions, but in a computer you need to program that in, like some type of hack for implementing it. I dunno, I just want to discuss this; it seems interesting.
I'm having a hard time deciphering your question. I think you should write this all out on paper, read it out loud, and then rewrite it after you organize your thoughts.

Okay, I wrote a long response, but I think I just understood what is going through your head, on some jacked abstract level.
By "don't need to keep track of collision" you mean something like...
When detecting a particle collision, indirect methods are used, like chemical coatings on arrays around the collision chamber. Then the arrays are analysed to determine the dynamics of the collision?
Now your main question is something like... Is it possible to use an indirect way of accounting for a simulated particle collision like they do with particle accelerators?
If so, your question is absolutely nonsensical. In order for a collision to happen, you have to give the parameters: how fast the particles are moving, how much energy they have, and in which way they are interacting.
Software is an entirely artificial environment. Random events like particle collision do not happen unless they are programmed in, which takes computing power.
 
  • #23
meBigGuy
It is a waste of your time and ours to ask questions about computer architecture when you have no idea how a computer works. You have shown no gain in knowledge since you started your last nonsensical simulation thread. Not once have you responded in a way that indicates you absorbed what you have been told. You just repeat the same flawed concept over and over.

It will never be faster to simulate a computer. If you have a fast simulator, then run your original problem on the simulator since simulating another computer doing it is a waste of time. THERE ARE NO SHORTCUTS! You seem to think there is some magical high level way to make assumptions that will allow the simulator to save time. NO NO NO. The simulator HAS to do everything the original computer did, only slower since it also needs to manage the simulation. You are barking up the wrong tree.
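One way to see why: even the most stripped-down software model of a CPU spends several host operations per simulated instruction. A toy sketch (a made-up three-instruction machine, nothing like a real ISA):

Code:
# Toy interpreter for a made-up 3-instruction machine. Every simulated
# instruction costs a fetch, a decode (tuple unpack plus branching), and
# the operation itself on the host, so the simulated machine can never be
# faster than the machine running this loop.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]           # fetch + decode overhead on the host
        if op == "MOV":
            regs[a] = b
        elif op == "ADD":
            regs[a] += regs[b]
        elif op == "JNZ":
            if regs[a] != 0:
                pc = b
                continue
        pc += 1
    return regs

# count r1 down from 5: MOV r1, 5; MOV r0, -1; loop: ADD r1, r0; JNZ r1, loop
prog = [("MOV", "r1", 5), ("MOV", "r0", -1), ("ADD", "r1", "r0"), ("JNZ", "r1", 2)]
print(run(prog, {"r0": 0, "r1": 0}))     # {'r0': -1, 'r1': 0}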
 
  • Like
Likes Mark44 and phinds
  • #24
fredreload
Right, collision, equal and opposite force. You guys are right on this one. I'll scale it back down to if statements and take some time to look through GPU architecture.
 
  • #25
It is a waste of your time and ours to ask questions about computer architecture when you have no idea how a computer works. You have shown no gain in knowledge since you started your last nonsensical simulation thread. Not once have you responded in a way that indicates you absorbed what you have been told. You just repeat the same flawed concept over and over.
I agree completely. The question has been asked and answered, so I'm closing this thread.
 
