Changing the circuitry of an analog computer *during* simulations?

Thread summary:
The discussion centers on the potential for analog circuitry to change during simulations, particularly in the context of real-time simulations of biological organs. Participants clarify that while analog computers can modify component values during computations, changing connections mid-simulation introduces undefined conditions and stability concerns. The feasibility of using feedback loops to automate these changes based on prior results is debated, with caution advised to avoid self-excitation issues. The conversation also touches on the speed limitations of analog computers compared to digital systems, suggesting that digital solutions may ultimately be more efficient for complex simulations. Overall, the exploration of dynamic circuitry in analog computing remains largely theoretical, with practical applications needing careful consideration.
  • #61
Baluncore said:
Everyone wants better algorithms.

But in my case speedier simulation is not merely desirable; it is a necessity. I have no problem going to vacuum tubes, hydraulic or pneumatic analog computers, or even going back a thousand years and building a mechanical analog computer, if it can deliver faster simulations (not just faster than any other type of computer: it should solve my problem within the allowed time).
 
  • #62
My electrical analog computer has the following characteristics:
1) It is described by an electrical circuit.
2) The circuit represents the electrical analogy for the original physics (or problem, or system) to be simulated.

The individual electronic/electrical components of the above analog computer (filters, amplifiers, switches, resistors, capacitors, etc.) can be either analog or digital. Even if some of these components are digital, I still call the circuit an analog computer.

Based on the above definition of an analog computer, my "common sense" says (of course, "common sense" can go wrong at times) that the analog computer will be the fastest, followed by the purely digital simulation, with digital simulation of the analog circuit the slowest. I "feel" this would hold true for many problems, though perhaps not for all of them; which simulation is fastest may depend on the problem taken up.

The reason I feel that digital simulation of the analog circuit can be slower than the purely digital simulation is that the purely digital simulation does not require any circuit (or any electrical analogy). Although both digital simulations could be perfectly analogous to the original problem, they may use different algorithms for their respective solutions, since how the original problem is digitally represented could differ.

Now, if some theory says that digital simulation of the analog computer is faster than the analog computer itself, the theory may assume that the same accuracy is expected from both simulations. But if some amount of error is allowed, the digital simulation may be computing results more accurately than required; that may be what makes it slower than the analog computer, even while the theory predicts otherwise.
 
  • #63
Just some unrealistic thoughts below (although I am a mechanical engineer and do not know much about electrical circuits, I know that the following are quite unrealistic, at least for the time being).

Let us assume that someone designs an analog computer which has tens of thousands of electrical components. If the entire analog circuit can be fabricated on a very small piece of material (like a VLSI chip, or a modern Intel processor), then the analog computer is likely to be very fast (fast simulations, fast switching). Other advantages: reliability, less heat generated, less space required, lower power consumption.

Going a step further, someone who designs such an analog computer should be able to get it fabricated (on a very small chip, as mentioned above) by outsourcing the fabrication work to some company (e.g., Intel). Ordering (fabricating) just one piece (one analog computer) should also be possible.
 
  • #64
Kirana Kumara P said:
Just some unrealistic thoughts below (although I am a mechanical engineer and do not know much about electrical circuits, I know that the following are quite unrealistic, at least for the time being).

Let us assume that someone designs an analog computer which has tens of thousands of electrical components. If the entire analog circuit can be fabricated on a very small piece of material (like a VLSI chip, or a modern Intel processor), then the analog computer is likely to be very fast (fast simulations, fast switching). Other advantages: reliability, less heat generated, less space required, lower power consumption.

Going a step further, someone who designs such an analog computer should be able to get it fabricated (on a very small chip, as mentioned above) by outsourcing the fabrication work to some company (e.g., Intel). Ordering (fabricating) just one piece (one analog computer) should also be possible.

There is so much wrong with this I don't even know where to begin.

Tens of thousands of electrical components for your integrated analog computer is an underestimate. A VLSI analog computer like you're describing would more likely mean millions of components to simulate (an op amp alone is thousands of components once parasitics have been extracted). You could simulate it, of course, but it won't be fast, since simulation time increases as the square of the component count, to first order.

Sure, you could do it in a VLSI chip, but what do you mean by "a modern Intel processor"? Intel's fabrication process is *highly* optimized for digital, and their internal analog designers have to jump through outrageous hoops just to get anything analog (like a clock generator) to work.

Anyway, you aren't going to be outsourcing anything to Intel; they don't operate a foundry service. You would need something like MOSIS to broker space on a wafer run for you. I doubt you can afford a full engineering wafer run; you'll get 40 to 100 parts depending on the process you use. Even an older process would set you back a few tens of thousands of dollars in fabrication cost alone. You could buy an HPC cluster for that price.

The real sticking point, though, will be how you propose to design the analog computer. What tools will you use? Public-domain tools, quite frankly, suck, and professional tools are really expensive. I mean REALLY expensive.

Also, did you know it is hard to make different op amp circuits (integrators, for example) on the same wafer behave consistently? How do you propose to deal with that? I'm guessing you didn't know that's a problem.

I think your concept of an analog computer doing fast simulations for specific problems is sound in principle, but I think you are way, way out of your depth. Simulate with software and be done with it. Start with MATLAB or SciPy and migrate to C if it's too slow. That is my recommendation.
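Taking that recommendation concretely, here is a minimal SciPy sketch of the software route: integrating a small linear system x' = Ax + bu and checking it against the analytic steady state. The 2x2 matrix and forcing are made-up placeholders, not any actual tissue model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical stable linear system x' = A x + b u(t).
# A and b are placeholders, not a real organ/tissue model.
A = np.array([[-1.0, 0.5],
              [0.2, -2.0]])
b = np.array([1.0, 0.0])

def rhs(t, x):
    u = 1.0  # unit step input
    return A @ x + b * u

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=0.01)
x_final = sol.y[:, -1]

# Steady state satisfies A x + b = 0  =>  x = -A^{-1} b
x_ss = -np.linalg.solve(A, b)
```

If a sketch like this is too slow at realistic problem sizes, the same structure ports almost line-for-line to C, as suggested above.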
 
  • #65
There already are VLSI analog chips that are user-programmable. They are called Field Programmable Analog Arrays (FPAA). They can be used similarly to Field Programmable Gate Arrays (FPGA). See http://www.anadigm.com/fpaa.asp , http://www.okikatechnologies.com/ , and https://en.wikipedia.org/wiki/Field-programmable_analog_array .

Programming either FPAAs or FPGAs requires some help in scaling all the signals, since the calculations are fixed point. MATLAB has a tool to turn a floating-point signal diagram into an FPGA fixed-point diagram, and a similar tool would be useful for FPAAs; in fact, as far as I know, it might apply directly.
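As a toy illustration of what such a scaling step involves (a generic float-to-fixed-point quantizer, not MATLAB's actual tool; the 16-bit word with 12 fraction bits is an arbitrary choice):

```python
import numpy as np

def to_fixed(x, total_bits=16, frac_bits=12):
    """Quantize floats to signed fixed-point integers (toy model)."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))
    hi = 2 ** (total_bits - 1) - 1
    # Round to the nearest representable value and saturate.
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

def to_float(q, frac_bits=12):
    """Convert fixed-point integers back to floats."""
    return q.astype(np.float64) / (2 ** frac_bits)

x = np.array([0.5, -1.25, 3.0, 7.9])   # signals must fit the chosen range
x_hat = to_float(to_fixed(x))
max_err = np.max(np.abs(x - x_hat))    # bounded by half an LSB
```

The scaling problem the post mentions is exactly choosing `total_bits`/`frac_bits` so every internal signal fits the range without losing too much precision.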
 
  • #66
In all fairness, Field Programmable Analog Arrays universally suck. There is a reason they are only offered by tiny companies and no one buys them.

For the OP's application, anyway, an FPAA would be a total Charlie-Foxtrot since they are invariably switched-capacitor internally and so the OP would have to deal with sampled-data effects on top of solving the desired equations. The whole point of going analog here is speed... and FPAAs just aren't going to get it done.
 
  • #67
analogdesign said:
In all fairness, Field Programmable Analog Arrays universally suck. There is a reason they are only offered by tiny companies and no one buys them.

For the OP's application, anyway, an FPAA would be a total Charlie-Foxtrot since they are invariably switched-capacitor internally and so the OP would have to deal with sampled-data effects on top of solving the desired equations. The whole point of going analog here is speed... and FPAAs just aren't going to get it done.
I am not familiar with them, but one spec sheet I looked at gave a signal bandwidth of up to 2 MHz. From that, I assumed that any sampling effects are minimal except at very high frequencies. (http://www.anadigm.com/_doc/DS231000-U001.pdf )
 
  • #68
The sampled-data effect is the need for good-quality anti-aliasing filtering. That means you're adding op amps on your front end even if you're eliminating them by using the part you linked to.

Also, based on the OP's speed requirements, 2 MHz just isn't going to cut it.
 
  • #69
analogdesign said:
The sampled data effect is the need for good quality anti-aliasing filtering. That means you're adding op amps on your front end even if you are eliminating them by using this part you linked to.

Also, based on the OP's speed requirements, 2 MHz just isn't going to cut it.
If the OP were looking at something with mode frequencies over 2 MHz, it would be no wonder that digital computers cannot solve it. But he was talking about running at a 30 millisecond frame, with a desire for a 1 millisecond frame. So I assume he is dealing with frequencies way below 2 MHz, more on the order of well under 500 Hz.
 
  • #70
The OP said at one point that he or she wants the calculations done in 1 ms. Assuming 0.1% settling (that's about 10 bits) and a feedback factor of 0.1 (who knows how much gain would be needed?), the GBW required would be -ln(0.001)/(2π × 0.1 × 1 ms) ≈ 11 kHz. That is within reach of the analog circuits on the part you linked to. Who knows? Maybe it would work.
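Writing that back-of-the-envelope estimate out explicitly (assuming single-pole settling and one settling event per 1 ms frame; both are assumptions of the post, not datasheet facts):

```python
import math

def required_gbw(settle_frac, feedback_factor, t_settle):
    """Open-loop GBW needed for single-pole settling to within
    `settle_frac` of final value in `t_settle`, for a given
    feedback factor (toy single-pole model)."""
    n_tau = -math.log(settle_frac)            # time constants needed
    tau = t_settle / n_tau                    # closed-loop time constant
    f_closed = 1.0 / (2.0 * math.pi * tau)    # closed-loop bandwidth
    return f_closed / feedback_factor         # open-loop GBW

# 0.1% settling, feedback factor 0.1, 1 ms frame: the post's values.
gbw_hz = required_gbw(settle_frac=1e-3, feedback_factor=0.1, t_settle=1e-3)
```

If the 1 ms frame actually requires many sequential settling events, the required GBW scales up proportionally, so the real number depends on how the computation is pipelined.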
 
  • #71
I am not sure that we are talking about the same thing. It is routine for analog oscilloscopes to handle 30 MHz signals (100 MHz on expensive ones). So I wouldn't expect any problem with the mode frequencies of a system that only needs to complete a calculation every 30 milliseconds (33 times a second, with a wish to increase to 1000). Those frequencies must be significantly less than 33 Hz. Any analog circuit can handle that.
 
  • #72
If you look at the spectrum of a signal that settles in 30 ms, you will see that it has components in excess of 33 Hz. An analog computer needs to settle to a new value based on changes in its input stimulus; it isn't a system being fed a 33 Hz sine. I think that is the disconnect. For a system to settle to 1% in 30 ms, it needs a bandwidth far in excess of 1/(30 ms) = 33 Hz (see above).
 
  • #73
Since analog oscilloscopes routinely handle up to 30 MHz (and expensive ones go up to 100 MHz), I find it hard to believe that any modern system would have trouble with 33 Hz.
 
  • #74
How many Miller integrators are needed to solve the set of equations?
And what will the time constant of those analogue integrators need to be?

There is actually no problem here as there is no set of equations to solve.

In my opinion you are all dancing on the head of a pin.
 
  • #75
But dancing on the head of a pin is so *fun*!
 
  • #76
Baluncore said:
There is actually no problem here as there is no set of equations to solve.

As I have mentioned in one of my earlier replies, the very final form of the equations can be known only *during* runtime.
 
  • #77
FactChecker said:
There already are VLSI analog chips that are user-programmable. They are called Field Programmable Analog Arrays (FPAA). They can be used similarly to Field Programmable Gate Arrays (FPGA).

The Wikipedia entry on FPGAs does say that FPGAs can be re-wired "in the field", at the time of simulation, in real time, etc. However, I am not sure whether the connections can be changed (switched) *during* simulations, because "real time" in the Wikipedia entry could also mean "just before the start of the simulation". It would be helpful if someone could clarify.
 
  • #78
I have come across Application-Specific Integrated Circuit (ASIC). Would this be useful?

I think an ASIC is usually a digital processor. Instead of using a standard CPU, can ASICs speed up the simulations if one chooses digital computing?

Or, is it possible to have an analog ASIC? (Let us not worry about the cost for now.) Would it be possible to change the connections (switching) in that case? Do my previous question (#63) and the reply from member analogdesign (#64) refer to the same thing I am talking about here?
 
  • #79
Kirana Kumara P said:
I have come across Application-Specific Integrated Circuit (ASIC). Would this be useful?
Yes. It is the only way to go if you want the highest speed processors to run dedicated algorithms.

Kirana Kumara P said:
Or, is it possible to have an analog ASIC? (Let us not worry about the cost for now).
There is really no such thing as a fast analogue ASIC. There are reconfigurable analogue processors, but they are slow and not really useful for analogue computers because they lack the external interfaces needed to get data in and out.

Baluncore said:
It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array
The advantage of using FPGAs or ASICs is that a single ASIC could probably be programmed to contain between 4 and 32 specialised digital processors. Expect an ASIC to have a cycle time of 1 nsec and be able to change parameters or data flow in less than 2 nsec. There is absolutely no way an array of analogue processors could keep up with a similar array of ASICs. I expect an ASIC processor would outrun an analogue processor by a factor of about 1000.

Digital processors are now so fast that the arrays of 1024 RISC processors now appearing would be capable of outrunning a similar array of 1024 analogue processors by a factor of 100. The advantage of arrays of digital processors is that the internal interprocessor data-flow connections have a similar speed to the processors on the chip.
 
  • #80
Kirana Kumara P said:
As I have mentioned in one of my earlier replies, the very final form of the equations can be known only *during* runtime.
The final form is not needed unless you have an operating physical processor.

I expect the number of equations, the degree of the equations and the accuracy requirements should be specified now. If the general form of the equation set needing solution was given, it would be possible to determine an optimum processor topology, the expected cost and solution time. Without an example equation set, there is no problem to solve.
 
  • #81
Wouldn't it be a good idea to build an ASIC (a product or module based on an ASIC) that can solve a set of simultaneous nonlinear algebraic equations? One could assume the equations are quadratic to begin with. Even solving a set of simultaneous linear algebraic equations would have wide applications, because a wide variety of simulations (not just a particular algorithm for simulating biological organs) need to solve such equations. Hence a product (module) that can solve these equations, coupled to a digital computer, would have diverse applications.

Or, is it that the set of equations explained above are too general to be of any use as far as the circuit design is concerned (since the numerical values of the coefficients in the set of equations are not defined here)?

Or, could it be that a module that can solve one set of equations cannot be used to solve another set when the number of equations in the latter is smaller than in the former?
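For scale, the software baseline such a module would have to beat is short. A hedged sketch with a made-up pair of simultaneous quadratic equations (chosen so that (1, 2) is an exact root; nothing here is specific to organ simulation):

```python
import numpy as np
from scipy.optimize import fsolve

# Made-up pair of simultaneous quadratic equations F(x) = 0:
#   x0^2 + x1 - 3 = 0
#   x0 + x1^2 - 5 = 0
# (x0, x1) = (1, 2) is an exact solution.
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

x = fsolve(F, np.array([1.0, 1.0]))   # initial guess near the root
residual = np.max(np.abs(F(x)))
```

`fsolve` wraps MINPACK's Powell hybrid method; at 5000 unknowns one would want a sparse Newton-Krylov solver instead, but the calling pattern is similar.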
 
  • #82
Adding to my reply #81 above, let us assume for the time being that we have 5000 equations in the set (whether linear or nonlinear simultaneous algebraic equations), and that 1% error is allowed in the final result (again for the problem defined in #81 above).
 
  • #83
One small addition (correction) to my reply #82 above: if one wishes to commercialize the ASIC module (product) mentioned in #81 and #82, one should aim to match the accuracy normally offered by a general-purpose digital computer (1% accuracy is not enough). It would be even better if the user could decide, just before the computations start, how much error is allowed in the final results.
 
  • #84
Kirana Kumara P said:
Wouldn't it be a good idea to build an ASIC (a product or module based on the ASIC) that can solve a set of simultaneous nonlinear algebraic equations?
You go ahead: write the equation set for a single processor, define that processor, then load it into an FPGA so you can demonstrate that it works. I've got bigger fish to fry.

Kirana Kumara P said:
It would be even better if the user can -- just before the computations start -- decide how much error is allowed in the final results.
Why complicate things by allowing the user to adjust an irrelevant parameter? Are you trying to sabotage your own project?
 
  • #85
(1) Why does it matter to you how long the computation takes? You have quoted display frame rate as the limiting factor, but would it matter if there was a processing lag per frame of 10 milliseconds? 60 seconds? A week? The individual frames could still be assembled so as to display at the correct rate; there would just be a delay in starting the sequence.

(2) There are existing ANSYS models of a beating heart. Why is your problem so much more complex?
 
  • #86
Nidum said:
(1) Why does it matter to you how long the computation takes? You have quoted display frame rate as the limiting factor, but would it matter if there was a processing lag per frame of 10 milliseconds? 60 seconds? A week? The individual frames could still be assembled so as to display at the correct rate; there would just be a delay in starting the sequence.

(2) There are existing ANSYS models of a beating heart. Why is your problem so much more complex?

(1) One computation should be over within 30 milliseconds. I cannot do the computations beforehand and display the frames later, because I want to "poke" or "cut" a biological organ displayed on the computer screen and know in real time how the organ will deform when I poke or cut it. The virtual surgical tool displayed on the screen is tied to the position of the mouse pointer, which I control. I myself won't know where exactly on the organ I am going to "poke" ten seconds from now: the boundary conditions are not known beforehand, since they are decided by the location of the mouse pointer, and the user pokes the organ wherever and whenever he feels like it. The system should be interactive (and real-time). It is meant for training surgeons in commonly encountered surgical tasks.

(2) I am aware of the ANSYS models that you have mentioned. They won't serve my purpose.
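The real-time requirement described above amounts to a fixed per-frame compute budget. A bare-bones sketch of such an interactive loop (the deformation solve is a trivial placeholder, and the 30 ms budget is the figure quoted above):

```python
import time

FRAME_BUDGET_S = 0.030   # 30 ms per frame, the figure quoted above

def solve_deformation(poke_point):
    """Placeholder for the real organ-deformation solve."""
    return (0.001 * poke_point[0], 0.001 * poke_point[1])

def run_frames(n_frames, get_pointer):
    """Run frames under a fixed compute budget; count deadline misses."""
    overruns = 0
    for i in range(n_frames):
        t0 = time.perf_counter()
        # Boundary conditions come from the current pointer position,
        # which is not known in advance.
        displacement = solve_deformation(get_pointer(i))
        # ... render `displacement` here ...
        if time.perf_counter() - t0 > FRAME_BUDGET_S:
            overruns += 1    # this frame missed the real-time deadline
    return overruns

# Stand-in for a user waving the mouse around.
overruns = run_frames(100, lambda i: (i % 7, i % 11))
```

The hard part, of course, is making the real solve fit in the budget; the loop structure itself is trivial.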
 
  • #87
Kirana Kumara P said:
The Wikipedia entry on FPGA of course tells that FPGAs can be re-wired "on the filed", at the time of simulation, in real-time etc. However I am not sure whether the connections can be changed (switching) *during* simulations (because "real-time" in the Wikipedia entry could mean "just before the start of the simulation" also). It would be helpful if someone could clarify.
All computers (analog or digital) allow switching of signal paths during operation. But you must know ahead of time what types of switches will be required and have the options available before the simulation starts. The unused signal paths should usually run during the entire simulation so that their signals can be switched in at any time without startup transients. Then the only transients during a switch will be caused by the difference between the signal alternatives.

PS: Remember that the first "fly-by-wire" fighters, like the first batches of F-16s, had analog computers for their flight controls. They were capable of switching, variable gains, etc., with no problems. At that time, there were piloted handling-qualities flight simulations that used a combination of analog and digital computers; the analog computers integrated easily with digital computers running 1000 Hz frames.
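The keep-the-unused-path-running idea can be mimicked in a discrete-time sketch: two first-order filters (arbitrary, purely illustrative coefficients) both update on every step, and the switch only selects which output is read, so the only transient at the switch is the difference between the two signals:

```python
# Two first-order low-pass filters run continuously; a switch only
# selects which output is read. Because both paths keep updating,
# switching in a path causes no startup transient; the only jump is
# the difference between the two signals at the switching instant.
def make_lowpass(alpha):
    state = {"y": 0.0}
    def step(x):
        state["y"] += alpha * (x - state["y"])
        return state["y"]
    return step

fast = make_lowpass(0.5)    # arbitrary illustrative coefficients
slow = make_lowpass(0.05)

outputs = []
use_fast = True
for n in range(200):
    x = 1.0                              # step input
    y_fast, y_slow = fast(x), slow(x)    # both paths always run
    if n == 100:
        use_fast = False                 # switch paths mid-run
    outputs.append(y_fast if use_fast else y_slow)
```

By step 100 both filters have nearly settled to the step input, so the output glitch at the switch is tiny, which is the point being made above.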
 
  • #88
OK, now I finally understand what you are trying to do.

The computational problem can be reduced by using different resolutions for the FE model and for the visible display. The underlying FE model can have a relatively coarse mesh to allow fast computation, and the display can be generated using a 3D CAD engine to produce smooth curves and photorealistic images.
 
  • #89
Kirana Kumara P said:
(1) One computation should be over within 30 milliseconds. I cannot do computations beforehand and display the frames later. This is because I want to "poke" or "cut" a biological organ displayed on the computer screen, and I want to know in real-time how the biological organ is going to deform when I poke or cut it. ... I myself won't be knowing where exactly on the biological organ I am going to "poke"
This really shows how difficult your problem is. The variability of your problem makes it very hard to anticipate how things will change during the simulation. It is not just a prepared-for switching problem.
 
  • #90
Over short time periods the solutions for the mutating model are not going to change much, and possibly some parts of the solutions will not change at all.

In this situation an iterative solver might give very fast results: generate the next solution from the previous one.
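That warm-start idea can be sketched with a conjugate-gradient solver (a toy random SPD system stands in for a linearized stiffness matrix; `x0` is seeded with the previous solution, which is assumed to be close to the new one):

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# Toy symmetric positive-definite system standing in for a linearized
# FE stiffness matrix (purely illustrative, not a real organ model).
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

def solve_counting(A, b, x0=None):
    """Solve A x = b with CG, counting iterations via the callback."""
    count = {"iters": 0}
    def cb(xk):
        count["iters"] += 1
    x, info = cg(A, b, x0=x0, callback=cb)
    assert info == 0          # 0 means CG converged
    return x, count["iters"]

b_prev = rng.standard_normal(n)
x_prev, _ = solve_counting(A, b_prev)            # "previous frame" solution

b_new = b_prev + 0.01 * rng.standard_normal(n)   # slightly changed load
x_cold, n_cold = solve_counting(A, b_new)              # cold start
x_warm, n_warm = solve_counting(A, b_new, x0=x_prev)   # warm start
```

Because the load changed only slightly, the warm start begins with a much smaller residual and should need no more CG iterations than the cold start, which is exactly the effect being described.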
 
