Changing the circuitry of an analog computer *during* simulations?

AI Thread Summary
The discussion centers on the potential for analog circuitry to change during simulations, particularly in the context of real-time simulations of biological organs. Participants clarify that while analog computers can modify component values during computations, changing connections mid-simulation introduces undefined conditions and stability concerns. The feasibility of using feedback loops to automate these changes based on prior results is debated, with caution advised to avoid self-excitation issues. The conversation also touches on the speed limitations of analog computers compared to digital systems, suggesting that digital solutions may ultimately be more efficient for complex simulations. Overall, the exploration of dynamic circuitry in analog computing remains largely theoretical, with practical applications needing careful consideration.
  • #51
Coming back to the problem of simulating biological organs in real time, let me give a few more details about the problem I have in mind. The biological organs I have in mind are the liver and the kidney. Let us assume that we have the geometry of the organs in hand (CAD files, say). I am not bothered about the inner details of the organs; they may be assumed to be homogeneous and isotropic. When subjected to specified displacements at certain points on the surface, the entire surface of the organ undergoes deformation. Large deformations are allowed. The material behaviour is nonlinear, and the material may be assumed to be hyperelastic; the hyperelastic material properties are known. The geometry of the organ may change over time, e.g., because of cutting. The mass of the organ may be ignored; dynamics and inertia effects may also be ignored. My problem is to find the displacement of the entire surface of the organ when displacements at only a few points on the surface are known; this computation should be completed within 30 milliseconds (i.e., we should be able to complete about 30 such computations per second). There can be a slight change in the geometry (because of cutting, say) and in the boundary conditions between the end of one computation and the start of the next.
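To make the per-frame budget concrete, here is a minimal Python sketch (all sizes and matrices are made up for illustration) of the standard trick for this kind of problem: precompute everything expensive offline so that each frame reduces to matrix-vector products. It uses a linearized stand-in system, not the actual nonlinear hyperelastic model, so it only illustrates the structure of a per-frame solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n_free, n_fixed = 200, 6   # illustrative sizes, not a real organ mesh

# Offline: build a made-up SPD stand-in for the stiffness block K_ff and
# the coupling K_fp to the prescribed (tool-contact) dofs, then precompute
# the inverse once. This is the expensive step, amortized over all frames.
A = rng.standard_normal((n_free, n_free))
K_ff = A @ A.T + n_free * np.eye(n_free)
K_fp = rng.standard_normal((n_free, n_fixed))
K_ff_inv = np.linalg.inv(K_ff)

def frame_solve(u_prescribed):
    """Per-frame: recover all free displacements from the few prescribed
    ones via K_ff u_f = -K_fp u_p. Only matrix-vector products remain."""
    return K_ff_inv @ (-K_fp @ u_prescribed)

u_p = rng.standard_normal(n_fixed)   # e.g. displacements at the tool tip
u_f = frame_solve(u_p)
residual = np.linalg.norm(K_ff @ u_f + K_fp @ u_p)
```

The catch, of course, is that cutting changes the stiffness matrix itself, which is exactly what invalidates the precomputation and makes the real problem hard.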

The literature is clear that, as of now, nobody has succeeded in solving the above problem on a digital computer with reasonably good granularity, i.e., granularity that is usable for practical purposes (of course, many have made simplifying assumptions and thus claimed to have solved the problem).
 
  • #52
Kirana Kumara P said:
A digital solution methodology that employs a worse approximation could be slower than a digital solution methodology that employs a better approximation.
In which case only a fool would consider using the worst and slowest of the available solutions.
There are a huge number of bad and slow solutions. Any fixation on those failures would represent a lack of, or a misapplication of intelligence.

Kirana Kumara P said:
Coming back to the problem of finding the circumference of a circle using a digital computer, it may be faster to calculate (simulate) the more accurate solution (2*pi*radius) when compared to calculating the circumference by drawing a circle with the given radius on the computer screen by using a circle generation algorithm and then "measuring" the circumference by actually counting the number of pixels on the circumference and the distance between the pixels.
That is not really a good example or demonstration of anything. Also, the number of pixels around a circle is not π * the diameter in pixels. I believe it is actually closer to 2*√2 multiplied by the diameter of the circle in pixels.
 
  • Like
Likes Kirana Kumara P
  • #53
Baluncore said:
In which case only a fool would consider using the worst and slowest of the available solutions.
There are a huge number of bad and slow solutions. Any fixation on those failures would represent a lack of, or a misapplication of, intelligence.

That is not really a good example or demonstration of anything. Also, the number of pixels around a circle is not π * the diameter in pixels. I believe it is actually closer to 2*√2 multiplied by the diameter of the circle in pixels.

While solving real-world problems (represented in the form of some equations, say), it may not be possible to find a perfect electrical analogy (one may have to use some approximately analogous circuit). Now there is a difference between obtaining solutions by simulating this approximately analogous circuit on a digital computer, obtaining solutions by using the approximate analog circuit itself (circuit in the form of hardware), and obtaining solutions by directly solving the equations using a digital computer without making use of any analogy (or any circuit).

With reference to the digital simulation of analog circuits, I would not use the formula (pi*diameter). I assume that the circumference may be approximated by a polygon, where the length of each side of the polygon is decided by the distance between the centers of the two pixels that make up the two ends of that side.
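For what it's worth, the two routes can be compared directly in a few lines of Python (a sketch; the pixel count uses the textbook midpoint circle algorithm, which is only one of many possible circle generators):

```python
import math

def circle_pixels(r):
    """Count distinct pixels plotted by the integer midpoint circle
    algorithm, using 8-way symmetry."""
    pts = set()
    x, y, d = 0, r, 1 - r
    while x <= y:
        for px, py in ((x, y), (y, x)):
            pts.update({(px, py), (-px, py), (px, -py), (-px, -py)})
        if d < 0:
            d += 2 * x + 3
        else:
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return len(pts)

r = 1000
analytic = 2 * math.pi * r    # the direct calculation
pixels = circle_pixels(r)     # the "draw and count" route
```

Counting pixels this way gives roughly 4√2·r, against 2πr for the direct formula, which matches the earlier remark that the pixel count is closer to 2√2 times the diameter than to π times it.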
 
  • #54
It may be useful to have an analog computer that can solve a set of simultaneous nonlinear algebraic equations, even if it does nothing else. As the first step, having an analog computer that can solve a set of simultaneous linear algebraic equations may also be useful. It is also okay if the total number of equations in the set cannot be altered.

The above analog computer (which may be thought of as a module) may be coupled to a digital computer. When the digital computer comes across a set of equations, it can command the analog computer to solve the same.

But nobody would buy the module if the module fails frequently. Moreover, the module would have value only if one can prove that using the module would speed up the computations (when compared to solutions carried out using digital processors alone).
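One classical way to picture such a module: an analog computer can solve A·x = b by wiring integrators so that dx/dt = b − A·x; when A is positive definite, the circuit simply settles at the solution. Here is a hedged Python sketch that imitates that settling digitally with forward Euler (all matrices are made up for illustration):

```python
import numpy as np

# Illustrative "analog solver" for A x = b: integrators implementing
# dx/dt = b - A x settle at the exact solution when A is positive definite.
# Here we merely simulate that settling with forward Euler.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # positive definite => stable settling
b = rng.standard_normal(5)

x = np.zeros(5)
dt = 0.5 / np.linalg.norm(A, 2)    # step well inside the stability limit
for _ in range(20000):
    x = x + dt * (b - A @ x)

err = np.linalg.norm(A @ x - b)    # how far from equilibrium we are
```

The attraction is that a physical circuit does this settling continuously and in parallel; the digital loop above is only a model of it.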
 
  • #55
I am extremely grateful to each and every one who has replied to my questions. Physics Forum has already given me more than what I expected to get (both in terms of quality and quantity).
 
  • #56
Kirana Kumara P said:
Now there is a difference between obtaining solutions by simulating this approximately analogous circuit on a digital computer, obtaining solutions by using the approximate analog circuit itself (circuit in the form of hardware), and obtaining solutions by directly solving the equations using a digital computer without making use of any analogy (or any circuit).
It is all very well hypothesising that there are a huge number of possible solutions and that some are better than others. If it can be done with analogue components then it can be done faster digitally. If it can't be done with analogue components then there is no issue, digital will be the only way to solve it.

Kirana Kumara P said:
I assume that the circumference may be approximated by a polygon while length of each side of the polygon is decided by the distance between the centers of the two pixels that make up the two ends of the side of the polygon.
As I wrote, it is not a good example. Where pixels are arranged in a rectangular matrix, it is the Manhattan distance not the linear distance that decides the number of pixels needed to draw the side of a polygon.

Kirana Kumara P said:
It may be useful to have an analog computer that can solve a set of simultaneous nonlinear algebraic equations, even if it does nothing else. As the first step, having an analog computer that can solve a set of simultaneous linear algebraic equations may also be useful. It is also okay if the total number of equations in the set cannot be altered.
Solving sets of simultaneous linear algebraic equations is now done efficiently using digital computers. Going back to analogue computers would be a waste of time and resources.

Any technique "may be useful". Solving real problems is actually better than "may be useful".
 
  • Like
Likes Kirana Kumara P
  • #57
Baluncore said:
If it can be done with analogue components then it can be done faster digitally. If it can't be done with analogue components then there is no issue, digital will be the only way to solve it.
What can we do if it can't be done digitally? I hope analog computing might come to the rescue.

Baluncore said:
As I wrote, it is not a good example. Where pixels are arranged in a rectangular matrix, it is the Manhattan distance not the linear distance that decides the number of pixels needed to draw the side of a polygon.
I gave the example just to explain the concept; finer details are not important.

Baluncore said:
Solving sets of simultaneous linear algebraic equations is now done efficiently using digital computers. Going back to analogue computers would be a waste of time and resources.
But not as efficiently as I want it to be (there is a difference between efficiency and wall-clock time).
 
  • #58
Kirana Kumara P said:
What can we do if it can't be done digitally? I hope analog computing might come to the rescue.
You have got it backwards. Analogue computing was once the “maiden in distress”, who died. Digital computing is still the “knight in shining armour”.

Kirana Kumara P said:
But not as efficiently as I want it to be (there is a difference between efficiency and wall-clock time).
Everyone wants better algorithms.

You are thinking in circles trying to justify your misplaced faith in analogue computers. Go ahead, write a representative set of equations, design an analogue processor to solve them, then build a prototype and test it.
 
  • Like
Likes Kirana Kumara P and anorlunda
  • #59
There has been interest in large-scale analog chips to perform functions that are not practical on digital computers (e.g., http://www.news.gatech.edu/2016/03/02/configurable-analog-chip-computes-1000-times-less-power-digital). One area of possible application is in neural networks. I have seen problems in the last 5 years that cannot realistically be done on digital computers even in non-real-time batch mode.

As others have said, it has always been possible to include real-time switching in analog simulations so that the circuit changes as the simulation runs. That switching can happen in nanoseconds. However, all the options must be programmed ahead of time, with the simulation just switching between signal paths. Variable gains and parameters are always possible, and it is easy to arrange variable gains that zero out one signal path while turning another on.
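The "variable gains" trick can be sketched in a few lines (illustrative signals and timings, not any particular simulation): both paths run continuously, and a complementary gain pair fades one out while fading the other in, so switching causes no startup transient.

```python
import math

def switched_output(t, g):
    """Blend two always-running signal paths with complementary gains.
    Both illustrative signals exist at all times; only g decides the mix."""
    path_a = math.sin(2 * math.pi * 5 * t)        # illustrative signal A
    path_b = 0.5 * math.sin(2 * math.pi * 9 * t)  # illustrative signal B
    return g * path_a + (1.0 - g) * path_b

def gain(t, t_switch=0.05, ramp=0.01):
    """Gain ramps from 1 to 0 over 10 ms instead of jumping, emulating a
    fast transient-free analog switch."""
    if t < t_switch:
        return 1.0
    if t > t_switch + ramp:
        return 0.0
    return 1.0 - (t - t_switch) / ramp
```

Only the difference between the two signals appears at the output during the switch, which is the point of keeping the unused path running.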

Although a lot is possible with old-fashioned analog computers, they have great practical disadvantages compared to digital computers. I would not consider them unless there was no possibility of solving the problem digitally. It sounds like you have already tried digital computers on your problem and cannot do it in real time. I am not really familiar with the current work on large-scale analog chips; apparently they require very little power (and therefore little cooling).
 
  • Like
Likes Kirana Kumara P
  • #60
I think that this thread has been hindered by differing definitions of analog computer.

Many of us think of a general purpose machine based on operational amplifiers. We arrange feedback around those amplifiers to represent our equations. The amplifiers have limitations including settling time.

A broader view of the term includes electric circuits whose equations are direct analogs of the study system. Their behavior is what it is, instantaneously (except for near-light-speed propagation). As I said in #18, a simple resistor is an analog computer useful for solving Ohm's Law. The challenge is to find a nonlinear circuit which really is analogous. There have been many analog/hybrid attempts to do that with neuron analogies.

The OP has not demonstrated here that he understands circuits well enough to understand the difference.
 
  • Like
Likes Kirana Kumara P
  • #61
Baluncore said:
Everyone wants better algorithms.

But in my case speedier simulation is not merely desirable; it is a necessity. I have no problem going to vacuum tubes, or to hydraulic or pneumatic analog computers, or even going back a thousand years to build a mechanical analog computer, if it can deliver fast enough simulations (not just faster than any other type of computer, but fast enough to solve my problem within the allowed time).
 
  • #62
My electrical analog computer has the following characteristics:
1) It is described by an electrical circuit.
2) The circuit represents the electrical analogy for the original physics (or problem or system) to be simulated.

The individual electronic/electrical components of the above analog computer (filters, amplifiers, switches, resistors, capacitors, etc.) can be either analog or digital. Even if some of these components are digital, I still call the circuit an analog computer.

Based on the above definition of an analog computer, my "common sense" says (though "common sense" can go wrong at times) that the analog computer will be the fastest, followed by the purely digital simulation, with the digital simulation of the analog circuit the slowest. I "feel" that this would hold true at least for many problems (it may or may not be true for all of them; that is, which of the simulations is faster may depend on the problem taken up).

The reason I feel that the digital simulation of the analog circuit can be slower than the purely digital simulation is that the purely digital simulation does not require any circuit (or any electrical analogy). Although both digital simulations could be perfectly analogous, they may use different algorithms for their respective solutions (since how the original problem is digitally represented/defined/explained could be different).

Now, if some theory says that the digital simulation of an analog computer is faster than the analog computer itself, the theory may assume that the same accuracy is expected of both simulations. But if some amount of error is allowed, the digital simulation may be computing more accurate results even when that is not required (which may make it slower than the analog computer, even while the theory predicts otherwise).
 
  • #63
Just some unrealistic thoughts below (although I am a mechanical engineer and do not know much about electrical circuits, I know that the following are quite unrealistic, at least for the time being).

Let us assume that someone designs an analog computer which has tens of thousands of electrical components. If the entire analog circuit can be placed (fabricated) on a very small piece of material (like VLSI, or a modern Intel processor), then the analog computer is likely to be very fast (fast simulations, fast switching). Other advantages: reliability, less heat generated, less space required, lower power consumption.

Going a step further, someone who designs an analog computer (such as the one mentioned in the last paragraph above) should be able to get it fabricated (on a very small chip, as mentioned above) by outsourcing the fabrication work to some company (e.g., Intel). Ordering (fabricating) just one piece (one analog computer) should also be allowed.
 
  • #64
Kirana Kumara P said:
Just some unrealistic thoughts below (although I am a mechanical engineer and do not know much about electrical circuits, I know that the following are quite unrealistic, at least for the time being).

Let us assume that someone designs an analog computer which has tens of thousands of electrical components. If the entire analog circuit can be placed (fabricated) on a very small piece of material (like VLSI, or a modern Intel processor), then the analog computer is likely to be very fast (fast simulations, fast switching). Other advantages: reliability, less heat generated, less space required, lower power consumption.

Going a step further, someone who designs an analog computer (such as the one mentioned in the last paragraph above) should be able to get it fabricated (on a very small chip, as mentioned above) by outsourcing the fabrication work to some company (e.g., Intel). Ordering (fabricating) just one piece (one analog computer) should also be allowed.

There is so much wrong with this I don't even know where to begin.

Tens of thousands of electrical components for your integrated analog computer is too small an estimate. A VLSI analog computer like you're describing would more likely be millions of components to simulate (an op amp is thousands of components once parasitics have been extracted). You could simulate it, of course, but it won't be fast, since simulation time increases, to first order, as the square of the number of components.

Sure you could do it in a VLSI chip but what do you mean "a modern Intel processor"? Intel's fabrication process is *highly* optimized for digital and their internal analog designers have to jump through outrageous hoops just to get anything analog (like a clock generator) to work.

Anyway, you aren't going to be outsourcing anything to Intel. They don't operate a foundry service. You would need something like MOSIS to broker space on a wafer run for you. I doubt you can afford a full wafer engineering run; you'll get 40 to 100 parts depending on the process you use. Even an older process would set you back a few tens of thousands of dollars in fabrication cost alone. You could buy an HPC cluster for that price.

The real sticking point, though, will be how do you propose to design the analog computer? What tools will you use? Public domain tools, quite frankly, suck and professional tools are really expensive. I mean REALLY EXPENSIVE.

Also, did you know it is hard to make different op amp circuits (for example, integrators) on the same wafer act in a similar way? How do you propose to deal with that? I'm guessing you don't know that's a problem.

I think your concept of an analog computer doing fast simulations for specific problems is sound in principle, but I think you are way, way out of your depth. Simulate with software and be done with it. Start with MATLAB or SciPy and migrate to C if it's too slow. That is my recommendation.
 
  • Like
Likes Kirana Kumara P
  • #65
There already are VLSI analog chips that are user-programmable. They are called Field Programmable Analog Arrays (FPAA). They can be used similarly to Field Programmable Gate Arrays (FPGA). See http://www.anadigm.com/fpaa.asp , http://www.okikatechnologies.com/ , and https://en.wikipedia.org/wiki/Field-programmable_analog_array .

Programming either FPAAs or FPGAs requires some help in scaling all the signals. The calculations are fixed point. MATLAB has a tool to turn a floating point signal diagram into a FPGA fixed point diagram and a similar thing would be useful for FPAAs. In fact, as far as I know, it might apply directly.
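The scaling step amounts to choosing a word length and a binary-point position and accepting the resulting quantization and saturation. A toy quantizer (a sketch; the Q-format parameters are arbitrary choices, not any MATLAB tool's defaults) shows the effect:

```python
def to_fixed(x, frac_bits=12, word_bits=16):
    """Quantize a float to a signed fixed-point grid (word_bits total,
    frac_bits after the binary point), then return it as a float again
    so the quantization/saturation error can be inspected."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))   # round, then saturate
    return q / scale

err = abs(to_fixed(0.123456) - 0.123456)     # bounded by half an LSB
```

Making every internal signal fit a format like this, without overflow and without drowning in quantization noise, is essentially what the scaling help amounts to.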
 
  • Like
Likes Kirana Kumara P
  • #66
In all fairness, Field Programmable Analog Arrays universally suck. There is a reason they are only offered by tiny companies and no one buys them.

For the OP's application, anyway, an FPAA would be a total Charlie-Foxtrot since they are invariably switched-capacitor internally and so the OP would have to deal with sampled-data effects on top of solving the desired equations. The whole point of going analog here is speed... and FPAAs just aren't going to get it done.
 
  • Like
Likes Kirana Kumara P
  • #67
analogdesign said:
In all fairness, Field Programmable Analog Arrays universally suck. There is a reason they are only offered by tiny companies and no one buys them.

For the OP's application, anyway, an FPAA would be a total Charlie-Foxtrot since they are invariably switched-capacitor internally and so the OP would have to deal with sampled-data effects on top of solving the desired equations. The whole point of going analog here is speed... and FPAAs just aren't going to get it done.
I am not familiar with them, but one spec sheet that I looked at quoted a signal bandwidth of up to 2 MHz. From that, I assumed that any sampling effects are minimal except at very high frequencies. (http://www.anadigm.com/_doc/DS231000-U001.pdf)
 
  • Like
Likes Kirana Kumara P
  • #68
The sampled data effect is the need for good quality anti-aliasing filtering. That means you're adding op amps on your front end even if you are eliminating them by using this part you linked to.

Also, based on the OP's speed requirements, 2 MHz just isn't going to cut it.
 
  • Like
Likes Kirana Kumara P
  • #69
analogdesign said:
The sampled data effect is the need for good quality anti-aliasing filtering. That means you're adding op amps on your front end even if you are eliminating them by using this part you linked to.

Also, based on the OP's speed requirements, 2 MHz just isn't going to cut it.
If the OP is looking at something with mode frequencies over 2 MHz, then it is no wonder that digital computers are not solving it. But he was talking about running at a 30 millisecond frame, with a desire for 1 millisecond frame. So I assume that he is dealing with frequencies way below 2 MHz ... more on the order of well under 500 Hz.
 
  • Like
Likes Kirana Kumara P
  • #70
The OP said at one point that he or she wants the calculations done in 1 ms. Assuming 0.1% settling (that's about 10 bits) and a feedback factor of 0.1 (who knows how much gain would be needed?), the GBW required would be -ln(0.001)/(2π × 0.1 × 1 ms) ≈ 11 kHz. That is well within the reach of the analog circuits on the part you linked to. Who knows? Maybe it would work.
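The settling arithmetic here can be checked with a few lines (same illustrative numbers: single-pole settling, 0.1% tolerance, feedback factor 0.1):

```python
import math

def required_gbw(t_settle, eps=0.001, beta=0.1):
    """Gain-bandwidth a single-pole amplifier needs to settle to within a
    fraction eps of its final value in t_settle, with feedback factor beta:
    the closed-loop pole sits at beta*GBW, and settling to eps takes about
    -ln(eps) time constants."""
    return -math.log(eps) / (2 * math.pi * beta * t_settle)

gbw = required_gbw(1e-3)   # 1 ms settling -> roughly 11 kHz
```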
 
  • Like
Likes Kirana Kumara P
  • #71
I am not sure that we are talking about the same thing. It is routine for analog oscilloscopes to handle 30 MHz signals (100 MHz on expensive ones). So I wouldn't think that there is any problem with the mode frequencies of a system that only needs to complete a calculation every 30 milliseconds (33 times a second, with a wish to increase to 1000). Those frequencies must be significantly less than 33 Hz. Any analog circuit can handle that.
 
  • Like
Likes Kirana Kumara P
  • #72
If you look at the spectrum of a signal that settles in 30 ms, you will see that it has components in excess of 33 Hz. An analog computer needs to settle to a new value based on changes in its input stimulus; it isn't a system being passed a 33 Hz sine. I think that is the disconnect. For a system to settle to 1% in 30 ms, it needs a bandwidth far in excess of 1/(30 ms) = 33 Hz (see above).
 
  • Like
Likes Kirana Kumara P
  • #73
Since analog oscilloscopes routinely handle up to 30 MHz (and expensive ones go up to 100 MHz), I find it hard to believe that any modern system would have trouble with 33 Hz.
 
  • Like
Likes Kirana Kumara P
  • #74
How many Miller integrators are needed to solve the set of equations?
And what will the time constant of those analogue integrators need to be?

There is actually no problem here as there is no set of equations to solve.

In my opinion you are all dancing on the head of a pin.
 
  • Like
Likes Nidum and Kirana Kumara P
  • #75
But dancing on the head of a pin is so *fun*!
 
  • Like
Likes Kirana Kumara P
  • #76
Baluncore said:
There is actually no problem here as there is no set of equations to solve.

As I have mentioned in one of my earlier replies, the very final form of the equations can be known only *during* runtime.
 
  • #77
FactChecker said:
There already are VLSI analog chips that are user-programmable. They are called Field Programmable Analog Arrays (FPAA). They can be used similarly to Field Programmable Gate Arrays (FPGA).

The Wikipedia entry on FPGAs does say that FPGAs can be re-wired "in the field", at the time of simulation, in real time, etc. However, I am not sure whether the connections can be changed (switched) *during* simulations (because "real-time" in the Wikipedia entry could also mean "just before the start of the simulation"). It would be helpful if someone could clarify.
 
  • #78
I have come across Application-Specific Integrated Circuit (ASIC). Would this be useful?

I think an ASIC is usually a digital processor. Instead of using a standard CPU, can the use of ASICs speed up the simulations if one chooses digital computing?

Or, is it possible to have an analog ASIC? (Let us not worry about the cost for now). Would it be possible to change the connections (switching) if that happens to be the case? Do my previous question (#63) and the reply from member analogdesign (#64) refer to the same thing that I am talking about here?
 
  • #79
Kirana Kumara P said:
I have come across Application-Specific Integrated Circuit (ASIC). Would this be useful?
Yes. It is the only way to go if you want the highest speed processors to run dedicated algorithms.

Kirana Kumara P said:
Or, is it possible to have an analog ASIC? (Let us not worry about the cost for now).
There is really no such thing as a fast analogue ASIC. There are reconfigurable analogue processors, but they are slow and not really useful for analogue computers because they lack the external interfaces needed to get data in and out.

Baluncore said:
It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array
The advantage of using FPGAs or ASICs is that a single ASIC could probably be programmed to contain between 4 and 32 specialised digital processors. Expect an ASIC to have a cycle time of 1 nsec and be able to change parameters or data flow in less than 2 nsec. There is absolutely no way an array of analogue processors could keep up with a similar array of ASICs. I expect an ASIC processor would outrun an analogue processor by a factor of about 1000.

Digital processors are now so fast that the arrays of 1024 risc processors now appearing would be capable of outrunning a similar array of 1024 analogue processors by a factor of 100. The advantage of arrays of digital processors is that the internal interprocessor data flow connections have a similar speed to the processors in the chip.
 
  • Like
Likes Kirana Kumara P
  • #80
Kirana Kumara P said:
As I have mentioned in one of my earlier replies, the very final form of the equations can be known only *during* runtime.
The final form is not needed unless you have an operating physical processor.

I expect the number of equations, the degree of the equations and the accuracy requirements should be specified now. If the general form of the equation set needing solution was given, it would be possible to determine an optimum processor topology, the expected cost and solution time. Without an example equation set, there is no problem to solve.
 
  • Like
Likes Kirana Kumara P
  • #81
Wouldn't it be a good idea to build an ASIC (a product or module based on the ASIC) that can solve a set of simultaneous nonlinear algebraic equations? One can assume that the equations are quadratic to begin with. Solving even a set of simultaneous linear algebraic equations will have wide applications. This is because a wide variety of simulations (not just a certain algorithm for the simulation of biological organs) need to solve these equations. Hence a product (module) which can solve these equations and which one would couple to a digital computer would have diverse applications.

Or, is it that the set of equations explained above are too general to be of any use as far as the circuit design is concerned (since the numerical values of the coefficients in the set of equations are not defined here)?

Or, would it be that a module that can solve one set of equations cannot be used to solve another set when the total number of equations in the latter is smaller than in the former?
 
  • #82
Adding to my reply #81 above, let us assume for the time being that we have 5000 equations in the set (whether simultaneous linear or simultaneous nonlinear algebraic equations). And let us assume that 1% error is allowed in the final result (again for the problem defined in #81 above).
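As a digital reference point for the proposed module, here is a minimal Newton-iteration sketch on a toy quadratic system (all sizes and coefficients are illustrative; a real module would face the 5000-equation case):

```python
import numpy as np

# Toy quadratic system F_i(x) = x_i^2 + sum_j A_ij x_j - b_i = 0.
# Sizes and coefficients are made up; the diagonal weighting keeps the
# Jacobian well conditioned so Newton's method converges from x = 0.
rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

def F(x):
    return x**2 + A @ x - b

def J(x):
    return np.diag(2 * x) + A   # Jacobian of F

x = np.zeros(n)
for _ in range(50):
    x = x - np.linalg.solve(J(x), F(x))   # one Newton step
    if np.linalg.norm(F(x)) < 1e-10:
        break
```

Each Newton step itself requires a linear solve, which is why a fast linear-equation module would already be valuable for the nonlinear case.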
 
  • #83
One small addition (correction) to my reply #82 above: if one wishes to commercialize the ASIC module (product) mentioned in replies #81 and #82, one should aim to achieve the same accuracy as is normally offered by a general-purpose digital computer (1% accuracy is not enough). It would be even better if the user can -- just before the computations start -- decide how much error is allowed in the final results.
 
  • #84
Kirana Kumara P said:
Wouldn't it be a good idea to build an ASIC (a product or module based on the ASIC) that can solve a set of simultaneous nonlinear algebraic equations?
You go ahead: write the equation set for a single processor, design it, then load that processor into an FPGA so you can demonstrate that it works. I've got bigger fish to fry.

Kirana Kumara P said:
It would be even better if the user can -- just before the computations start -- decide how much error is allowed in the final results.
Why complicate things by allowing the user to adjust an irrelevant parameter? Are you trying to sabotage your own project?
 
  • Like
Likes Kirana Kumara P
  • #85
(1) Why does it matter to you how long the computation takes? You have quoted the display frame rate as the limiting factor, but would it matter if there was a processing lag per frame of 10 milliseconds? 60 seconds? A week? The individual frames could still be assembled so as to display at the correct rate; there would just be a delay in starting the sequence.

(2) There are existing ANSYS models of a beating heart. Why is your problem so much more complex?
 
  • Like
Likes Kirana Kumara P
  • #86
Nidum said:
(1) Why does it matter to you how long the computation takes? You have quoted the display frame rate as the limiting factor, but would it matter if there was a processing lag per frame of 10 milliseconds? 60 seconds? A week? The individual frames could still be assembled so as to display at the correct rate; there would just be a delay in starting the sequence.

(2) There are existing ANSYS models of a beating heart. Why is your problem so much more complex?

(1) One computation should be over within 30 milliseconds. I cannot do computations beforehand and display the frames later. This is because I want to "poke" or "cut" a biological organ displayed on the computer screen, and I want to know in real time how the organ is going to deform when I poke or cut it. The virtual surgical tool displayed on the screen is tied to the position of the mouse pointer (which I control). I myself won't know where exactly on the organ I am going to "poke" ten seconds from now: the boundary conditions are not known beforehand; they are decided by the location of the mouse pointer, and the mouse pointer is controlled by a user who "pokes" the organ wherever and however he feels like, without himself knowing where he will "poke" next. The system should be interactive (and real-time). The system is meant for training surgeons in commonly encountered surgical tasks.

(2) I am aware of the ANSYS models that you have mentioned. They won't serve my purpose.
 
  • #87
Kirana Kumara P said:
The Wikipedia entry on FPGAs does say that FPGAs can be re-wired "in the field", at the time of simulation, in real time, etc. However, I am not sure whether the connections can be changed (switched) *during* simulations (because "real-time" in the Wikipedia entry could also mean "just before the start of the simulation"). It would be helpful if someone could clarify.
All computers (analog or digital) allow switching of signal paths during operation. But you must know ahead of time what types of switches will be required and have the options available before the simulation starts. The unused signal paths should usually run during the entire simulation so that their signals can be switched in at any time without startup transients. Then the only transients during a switch will be caused by the difference between the signal alternatives.
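A minimal digital sketch of this idea (the signals and names below are hypothetical, not from the thread): both paths are evaluated at every step, and the switch only changes which one is selected, so the only transient at the switch is the difference between the two signal values at that instant.

```python
import math

def path_a(t):
    """Primary signal path, evaluated continuously."""
    return math.sin(2 * math.pi * t)

def path_b(t):
    """Alternative path, also evaluated at every step, so it carries
    no startup transient when it is switched in."""
    return 0.5 * math.sin(2 * math.pi * t)

def simulate(t_switch, dt=1e-3, n_steps=2000):
    """Select path_a before t_switch and path_b after it.

    Both paths are computed at every step; only the selection changes,
    so the only discontinuity is the value difference at the switch.
    """
    return [path_a(i * dt) if i * dt < t_switch else path_b(i * dt)
            for i in range(n_steps)]

samples = simulate(t_switch=1.0)
```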

PS. Remember that the first "fly-by-wire" fighters like the first batches of F-16 had analog computers for their flight controls. They were capable of switching, variable gains, etc., with no problems. At that time, there were piloted handling qualities flight simulations that used a combination of analog and digital computers. The analog computers easily integrated with digital computers that ran with 1000 Hz frames.
 
  • #88
OK - now I finally understand what you are trying to do.

The computational problem can be reduced by using different resolutions for the FE model and for the visible display. The underlying FE model can have a relatively coarse mesh to allow fast computation, and the display could be generated using a 3D CAD engine to produce smooth curves and photo-realistic images.
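A minimal sketch of the two-resolution idea, using a hypothetical 1D slice rather than a real organ mesh: solve on a coarse node set, then fill in the much finer display mesh by interpolation instead of by solving at full resolution.

```python
import numpy as np

# Hypothetical 1D slice through the organ surface: 5 coarse FE nodes.
coarse_x = np.linspace(0.0, 1.0, 5)
coarse_disp = coarse_x ** 2          # stand-in for the computed FE solution

# The display mesh is much finer, but its vertices are filled in by
# interpolating the coarse solution, so the per-frame cost stays low.
fine_x = np.linspace(0.0, 1.0, 101)
fine_disp = np.interp(fine_x, coarse_x, coarse_disp)
```

A real implementation would use smooth (e.g. subdivision-surface) interpolation in 3D, but the cost structure is the same: the expensive solve scales with the coarse mesh, the cheap interpolation with the display mesh.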
 
  • #89
Kirana Kumara P said:
(1) One computation should be over within 30 milliseconds. I cannot do computations beforehand and display the frames later. This is because I want to "poke" or "cut" a biological organ displayed on the computer screen, and I want to know in real-time how the biological organ is going to deform when I poke or cut it. ... I myself won't be knowing where exactly on the biological organ I am going to "poke"
This really tells us how difficult your problem is. The variability of your problem makes it very hard to anticipate how things will change during the simulation. It is not just a prepared-for switching problem.
 
  • #90
Over short time periods the solutions for the mutating model are not going to change that much, and possibly some parts of the solutions will not change at all.

In this situation an iterative solver might give very fast results: generate the next solution from the previous one.
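A toy illustration of that warm-starting idea (the matrix and loads below are hypothetical stand-ins, not an FE model): when the load changes only slightly between frames, starting the iteration from the previous solution converges in fewer iterations than starting from scratch.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Jacobi iteration; returns the solution and the iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Diagonally dominant "stiffness-like" tridiagonal matrix.
n = 50
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b1 = np.ones(n)
x1, cold_iters = jacobi(A, b1, np.zeros(n))      # cold start from zero

# Slightly perturbed load (the "mutated" model): warm start from x1.
b2 = b1 + 0.01 * np.sin(np.arange(n))
x2, warm_iters = jacobi(A, b2, x1)
```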
 
  • #91
Nidum said:
The computational problem can be reduced by using different resolutions for the FE model and for the visible display. The underlying FE model can have a relatively coarse mesh to allow fast computation, and the display could be generated using a 3D CAD engine to produce smooth curves and photo-realistic images.

If the underlying FE model is represented by a relatively coarse mesh, that leads to inaccuracy. That is indeed one of the approximations through which people have already tried to address the problem, and several other approximations have been tried as well. The approximations were required because engineers sometimes cannot simply say that they have no solution; but they keep looking for better solutions (or true solutions). The problem at hand is not new, and it is not one identified by me. It is also well known that present-day digital computers cannot solve the problem accurately. Of course, the people who have used the various approximations have used digital computers most of the time (if not always), and have done whatever is possible with digital computers (more realistic simulations are "future work"!).
 
  • #92
Nidum said:
Over short time periods the solutions for the mutating model are not going to change that much and possibly some parts of the solutions will not change at all .

In this situation an iterative solver might give very fast results . Generate the next solution from the previous one .

These have already been tried in the literature. Again, they are among the other ways of getting an approximate solution.

One more approximation commonly encountered in the literature is to compute -- instead of thirty frames per second -- far fewer frames per second and interpolate between them!
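The keyframe-plus-interpolation approximation can be sketched as follows (the solver below is a hypothetical stand-in for a full FE computation): only a few "key" frames per second are actually solved, and the display frames in between are interpolated.

```python
import numpy as np

def expensive_solve(t):
    """Stand-in for one full nonlinear FE solve at time t (hypothetical)."""
    return np.array([np.sin(t), np.cos(t)])

# Solve only 10 key frames over the interval ...
key_times = np.linspace(0.0, 0.9, 10)
key_frames = np.array([expensive_solve(t) for t in key_times])

# ... then interpolate them up to the ~30 fps display rate.
display_times = np.linspace(0.0, 0.9, 28)
display_frames = np.column_stack(
    [np.interp(display_times, key_times, key_frames[:, i]) for i in range(2)]
)
```

The obvious cost is that any deformation faster than the keyframe rate is smoothed away, which is exactly the inaccuracy the posts above are objecting to.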
 
  • #93
FactChecker said:
At that time, there were piloted handling qualities flight simulations that used a combination of analog and digital computers. The analog computers easily integrated with digital computers that ran with 1000 Hz frames.

The literature makes it clear that building surgical simulators (used to train surgeons in surgical procedures) is far harder than building flight simulators (used for training pilots).
 
  • #94
FactChecker said:
This really tells us how difficult your problem is. The variability of your problem makes it very hard to anticipate how things will change during the simulation. It is not just a prepared-for switching problem.

Could this mean that one cannot design an analog computer that can solve this type of problem (it is not just a prepared-for switching problem; things will change during the simulation; there is variability in the problem)?
 
  • #95
Kirana Kumara P said:
Could this mean that one cannot design an analog computer that can solve this type of problem (it is not just a prepared-for switching problem; things will change during the simulation; there is variability in the problem)?
This does sound much more difficult than a flight simulator. I don't know how to set up an analog simulation for it. It's a special application that would require different experience than I have. Sorry.
 
  • #97
Kirana Kumara P said:
The problem at hand is not new, and it is not one identified by me. It is also well known that present-day digital computers cannot solve the problem accurately. Of course, the people who have used the various approximations have used digital computers most of the time (if not always), and have done whatever is possible with digital computers (more realistic simulations are "future work"!).
Everything is an approximation. Just because a problem has not been solved to your satisfaction does not mean it is established as impossible. If that were the case, no problem would ever be solved.

Changing the element data structure can make a difficult problem tractable. Consider a 3D array of elements distributed as though throughout the mass of an organ. Each element would know where in 3D space it was, and which elements were its immediate neighbours. A cut would be represented by a change of the interconnection parameters between elements, with a resulting disconnection of positions. More virtual elements could be inserted where detail was needed, by replacing existing elements with multiple elements. Neighbours would be identified by dynamic links to other elements. Areas not being affected could be left at coarse resolution. Areas previously needing high resolution could have some elements merged, to reduce processor time, and so be returned to coarse resolution while retaining well-defined boundaries.

All the requirements for switching signal paths have then gone. Everything is determined by changing element positions, boundary parameters and dynamic links to neighbouring elements.
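The linked-element idea could be sketched along these lines (all class and function names are hypothetical): a "cut" is nothing more than the removal of the links crossing the cut surface.

```python
class Element:
    """One element of the organ model."""
    def __init__(self, pos):
        self.pos = pos                 # (i, j, k) location in space
        self.neighbours = set()        # dynamic links to adjacent elements

def build_grid(n):
    """Build an n x n x n block of fully connected elements."""
    grid = {(i, j, k): Element((i, j, k))
            for i in range(n) for j in range(n) for k in range(n)}
    for (i, j, k), e in grid.items():
        # Link each element to its +x, +y, +z neighbour (and back).
        for d in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            nb = (i + d[0], j + d[1], k + d[2])
            if nb in grid:
                e.neighbours.add(nb)
                grid[nb].neighbours.add((i, j, k))
    return grid

def cut(grid, a, b):
    """Sever the link between two adjacent elements (the 'scalpel')."""
    grid[a].neighbours.discard(b)
    grid[b].neighbours.discard(a)

grid = build_grid(3)
cut(grid, (1, 1, 1), (1, 1, 2))
```

Refinement and coarsening would work the same way: replace one element with several (or several with one) and rewire only the affected links, leaving the rest of the structure untouched.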

How many elements would a 100 mm cube with 1 mm elements need? It would require about 10^6 elements. If each “relaxation” required 1 usec of digital 1GHz processing per element, it would need 1 second of processor time per relaxation. That could be done in about 30 msec by 32 independent processors working on elements in the same data array. It is an ideal problem for the arrays of up to 1024 RISC processors now available on a chip.
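The arithmetic in that estimate, spelled out:

```python
elements = (100 // 1) ** 3               # 100 mm cube at 1 mm resolution
per_element_time = 1e-6                  # 1 us of 1 GHz processing per element

serial_time = elements * per_element_time    # one relaxation on one processor
parallel_time = serial_time / 32             # shared across 32 processors
```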

The very fact that the elements know where they are in space, as they move with each milligram of meat, makes the problem tractable. You can even drop a specimen in a jar and send it off to histology while continuing the simulation.

It would be impossible to solve that kind of problem with analogue computing elements, because the available processors from the pool would need to be continuously reassigned to process elements at variable 3D positions, with different separations and boundary connection parameters, in the array. Initialising the state variables in an analogue processor every 1 usec would be as impossible as the task of building a rigid million element 3D array of analogue processors.
 
  • #98
Baluncore said:
Initialising the state variables in an analogue processor every 1 usec would be as impossible as the task of building a rigid million element 3D array of analogue processors.
In an analog computer, the state variables are continuously available as a result of the continuous integrals, derivatives, and other signal values. They are continuous and have no frame time or sampling frequency.
 
  • #99
FactChecker said:
In an analog computer, the state variables are continuously available as a result of the continuous integrals, derivatives, and other signal values. They are continuous and have no frame time or sampling frequency..
A 1 litre organ will weigh about 1 kg. 1 mm resolution in 3D will require about 1 million elements. You cannot realistically build an array of 1 million analogue computers, so you have to switch analogue processors between elements or tasks in microseconds. That is simply not possible, due to the cost of the analogue sample-and-hold storage arrays required, or the number of A-D and D-A converters needed to operate at video speeds of about 50 MHz in parallel.
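The bandwidth estimate behind that claim, spelled out (the 30 ms frame time is the requirement stated earlier in the thread; treating one state load and read-back per element per frame is my simplifying assumption):

```python
elements = 1_000_000        # ~1 litre organ at 1 mm resolution
frame_time = 0.030          # one complete solution every 30 ms

# Each element's state must be written into an analogue processor and the
# result read back out within the frame, so the converters must sustain
# at least this many conversions per second:
updates_per_second = elements / frame_time   # tens of millions per second
```

That is order 10^7-10^8 conversions per second in each direction, which is why the post puts the converter requirement at video speeds.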
 