Changing circuitry of analog computer *During* simulations?

The discussion centers on the potential for analog circuitry to change during simulations, particularly in the context of real-time simulations of biological organs. Participants clarify that while analog computers can modify component values during computations, changing connections mid-simulation introduces undefined conditions and stability concerns. The feasibility of using feedback loops to automate these changes based on prior results is debated, with caution advised to avoid self-excitation issues. The conversation also touches on the speed limitations of analog computers compared to digital systems, suggesting that digital solutions may ultimately be more efficient for complex simulations. Overall, the exploration of dynamic circuitry in analog computing remains largely theoretical, with practical applications needing careful consideration.
  • #31
f95toli said:
But not as fast as a digital computer. The Strong Church Thesis tells us that any analog computer can be efficiently simulated using a digital computer. Since digital circuits are much, much faster than analog circuits it follows that there is nothing to be gained by this method in terms of speed.

Note that the speed of analog computers is limited by the same factors that limit the speed of any other analog circuits; there will always be some time associated with transferring information around a circuit, and we do not have efficient memories or buffers to make this process easier.

I'm not familiar with the Strong Church Thesis but I can tell you that the fastest signal processing systems have always been analog systems. Most communications technologies (voiceband, RF, hard-disk read channel, fiber optics, etc) start out as analog because digital isn't fast enough and eventually turn digital once the technology catches up.

For a current example, the highest rate communications channels (for example, DDR4 and 40G+ ethernet) use primarily analog signal paths. An analog equalizer and decision circuit is much, much faster than an ADC followed by a DSP. 40G systems are moving to ADC-based architectures now but analog signal processing is still competitive. A digital computer could in principle "simulate" an analog equalizer, but I assure you it wouldn't be faster or more efficient.

I would say general purpose analog computing is done because digital computers are so good at "simulating the simulations". However, analog signal processing is still alive and well and mostly working in the communications and imaging spaces.
  • #32
analogdesign said:
A digital computer could in principle "simulate" an analog equalizer, but I assure you it wouldn't be faster or more efficient.
In that example the analogue signal is real time, so both the digital processor and the analogue system are operating in real time, both waiting for the next data. But since the continuously changing state variables in an analogue equaliser can only be stored in analogue components, usually capacitors, the analogue system can only ever equalise one channel, while a digital signal processor could equalise a great many channels in parallel, all at the same time. That is where the efficiency of digital systems arises. The digital system never needs to be calibrated as unstable components drift off value with heat and time.

analogdesign said:
I would say general purpose analog computing is done because digital computers are so good at "simulating the simulations". However, analog signal processing is still alive and well and mostly working in the communications and imaging spaces.
What do you mean by "is done"? Do you mean "is now dead and gone", or "is still used today"?

There is a big difference between building parallel arrays of analogue computers to solve non-linear differential equations, and using RF technology to receive and demodulate one signal channel. Signal processing using algorithms like digital IQ mixing, the DFT and digital filtering is rapidly replacing analogue signal processing. Look at the world of SDR where only the front-end down-converter is still analogue.

Analogue computing was replaced by digital processors many decades ago.
  • #33
Baluncore said:
In that example the analogue signal is real time, so both the digital processor and the analogue system are operating in real time, both waiting for the next data. But since the continuously changing state variables in an analogue equaliser can only be stored in analogue components, usually capacitors, the analogue system can only ever equalise one channel, while a digital signal processor could equalise a great many channels in parallel, all at the same time. That is where the efficiency of digital systems arises. The digital system never needs to be calibrated as unstable components drift off value with heat and time.

I do not agree with this. Typically a communication system migrates to digital implementation for functionality and cost reasons, not efficiency. The capacitor here is analogous to a register. Just as you can have multiple registers (in memory or physically) you can have arrays of capacitors to do multiple channels in parallel. I worked on an integrated analog signal processor (in this century) that had 10s of thousands of physical equalizer channels. It would have been impossible to process this volume of data digitally using FPGAs or even custom digital ASICs because of power and area constraints. Remember the power of an ADC goes up about 4X per bit (if it is noise limited), so keeping the data in the analog domain when feasible is an excellent power-saving technique (it creates its own problems, of course). In my experience, an analog solution is almost always lower power than a competing digital solution, but it loses out in development cost, design time, functionality, and ease of use and integration in a larger system. But it wins on power, and that is why analog solutions still find use in practice.

I completely agree that as time goes on applications that were once served by analog signal processing (out of necessity) migrate to digital processing for various reasons. However, analog techniques are continuously applied to new, faster, or more power-sensitive applications. For the foreseeable future I believe analog signal processing will still be of interest.
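The "4X per bit" rule of thumb mentioned above can be sketched numerically. This is illustrative only: the `relative_adc_power` function and its reference resolution are hypothetical, and the exponent assumes a purely noise-limited converter, as stated in the post.

```python
# Illustrative only: for a noise-limited ADC, halving the quantization noise
# voltage (one extra bit) requires ~4x lower sampled kT/C noise power,
# hence roughly 4x the power per added bit.
def relative_adc_power(bits, ref_bits=8):
    """Power relative to a reference resolution, assuming P ~ 4**bits."""
    return 4 ** (bits - ref_bits)

# Going from 8 to 12 bits costs ~256x the power under this simple model,
# which is why keeping wideband data in the analog domain can save power.
print(relative_adc_power(12))  # 256
```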

Baluncore said:
What do you mean by "is done"? Do you mean "is now dead and gone", or "is still used today"?

I wasn't clear enough here. I meant "dead and gone". As you said, general purpose analog computers (and hybrid computers) were for the most part gone by the early 1980s and by then only used in very specialized applications (such as aerodynamics simulation).

Baluncore said:
There is a big difference between building parallel arrays of analogue computers to solve non-linear differential equations, and using RF technology to receive and demodulate one signal channel. Signal processing using algorithms like digital IQ mixing, the DFT and digital filtering is rapidly replacing analogue signal processing. Look at the world of SDR where only the front-end down-converter is still analogue.

Analogue computing was replaced by digital processors many decades ago.

Indeed, although be sure you define what you mean by "one channel". Typically in a cellular basestation (where I have some design experience, and these days are implemented as SDRs) the intermediate frequency (or the baseband if a ZIF architecture is used) is sampled and an entire band of channels is digitized at once. So, the analog front end processes a great many (100s) of channels simultaneously. Lastly, I would submit that the front end of any DSP, namely the ADC, is itself a sophisticated analog signal processor, although how much analog processing it does depends strongly on the architecture (SAR vs Pipelined vs Sigma-Delta).

I guess my point in all of this was to show that rather sophisticated analog signal processing is still used in practical systems, although it is "under the hood". Certainly no one these days would use a general-purpose analog computer. It would make no sense except I suppose as a hobby project.
  • #34
I am very much grateful to all those who have replied to my questions.

I am still not very clear whether the time required to change the connections is significant when compared to the time required for the simulation.

Of course, the answer to the above question may be problem dependent. However, let us for now assume that I can come up with a good electrical analogy for the problem in my mind. Coming up with a suitable electrical analogy could itself be a research problem, and I would take up that research problem only if there is a realistic possibility of solving it. Hence the above question.

To be clearer, I may define my research problem as a set of coupled nonlinear partial differential equations involving three variables (that correspond to three dimensions). The equations have to be solved over a 3D region of arbitrary shape (the solution domain is a specified 3D region). Now mapping the equations, together with the geometry, onto an analog computer could itself be a serious research problem.

One of the methods of solving the above problem could be to make use of the finite element method (although it may be possible to solve the differential equations directly by building a suitable analog computer, without making use of the finite element method). The finite element method enables one to approximate the set of differential equations by a set of nonlinear simultaneous algebraic equations (but here each of the equations in the set of algebraic equations can contain hundreds of terms). Again, building an analog computer that can carry out this simulation could itself be a research problem.

Now coming to the details of the simulation, the geometry (the solution domain or solution region) can change during simulations. But let us assume that this change in geometry can be properly addressed by replacing the whole simulation with a set of simulations, where within each individual simulation the geometry does not change. Each individual simulation then corresponds to a particular network of connections, and when one switches from one individual simulation to the next, the network of connections changes. I am worried whether this switching would take a significant amount of time. For now let us assume that I can come up with a really good electrical analogy, so that the time required for the individual simulations is negligibly small (this, of course, is the idea behind choosing an analog computer over a digital computer). Still, we can expect the analog computer to be faster than a digital computer only if changing the connections (or switching) does not take a significant amount of time.

My goal is not to prove that an analog computer is faster than a digital computer or vice versa. I want to complete a whole simulation within one second. The whole simulation consists of a set of thirty individual simulations, each requiring its own configuration of connections. Present day digital computers (even clusters or supercomputers) cannot complete the thirty individual simulations within one second. Hence I am curious whether one could address the problem by building a suitable analog computer. I can see that building an analog computer would solve the problem if both of these hold good: 1) one should be able to find a very good electrical analogy; 2) the time required to change the network of connections should be sufficiently small. Assuming that a very good electrical analogy can be found, the success of the analog computer depends on whether one can change the network of connections within a sufficiently small time interval. Since finding a suitable electrical analogy would require a significant amount of work, I would involve myself in that task only if there is a possibility of changing the network of connections within a sufficiently small time interval.
 
  • #35
Kirana Kumara P said:
I am worried whether this switching would take a significant amount of time.

We cannot possibly answer that without knowing how long you consider significant. Let's say that a relay switches in 10 milliseconds. Is that significant?

Your description of the problem does not describe any dynamics at all. It is the dynamics (i.e. range of interest in the frequency domain) that determine the needs of switching. Indeed, your description sounds like the problem may be static, with no dynamics at all.

If you have a nonlinear 3D problem, the required granularity is also a critical parameter. If you represented it with finite elements, how many elements would you need?

The quality of answers you receive here depends strongly on the quality of the question description. You are asking for design advice. In engineering, requirements specifications always precede design.
  • #36
Kirana Kumara P said:
Still, we can expect the analog computer to be faster than a digital computer only if changing the connections (or switching) does not take a significant amount of time.
False. For the same bandwidth technology, a digital processor will produce results more than 100 times faster than any analogue computer, with or without changes of parameters. A digital signal never has to settle to a fixed value, it need only be clear of the transition threshold. An analogue circuit requires more time and a lower noise environment. It is unlikely that the errors of an analogue computer at speed will be much less than 0.4%, equivalent to 8 bits. We can do a great many 16 bit digital computations, (with less than 0.004% error), in the time it takes one analogue signal to settle.
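The settling argument above can be made concrete for a first-order analog stage, which settles exponentially: reaching a fraction ε of the final value takes t = τ·ln(1/ε) time constants. The sketch below simply evaluates this for the 0.4% and 0.004% figures quoted in the post (the function name is mine, not from the thread).

```python
import math

def settling_time_constants(error_fraction):
    """Number of time constants a first-order analog stage needs to
    settle to within the given fractional error: t/tau = ln(1/error)."""
    return math.log(1.0 / error_fraction)

# ~8-bit accuracy (0.4% error) needs ~5.5 time constants;
# ~16-bit accuracy (0.004% error) needs ~10.1 time constants.
print(round(settling_time_constants(0.004), 1))
print(round(settling_time_constants(0.00004), 1))
```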

Kirana Kumara P said:
One of the methods of solving the above problem could be to make use of the finite element method (although it may be possible to solve the differential equations directly by building a suitable analog computer, without making use of the finite element method).
A finite array of analogue computer elements is still FEM.

Kirana Kumara P said:
Of course, the answer to the above question may be problem dependent.
You are correct, there are many problem dependent possibilities. Without specifications, or a set of equations, anything is possible. It seems like you are trying faithfully to maintain a belief in analogue computers, while all the evidence suggests they have been extinct for quite some time.
I no longer expect you to present specifications and equations, as to do that would threaten your faithfully held belief.
  • #37
For the most part the passive components in an analog simulation would remain constant, but hypothetically there can be nonlinear components, components with initial conditions, or components whose value depends on another term (feedback). For example, a light bulb will change its resistance as it heats up, a capacitor can have an initial DC charge, or cell energy uptake can depend on available oxygen and other factors. Active nonlinear and time-dependent circuits can also be developed, but I have no idea how one would implement that for a biological system.

As far as I know, the analog computer is best at solving partial differential equations, not necessarily at simulating systems. If you can write a differential equation for this system, then you need to look at each term for nonlinearity and time dependence and endeavor to build a circuit that emulates that function. You are going to have to limit what you simulate; biological systems have many feedback terms, and depending on the result and resolution you want, it could be overwhelming.
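The light-bulb example above can be sketched as a tiny simulation: an RC charging circuit whose resistance rises as the element "heats". All component values and the heating law here are assumed for illustration, and forward Euler stands in for the analog circuit's own dynamics.

```python
# Minimal sketch (assumed values): RC charging where the resistor value
# depends on the state itself, a simple stand-in for a nonlinear element.
def simulate(v_in=10.0, C=1e-6, R0=100.0, alpha=0.5, dt=1e-6, steps=5000):
    v = 0.0  # capacitor voltage
    for _ in range(steps):
        # Resistance grows with "heating", modeled crudely as rising with v.
        R = R0 * (1.0 + alpha * (v / v_in))
        i = (v_in - v) / R          # current through the nonlinear resistor
        v += dt * i / C             # forward Euler update of dv/dt = i/C
    return v

print(simulate())  # approaches v_in, but more slowly than a linear RC would
```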
  • #38
anorlunda said:
We cannot possibly answer that without knowing how long you consider significant. Let's say that a relay switches in 10 milliseconds. Is that significant?

Your description of the problem does not describe any dynamics at all. It is the dynamics (i.e. range of interest in the frequency domain) that determine the needs of switching. Indeed, your description sounds like the problem may be static, with no dynamics at all.

If you have a nonlinear 3D problem, the required granularity is also a critical parameter. If you represented it with finite elements, how many elements would you need?

The quality of answers you receive here depends strongly on the quality of the question description. You are asking for design advice. In engineering, requirements specifications always precede design.

Ten milliseconds is insignificant for me (and hence okay for me). This is because I have about 30 milliseconds for completing one individual simulation and changing the connections/parameters once. If changing the connections/parameters can be completed in 10 milliseconds, I still have 20 milliseconds for the actual (individual) simulation. Since the very idea of going for an analog computer is to make the simulations run faster, it should be possible for the individual simulations to complete within the remaining 20 milliseconds (otherwise there is no point in going for an analog computer). I wish to know whether 10 milliseconds is a realistic estimate of the time needed to change the connections/parameters, or at least whether the 10 milliseconds figure is achievable for some problems (whether or not those problems are the one I have in mind).

Yes, one can assume that I am interested in solving a static problem. But I am interested in solving a series of static problems (30 static problems) within one second. I am not interested in the dynamics that may come into the picture when one switches from one static problem to the next. Let us assume that the system is designed well so that it is practically quite stable (something close to a critically damped system, where the oscillations before reaching the steady state are minimal). From a practical point of view, even if the unwanted dynamics and oscillations happen to be unavoidable, it may be reasonable to assume that it would not usually take more than 10 milliseconds for the oscillations to settle down. Hence it may be reasonable to assume that I can complete one individual (static) simulation within 30 milliseconds (including the time required to change the connections/parameters once), if the time required to change the parameters/connections is only about 10 milliseconds.

I would prefer using a minimum of about 500 elements. There is no harm in using more elements. But I want an individual simulation (plus one change of the connections/parameters) to be completed within 30 milliseconds.
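The timing budget described above can be checked with simple arithmetic (all figures are the ones stated in this and the previous posts):

```python
# Budget check: 30 reconfiguration-plus-solve cycles must fit in 1 second.
total_time_ms = 1000.0
n_simulations = 30
switch_time_ms = 10.0

slot_ms = total_time_ms / n_simulations  # time available per configuration
solve_ms = slot_ms - switch_time_ms      # time left for the actual solve

# ~33.3 ms per slot, of which ~23.3 ms remain after a 10 ms switch.
print(round(slot_ms, 1), round(solve_ms, 1))
```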
 
  • #39
Kirana Kumara P said:
I would prefer using a minimum of about 500 elements. There is no harm in using more elements. But I want an individual simulation (plus one change of the connections/parameters) to be completed within 30 milliseconds.

It was about 45 years ago when we quit using analog circuit analogies to solve equations, because digital solutions became so much better. Therefore, I think it is likely that the others in this thread who say that digital solutions would be faster are right, and you are wrong. However, we don't have enough details about your problem to say that conclusively. Therefore, I'll take you at your word that your analog circuits can do in 30 ms what supercomputers are not able to achieve. So what then?

It sounds like the speed to switch components between solutions is not the limiting factor. You can get solid state relays even faster than 10 ms to switch one resistor out and a different resistor in.

But with 500 or more nodes, you are talking about thousands, or tens of thousands of analog components. The time to design and fabricate them will be very long. You will likely need a digital computer anyhow to decide on the parameter changes and to issue commands to the thousands of relays. Heat dissipation will become a problem for this monster machine that will fill a whole room.

Most of all, reliability could make it all impractical. In my early days, we used analog computers in a project. We had a technician who showed up early every working day to replace the vacuum tubes that had failed the previous day. That took an hour each day, and we only had 10 amplifiers to worry about. Even with modern components, the MTBF of this machine might be less than an hour.

Switching speed aside, I am highly skeptical that your analog computer dream will be practical. If you asked me to collaborate with you on the project, I would flee the scene. It would be a career killer to go back 45 years in technology to solve a problem.

My best advice is that you should re-verify your understanding about what modern digital computers can accomplish in a short time. It is likely that you got it wrong the first time.
  • #40
The topology of integrators and summing amplifier circuits used for the solution of differential equations in analogue computers is the same as was used in analogue state variable filters. Those filters have now been replaced by very low power digital filters throughout signal processing technology.
https://en.wikipedia.org/wiki/State_variable_filter
https://en.wikipedia.org/wiki/Digital_filter

It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array
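The two-integrator topology mentioned above has a well-known discrete-time counterpart. Below is a sketch of a Chamberlin-style digital state-variable filter, the kind of structure that replaces the analogue integrator loop; the function name, sample rate, and tuning values are mine, chosen only for illustration.

```python
import math

def svf_step(x, state, f, q):
    """One sample of a digital state-variable filter (two-integrator loop).
    Returns (lowpass, bandpass, highpass) outputs; state holds [lp, bp]."""
    lp, bp = state
    lp = lp + f * bp            # first integrator
    hp = x - lp - q * bp        # summing node
    bp = bp + f * hp            # second integrator
    state[0], state[1] = lp, bp
    return lp, bp, hp

# Example: feed a DC input; the lowpass output settles to the input value,
# just as the analogue state-variable filter would.
state = [0.0, 0.0]
f = 2 * math.sin(math.pi * 1000 / 48000)  # tuning for ~1 kHz at 48 kHz
for _ in range(2000):
    lp, bp, hp = svf_step(1.0, state, f, q=1.0)
print(round(lp, 3))  # ≈ 1.0
```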
  • #41
Baluncore said:
False. For the same bandwidth technology, a digital processor will produce results more than 100 times faster than any analogue computer, with or without changes of parameters. A digital signal never has to settle to a fixed value, it need only be clear of the transition threshold. An analogue circuit requires more time and a lower noise environment. It is unlikely that the errors of an analogue computer at speed will be much less than 0.4%, equivalent to 8 bits. We can do a great many 16 bit digital computations, (with less than 0.004% error), in the time it takes one analogue signal to settle.

A finite array of analogue computer elements is still FEM.

You are correct, there are many problem dependent possibilities. Without specifications, or a set of equations, anything is possible. It seems like you are trying faithfully to maintain a belief in analogue computers, while all the evidence suggests they have been extinct for quite some time.
I no longer expect you to present specifications and equations, as to do that would threaten your faithfully held belief.

When one uses the word "simulation", it can be one of these:

1) Simulation of physics using a digital computer
2) Simulation of physics using an analog computer
3) Simulation of the analog computer of point 2) above, on a digital computer

One may note that points 1) to 3) are not one and the same (although the results obtained using the three methods are expected to be practically the same). When I talk about analog simulations, it always refers to 2) above (not 3) above). I am interested in simulating certain physics, not a certain analog computer. How one represents the physics depends on whether one uses a digital computer or an analog computer.

To give a simple example, if someone wants to find the circumference of a circle using a digital computer, he can just use the well known formula that calculates the circumference from the radius. An analog computer, on the other hand, is something like actually drawing a circle with the specified radius on the ground and then actually measuring the circumference using a thread. Point 3) above is like drawing the circle on a computer screen using an equation that describes the circle, and then "measuring the circumference" by counting the number of pixels on the circumference and knowing the distance between individual pixels. Hence it is not fair to simply compare the speed of digital computers to analog computers, because the problems to be solved are themselves different (although the physics that needs to be simulated is the same, the analog model is different from the digital model).

Hence I do not understand your point that digital simulation should be at least 100 times faster than the corresponding analog simulation. I believe that the success of analog computers depends on the possibility of finding a good electrical analogy. Of course, when it comes to problems like the simulation of biological organs, it may turn out to be a very complicated task and the risk of failure may be high, but this makes the problem interesting, challenging, and important. And no one would take up the challenge (and risk) if there were no possibility of being successful.

These three points are still of concern to me (thanks to the many replies above, which helped me gain insight into them):
1) Time consumed for actively changing the connections and/or parameters
2) Time required for the oscillations to settle
3) Unpredictable and uncontrolled variation in parameters because of heating and drift over time

Constructing arrays of analog computing elements resembles the Finite Difference Method more than the Finite Element Method. The Finite Difference Method carries out simulations by replacing differential equations with difference equations.
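The point above can be illustrated with a toy case: an array of identical "nodes", each holding one value, with the second derivative replaced by a difference of neighbours. The sketch below solves u'' = -π²·sin(πx) on [0, 1] with u(0) = u(1) = 0 by Jacobi relaxation (exact solution sin(πx)); the grid size and iteration count are arbitrary choices for illustration.

```python
import math

# 1D finite-difference grid: each node i holds u[i], and u'' is replaced by
# (u[i-1] - 2*u[i] + u[i+1]) / h**2, just as an array of analog elements
# couples each node only to its neighbours.
n = 21
h = 1.0 / (n - 1)
x = [i * h for i in range(n)]
f = [-math.pi**2 * math.sin(math.pi * xi) for xi in x]  # right-hand side

u = [0.0] * n
for _ in range(5000):  # Jacobi iteration relaxes toward the solution
    u = [0.0] + [0.5 * (u[i-1] + u[i+1] - h*h*f[i]) for i in range(1, n-1)] + [0.0]

# Midpoint value should be close to sin(pi/2) = 1, up to discretisation error.
print(round(u[n // 2], 2))
```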

There is a reason for not presenting the nonlinear differential equations. Available literature does not give the final form (which is required for our purpose) of these differential equations. This is like first defining a variable "a" as a function of the variables "b", "c", "d", then defining a variable "e" as a function of "a", then defining a variable "f" as a function of "e", then writing down the nonlinear differential equation in terms of "f". One may need to use a digital computer to get the final form of the differential equations. Moreover, some of the parameters (coefficients) in the differential equations will not be known beforehand. For example, the coefficients may depend on the position of the mouse pointer on the screen that is connected to a digital computer; the mouse pointer would be actively controlled by a human user (nobody can predict beforehand how the human user is going to move the mouse pointer); hence the digital computer would note down the position of the mouse pointer, then calculate the coefficients for the differential equations. Then the digital computer can issue commands (at appropriate times) to the analog computer to change the connections/parameters of the analog computer.

If I am going to use the Finite Element Method, I can tell the form of the final set of equations to be solved using an analog computer; that is, as I have already said, I need to solve just a set of simultaneous nonlinear algebraic equations (not nonlinear differential equations). As noted in the previous paragraph, I can get the numerical values of the coefficients only during run-time. As mentioned already, I have about 5000 equations in the set of simultaneous nonlinear algebraic equations (but if that number is too high, I am okay with 1000 equations also).
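For concreteness, the kind of solve described above, simultaneous nonlinear algebraic equations with coefficients supplied at run-time, can be sketched with Newton's method on a tiny system. Two equations stand in for the ~5000 mentioned; the function names, the example equations, and the coefficients a and b are all hypothetical.

```python
# Newton's method for a 2x2 nonlinear system F(x) = 0, solving the
# linearised step J * dx = F by Cramer's rule at each iteration.
def newton_2x2(F, J, x, iters=20):
    for _ in range(iters):
        f1, f2 = F(x)
        j11, j12, j21, j22 = J(x)
        det = j11 * j22 - j12 * j21
        dx1 = (f1 * j22 - f2 * j12) / det
        dx2 = (j11 * f2 - j21 * f1) / det
        x = (x[0] - dx1, x[1] - dx2)
    return x

a, b = 2.0, 3.0                            # "run-time" coefficients
F = lambda x: (x[0]**2 + x[1]**2 - a**2,   # circle of radius a
               x[0] - b * x[1])            # line x = b*y
J = lambda x: (2*x[0], 2*x[1], 1.0, -b)    # Jacobian entries, row-major

x = newton_2x2(F, J, (1.0, 1.0))
print(round(x[0], 3), round(x[1], 3))      # intersection in the first quadrant
```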

Finally, speed is more important for me than accuracy. I am okay with about 5% error also.
 
  • #42
Baluncore said:
It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array

While using FPGAs, is it possible to change the connections and parameters *During* simulations?
 
  • #43
anorlunda said:
It was about 45 years ago when we quit using analog circuit analogies to solve equations, because digital solutions became so much better. Therefore, I think it is likely that the others in this thread who say that digital solutions would be faster are right, and you are wrong. However, we don't have enough details about your problem to say that conclusively. Therefore, I'll take you at your word that your analog circuits can do in 30 ms what supercomputers are not able to achieve. So what then?

It sounds like the speed to switch components between solutions is not the limiting factor. You can get solid state relays even faster than 10 ms to switch one resistor out and a different resistor in.

But with 500 or more nodes, you are talking about thousands, or tens of thousands of analog components. The time to design and fabricate them will be very long. You will likely need a digital computer anyhow to decide on the parameter changes and to issue commands to the thousands of relays. Heat dissipation will become a problem for this monster machine that will fill a whole room.

Most of all, reliability could make it all impractical. In my early days, we used analog computers in a project. We had a technician who showed up early every working day to replace the vacuum tubes that had failed the previous day. That took an hour each day, and we only had 10 amplifiers to worry about. Even with modern components, the MTBF of this machine might be less than an hour.

Switching speed aside, I am highly skeptical that your analog computer dream will be practical. If you asked me to collaborate with you on the project, I would flee the scene. It would be a career killer to go back 45 years in technology to solve a problem.

My best advice is that you should re-verify your understanding about what modern digital computers can accomplish in a short time. It is likely that you got it wrong the first time.

The relevant literature indicates that no one has been successful in solving the problem using present day digital computers (the major part of the problem involves solving a set of 5000 nonlinear simultaneous algebraic equations thirty times a second). I am clear about this.

A modern supercomputer (which is a digital computer) may complete within one second a simulation that takes one hour on a desktop computer. However, it may not be able to complete within 30 milliseconds a simulation that takes one minute on a desktop computer. This is because of the time required for communication between processors in a supercomputer. This can be avoided only if at least one of the following holds good in the future:
1) We will have individual digital processors which are very fast (like 100 GHz processors)
2) Time required for communication between the processors in a supercomputer is reduced to a great extent.
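The communication argument above can be illustrated with a rough back-of-the-envelope model. All figures here are assumed for illustration (iteration count, per-iteration compute time, and interconnect latency are not from the thread): the point is simply that per-iteration communication can consume a hard real-time budget.

```python
# Toy model: an iterative solver that needs one global data exchange per
# iteration.  Total time = iterations * (compute + network latency).
def cluster_solve_time_ms(iterations, compute_us_per_iter, network_latency_us):
    return iterations * (compute_us_per_iter + network_latency_us) / 1000.0

# 2000 iterations, 5 us of compute each, 10 us interconnect round trip:
t = cluster_solve_time_ms(2000, 5.0, 10.0)
print(t)  # 30.0 ms -- the whole real-time budget, two-thirds of it communication
```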

Coming to analog computers, earlier I was worried about:
1) switching speed
2) time required for the signals to settle
3) unwanted change in the parameters because of heating and drift over time

From your reply above, I have to add one more important point to the above list:
4) reliability

Coming to the practical difficulties of analog computing, the following points also need to be noted:
1) the time, effort, and expertise required to design and fabricate one
2) the difficulty of finding collaborators
3) the space requirement and cooling arrangement

Still, I have not lost all hope in the analog computer, since I am ready to sacrifice some accuracy as a trade-off for speed (even about 5% error may be okay for me). Assuming that switching speed is not a major concern, this might take care of the time required for the signals to settle, and of unwanted parameter changes due to heating and drift over time. Reliability can come into the picture after the computer is built. However, realizing the analog computer would likely be a long and tedious task, and success is not guaranteed. If successful, one may be able to say that the analog computer has done something which even a modern supercomputer has not been able to achieve so far.
 
  • #44
Baluncore said:
A finite array of analogue computer elements is still FEM.

Yes, it could be FEM (or it could be another technique, e.g., the Finite Difference Method).

One of my earlier replies could be read as implying that I do not agree with the statement "a finite array of analog computer elements is still FEM". Hence I am writing this reply to clarify that I do not contradict that statement. But it need not be FEM alone (in fact, the Finite Difference Method bears a closer resemblance).
 
  • #45
I wish to ask a new question here.

Suppose we have an analog circuit (true hardware, not a digital-computer simulation of hardware). If we give it some inputs, the circuit delivers the output (result) within a certain time interval (the solution time). Let us call this "analog simulation".

Next, let us assume that we simulate the same circuit on a digital computer. Here we are not concerned with the physics behind the construction of the analog circuit, and we will not consider the analytical solution of the problem at hand. In fact, we will not even bother about the original problem; we will just concentrate on the analog circuit and simulate it on a digital computer. Let us call this "digital simulation of analog circuit".

I wish to know which of the two simulations is faster. Or does whether "analog simulation" or "digital simulation of analog circuit" is faster depend on the problem (circuit) at hand?

One may note that both of the above simulations are different from a purely "digital simulation", which does not require any analog circuit at all (real or virtual).
 
  • #46
Kirana Kumara P said:
I wish to know which of the two simulations is faster. Or does whether "analog simulation" or "digital simulation of analog circuit" is faster depend on the problem (circuit) at hand?
An analogue processor solves the problem as a continuous function. The digital processor takes discrete steps to reach the solution. To satisfy the Nyquist-Shannon sampling theorem, the rate of samples taken by the digital processor will be more than twice the highest significant harmonic component in the signal. That same rule applies to digital filters. It can be shown that if the sampling theorem is obeyed, the discrete digital and the continuous analogue are solving the same system of equations. Fourier would agree.
I expect a digital processor to produce results about 100 times faster than an analogue processor.

Kirana Kumara P said:
One may note that both the above two simulations are different from the purely "digital simulation" which does not require any analog circuit at all (real or virtual).
There should be no difference. Both digital algorithms will be an analogue of the organ being simulated. If they are different, one must be a worse approximation than the other.

You are worrying about irrelevant things. Digital processors will now always beat analogue processors. Your faith in an array of analogue processors looks like an example of “Steampunk”.
 
  • Like
Likes Kirana Kumara P
  • #47
Baluncore said:
An analogue processor solves the problem as a continuous function. The digital processor takes discrete steps to reach the solution. To satisfy the Nyquist-Shannon sampling theorem, the rate of samples taken by the digital processor will be more than twice the highest significant harmonic component in the signal. That same rule applies to digital filters. It can be shown that if the sampling theorem is obeyed, the discrete digital and the continuous analogue are solving the same system of equations. Fourier would agree.
I expect a digital processor to produce results about 100 times faster than an analogue processor.
You should not use the Nyquist theorem for real-time processes. Nyquist says that a sampling rate of twice the frequency would allow accurate phase and gain results given an infinite sample to analyse. (There are more complicated versions that apply to a finite sample length.) That is not relevant to real-time simulations. In reality, you would want at least 20 samples per cycle to get real-time results with marginally satisfactory phase and gain errors. It is easy to estimate what the phase and gain errors can be for any sampled signal. For high frequency real-time processes, I believe that analog circuits still have the advantage when applied to a complicated signal network. (But I have never worked with high-frequency systems, so I don't know what issues might create problems in analog simulations.)
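To put rough numbers on the samples-per-cycle rule of thumb, here is a back-of-envelope sketch. It models the sampler as a zero-order hold, whose effective delay of half a sample period gives a phase lag of 180/n degrees at the signal frequency, with a gain droop of sinc(1/n); these are standard textbook approximations, not measurements:

```python
import math

def zoh_errors(n_samples_per_cycle):
    """Phase lag (degrees) and gain at the signal frequency for a
    zero-order hold sampled n times per signal cycle."""
    phase_deg = 180.0 / n_samples_per_cycle   # delay of T/2, in degrees
    x = math.pi / n_samples_per_cycle
    gain = math.sin(x) / x                    # |sinc| droop of the hold
    return phase_deg, gain

for n in (2, 4, 10, 20):
    p, g = zoh_errors(n)
    print(f"{n:2d} samples/cycle: {p:5.1f} deg lag, gain {g:.3f}")
```

At the Nyquist limit (2 samples per cycle) the hold alone contributes 90 degrees of lag and a gain of about 0.64, while at 20 samples per cycle the lag is 9 degrees with under 0.5% gain error, which is roughly why the 20-samples-per-cycle figure appears in real-time control work.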
 
Last edited:
  • Like
Likes Kirana Kumara P
  • #48
FactChecker said:
You should not use the Nyquist theorem for real-time processes. Nyquist says that a sampling rate of twice the frequency would allow accurate phase and gain results given an infinite sample to analyse. (There are more complicated versions that apply to a finite sample length.) That is not relevant to real-time simulations. In reality, you would want at least 20 samples per cycle to get real-time results with marginally satisfactory phase and gain errors. It is easy to estimate what the phase and gain errors can be for any sampled signal.
I did not refer to “samples per cycle” but to twice the highest significant harmonic component in the signal. The frequency components in the simulation are very low, which is why a view every 30 msec makes an acceptably smooth image. A simulation over time is a continuous process; extracting a snapshot of the state every 30 msec does not make it an isolated, fixed, short-length sample.
The OP has referred to 5% error in post #43, “(even about 5% error may be okay for me)”.

FactChecker said:
(There are more complicated versions that apply to a finite sample length.)
Can you please give me a reference to such a version.
 
  • Like
Likes Kirana Kumara P
  • #49
Baluncore said:
I did not refer to “samples per cycle” but to twice the highest significant harmonic component in the signal.
Yes. We are both talking about the same thing.
Baluncore said:
The frequency components in the simulation are very low, which is why a view every 30 msec makes an acceptably smooth image. A simulation over time is a continuous process; extracting a snapshot of the state every 30 msec does not make it an isolated, fixed, short-length sample.
But still quite different from analyzing a long series. The rule of thumb of minimum 20 samples per cycle is a good starting point.

Baluncore said:
Can you please give me a reference to such a version.
Sorry, I do not have a reference at this time. I ran into one long ago, but I don't know where it is now.

ADDITIONAL NOTE (CORRECTION?): The rule of thumb of a minimum of 20 samples per cycle was relevant to control-law design. There the problem is to respond fast enough and accurately enough to control a process through feedback. That might be different from the problem of simulating a system without trying to control it. I don't know about that situation.
 
Last edited:
  • Like
Likes Kirana Kumara P
  • #50
Baluncore said:
There should be no difference. Both digital algorithms will be an analogue of the organ being simulated. If they are different, one must be a worse approximation than the other.

A digital solution methodology that employs a worse approximation could be slower than a digital solution methodology that employs a better approximation. Coming back to the problem of finding the circumference of a circle using a digital computer: it may be faster to calculate the more accurate solution (2*pi*radius) than to draw a circle of the given radius on the screen using a circle-generation algorithm and then "measure" the circumference by counting the pixels on it and the distances between them.
 
  • #51
Coming back to the problem of the simulation of biological organs in real time, let me give a few more details about the problem I have in mind. The organs I have in mind are the liver and the kidney. Let us assume that we have the geometry of the organs in hand (as CAD files, say). I am not bothered about the inner details of the organs; they may be assumed to be homogeneous and isotropic.

When subjected to specified displacements at certain points on the surface, the entire surface of the organ undergoes deformation. Large deformations are allowed. The material behaviour is nonlinear, and the material may be assumed to be hyperelastic; the hyperelastic material properties are known. Over the course of time, the geometry of the organ may change, e.g., because of cutting. The mass of the organ may be ignored. Dynamics and inertia effects may also be ignored.

My problem is to find the displacement of the entire surface of the organ when displacements at only a few points on the surface are known; this computation should be completed within 30 milliseconds (or, we should be able to complete about 30 such computations within a second). There can be a slight change in the geometry (because of cutting, say) and in the boundary conditions after one computation finishes and before the next starts.

The literature is clear that, as of now, nobody has been successful in solving the above problem using a digital computer with reasonably good granularity, i.e., granularity that is usable for practical purposes (of course, many have made simplifying assumptions and thus claimed a solution to the problem).
 
  • #52
Kirana Kumara P said:
A digital solution methodology that employs a worse approximation could be slower than a digital solution methodology that employs a better approximation.
In which case only a fool would consider using the worst and slowest of the available solutions.
There are a huge number of bad and slow solutions. Any fixation on those failures would represent a lack of, or a misapplication of intelligence.

Kirana Kumara P said:
Coming back to the problem of finding the circumference of a circle using a digital computer: it may be faster to calculate the more accurate solution (2*pi*radius) than to draw a circle of the given radius on the screen using a circle-generation algorithm and then "measure" the circumference by counting the pixels on it and the distances between them.
That is not really a good example or demonstration of anything. Also, the number of pixels around a circle is not π * the diameter in pixels. I believe it is actually closer to 2*√2 multiplied by the diameter of the circle in pixels.
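For reference, the 2*√2 estimate can be checked with the classic midpoint circle algorithm. This is a quick sketch, and the exact count varies slightly between rasteriser variants:

```python
import math

def circle_pixels(r):
    """Midpoint circle algorithm: the set of lattice pixels on a
    rasterised circle of integer radius r centred at the origin."""
    pts = set()
    x, y, d = 0, r, 1 - r
    while x <= y:
        # plot the two octant points and their reflections
        for px, py in ((x, y), (y, x)):
            pts |= {(px, py), (-px, py), (px, -py), (-px, -py)}
        if d < 0:
            d += 2 * x + 3
        else:
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return pts

r = 100
ratio = len(circle_pixels(r)) / (2 * r)   # pixels per unit of diameter
print(f"{ratio:.2f} (2*sqrt(2) = {2*math.sqrt(2):.2f}, pi = {math.pi:.2f})")
```

The count per unit of diameter comes out near 2*√2 ≈ 2.83 rather than π ≈ 3.14, supporting the claim above.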
 
  • Like
Likes Kirana Kumara P
  • #53
Baluncore said:
In which case only a fool would consider using the worst and slowest of the available solutions.
There are a huge number of bad and slow solutions. Any fixation on those failures would represent a lack of, or a misapplication of, intelligence.

That is not really a good example or demonstration of anything. Also, the number of pixels around a circle is not π * the diameter in pixels. I believe it is actually closer to 2*√2 multiplied by the diameter of the circle in pixels.

While solving real-world problems (represented in the form of some equations, say), it may not be possible to find a perfect electrical analogy; one may have to use an approximately analogous circuit. Now there is a difference between obtaining solutions by simulating this approximately analogous circuit on a digital computer, obtaining solutions by using the approximate analog circuit itself (circuit in the form of hardware), and obtaining solutions by directly solving the equations using a digital computer without making use of any analogy (or any circuit).

With reference to the digital simulation of analog circuits, I would not use the formula (pi*diameter). I assume that the circumference may be approximated by a polygon, where the length of each side is the distance between the centres of the two pixels that make up its two ends.
 
  • #54
It may be useful to have an analog computer that can solve a set of simultaneous nonlinear algebraic equations, even if it does nothing else. As the first step, having an analog computer that can solve a set of simultaneous linear algebraic equations may also be useful. It is also okay if the total number of equations in the set cannot be altered.

The above analog computer (which may be thought of as a module) may be coupled to a digital computer. When the digital computer comes across such a set of equations, it can hand them to the analog computer to solve.

But nobody would buy the module if it fails frequently. Moreover, the module would have value only if one can prove that using it speeds up the computations (when compared to solutions carried out using digital processors alone).
 
  • #55
I am extremely grateful to everyone who has replied to my questions. Physics Forums has already given me more than I expected (both in terms of quality and quantity).
 
  • #56
Kirana Kumara P said:
Now there is a difference between obtaining solutions by simulating this approximately analogous circuit on a digital computer, obtaining solutions by using the approximate analog circuit itself (circuit in the form of hardware), and obtaining solutions by directly solving the equations using a digital computer without making use of any analogy (or any circuit).
It is all very well hypothesising that there are a huge number of possible solutions and that some are better than others. If it can be done with analogue components then it can be done faster digitally. If it can't be done with analogue components then there is no issue, digital will be the only way to solve it.

Kirana Kumara P said:
I assume that the circumference may be approximated by a polygon, where the length of each side is the distance between the centres of the two pixels that make up its two ends.
As I wrote, it is not a good example. Where pixels are arranged in a rectangular matrix, it is the Manhattan distance, not the linear distance, that decides the number of pixels needed to draw the side of a polygon.

Kirana Kumara P said:
It may be useful to have an analog computer that can solve a set of simultaneous nonlinear algebraic equations, even if it does nothing else. As the first step, having an analog computer that can solve a set of simultaneous linear algebraic equations may also be useful. It is also okay if the total number of equations in the set cannot be altered.
Solving sets of simultaneous linear algebraic equations is now done efficiently using digital computers. Going back to analogue computers would be a waste of time and resources.

Any technique “may be useful". Solving real problems is actually better than "may be useful".
 
  • Like
Likes Kirana Kumara P
  • #57
Baluncore said:
If it can be done with analogue components then it can be done faster digitally. If it can't be done with analogue components then there is no issue, digital will be the only way to solve it.
What can we do if it can't be done digitally? I hope analog computing might come to the rescue.

Baluncore said:
As I wrote, it is not a good example. Where pixels are arranged in a rectangular matrix, it is the Manhattan distance not the linear distance that decides the number of pixels needed to draw the side of a polygon.
I gave the example just to explain the concept; finer details are not important.

Baluncore said:
Solving sets of simultaneous linear algebraic equations is now done efficiently using digital computers. Going back to analogue computers would be a waste of time and resources.
But not as efficiently as I want it to be (there is a difference between efficiency and wall-clock time).
 
  • #58
Kirana Kumara P said:
What can we do if it can't be done digitally? I hope analog computing might come to the rescue.
You have got it backwards. Analogue computing was once the “maiden in distress”, who died. Digital computing is still the “knight in shining armour”.

Kirana Kumara P said:
But not as efficiently as I want it to be (there is a difference between efficiency and wall-clock time).
Everyone wants better algorithms.

You are thinking in circles trying to justify your misplaced faith in analogue computers. Go ahead, write a representative set of equations, design an analogue processor to solve them, then build a prototype and test it.
 
  • Like
Likes Kirana Kumara P and anorlunda
  • #59
There has been interest in large-scale analog chips to perform functions that are not practical on digital computers (e.g., http://www.news.gatech.edu/2016/03/02/configurable-analog-chip-computes-1000-times-less-power-digital). One area of possible application is neural networks. I have seen problems in the last 5 years that cannot realistically be done on digital computers even in non-real-time batch mode.

As others have said, it has always been possible to include real-time switching in analog simulations, changing the circuit as the simulation runs. That switching can happen in nanoseconds. However, it would be necessary to have all the options programmed ahead of time and just switch between signal paths. Variable gains and parameters are always possible. It is easy to have variable gains that zero out one signal path and turn another on.

Although a lot is possible with old-fashioned analog computers, they have great practical disadvantages compared to digital computers. I would not consider them unless there were no possibility of solving the problem digitally. It sounds like you have already tried digital computers on your problem and cannot do it in real time. I am not really familiar with the current work on large-scale analog chips. Apparently they require very little power (and therefore little cooling).
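The variable-gain path switching described above can be mimicked digitally as a cross-fade between two always-computed signal paths. This is a toy sketch; the function names are illustrative, not from any real library:

```python
# Two signal paths are always evaluated; a gain g in [0, 1] cross-fades
# between them, so g = 0 or g = 1 switches a path fully off with no
# rewiring, mirroring the analog variable-gain trick.

def make_switchable(path_a, path_b):
    def process(x, g):
        return g * path_a(x) + (1.0 - g) * path_b(x)
    return process

proc = make_switchable(lambda x: 2.0 * x,      # path A: gain of 2
                       lambda x: -x)           # path B: inverter

print(proc(1.0, 1.0))   # only path A: 2.0
print(proc(1.0, 0.0))   # only path B: -1.0
print(proc(1.0, 0.5))   # halfway blend: 0.5
```

In the analog case the same effect is obtained with multiplying DACs or variable-gain amplifiers in each path, which is why the switching can happen continuously and without undefined intermediate states.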
 
Last edited:
  • Like
Likes Kirana Kumara P
  • #60
I think that this thread has been hindered by differing definitions of analog computer.

Many of us think of a general purpose machine based on operational amplifiers. We arrange feedback around those amplifiers to represent our equations. The amplifiers have limitations including settling time.

A broader view of the term includes electric circuits whose equations are direct analogs of the study system. Their behavior is what it is, instantaneously (except for near-light-speed propagation). As I said in #18, a simple resistor is an analog computer useful for solving Ohm's Law. The challenge is to find a nonlinear circuit which really is analogous. There have been many analog/hybrid attempts to do that with neuron analogies.

The OP has not demonstrated here that he understands circuits well enough to understand the difference.
 
  • Like
Likes Kirana Kumara P
