Changing the circuitry of an analog computer DURING simulations?

AI Thread Summary
The discussion centers on the potential for analog circuitry to change during simulations, particularly in the context of real-time simulations of biological organs. Participants clarify that while analog computers can modify component values during computations, changing connections mid-simulation introduces undefined conditions and stability concerns. The feasibility of using feedback loops to automate these changes based on prior results is debated, with caution advised to avoid self-excitation issues. The conversation also touches on the speed limitations of analog computers compared to digital systems, suggesting that digital solutions may ultimately be more efficient for complex simulations. Overall, the exploration of dynamic circuitry in analog computing remains largely theoretical, with practical applications needing careful consideration.
Kirana Kumara P
I am wondering whether it would be possible for the analog circuitry itself to change DURING the time an analog computer does computations.

Beyond the above question: is it possible for the circuitry itself to change DURING simulations, where this change in the circuitry is decided by the result of the calculations from the previous step (while solving a problem)?

I am asking the above question because I have not come across any example where an analog computer changes its circuitry (connections) DURING simulations. In every example I know of, the circuitry (connections) is constructed (wired) before the computations (simulations) start, and once a computation starts, the circuitry cannot change during that particular run.

Of course, I know that the circuitry can be re-wired to carry out some other computation.

And of course, the terms "circuitry", "connections", and "analogue computer" above may be interpreted in a very general sense. I am aware that "re-wiring" need not necessarily involve manually re-wiring the circuitry ("re-wiring" may be accomplished using software tools).

I believe that if it is at all possible for the circuitry itself to change DURING simulations, it may be advantageous to use analog computers instead of digital computers for solving certain problems.

I hope I am clear, and I am looking forward to an answer.

Thanks and best regards,
Kirana Kumara P
 
Of course, you can modify any component value (amplifier, integrator, ...) DURING a simulation with an ANALOG computer. This is not a problem because an analog computer is nothing other than an electronic analog circuit consisting of amplifiers, analog summing circuits, integrators, voltage dividers, and so on.
Furthermore, there are block-based DIGITAL simulation packages (e.g. VISSIM) which also allow parameter changes during simulations in the time domain.
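As a rough editorial illustration of this point (not taken from VisSim or any particular package), here is a minimal Python sketch of a block-style time-domain run in which one coefficient is re-dialled partway through; the equation and all values are placeholders.

```python
# Editorial sketch (not VisSim): a block-style time-domain run of dx/dt = -k*x
# in which the coefficient k (the "pot setting") is changed halfway through.

def simulate(t_end=2.0, dt=1e-3):
    x = 1.0            # integrator initial condition
    k = 1.0            # initial coefficient
    t = 0.0
    while t < t_end:
        if t >= 1.0:   # halfway through the run, re-dial the coefficient
            k = 5.0    # the integration simply continues with the new value
        x += dt * (-k * x)   # forward-Euler step of the integrator block
        t += dt
    return x

if __name__ == "__main__":
    print(f"x at end of run: {simulate():.4e}")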
 
  • Like
Likes Kirana Kumara P
I agree with @LvW; of course they can.

Even ancient analog computers from the 50s and 60s (programmed with plug cords) could have switches and relays that switch the equations being solved.
 
  • Like
Likes Kirana Kumara P
LvW said:
Of course, you can modify any component value (amplifier, integrator, ...) DURING a simulation with an ANALOG computer. This is not a problem because an analog computer is nothing other than an electronic analog circuit consisting of amplifiers, analog summing circuits, integrators, voltage dividers, and so on.
Furthermore, there are block-based DIGITAL simulation packages (e.g. VISSIM) which also allow parameter changes during simulations in the time domain.

Thank you for your reply. I wish to know the answers to two more points: 1) Can the "connections" also change DURING a simulation? 2) Can the change in a component value and the change in the "connections" be automatically calculated (decided) based on a result that has already been computed (with this result computed DURING the SAME simulation)?
 
(1) Yes, why not? However, in the time slot between the two connection states you have undefined conditions, of course. Moreover, this seems meaningful only if you have a continuous input signal (and not a step).
(2) This means that you will have an additional control loop which connects the output (decision maker) with one part of the circuit. In such a case, you must be careful to avoid self-excitation (stability problems).
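A hedged sketch of point (2), in Python rather than hardware: an output-driven "decision maker" reconfigures the simulated circuit, and a hysteresis band keeps the switching itself from self-exciting (chattering between the two configurations). All values and the toy dynamics are illustrative, not from the thread.

```python
# Toy first-order system whose "wiring" is switched by a decision maker that
# watches the computed result; hi/lo form a hysteresis band on the output.

def run(hi, lo, t_end=5.0, dt=1e-3):
    x, config, switches, t = 0.0, "A", 0, 0.0
    while t < t_end:
        # configuration A drives x toward 1, configuration B drives it toward 0
        target = 1.0 if config == "A" else 0.0
        x += dt * (target - x)
        # decision maker: reconfigure based on the result computed so far
        if config == "A" and x > hi:
            config, switches = "B", switches + 1
        elif config == "B" and x < lo:
            config, switches = "A", switches + 1
        t += dt
    return switches

if __name__ == "__main__":
    print("with hysteresis   :", run(hi=0.8, lo=0.2), "reconfigurations")
    print("without hysteresis:", run(hi=0.5, lo=0.5), "reconfigurations")
```

With the band removed (hi equal to lo) the configuration chatters on nearly every step, which is the self-excitation problem in miniature; the band keeps the switching rate bounded.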
 
  • Like
Likes Kirana Kumara P
In the early sixties I was a wireman on an assembly line building hybrid computers: analog computers controlled by a programmed digital one. They were destined for Cape Canaveral.
One of the assemblies I made was an array of multiturn potentiometers with little servomotors to turn the knobs. Of course, today you'd use digital potentiometers. That'd do part of what you propose.

Digital computers soon afterward got fast enough to replace most analog ones.

Doesn't a simple diode or analog comparator do what you suggest?
What's your application?
 
  • Like
Likes Kirana Kumara P
jim hardy said:
In the early sixties I was a wireman on an assembly line building hybrid computers: analog computers controlled by a programmed digital one. They were destined for Cape Canaveral.
One of the assemblies I made was an array of multiturn potentiometers with little servomotors to turn the knobs. Of course, today you'd use digital potentiometers. That'd do part of what you propose.

Digital computers soon afterward got fast enough to replace most analog ones.

Doesn't a simple diode or analog comparator do what you suggest?
What's your application?

Thank you for your reply. In fact I am a mechanical engineer and do not know much about electrical circuits. I knew that an analog computer can be programmed using a digital computer (thus making it, in fact, a hybrid computer). But I was under the impression that this "programming"/"building" of the circuitry is possible only before a simulation starts; I thought that once a simulation starts (in other words, once an analog computer starts solving one particular problem), neither the analog "circuitry" (by "circuitry" I mean "connections") nor any of the parameters of the components in the circuitry can be changed. But as per the above replies from LvW, one can change the connections as well as the parameters DURING a particular simulation.

My (speculative) application is in the area of the real-time simulation of biological organs (here very complicated calculations need to be completed within a very small fraction of a second). As of now, digital computers are incapable of meeting the requirement of real-time performance.
 
LvW said:
(1) Yes, why not? However, in the time slot between the two connection states you have undefined conditions, of course. Moreover, this seems meaningful only if you have a continuous input signal (and not a step).
(2) This means that you will have an additional control loop which connects the output (decision maker) with one part of the circuit. In such a case, you must be careful to avoid self-excitation (stability problems).

Thank you once again for your replies. It would be helpful if you could answer the following questions also:

1) Why do we need to have a continuous input signal (why not a step)?
2) Is the self-excitation problem avoidable by a good design (at least to some extent so that it would not pose practical difficulties)?

(Just for information, I am a mechanical engineer and do not know much about electrical circuits.)
 
LvW said:
(1) Yes, why not? However, in the time slot between the two connection states you have undefined conditions, of course. Moreover, this seems meaningful only if you have a continuous input signal (and not a step).
(2) This means that you will have an additional control loop which connects the output (decision maker) with one part of the circuit. In such a case, you must be careful to avoid self-excitation (stability problems).

Could you provide some quantitative idea of the time needed to change the connections and/or parameters, compared to the time required for the "simulations"? (Of course, in an analog computer the time required for the "simulations" is negligibly small.)

This is because I am thinking about the possibility of achieving real-time simulation of biological organs by building a suitable analog computer. I am thinking about analog computers because they are incredibly fast. I should be able to change the connections/parameters DURING the simulations because the geometry of biological organs can change DURING the simulations. Of course, for the time being I am not bothered about what happens during the time the connections/parameters are being changed. Hence I would get the solution using an analog computer for a particular set of connections/parameters; the solution can be obtained in real time since I am using an analog computer. Next, if there is a change in the geometry of the biological organ (because of a surgical cut, say), I would change the connections and parameters, and then I would again get the solution in real time. But since I want to simulate the surgical cut in real time (which means that I should be able to complete the solution within a very small fraction of a second), using an analog computer will not solve my problem if changing the connections/parameters several times (which corresponds to cutting incrementally) cannot be completed within a very small fraction of a second.

Hence I wish to know whether the whole simulation (including the task of changing the connections/parameters) can be completed within a very small fraction of a second if one can build a suitable analog computer. (Of course, individual simulations can be completed within a very small fraction of a second if one goes for an analog computer. The term "individual simulations" here means solving for a particular set of connections and parameters.)
 
  • #10
Kirana Kumara P said:
But as per the above replies from LvW, one can change the connections as well as the parameters DURING a particular simulation.
Please note that this option is not available (as far as I know) for SPICE-based circuit simulation programs.
In this context, I have mentioned BLOCK-oriented programs only (like VISSIM).

Kirana Kumara P said:
Thank you once again for your replies. It would be helpful if you could answer the following questions also:
1) Why do we need to have a continuous input signal (why not a step)?
2) Is the self-excitation problem avoidable by a good design (at least to some extent so that it would not pose practical difficulties)?

1) The step response consists of a transient starting at t = 0. If, during the response time, the system is changed by switching between two states, the transient (which you are interested in) is destroyed because the initial conditions are altered.
2) Yes, of course. However, you need to be familiar with feedback theory and stability criteria.
 
  • Like
Likes Kirana Kumara P
  • #11
Kirana Kumara P said:
This is because I am thinking about the possibility of achieving real-time simulation of biological organs by building a suitable analog computer. I am thinking about analog computers because they are incredibly fast.

But not as fast as a digital computer. The Strong Church–Turing Thesis tells us that any analog computer can be efficiently simulated using a digital computer. Since digital circuits are much, much faster than analog circuits, it follows that there is nothing to be gained by this method in terms of speed.

Note that the speed of analog computers is limited by the same factors that limit the speed of any other analog circuit; there will always be some time associated with transferring information around a circuit, and we do not have efficient memories or buffers to make this process easier.
 
  • Like
Likes Kirana Kumara P
  • #12
This topic is all a bit hypothetical. Maybe you could give us some idea of the form of the equations that need to be solved. Many of us have years of electronic computation experience and know ways of doing quite complex things very simply and quickly.

It would be a pity to attach your project to analogue computing if there was a more flexible digital solution available. I would be quite surprised if we could not digitally out-compute an analogue computer with an array of digital RISC or signal processors.

On the other hand, if there were simple analogue solutions, we would probably recognise them quite quickly.
 
  • Like
Likes Kirana Kumara P
  • #13
Baluncore said:
This topic is all a bit hypothetical. Maybe you could give us some idea of the form of the equations that need to be solved. Many of us have years of electronic computation experience and know ways of doing quite complex things very simply and quickly.

It would be a pity to attach your project to analogue computing if there was a more flexible digital solution available. I would be quite surprised if we could not digitally out-compute an analogue computer with an array of digital RISC or signal processors.

On the other hand, if there were simple analogue solutions, we would probably recognise them quite quickly.

My problem is to solve a set of coupled nonlinear partial differential equations over an arbitrary region (solution region). I may also use numerical techniques like the finite difference method or the finite element method to get the solutions. Do you think it is impossible to obtain faster solutions using analog computers when compared to digital computers, when the set of equations, boundary conditions, and the solution region are specified?
 
  • #14
Kirana Kumara P said:
My problem is to solve a set of coupled nonlinear partial differential equations over an arbitrary region (solution region). I may also use numerical techniques like the finite difference method or the finite element method to get the solutions. Do you think it is impossible to obtain faster solutions using analog computers when compared to digital computers, when the set of equations, boundary conditions, and the solution region are specified?
You must put numbers on it before we can answer. We have no idea what you mean by fast.
 
  • Like
Likes Kirana Kumara P
  • #15
Kirana Kumara P said:
Do you think it is impossible to obtain faster solutions using analog computers when compared to digital computers, when the set of equations, boundary conditions, and the solution region are specified?
It might be that your non-linear equations perfectly fit some electronic analogue. Without seeing the form of the equations, and the non-linearity, it is impossible to tell.

An analogue computer can get trapped in a dead end as easily as a digital computer. With the digital computer you can repeat the failure exactly and analyse the problem. Because of analogue noise, an analogue computer will not always take the same path to a destination so it is difficult to repeat a failure for analysis.

I believe the digital simulation of an analogue computer has for some time now been faster and more accurate than the analogue computer itself.
 
  • Like
Likes Kirana Kumara P
  • #16
Kirana Kumara P said:
they are incredibly fast. I should be able to change the connections/parameters DURING the simulations because the geometry of biological organs can change DURING the simulations.

The idea that animal tissue can outrun a computer just doesn't ring true for me.
Nerve impulses move only around 400 ft/sec, I was taught. A physical cut proceeds only as fast as a scalpel can move. Or are you simulating something more kinematic, like a high-power laser? As fascinating as it'd be to do this analog,
I recommend you write a program with one of your finite element solutions, have it access a timer, and report how many microseconds elapsed during execution.

I did something similar on an embedded microcontroller running interpreted Basic, which is really slow. It was for a crane weigh cell that put out an ASCII string every 0.3 second representing the weight on the hook, some tens of tons. That ASCII number had to be converted to an analog voltage with a DAC and handed to a monitor that compared tension to strain gages, looking for unexpected deformation.
I had it set one output line high at routine start and set it back low when finished. Watching that line with a 'scope I could see how long it took.
Wow, did I learn about streamlining a program with that one! Cut my execution time by 75% with common-sense things like eliminating loops and unnecessary calculations.

So my point is:
If your criterion for "Extremely Fast" is what you can perceive with your senses, I think you need to familiarize yourself with just what the digital guys can do nowadays.
Search on "DSP IC".

old jim
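A minimal software version of the timing experiment suggested above, assuming Python with NumPy is available. The workload here is a placeholder, not the organ model; the timer calls play the role of raising and dropping the output line.

```python
import time
import numpy as np

def work():
    # placeholder workload: assemble and solve a 500 x 500 dense linear system
    a = np.random.rand(500, 500) + 500.0 * np.eye(500)   # diagonally dominant
    b = np.random.rand(500)
    return np.linalg.solve(a, b)

if __name__ == "__main__":
    start = time.perf_counter()   # "set the output line high"
    work()
    stop = time.perf_counter()    # "set it back low"
    print(f"execution took {(stop - start) * 1e6:.0f} microseconds")
```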
 
  • Like
Likes Kirana Kumara P
  • #17
anorlunda said:
You must put numbers on it before we can answer. We have no idea what you mean by fast.

By "fast" I mean that I should be able to complete the calculation within 30 milliseconds. It would be even better if I could complete the calculations within 1 millisecond.
 
  • #18
f95toli said:
But not as fast as a digital computer. The Strong Church–Turing Thesis tells us that any analog computer can be efficiently simulated using a digital computer. Since digital circuits are much, much faster than analog circuits, it follows that there is nothing to be gained by this method in terms of speed.

Huh? Your definitions must not match mine. Two electrons experiencing Coulomb force are an analog computer that runs instantaneously. A resistor with voltage applied instantaneously solves Ohm's Law (or Maxwell's Equations if you prefer.). How could digital be faster than that?
 
  • Like
Likes Kirana Kumara P
  • #19
Kirana Kumara P said:
By "fast" I mean that I should be able to complete the calculation within 30 milliseconds. It would be even better if I could complete the calculations within 1 millisecond.

Do you mean that to simulate the process in real time, you would like to calculate 1 ms of elapsed time in 1 ms of computer time?

Most of today's CPUs can accomplish very complicated things in 1ms.
 
  • Like
Likes Kirana Kumara P
  • #20
anorlunda said:
Huh? Your definitions must not match mine. Two electrons experiencing Coulomb force are an analog computer that runs instantaneously. A resistor with voltage applied instantaneously solves Ohm's Law (or Maxwell's Equations if you prefer.). How could digital be faster than that?

I guess it depends on what you mean by a computer. This is far outside my expertise; all I know about this comes from reading about e.g. adiabatic quantum computing and whether the D-Wave computer (which, if it is classical, is analog) is faster than a digital computer.
However, I think the point is that the time required for a digital computer to solve a given problem is bounded by a polynomial function of the resources used by the analog computer.
According to one of my review articles, one standard reference where my statement is discussed in more detail (although it mainly deals with NP-completeness, etc.) is:

Vergis, Anastasios, Kenneth Steiglitz, and Bradley Dickinson. "The complexity of analog computation." Mathematics and computers in simulation 28.2 (1986): 91-113.

You can find a PDF of the article online.

Also, the following is somewhat more readable
https://www.cs.princeton.edu/courses/archive/fall06/cos576/papers/yao_acm03.pdf

(see the bit about the ECT)

One interesting consequence of this is that physical systems of any kind cannot efficiently solve NP-complete problems.

Also, on somewhat related note, I recently saw a talk about work on using superconducting electronics to simulate neurons; although the circuit was still digital.
 
  • Like
Likes Kirana Kumara P
  • #21
Baluncore said:
I believe the digital simulation of an analogue computer has for some time now been faster and more accurate than the analogue computer itself.

I can interpret the term "digital simulation of analog computer" in these ways:

1) There are software packages for normal digital computers which can "build" the circuits virtually, and then simulate on the digital computers how the circuit behaves when subjected to a given input. The software packages can even predict the time required to solve a problem on the analog computer/circuit, without really building a prototype of the analog computer.

2) Manually writing the code for normal digital computers, where the code delivers the same results (or almost the same results) as the results that would have been obtained if an analog computer had been used for the calculations (instead of the digital computer). Here, the code for the normal digital computer should describe/model the analog computer one has in mind.

3) Hybrid computer, where the coding is done on the normal digital computer, and then transferred to a processor which is a sufficiently general-purpose circuit/processor.

Right now I have assumed that you mean point number 2) while saying "digital simulation of analog computer". I request you to please correct me if my interpretation of the phrase is wrong.
 
  • #22
Kirana Kumara P said:
Right now I have assumed that you mean point number 2) while saying "digital simulation of analog computer". I request you to please correct me if my interpretation of the phrase is wrong.
 
  • Like
Likes Kirana Kumara P
  • #23
I am grateful to everyone who has replied to my thread. I have got answers from this forum to many of my questions.

Apart from thinking about a project, the reason I picked up the topic is that I thought it would be great if it were possible for analog computers to change their connections and parameters DURING simulations. It is really great news that, during a simulation, the connections and parameters can change depending on the result from the previous computation carried out during the same simulation. This may be a step towards a "general purpose analog computer".

I wish to know how fast or how slow it would be to change the connections and parameters DURING a simulation, i.e., is there anything like "the time required to change the connections and parameters is much more than the time required for the true simulation part", or "the time required to change the connections and parameters is negligible", or "the ratio of the time required to change the connections and parameters to the time required for the true simulation part is problem/circuit/parameter dependent", etc.? If it happens to be the case that "the time required to change the connections and parameters is negligible", that would be great news for those who would like to see analog computers competing with digital computers (at least while solving certain specific problems).

We may note that in a normal digital computer, transistors can change their states billions of times a second. This is nothing but changing the "connections", and it happens very fast. In the same way, is it possible for analog computers to change their "connections" very fast, and to do so DURING simulations?

The true reason that I started this thread is that I believed it would be really great if analog computers possessed the following three properties: 1) they can change their connections and/or parameters DURING a simulation; 2) it takes an extremely small amount of time to change the connections and/or parameters; 3) the connections and parameters can change automatically DURING a simulation, depending on the result from the previous computation carried out during the same simulation. I believed that if all three of the above points happen to be true, that could result in an analog computer that outsmarts digital computers (at least while solving certain specific problems which have important applications, e.g., surgery/surgical simulation). I wanted to know whether what I believed is true and whether the three points mentioned above are true.
 
  • #24
jim hardy said:
As fascinating as it'd be to do this analog,
I recommend you write a program with one of your finite element solutions, have it access a timer, and report how many microseconds elapsed during execution.
old jim

Several researchers have tried to get nonlinear finite element solutions using digital computers. They tried to obtain the solutions within 30 milliseconds but without success. Even employing clusters or supercomputers has not been successful since that involves inefficient data transfers between processors. Hence the thoughts about going for analog computing.

Nonlinear finite elements invariably require the solution of a set of nonlinear simultaneous algebraic equations. Would it be possible to obtain this solution within 30 milliseconds if one goes for analog computing?
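For orientation only, here is a hedged sketch of that question on a digital computer: one Newton solve of n coupled nonlinear algebraic equations, timed against the 30 millisecond budget. The toy system (tridiagonal coupling plus a cubic term) and the use of NumPy/SciPy are assumptions for illustration, not the actual organ model or its real coefficients.

```python
import time
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def residual(x):
    # toy coupled system: 2*x_i - x_{i-1} - x_{i+1} + 0.1*x_i**3 - 1 = 0,
    # with x_{-1} = x_{n} = 0 standing in for crude boundary conditions
    left = np.concatenate(([0.0], x[:-1]))
    right = np.concatenate((x[1:], [0.0]))
    return 2.0 * x - left - right + 0.1 * x**3 - 1.0

def jacobian(x):
    n = len(x)
    main = 2.0 + 0.3 * x**2
    off = -np.ones(n - 1)
    return diags([off, main, off], offsets=[-1, 0, 1], format="csc")

def newton_solve(n=5000, tol=1e-8, max_iter=30):
    x = np.ones(n)                        # a reasonable starting guess
    for _ in range(max_iter):
        f = residual(x)
        if np.max(np.abs(f)) < tol:
            break
        x = x - spsolve(jacobian(x), f)   # one Newton step with a sparse solve
    return x

if __name__ == "__main__":
    start = time.perf_counter()
    newton_solve()
    elapsed_ms = (time.perf_counter() - start) * 1e3
    print(f"one solve of 5000 coupled nonlinear equations: {elapsed_ms:.1f} ms")
```

The result for the real problem depends entirely on the structure of the true equations; the sketch only shows how the 30 ms claim could be tested on an ordinary machine.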
 
  • #25
Kirana Kumara P said:
Would it be possible to obtain this solution within 30 milliseconds if one goes for analog computing?
It depends on the equations being solved.
 
  • #26
Are you up to something like this?
https://blogs.scientificamerican.co...groundbreaking-simulation-of-the-human-heart/


Kirana Kumara P said:
Nonlinear finite elements invariably require the solution of a set of nonlinear simultaneous algebraic equations.
Hmmm, how many equations in that set? You might need one analog computer per equation.
I'm assuming these equations are f(time)?
The little bit of analog computing I've done gives continuous solution as time rolls by.
One can scale it to run faster or slower than real time.

With what will you monitor its output(s)?
 
  • Like
Likes Kirana Kumara P
  • #27
The problem with FE methods is that all elements communicate with their immediate neighbours. That must be true for both analogue and digitally implemented arrays of processors. The problem with an analogue array is that each element must be built and connected, initialised and then started. If you have 10,000 nodes you will need 10,000 analogue processors. A digital processor, on the other hand, can quickly switch algorithms or be reloaded with the next problem. It has more flexibility. It takes time to read an analogue voltage, maybe 1 µs, or to charge a capacitor to a specified voltage, say 10 µs. Loading a digital register now takes only a few nanoseconds.

There are digital arrays like these now beginning to appear that will make a big difference to FEM.
https://www.parallella.org/2016/10/05/epiphany-v-a-1024-core-64-bit-risc-processor/

http://www.hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.23-Tuesday-Epub/HC28.23.70-Many-Core-Epub/HC28.23.720-KiloCore-BrentBohnenstiehl-v06-41.pdf
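A back-of-envelope reading of those numbers, assuming the values are loaded strictly one after another (a parallel loader would change the picture entirely); the register-load figure is an assumption consistent with "a few nanoseconds".

```python
nodes = 10_000
capacitor_charge_s = 10e-6   # charge one capacitor to a specified voltage (~10 us)
register_load_s = 5e-9       # load one digital register (a few nanoseconds)

print(f"serial analog setup  : {nodes * capacitor_charge_s * 1e3:.0f} ms")
print(f"serial register loads: {nodes * register_load_s * 1e6:.0f} us")
```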
 
  • Like
Likes Kirana Kumara P and jim hardy
  • #28
jim hardy said:
Are you up to something like this?
https://blogs.scientificamerican.co...groundbreaking-simulation-of-the-human-heart/
Hmmm, how many equations in that set? You might need one analog computer per equation.
I'm assuming these equations are f(time)?
The little bit of analog computing I've done gives continuous solution as time rolls by.
One can scale it to run faster or slower than real time.

With what will you monitor its output(s)?


My project is something similar. But the differences are:
1) I am happy with just the macroscopic level (no multiscale modelling)
2) I am happy with just solid mechanics (multiphysics is not a necessity)
3) But in my case one should be able to obtain one complete solution within 30 milliseconds

I may need around five thousand equations in the set for the modelling to be reasonably accurate. The more equations, the more accurate the modelling is going to be. However, a somewhat smaller number of equations (say three thousand, or one thousand in the worst case) is also okay (here I am ready to sacrifice some accuracy as a trade-off for speed).

The equations (i.e., a particular set of equations) are not time-dependent.

However, the set of equations can change over time. This may require changes in the connections and parameters of the analog circuit. Changing the connections and/or parameters should not take too much time.

Ideally, I should be able to solve 30 different sets of equations in one second (the number 30 is significant because, for visual continuity, one needs about 30 frames per second, just as a reasonably good video needs about 30 still frames per second). In the worst case, I should be able to solve 10 different sets of equations (each set having about 5000 equations) in one second. Of course, the time required to change the connections and/or parameters should also be included in this one-second time interval.
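A sketch of the bookkeeping implied above, with made-up durations: at 30 frames per second each frame has about 33 ms, and reconfiguration plus one solve must both fit inside it. reconfigure() and solve() are placeholders, not a claim about what either machine can actually do.

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0     # about 33 ms per frame at 30 frames per second

def reconfigure():
    time.sleep(0.010)           # pretend switching the connections takes 10 ms

def solve():
    time.sleep(0.015)           # pretend one "individual simulation" takes 15 ms

def run_frames(n_frames=30):
    missed = 0
    for _ in range(n_frames):
        start = time.perf_counter()
        reconfigure()
        solve()
        if time.perf_counter() - start > FRAME_BUDGET_S:
            missed += 1
    return missed

if __name__ == "__main__":
    print("frames that missed the ~33 ms budget:", run_frames())
```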
 
  • #29
Baluncore said:
The problem with FE methods is that all elements communicate with their immediate neighbours. That must be true for both analogue and digitally implemented arrays of processors. The problem with an analogue array is that each element must be built and connected, initialised and then started. If you have 10,000 nodes you will need 10,000 analogue processors. A digital processor, on the other hand, can quickly switch algorithms or be reloaded with the next problem. It has more flexibility. It takes time to read an analogue voltage, maybe 1 µs, or to charge a capacitor to a specified voltage, say 10 µs. Loading a digital register now takes only a few nanoseconds.

There are digital arrays like these now beginning to appear that will make a big difference to FEM.
https://www.parallella.org/2016/10/05/epiphany-v-a-1024-core-64-bit-risc-processor/

http://www.hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.23-Tuesday-Epub/HC28.23.70-Many-Core-Epub/HC28.23.720-KiloCore-BrentBohnenstiehl-v06-41.pdf

Let us assume that we can find a good electrical analogy for our problem in hand. Then it may be possible to get faster solutions using an analog computer (when compared to digital computers) if building, connecting, initialising, and starting do not take too much time (a 10 microsecond delay is okay for me, since I have 30 milliseconds to complete "one simulation"; here "one simulation" includes the time required to change the connections and parameters, initialise, start, etc., once). Of course, I have made use of some of the replies in this thread to arrive at this conclusion. Please correct me if I am wrong.

Regarding the links: quite often, less general-purpose processors are not as useful as they appear to be when solving complicated problems. For example, one may follow this link on GPUs:
http://dl.acm.org/citation.cfm?id=1816021
 
  • #30
Kirana Kumara P said:
Let us assume that we can find a good electrical analogy for our problem in hand.
There is no problem in hand, because you have not yet presented a single example equation, let alone a set of equations for any processor. It is time to stop hypothesising and get real.

Let's instead make the safe assumption that there is a digital algorithm that runs at 100 times the speed of an analogue computer and can be reprogrammed on the fly. Indeed, nine of those kilo-processor arrays would give you a 96 x 96 processor array with 9216 elements. There is no way an analogue computer could be built and calibrated before a digital processor array finished that and many other jobs.

Intel manufactures CPUs. I do not trust a committee of twelve Intel employees when they set out to denigrate multiple GPUs, claiming that Intel CPUs are better. It suggests Intel have stagnated and are now being threatened by multiple GPUs.
 
  • Like
Likes Kirana Kumara P
  • #31
f95toli said:
But not as fast as a digital computer. The Strong Church–Turing Thesis tells us that any analog computer can be efficiently simulated using a digital computer. Since digital circuits are much, much faster than analog circuits, it follows that there is nothing to be gained by this method in terms of speed.

Note that the speed of analog computers is limited by the same factors that limit the speed of any other analog circuit; there will always be some time associated with transferring information around a circuit, and we do not have efficient memories or buffers to make this process easier.

I'm not familiar with the Strong Church–Turing Thesis, but I can tell you that the fastest signal processing systems have always been analog systems. Most communications technologies (voiceband, RF, hard-disk read channel, fiber optics, etc.) start out as analog because digital isn't fast enough, and eventually turn digital once the technology catches up.

For a current example, the highest rate communications channels (for example, DDR4 and 40G+ ethernet) use primarily analog signal paths. An analog equalizer and decision circuit is much, much faster than an ADC followed by a DSP. 40G systems are moving to ADC-based architectures now but analog signal processing is still competitive. A digital computer could in principle "simulate" an analog equalizer, but I assure you it wouldn't be faster or more efficient.

I would say general purpose analog computing is done because digital computers are so good at "simulating the simulations". However, analog signal processing is still alive and well and mostly working in the communications and imaging spaces.
 
  • Like
Likes FactChecker and Kirana Kumara P
  • #32
analogdesign said:
A digital computer could in principle "simulate" an analog equalizer, but I assure you it wouldn't be faster or more efficient.
In that example the analogue signal is real time, so both the digital processor and the analogue system are operating in real time, both waiting for the next data. But since the continuously changing state variables in an analogue equaliser can only be stored in analogue components, usually capacitors, the analogue system can only ever equalise one channel, while a digital signal processor could equalise a great many channels in parallel, all at the same time. That is where the efficiency of digital systems arises. The digital system never needs to be calibrated as unstable components drift off value with heat and time.

analogdesign said:
I would say general purpose analog computing is done because digital computers are so good at "simulating the simulations". However, analog signal processing is still alive and well and mostly working in the communications and imaging spaces.
What do you mean by "is done"? Do you mean "is now dead and gone", or "is still used today"?

There is a big difference between building parallel arrays of analogue computers to solve non-linear differential equations, and using RF technology to receive and demodulate one signal channel. Signal processing using algorithms like digital IQ mixing, the DFT and digital filtering is rapidly replacing analogue signal processing. Look at the world of SDR where only the front-end down-converter is still analogue.

Analogue computing was replaced by digital processors many decades ago.
 
  • Like
Likes Kirana Kumara P
  • #33
Baluncore said:
In that example the analogue signal is real time, so both the digital processor and the analogue system are operating in real time, both waiting for the next data. But since the continuously changing state variables in an analogue equaliser can only be stored in analogue components, usually capacitors, the analogue system can only ever equalise one channel, while a digital signal processor could equalise a great many channels in parallel, all at the same time. That is where the efficiency of digital systems arises. The digital system never needs to be calibrated as unstable components drift off value with heat and time.

I do not agree with this. Typically a communication system migrates to digital implementation for functionality and cost reasons, not efficiency. The capacitor here is analogous to a register. Just as you can have multiple registers (in memory or physically) you can have arrays of capacitors to do multiple channels in parallel. I worked on an integrated analog signal processor (in this century) that had 10s of thousands of physical equalizer channels. It would have been impossible to process this volume of data digitally using FPGAs or even custom digital ASICs because of power and area constraints. Remember the power of an ADC goes up about 4X per bit (if it is noise limited), so keeping the data in the analog domain when feasible is an excellent power-saving technique (it creates its own problems, of course). In my experience, an analog solution is almost always lower power than a competing digital solution, but it loses out in development cost, design time, functionality, and ease of use and integration in a larger system. But it wins on power, and that is why analog solutions still find use in practice.

I completely agree that as time goes on applications that were once served by analog signal processing (out of necessity) migrate to digital processing for various reasons. However, analog techniques are continuously applied to new, faster, or more power-sensitive applications. For the foreseeable future I believe analog signal processing will still be of interest.

Baluncore said:
What do you mean by "is done"? Do you mean "is now dead and gone", or "is still used today"?

I wasn't clear enough here. I meant "dead and gone". As you said, general purpose analog computers (and hybrid computers) were for the most part gone by the early 1980s and by then only used in very specialized applications (such as aerodynamics simulation).

Baluncore said:
There is a big difference between building parallel arrays of analogue computers to solve non-linear differential equations, and using RF technology to receive and demodulate one signal channel. Signal processing using algorithms like digital IQ mixing, the DFT and digital filtering is rapidly replacing analogue signal processing. Look at the world of SDR where only the front-end down-converter is still analogue.

Analogue computing was replaced by digital processors many decades ago.

Indeed, although be sure you define what you mean by "one channel". Typically in a cellular basestation (where I have some design experience; these days they are implemented as SDRs), the intermediate frequency (or the baseband if a ZIF architecture is used) is sampled and an entire band of channels is digitized at once. So, the analog front end processes a great many (100s of) channels simultaneously. Lastly, I would submit that the front end of any DSP, namely the ADC, is itself a sophisticated analog signal processor, although how much analog processing it does depends strongly on the architecture (SAR vs Pipelined vs Sigma-Delta).

I guess my point in all of this was to show that rather sophisticated analog signal processing is still used in practical systems, although it is "under the hood". Certainly no one these days would use a general-purpose analog computer. It would make no sense except I suppose as a hobby project.
 
  • Like
Likes Kirana Kumara P and Carrock
  • #34
I am very much grateful to all those who have replied to my questions.

I am still not very clear whether the time required to change the connections is significant when compared to the time required for the simulation.

Of course, the answer to the above question may be problem-dependent. However, let us for now assume that I can come up with a good electrical analogy for the problem I have in mind. I think that how to come up with a suitable electrical analogy could itself be a research problem. And I would try to solve that research problem only if there is a possibility of finding a solution to it. Hence the above question.

To be clearer, I may define my research problem as a set of coupled nonlinear partial differential equations involving three variables (corresponding to three dimensions). The equations have to be solved over a 3D region of arbitrary shape (the solution domain is a specified 3D region). Now, mapping the equations together with the geometry onto an analog computer could itself be a serious research problem.

One of the methods of solving the above problem could be to make use of the finite element method (although it may be possible to solve the differential equations directly by building a suitable analog computer, without making use of the finite element method). The finite element method enables one to approximate the set of differential equations by a set of nonlinear simultaneous algebraic equations (but here each of the equations in the set of algebraic equations can contain hundreds of terms). Again, building an analog computer that can carry out this simulation could itself be a research problem.

Now coming to the details of the simulation: there can be changes in the geometry (or solution domain, or solution region) during simulations. But let us assume that this change in geometry can be properly addressed by replacing the whole simulation with a set of simulations; for each of the individual simulations within the set, there is no change in the geometry during that (individual) simulation. Now each of the individual simulations corresponds to a particular network of connections, and when one switches from one individual simulation to the next, the network of connections would change. I am worried whether this switching would take a significant amount of time. For now let us assume that I can come up with a really good electrical analogy, so that the time required for the individual simulations is negligibly small (of course, this is the idea behind choosing an analog computer over a digital computer). Still, we can expect the analog computer to be faster than a digital computer only if changing the connections (or switching) does not take a significant amount of time.

My goal is not to prove that an analog computer is faster than a digital computer or vice versa. I want to complete a whole simulation within one second. The whole simulation consists of a set of thirty individual simulations. Each of the individual simulations requires a particular configuration of connections, and the configuration of connections is different for each of the individual simulations. It is a well-known fact that present-day digital computers (even clusters or supercomputers) are not capable of providing the solution in time (they cannot complete the thirty individual simulations within one second). Hence I am curious whether one could address the problem by building a suitable analog computer. I can see that building an analog computer would solve the problem if both of these hold good: 1) one should be able to find a very good electrical analogy; 2) the time required to change the connections (or the network of connections) should be sufficiently small. Assuming that it is possible to find a very good electrical analogy, the success of the analog computer to be built depends on whether one can change the network of connections within a sufficiently small time interval. Since finding a suitable electrical analogy would require a significant amount of work, I would involve myself in that task only if there is a possibility that the network of connections can be changed within a sufficiently small time interval.
 
  • #35
Kirana Kumara P said:
I am worried whether this switching would take a significant amount of time.

We cannot possibly answer that without knowing what you define as significant. Let's say that a relay switches in 10 milliseconds. Is that significant?

Your description of the problem does not describe any dynamics at all. It is the dynamics (i.e. range of interest in the frequency domain) that determine the needs of switching. Indeed, your description sounds like the problem may be static, with no dynamics at all.

If you have a nonlinear 3D problem, the required granularity is also a critical parameter. If you represented it with finite elements, how many elements would you need?

The quality of answers you receive here depends strongly on the quality of the question description. You are asking for design advice. In engineering, requirements specifications always precede design.
 
  • Like
Likes Kirana Kumara P
  • #36
Kirana Kumara P said:
Still, we can expect the analog computer to be faster than a digital computer only if changing the connections (or switching) does not take a significant amount of time.
False. For the same bandwidth technology, a digital processor will produce results more than 100 times faster than any analogue computer, with or without changes of parameters. A digital signal never has to settle to a fixed value, it need only be clear of the transition threshold. An analogue circuit requires more time and a lower noise environment. It is unlikely that the errors of an analogue computer at speed will be much less than 0.4%, equivalent to 8 bits. We can do a great many 16 bit digital computations, (with less than 0.004% error), in the time it takes one analogue signal to settle.

Kirana Kumara P said:
One of the methods of solving the above problem could be to make use of the finite element method (although it may be possible to solve the differential equations directly by building a suitable analog computer, without making use of the finite element method).
A finite array of analogue computer elements is still FEM.

Kirana Kumara P said:
Of course, the answer to the above question may be problem dependent.
You are correct, there are many problem dependent possibilities. Without specifications, or a set of equations, anything is possible. It seems like you are trying faithfully to maintain a belief in analogue computers, while all the evidence suggests they have been extinct for quite some time.
I no longer expect you to present specifications and equations, as to do that would threaten your faithfully held belief.
 
  • Like
Likes Kirana Kumara P
  • #37
For the most part, the passive components in an analog simulation would remain constant, but hypothetically there can be non-linear components, or ones with initial conditions, or ones that depend on the value of another term (feedback). For example, a light bulb will change its resistance as it heats up, a capacitor can have an initial DC charge, or cell energy uptake can depend on available oxygen and other factors. Active non-linear and time-dependent circuits can also be developed, but I have no idea how one would implement that for a biological system.

As far as I know, the analog computer is best at solving partial differential equations, not necessarily at simulating systems. If you can write a differential equation for this system, then you need to look at each term for nonlinearity and time dependence and endeavor to build a circuit that emulates that function. You're going to have to limit what you simulate; biological systems have many feedback terms, and depending on the result and resolution you want, it could be overwhelming.
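A tiny hedged sketch of the "component that depends on the solution" idea above, using made-up element values throughout: a resistor whose resistance rises with its own temperature, so the component value evolves with the computed result as the run proceeds.

```python
def simulate(t_end=20.0, dt=1e-3):
    V = 10.0                   # applied voltage
    R0, alpha = 10.0, 0.05     # cold resistance and temperature coefficient (made up)
    T = T_amb = 20.0           # filament and ambient temperature
    C_th, k_loss = 0.5, 0.2    # thermal capacitance and heat-loss coefficient (made up)
    t = 0.0
    R = R0
    while t < t_end:
        R = R0 * (1.0 + alpha * (T - T_amb))      # resistance follows temperature
        P = V * V / R                             # electrical power heats the filament
        T += dt * (P - k_loss * (T - T_amb)) / C_th
        t += dt
    return R, T

if __name__ == "__main__":
    R, T = simulate()
    print(f"after 20 s of simulated time: R = {R:.1f} ohm, T = {T:.1f} C")
```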
 
  • Like
Likes Kirana Kumara P
  • #38
anorlunda said:
We cannot possibly answer that without knowing what you define as significant. Let's say that a relay switches in 10 milliseconds. Is that significant?

Your description of the problem does not describe any dynamics at all. It is the dynamics (i.e. range of interest in the frequency domain) that determine the needs of switching. Indeed, your description sounds like the problem may be static, with no dynamics at all.

If you have a nonlinear 3D problem, the required granularity is also a critical parameter. If you represented it with finite elements, how many elements would you need?

The quality of answers you receive here depends strongly on the quality of the question description. You are asking for design advice. In engineering, requirements specifications always precede design.

Ten milliseconds is insignificant for me (and hence okay for me). This is because I have about 30 milliseconds for completing one individual simulation and to change the connections/parameters once. If changing the connections/parameters can be completed in 10 milliseconds, I still have 20 milliseconds for the actual (individual) simulation. Since the very idea of going for an analog computer is to make the simulations run faster, it should be possible for the individual simulations to complete within the remaining 20 milliseconds (otherwise there is no point going for an analog computer). I wish to know whether the 10 milliseconds number is a realistic estimate of the time needed to change the connections/parameters (at least I wish to know whether the 10 milliseconds figure is not impossible while solving certain problems; here, these "certain problems" may or may not be the problem that I have in my mind).

Yes, one can assume that I am interested in solving a static problem. But I am interested in solving a series of static problems (30 static problems) within one second. I am not interested in the dynamics that may come into the picture when one switches from one static problem to the next. Let us assume that the system is designed well, so that it is practically quite stable (something close to a critically damped system, where the oscillations before reaching the steady state are minimal). From a practical point of view, even if the unwanted dynamics and oscillations happen to be unavoidable, it may be reasonable to assume that it would not usually take more than 10 milliseconds for the oscillations to settle down. Hence it may be reasonable to assume that I can complete one individual (static) simulation within 30 milliseconds (including the time required to change the connections/parameters once), if the time required to change the parameters/connections is only about 10 milliseconds.

I would prefer using a minimum of about 500 elements. There is no harm in using more elements. But I want an individual simulation (plus the time required to change the connections/parameters once) to be completed within 30 milliseconds.
 
  • #39
Kirana Kumara P said:
I would prefer using a minimum of about 500 elements. There is no harm in using more elements. But I want an individual simulation (plus the time required to change the connections/parameters once) to be completed within 30 milliseconds.

It was about 45 years ago when we quit using analog circuit analogies to solve equations, because digital solutions became so much better. Therefore, I think it is likely that the others in this thread who say that digital solutions would be faster are right, and you are wrong. However, we don't have enough details about your problem to say that conclusively. Therefore, I'll take you at your word that your analog circuits can do in 30 ms what supercomputers are not able to achieve. So what then?

It sounds like the speed to switch components between solutions is not the limiting factor. You can get solid state relays even faster than 10 ms to switch one resistor out and a different resistor in.

But with 500 or more nodes, you are talking about thousands, or tens of thousands of analog components. The time to design and fabricate them will be very long. You will likely need a digital computer anyhow to decide on the parameter changes and to issue commands to the thousands of relays. Heat dissipation will become a problem for this monster machine that will fill a whole room.

Most of all, reliability could make it all impractical. In my early days, we used analog computers in a project. We had a technician who showed up early every working day to replace the vacuum tubes that had failed the previous day. That took an hour each day, and we only had 10 amplifiers to worry about. Even with modern components, the MTBF of this machine might be less than an hour.

Switching speed aside, I am highly skeptical that your analog computer dream will be practical. If you asked me to collaborate with you on the project, I would flee the scene. It would be a career killer to go back 45 years in technology to solve a problem.

My best advice is that you should re-verify your understanding about what modern digital computers can accomplish in a short time. It is likely that you got it wrong the first time.
 
  • Like
Likes Kirana Kumara P
  • #40
The topology of integrators and summing amplifier circuits used for the solution of differential equations in analogue computers is the same as was used in analogue state variable filters. Those filters have now been replaced by very low power digital filters throughout signal processing technology.
https://en.wikipedia.org/wiki/State_variable_filter
https://en.wikipedia.org/wiki/Digital_filter

It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array
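A hedged sketch of that idea in Python (standing in for FPGA fabric): the two-integrator loop of a state-variable filter implemented with discrete integrators. Because the coefficients are just numbers in memory, they can be reloaded between samples, i.e. during the run. The sample rate, tuning values, and retune point below are illustrative only.

```python
import math

def state_variable_filter(samples, fs=48_000.0, f0=1_000.0, q=0.7):
    w = 2.0 * math.pi * f0 / fs      # integrator gain per sample (small-angle form)
    lp = bp = 0.0                    # low-pass and band-pass state variables
    out = []
    for n, x in enumerate(samples):
        if n == len(samples) // 2:   # retune the "circuit" halfway through the run
            w = 2.0 * math.pi * 2_000.0 / fs
        hp = x - lp - bp / q         # summing junction
        bp += w * hp                 # first integrator
        lp += w * bp                 # second integrator
        out.append(lp)
    return out

if __name__ == "__main__":
    impulse = [1.0] + [0.0] * 999
    response = state_variable_filter(impulse)
    print("first five output samples:", [round(v, 5) for v in response[:5]])
```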
 
  • Like
Likes Kirana Kumara P
  • #41
Baluncore said:
False. For the same bandwidth technology, a digital processor will produce results more than 100 times faster than any analogue computer, with or without changes of parameters. A digital signal never has to settle to a fixed value, it need only be clear of the transition threshold. An analogue circuit requires more time and a lower noise environment. It is unlikely that the errors of an analogue computer at speed will be much less than 0.4%, equivalent to 8 bits. We can do a great many 16 bit digital computations, (with less than 0.004% error), in the time it takes one analogue signal to settle.

A finite array of analogue computer elements is still FEM.

You are correct, there are many problem dependent possibilities. Without specifications, or a set of equations, anything is possible. It seems like you are trying faithfully to maintain a belief in analogue computers, while all the evidence suggests they have been extinct for quite some time.
I no longer expect you to present specifications and equations, as to do that would threaten your faithfully held belief.

When one uses the word "simulation", it can mean one of the following:

1) Simulation of physics using a digital computer
2) Simulation of physics using an analog computer
3) Simulation of the analog computer of point 2) above, on a digital computer

One may note that points 1) to 3) are not one and the same (although the results obtained using the three methods are expected to be practically the same). When I talk about analog simulations, it always refers to 2) above (not 3) above). I am interested in simulating certain physics, not a certain analog computer. How one represents the physics depends on whether one uses a digital computer or an analog computer.

To give a simple example, if someone wants to find the circumference of a circle using a digital computer, he can just use the well-known formula that gives the circumference when the radius is known. On the other hand, an analog computer is something like actually drawing a circle with the specified radius on the ground, and then actually measuring the circumference using a thread. Now, point 3) above is like drawing the circle on a computer screen using an equation that describes the circle, and then "measuring the circumference" by counting the number of pixels on the circumference and knowing the distance between individual pixels. Hence it is not fair to simply compare the speed of digital computers to analog computers, because the problems to be solved are themselves different (although the physics that needs to be simulated is the same, the analog model is different from the digital model).

Hence I do not understand your point that a digital simulation should be at least 100 times faster than the corresponding analog simulation. I believe that the success of analog computers depends on the possibility of finding a good electrical analogy. Of course, when it comes to problems like the simulation of biological organs, it may turn out to be a very complicated task and the risk of failure may also be high, but this makes the problem interesting, challenging, and important. But no one would take up the challenge (and risk) if there were no possibility of being successful.

These three points are still of concern to me (thanks to many of the replies above, which helped me to get insight into the following points):
1) Time consumed for actively changing the connections and/or parameters
2) Time required for the oscillations to settle
3) Unpredictable and uncontrolled variation in parameters because of heating and drift over time

Constructing arrays of analog computing elements resembles the Finite Difference Method more than the Finite Element Method. The Finite Difference Method carries out simulations by replacing differential equations with difference equations.

There is a reason for not presenting the nonlinear differential equations. Available literature does not give the final form (which is required for our purpose) of these differential equations. This is like first defining a variable "a" as a function of the variables "b", "c", "d", then defining a variable "e" as a function of "a", then defining a variable "f" as a function of "e", then writing down the nonlinear differential equation in terms of "f". One may need to use a digital computer to get the final form of the differential equations. Moreover, some of the parameters (coefficients) in the differential equations will not be known beforehand. For example, the coefficients may depend on the position of the mouse pointer on the screen that is connected to a digital computer; the mouse pointer would be actively controlled by a human user (nobody can predict beforehand how the human user is going to move the mouse pointer); hence the digital computer would note down the position of the mouse pointer, then calculate the coefficients for the differential equations. Then the digital computer can issue commands (at appropriate times) to the analog computer to change the connections/parameters of the analog computer.
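A minimal sketch of how a digital computer could carry out that chain of substitutions symbolically, assuming SymPy is available; the expressions chosen for "a", "e", and "f" are placeholders, not the real constitutive relations.

```python
import sympy as sp

b, c, d = sp.symbols("b c d")
a = b * c + sp.sin(d)      # a = a(b, c, d)   (placeholder expression)
e = a**2 + 1               # e = e(a)         (placeholder expression)
f = sp.sqrt(e)             # f = f(e)         (placeholder expression)

# the composition is carried through automatically, giving f directly in b, c, d;
# pieces of the "final form" (e.g. a derivative) can then be read off numerically
print("f(b, c, d) =", sp.simplify(f))
print("df/db      =", sp.simplify(sp.diff(f, b)))
```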

If I am going to use the Finite Element Method, I can state the form of the final set of equations to be solved using an analog computer; that is, as I have already mentioned, I need to solve just a set of simultaneous nonlinear algebraic equations (not nonlinear differential equations). As noted in the previous paragraph, I can obtain the numerical values of the coefficients only at run time. As mentioned already, the set contains about 5000 simultaneous nonlinear algebraic equations (but if that number is too large, I am okay with 1000 equations as well).
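For comparison, here is a miniature sketch of what a digital solver for such a set would do (my own illustration, with a two-unknown system and made-up coefficients; the real problem would have about 1000-5000 unknowns, with the coefficients arriving only at run time):

```python
import numpy as np

def solve_nonlinear(coeffs, x0, tol=1e-3, max_iter=20):
    """Newton's method for F(x) = 0 with run-time coefficients.
    Illustrative 2-unknown system:
        a*x0**3 + x1 - b = 0
        x0 + c*x1**2 - d = 0
    """
    a, b, c, d = coeffs
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        F = np.array([a * x[0]**3 + x[1] - b,
                      x[0] + c * x[1]**2 - d])
        J = np.array([[3.0 * a * x[0]**2, 1.0],
                      [1.0,               2.0 * c * x[1]]])
        dx = np.linalg.solve(J, -F)   # one linear solve per Newton step
        x += dx
        if np.linalg.norm(dx) < tol:  # modest accuracy is enough here
            break
    return x

# coefficients that, in the real application, would come from the digital host at run time
print(solve_nonlinear(coeffs=(1.0, 2.0, 1.0, 2.0), x0=(1.2, 0.8)))  # converges to about (1, 1)
```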

Finally, speed is more important to me than accuracy; an error of about 5% would also be acceptable.
 
  • #42
Baluncore said:
It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array

When using FPGAs, is it possible to change the connections and parameters DURING simulations?
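To make sure I understand the suggestion, here is a minimal software sketch (my own illustration, not FPGA code) of what replacing an analogue node with a digital filter would amount to: a first-order RC low-pass stage becomes a one-line difference equation whose coefficient can be changed between samples, and an FPGA would run many such update rules in parallel.

```python
# Illustrative only: a first-order RC low-pass node, d(u_out)/dt = (u_in - u_out)/(R*C),
# discretised with a forward-Euler step so it becomes a simple digital filter update.
def lowpass_step(u_out, u_in, alpha):
    """One digital-filter update; alpha = dt / (R*C) can be changed between samples."""
    return u_out + alpha * (u_in - u_out)

# run the node for a step input; the coefficient could be re-programmed at any sample
u_out, alpha = 0.0, 0.1
for n in range(50):
    u_out = lowpass_step(u_out, u_in=1.0, alpha=alpha)
print(round(u_out, 3))   # about 0.995: the node settles towards 1.0, like the analog RC stage
```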
 
  • #43
anorlunda said:
It was about 45 years ago when we quit using analog circuit analogies to solve equations because digital solutions became so much better. Therefore, I think it is likely that the others in this thread, who say that digital solutions would be faster, are right and you are wrong. However, we don't have enough details about your problem to say so conclusively. Therefore, I'll take you at your word that your analog circuits can do in 30 ms what supercomputers are not able to achieve. So what then?

It sounds like the speed to switch components between solutions is not the limiting factor. Solid-state relays are available that can switch one resistor out and a different resistor in, in well under 10 ms.

But with 500 or more nodes, you are talking about thousands, or tens of thousands of analog components. The time to design and fabricate them will be very long. You will likely need a digital computer anyhow to decide on the parameter changes and to issue commands to the thousands of relays. Heat dissipation will become a problem for this monster machine that will fill a whole room.

Most of all, reliability could make it all impractical. In my early days, we used analog computers in a project. We had a technician who showed up early every working day to replace the vacuum tubes that had failed the previous day. That took an hour each day, and we only had 10 amplifiers to worry about. Even with modern components, the MTBF of this machine might be less than an hour.

Switching speed aside, I am highly skeptical that your analog computer dream will be practical. If you asked me to collaborate with you on the project, I would flee the scene. It would be a career killer to go back 45 years in technology to solve a problem.

My best advice is that you should re-verify your understanding about what modern digital computers can accomplish in a short time. It is likely that you got it wrong the first time.

The relevant literature indicates that no one has been successful in solving the problem on present-day digital computers (the major part of the problem involves solving a set of 5000 nonlinear simultaneous algebraic equations thirty times a second). I am clear about this.

A modern supercomputer (which is a digital computer) may complete, within one second, a simulation that would take one hour on a desktop computer. However, it may not be capable of completing, within 30 milliseconds, a simulation that takes one minute on a desktop computer. This is because of the time required for communication between the processors in the supercomputer (a rough illustration with assumed numbers follows after the list below). This limitation can be avoided only if at least one of the following holds good in the future:
1) We will have individual digital processors which are very fast (like 100 GHz processors)
2) Time required for communication between the processors in a supercomputer is reduced to a great extent.
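A back-of-the-envelope sketch of the communication argument (the step count, processor count, and per-step latency below are assumptions I am making purely for illustration, not measured figures):

```python
# Illustrative numbers only: why many processors may not help a short real-time solve.
desktop_time = 60.0        # one minute on a single desktop machine (from above)
target_time = 0.030        # the 30 ms real-time budget
n_steps = 2000             # assumed number of serial steps in the solve
n_processors = 1000        # assumed processors sharing the arithmetic
latency_per_step = 20e-6   # assumed 20 microseconds of inter-processor latency per step

arithmetic = desktop_time / n_processors       # arithmetic shrinks with more processors
communication = n_steps * latency_per_step     # communication per step does not
total = arithmetic + communication
print(f"parallel time ~ {total*1e3:.0f} ms vs budget {target_time*1e3:.0f} ms")
# -> about 100 ms: the communication term alone (40 ms) already exceeds the 30 ms budget.
```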

Coming to analog computers, earlier I was worried about:
1) switching speed
2) time required for the signals to settle
3) unwanted change in the parameters because of heating and drift over time

From your reply above, I have to add one more important point to the above list:
4) reliability

Coming to practical difficulties of analog computing, the following points need to be noted also:
1) time, effort, and expertise required to design and fabricate
2) difficulty in finding collaborators
3) space requirement and cooling arrangement

I have still not lost all hope in the analog computer, since I am ready to sacrifice a bit of accuracy as a trade-off for speed (even an error of about 5% may be okay for me). Assuming that switching speed is not a major concern, this trade-off might also take care of the time required for the signals to settle and of the unwanted change in parameters due to heating and drift over time. Reliability only comes into the picture after the computer is built. However, realizing the analog computer would likely be a long and tedious task, and success is not guaranteed. If it were successful, one could claim that the analog computer has done something that even a modern supercomputer has not been able to achieve so far.
 
  • #44
Baluncore said:
A finite array of analogue computer elements is still FEM.

Yes, it could be FEM (or it could be another technique, e.g., the Finite Difference Method).

One of my earlier replies could imply that I do not agree with the statement "a finite array of analog computer elements is still FEM". Hence I am writing this reply to clarify that I do not contradict the statement. But it need not be FEM alone (in fact, the resemblance to the Finite Difference Method is stronger).
 
  • #45
I wish to ask a new question here.

Suppose we have an analog circuit (true hardware, not a digital computer simulation of hardware). If we give it some inputs, the circuit delivers the output (result) within a certain time interval (the solution time). Let us call this "analog simulation".

Next, let us assume that we simulate the same circuit on a digital computer. Here we are not concerned with the physics behind the construction of the analog circuit, and we will not consider the analytical solution of the problem at hand. In fact, we will not even bother about the original problem; we will just concentrate on the analog circuit and simulate it on a digital computer. Let us call this "digital simulation of analog circuit".

I wish to know which of the two simulations is faster. Or does it depend on the problem (circuit) at hand whether the "analog simulation" or the "digital simulation of analog circuit" is faster?

One may note that both of the above simulations are different from a purely "digital simulation", which does not require any analog circuit at all (real or virtual).
 
  • #46
Kirana Kumara P said:
I wish to know which of the two simulations is faster. Or does it depend on the problem (circuit) at hand whether the "analog simulation" or the "digital simulation of analog circuit" is faster?
An analogue processor solves the problem as a continuous function. The digital processor takes discrete steps to reach the solution. To satisfy the Nyquist-Shannon sampling theorem, the rate of samples taken by the digital processor will be more than twice the highest significant harmonic component in the signal. That same rule applies to digital filters. It can be shown that if the sampling theorem is obeyed, the discrete digital and the continuous analogue are solving the same system of equations. Fourier would agree.
I expect a digital processor to produce results about 100 times faster than an analogue processor.

Kirana Kumara P said:
One may note that both of the above simulations are different from a purely "digital simulation", which does not require any analog circuit at all (real or virtual).
There should be no difference. Both digital algorithms will be an analogue of the organ being simulated. If they are different, one must be a worse approximation than the other.

You are worrying about irrelevant things. Digital processors will now always beat analogue processors. Your faith in an array of analogue processors looks like an example of “Steampunk”.
 
  • Like
Likes Kirana Kumara P
  • #47
Baluncore said:
An analogue processor solves the problem as a continuous function. The digital processor takes discrete steps to reach the solution. To satisfy the Nyquist-Shannon sampling theorem, the rate of samples taken by the digital processor will be more than twice the highest significant harmonic component in the signal. That same rule applies to digital filters. It can be shown that if the sampling theorem is obeyed, the discrete digital and the continuous analogue are solving the same system of equations. Fourier would agree.
I expect a digital processor to produce results about 100 times faster than an analogue processor.
You should not use the Nyquist theorem for real-time processes. Nyquist says that a sampling rate of twice the frequency would allow accurate phase and gain results given an infinite sample to analyse. (There are more complicated versions that apply to a finite sample length.) That is not relevant to real-time simulations. In reality, you would want at least 20 samples per cycle to get real-time results with marginally satisfactory phase and gain errors. It is easy to estimate what the phase and gain errors can be for any sampled signal. For high frequency real-time processes, I believe that analog circuits still have the advantage when applied to a complicated signal network. (But I have never worked with high-frequency systems, so I don't know what issues might create problems in analog simulations.)
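To illustrate the kind of estimate I mean, here is a rough sketch assuming a simple zero-order-hold model of the sampling (my own simplification; other hold models give somewhat different numbers): the hold behaves like half a sample of delay, so the phase lag is roughly 180°·f/fs and the gain follows a sinc curve.

```python
# Illustrative estimate of sampled-data gain and phase error, assuming a zero-order hold:
# gain ~ sinc(f/fs), phase lag ~ half a sample of delay = 180 * (f/fs) degrees.
import math

def zoh_errors(samples_per_cycle):
    ratio = 1.0 / samples_per_cycle                       # f / fs
    gain = math.sin(math.pi * ratio) / (math.pi * ratio)  # sinc(f/fs)
    phase_deg = 180.0 * ratio                             # half-sample delay, in degrees
    return gain, phase_deg

for spc in (2, 10, 20):
    gain, phase = zoh_errors(spc)
    print(f"{spc:2d} samples/cycle: gain {gain:.3f}, phase lag {phase:.1f} deg")
#  2 samples/cycle: ~36% gain droop and 90 deg lag -- useless for real-time work;
# 20 samples/cycle: ~0.4% gain droop and 9 deg lag -- marginally acceptable.
```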
 
Last edited:
  • Like
Likes Kirana Kumara P
  • #48
FactChecker said:
You should not use the Nyquist theorem for real-time processes. Nyquist says that a sampling rate of twice the frequency would allow accurate phase and gain results given an infinite sample to analyse. (There are more complicated versions that apply to a finite sample length.) That is not relevant to real-time simulations. In reality, you would want at least 20 samples per cycle to get real-time results with marginally satisfactory phase and gain errors. It is easy to estimate what the phase and gain errors can be for any sampled signal.
I did not refer to “samples per cycle” but to twice the highest significant harmonic component in the signal. The frequency components in the simulation are very low, which is why views every 30 ms make an acceptably smooth image. A simulation over time is a continuous process; extracting a snapshot of the state every 30 ms does not make it an isolated, fixed, short-length sample.
The OP has referred to 5% error in post #43, “(even about 5% error may be okay for me)”.

FactChecker said:
(There are more complicated versions that apply to a finite sample length.)
Can you please give me a reference to such a version.
 
  • Like
Likes Kirana Kumara P
  • #49
Baluncore said:
I did not refer to “samples per cycle” but to twice the highest significant harmonic component in the signal.
Yes. We are both talking about the same thing.
The frequency components in the simulation are very low, which is why views every 30 ms make an acceptably smooth image. A simulation over time is a continuous process; extracting a snapshot of the state every 30 ms does not make it an isolated, fixed, short-length sample.
But that is still quite different from analyzing a long series. The rule of thumb of a minimum of 20 samples per cycle is a good starting point.

Can you please give me a reference to such a version.
Sorry, I do not have a reference at this time. I ran into one long ago, but I don't know where it is now.

ADDITIONAL NOTE (CORRECTION?): The rule of thumb of a minimum of 20 samples per cycle was relevant to control law design. There the problem is to respond fast enough and accurately enough to control a process through feedback. That might be different from the problem of simulating a system without trying to control it. I don't know about that situation.
 
Last edited:
  • Like
Likes Kirana Kumara P
  • #50
Baluncore said:
There should be no difference. Both digital algorithms will be an analogue of the organ being simulated. If they are different, one must be a worse approximation than the other.

A digital solution methodology that employs a worse approximation could be slower than a digital solution methodology that employs a better approximation. Coming back to the problem of finding the circumference of a circle using a digital computer, it may be faster to calculate (simulate) the more accurate solution (2*pi*radius) than to calculate the circumference by drawing a circle with the given radius on the computer screen using a circle-generation algorithm and then "measuring" the circumference by counting the number of pixels on the circumference and using the distance between pixels.
 