Exploring Quadratic Elements for Simple Multiplying Circuits

In summary, the thread discusses whether simple circuit elements with quadratic (square-law) behavior can be used to build a multiplier. A four-quadrant multiplier is suggested, and the motivating application is a physical neural network with millions of neurons, where the complexity of each synapse is the crucial factor and every element must be as simple as possible. An op-amp built from four transistors is mentioned as one possible building block.
  • #1
haael
Are there any simple elements out there that exhibit quadratic behavior? E.g. current to voltage, frequency to voltage, or anything similar.

I have done some Google digging for multiplying circuits and found that they are all very complex. But a simple one could be made using a square-function element.

So: are there any natural, simple elements that can be used for this purpose?
 
  • #2
A four-quadrant multiplier works and it's not hard to put in a circuit. It will give you A times B quite happily, and hence B squared.
 
  • #3
The transfer characteristic for an FET is square law.

[tex]I_D = I_{DSS}\left(1 - \frac{V_{GS}}{V_P}\right)^2[/tex]

or

[tex]V_{GS} = V_P\left(1 - \sqrt{\frac{I_D}{I_{DSS}}}\right)[/tex]
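A minimal numerical sketch of the two expressions above, assuming example JFET parameters (I_DSS = 8 mA, V_P = -4 V) that are not from the thread:

```python
# Quick numerical check of the JFET square-law expressions above.
# I_DSS = 8 mA and V_P = -4 V are example values, not from the thread.
import math

I_DSS = 8e-3   # saturation drain current [A]
V_P   = -4.0   # pinch-off voltage [V]

def drain_current(v_gs):
    """I_D = I_DSS * (1 - V_GS/V_P)^2, valid for V_P <= V_GS <= 0."""
    return I_DSS * (1 - v_gs / V_P) ** 2

def gate_voltage(i_d):
    """Inverse relation: V_GS = V_P * (1 - sqrt(I_D/I_DSS))."""
    return V_P * (1 - math.sqrt(i_d / I_DSS))

for v in (-3.0, -2.0, -1.0, 0.0):
    i = drain_current(v)
    print(f"V_GS = {v:5.2f} V -> I_D = {i*1e3:5.2f} mA -> back to V_GS = {gate_voltage(i):5.2f} V")
```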
 
  • #5
A four-quadrant multiplier works and it's not hard to put in a circuit. It will give you A times B quite happily, and hence B squared.
I know, but my point was to construct the multiplier from some quadratic element.

Multipliers are surprisingly complex, I found out. When you imagine some circuit with a million multipliers, simplicity becomes the key factor.
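For reference, the standard trick for turning square-law elements into a multiplier is the quarter-square identity A*B = ((A+B)^2 - (A-B)^2)/4; a minimal arithmetic sketch of the idea (not a circuit model):

```python
# Quarter-square multiplication: building A*B from squaring operations only.
# This is plain arithmetic illustrating the identity, not a circuit simulation.

def square(x):
    # Stand-in for a physical square-law element (e.g. an FET-based squarer).
    return x * x

def multiply(a, b):
    # A*B = ((A+B)^2 - (A-B)^2) / 4, so two squarers plus adders/subtractors suffice.
    return (square(a + b) - square(a - b)) / 4

print(multiply(3.0, 7.0))   # 21.0
print(multiply(-2.5, 4.0))  # -10.0  (works in all four quadrants)
```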

Studiot, thanks for the answer.
 
  • #6
haael said:
I know, but my point was to construct the multiplier from some quadratic element.

Multipliers are surprisingly complex, I found out. When you imagine some circuit with million multipliers, the simplicity becomes the key factor.

Studiot, thanks for the answer.

If you want a cloud of a million multipliers, you'd better do that digitally, no?
 
  • #7
If you want a cloud of a million multipliers, you'd better do that digitally, no?
Only if the precision is more important than time.

The thing I was thinking of was a neural network. Not a simulation, but a physical circuit. Each synapse must have its own multiplier for learning, so the number of elements in each synapse is an important factor.
 
  • #8
You can construct an analog squaring or square-rooting circuit by including a nonlinear resistance (e.g. the channel of an FET) in the feedback loop of an op amp.
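A minimal sketch of why that works, assuming an ideal op-amp stage and an idealized square-law element with I = k*V^2; k and R are illustrative values, not from the thread:

```python
# Ideal-op-amp reasoning for a square-rooting stage with a square-law
# element (I = k * V^2) in the feedback path of an inverting amplifier.
# k and R are illustrative values, not taken from the thread.
import math

k = 1e-3   # square-law coefficient of the nonlinear element [A/V^2]
R = 10e3   # resistor [ohm]

def sqrt_stage_output(v_in):
    """Virtual ground forces V_in / R = k * V_out^2  ->  |V_out| = sqrt(V_in / (k*R)).
    The inverting configuration makes the output negative for positive input."""
    return -math.sqrt(v_in / (k * R))

def square_stage_output(v_in):
    """Swap the element to the input and R to the feedback:
    k * V_in^2 = -V_out / R  ->  V_out = -k * R * V_in^2."""
    return -k * R * v_in ** 2

v_in = 2.0
v_root = sqrt_stage_output(v_in)       # about -0.447 V
v_back = square_stage_output(v_root)   # cascading the stages recovers the magnitude: -2.0 V
print(v_root, v_back)
```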
 
  • #9
Of course, but I'd have to develop some simplified op-amp, since the classical ones have about 50 transistors :).
 
  • #10
Of course, but I'd have to develop some simplified op-amp, since the classical ones have about 50 transistors

Not sure why you can't use a standard op amp. They are easier to use and smaller than rolling your own from discrete components, especially if you use SMD parts.

However, if you need to use discrete components for some reason (is this a simulation or hardware?), you could do worse than look at the article by Griffiths:

A Simple Op Amp
Wireless World, July 1970, pp. 337-338.

The author built a simple op amp from 4 transistors to study op amps and configurations.

I don't want to post the full article here, but if you let me have (by PM) an email address that can receive JPGs, I can send you a scan.
 
  • #11
Not sure why you can't use a standard op amp. They are easier to use and smaller than rolling your own from discrete components, especially if you use SMD parts.
It is just an idea; I doubt I'll ever actually build it. The point is to design a neural network with buttloads of neurons, a few million for instance. The number of synapses scales as the square of the number of neurons, so it rises very quickly with the size of the whole network.

Now the complexity of a synapse becomes the crucial factor. Just one transistor fewer per synapse means a billion fewer transistors in the whole network.

I hope you understand me now. All elements must be as simple as possible; precision is not very important. An op-amp can be built with 4 transistors, OK. Each neuron would have one op-amp and one capacitor that stores its potential. Each synapse would contain one capacitor to store its weight, one multiplier to compute the potential times the weight, one multiplier to compute the learning signal, one integrator to adjust the weight, and a bunch of resistors. One multiplier is made of 3 op-amps and 2 square-law elements (FETs). That gives at least 32 transistors per synapse.
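A back-of-the-envelope check of that transistor budget, using the counts given above; the network size of one million neurons and full connectivity are example assumptions:

```python
# Rough transistor budget for the proposed analog network,
# using the element counts described in the post above.

TRANSISTORS_PER_OPAMP = 4
FETS_PER_MULTIPLIER = 2              # two square-law elements
OPAMPS_PER_MULTIPLIER = 3

transistors_per_multiplier = OPAMPS_PER_MULTIPLIER * TRANSISTORS_PER_OPAMP + FETS_PER_MULTIPLIER
transistors_per_synapse = 2 * transistors_per_multiplier + TRANSISTORS_PER_OPAMP  # 2 multipliers + integrator
transistors_per_neuron = TRANSISTORS_PER_OPAMP

neurons = 1_000_000                  # example size ("a few million" in the post)
synapses = neurons ** 2              # fully connected: synapse count is the square of the neuron count

total = neurons * transistors_per_neuron + synapses * transistors_per_synapse
print(f"{transistors_per_synapse} transistors per synapse")   # 32
print(f"~{total:.2e} transistors for {neurons:,} neurons")
```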
 
  • #12
If you do not have the electronic expertise to design what you are looking for, perhaps you could draw a diagram with 'black boxes' describing the functions, inputs and outputs.

Then those who can do the circuitry might be better able to help, rather than simply answering questions.
 
  • #13
haael said:
It is just an idea; I doubt I'll ever actually build it. The point is to design a neural network with buttloads of neurons, a few million for instance. The number of synapses scales as the square of the number of neurons, so it rises very quickly with the size of the whole network.

Now the complexity of a synapse becomes the crucial factor. Just one transistor fewer per synapse means a billion fewer transistors in the whole network.

I hope you understand me now. All elements must be as simple as possible; precision is not very important. An op-amp can be built with 4 transistors, OK. Each neuron would have one op-amp and one capacitor that stores its potential. Each synapse would contain one capacitor to store its weight, one multiplier to compute the potential times the weight, one multiplier to compute the learning signal, one integrator to adjust the weight, and a bunch of resistors. One multiplier is made of 3 op-amps and 2 square-law elements (FETs). That gives at least 32 transistors per synapse.

It's been a while since I looked at neural networks, but isn't there more than one bit of weight stored at each synapse? And even if it is only one bit, you need to refresh that capacitor (as in DRAM circuits).

Neural nets are definitely an interesting area of study.
 
  • #14
Here you are:
http://img87.imageshack.us/img87/7646/neuron.png

but isn't there more than one bit of weight stored at each synapse? And even if it is only one bit, you need to refresh that capacitor
A capacitor can store more than 1 bit, can't it? Sustaining the capacitor potential is not that hard, actually.
 
Last edited by a moderator:
  • #15
haael said:
Here you are:
http://img87.imageshack.us/img87/7646/neuron.png


A capacitor can store more than 1 bit, can't it? Sustaining the capacitor potential is not that hard, actually.

What are the multiple parallel paths? Extra bits of storage?

And I'd be interested in hearing your thoughts on how to hold an analog voltage value on a cap long-term. There's a whole lot of literature on sample-and-hold circuits, and both the sample and the hold parts are challenging...
 
Last edited by a moderator:
  • #16
Something to consider whilst I am looking at the diagram.

Why exactly do you need a multiplier?

Do you in fact need full multiplication facilities?

Another route to multiplication is repeated addition.
 
  • #17
The potential change of a neuron is:
[tex]d N_i = f( \sum_{j} S_{i,j} N_j )[/tex]

The learning signal of a synapse is:
[tex]d S_{i,j} = l N_i N_j[/tex]

Multiplication pops up many times here. The synapse potential computation could perhaps be done without a multiplier if there were a variable resistor acting as the synapse weight, capable of negative resistance.

But the learning signal cannot be calculated without multiplying two potentials.
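A minimal discrete-time sketch of those two update rules, with tanh chosen as an example activation f and with the step size and learning rate as illustrative values (none of these constants come from the thread):

```python
# Discrete-time sketch of the two update rules above:
#   dN_i  = f( sum_j S_ij * N_j )     (neuron potential update)
#   dS_ij = l * N_i * N_j             (Hebbian-style weight update)
# f = tanh; dt and l are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # tiny network for illustration
N = rng.normal(size=n)                  # neuron potentials
S = rng.normal(scale=0.1, size=(n, n))  # synapse weights
l, dt = 0.01, 0.1

for _ in range(100):
    dN = np.tanh(S @ N)                 # one multiply per synapse
    dS = l * np.outer(N, N)             # and another multiply per synapse for learning
    N += dt * dN
    S += dt * dS

print(N)
print(S)
```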
 
  • #18
So are the voltages truly analog or can they be quantised?
 
  • #19
So are the voltages truly analog or can they be quantised?
They are analog in principle, but as I said, precision is not that important here, so they don't actually need to be exactly equal to their theoretical values from the mathematical model.

What are the multiple parallel paths? Extra bits of storage?
These are the additional synapses.

And I'd be interested in hearing your thoughts on how to hold an analog voltage value on a cap long-term.
I don't know. Maybe we could replace the capacitors with memristors or use floating-gate MOSFETs.
 
  • #20
You see, what I am thinking is that it would be far more efficient to have a chip holding a multiplication table that could be accessed by all (or sub-groups) of the units.
With modern programmable arrays you would have enough storage to hold more 'answers' than the resolution of a 'voltmeter' anyway.
 
  • #21
This is a very interesting topic. Personally I've never researched how these processes work, but given your large numbers I wondered what has been done using optical techniques; thinking imaging here. So I did a little googling and found this paper. I've only scanned it, but I think you will be interested. Before he gets into his device, he gives an introduction on Bio-Inspired Computing, Parallel Distributed Processing, Learning from Examples, and Why Optics.

In this thesis an all-optical neural network is investigated. All key neural functions, weighted summation, connections and threshold operation, are implemented in the optical domain. The proposed optical neural network uses the longitudinal modes of a laser diode as neurons. The outputs of this Laser Neural Network (LNN) correspond to the light intensity contained in the longitudinal modes of the laser diode. The inputs to the neural network are implemented by providing controlled optical feedback to the laser diode for each of the longitudinal modes. For this purpose, the laser diode is coupled to an external cavity in which inputs and weights are implemented by use of a transmission matrix and a number of optical components. The inputs of this LNN are in the optical transmission domain.

http://alexandria.tue.nl/extra3/proefschrift/boeken/9902284.pdf

Wikipedia indicates that a self-organizing map (http://en.wikipedia.org/wiki/Self-organizing_map) has been done with liquid crystals.

http://spiedl.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PSISDG001296000001000378000001&idtype=cvips&gifs=yes&ref=no

Regards
 
Last edited by a moderator:
  • #22
Studiot said:
You see, what I am thinking is that it would be far more efficient to have a chip holding a multiplication table that could be accessed by all (or sub-groups) of the units.
Sadly, that means you cannot have efficiency and capacity at the same time. Either you have one multiplier per multiplication operation, which is very fast but space-consuming, or you share one multiplier among several multiplication operations, which saves space but means the accesses must be serialized somehow, slowing things down.
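A rough sketch of that tradeoff; the per-multiplication time and the sharing factors are illustrative assumptions, not figures from the thread:

```python
# Space/time tradeoff when sharing multipliers: illustrative numbers only.
neurons = 1_000_000
multiplications = neurons ** 2          # one per synapse per update
t_mult = 1e-6                           # assumed time per multiplication [s]

for multipliers in (multiplications, 10**9, 10**6):
    sharing = multiplications / multipliers
    update_time = sharing * t_mult      # serialized accesses per shared multiplier
    print(f"{multipliers:.0e} multipliers -> sharing x{sharing:.0f} -> {update_time:.1e} s per full update")
```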

I was thinking of changing the representation of the potentials. Instead of voltage, I could use e.g. frequency or phase shift and do the multiplication using some law-like relation between amplitude and frequency. But I haven't come up with anything yet.
 
  • #23
Sadly, that means you cannot have efficiency and capacity at the same time.

Yes, but all engineering is a compromise.
Maybe one table store would be overloaded by 1 million units, but 10? 100? 1000?

Do you really need 1:1, i.e. 1 million?

Even if you did, the approach is still another valid one and could more easily be mass-produced, since every table would be the same. Getting 1 million analog multipliers to work in sync is no mean feat.
 
Last edited:
  • #24
The conclusion is: the world is not yet ready for a fast, big, analog neural network. It is certainly too hard to build one from transistors or op-amps, but I was hoping there are natural, simple elements that do the necessary tasks. The biggest problem I see is with the multiplication.

Maybe we should explore other physical phenomena and see if any of them perform multiplication. The approach that dlgoff has shown is a good one: if electricity does not multiply, then find something else that does.

I wonder if it is possible to build a "multiplying transistor". When current flows through two parallel wires, they attract each other with a (Lorentz) force proportional to the product of the currents. I think this could be used to construct a transistor-like device, something like this:
http://img840.imageshack.us/img840/8899/mtransplus.png
http://img258.imageshack.us/img258/409/mtransminus.png
We pass current through the two paths and see whether they attract or repel.
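For scale, the force per unit length between two long parallel wires is F/L = mu0*I1*I2/(2*pi*d), which is indeed proportional to the product of the currents; a quick numerical sketch with assumed example currents and spacing:

```python
# Force per unit length between two parallel wires: F/L = mu0 * I1 * I2 / (2*pi*d).
# The force is proportional to the product of the currents, which is what a
# mechanical "multiplying" element would exploit. Values are examples only.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [T*m/A]

def force_per_length(i1, i2, d):
    """Attractive (positive) for parallel currents, repulsive for antiparallel ones."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

d = 1e-3   # 1 mm spacing
for i1, i2 in [(1.0, 1.0), (2.0, 3.0), (2.0, -3.0)]:
    f = force_per_length(i1, i2, d)
    print(f"I1={i1:+.1f} A, I2={i2:+.1f} A -> F/L = {f:+.2e} N/m")
```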
 
Last edited by a moderator:

