# Optimization of an Amplifier Circuit

1. Jan 11, 2018

### SSGD

The gains of an OP-AMP are listed below:

$$G_d = (R_1*R_4+R_2*R_3+2*R_2*R_4)/(2*R_1*(R_3+R_4))$$
$$G_s = (R_1*R_4-R_2*R_3)/(R_1*(R_3+R_4))$$
$$\frac {\partial G_d} {\partial R_1} = -R_2*(R_3+2*R_4)/(2*R_1^2*(R_3+R_4))$$
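The formulas above can be sanity-checked numerically. A quick sketch (the resistor values are arbitrary examples, not from the thread), comparing the quoted partial derivative against a central finite difference:

```python
# Sanity-check the gain formulas and the quoted partial derivative with a
# central finite difference.  Resistor values are arbitrary examples.

def G_d(R1, R2, R3, R4):
    return (R1*R4 + R2*R3 + 2*R2*R4) / (2*R1*(R3 + R4))

def G_s(R1, R2, R3, R4):
    return (R1*R4 - R2*R3) / (R1*(R3 + R4))

def dGd_dR1(R1, R2, R3, R4):
    return -R2*(R3 + 2*R4) / (2*R1**2*(R3 + R4))

R1, R2, R3, R4 = 10e3, 12e3, 20e3, 22e3   # example values in ohms
h = 1e-3                                   # finite-difference step (ohms)
fd = (G_d(R1 + h, R2, R3, R4) - G_d(R1 - h, R2, R3, R4)) / (2*h)
assert abs(fd - dGd_dR1(R1, R2, R3, R4)) < 1e-10
print(G_d(R1, R2, R3, R4), G_s(R1, R2, R3, R4))
```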

My question is: is there a mathematical process to accomplish all of the following?

- Differential gain equal to 1 (Gd = 1)
- Summing gain equal to 0 (Gs = 0)
- Minimize the partial derivatives (this would reduce the sensitivity of the gains to resistor tolerances)

The idea is that if I can minimize the partial derivatives, then the resistor tolerances could be larger while still giving the desired results (gains of 1 and 0, respectively). This isn't about this one specific example; the same could be done for most systems with tolerance stack-ups that have to be accounted for.

I am looking for a mathematical process to perform the above. The Lagrangian approach doesn't seem to work, or else I don't fully understand how to use it for this.

Thanks

2. Jan 11, 2018

### Staff: Mentor

You show three equations in seven variables -- the four resistances, the two gains, and the partial. It's not possible to get a unique solution from such a system of equations.
You mention partial derivatives but included only the partial with respect to R1. With the other three partials, you would then have six equations in ten unknowns. Setting Gs and Gd to specific values would decrease the number of unknowns, but there are still more unknowns than equations.

3. Jan 11, 2018

### SSGD

Man, I need to proofread my posts before sending them... Sorry for all the errors...

4. Jan 11, 2018

### SSGD

I didn't want to list all of the partial derivatives. There would be 10 equations in all and only 4 variables, so this would be an overdetermined system.
$G_d = 1$

$G_s = 0$

$min\left(\frac {\partial G_d} {\partial R_1}\right)$
$min\left(\frac {\partial G_d} {\partial R_2}\right)$
$min\left(\frac {\partial G_d} {\partial R_3}\right)$
$min\left(\frac {\partial G_d} {\partial R_4}\right)$
$min\left(\frac {\partial G_s} {\partial R_1}\right)$
$min\left(\frac {\partial G_s} {\partial R_2}\right)$
$min\left(\frac {\partial G_s} {\partial R_3}\right)$
$min\left(\frac {\partial G_s} {\partial R_4}\right)$

I want to hold Gd and Gs fixed and minimize all the partial derivatives. This seems like a constrained least squares problem, but not exactly.
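A minimal numerical sketch of that constrained least-squares idea, folding the two gain constraints into quadratic penalties and scanning a coarse grid (pure Python; the grid, finite-difference step, and penalty weight are illustrative choices, not from the thread):

```python
# Penalty formulation: minimize the sum of squared partials subject to
# Gd = 1 and Gs = 0, with the constraints folded in as quadratic
# penalties.  Grid, step h, and penalty weight mu are illustrative.

def G_d(R1, R2, R3, R4):
    return (R1*R4 + R2*R3 + 2*R2*R4) / (2*R1*(R3 + R4))

def G_s(R1, R2, R3, R4):
    return (R1*R4 - R2*R3) / (R1*(R3 + R4))

def grad(f, R, h=1e-3):
    """Central-difference gradient of f in the four resistor values."""
    g = []
    for k in range(4):
        Rp, Rm = list(R), list(R)
        Rp[k] += h
        Rm[k] -= h
        g.append((f(*Rp) - f(*Rm)) / (2*h))
    return g

def objective(R, mu=1e6):
    ss = sum(g*g for g in grad(G_d, R)) + sum(g*g for g in grad(G_s, R))
    return ss + mu*(G_d(*R) - 1)**2 + mu*G_s(*R)**2

grid = [1e3, 2e3, 5e3, 10e3, 20e3, 50e3]   # ohms
best = min(((r1, r2, r3, r4)
            for r1 in grid for r2 in grid
            for r3 in grid for r4 in grid),
           key=objective)
print(best)   # the scan settles on R1 = R2 and R3 = R4
```

Note that the scan also drifts to the largest resistances on the grid, because the raw partial derivatives shrink as all the resistor values grow together; that scaling artifact is the issue mfb raises later in the thread.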

Last edited by a moderator: Jan 11, 2018
5. Jan 11, 2018

### Staff: Mentor

Can you show the circuit that you wrote those equations for?

6. Jan 11, 2018

### SSGD

A linear approximation for the differential gain and the tolerance could be written as such:

$$G_d\left(\vec R+\vec T\right) \approx G_d\left(\vec R\right)+\nabla_\vec R G_d\left(\vec R\right) \cdot \vec T$$

If we assume the tolerances are all equal we would get the following:

$$G_d\left(\vec R+\vec T\right) \approx G_d\left(\vec R\right)+\nabla_\vec R G_d\left(\vec R\right) \cdot \left[1,1,...,1\right]T$$

$$G_d\left(\vec R+\vec T\right) \approx G_d\left(\vec R\right)+T\left[\sum_{k=0}^n \frac {\partial G_d} {\partial R_k}\right]$$

$$min\left[\sum_{k=0}^n \frac {\partial G_d} {\partial R_k}\right]$$

Lagrangian would be:

$$L\left(\vec R,\lambda \right) = \sum_{k=0}^n \frac {\partial G_d} {\partial R_k} +\lambda \left(G_d-1 \right)$$

If we didn't assume equal tolerance then Lagrangian would be:

$$L\left(\vec R,\lambda \right) = \sum_{k=0}^n T_k\frac {\partial G_d} {\partial R_k} +\lambda \left(G_d-1 \right)$$

How could we incorporate Gs (the summing gain) into the Lagrangian, or find a better way of doing this?

7. Jan 11, 2018

### PAllen

Whether it makes sense electronically is not my area of knowledge. But as a mathematical system, as I understand it, it makes some sense.

You have two functions of 4 variables. Each of the 8 partial derivatives is simply another function of those variables (obviously not wholly independent). You have two constraints, which still leaves you with effectively 2 free variables. You want to choose these to minimize 8 other expressions, which makes the problem overdetermined. What you need is some rule for overall minimization; for example, you could minimize the sum of squares of the partials over the two free variables. You have to pick what you want here.

As to solving it, I cannot offer any specific advice.

[edit: my reply is to the description in post #4. I have not looked at #7, and don’t intend to.]

Last edited: Jan 11, 2018
8. Jan 11, 2018

### Staff: Mentor

In principle you can express the quantity you want to minimize as function of two unknowns, and then minimize it as usual (calculate the derivatives, set them to zero). This will probably be extremely messy but it is possible, at least numerically in the last step. Note that you can solve for two resistances based on the gain equations, so you can express everything as function of two resistances explicitly.
In practice, plugging everything into suitable software and scanning over the parameter space is easier.

@SSGD: You probably don't want to minimize the derivatives directly, because they will get smaller if you make all resistances larger by the same factor. Take the derivatives multiplied by the resistances to get a more meaningful quantity to minimize.
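To illustrate that point with a quick sketch (the resistor values and the factor of 10 are arbitrary examples): scaling every resistor by the same factor shrinks the raw partial, while the normalized sensitivity $R_k \frac{\partial G_d}{\partial R_k}$ is unchanged.

```python
# The raw partial dGd/dR1 shrinks when every resistor is scaled by the
# same factor, while the normalized sensitivity R1 * dGd/dR1 does not.
# Resistor values and the scale factor are arbitrary examples.

def dGd_dR1(R1, R2, R3, R4):
    return -R2*(R3 + 2*R4) / (2*R1**2*(R3 + R4))

R = [10e3, 12e3, 20e3, 22e3]
scaled = [10*r for r in R]

print(dGd_dR1(*scaled) / dGd_dR1(*R))   # ~0.1: raw partial shrank 10x
print(R[0]*dGd_dR1(*R),
      scaled[0]*dGd_dR1(*scaled))        # equal up to rounding: scale-free
```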

9. Jan 12, 2018

### SSGD

mfb, is this more in line with the idea of "taking the derivatives multiplied by the resistances to get a more meaningful quantity to minimize"?

Most resistor tolerances are specified as a ±% of the resistor's value.

$$G_d\left(\left(I+P\right)\vec R\right) \approx G_d\left(\vec R\right)+\nabla_\vec R G_d\left(\vec R\right) \cdot P\vec R$$

$$L\left(\vec R,\lambda \right) = \sum_{k=0}^n P_kR_k\frac {\partial G_d} {\partial R_k} +\lambda \left(G_d-1 \right)$$

10. Jan 12, 2018

### Staff: Mentor

Where do all the new symbols come from?

I meant $min\left(R_1 \frac {\partial G_d} {\partial R_1}\right)$

11. Jan 12, 2018

### SSGD

Okay. I will see if I can write a little program to minimize the above expression.

12. Jan 15, 2018

### SSGD

So I made a little program to try to solve this, and in doing so I realized a few things and started doing some reading to see if I could find more answers.

I need to explain a few things. All I did with the expression below is factor out the R vector and define P as the percent-tolerance matrix, which is a diagonal matrix. The two expressions should be equal.

$$\vec R+\vec T=\left(I+P\right)\vec R$$

The $T_k$ and $P_k$, in my mind, would weight the different partial derivatives against each other. So if the weight (tolerance) is small, the corresponding partial doesn't need to be as small, compared to a large weight (tolerance), which would require a smaller partial.

I realized I don't want to minimize all the partial derivative expressions individually. I only have two objective functions to minimize, listed below.

$$min\left[\sum_{k=0}^n T_k\frac {\partial G_d} {\partial R_k}\right]$$

$$min\left[\sum_{k=0}^n T_k\frac {\partial G_s} {\partial R_k}\right]$$

or from the information mfb listed

$$min\left[\sum_{k=0}^n P_kR_k\frac {\partial G_d} {\partial R_k}\right]$$

$$min\left[\sum_{k=0}^n P_kR_k\frac {\partial G_s} {\partial R_k}\right]$$

Secondly, I realized this is a multiple-objective constrained nonlinear least squares problem, and I would have to define a single function to minimize that weights the two objective functions against each other. Below is the expression for that function.

$$F\left(\vec R,\alpha,\beta,A,B \right) = \left[A\left(0-\sum_{k=0}^n T_k\frac {\partial G_d} {\partial R_k}\right)^2+B\left(0-\sum_{k=0}^n T_k\frac {\partial G_s} {\partial R_k}\right)^2\right]+\alpha \left(G_d-1\right)+\beta \left(G_s\right)$$

or from mfb the following

$$F\left(\vec R,\alpha,\beta,A,B \right) = \left[A\left(0-\sum_{k=0}^n P_kR_k\frac {\partial G_d} {\partial R_k}\right)^2+B\left(0-\sum_{k=0}^n P_kR_k\frac {\partial G_s} {\partial R_k}\right)^2\right]+\alpha \left(G_d-1\right)+\beta \left(G_s\right)$$

I would say the two objective functions are equally important, but I can't figure out a way to clearly determine whether they really are equally important to the sensitivity of the two gains. Below is what I think I would need to optimize to reduce the effects of tolerances on the gains, given the constraints and tolerances.

$$F\left(\vec R,\alpha,\beta\right) = \left[\left(\sum_{k=0}^n T_k\frac {\partial G_d} {\partial R_k}\right)^2+\left(\sum_{k=0}^n T_k\frac {\partial G_s} {\partial R_k}\right)^2\right]+\alpha \left(G_d-1\right)+\beta \left(G_s\right)$$

or

$$F\left(\vec R,\alpha,\beta\right) = \left[\left(\sum_{k=0}^n P_kR_k\frac {\partial G_d} {\partial R_k}\right)^2+\left(\sum_{k=0}^n P_kR_k\frac {\partial G_s} {\partial R_k}\right)^2\right]+\alpha \left(G_d-1\right)+\beta \left(G_s\right)$$
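For what it's worth, a quick numerical sketch of the weighted sums in this objective (the 1% tolerances and resistor values are illustrative assumptions, not from the thread) suggests that at a design satisfying both gain constraints the signed sums come out to zero, because the terms cancel in sign; a worst-case tolerance analysis would sum absolute values instead, since each resistor can drift in either direction independently.

```python
# Evaluate the tolerance-weighted sensitivity sums at a design meeting
# Gd = 1 and Gs = 0 (R1 = R2, R3 = R4).  The 1% tolerances and resistor
# values are illustrative assumptions, not from the thread.

def G_d(R1, R2, R3, R4):
    return (R1*R4 + R2*R3 + 2*R2*R4) / (2*R1*(R3 + R4))

def G_s(R1, R2, R3, R4):
    return (R1*R4 - R2*R3) / (R1*(R3 + R4))

def grad(f, R, h=1e-3):
    """Central-difference gradient of f in the four resistor values."""
    g = []
    for k in range(4):
        Rp, Rm = list(R), list(R)
        Rp[k] += h
        Rm[k] -= h
        g.append((f(*Rp) - f(*Rm)) / (2*h))
    return g

P = [0.01]*4                    # 1% tolerance on each resistor
R = [10e3, 10e3, 20e3, 20e3]    # satisfies Gd = 1 and Gs = 0

Sd = sum(p*r*g for p, r, g in zip(P, R, grad(G_d, R)))
Ss = sum(p*r*g for p, r, g in zip(P, R, grad(G_s, R)))
print(Sd, Ss)   # both ~0: the signed first-order terms cancel here

# Worst case sums magnitudes, since each resistor drifts independently:
Wd = sum(abs(p*r*g) for p, r, g in zip(P, R, grad(G_d, R)))
print(Wd)       # ~0.02, i.e. about a 2% worst-case shift in Gd
```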

13. Jan 16, 2018

### Staff: Mentor

If you want to minimize it, the last two terms should have the difference squared.

I had a closer look at the gains. If you want to keep them exact, then you cannot do anything about the derivatives. Introduce the two new parameters X=R1/R2 and Y=R3/R4 and your gains can be written as:

$G_d = \frac{X+Y+2}{2X(1+Y)} = 1$ and $G_s = \frac{X-Y}{X(1+Y)} = 0$ where I added the target values. The second equation tells us that $X=Y$, plugging that into the first one gives $2X+2 = 2X+2X^2$ with the solutions X=+1 or X=-1. The latter would need negative resistances, so X=1 and therefore Y=1 is the only option. This means R1=R2 and R3=R4. You can freely choose these pairs of resistances - it doesn't matter as only the ratio is relevant for the gains.
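A quick numerical confirmation of this closed-form result (the resistor pairs are arbitrary examples): any R1 = R2 with R3 = R4 hits both gain targets exactly, regardless of the absolute values.

```python
# Verify the closed-form result: Gd = 1 and Gs = 0 whenever R1 = R2 and
# R3 = R4, independent of the absolute resistor values.

def G_d(R1, R2, R3, R4):
    return (R1*R4 + R2*R3 + 2*R2*R4) / (2*R1*(R3 + R4))

def G_s(R1, R2, R3, R4):
    return (R1*R4 - R2*R3) / (R1*(R3 + R4))

for a, b in [(1e3, 4.7e3), (10e3, 10e3), (150e3, 2.2e3)]:
    assert abs(G_d(a, a, b, b) - 1.0) < 1e-12
    assert abs(G_s(a, a, b, b)) < 1e-15
print("all pairs give Gd = 1, Gs = 0")
```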