Numerical Simultaneous Solution of Non-Linear Coupled Equations

In summary: this thread discusses solving a set of six coupled non-linear equations that describe a gas turbine engine at part load. The equations cannot be solved symbolically and must be solved numerically, but the original poster is unfamiliar with numerical methods and could not find any easy-to-implement ones. The Runge-Kutta and Newton-Raphson methods are recommended; the poster implements the latter but gets inaccurate results. Recasting the system as an optimisation problem (using algorithms such as nonlinear least squares or the Simplex algorithm) is also suggested. Initially, the poster is unsure whether the equations themselves or the implementation is at fault.
  • #1
tangodirt
For the solution to this problem, I have reduced the number of equations down from 17 to 6.

For algebraic reasons, these equations cannot really be solved symbolically (MAPLE tried, and returned four full pages packed with symbols, just for one equation). These six equations need to be solved simultaneously and numerically.

The issue is that I am not really familiar with numerical methods, and I need to write my own code in C to do this. I cannot seem to find any numerical methods out there that are easy to understand and easy to implement in code.

Any recommendations for a method that a non-mathematician can understand and that does not require derivatives (for ease of programming)?

Thanks!
 
  • #2
If it is a set of differential equations, you can use Runge-Kutta. If not, you can find the roots with the Newton-Raphson method.

Refer to the Numerical Recipes in C textbook (there is a Numerical Recipes in MATLAB as well).
 
  • #3
They are not differential equations, just non-linear algebraic equations.

I've gone ahead and implemented the Newton-Raphson method in my code (in Python instead of C) and am having the same issues I had before: even with very good guesses, NR will not converge properly. It converges to the wrong values.

EES does it perfectly every time, but my implementation does not. These equations describe a simple cycle gas turbine engine at part load. For the full load case, the method converges perfectly. For the part-load case, it quickly diverges and returns junk.

Even as the guesses get progressively better, the model gets progressively worse. And I know my implementation of NR is correct. My code even estimates the Jacobian :).
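
For concreteness, here's a minimal sketch of the kind of Newton-Raphson loop I mean, with a finite-difference Jacobian. The two-equation residual function is just a placeholder, not my actual engine model:

Code:
import numpy as np

def residuals(x):
    # Placeholder two-equation system, NOT the engine equations:
    #   x0^2 + x1^2 - 4 = 0
    #   x0*x1 - 1 = 0
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def jacobian_fd(f, x, h=1e-7):
    # Forward-difference estimate of the Jacobian, one column per variable.
    f0 = f(x)
    J = np.empty((len(f0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - f0) / h
    return J

def newton_raphson(f, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = f(x)
        if np.sum(F**2) < tol:
            return x
        J = jacobian_fd(f, x)
        x = x - np.linalg.solve(J, F)   # full (undamped) Newton step
    raise RuntimeError("did not converge")

print(newton_raphson(residuals, [2.0, 0.5]))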

Are there any other, better methods that I can take a look at?
 
  • #4
The NR failure could be trying to tell you that the solution for your part-load condition is unstable. You wouldn't be the first person to design a gas turbine that is unstable at part load, and you probably won't be the last either!

You could try starting from your working full load solution and reduce the load in small increments to see when it fails, or if the "path" of the solution points looks sensible before it fails.

The "best way" to solve the equations is to use all the information you have, and since we don't know anything about the equations except there are 6 (or 17) of them, we haven't got much to work with.

Reducing 17 "simple" nonlnear equations to 6 more complicated ones could be a bad idea. You could try solving the original system directly.

For other methods, you could try treating this as an optimisation problem. If you want to solve the system of equations Ax = B, given an approximate solution x' write Ax' = B + R, and minimise RᵀR using a standard optimisation algorithm like nonlinear least squares, conjugate gradient, or (my usual first choice) BFGS. They are all in "Numerical Recipes". The Numerical Recipes implementations aren't particularly efficient, but that isn't very important for a problem with 6 or 17 variables (it might matter more if you had 600 variables!)
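
As an illustration of the idea (a sketch only - the residual function below is a stand-in, not your engine equations), minimising RᵀR with SciPy's built-in BFGS would look something like this in Python:

Code:
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    # Stand-in residual vector R(x); replace with the real system.
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def objective(x):
    # R'R is non-negative and is zero exactly at a root of the system.
    R = residuals(x)
    return R @ R

result = minimize(objective, x0=[2.0, 0.5], method="BFGS")
print(result.x, result.fun)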

If all that fails, a really "brute force and no brain" optimisation method is the Simplex algorithm - also in NR. It is almost guaranteed to find you SOME solution to a nonlinear system if you let it run for long enough, but it may not find the one you were looking for.
 
  • #5
AlephZero said:
The NR failure could be trying to tell you that the solution for your part-load condition is unstable. You wouldn't be the first person to design a gas turbine that is unstable at part load, and you probably won't be the last either!

You bring up an interesting point, but I don't believe my models are complicated enough to capture surge or the other peculiarities that would lead to unstable part-load operation. Also, using EES and varying percent load (what I did initially) generates the proper curves from 0% load to 100% load, something my current implementation will not do.

AlephZero said:
You could try starting from your working full load solution and reduce the load in small increments to see when it fails, or if the "path" of the solution points looks sensible before it fails.

All of the numbers "head" in the right direction, but as far as I can tell, the accuracy of the off-design numbers (even at 99.9%) is questionable. The only place the NR method converges on the proper numbers is at 100% load, which I find odd.

For what it's worth, my code can calculate the Jacobian two different ways (numerically, and with hard-coded partial derivatives) and both match. Part of the problem with my implementation arises when the Jacobian cannot be inverted. I have a feeling there's a way around this, but I am not good enough with calculus or linear algebra to know how to do this.

AlephZero said:
The "best way" to solve the equations is to use all the information you have, and since we don't know anything about the equations except there are 6 (or 17) of them, we haven't got much to work with.

Reducing 17 "simple" nonlnear equations to 6 more complicated ones could be a bad idea. You could try solving the original system directly.

That's probably true. Since posting my original question, I've re-derived my equations and reduced them down to 10 equations, 10 unknowns. Some of the equations can be combined and reduced very nicely. Also, some are last-child equations, meaning they can be eliminated without affecting the rest of the models (calculating a new cycle efficiency comes to mind).

These 10 equations are the same in EES as they are in my python model. Based on the solution in EES, I know the equations are sound. It is the method or (more likely) my implementation that is the problem.

AlephZero said:
For other methods, you could try treating this as an optimisation problem. If you want to solve the system of equations Ax = B, given an approximate solution x' write Ax' = B + R, and minimise RᵀR using a standard optimisation algorithm like nonlinear least squares, conjugate gradient, or (my usual first choice) BFGS. They are all in "Numerical Recipes". The Numerical Recipes implementations aren't particularly efficient, but that isn't very important for a problem with 6 or 17 variables (it might matter more if you had 600 variables!)

If all that fails, a really "brute force and no brain" optimisation method is the Simplex algorithm - also in NR. It is almost guaranteed to find you SOME solution to a nonlinear system if you let it run for long enough, but it may not find the one you were looking for.

I'll look into these methods. These numerical solvers are really amazing. I feel like I've learned so much over the last few days, but I realize I haven't even begun to scratch the surface. I like how something seemingly simple can be used to solve some of the most complicated equations out there (and by no means am I calling my equations complicated!).
 
  • #6
tangodirt said:
All of the numbers "head" in the right direction, but as far as I can tell, the accuracy of the off-design numbers (even at 99.9%) is questionable. The only place the NR method converges on the proper numbers is at 100% load, which I find odd.
That says "there's a bug in your code" to me.

Try starting your code with the solution you get from EES. If it converges to something different, that suggests it's solving a different set of equations (or the EES solution is wrong!)

tangodirt said:
For what it's worth, my code can calculate the Jacobian two different ways (numerically, and with hard-coded partial derivatives) and both match. Part of the problem with my implementation arises when the Jacobian cannot be inverted. I have a feeling there's a way around this, but I am not good enough with calculus or linear algebra to know how to do this.
Getting a singular Jacobian (unless it is caused by a bug in your code or your math) suggests something strange is going on - or possibly you are choosing physically unrealistic starting values for the iteration, and some of the partial derivatives are zero.
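
One common workaround, sketched below on the assumption that you're using NumPy, is to compute the Newton step as a least-squares solve rather than inverting the Jacobian explicitly; this returns a sensible (minimum-norm) step even when the matrix is singular:

Code:
import numpy as np

def newton_step(J, F):
    # Least-squares solve of J*dx = F: returns the minimum-norm step
    # even when J is singular, where np.linalg.solve would fail.
    dx, *_ = np.linalg.lstsq(J, F, rcond=None)
    return dx

# Example with a singular (rank-1) Jacobian:
J = np.array([[1.0, 2.0],
              [2.0, 4.0]])
F = np.array([1.0, 2.0])
print(newton_step(J, F))    # np.linalg.pinv(J) @ F gives the same step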

You could try a "modified Newton-Raphson" method. Solve the problem for 100% load, then re-use the Jacobian matrix from that condition instead of recalculating it. The convergence rate will be linear not quadratic, but that's better than no answer at all.
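
A minimal sketch of that modified scheme (again with a placeholder system and a made-up "frozen" Jacobian, not your actual equations):

Code:
import numpy as np

def residuals(x):
    # Placeholder system, not the engine equations:
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def modified_newton(f, x0, J_fixed, tol=1e-12, max_iter=200):
    # Reuse one Jacobian (e.g. from the converged 100% load case)
    # for every step: linear convergence, but each iteration is cheap
    # and there is no fresh Jacobian to get wrong.
    J_inv = np.linalg.inv(J_fixed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = f(x)
        if np.sum(F**2) < tol:
            return x
        x = x - J_inv @ F
    raise RuntimeError("did not converge")

J_full_load = np.array([[4.0, 1.0],
                        [0.5, 4.0]])          # stand-in "frozen" Jacobian
print(modified_newton(residuals, [2.0, 0.5], J_full_load))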

At part load, if MNR = NR but the EES answers are different, that says to me that your code is solving a different set of equations from EES.

If MNR = EES but NR is different, then you are probably solving the same equations, but your Jacobian is wrong.
 
  • #7
AlephZero said:
That says "there's a bug in your code" to me.

That's a good sign. I like bugs, because they can be fixed.

AlephZero said:
Try starting your code with the solution you get from EES. If it converges to something different, that suggests it's solving a different set of equations (or the EES solution is wrong!)

Both full load models generate the same number, so both models begin from the same starting point. The EES solution behaves exactly as expected. My model does not.

You would expect that as the engine is throttled, the mass flow rate falls off (as EES predicts). My Python model has it decreasing, then it flips direction and takes off, which is absurd.

AlephZero said:
Getting a singular Jacobian (unless it is caused by a bug in your code or your math) suggests something strange is going on - or possibly you are choosing physically unrealistic starting values for the iteration, and some of the partial derivatives are zero.

Well, some of the partial derivatives are zero. But regardless, I reformulated my Jacobian solver code this morning and it no longer returns singular (non-invertible) matrices.

AlephZero said:
You could try a "modified Newton-Raphson" method. Solve the problem for 100% load, then re-use the Jacobian matrix from that condition instead of recalculating it. The convergence rate will be linear not quadratic, but that's better than no answer at all.

At part load, if MNR = NR but the EES answers are different, that says to me that your code is solving a different set of equations from EES.

If MNR = EES but NR is different, then you are probably solving the same equations, but your Jacobian is wrong.

Let me give this a try and see what happens. I'm happy with linear convergence, to be honest. Like you said, any correct answer is better than no (or junk) answers.
 
  • #8
AlephZero said:
At part load, if MNR = NR but the EES answers are different, that says to me that your code is solving a different set of equations from EES.

Yep :)

I implemented a compressor and turbine curve for my model and it is what was throwing the numbers all out of whack. Disabling the curve (fixing the efficiencies) makes the code run beautifully! Thank you very much for your help!
 
  • #9
Okay, so here's another question for you guys. The code is working, but it is executing rather slowly. I would like to speed up the process.

In theory, since the guess values are recycled at each load step (the results from 99% load are used for the 98% load calculation), the number of iterations to convergence should be very low (the deltas become very small) and convergence should happen quickly. Unfortunately, this is not the case with my code.

In theory, the convergence at 99% load (using 100% load values) should be similar to the convergence at 98% load (using 99% load values). This process should "haul" and execute very quickly, as it does in EES.
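
For reference, here is the shape of the continuation loop I'm describing, as a runnable sketch; the toy parameterised system stands in for the engine model:

Code:
import numpy as np

def residuals(x, load):
    # Toy parameterised system standing in for the engine model;
    # its solution varies smoothly with "load".
    return np.array([x[0]**2 + x[1]**2 - 4.0*load,
                     x[0]*x[1] - load])

def newton(f, x0, tol=1e-12, max_iter=50, h=1e-7):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        F = f(x)
        if np.sum(F**2) < tol:
            return x, k                       # solution, iterations used
        J = np.empty((len(F), len(x)))
        for j in range(len(x)):               # forward-difference Jacobian
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - F) / h
        x = x - np.linalg.solve(J, F)
    raise RuntimeError("did not converge")

x = np.array([2.0, 0.5])                      # converged 100% load answer
for load in np.arange(1.0, 0.49, -0.01):      # 100% down to 50%, 1% steps
    x, iters = newton(lambda v: residuals(v, load), x)
    print(f"load {load:4.2f}: {iters} iterations")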

[Plot: iterations to convergence at each percent-load point]

As you can see from the plot, each point takes around 24 iterations to converge. Here's the time spent on each convergence:

[Plot: wall-clock time spent converging each point]

With each iteration taking around half a second to run, this process gets quite lengthy when the precision of the curve (the number of percent-load increments) is increased. In theory, the convergence time for each point should decrease as the delta percent load is decreased. With that in mind, the overall computation time should be similar regardless of the number of points: as the delta percent load decreases, the number of iterations to convergence should fall because the guess values get better and better, while the total number of points increases, the two effects cancelling out.

However, what I am seeing is that the number of iterations for each point in the cycle is nearly constant (20-25), regardless of the closeness of the guess values. Also, I have verified that the updated guess values (from the previous percent-load point) are in fact being fed into the new calculation.

Any ideas?
 
  • #10
For the first few points (99%, 98%, etc.) it looks fairly sensible. At least, the number of iterations is less than the 12 needed to get the 100% solution.

Look at how the solution changes at each iteration and see if the convergence looks linear or quadratic. If it looks quadratic for the first few points and then tails off to be linear, or the first few iterations for each point are jumping around "randomly", that suggests your Jacobian still isn't right, though it's better than when nothing worked. It only takes one wrong entry in the Jacobian matrix to screw up the rate of convergence.
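
One cheap diagnostic is to log the residual norm at every iteration and estimate the order of convergence from successive ratios. A sketch, using a made-up residual history:

Code:
import numpy as np

# Hypothetical residual-norm history, one entry per iteration.
norms = [1.0e-1, 1.2e-2, 1.8e-4, 4.0e-8]

# Estimate the convergence order p from e_{k+1} ~ C * e_k**p:
for k in range(2, len(norms)):
    p = np.log(norms[k] / norms[k-1]) / np.log(norms[k-1] / norms[k-2])
    print(f"estimated order after iteration {k}: {p:.2f}")
# ~1.0 means linear convergence, ~2.0 means quadratic.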

Another possibility is that you have an unrealistically small tolerance to check for convergence.

You can sometimes get numerical problems if your solution variables are in different physical units with a wide range of numerical values. Sometimes it helps to "non-dimensionalize" everything - e.g. find your 100% load solution, then scale the units so that all the variables are 1.0 at 100% load, and scale the answers back into physical units at the end.
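
A sketch of that scaling trick (the reference values and residual function here are placeholders):

Code:
import numpy as np

x_ref = np.array([1.932, 0.518])   # hypothetical converged 100% load values

def residuals(x):
    # Placeholder system in physical units (not the engine equations):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def scaled_residuals(y):
    # Solve for y = x / x_ref, so every unknown is ~1.0 at full load.
    return residuals(x_ref * y)

# The solver works on y starting from np.ones(2); afterwards,
# x = x_ref * y recovers the answer in physical units.
print(scaled_residuals(np.ones(2)))   # ~0 residual at the full-load point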
 
  • #11
Here's what the changing values look like as a function of iteration count. This plot used the following parameters:

Damping: 75%
Precision: 0.001 (overall sum-of-squares error less than 0.001)
Delta percent load: 1%

With a precision of 0.01 it needs 18 iterations, and with 0.1 it requires 14. With a precision of 100, it needs 2 iterations, but the error is atrocious. Also, my code has a numerical Jacobian estimator, so it's dynamically estimating the Jacobian for me. No typos here.

[Plot: percent difference of each solution variable vs. iteration count]

Looks pretty quadratic to me. I could change it so that it looks for percent difference at each variable to be less than 0.001 (or whatever other precision I wanted), but even then I am looking at 15 iterations.
 
  • #12
It looks linear to me. The fractional reduction in the percentage difference looks about constant for each iteration.

For quadratic convergence, you would get about twice as many significant figures correct at each iteration, so the percentage differences on successive iterations would be more like 0.1, 0.01, 0.0001, 0.00000001, etc. (i.e. 10%, 1%, 0.01%, 0.000001%, etc.)

Does "damping 75%" mean you calcualate what the change in the solution should be using Newton's method and then scale it down by a factor of 0.75, or something simlar?

Try setting the damping so Newton's method just does whatever it wants to do, either right from the start, or after a few iterations (say 2 or 3) when it has got into the right ballpark.
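
Something like the following damping schedule would do it; the warm-up length is a guess you would tune:

Code:
import numpy as np

def damping(iteration, warmup=3, factor=0.75):
    # Scale the Newton step down for the first few iterations, then
    # let the method take full steps once it is in the right ballpark.
    return factor if iteration < warmup else 1.0

# inside the Newton loop:
#     x = x - damping(k) * np.linalg.solve(J, F)
for k in range(5):
    print(k, damping(k))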
 
  • #13
AlephZero said:
It looks linear to me. The fractional reduction in the percentage difference looks about constant for each iteration.

For quadratic convergence, you would get about twice as many significant figures correct at each iteration, so the percentage differences on successive iterations would be more like 0.1, 0.01, 0.0001, 0.00000001, etc. (i.e. 10%, 1%, 0.01%, 0.000001%, etc.)

Ah, okay. I misunderstood that relationship. My apologies. For what it's worth, here are the 'symbolic' Jacobian and the 'numeric' Jacobian. The symbolic version uses hard-coded, hand-derived partial derivatives, while the numeric version is a finite-difference approximation of the Jacobian.

Code:
Symbolic Solution:
[[ -1.000e+00   0.000e+00   0.000e+00   0.000e+00   3.103e-02   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00]
 [  1.000e+00  -1.000e+00   0.000e+00   1.170e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   7.823e-01   1.595e+00]
 [  0.000e+00   0.000e+00  -1.000e+00   1.146e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   0.000e+00  -1.000e+00   0.000e+00   0.000e+00   0.000e+00   3.356e+00   0.000e+00   0.000e+00]
 [  8.727e-02   3.535e-02   0.000e+00   0.000e+00  -1.000e+00   6.173e-02   0.000e+00   0.000e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00  -1.000e+00   1.042e+00   0.000e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   6.899e-02  -7.908e-02   0.000e+00   0.000e+00  -1.000e+00   1.146e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00  -1.000e+00   2.075e-01   2.075e-01]
 [ -1.806e+00   0.000e+00   0.000e+00   0.000e+00   6.379e+01   0.000e+00   0.000e+00   0.000e+00  -1.000e+00   0.000e+00]
 [  0.000e+00  -1.162e-02   1.368e-02   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   3.100e-02  -1.000e+00]]

Numerical Approximate:
[[ -1.000e+00   0.000e+00   0.000e+00   0.000e+00   3.103e-02   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00]
 [  1.000e+00  -1.000e+00   0.000e+00   1.170e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   7.823e-01   1.595e+00]
 [  0.000e+00   0.000e+00  -1.000e+00   1.146e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   0.000e+00  -1.000e+00   0.000e+00   0.000e+00   0.000e+00   3.356e+00   0.000e+00   0.000e+00]
 [  8.727e-02  -3.535e-02   0.000e+00   0.000e+00  -1.000e+00   6.173e-02   0.000e+00   0.000e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00  -1.000e+00   1.042e+00   0.000e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   6.899e-02  -7.908e-02   0.000e+00   0.000e+00  -1.000e+00   1.146e+00   0.000e+00   0.000e+00]
 [  0.000e+00   0.000e+00   0.000e+00   3.449e-02   0.000e+00   0.000e+00   0.000e+00  -1.000e+00   2.075e-01   2.075e-01]
 [ -1.806e+00   0.000e+00   0.000e+00   0.000e+00   6.379e+01   0.000e+00   0.000e+00   0.000e+00  -1.000e+00   0.000e+00]
 [  0.000e+00  -1.162e-02   1.368e-02   0.000e+00   0.000e+00   0.000e+00   0.000e+00   0.000e+00   3.101e-02  -1.000e+00]]

Comparing the two, two elements differ beyond rounding: the entry at row 5, column 2 flips sign (3.535e-02 symbolic vs -3.535e-02 numeric), and the entry at row 8, column 4 is zero in the symbolic Jacobian but 3.449e-02 in the numeric one (the 3.100e-02 vs 3.101e-02 in the last row is just finite-difference noise). My initial reaction is that the equations I typed in are incorrect.

AlephZero said:
Does "damping 75%" mean you calcualate what the change in the solution should be using Newton's method and then scale it down by a factor of 0.75, or something simlar?

Yep, that's what I am doing.

AlephZero said:
Try setting the damping so Newton's method just does whatever it wants to do, either right from the start, or after a few iterations (say 2 or 3) when it has got into the right ballpark.

When I set the damping coefficient to 1 (no damping), the solution fails to converge at various locations (94%, 84%, 76%, 70%), and finally, at 66% load, it starts generating junk answers and crashes with a divide-by-zero error.
 
  • #14
It sounds like you need a multi-dimensional root-finder. If you do a search for this term, you should find several references. For example,

http://www.gnu.org/software/gsl/manual/html_node/Multidimensional-Root_002dFinding.html

There is also much source code available. I suspect NETLIB might have an algorithm posted, but I haven't searched. I did come across one routine posted by a professor at George Mason University:

http://iris.gmu.edu/~snash/nash/software/software.html

Go through the "single precision" link, then "ch07.f (nonlinear equations)". The file posted there includes both a single-variable root-finding routine and a multi-variable one. The routines either calculate derivatives or accept them as inputs, depending on an input parameter. They are written in FORTRAN, but you could probably translate them to C and make them handle double-precision variables.
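
(For what it's worth, in Python the analogous off-the-shelf route would be SciPy's multidimensional root-finder; a minimal sketch with a placeholder system:)

Code:
import numpy as np
from scipy.optimize import root

def residuals(x):
    # Placeholder system, not the engine equations:
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

sol = root(residuals, x0=[2.0, 0.5], method="hybr")  # MINPACK's Powell hybrid
print(sol.x, sol.success)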

Hope this helps.
 
  • #15
I've been plugging away at my code and have devised a method to dynamically adjust the Jacobian to ensure (or at least do its best to ensure) that the solution converges. Out of this new scheme came quadratic convergence (I think).

Most of the solutions converge now within a few iterations. Here's another example of a well behaved solution. What do you all think?

[Plot: convergence of the revised solver]
 

1. What is numerical simultaneous solution of non-linear coupled equations?

Numerical simultaneous solution of non-linear coupled equations is a method used to solve a system of equations that involve multiple variables and non-linear relationships between them. It involves using numerical methods and algorithms to find the values of the variables that satisfy all the equations simultaneously.

2. How is numerical simultaneous solution different from analytical solutions?

Numerical simultaneous solution involves using numerical methods to approximate the solutions, while analytical solutions involve finding exact algebraic expressions for the solutions. Numerical solutions are often used when analytical solutions are not possible or too complex.

3. What are some common numerical methods used in simultaneous solution of non-linear coupled equations?

Some common numerical methods used in simultaneous solution include the Newton-Raphson method, the Gauss-Seidel method, and the Jacobi method. These methods involve iteratively updating the values of the variables until a satisfactory solution is obtained.
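
As a toy illustration of the iterative-update idea, here is a nonlinear Gauss-Seidel-style sweep on a contrived two-equation system chosen so that the iteration converges:

Code:
import math

# Solve each equation for "its" own variable and sweep, always
# reusing the newest values. Contrived contraction:
#   x = cos(y),   y = 0.5 * sin(x)
x, y = 0.0, 0.0
for sweep in range(25):
    x = math.cos(y)           # update x from equation 1
    y = 0.5 * math.sin(x)     # equation 2 immediately sees the new x
print(x, y)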

4. What are the applications of numerical simultaneous solution?

Numerical simultaneous solution is used in various fields, including physics, engineering, economics, and biology. It is particularly useful for solving complex systems of equations that cannot be solved analytically, such as in modeling dynamic systems or optimization problems.

5. What are the limitations of numerical simultaneous solution?

One limitation of numerical simultaneous solution is that it relies on initial guesses for the values of the variables, which can affect the accuracy of the solution. Additionally, numerical methods may converge to incorrect solutions or fail to converge at all in certain cases. It is also important to consider the precision and rounding errors in the calculations of the solutions.
