How to use the Newton-Raphson method while keeping the sum constant?

K-Manu
Question: How do I find solutions with the Newton-Raphson method while keeping the sum of the solutions fixed?

I am currently working on solving a high-dimensional equation to obtain accurate solutions using the Newton-Raphson method, which I have implemented in Python.
However, my problem requires that the sum of the solutions must equal a constant value, z, which I set. Despite trying several approaches to enforce this constraint, I have not yet achieved the desired result.
For example, I attempted the following methods:
  1. Forcing the sum of the updates to zero, i.e. sum(dx) → 0
  2. Rescaling the solution vector as x ← x/sum(x) * z
  3. Extending the matrix to incorporate the constraint (as shown in my Python code, which I discussed here: Stack Overflow link).
Unfortunately, these approaches have not yielded the expected results.
Any mathematical insights or tips would be greatly appreciated.
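To make attempt 1 concrete, here is a minimal sketch of what I mean by projecting each Newton update so that sum(dx) = 0 (the toy system here is purely illustrative, not my actual equations):

```python
import numpy as np

# Toy system (illustrative only): f_i(x) = x_i**2 - c_i
c = np.array([1.0, 4.0, 9.0])

def f(x):
    return x**2 - c

def jacobian(x):
    # diagonal because each f_i depends only on x_i in this toy example
    return np.diag(2.0 * x)

z = 6.0
x = np.array([1.5, 1.5, 3.0])   # initial guess already satisfying sum(x) == z
for _ in range(100):
    dx = np.linalg.solve(jacobian(x), -f(x))
    dx -= dx.mean()             # remove the component that would change the sum
    x += dx
    if np.linalg.norm(dx) < 1e-12:
        break
# sum(x) stays equal to z throughout, but the projected iteration only
# converges to a root of f if one actually exists on the hyperplane
```

Note that this only helps if the initial guess already satisfies the constraint, and it can still fail to converge when no root lies on the hyperplane sum(x) = z.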
 
Please explain "the sum of the solutions". If you mean $$\left.\begin{array}{c} f_1(x_1, \dots, x_n) = 0 \\ \vdots \\ f_n(x_1, \dots, x_n) = 0 \end{array}\right\}\text{ subject to }\sum_{i=1}^n x_i = z$$ then that is ##n + 1## equations in ##n## unknowns, which might not have any solutions; you can avoid that issue by adding ##z## as an unknown and starting with an initial guess with ##z## equal to your chosen value. But again, there may just not be any solutions for that value of ##z##, or the choice of initial guess for the other ##x_i## might be in the domain of attraction of a fixed point which does not satisfy your constraint. Adding ##z## as an unknown is equivalent to solving $$g_i(x_1, \dots, x_{n-1}, z) = f_i(x_1, \dots, x_{n-1}, z - (x_1 + \dots + x_{n-1})) = 0$$ which can be done using standard methods.
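As a sketch, the substitution above might look like this in Python, on a toy system of my own choosing (the ##f_i## here are illustrative, not your actual equations):

```python
import numpy as np
from scipy.optimize import fsolve

# Toy system (illustrative only): f_i(x) = x_i**2 - c_i
c = np.array([1.0, 4.0, 9.0])

def f(x):
    return x**2 - c

def g(u):
    # unknowns are x_1 .. x_{n-1} and z; the last component of x is
    # eliminated via the constraint x_n = z - (x_1 + ... + x_{n-1})
    y, z = u[:-1], u[-1]
    x = np.append(y, z - y.sum())
    return f(x)

# start with z equal to the desired sum (here 6), as described above
u0 = np.array([1.2, 2.2, 6.0])
u = fsolve(g, u0)
x = np.append(u[:-1], u[-1] - u[:-1].sum())
# by construction, sum(x) equals the converged value of z exactly
```

The solver is free to move ##z## away from your chosen value; if it does, that tells you no root exists with that exact sum near your initial guess.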

SciPy already has a library of root-finding methods (see scipy.optimize); you don't need to write your own. The methods which require the Jacobian allow you either to supply a function which computes it analytically, or to indicate that the objective function returns both the function value and the Jacobian; if you supply neither, the method will use its own numerical approximation. Or you can use one of the methods which don't require the Jacobian at all.
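For example (using a standard toy system for illustration, not your problem):

```python
import numpy as np
from scipy.optimize import root

# Toy system of two coupled nonlinear equations
def fun(x):
    return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
            0.5 * (x[1] - x[0])**3 + x[1]]

# Analytic Jacobian of fun, supplied explicitly
def jac(x):
    return np.array([[1 + 1.5 * (x[0] - x[1])**2, -1.5 * (x[0] - x[1])**2],
                     [-1.5 * (x[1] - x[0])**2, 1 + 1.5 * (x[1] - x[0])**2]])

sol = root(fun, [0.0, 0.0], jac=jac, method='hybr')

# Or omit jac and let the solver build a numerical approximation
sol2 = root(fun, [0.0, 0.0], method='hybr')
```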

If you tell us the exact problem you are trying to solve, we may be able to provide better advice.
 
Have you played with the step size?

Finer steps often lead to better results.
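For instance, a damped Newton iteration uses a step size alpha < 1 to trade speed for stability (a 1-D sketch with a toy function, not the original problem):

```python
# Damped Newton: scale each update by alpha in (0, 1] to stabilize
# convergence when full steps overshoot. Toy 1-D example.
def damped_newton(f, fprime, x0, alpha=0.5, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x_new = x - alpha * step    # alpha = 1 recovers plain Newton
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# find the real cube root of 2
root_est = damped_newton(lambda x: x**3 - 2.0, lambda x: 3 * x**2, x0=2.0)
```

The price of damping is slower (linear rather than quadratic) convergence near the root, so it is usually worth switching back to full steps once the iterates settle down.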

Does the problem meet the criteria for using a Newton-Raphson approach?

I know that with Euler-type numerical integrators, we choose the method based on whether the system oscillates, since some methods accumulate error over time, and that error manifests as energy artificially added to or removed from the system.
 