Shauheen
I have been looking into how to use the Levenberg-Marquardt algorithm to minimize the errors of two functions at the same time. I searched for this topic on the internet, and the only useful thing I found was a thread between "I like Serena" and "thomas430" from the summer of 2011.
https://www.physicsforums.com/showthread.php?t=521670
I read "I like Serena"'s notes, but I still have trouble figuring out how to calculate the Jacobian matrix.
I measure two values over time in the lab. A numerical forward model has been developed that models the physics, and now I need to solve the inverse problem to find the unknown parameters. I have solved this kind of problem several times before when only one error function needed to be minimized. Based on what I have learned about the LMA (from Inverse Heat Transfer by Necati Özışık), the gradient of the error (for one function) is the product of the Jacobian matrix and the difference between the experimental and model results. This gradient needs to become zero:
∇S(P) = −2 [∂T(P)/∂P]ᵀ [Y − T(P)] = 0
in which P is the vector of unknown parameters, T(P) is the vector of model values, and Y is the vector of experimental values.
This problem minimizes S(P) = Σ[Yi − Ti(P)]^2.
Now I have to find, let's say, 3 parameters by minimizing two errors from my experiments. I measure two quantities vs. time, say Y and M. These two quantities are dependent. The number of collected data points is not the same for the two, and I think they are both equally important. I could check the typical range of the errors to weight them so they are equally significant.
My question is: how do I minimize the combined error of these two and find my unknowns?
Let's say the error is:
E(P) = Σ[Yi − Ti(P)]^2 + Σ[Mi − Ui(P)]^2
where Ti and Ui are the values from my model.
What would the Jacobian matrix be in this case? For one function, the update is:
P(k+1) = P(k) + [J(k)ᵀ·J(k) + μ(k)Ω(k)]^-1 J(k)ᵀ [Y − T(P(k))]
in which k is the iteration index.
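To make sure I have the one-function case right, here is how I would code a single iteration in NumPy. The choice Ω = diag(JᵀJ) (Marquardt's scaling) is my own assumption here; other choices of Ω, such as the identity, are also discussed in the literature:

```python
import numpy as np

def lm_step(P, r, J, mu):
    """One Levenberg-Marquardt update:
        P(k+1) = P(k) + [J^T J + mu * Omega]^-1 J^T r
    where r = Y - T(P) is the residual at the current iterate and
    J = dT/dP is the (n_data x n_params) Jacobian (sensitivity matrix).
    Omega is taken as diag(J^T J), i.e. Marquardt's scaling.
    """
    JtJ = J.T @ J
    Omega = np.diag(np.diag(JtJ))
    # Solve the damped normal equations for the update step
    step = np.linalg.solve(JtJ + mu * Omega, J.T @ r)
    return P + step
```

For a linear model and μ → 0 this reduces to the ordinary least-squares (Gauss-Newton) step, which is a useful sanity check.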
What would the formulation be once two functions are involved?
I would really appreciate it if anyone could help.
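For what it's worth, here is how I currently imagine the combined problem might be set up; this is my own guess, not something from Özışık's book. The idea is to stack the two residual vectors into one long residual and stack the two Jacobians row-wise the same way, with optional weights wY and wM to balance the two error scales. Would something like this be right?

```python
import numpy as np

def stacked_residual_and_jacobian(P, Y, M, T, U, JT, JU, wY=1.0, wM=1.0):
    """Combine the two data sets into one least-squares problem.
    Residuals are stacked into a single vector,
        r = [ wY*(Y - T(P)) ; wM*(M - U(P)) ],
    and the Jacobians are stacked row-wise the same way,
        J = [ wY*dT/dP ; wM*dU/dP ],
    so J has (len(Y) + len(M)) rows and len(P) columns.
    T, U are the forward models; JT, JU return their Jacobians at P.
    """
    r = np.concatenate([wY * (Y - T(P)), wM * (M - U(P))])
    J = np.vstack([wY * JT(P), wM * JU(P)])
    return r, J
```

The stacked r and J could then be fed into the same update formula as in the single-function case, since minimizing E(P) is just an ordinary sum of squares over the concatenated residual.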