Optimization using Newton's method (gradient and Hessian)

In summary, the conversation is about using Newton's method for numerical optimization: building the quadratic approximation of a function from its gradient and Hessian at a point, then iterating Newton's update to find a minimum or maximum. It is important to choose a good starting point and to be mindful of the cost of computing and inverting the Hessian on large problems.
  • #1
proglearn
Hello,

This is my first post here. So I hope I'm posting in the right place, sorry if not.
http://homes.soic.indiana.edu/classes/spring2012/csci/b553-hauserk/Newtons_method.pdf

I am trying to solve the following numerical optimization problem using Newton's Method:

So, if I have the gradient of a function f(x) at xk = (1, -2) and also the Hessian at the same point, how can I calculate the quadratic approximation of this function using the multivariate Newton's method?
 

Attachments

  • Snapshot.jpg (9.2 KB)
  • #2

Hello,

Welcome to the forum! It looks like you are working on a numerical optimization problem using Newton's method. Newton's method is a powerful tool for finding the minimum or maximum of a function, and it relies on the first and second derivatives of the function.

To calculate the quadratic approximation of the function, you will need to use the Hessian matrix, which is the matrix of second derivatives of the function. In your case, since you have the gradient and Hessian at a specific point, xk=(1, -2), you can use the following formula to calculate the quadratic approximation:

f(x) ≈ f(xk) + ∇f(xk)^T (x - xk) + (1/2) (x - xk)^T H (x - xk)

Where ∇f(xk) is the gradient at xk, H is the Hessian matrix at xk, and x is the variable you are optimizing for. This formula represents the quadratic approximation of the function around the point xk.
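
For concreteness, here is a small Python sketch of that quadratic model. The actual f from the attachment isn't reproduced here, so the function below (f(x, y) = x^4 + x*y + y^2, together with its gradient and Hessian) is just a stand-in I made up, and the helper name quadratic_model is likewise my own:

```python
import numpy as np

# Stand-in example function (the thread's actual f is in the attachment):
# f(x, y) = x^4 + x*y + y^2
def f(x):
    return x[0]**4 + x[0]*x[1] + x[1]**2

def grad_f(x):
    return np.array([4*x[0]**3 + x[1], x[0] + 2*x[1]])

def hess_f(x):
    return np.array([[12*x[0]**2, 1.0],
                     [1.0,        2.0]])

def quadratic_model(x, xk):
    """Second-order Taylor expansion of f around xk."""
    d = x - xk
    return f(xk) + grad_f(xk) @ d + 0.5 * d @ hess_f(xk) @ d

xk = np.array([1.0, -2.0])
x  = np.array([1.1, -1.9])
print(quadratic_model(x, xk), "vs actual", f(x))   # close for x near xk
```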

Now that you have the quadratic approximation, you can use Newton's method to find the minimum or maximum of the function. The general formula for Newton's method is:

x(k+1) = xk - H^-1 ∇f(xk)

Where H^-1 is the inverse of the Hessian matrix. This formula tells us that to find the next point, xk+1, we need to take the current point, xk, and subtract the inverse of the Hessian matrix multiplied by the gradient at xk. This process is repeated until we reach a point where the gradient is close to zero, indicating that we have found a local minimum or maximum.
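
As a sketch of that iteration in Python (reusing the stand-in gradient and Hessian from above, with a starting point of (1, -2); the function name newton_optimize and the tolerance are my own choices): in practice one solves the linear system H*step = ∇f(xk) rather than forming H^-1 explicitly.

```python
import numpy as np

def newton_optimize(grad_f, hess_f, x0, tol=1e-8, max_iter=50):
    """Plain Newton iteration: x(k+1) = xk - H(xk)^-1 grad f(xk)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:            # gradient ~ 0: stationary point found
            break
        step = np.linalg.solve(hess_f(x), g)   # solve H*step = g instead of inverting H
        x = x - step
    return x

# Stand-in gradient/Hessian from the sketch above, started at xk = (1, -2):
grad_f = lambda x: np.array([4*x[0]**3 + x[1], x[0] + 2*x[1]])
hess_f = lambda x: np.array([[12*x[0]**2, 1.0], [1.0, 2.0]])
print(newton_optimize(grad_f, hess_f, x0=[1.0, -2.0]))
```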

I hope this helps with your problem. Keep in mind that Newton's method can be sensitive to the starting point, so make sure to choose a good initial guess. Also, be careful when using the Hessian matrix, as it can be computationally expensive for large problems. Good luck with your optimization!
 

Related to Optimization using Newton's method (gradient and Hessian)

1. How does Newton's method work for optimization?

Newton's method is an iterative optimization algorithm that uses the gradient and Hessian of a function to find the minimum or maximum value. It starts with an initial guess and then uses the gradient and Hessian to update the guess until it converges to a solution.

2. What is the difference between gradient descent and Newton's method?

The main difference between gradient descent and Newton's method is how they update the guess at each iteration. Gradient descent uses only the gradient of the function, while Newton's method uses both the gradient and the Hessian. Because it exploits curvature information, Newton's method typically converges in far fewer iterations (quadratically near a solution, versus the linear rate of gradient descent), although each iteration is more expensive.
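
To make the difference concrete, here is a tiny sketch (my own illustration, not from the thread) comparing fixed-step gradient descent with a Newton step on an ill-conditioned quadratic; on a quadratic, the Newton step lands exactly on the minimizer in one iteration:

```python
import numpy as np

# f(x) = 0.5 * x^T A x has its minimum at the origin.
A = np.array([[10.0, 0.0],
              [0.0,  1.0]])          # deliberately ill-conditioned
grad = lambda x: A @ x

x_gd = np.array([1.0, 1.0])
x_nt = np.array([1.0, 1.0])
for _ in range(10):
    x_gd = x_gd - 0.05 * grad(x_gd)                 # fixed-step gradient descent
    x_nt = x_nt - np.linalg.solve(A, grad(x_nt))    # Newton step (exact for a quadratic)

print("gradient descent after 10 steps:", x_gd)     # still ~0.6 along the flat direction
print("Newton after 10 steps:          ", x_nt)     # at the minimum after the first step
```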

3. How is the Hessian matrix used in Newton's method?

The Hessian matrix is a square matrix of second-order partial derivatives of a function. In Newton's method, the Hessian is used to calculate the curvature of the function at a given point. This curvature is then used to update the guess and find a better approximation of the solution.
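
If analytic second derivatives are awkward to write down, the Hessian can also be approximated numerically. Below is a minimal central-difference sketch (the helper name numerical_hessian and the step size h are my own choices, and the test function is the same stand-in used in post #2):

```python
import numpy as np

def numerical_hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

f = lambda x: x[0]**4 + x[0]*x[1] + x[1]**2
print(numerical_hessian(f, np.array([1.0, -2.0])))   # approximately [[12, 1], [1, 2]]
```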

4. What are the advantages of using Newton's method for optimization?

Some advantages of using Newton's method for optimization include much faster convergence (quadratic near a solution, compared to the linear rate of gradient descent), high accuracy, and the fact that no step-size parameter has to be tuned. Note, however, that it is generally more sensitive to the choice of starting point than gradient descent, as mentioned in post #2 above.

5. Are there any limitations to using Newton's method for optimization?

One limitation of Newton's method is that it requires the Hessian to be available and invertible (and, for finding a minimum, positive definite) at each iterate, which may not hold for some functions. Additionally, computing and factorizing the Hessian can be expensive, making the plain method less practical for large-scale problems.
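
One common workaround when the Hessian is singular or indefinite (not something the original question asked about, just a standard trick) is to add a damping term lam*I before solving, increasing lam until the shifted matrix is positive definite. A minimal sketch, with the helper name damped_newton_step and the starting lam being my own choices:

```python
import numpy as np

def damped_newton_step(grad_f, hess_f, x, lam=1e-3):
    """One Newton step with a Levenberg-style shift lam*I, so the linear
    solve succeeds even if the Hessian is singular or indefinite."""
    g = grad_f(x)
    H = hess_f(x)
    n = len(x)
    while True:
        try:
            # Cholesky succeeds only for a positive-definite matrix.
            np.linalg.cholesky(H + lam * np.eye(n))
            break
        except np.linalg.LinAlgError:
            lam *= 10.0
    return x - np.linalg.solve(H + lam * np.eye(n), g)

# Example at a point where the stand-in Hessian is indefinite, so damping kicks in:
grad_f = lambda x: np.array([4*x[0]**3 + x[1], x[0] + 2*x[1]])
hess_f = lambda x: np.array([[12*x[0]**2, 1.0], [1.0, 2.0]])
print(damped_newton_step(grad_f, hess_f, np.array([0.0, 0.5])))
```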
