Numerical Optimization (steepest descent method)

In summary, the conversation discusses the steepest descent method with exact line searches applied to a convex quadratic function. It is shown that if x0 − x* (the difference between the initial point and the exact solution) is parallel to an eigenvector of the function's matrix Q, then the method finds the solution in one step. The poster mentions attempting to find a relation between the eigenvector and the initial point, as well as calculating the derivative of the function, but was unable to reach a conclusion and requests assistance.
  • #1
sbashrawi

Homework Statement



Consider the steepest descent method with exact line searches applied to the convex quadratic function f(x) = (1/2) x^T Q x − b^T x (T denotes transpose). Show that if the initial point x0 is such that x0 − x* (where x* is the exact solution of Qx = b) is parallel to an eigenvector of Q, then the steepest descent method will find the solution in one step.

Homework Equations





The Attempt at a Solution


I tried to find a relation between the eigenvector and the given initial point, but I couldn't get anywhere. I also tried to calculate the derivative of the given quadratic function but couldn't reach any conclusion. Any help would be appreciated.
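One way to see it (a sketch, not necessarily the intended route): assume Q is symmetric positive definite, write x0 − x* = c v with Qv = λv, λ > 0, and use b = Qx*. Then the gradient at x0 is

g_0 = \nabla f(x_0) = Q x_0 - b = Q(x_0 - x^*) = c \lambda v,

the exact line-search step length is

\alpha_0 = \frac{g_0^T g_0}{g_0^T Q g_0} = \frac{c^2 \lambda^2\, v^T v}{c^2 \lambda^3\, v^T v} = \frac{1}{\lambda},

and therefore

x_1 = x_0 - \alpha_0 g_0 = x_0 - \frac{1}{\lambda}\, c \lambda v = x_0 - c v = x^*,

so the first iterate is already the minimizer.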
 

FAQ: Numerical Optimization (steepest descent method)

1. What is numerical optimization?

Numerical optimization is a mathematical technique used to find the minimum or maximum value of a function. It involves iteratively searching for the optimal solution using numerical methods.

2. What is the steepest descent method?

The steepest descent method is a numerical optimization algorithm that uses the gradient (or derivative) of a function to iteratively move in the direction of steepest descent. This method is commonly used to find the minimum value of a function.

3. How does the steepest descent method work?

The steepest descent method starts with an initial guess for the optimal solution and calculates the gradient of the function at that point. It then moves in the direction of steepest descent (the negative gradient), choosing a step length at each iteration (for example, by exact line search), and repeats until the gradient is close to zero.
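For concreteness, here is a minimal sketch of the method applied to the quadratic f(x) = (1/2) x^T Q x − b^T x from the thread, where the exact line-search step length has the closed form α = (gᵀg)/(gᵀQg). The function and variable names are illustrative, not from the thread:

```python
import numpy as np

def steepest_descent(Q, b, x0, tol=1e-10, max_iter=1000):
    """Minimize f(x) = 0.5*x^T Q x - b^T x for symmetric positive
    definite Q, using steepest descent with exact line search."""
    x = x0.astype(float)
    for k in range(max_iter):
        g = Q @ x - b                     # gradient of f at x
        if np.linalg.norm(g) < tol:       # stop when gradient ~ 0
            return x, k
        alpha = (g @ g) / (g @ (Q @ g))   # exact minimizer along -g
        x = x - alpha * g
    return x, max_iter

Q = np.array([[4.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, b)            # exact solution of Qx = b

# Start so that x0 - x* is parallel to the eigenvector [1, 0] of Q:
x0 = x_star + 2.0 * np.array([1.0, 0.0])
x, iters = steepest_descent(Q, b, x0)     # terminates after one step
```

Starting from a generic x0 the method needs several iterations; with the eigenvector-aligned start it terminates after a single step, which is exactly the claim in the homework statement.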

4. What are the advantages of using the steepest descent method?

The steepest descent method is a relatively simple and easy-to-implement algorithm, with a cheap cost per iteration (one gradient evaluation). It applies to a wide range of differentiable functions and, with modifications such as gradient projection, can be extended to constrained optimization problems.

5. What are the limitations of the steepest descent method?

One of the main limitations of the steepest descent method is slow convergence: on ill-conditioned problems the iterates zig-zag, so many iterations may be needed even though each one is cheap. Additionally, on nonconvex functions it may converge to a local minimum, yielding a suboptimal solution.
