Steepest Descent with constraints

  • Thread starter: matts0611
  • Tags: Constraints
In summary, the person is working on a project that requires them to use steepest descent to solve a problem, but they do not know how to calculate the partial derivatives of the function by hand. They are wondering if there is a way to modify steepest descent to work with constraints and if estimating the derivatives by taking the difference of two points close together is an acceptable method. They are also unsure of how to find the best step size when using steepest descent. They are seeking guidance and advice from someone more knowledgeable in this area.
  • #1
matts0611
Hi, I am working on a project for my research and am in need of some advice.
My background is in computer engineering / programming, so I'm in need of some help from some math people :)

I need to use steepest descent to solve a problem, a function that needs to be minimized.

The function has 5 variables that each have a constant range, min1 < X1 < max1 etc

I am writing this program in C.

The function that needs to be minimized is complex enough that I do not know how to calculate the partial derivatives by hand.

Some questions:

1. As far as I understand, steepest descent (or gradient search) works without constraints. However, my variables do have constraints. Is there a way to modify steepest descent so that it works with constraints?

I have used the MS Excel solver to solve my problem and it solves it pretty fast (less than one second). From what I understand, Excel uses something called the Generalized Reduced Gradient (GRG) method, which is based on steepest descent but can be used on problems with constraints. I have found some information on this algorithm, but it seems so complex. Do I need to use this algorithm, or can I get away with using something simpler, since the constraints in my problem are simple?

2. If I don't know the partial derivatives of my function is it ok to estimate the derivatives by taking the difference of two points close together? Is there a better way?

3. On steepest descent, from what I understand, you first find the negative gradient vector and then find the step size in that direction. How do you find the best step size? There seem to be so many different methods for this.

The more things I read about this stuff on my own, the more confused I get and the more questions I seem to have. I guess I'm just looking for some guidance from someone more knowledgeable than I am in this area.

Any help or advice would be greatly appreciated.
 
  • #2
matts0611 said:
Hi, I am working on a project for my research

I need to use steepest descent to solve a problem, a function that needs to be minimized.

I can't resist asking why the research requires that you use a particular algorithm.

The function that needs to be minimized is complex enough that I do not know how to calculate the partial derivatives by hand.

Don't neglect trying computer algebra software if knowing the symbolic form of the partial derivatives would be useful.

Is there a way to modify steepest descent that works with constraints?

Do you mean some very simple way? I don't know of any in general. There might be particular things about the function you are dealing with that make some simple way possible.


If I don't know the partial derivatives of my function is it ok to estimate the derivatives by taking the difference of two points close together? Is there a better way?

There is probably a better way. The better techniques for estimating the derivative of a function involve evaluating it at several nearby points and combining those values so that the errors in the estimate tend to cancel. There are so-called "two point", "three point", etc. methods.

3. On steepest descent, from what I understand, you first find the negative gradient vector and then find the step size in that direction. How do you find the best step size? There seems to be so many different methods for this?

Strictly speaking, I don't know of any way to find "the best" step size. For particular sets of similar problems, you can find step sizes that usually work if the function doesn't have lots of local minima.

The more things I read about this stuff on my own the more confused and the more questions I seem to have.

That's natural. Few numerical methods work "in general". That's one reason there are so many of them. If the Excel version of the generalized reduced gradient method works on your function, we can probably figure out how to code that algorithm. I've not studied it, but glancing at versions on the web, it looks like the typical ordeal that one must endure to get numerical answers.
 

What is the concept of Steepest Descent with constraints?

Steepest Descent with constraints is a mathematical optimization technique used to find the minimum value of a function subject to certain constraints. It involves iteratively moving towards the minimum value by following the steepest descent direction.

How is Steepest Descent with constraints different from regular Steepest Descent?

The main difference between Steepest Descent with constraints and regular Steepest Descent is that the former considers additional constraints while minimizing the function. These constraints can be in the form of limits on the values of the variables or restrictions on the feasible region.

What are the advantages of using Steepest Descent with constraints?

Steepest Descent with constraints is a practical optimization technique that can be applied to non-linear functions with constraints. It is computationally cheap per iteration and, under suitable conditions (such as convexity), converges to the global minimum. Additionally, it does not require the calculation of second-order derivatives, making it suitable for high-dimensional problems.

What are some common applications of Steepest Descent with constraints?

Steepest Descent with constraints is commonly used in various fields such as engineering, physics, economics, and computer science. It is particularly useful in problems involving resource allocation, portfolio optimization, and parameter estimation.

What are some potential limitations of Steepest Descent with constraints?

One limitation of Steepest Descent with constraints is that it may converge slowly or get stuck in local minima. Additionally, the choice of initial point can significantly affect the convergence and the quality of the solution. Therefore, it is important to carefully select the initial point and use other techniques to overcome these limitations.
