# New to the method of steepest descent

• Beer-monster
In summary, the conversation discusses the best approach to evaluating the integral $\int_{-\infty}^{+\infty} dx\, e^{\frac{ax^{2}}{2}}\, e^{\ln[2\cosh(b+cx)]}$ using the method of steepest descent. The poster is unsure whether to expand the $x^2$ term or the $\ln(\cosh)$ term and, if the latter, how to proceed with the Taylor series expansion. They also mention uncertainty about whether to integrate directly or to substitute complex variables.
Beer-monster

## Homework Statement

I'm new to this approximation method and was wondering the best way to proceed with this function:

$$\int_{-\infty}^{+\infty} dx\, e^{\frac{ax^{2}}{2}}\, e^{\ln[2\cosh(b+cx)]}$$

I've found the saddle point (I think). But I was wondering whether it would be best to expand the $x^2$ term or the $\ln(\cosh)$ term. If the latter, should I expand the $\cosh$ as a Taylor series, take the logarithm of the expanded $\cosh$, expand that logarithm in turn, and simplify the result?

Sorry--I'm not quite sure what the problem you're trying to answer is.

I'm not quite sure how to integrate this function using the method of steepest descent. Usually you have a function of a complex variable, which this is not.

And often examples show only one exponential function, where I have a product of two, so I'm not 100% sure which is the more rapidly varying one (i.e. which function to expand as a Taylor series).

I went ahead and worked with the $\ln(\cosh)$ function and calculated its derivatives to get the Taylor series to 2nd order. Now I'm not sure how to move forward. Do I just integrate, or do I need to substitute complex variables?
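The 2nd-order expansion described above is exactly the saddle-point (Laplace) recipe: find $x_0$ where the exponent's derivative vanishes, replace the exponent by its quadratic Taylor expansion about $x_0$, and do the resulting Gaussian integral — no complex substitution is needed when the saddle lies on the real axis. Here is a numerical sketch under stated assumptions: the values of $a$, $b$, $c$ are illustrative (the thread leaves them symbolic), and the Gaussian factor is taken as decaying, i.e. the exponent is written $f(x) = -a x^2/2 + \ln[2\cosh(b+cx)]$ with $a > 0$, since the integral as printed would otherwise diverge.

```python
import math

# Illustrative parameters (assumed; the thread leaves a, b, c symbolic).
a, b, c = 1.0, 0.5, 1.0

def f(x):      # exponent of the integrand
    return -a * x**2 / 2 + math.log(2 * math.cosh(b + c * x))

def fp(x):     # f'(x) = -a*x + c*tanh(b + c*x)
    return -a * x + c * math.tanh(b + c * x)

def fpp(x):    # f''(x) = -a + c**2 * sech(b + c*x)**2
    return -a + c**2 / math.cosh(b + c * x)**2

# 1. Locate the saddle point x0, where f'(x0) = 0 (Newton iteration).
x0 = 0.0
for _ in range(50):
    x0 -= fp(x0) / fpp(x0)

# 2. Expand f to 2nd order about x0 and do the resulting Gaussian integral:
#    I ≈ e^{f(x0)} * sqrt(2*pi / |f''(x0)|)
approx = math.exp(f(x0)) * math.sqrt(2 * math.pi / abs(fpp(x0)))

# For comparison: writing 2*cosh(b+cx) = e^{b+cx} + e^{-b-cx} turns the
# integral into a sum of two Gaussians with an exact closed form.
exact = math.sqrt(2 * math.pi / a) * math.exp(c**2 / (2 * a)) * 2 * math.cosh(b)

print(approx, exact)
```

With these parameters the saddle-point estimate lands within roughly 15% of the exact value; the approximation sharpens as the exponent acquires a large overall prefactor, which is the regime where steepest descent is justified.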

## 1. What is the method of steepest descent?

The method of steepest descent is a mathematical optimization technique used to find the minimum value of a function. It is also known as the gradient descent method and is commonly used in machine learning and optimization algorithms.

## 2. How does the method of steepest descent work?

The method of steepest descent works by iteratively updating the input variables in the direction of the steepest descent, which is the direction of the negative gradient of the function at a given point. This process continues until a local minimum of the function is reached.
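The update rule above can be sketched in a few lines. This is a minimal illustration on a toy quadratic (the function and step size are my own choices, not from the thread):

```python
# Gradient descent on the toy function f(x, y) = (x - 3)**2 + 2*(y + 1)**2,
# whose minimum sits at (3, -1).
def grad(x, y):
    return (2 * (x - 3), 4 * (y + 1))

x, y = 0.0, 0.0          # starting point
lr = 0.1                 # step size (learning rate)
for _ in range(200):
    gx, gy = grad(x, y)
    # Step in the direction of steepest descent: the negative gradient.
    x, y = x - lr * gx, y - lr * gy

print(x, y)  # converges near (3, -1)
```

Each iteration shrinks the distance to the minimum by a constant factor, so for a well-conditioned quadratic like this one the iterates reach the minimum to machine precision quickly.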

## 3. What are the advantages of using the method of steepest descent?

Some of the advantages of using the method of steepest descent include its simplicity and efficiency in finding local minima. It also works well for functions with multiple parameters and can be easily applied to a wide range of optimization problems.

## 4. Are there any limitations to the method of steepest descent?

Yes, there are some limitations to the method of steepest descent. It only converges to a local minimum, so it is poorly suited to functions with many local minima, and it can converge slowly on ill-conditioned problems. It also requires the gradient of the function, which may not always be available.

## 5. How is the method of steepest descent different from other optimization techniques?

The method of steepest descent differs from other optimization techniques, such as Newton's method and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, in the direction of its updates. While the method of steepest descent uses only the gradient of the function, these other methods use the Hessian matrix, or an approximation of its inverse in the case of BFGS, to determine the update direction.
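The contrast can be made concrete in one dimension, where the "Hessian" is just the second derivative. This sketch compares the two update rules on a toy function of my own choosing (minimum at $x = 2^{1/3}$, where $f'(x) = x^3 - 2 = 0$):

```python
# Toy function f(x) = x**4/4 - 2*x, minimized where f'(x) = x**3 - 2 = 0.
def fp(x):  return x**3 - 2      # gradient f'(x)
def fpp(x): return 3 * x**2      # second derivative (the 1-D "Hessian")

x_gd, x_nt = 1.5, 1.5
for _ in range(100):
    x_gd -= 0.05 * fp(x_gd)         # steepest descent: fixed step along -f'
    x_nt -= fp(x_nt) / fpp(x_nt)    # Newton: step scaled by the inverse Hessian

print(x_gd, x_nt)  # both approach 2**(1/3)
```

Both iterations converge here, but the Newton iterate reaches machine precision in a handful of steps, while the fixed-step gradient iterate contracts only by a constant factor per step — which is the practical trade-off the FAQ answer describes.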
