Question on the Method of Steepest Descent

  • #1
Edwin
Is the following integral of "Laplace type"?

If so, is it possible to create an asymptotic expansion for contour integrals of the following form?

[tex] \oint_{y} \frac{e^{K\,h(z)}}{f(z)\,g(c/z) - \epsilon}\,dz, [/tex]

where \epsilon is a very small, real-valued, positive constant; c is an integer; h, f, and g are holomorphic in a region containing the contour y; y does not pass through the origin; and y encloses a simple pole of the integrand. The parameter K is chosen to make the integral convergent.
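
As a sanity check on the setup, here is a minimal numerical sketch. The choices of h, f, g, c, \epsilon, and K below are purely illustrative placeholders, not anything from the question; the point is only that such a contour integral can be evaluated on a circle by the trapezoid rule and compared against the residue theorem.

Code:
import numpy as np

# Illustrative sketch only: evaluate the contour integral of
#   e^{K h(z)} / (f(z) g(c/z) - eps)
# over the circle |z| = r by the trapezoid rule, which converges very
# rapidly for smooth periodic integrands. All function choices below
# are placeholders, not the ones from the question.

def contour_integral(h, f, g, c, eps, K, r=1.0, n=2000):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = r * np.exp(1j * theta)       # nodes on the contour y: |z| = r
    dz = 1j * z * (2.0 * np.pi / n)  # dz = i z dtheta at each node
    integrand = np.exp(K * h(z)) / (f(z) * g(c / z) - eps)
    return np.sum(integrand * dz)

# Placeholder example: h(z) = z, f(z) = z, g(w) = 1, so the integrand is
# e^{Kz} / (z - eps), with one simple pole at z = eps inside |z| = 1.
# The residue theorem then gives 2*pi*i*e^{K*eps} for comparison.
K, eps = 2.0, 0.1
val = contour_integral(lambda z: z, lambda z: z,
                       lambda w: np.ones_like(w), c=1.0, eps=eps, K=K)
print(val, 2j * np.pi * np.exp(K * eps))  # the two should agree closely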

Inquisitively,

Edwin
 
  • #2
I asked the same question some months ago. In fact, with the change of variable s = c + ix you get integrals of the form

[tex] \int_{-\infty}^{\infty} F(c+ix)\,e^{ixt}\,dx, [/tex]

together with a factor outside the integral proportional to e^{ct}. Now the problem is how to get an asymptotic expansion for the Fourier integral (transform) above. In fact, several number-theoretic functions have the form of an inverse Laplace transform evaluated at t = log x.
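
To make that change of variable concrete, here is a small numerical sketch; the test function F(s) = 1/(s+1) and the truncation parameters are my own choices for illustration.

Code:
import numpy as np

# Sketch of the identity above: with s = c + ix, the Bromwich inversion
#   f(t) = (1/(2 pi i)) int_{c-i inf}^{c+i inf} F(s) e^{st} ds
# becomes
#   f(t) = (e^{ct}/(2 pi)) int_{-inf}^{inf} F(c+ix) e^{ixt} dx.
# Test case: F(s) = 1/(s+1), whose inverse transform is f(t) = e^{-t}.

def inverse_laplace(F, t, c=1.0, X=200.0, n=400001):
    x = np.linspace(-X, X, n)                  # truncate the infinite range
    dx = x[1] - x[0]
    integrand = F(c + 1j * x) * np.exp(1j * x * t)
    return (np.exp(c * t) / (2.0 * np.pi)) * np.sum(integrand) * dx

t = 1.5
print(inverse_laplace(lambda s: 1.0 / (s + 1.0), t).real, np.exp(-t))
# the two values should agree to a couple of digits; the truncation at
# |x| = X is the main source of error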
 
  • #3
Quote:
the problem is how do you get an asymptotic expansion for the Fourier integral (transform) above...

There are a couple of papers I am currently reading related to methods of generating asymptotic series for multi-point Taylor, Laurent, and mixed Taylor/Laurent expansions. The particular paper in question generalizes two-point Taylor expansions to m-point Taylor expansions and allows one to "expand in Laurent series at some points, and Taylor series at others."

See link below for further details:

http://ftp.cwi.nl/CWIreports/MAS/MAS-E0402.pdf#search='asymptotic%20expansion%20multipoint%20taylor%20expansions'
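
For reference, the two-point Taylor expansion that the paper generalizes has the following shape (quoting the general form from memory; see the paper for the exact coefficient formulas):

[tex] f(z) = \sum_{n=0}^{\infty} \left( a_n + b_n z \right) (z - z_1)^n (z - z_2)^n, [/tex]

where the coefficients a_n and b_n are determined by the values and derivatives of f at the two base points z_1 and z_2.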


I'm not sure whether this helps with my problem, however, because the particular difficulty I have is that no combination of contours running through the saddle points seems to enclose the desired simple pole. For my problem this makes the method of steepest descent less attractive, though not necessarily impossible. It may, however, help you construct an asymptotic expansion for the Fourier integral transform listed above.
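
For what it's worth, locating the saddle points themselves is the easy part: they are the roots of h'(z) = 0. Here is a small sketch using Newton's method in the complex plane, with a toy phase function of my own choosing.

Code:
# Sketch: the saddle points needed for steepest descent are the roots of
# h'(z) = 0. Newton's method in the complex plane finds them quickly.
# The toy phase h(z) = z - z^3/3 (saddles at z = +1 and z = -1) is
# purely illustrative.

def newton_saddle(dh, d2h, z0, tol=1e-12, max_iter=50):
    z = z0
    for _ in range(max_iter):
        step = dh(z) / d2h(z)   # Newton step for the equation h'(z) = 0
        z -= step
        if abs(step) < tol:
            break
    return z

dh  = lambda z: 1.0 - z**2   # h'(z)  for h(z) = z - z^3/3
d2h = lambda z: -2.0 * z     # h''(z)

print(newton_saddle(dh, d2h,  0.5 + 0.5j))  # converges to z = +1
print(newton_saddle(dh, d2h, -0.5 - 0.5j))  # converges to z = -1

The hard part, as you say, is whether the steepest-descent paths through those saddles can be deformed to enclose the pole you need.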

Best Regards,

Edwin
 

1. What is the method of steepest descent?

The method of steepest descent is an iterative optimization algorithm used to find the minimum of a function. It involves taking steps in the direction of the steepest slope or gradient of the function, eventually converging to the minimum value.

2. How does the method of steepest descent work?

The method of steepest descent involves calculating the gradient of a function at a given point and then taking a step in the direction of the steepest descent. This process is repeated iteratively until the minimum value is reached.
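
A minimal sketch of that iteration in one dimension; the quadratic objective and step size below are arbitrary choices for illustration.

Code:
# Gradient descent on f(x) = (x - 3)^2 + 1: repeatedly step against the
# gradient until the iterates stop moving.

def gradient_descent(grad, x, lr=0.1, tol=1e-8, max_iter=10000):
    for _ in range(max_iter):
        step = lr * grad(x)   # move in the direction of steepest descent
        x -= step
        if abs(step) < tol:
            break
    return x

grad = lambda x: 2.0 * (x - 3.0)  # f'(x) for f(x) = (x - 3)^2 + 1
print(gradient_descent(grad, x=0.0))  # converges to the minimizer x = 3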

3. What are the advantages of using the method of steepest descent?

The method of steepest descent is relatively simple to implement and computationally efficient. It is also effective for finding the minimum value of a function, especially in high-dimensional spaces.

4. What are the limitations of the method of steepest descent?

The method of steepest descent may converge slowly or get stuck in local minima, especially for non-convex functions. It also requires the function to be differentiable and may not work well with noisy or ill-conditioned data.
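
The local-minimum caveat is easy to demonstrate: plain gradient descent on a non-convex function (the example below is an arbitrary illustrative choice) lands in different minima depending on the starting point.

Code:
# Plain gradient descent on the non-convex f(x) = x^4 - 3x^2 + x: the
# result depends on where the iteration starts.

def descend(grad, x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

grad = lambda x: 4.0 * x**3 - 6.0 * x + 1.0  # f'(x)
print(descend(grad, -2.0))  # ~ -1.30, the global minimum
print(descend(grad,  2.0))  # ~ +1.13, a local minimum only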

5. How is the method of steepest descent used in machine learning?

The method of steepest descent is commonly used in machine learning for optimizing the parameters of a model. It is used in gradient descent algorithms, which are a popular class of optimization methods used in machine learning.
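
As a small illustration of that use (with synthetic data and parameters of my own choosing), gradient descent on a mean-squared-error loss recovers the weights of a simple linear model.

Code:
import numpy as np

# Fit y ~ w*x + b by gradient descent on the mean-squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100)
y = 2.0 * x + 0.5 + 0.05 * rng.normal(size=100)  # true w = 2, b = 0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    err = w * x + b - y
    w -= lr * np.mean(2.0 * err * x)  # d(MSE)/dw
    b -= lr * np.mean(2.0 * err)      # d(MSE)/db
print(w, b)  # should recover roughly w ~ 2.0, b ~ 0.5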
