Infinite Differentiation and Integration?

Summary:
The discussion revolves around the convergence behavior of a function's derivatives and antiderivatives taken infinitely many times, with four potential outcomes identified: divergence to infinity, convergence to zero, convergence to a function, or oscillation between functions. The author seeks reading material on this topic and techniques for approximating integrals that cannot be expressed in closed form, particularly through repeated integration by parts. They provide an example involving the integral of sin(x)/x, demonstrating a successful asymptotic expansion related to the sine integral function, Si(x). The conversation touches on the complexity of determining convergence for other functions and suggests that examining the nth term of the series can help assess convergence. Overall, the thread highlights the intricate relationship between differentiation, integration, and convergence in mathematical analysis.
Manchot
I was just wondering whether there is any theory, or are there any theorems, concerning the convergence of a function's derivative/antiderivative taken an infinite number of times. It seems to me there are four ways it can go: the sequence blows up to infinity, approaches zero, approaches some function, or alternates between several functions. Based on my preliminary investigations, there are even functions that do a combination of these. Anyway, I'm basically looking for some reading material on the subject, and was hoping that one of you has read something about it.
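As a concrete illustration of these four behaviors, here is a minimal sketch (my own, assuming Python with sympy is available; none of the function choices come from the thread) that repeatedly differentiates a few sample functions:

```python
# A minimal sketch (assuming sympy is installed) of the four behaviors
# under repeated differentiation: the nth derivative of exp(2x) is
# 2**n * exp(2x) (blows up), a polynomial's derivatives eventually vanish,
# exp(x) is a fixed point, and sin(x) cycles with period 4.
import sympy as sp

x = sp.symbols('x')

examples = [
    ("blows up", sp.exp(2 * x)),
    ("approaches zero", x**3),
    ("approaches a function (fixed point)", sp.exp(x)),
    ("alternates (period 4)", sp.sin(x)),
]

for label, f in examples:
    # sp.diff(f, x, n) returns the nth derivative of f with respect to x.
    derivs = [sp.diff(f, x, n) for n in range(5)]
    print(f"{label}: {derivs}")
```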

Furthermore, I'm looking for reading material on techniques for evaluating/approximating integrals that have no closed form, without resorting to summing sampled values over the interval (i.e., numerical quadrature). Any help is vastly appreciated.
 
Ok, actually, my two questions are related. Basically, what I'm trying to find is a method for approximating integrals whose integrands are a product of two "simpler" functions, using integration by parts an infinite number of times. I've had success with certain ones, such as the integrals corresponding to Si(x) and Ci(x). For example:

$$\int \frac{\sin x}{x}\,dx
= -\frac{\cos x}{x} - \int \frac{\cos x}{x^2}\,dx
= -\frac{\cos x}{x} - \frac{\sin x}{x^2} - 2\int \frac{\sin x}{x^3}\,dx
= \cdots$$

Obviously, this differs from the actual Si(x) only by a constant, but that's not really the point. The point is that with each integration by parts, the sin/cos factor stays bounded while the power of x in the denominator keeps growing, so each successive integral is smaller than the one before it for x greater than 1, at least initially (the factorially growing coefficients eventually take over, so the full series ultimately diverges for any fixed x). Upon graphing the partial sums, I found that they do form an asymptotic expansion for Si(x) (when the appropriate constant is added, of course).
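As a numerical sanity check, here is a rough sketch (assuming scipy is available; the helper name ibp_partial_sum and the choice x = 10 are hypothetical, for illustration only). Continuing the integration by parts, term n of the expansion is ±(n−1)!·cos(x) or sin(x) divided by xⁿ, with the trig factor alternating cos, sin and the sign pattern −, −, +, + repeating; the sketch compares π/2 plus the partial sum against scipy's Si(x):

```python
# Rough check of the repeated integration-by-parts expansion of
# the antiderivative of sin(x)/x against scipy's Si(x).
import math
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

def ibp_partial_sum(x, n_terms):
    """Sum the first n_terms of the expansion.

    Term n (n = 1, 2, ...) is sign * (n-1)! * trig(x) / x**n, where trig
    alternates cos, sin, cos, sin, ... and the signs repeat -, -, +, +.
    """
    total = 0.0
    for n in range(1, n_terms + 1):
        trig = math.cos(x) if n % 2 == 1 else math.sin(x)
        sign = -1.0 if ((n - 1) // 2) % 2 == 0 else 1.0
        total += sign * math.factorial(n - 1) * trig / x**n
    return total

x = 10.0
si_exact, _ = sici(x)
for n in (2, 4, 8, 12, 20):
    approx = math.pi / 2 + ibp_partial_sum(x, n)  # Si(x) = pi/2 + expansion
    print(f"N={n:2d}  approx={approx:.10f}  error={abs(approx - si_exact):.2e}")
```

For fixed x the error stops improving once n approaches x and eventually grows again, the signature of an asymptotic rather than convergent series.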

However, for other functions, determining convergence isn't so simple. It occurred to me that if the repeated derivatives of one of the multiplied functions approach zero, and the repeated antiderivatives of the other don't grow, then this type of series will converge. That's why I'm trying to find the radius of convergence.
 
I think you may find this topic covered, in roughly the context you're looking for, in any introductory analysis book that treats sequences and series. Although my professor didn't do much with it in our class, our book had some material on convergence and uniform convergence of sequences of functions, and, I believe, of their derivatives too. It preceded the section on power series, which seems to be what you're after with the radius of convergence.
 
The solution is simple: write out the nth term of the expansion and treat it as an ordinary infinite series, then apply the standard tests for convergence, etc. Your case of "if the derivative gets smaller and the integral stays similar..." is just a special case of this method. So is "the derivative stays similar and the integral gets smaller"... or both growing separately while their product shrinks... :-p

P.S. Don't forget that after N integrations by parts you have the sum of the first N terms plus a remainder integral (the integral of the Nth term); make sure you account for both together when checking convergence.
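For instance, applied to the sin(x)/x expansion above (a sketch, bounding the trig factor by 1), the nth term has magnitude (n−1)!/xⁿ, and the ratio test makes the behavior explicit:

$$\left|\frac{a_{n+1}}{a_n}\right| = \frac{n!/x^{n+1}}{(n-1)!/x^{n}} = \frac{n}{x} \longrightarrow \infty \quad (n \to \infty),$$

so the full series diverges for every fixed x. The terms shrink only while n < x, which is exactly why truncating near n ≈ x gives a good asymptotic approximation.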
 
