A question about improper integrals

  • Thread starter saltydog
  • Tags
    Integrals
In summary, the conversation discusses the convergence of the integral \int_0^{\pi/2} ln[sin(x)]dx and the possibility of separating the region around an asymptote into a convergent and a divergent part. A proposition is made: for a function f(x) on the interval [a,b) whose limit as x approaches b is infinity, the integral converges only if there exists a point x_c in [a,b) where |f(x)| is less than 1/|x-b| for all x in [x_c,b). Counter-examples given later in the thread show that this proposition is not true in general.
  • #1
saltydog
Science Advisor
Homework Helper
The following is an outgrowth from a problem I encountered in the Homework section concerning the convergence of:

[tex]\int_0^{\pi/2} ln[sin(x)]dx[/tex]

I feel there is some way of separating the region around an asymptote, say x=b, into a region which yields a convergent integral and a region which does not. My initial suspicion is that the boundary is marked by the functions:

[tex]g1(x)=\frac{1}{x-b}\quad\text{and}\quad g2(x)=\frac{1}{b-x}[/tex]

This is shown in red in the attached plot along with functions in blue which have a convergent improper integral and one in green which does not.

I'm considering the following proposition:

Let f(x) be bounded on the interval [a,b) with:

[tex]\mathop\lim\limits_{x\to b} f(x)=\infty[/tex]

Then:

[tex]\int_a^b f(x)dx[/tex]

converges iff the following is true:

[tex]\exists x_c \in [a,b) : |f(x)|<\frac{1}{|x-b|} \quad\forall x\in [x_c,b)[/tex]

That is, if f(x) ever strays onto or outside the region bounded by g1 and g2 and remains there as x goes to b, then the integral necessarily diverges. If it remains interior to the region, it converges.

My initial thoughts are about the integrals:

[tex]\int_0^1 \frac{1}{x^k}dx[/tex]

which converge if k<1 and diverge otherwise.
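The k < 1 threshold is easy to watch numerically. The sketch below (plain Python, my own illustration, using the exact antiderivatives rather than numerical quadrature) tracks the integral of x^(-k) from epsilon to 1 as epsilon shrinks:

```python
import math

def tail_integral(k, eps):
    """Exact value of the integral of x**(-k) over [eps, 1],
    computed from the antiderivative."""
    if k == 1.0:
        return -math.log(eps)                  # antiderivative ln(x)
    return (1.0 - eps**(1.0 - k)) / (1.0 - k)  # antiderivative x**(1-k)/(1-k)

for k in (0.5, 1.0, 2.0):
    # shrink the lower limit toward 0 and watch the value
    print(k, [tail_integral(k, 10.0**-n) for n in (2, 4, 8)])
# k = 0.5 settles near 2 = 1/(1-k); k = 1 and k = 2 grow without bound
```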

Can such a general statement be made for arbitrary functions, no matter how complex?

I'll attempt to prove it false by finding a counter-example but I've found none so far.

What do you guys think?
 

Attachments

  • convergence domain.JPG (5.5 KB)
  • #2
For x near 0, sin(x) is approx. x. The integral of ln(x) diverges as x ->0, therefore so does the integral of ln(sin(x)).
 
  • #3
mathman said:
For x near 0, sin(x) is approx. x. The integral of ln(x) diverges as x ->0, therefore so does the integral of ln(sin(x)).

Hello Mathman. Perhaps I'm missing something. My understanding is:

[tex]\int_0^{\pi/2} ln[Sin(x)]dx=-\frac{\pi}{2}ln(2)[/tex]
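For reference, a direct numeric check agrees with this closed form. The sketch below is my own illustration (plain Python; the midpoint rule is used so that x = 0 is never sampled):

```python
import math

def midpoint_ln_sin(n=200_000):
    """Midpoint-rule estimate of the integral of ln(sin x) over (0, pi/2).
    The midpoint rule never samples x = 0, so the logarithmic
    singularity there is harmless."""
    h = (math.pi / 2) / n
    return h * sum(math.log(math.sin((i + 0.5) * h)) for i in range(n))

exact = -(math.pi / 2) * math.log(2)   # about -1.0888
print(midpoint_ln_sin(), exact)
```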
 
  • #4
saltydog said:
Let f(x) be bounded on the interval [a,b) with:

[tex]\mathop\lim\limits_{x\to b} f(x)=\infty[/tex]
I think you mean
Let f(x) be bounded on the interval [a,c] for any c in [a,b)
Clearly f(b-)=infinity implies f is not bounded on [a,b)

Sometimes it helps to think of improper integrals of types one and two as being almost the same thing in different coordinates. Let u = log(x):
[tex]\int_0^h \log(x) dx=\int_{-\infty}^{log(h)} u e^u du=-h+h\log(h)[/tex]
so these diverge or converge together
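The substitution can be checked numerically as well. In the sketch below (plain Python, my own illustration; the cutoff u = -40 for the infinite tail is an arbitrary choice), both forms agree with the closed form h*log(h) - h:

```python
import math

h = 0.3
exact = h * math.log(h) - h        # antiderivative x*ln(x) - x; boundary term at 0 vanishes

n = 200_000

# type-two improper integral: ln(x) on (0, h), midpoint rule
dx = h / n
type_two = dx * sum(math.log((i + 0.5) * dx) for i in range(n))

# type-one improper integral after u = log(x): u*e^u on (-inf, log h),
# truncated at u = -40 (the integrand is negligible below that)
a, b = -40.0, math.log(h)
du = (b - a) / n
type_one = du * sum((a + (i + 0.5) * du) * math.exp(a + (i + 0.5) * du) for i in range(n))

print(exact, type_two, type_one)
```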
 
  • #5
lurflurf said:
I think you mean
Let f(x) be bounded on the interval [a,c] for any c in [a,b)
Clearly f(b-)=infinity implies f is not bounded on [a,b)

Yes Lurflurf, my mediocrity in Analysis shines through. I make no bones about it. Thanks for clearing that up for me. You know, I may not be gifted in Analysis but I am a bit persistent. :smile:

What about the general suitability of the proposition? Rubbish or noteworthy? I shall continue to investigate it unless a counter-example is found.
 
  • #6
Consider f(x) = 1/(2x-2b). Then
|f(x)| < 1/|x-b|,
but its integral diverges.
You need
f(x) < 1/(x-b)^(1+epsilon)
for some positive epsilon.
Once you do, you will have a result that is a special case of the comparison test:
1/x^(1-epsilon) converges on (0,1) if epsilon is positive, thus anything smaller than it will converge.
 
  • #7
lurflurf said:
Consider f(x) = 1/(2x-2b). Then
|f(x)| < 1/|x-b|,
but its integral diverges.
You need
f(x) < 1/(x-b)^(1+epsilon)
for some positive epsilon.
Once you do, you will have a result that is a special case of the comparison test:
x^(1+epsilon) converges on (0,1) if epsilon is positive, thus anything smaller than it will converge.

Alright. I understand the counter-example, but the following is not clear to me at all:

[tex]f(x)<\frac{1}{(x-b)^{1+\epsilon}}[/tex]

Also the part:

x^(1+epsilon) converges on (0,1) if epsilon is positive thus any thing smaller than it will converge.

Is that a typo? I mean that converges even if epsilon is zero.

I'll spend some time with this. It's interesting for me and thanks for helping me. :smile:
 
  • #8
Multiplying by 1/2 is an unsubtle way of breaking this boundary; you might try to patch your conjecture by allowing for a constant times your "boundary function", like a big-O order-of-growth condition. This will fail also. [tex]0<\frac{1}{x(-\log{x})}<\frac{1}{x}[/tex] on (0, 1/e), but its integral diverges here. You can keep finding functions of smaller and smaller order like this with divergent integrals in an appropriate interval around zero by adding more logs (and appropriate absolute-value signs).

On the other hand the integral of [tex]\frac{1}{x(-\log{x})^{1+\epsilon}}[/tex] on (0,0.5) will converge for any epsilon>0. Yet [tex]\frac{1}{x^{1-\delta}}<\frac{1}{x(-\log{x})^{1+\epsilon}}[/tex] for any delta>0 when x is close enough to 0. There's no clear boundary either way.
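Both of these claims can be verified with the exact antiderivatives obtained by substituting t = -log(x); the following plain-Python sketch is my own illustration (taking epsilon = 1 in the convergent case):

```python
import math

LN2 = math.log(2)

def I_div(eps):
    """Exact value of the integral of 1/(x*(-ln x)) over [eps, 0.5];
    antiderivative -ln(-ln x)."""
    return math.log(math.log(1 / eps) / LN2)

def I_conv(eps, epsilon=1.0):
    """Exact value of the integral of 1/(x*(-ln x)**(1+epsilon)) over [eps, 0.5];
    antiderivative (-ln x)**(-epsilon) / epsilon."""
    return (LN2 ** -epsilon - math.log(1 / eps) ** -epsilon) / epsilon

for n in (10, 100, 300):
    eps = 10.0 ** -n
    print(n, I_div(eps), I_conv(eps))
# I_div keeps growing (like ln ln(1/eps)); I_conv settles near 1/ln 2
```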
 
  • #9
shmoe said:
Multiplying by 1/2 is an unsubtle way of breaking this boundary; you might try to patch your conjecture by allowing for a constant times your "boundary function", like a big-O order-of-growth condition. This will fail also. [tex]0<\frac{1}{x(-\log{x})}<\frac{1}{x}[/tex] on (0, 1/e), but its integral diverges here. You can keep finding functions of smaller and smaller order like this with divergent integrals in an appropriate interval around zero by adding more logs (and appropriate absolute-value signs).

On the other hand the integral of [tex]\frac{1}{x(-\log{x})^{1+\epsilon}}[/tex] on (0,0.5) will converge for any epsilon>0. Yet [tex]\frac{1}{x^{1-\delta}}<\frac{1}{x(-\log{x})^{1+\epsilon}}[/tex] for any delta>0 when x is close enough to 0. There's no clear boundary either way.

Thanks Shmoe. I'll work with it. My motivation for this is simple:

I simply wish to learn what the defining characteristic of a function near its asymptote is that determines whether the integral around it converges. Is there no way of "extracting" this defining characteristic (in general terms applicable to all functions)? An example scenario I imagine would be: "if the first derivative does this and the second and higher ones do that, then the integral converges".
 
  • #10
I wish to revise my conjecture above (I can do that, right?). Perhaps "scrap it" would be more appropriate. The plot below is for the four functions:

[tex]\text{green:}\quad\frac{1}{10x}\quad
\text{black:}\quad \frac{1}{x^{4/5}}\quad
\text{blue:}\quad\frac{-1}{x\ln(x)}\quad
\text{red:}\quad \frac{1}{x}[/tex]


As is evident in the plot, all the other functions are inside [itex]1/x[/itex], yet only the improper integral of [itex]x^{-4/5}[/itex] converges. Note also that [itex]\frac{-1}{x\ln(x)}[/itex] and [itex]\frac{1}{10x}[/itex] are inside the black curve, yet the improper integrals of these two functions still diverge. However, close to the y-axis (at 10^{-6}), the black curve crosses over both of these functions and thus comes closer to the y-axis.

I don't know about you guys, but when I look at this plot it's telling me something, yet I can't figure out what it is. :confused:
 

Attachments

  • improper integral study.JPG (7.1 KB)
  • #11
saltydog said:
Alright. I understand the counter-example but it's not clear at all to me about the:

[tex]f(x)<\frac{1}{(x-b)^{1+\epsilon}}[/tex]

Also the part:

x^(1+epsilon) converges on (0,1) if epsilon is positive thus anything smaller than it will converge.
Is that a typo? I mean that converges even if epsilon is zero.

I'll spend some time with this. It's interesting for me and thanks for helping me. :smile:
[tex]\int_0^1 x^{-1+\epsilon} dx[/tex]
converges when 0 < epsilon;
it diverges otherwise (in particular, it diverges if epsilon = 0).
It is a well-known result that if
f(x) is continuous and bounded on [a,c] for all c in (a,b), and
[tex]\int_a^b f(x) dx[/tex]
converges, and g(x) <= f(x) (presuming for simplicity f,g >= 0), then
[tex]\int_a^b g(x) dx[/tex]
converges.
The fact that some function is less than another whose integral diverges gives no information.
Also, it is tempting to believe that x^-1 is the border between convergent and divergent functions, but it is not really true. There is a lot going on. Oh, and throw 1/x^.9 onto your graph.
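A sketch of that asymmetry, putting the suggested 1/x^0.9 next to the 1/(10x) from the plot above (plain Python with exact antiderivatives; my own illustration, not part of the original thread):

```python
import math

def tail_pow(eps):
    """Exact integral of x**(-0.9) over [eps, 1]; antiderivative 10*x**0.1."""
    return 10.0 * (1.0 - eps ** 0.1)

def tail_hyp(eps):
    """Exact integral of 1/(10*x) over [eps, 1]; antiderivative ln(x)/10."""
    return -math.log(eps) / 10.0

for n in (5, 50, 300):
    eps = 10.0 ** -n
    print(n, tail_pow(eps), tail_hyp(eps))
# tail_pow is capped at 10, even though 1/x^0.9 is the *larger* function
# on (1e-10, 1); tail_hyp grows without bound despite the factor 1/10.
```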
 
  • #12
saltydog said:
Alright. I understand the counter-example but it's not clear at all to me about the:

[tex]f(x)<\frac{1}{(x-b)^{1+\epsilon}}[/tex]

Also the part:

x^(1+epsilon) converges on (0,1) if epsilon is positive thus anything smaller than it will converge.
Is that a typo? I mean that converges even if epsilon is zero.

I'll spend some time with this. It's interesting for me and thanks for helping me. :smile:
Yes, it should be 1-epsilon.
 
  • #13
Well, I found two good sites on the web concerning convergence tests for improper integrals. Yeah, I should know these already, or forgot them, or whatever. They are:

First link to convergence test

Second link to convergence test

What gets me, and I'm not trying to criticize anyone here 'cause I'm in no position to do so, is that what led me to all this was the Homework Forum question: does the following integral converge:

[tex]\int_0^{\pi/2} ln[Sin(x)]dx[/tex]

Well, after reviewing the convergence tests above, it's obvious just by inspection that it converges. And I don't need any colored graphics either. That is:

[tex] Sin(x)\approx x \quad\text{when}\quad x\approx 0[/tex]

Thus:

[tex] ln[Sin(x)]\approx ln(x)\quad\text{when}\quad x\approx 0[/tex]

And:

[tex]\int_0^a \ln(x)dx=\left(x\ln(x)-x\right)\Big|_0^a[/tex]

Via L'Hopital's rule, x ln(x) → 0 as x → 0, so that evaluates to a ln(a) - a. Thus the integral converges, and so the one with Sin(x) converges too.
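The only delicate step is the boundary term at zero; a quick check (plain Python, my own illustration) watches x ln(x) vanish, which is what L'Hopital's rule guarantees:

```python
import math

for x in (1e-2, 1e-6, 1e-12):
    print(x, x * math.log(x))   # shrinks toward 0 as x -> 0
```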

I'm still not satisfied though: I still feel there is some defining characteristic of functions with convergent improper integrals around their asymptotes that can be determined without using these tests or evaluating the antiderivative.
 
  • #14
My apologies. I wasn't thinking straight. Saltydog got it right.
 

1. What is an improper integral?

An improper integral is an integral where one or both of the limits of integration are infinite or the integrand is undefined at one or more points within the interval of integration.

2. How do you evaluate an improper integral?

To evaluate an improper integral, replace the problematic limit of integration with a variable, integrate, and then take the limit. To determine convergence or divergence without evaluating, you can use tests such as the direct comparison test, the limit comparison test, or the p-test.

3. Can improper integrals have both infinite limits of integration?

Yes, some improper integrals have both infinite limits of integration. In these cases, the integral is called doubly improper and must be evaluated using the appropriate convergence or divergence tests.

4. Are improper integrals used in real-world applications?

Yes, improper integrals are used in various fields such as physics, engineering, and economics to model real-world situations. They are especially useful when dealing with infinite or undefined quantities.

5. Is there a difference between an improper integral and a regular integral?

Yes, there is a difference between an improper integral and a regular integral. A regular integral has finite limits of integration and a continuous integrand, while an improper integral has at least one infinite limit or an integrand that is discontinuous at some point within the interval of integration.
