Why Can't I Integrate g(x) Over [1, Infinity) When It Also Approaches Zero?

  • Context: High School
  • Thread starter: Saracen Rue
  • Tags: Infinity, Integrating

Discussion Overview

The discussion revolves around the integration of two functions, ##f(x) = \frac{1}{e^x - 1}## and ##g(x) = \ln\left(\frac{1}{x} + 1\right)##, over the interval ##[1, \infty)##. Participants explore why the integral of ##g(x)## does not yield a numerical answer despite both functions approaching zero as x approaches infinity. The conversation touches on concepts of convergence, the behavior of functions at infinity, and the implications for integration.

Discussion Character

  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants note that it is not sufficient for a function to approach zero; it must do so "sufficiently fast" for the integral to converge.
  • Others question how the rate of approaching zero affects the area under the curve, arguing that as long as a function approaches zero, the area should eventually become negligible.
  • A participant introduces the concept of functions that approach zero but still contribute to an infinite area, referencing the series ##\Sigma 1/n## as an example.
  • Another participant provides an example of a piecewise constant function that converges to zero yet has a divergent integral, illustrating that height alone does not determine the area.
  • One participant reflects on the derivative of ##g(x)##, suggesting that it approaches zero at a faster rate than ##g(x)## itself, raising the question of whether this implies ##g(x)## approaches a non-zero constant before reaching zero.
  • Another participant counters this by explaining that if an integrand approaches a non-zero constant, the integral would diverge, while functions that approach zero can still lead to divergent integrals under certain conditions.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the conditions under which the integral of a function converges or diverges. Multiple competing views remain regarding the implications of a function approaching zero and the behavior of the area under the curve.

Contextual Notes

Limitations in understanding the behavior of functions at infinity and the conditions for convergence of integrals are evident. The discussion highlights the complexity of these mathematical concepts without resolving the underlying questions.

Saracen Rue
Say we have two functions: ##f\left(x\right)=\frac{1}{e^x-1}## and ##g\left(x\right)=\ln \left(\frac{1}{x}+1\right)##. Let us find the limit of both functions as ##x## approaches infinity:
##\lim_{x \rightarrow \infty} {f(x)} = \frac{1}{e^\infty-1} = \frac{1}{\infty} = 0## Therefore as ##x \rightarrow \infty##, ##f(x) \rightarrow 0##.
##\lim_{x \rightarrow \infty} {g(x)} = \ln \left(\frac{1}{\infty}+1\right) = \ln \left(0+1\right) = 0## Therefore as ##x \rightarrow \infty##, ##g(x) \rightarrow 0##.

Now, when we integrate ##f(x)## over the domain ##[1, \infty)##, it works perfectly fine and gives a result of approximately 0.45868. This makes sense because the area being added under the graph keeps getting smaller as ##f(x) \rightarrow 0##.

However, when I try to integrate ##g(x)## over the same domain, my calculator won't give me an answer. ##g(x)## approaches 0 just as ##f(x)## does, meaning that the enclosed area should likewise keep getting smaller until the increments of area being added become negligible. So why can't I get a numerical answer when integrating ##g(x)##?
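A quick numerical sketch (plain Python; the closed-form antiderivatives below can be verified by differentiation, and are not taken from the thread itself) shows what the calculator is running into: the partial integrals of ##f## settle down, while those of ##g## keep growing.

```python
import math

# Antiderivatives, each verifiable by differentiation:
#   d/dx ln(1 - e^(-x))            = 1/(e^x - 1)  = f(x)
#   d/dx [(x+1)ln(x+1) - x ln x]   = ln(1 + 1/x)  = g(x)
def F(x):
    return math.log(1.0 - math.exp(-x))

def G(x):
    return (x + 1) * math.log(x + 1) - x * math.log(x)

for N in (10, 10**3, 10**6):
    # Partial integrals over [1, N]
    print(f"N = {N:>7}:  int f = {F(N) - F(1):.5f}   int g = {G(N) - G(1):.5f}")
```

The ##f## column stabilises near 0.45868, while the ##g## column keeps climbing, roughly like ##\ln N##.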
 
It is not sufficient that the function goes to zero. It must go to zero sufficiently fast.
 
Orodruin said:
It is not sufficient that the function goes to zero. It must go to zero sufficiently fast.
Could you please elaborate on what you mean? How could the function not going to zero 'sufficiently fast' change the fact that it approaches zero as ##x## approaches infinity? Integrating gives the area under the graph, and we know that as ##x## approaches infinity the function approaches 0. This means there has to be a point at which the 'height' of the function above the x-axis becomes negligible (that is, equal to zero), and therefore the area under the graph must stop having increments added onto it.
 
Saracen Rue said:
Could you please elaborate on what you mean? How could the function not going to zero 'sufficiently fast' change the fact that it approaches zero as ##x## approaches infinity? Integrating gives the area under the graph, and we know that as ##x## approaches infinity the function approaches 0. This means there has to be a point at which the 'height' of the function above the x-axis becomes negligible (that is, equal to zero), and therefore the area under the graph must stop having increments added onto it.

A function tending to 0 is not the same as a function becoming 0. Your functions never actually reach 0, hence there are always "increments" being added.

Look up the harmonic series ##\sum 1/n## for a clue as to what is going on here.
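For concreteness, here is a small sketch (plain Python) of that hint: the terms ##1/n## shrink to zero, yet the partial sums never stop growing.

```python
# The terms 1/n go to zero, but the partial sums grow without bound
# (roughly like ln n): the series diverges.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 10**3, 10**5):
    print(f"n = {n:>6}:  term 1/n = {1.0/n:.6f}   partial sum = {harmonic(n):.4f}")
```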
 
Saracen Rue said:
This means there has to be a point at which the 'height' of the function above the x-axis becomes negligible (that is, equal to zero), and therefore the area under the graph must stop having increments added onto it.
No it doesn't. This argument is not rigorous. Even if the function value decreases towards zero, it does not mean that the area will. Consider the piecewise constant function that takes the value ##1/2^n## between ##x = 2^n## and ##x = 2^{n+1}##. Clearly, this function converges to zero as ##x \to \infty##. However, integrating the function from ##2^n## to ##2^{n+1}## gives ##2^n/2^n = 1##. Therefore, the integral from 1 to ##2^n## is equal to ##n## and therefore the integral diverges when taking the upper limit to infinity. When you compute an area, it is not only a matter of "height", you also need to consider the width of the region and your region is infinite.
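A minimal sketch (plain Python) of this counterexample confirms the arithmetic: each step of the staircase contributes area exactly 1, no matter how small its height gets.

```python
# h(x) = 1/2**n on [2**n, 2**(n+1)): the height halves from block to
# block, but the width doubles, so every block has area exactly 1.
def block_area(n):
    height = 1.0 / 2**n
    width = 2**(n + 1) - 2**n  # equals 2**n
    return height * width

total = sum(block_area(n) for n in range(30))
print(total)  # 30 blocks give area 30; more blocks, more area, without bound
```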
 
I'm sorry for being so dense, everyone; I haven't even heard of things like piecewise constant functions. I guess this is just beyond my current level of understanding.

There is one thing I stumbled across, though: the derivative of ##g\left(x\right)=\ln \left(\frac{1}{x}+1\right)## is ##g'(x)=\frac{-1}{x(x+1)}##. The quadratic denominator implies that, as ##x## approaches infinity, the gradient function approaches 0 at a faster rate than ##g(x)## itself does. Could you then extrapolate from this and state that ##g(x)## approaches a non-zero constant value before approaching 0, and thus the area under the graph will be infinite?
 
Saracen Rue said:
Could you then extrapolate from this and state that ##g(x)## approaches a non-zero constant value before approaching 0, and thus the area under the graph will be infinite?
No.

Think of it this way: if the integrand approached a constant non-zero value ##c_0## and the upper limit of integration were set to ##x = N##, then the integral would grow as ##\sim c_0 N## as ##N \to \infty##. If the integrand instead goes to zero, the integral grows more slowly. However, there are many functions that grow slower than ##N## as ##N \to \infty## but still tend to infinity, ##\log(N)## being one such example.
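A short numeric check (plain Python; the antiderivative used below is not from the thread but can be verified by differentiation, and the constant ##c_0 = 0.01## is a hypothetical comparison value) ties this together: ##x \cdot g(x) \to 1##, so ##g(x)## decays like ##1/x##, exactly the borderline rate of the harmonic series, and its partial integrals grow like ##\log N##, far slower than ##c_0 N## but still unbounded.

```python
import math

# g(x) = ln(1 + 1/x) behaves like 1/x for large x: x * g(x) -> 1.
for x in (10.0, 1e3, 1e6):
    print(f"x = {x:>9}:  x*g(x) = {x * math.log(1 + 1/x):.6f}")

# Partial integral of g over [1, N], via the antiderivative
# (x+1)ln(x+1) - x ln x: it grows like log(N) -- unbounded, just slowly.
c0 = 0.01  # hypothetical constant floor, for comparison
for N in (10**2, 10**4, 10**6):
    g_part = (N + 1) * math.log(N + 1) - N * math.log(N) - 2 * math.log(2)
    print(f"N = {N:>8}:  int g = {g_part:8.4f}   c0*N = {c0 * N:10.1f}")
```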
 
