How Do You Prove the Limit of a Riemann Summation Integral?

ddddd28
I tried to find the integral of ##x^m## using the definition of the Riemann sum. Everything went smoothly until the limit of ##\frac{\sum_{n=1}^k n^m}{k^{m+1}}##, as ##k## approaches infinity, showed up.
It is clear that it approaches ##\frac{1}{m+1}##, but it has to be proved, of course.
One could deduce that fact easily by substituting the closed formula for the sum of powers, involving the Bernoulli numbers, but I think that would come out of the blue, considering that this formula is much more advanced than the initial integral.
At any rate, I had several ideas, all of which basically fell short, like trying to rearrange the terms of the sum so that induction might be used, or proving that the degree of the sum is ##m+1## and that the leading coefficient is ##\frac{1}{m+1}##, so that when taking the limit all the terms except the first one go to zero, without knowing them explicitly.
I managed to prove that the degree must be greater than ##m## and less than or equal to ##m+1##, but then got stuck.
Any suggestions on how to approach the limit?
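Not a proof, of course, but a quick numerical sanity check of the claimed value is easy to do (a rough Python sketch; the function name and the choices of ##m## and ##k## below are arbitrary):

```python
# Compare sum_{n=1}^{k} n^m / k^(m+1) with 1/(m+1) for a few m and k.
def ratio(m, k):
    return sum(n**m for n in range(1, k + 1)) / k**(m + 1)

for m in (1, 2, 3, 5):
    for k in (10, 100, 10_000):
        print(m, k, ratio(m, k), 1 / (m + 1))
```

The ratios settle down near ##\frac{1}{m+1}## as ##k## grows, which is what the proof should confirm.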
 
ddddd28 said:
One could deduce that fact easily by substituting the closed formula for the sum of powers, involving the Bernoulli numbers, but I think that would come out of the blue, considering that this formula is much more advanced than the initial integral.
It's a matter of taste, but my taste disagrees. Closed-form summation would be a very natural method. There is a strong analogy between the subject of Calculus and the subject of The Calculus Of Finite Differences. The finite difference operator ##\triangle f(x) = f(x+1) - f(x)## is similar to a derivative. Summation is similar to integration, and summation is done by "anti-differencing". (Summation formulas are often taught on a case-by-case basis, and this obscures the general idea that ##\sum_{n=1}^k f(n) = F(k+1) - F(1)## where ##F(x)## is a function such that ##\triangle F(x) = f(x)##. In other words, ##F(x)## is an "anti-difference" of ##f(x)##.)
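As a concrete instance of the anti-difference idea (a small example of my own, just to illustrate): take ##f(x) = x^{(2)} = x(x-1)##, where ##x^{(2)}## denotes the falling factorial. Then ##F(x) = \frac{x^{(3)}}{3} = \frac{x(x-1)(x-2)}{3}## is an anti-difference, since ##\triangle F(x) = \frac{(x+1)x(x-1) - x(x-1)(x-2)}{3} = x(x-1)##. Hence ##\sum_{n=1}^k n(n-1) = F(k+1) - F(1) = \frac{(k+1)k(k-1)}{3}##, the discrete analogue of ##\int x^2 \, dx = \frac{x^3}{3}##.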

The Calculus Of Finite Differences is like Calculus without all the complications of limits, so it strikes me as more elementary than Calculus.

I managed to prove that the degree must be greater than ##m## and less than or equal to ##m+1##
What do you mean by "the degree"? Did you prove the sum was a polynomial? Or did you only show the sum was bounded by polynomials?
 
By "degree" I meant the greatest power of the polynomial.
However, I didn't prove that the sum can be represented as a polynomial, since the notion of degree can be extended beyond polynomials, as can be understood from Wikipedia. I used the limit of ##\frac{\ln f(x)}{\ln x}## to find the degree, or at least to bound it, for now.
Again, I came up with this strategy to avoid the necessity of evaluating all the terms, which are going to cancel out anyway.
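For what it is worth, that log-ratio estimate can also be checked numerically (a rough Python sketch with arbitrary choices of ##m## and ##k##; the function name is mine):

```python
import math

# Estimate the "degree" of f(k) = sum_{n=1}^{k} n^m via ln(f(k)) / ln(k).
def log_degree(m, k):
    s = sum(n**m for n in range(1, k + 1))
    return math.log(s) / math.log(k)

for m in (2, 3):
    for k in (10, 1_000, 100_000):
        print(m, k, log_degree(m, k))  # creeps toward m + 1, though slowly
```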

Regarding the anti-difference, how do you know that such a function exists in this case, and how is it found?
 
ddddd28 said:
Regarding the anti-difference, how do you know that such a function exists in this case, and how is it found?

As you know, various derivations of the summation formulae for ##\sum_{n=1}^k n^m## can be found on the web. To take the most general viewpoint, use the analog of the Taylor series in The Calculus Of Finite Differences. The Wikipedia article https://en.wikipedia.org/wiki/Finite_difference calls this "Newton's Series". An online edition of George Boole's book ( https://ia801009.us.archive.org/6/items/calculusoffinite032268mbp/calculusoffinite032268mbp.pdf ) states a special case of it as eq. (5), on page 25 of the pdf (page 11 of the book):

##\phi(x) = \phi(0) + \triangle \phi(0)\, x + \frac{\triangle^2 \phi(0)}{2} x^{(2)} + \frac{\triangle^3 \phi(0)}{2 \cdot 3} x^{(3)} + ... ## (eq. 5)

where Boole's exponents denote "falling factorials" of ##x##, not ordinary exponents. e.g. ##x^{(3)} = x(x-1)(x-2)##.

Better notation would make it clear that we take the differences of ##\phi(x)## before evaluating the resulting function at ##x=0##. It would be clearer to write "##\triangle^2 \phi(x)|_{x=0}##" instead of "##\triangle^2 \phi(0)##" etc., but the same sort of bad notation is found in many modern calculus books when they present Taylor's series.

A general procedure for getting a formula for ##\sum_{n=0}^k n^m## is to let ##\phi(k) = \sum_{n=0}^k n^m## and evaluate the right hand side of eq. 5 to get the formula.
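If it helps, here is a rough Python sketch of that procedure (the function names and the test value ##m = 3## are my own choices): it tabulates ##\phi##, takes repeated forward differences at ##k = 0##, and then evaluates the right hand side of eq. 5 with falling factorials.

```python
from math import factorial

def falling(x, j):
    """Falling factorial x^(j) = x (x - 1) ... (x - j + 1)."""
    out = 1
    for i in range(j):
        out *= x - i
    return out

def newton_coeffs(m):
    """Return [Delta^j phi(0) for j = 0 .. m+1], where phi(k) = sum_{n=0}^{k} n^m."""
    table = [sum(n**m for n in range(k + 1)) for k in range(m + 2)]
    coeffs = []
    while table:
        coeffs.append(table[0])                      # Delta^j phi at 0
        table = [b - a for a, b in zip(table, table[1:])]
    return coeffs

def phi_from_series(m, k):
    """Evaluate the right-hand side of eq. (5) at k."""
    return sum(c * falling(k, j) // factorial(j)
               for j, c in enumerate(newton_coeffs(m)))

m = 3
print(newton_coeffs(m))                              # last entry is 6 = 3!
print([phi_from_series(m, k) for k in range(6)])     # matches the direct sums
print([sum(n**m for n in range(k + 1)) for k in range(6)])
```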

Without doing the details of algebra, we can observe:

1) The functions ##\triangle^j \phi(k)## are polynomials that can be explicitly calculated. For example :
##\triangle \phi(k) = \phi(k+1) - \phi(k) = \sum_{n=0}^{k+1} n^m - \sum_{n=0}^{k} n^m = (k+1)^m##,
##\triangle^2 \phi(k) = \triangle (\triangle \phi(k)) = \triangle ( k+1)^m = (k+2)^m - (k+1)^m ##

2) The series is finite because the higher differences of ##\phi(k)## eventually become the zero polynomial. Applying the operator ##\triangle## to a polynomial of degree ##d## produces a polynomial of degree ##d-1##. The function ##\triangle^1 \phi(k)## is a polynomial of degree ##m##, so ##\triangle^m(\triangle \phi(k))## is a constant polynomial, and the last non-zero term in the series is ##\frac{\triangle^{m+1} \phi(k)|_{k=0}}{(m+1)!} k^{(m+1)}##, which is a polynomial of degree ##m+1##.
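As a small concrete case of observation 2), take ##m = 2##: ##\triangle \phi(k) = (k+1)^2##, ##\triangle^2 \phi(k) = (k+2)^2 - (k+1)^2 = 2k + 3##, ##\triangle^3 \phi(k) = 2##, and ##\triangle^4 \phi(k) = 0##, so the series stops at the ##k^{(3)}## term.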

Returning to the original question about ##\lim_{k \rightarrow \infty} \frac{\sum_{n=1}^k n^m}{k^{m+1}}##, the numerator is a polynomial in ##k## of degree ##m+1##. You can divide each term in the polynomial by the denominator before taking the limit. So the limit is determined by the coefficient of ##k^{m+1}## in the numerator.

It appears we need to show ##\triangle^{m+1} \phi(k)## is the constant polynomial ##m!##. But since it's now 2:20 AM at my location, I'll postpone thinking further about that!
 
First, I am thankful for the introduction to discrete calculus, a subject I was not aware of at all. It encouraged me to study it a bit, and I realized it is a powerful and yet quite simple tool.
Stephen Tashi, I think it is not necessary to express the terms of the "Taylor" series explicitly.
If the falling factorial ##n^{(m)}## is expanded by definition, one can easily observe that ##n^{(m)} = n^m + p(n)## and that ##p(n)## has a smaller degree than ##n^m##.
Therefore, it is possible to rewrite the sum as ##\sum \big(n^{(m)} - p(n)\big)## and then to split it. The second sum approaches 0 when divided by ##k^{m+1}##.
Now, the rules of the discrete integral can be applied to obtain ##\sum_{n=1}^k n^{(m)} = \frac{(k+1)^{(m+1)} - 1^{(m+1)}}{m+1}##.
Finally, by degree considerations only ##\frac{k^{m+1}}{m+1}## survives, which proves the limit.
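As a small check of this argument (the case ##m = 2##, chosen just as an illustration): ##n^{(2)} = n(n-1) = n^2 - n##, so ##n^2 = n^{(2)} + n##. Then ##\sum_{n=1}^k n^2 = \sum_{n=1}^k n^{(2)} + \sum_{n=1}^k n = \frac{(k+1)^{(3)} - 1^{(3)}}{3} + \frac{k(k+1)}{2} = \frac{(k+1)k(k-1)}{3} + \frac{k(k+1)}{2}##. Dividing by ##k^3## and letting ##k \to \infty##, the second term vanishes and the first tends to ##\frac{1}{3} = \frac{1}{m+1}##.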
 
ddddd28 said:
If the falling factorial ##n^{(m)}## is expanded by definition, one can easily observe that ##n^{(m)} = n^m + p(n)## and that ##p(n)## has a smaller degree than ##n^m##.

Don't we have to worry about the constant coefficient that multiplies ##n^{(m)}##? In terms of the series, we should prove that the coefficient ##\frac{\triangle^{m+1}\phi(k)|_{k=0}}{(m+1)!}## of ##k^{(m+1)}## is equal to ##\frac{1}{m+1}##, i.e. that ##\triangle^{m+1}\phi(k)|_{k=0} = m!##.
 
Since we are talking about limits here, constants don't matter on terms whose degree is smaller than that of the divisor ##k^{m+1}##.
Regarding the leading coefficient, it's obvious that it is 1.
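(Concretely, for ##m = 3##: ##n^{(3)} = n(n-1)(n-2) = n^3 - 3n^2 + 2n##, so the coefficient of ##n^3## is 1 and ##p(n) = -3n^2 + 2n## has degree ##2 < 3##; after dividing by ##k^4##, the contribution of ##p## vanishes in the limit.)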
 
ddddd28 said:
Since we are talking about limits here, constants don't matter on terms whose degree is smaller than that of the divisor ##k^{m+1}##.
Regarding the leading coefficient, it's obvious that it is 1.

In the series, the coefficient of ##k^{m+1}## is ##\frac{\triangle^{m+1}\phi(k)|_{k=0}}{(m+1)!}##. This coefficient is ##\frac{1}{m+1}## provided ##\triangle^{m+1}\phi(k)|_{k=0} = m!##. Are we saying this is obvious?

For example, take ##m = 3##:
##\phi: (0,0), (1,1), (2,1+8), (3,1+8+27), (4,1+8+27+64), (5,1+8+27+64+125), \dots##
##\triangle^1\phi: (0,1), (1,8), (2,27), (3,64), (4,125), \dots##
##\triangle^2\phi: (0,7), (1,19), (2,37), (3,61), \dots##
##\triangle^3\phi: (0,12), (1,18), (2,24), \dots##
##\triangle^4\phi: (0,6), (1,6), \dots##
##\triangle^4\phi(k)|_{k=0} = 6 = 3! = m!##, as required.
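Following up on that, here is a quick numerical confirmation of the pattern for the first few ##m## (a rough Python sketch, my own function name; a check, not a proof). It computes ##\triangle^{m+1}\phi(k)|_{k=0}## for ##\phi(k) = \sum_{n=0}^k n^m## and compares it with ##m!##, which is exactly what makes the coefficient of ##k^{m+1}## equal ##\frac{1}{m+1}##:

```python
from math import factorial

def top_difference(m):
    """Return Delta^(m+1) phi evaluated at 0, where phi(k) = sum_{n=0}^{k} n^m."""
    table = [sum(n**m for n in range(k + 1)) for k in range(m + 2)]
    for _ in range(m + 1):                       # apply Delta  m+1 times
        table = [b - a for a, b in zip(table, table[1:])]
    return table[0]

for m in range(1, 7):
    print(m, top_difference(m), factorial(m))    # the two values agree
```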
 