# A Numerical Insight for the Fundamental Theorem of Calculus

The purpose of this article is not to provide a rigorous statement, nor a rigorous clever proof, of the fundamental theorem of calculus. It is rather to give some numerical insight into an intuitive, simplified statement of the theorem.

Simplified intuitive statement of the fundamental theorem of calculus:

"Integration and differentiation are reverse operations on a function ##f(x)##. That means:

1. if we take the integral of the derivative of a function, the result is the function itself (up to some constant ##c##): that is, ##\int^x \frac{df}{dt}dt=f(x)+c##;
2. if we take the derivative of the integral of a function, the result is the function itself again: that is, ##\frac{d\int^xf(t)dt}{dx}=f(x)##."

In what follows, I am going to give a nonrigorous, yet I believe intuitive and useful, explanation of why 1 and 2 are true.

I shall use as starting points what I call the “loose” (or we can say numerical) definitions of the integration and differentiation operators.

The integration operator is defined as ##\int_{x_0}^{x_n} f(x)dx=\sum_{i=0}^{n-1}f(x_i)\Delta x_i##, where ##\Delta x_i=x_{i+1}-x_i## and ##\{x_i\}## is a partition of the interval ##[x_0,x_n]##. In order for this definition to be close to the rigorous definition, ##n## must be large, or equivalently the maximum of ##\Delta x_i## for ##0\leq i \leq n-1## must be small enough.
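For readers who want to play with this loose definition, here is a minimal numerical sketch (the function ##x^2## and the grid size are illustrative choices, not part of the article):

```python
# Left-endpoint Riemann sum: a direct translation of the loose definition,
# integral of f over [x0, xn] ~ sum of f(x_i) * Delta x_i,
# here with a uniform partition, so Delta x_i = (xn - x0) / n for every i.

def riemann_sum(f, x0, xn, n):
    dx = (xn - x0) / n
    return sum(f(x0 + i * dx) * dx for i in range(n))  # i runs from 0 to n-1

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000)
```

With ##n = 100000## the sum agrees with ##1/3## to about five decimal places; taking ##n## larger, i.e. making the ##\Delta x_i## smaller, improves the agreement, exactly as the definition requires.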

The differentiation operator is defined as ##\frac{df}{dx}=\frac{f(x+\Delta x)-f(x)}{\Delta x}##, where again we need ##\Delta x## to be small enough in order for this to be close to the real derivative of the function at the point ##x##.
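The loose derivative can be sketched the same way (the function ##\sin## and the step size are illustrative choices):

```python
import math

# Forward-difference quotient: a direct translation of the loose definition,
# df/dx at x ~ (f(x + dx) - f(x)) / dx for small enough dx.

def forward_difference(f, x, dx=1e-6):
    return (f(x + dx) - f(x)) / dx

# Example: the derivative of sin(x) at x = 0 is cos(0) = 1.
slope = forward_difference(math.sin, 0.0)
```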

We all know that the rigorous definitions involve the limit of the sum as ##n## goes to infinity (for the integration), and the limit of the ratio as ##\Delta x## goes to zero (for the differentiation).

But if we start talking about limits, this will open the doorway to an analytical approach to the subject. For my purpose, let's just stick to the numerical definitions, simply stating that the ##\Delta x_i## and ##\Delta x## should be small enough.

Furthermore, let's assume that we choose the ##\{x_i\}## such that all the ##\Delta x_i## are equal to each other, and equal to the ##\Delta x## of the differentiation. Then let's see why 1 is true:

$$\int_{x_0}^{x_n=x}f'(t)dt=\sum_{i=0}^{n-1}f'(x_i)\Delta x_i=\sum_{i=0}^{n-1}\frac{f(x_i+\Delta x)-f(x_i)}{\Delta x}\Delta x_i=$$

$$=\sum_{i=0}^{n-1}[f(x_{i+1})-f(x_i)]=f(x_n)-f(x_0)=f(x)-f(x_0)$$

We see that because we have chosen ##\{x_i\}## such that all the ##\Delta x_i=\Delta x##, many desired simplifications happen, and the integral ends up as a telescoping sum, which gives the desired result. We know from the analytical treatment that the ##\{x_i\}## can be chosen in any way and 1 still holds; but in the analytical approach we take the limits ##\Delta x_i \rightarrow 0## (for the integration) and ##\Delta x \rightarrow 0## (for the differentiation), so we make ##\Delta x_i## and ##\Delta x## both infinitesimally small, and in that sense they are again sort of equal.
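The telescoping cancellation above can be verified numerically. A minimal sketch, with ##f(x)=e^x## and a 1000-point uniform grid as illustrative choices:

```python
import math

# Claim 1: summing the forward-difference derivative over a uniform grid,
# with the same Delta x in both operators, telescopes to f(x_n) - f(x_0).

f = math.exp
x0, xn, n = 0.0, 1.0, 1000
dx = (xn - x0) / n
grid = [x0 + i * dx for i in range(n + 1)]  # x_0, ..., x_n

integral_of_derivative = sum(
    ((f(grid[i + 1]) - f(grid[i])) / dx) * dx for i in range(n)
)

# Each term reduces to f(x_{i+1}) - f(x_i), so the sum equals
# f(x_n) - f(x_0) = e - 1 up to floating-point roundoff, for ANY n.
```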

Let's now see why 2 is true:

$$\frac{d\int_{x_0}^{x_n=x}f(t)dt}{dx}=\frac{\int_{x_0}^{x_{n+1}=x+\Delta x}f(t)dt-\int_{x_0}^{x_n=x}f(t)dt}{\Delta x}=$$

$$=\frac{\sum_{i=0}^{n}f(x_i)\Delta x_i-\sum_{i=0}^{n-1}f(x_i)\Delta x_i}{\Delta x}=\frac{f(x_n)\Delta x_n}{\Delta x}=f(x_n)=f(x)$$
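Claim 2 can be checked the same way: the two partial Riemann sums share every term except the last, so dividing their difference by ##\Delta x## leaves exactly ##f(x_n)##. A sketch with ##f=\cos## and a uniform grid (illustrative choices):

```python
import math

# Claim 2: the difference of the Riemann sums up to x + dx and up to x,
# divided by dx, leaves just f(x_n) because all the earlier terms cancel.

f = math.cos
x0, n, dx = 0.0, 1000, 0.001
xs = [x0 + i * dx for i in range(n + 1)]  # x_0, ..., x_n with x_n = 1.0

sum_up_to_x_plus_dx = sum(f(xs[i]) * dx for i in range(n + 1))  # ends at x_{n+1}
sum_up_to_x = sum(f(xs[i]) * dx for i in range(n))              # ends at x_n

derivative_of_integral = (sum_up_to_x_plus_dx - sum_up_to_x) / dx
# derivative_of_integral equals f(x_n) = cos(1.0) up to roundoff
```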

One perhaps oversimplified and naïve way of thinking about why 1 and 2 are true is to say that integration involves summation and multiplication, while differentiation involves subtraction and division; therefore integration and differentiation are reverse operations, because summation is the reverse of subtraction and multiplication is the reverse of division. Someone who has gone through the analytical and rigorous treatment of the fundamental theorem of calculus knows that there is more to it than that. However, I believe this (perhaps naïve and oversimplified) way of thinking can be a loose but intuitive basis for why integration and differentiation are reverse operations.

Does this view relate to the one from physics without calculus?

derivative = velocity = distance / time

integral = distance = velocity X time

It is not exactly like this, but one could say that this is the basic idea.

More precisely, what I do is remove the limit operator (if I can call it that) from the definitions of the integral and the derivative, and I just take as the integral the sum of ##f(x_i)\Delta x_i## for small enough ##\Delta x_i##, and as the derivative the ratio of differences for small enough ##\Delta x##. Without the limit operator, everything becomes numerical or algebraic.


Sorry to butt in, but for me it's not the equations, it's the logic that is of interest! The question is: is calculus wrong? If so, what is it? I can see a different perspective, but the formula is not written in calculus. I am interested in a unified formula, a language written in the very hand that made all that is everything, including calculus.

But one tiny flaw in any mathematical theorem would deem the logic corrupt, either by intent of those that create the language or without intent. All calculations found correct can be computerised, but with so many derivatives without any logic for use of them, one questions the results to a finite end of equations.

For me to understand any building, my mind needs the correct value of the number 1. If the purpose of all physics is to isolate every equation as an equilibrium, or what the rest energy of a particle is, then the number 1 requires two zeros, with the correction as 1 = 010.

How does calculus confirm this result? The values to either side of the number 1 have an infinity if the zeros were to fluctuate either side of the number 1.

values

010 = 1 electron over 1 proton = 1kw where 0 is the proton, at what point does the value change to 2 kw ?

pulse 01 to 10

I can see in your language that you're going to have a hard time writing a finite equation for the confirmation of formula or results of the above two questions. In fact it would be harder for you to conclude with calculus than for the UK to escape from the EU. Hah.

Propagation is a reality, not a theorem, and calculus may need an upgrade, or you could scrap it and start again. Make its language read as a universal language, with correction of values for the number 1.

My language is the numbers 1 2 3 4 5 6 7 8 9 10 11 12 and 123456789 = 51 + 45 = 96 = 9+6=15 1+5=6

5+1+4+5 = 15 = 1+5=6

6+6=12 Harmonisation for the unified field. Confirmed for 1 chromatic octave; from this point the unified field is isolated and a universe is created.

Every mathematical problem can be calculated and concluded with the above formula! How does calculus translate the above formula?

It's not a challenge or a request, it's just a simple question. If you could please have a go at solving this, then the question of 1 or 2 would be solved, as 1 is always enough for 2 to exist.

Yours truly

Atommix

Sorry for the late reply; only now am I able to see the comments and respond (due to some conflict between my account and the Insights forum login subsystem).

Well, as @jim mcnamara said, this Insight was meant for students that are now being introduced to calculus: high school students, or students of technical schools who aren't taught calculus with mathematical rigor but possibly want to gain some simple intuitive insight into calculus.

When I was writing the Insight I had in mind the Riemann integrable functions, but @Svein is right. Then again, Lavinia is more right, because here the derivative is not the real derivative, it is just an approximation of the real derivative, so the approximations hold for any class of functions.

The "telescoping sum" argument as to why ##\int_A^B f'(x)\,dx = f(B) - f(A)## is a simplified case of the argument leading to Stokes' theorem. You get perfect cancellations everywhere except at the boundary.

##\sum \frac{f(x_{i}+\Delta x)-f(x_{i})}{\Delta x}\Delta x## will always give ##f(x_{n})-f(x_{0})## no matter what the function.

As usual, someone has to come up with a pathological function. Not the Dirichlet function this time, but ##\int_{0}^{1}\sin\left(\frac{1}{x}\right)dx##…

These approximations will always be true for any function. For instance

##\sum \frac{f(x_{i}+\Delta x_{i})-f(x_{i})}{\Delta x_{i}}\Delta x_{i}## will always give ##f(x_{n})-f(x_{0})## no matter what the function. So in order for the argument to be true one needs a limiting argument.

On the other hand, the picture is right as @Svein says for differentiable functions and is certainly helpful.

But IMO it is the limiting argument that makes sense out of the theorem.
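The point that the telescoping identity alone proves nothing can itself be seen numerically: the identity holds exactly even for a coarse grid and a wildly oscillating integrand, so only the limiting argument distinguishes good approximations from bad ones. A minimal sketch (the function and grid are illustrative choices):

```python
import math

# The telescoping sum gives f(x_n) - f(x_0) for ANY function and ANY grid,
# even a coarse one over a wildly oscillating f; so the identity by itself
# says nothing about whether the Riemann sums actually converge.

def f(x):
    return math.sin(1.0 / x)  # oscillates faster and faster as x -> 0

x0, xn, n = 0.01, 1.0, 20  # only 20 points: far too coarse near x0
dx = (xn - x0) / n

telescoping = sum(f(x0 + (i + 1) * dx) - f(x0 + i * dx) for i in range(n))
# telescoping equals f(x_n) - f(x_0) up to roundoff, despite the coarse grid
```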

I think the point of the insight is that for someone unfamiliar with the Fundamental Theorem of Calculus, that reader would find the discussion useful.

Mathematicians often do exactly what @Svein did – generalize or expand the scope. Which is not an incorrect position in any way. Simply put: Sometimes knowing when to limit scope can be instructive, too.

Well – it is correct for a restricted class of functions. Saying ##\int f'(x)\,dx = f(x)## presupposes that ##f(x)## is differentiable (otherwise the expression is meaningless). The other way around (##\frac{d}{dx}\int f(x)\,dx##) allows for a larger class of functions (the Riemann-integrable functions). Going to an even larger class of functions, we have the following theorem of Lebesgue:

If ##\varphi(x)## is a summable function, its indefinite integral ##F(x)=\int_{a}^{x}\varphi(t)\,dt## is a continuous function of bounded variation, and it has almost everywhere a derivative equal to ##\varphi(x)##.

Observe the "almost everywhere" clause which is typical for all integrals based on measure theory. Lebesgue also proved a theorem about the other direction:

The derivative ##\varphi(x)## of an absolutely continuous function ##F(x)## defined on the closed interval ##[a, b]## is summable, and for every ##x##, ##\int_{a}^{x}\varphi(t)\,dt = F(x)-F(a)##.

Observe the restriction on ##\varphi(x)##!