# Some help understanding integrals and calculus in general

1. May 22, 2017

### Sho Kano

So in differential calculus we have the concept of the derivative and I can see why someone would want a derivative (to get rates of change). In integral calculus, there's the idea of a definite integral, which is defined as the area under the curve. Why would Newton or anyone be looking at the area under a graph? There seems to be no practicality in computing the area under the graph (other than the fundamental theorem of calculus which relates the two fields). I guess my question is what was the motivation behind the idea of a definite integral because Newton couldn't have known about the fundamental theorem of calculus beforehand.

2. May 22, 2017

### andrewkirk

Newton would have had an intuitive notion of the fundamental theorem of calculus, which says that integration is the reverse operation of differentiation - each one 'undoes' the effect of the other. He just wouldn't have been able to prove it, and may not have had the mathematical vocabulary to state it formally (or he may have had; I don't know much about Newton's maths vocabulary. These days we mostly use Leibniz's).

The notion is easily intuited without formality. Given somebody walking along a straight road at a variable speed, the time derivative of the distance walked is their speed, and the time integral of their speed is the distance walked.

More abstractly, given a measurable quantity that changes at a variable rate, the time derivative of the cumulative amount of change is the rate of change and the time integral of the rate of change is the cumulative amount of change.
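This intuition is easy to check numerically. A small sketch (my own illustration, with a made-up walker whose distance is $d(t)=t^2$): summing speed times small time steps recovers the distance walked.

```python
# Illustration (not from the thread): a walker whose distance is d(t) = t**2,
# so their speed is d'(t) = 2*t. Summing speed * dt (a crude time integral
# via the midpoint rule) recovers the distance walked.

def distance(t):
    """Distance walked by time t (assumed model: d(t) = t**2)."""
    return t * t

def speed(t):
    """Time derivative of distance: d'(t) = 2*t."""
    return 2 * t

dt = 0.001
steps = 3000  # integrate speed over [0, 3]
walked = sum(speed((k + 0.5) * dt) * dt for k in range(steps))

print(round(walked, 3))   # close to distance(3) = 9.0
```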

3. May 22, 2017

### Staff: Mentor

Work: $W = \int_C F\,ds$
Charge: $Q = \iiint_\Omega \rho \,dV$
Current: $I = \iint_\Sigma J \, dS$

In addition, to solve differential equations, which describe basically everything in the real world from epidemic outbreaks to fluid dynamics, you have to perform an integration at some point.
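As a concrete sketch of one of these (a hypothetical spring with force $F(s) = ks$, values chosen purely for illustration), the work integral can be approximated by a Riemann sum:

```python
# Hypothetical example (not from the thread): work done stretching a spring
# with F(s) = k*s, approximated by a midpoint Riemann sum for W = ∫ F ds.
# The closed form is W = k * L**2 / 2.

k = 4.0      # spring constant (assumed value)
L = 2.0      # total stretch
n = 100_000  # number of subintervals

ds = L / n
work = sum(k * ((i + 0.5) * ds) * ds for i in range(n))

print(work)  # ≈ k * L**2 / 2 = 8.0
```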

Did you mean this by motivation?

4. May 22, 2017

### Sho Kano

Thanks for your answer. I understand the motivation behind the ideas of the derivative and the anti-derivative, but in the second part of a calculus course a lot of time is spent on the definite integral and its limit definition, which approximates the area under the curve. I guess my question now is: how did Newton find the fundamental theorem that relates integral and differential calculus? I mean, who would have guessed that something as seemingly arbitrary as the area under a curve is equal to a subtraction?

5. May 22, 2017

### Staff: Mentor

This happened long before Newton. It was actually Archimedes who started to calculate areas and volumes bounded by curves.
https://en.wikipedia.org/wiki/Archimedes#Mathematics
The Riemann integration method is basically the one Archimedes applied.
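The Archimedes/Riemann idea of exhausting an area with thin rectangles is easy to sketch; here is an illustrative approximation of the area under $f(x)=x^2$ on $[0,1]$ (exact value $1/3$):

```python
# Sketch of the Riemann/Archimedes idea: approximate the area under
# f(x) = x**2 on [0, 1] with thin rectangles. The exact area is 1/3.

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum with n rectangles of width (b - a) / n."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
# The sums approach 1/3 as n grows.
```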

6. May 22, 2017

### Stephen Tashi

I don't know how Newton found it, but if you study the Calculus of Finite Differences and the technique of summing finite series by "anti-differencing", then the Fundamental Theorem of Calculus becomes an intuitive result.

7. May 22, 2017

### Sho Kano

There's a proof online of the fundamental theorem of calculus by Khan Academy. Sal starts with $F(x)=\int_a^x f(t)\,dt$

We've only talked about definite integrals (area under the curve) and anti-derivatives (reversing differentiation) so far, so my problem with this is: how can he say this equation is true? By saying $F(x)=\int_a^x f(t)\,dt$, I think he is assuming that $F(x)$ is equal to some area under the curve, right?

edit: maybe he is starting with an unproven statement to prove it to be true later on? So that would mean $F(x)=\int_a^x f(t)\,dt$ is true, and that connects derivatives and areas. Then the other part of the fundamental theorem can easily be obtained from here. Do I have the right idea?

Last edited: May 22, 2017
8. May 22, 2017

### Staff: Mentor

No, that's not how the definite integral is defined. Instead, it's defined as a limit. The area under a curve is just one application of the definite integral.
They are defining F(x) to be equal to that integral. Since there is a variable (x) in one of the limits of integration, the integral $\int _{ a }^{ x }{ f(t)dt }$ is a function of x. If x = a, F(a) = 0, and for other values of x, you get different values out of the integral and the function.
No, merely defining a function as an integral doesn't make the connection between derivatives and antiderivatives (indefinite integrals). What makes this connection is showing that the derivative of F(x), i.e., F'(x), is actually f(x). In short, differentiating an antiderivative results in the same function that is the integrand. In this way, the operations of differentiation and antidifferentiation are essentially inverse operations.
> Then the other part of the fundamental theorem can be easily attained from here.
You shouldn't think of the definite integral only as representing an area. Although that's how the definite integral is most often represented when you first learn about it, there are many, many other applications for integration that have nothing to do with area. fresh_42 listed three of them, and there are tons more.
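A numerical sketch of this accumulation-function view (with an illustrative choice $f(t)=\cos t$ and $a=0$, not from the video): $F(a)=0$, and the difference quotient of $F$ approximates $f(x)$, which is what the first part of the FTC asserts.

```python
# Illustration: F(x) = ∫_0^x f(t) dt for f(t) = cos(t) (my choice of f,
# not from the thread). F(0) = 0, and the difference quotient of F
# approximates f(x), as the first part of the FTC says.

import math

def f(t):
    return math.cos(t)

def F(x, a=0.0, n=20_000):
    """Midpoint-rule approximation of the integral of f from a to x."""
    dt = (x - a) / n
    return sum(f(a + (i + 0.5) * dt) * dt for i in range(n))

x, h = 1.0, 1e-4
print(F(0.0))                 # F(a) = 0
print((F(x + h) - F(x)) / h)  # ≈ f(1.0) = cos(1.0) ≈ 0.5403
```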

9. May 22, 2017

### Sho Kano

Got it, it's a sum first and foremost; I shouldn't get tripped up on the area business.
I see, so they are setting $F(x)$ equal to that definite integral, so that for every $x$ the relation holds. Is that the right way to think of it?

He gets the second part from the first part here:

10. May 22, 2017

### Staff: Mentor

What you wrote isn't clear. For a given value of x, you get a value of F(x), or $\int_a^x f(t)dt$. What the first part of the Fundamental Theorem of Calculus shows is that F'(x) = f(x), where f is the function in the integrand.

11. May 23, 2017

### Stephen Tashi

To illustrate the finite analog of the Fundamental Theorem of Calculus, let $F(x)$ be some function and define another function $f(x)$ by $f(x) = F(x+1) - F(x)$.

Consider the summation $\sum_{k=1}^n f(k)$.

$\sum_{k=1}^n f(k) = \sum_{k=1}^n (F(k+1) - F(k))$
$= ( F(2) - F(1)) + (F(3) - F(2)) + ...(F(n) - F(n-1)) + (F(n+1) - F(n))$
$= F(n+1) - F(1)$
by cancellation of terms in the "telescoping sum".

So a summation of $f(x)$ can be expressed in terms of $F(x)$. We can approach a finite sum of $f(x)$ by asking "What function $F(x)$ satisfies $F(x+1) - F(x) = f(x)$?". Instead of asking about an anti-derivative, we ask about an "anti-difference".

For example, the familiar result $\sum_{k=1}^n k = \frac{n(n+1)}{2}$ can be derived by observing that the "anti-difference" function for $f(k) = k$ is the function $F(k) = \frac{1}{2}k^2 - \frac{1}{2}k$.
So $F(n+1) - F(1) = \frac{n^2 + n}{2} - 0 = \frac{n(n+1)}{2}$.
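The telescoping argument can be checked directly; a quick sketch (illustrative, for $n = 100$):

```python
# Check of the anti-difference idea: with F(k) = (k**2 - k) / 2 the forward
# difference F(k+1) - F(k) equals k, so the telescoping sum gives
# sum(1..n) = F(n+1) - F(1).

def F(k):
    return (k * k - k) / 2

n = 100
direct = sum(range(1, n + 1))   # 5050
telescoped = F(n + 1) - F(1)    # also 5050

print(direct, telescoped)
print(all(F(k + 1) - F(k) == k for k in range(1, 50)))  # True
```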

Newton and many of his predecessors were probably aware of this point of view, and also of the use of finite expressions to approximate the concepts of derivatives and integrals. So it isn't surprising that they would seek a continuous analog of "anti-differencing" to do integrals.

(If you look at examples in calculus texts where results like $\int_0^a x \, dx = a^2/2 - 0^2/2$ are worked out "the long way" by computing areas of rectangles whose bases each have length $h$, you see that the trick to getting the answer involves knowing how to find a closed-form expression for a finite sum.)

Last edited: May 23, 2017
12. May 23, 2017

### Sho Kano

Sorry about that. What I have right now is that he is simply defining $F(x)$ as equal to that integral, and he can do that because the integral is a function of $x$. Then he goes on to show that differentiation and integration are inverse operations. Is this right?

13. May 23, 2017

### Staff: Mentor

More or less. Strictly speaking, integration and differentiation aren't quite inverse operations. If you differentiate a function, you get a single function, but if you antidifferentiate a function, you get a whole family of functions.

For example, if $f(x) = x^2$, then $f'(x) = 2x$. But $\int 2x \, dx = x^2 + C$, where C is an arbitrary constant, so differentiating $x^2$, and then finding the antiderivative, doesn't result in the function you started with, $f(x) = x^2$.
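A quick numerical sketch of this point (with illustrative values of $C$): every member of the family $x^2 + C$ has the same derivative, so differentiation forgets which $C$ you started with.

```python
# Every member of the family g(t) = t**2 + C has the same derivative 2*t,
# so differentiating forgets which C you started with (the values of C
# below are arbitrary illustrations).

def numerical_derivative(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

x = 3.0
for C in (0.0, 5.0, -17.0):
    g = lambda t, C=C: t * t + C
    print(C, round(numerical_derivative(g, x), 6))  # all ≈ 6.0 = 2*x
```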

14. May 23, 2017

### Sho Kano

Yep. So what it really is, is the relationship between the derivative and the limit of a sum, right?

15. May 23, 2017

### Staff: Mentor

No, the first part of the FTC shows the relationship between differentiation and antidifferentiation. I.e.,
If $F(x) = \int_a^x~f(t)~dt$, then $F'(x) = f(x)$.
The second part shows how to use the antiderivative of a function to evaluate a definite integral. I.e.,
If F is any antiderivative of f (that is, F' = f), then $\int_a^b~f(t)~dt = F(b) - F(a)$.
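Both parts can be sanity-checked numerically. A sketch of the second part (with an illustrative choice $f(t) = 3t^2$, $F(t) = t^3$): a Riemann sum over $[a, b]$ approaches $F(b) - F(a)$.

```python
# Sanity check of the second part (my choice of f): for f(t) = 3*t**2 with
# antiderivative F(t) = t**3, a midpoint Riemann sum for the definite
# integral over [a, b] approaches F(b) - F(a).

a, b, n = 1.0, 2.0, 100_000
dt = (b - a) / n
riemann = sum(3 * (a + (i + 0.5) * dt) ** 2 * dt for i in range(n))

print(riemann)       # ≈ 7.0
print(b**3 - a**3)   # F(b) - F(a) = 8 - 1 = 7
```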

16. May 23, 2017

### Sho Kano

My reasoning is that $\int_a^x f(t)\,dt$ is a definite integral, so doesn't the first part show that the definite integral and the derivative are inverses? And then the second part connects the definite and the indefinite integral.

17. May 23, 2017

### Staff: Mentor

The second part gives you $\int_a^x f(t)\,dt = F(x) - F(a)$, which in general is not equal to $F(x)$, so your "theory" about inverses fails. The two are closely related processes, but to call them "inverses" has to be considered false, as inverses are precisely defined objects. The first part results in $F'(x)=f(x)$, which indicates that the differentiation process loses information (the constant term) and therefore cannot be inverted.

18. May 23, 2017

### Sho Kano

Thanks, I get that differentiation and anti-differentiation aren't precisely inverses. To sum up, right now I have this: the proof connects integral and differential calculus by differentiating a definite integral to get $F'(x)=f(x)$, which ultimately means the operations are inverses (though not precisely). The problem I have with this is: isn't it obvious? There's the concept of the derivative, and the anti-derivative undoes differentiation. What's so special about this theorem?

https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
says "The first part of the theorem, sometimes called the first fundamental theorem of calculus, is that the indefinite integral of a function is related to its antiderivative, and can be reversed by differentiation.[Note 1] This part of the theorem guarantees the existence of antiderivatives for continuous functions.[2]"
I'm thinking the antiderivative IS the indefinite integral, and moreover the antiderivative already exists, because it is defined as undoing differentiation; why does there need to be a proof of this? Maybe I don't know the difference between an antiderivative and an indefinite integral.

19. May 23, 2017

### Stephen Tashi

The simplest example that I've been able to look up is Thomae's function https://en.wikipedia.org/wiki/Thomae's_function which is integrable on the interval [0,1] but has no anti-derivative.

If you think the theorem is obvious, why do you think it's obvious that $F'(x) = \lim_{h \rightarrow 0} \frac{\int_0^{x+h} f(t)\,dt - \int_0^x f(t)\,dt}{h} = f(x)$?

Is there a reason the equality should hold for "nice" functions?
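(One way to picture the quantity in that limit, sketched here with an arbitrary smooth function $f(t) = t^3$: the numerator is a thin sliver of area over $[x, x+h]$, which is nearly a rectangle of height $f(x)$ and width $h$.)

```python
# The numerator of the difference quotient is the thin sliver
# ∫_x^{x+h} f(t) dt, which is nearly a rectangle of height f(x) and
# width h. Illustration with f(t) = t**3 (an arbitrary smooth function):

def sliver(f, x, h, n=10_000):
    """Midpoint-rule approximation of the integral of f from x to x + h."""
    dt = h / n
    return sum(f(x + (i + 0.5) * dt) * dt for i in range(n))

f = lambda t: t ** 3
x = 2.0
for h in (0.1, 0.01, 0.001):
    print(h, sliver(f, x, h) / h)  # approaches f(2.0) = 8.0 as h shrinks
```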

20. May 24, 2017

### Sho Kano

Thanks for your reply Stephen, here is where I am right now

By taking the derivative of that definite integral and showing that in the end $F'=f$, I've shown that I took the derivative of something, and that something is an anti-derivative of $f$, the original function (what I started with). What does this guarantee?

Maybe it's like this. The FTC is the first "validation" that continuous functions have anti-derivatives.

Last edited: May 24, 2017