# Some help understanding integrals and calculus in general

1. May 22, 2017

### Sho Kano

So in differential calculus we have the concept of the derivative and I can see why someone would want a derivative (to get rates of change). In integral calculus, there's the idea of a definite integral, which is defined as the area under the curve. Why would Newton or anyone be looking at the area under a graph? There seems to be no practicality in computing the area under the graph (other than the fundamental theorem of calculus which relates the two fields). I guess my question is what was the motivation behind the idea of a definite integral because Newton couldn't have known about the fundamental theorem of calculus beforehand.

2. May 22, 2017

### andrewkirk

Newton would have had an intuitive notion of the fundamental theorem of calculus, which says that integration is the reverse operation of differentiation - each one 'undoes' the effect of the other. He just wouldn't have been able to prove it and may not have had the mathematical vocabulary to state it formally (or he may have had, I don't know much about Newton's maths vocab. These days we mostly use Leibniz's).

The notion is easily intuited without formality. Given somebody walking along a straight road at a variable speed, the time derivative of the distance walked is their speed, and the time integral of their speed is the distance walked.

More abstractly, given a measurable quantity that changes at a variable rate, the time derivative of the cumulative amount of change is the rate of change and the time integral of the rate of change is the cumulative amount of change.
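The walking example can be checked numerically: summing speed over small time steps (a crude integral) recovers the distance walked. A minimal sketch, where the speed function $v(t) = 2 + 0.5t$ is an arbitrary illustrative choice:

```python
# Numerically check that accumulating speed over small time steps
# (a crude time integral) recovers the distance walked.
dt = 1e-4
speed = lambda t: 2.0 + 0.5 * t          # variable walking speed (illustrative)
times = [k * dt for k in range(int(1.0 / dt))]

# Riemann sum of speed over [0, 1] approximates the distance walked.
distance = sum(speed(t) * dt for t in times)

# Exact distance from the antiderivative 2t + 0.25 t^2, evaluated at t = 1.
exact = 2.0 * 1.0 + 0.25 * 1.0 ** 2
print(distance, exact)   # both close to 2.25
```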

3. May 22, 2017

### Staff: Mentor

Work: $W = \int_C F\,ds$
Charge: $Q = \iiint_\Omega \rho \,dV$
Current: $I = \iint_\Sigma J \, dS$

In addition, to solve differential equations, which describe basically everything in the real world, from epidemic outbreaks to fluid dynamics, you have to perform an integration at some point.

Did you mean this by motivation?

4. May 22, 2017

### Sho Kano

Thanks for your answer, I understand the motivation behind the ideas of the derivative and the anti-derivative, but in the second part of a calculus course, a lot of time is spent on defining the definite integral and on its limit definition, which approximates the area under the curve. I guess my question now is how did Newton find the fundamental theorem that relates integral and diff calculus? I mean who would have guessed that something as random as the area under the curve is equal to a subtraction?

5. May 22, 2017

### Staff: Mentor

This happened long before Newton. Actually, it was Archimedes who started calculating areas and volumes bounded by curves.
https://en.wikipedia.org/wiki/Archimedes#Mathematics
The Riemann integration method is basically the one Archimedes applied.
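The rectangle approximation is in the spirit of Archimedes' method of exhaustion. A small, purely illustrative sketch for the area under $y = x^2$ on $[0, 1]$, whose exact value is $1/3$:

```python
# Approximate the area under y = x^2 on [0, 1] with n rectangles,
# in the spirit of Archimedes' method of exhaustion / Riemann sums.
def riemann_area(n):
    width = 1.0 / n
    # Left-endpoint rectangles: height (k*width)^2, base `width`.
    return sum((k * width) ** 2 * width for k in range(n))

for n in (10, 100, 1000):
    print(n, riemann_area(n))   # approaches the exact area 1/3
```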

6. May 22, 2017

### Stephen Tashi

I don't know how Newton found it, but if you study the Calculus of Finite Differences and the technique of summing finite series by "anti-differencing", then the Fundamental Theorem of Calculus becomes an intuitive result.

7. May 22, 2017

### Sho Kano

There's a proof online of the fundamental theorem of calculus by Khan Academy. Sal starts with $F(x)=\int _{ a }^{ x }{ f(t)dt }$

We only talked about definite integrals (area under the curve) and anti-derivatives (reversing differentiation) so far, so my problem with this is how can he say this equation is true? By saying $F(x)=\int _{ a }^{ x }{ f(t)dt }$, I think he is assuming that $F(x)$ is equal to some area under the curve right?

edit: maybe he is starting with an unproven statement to prove it true later on? That would mean $F(x)=\int _{ a }^{ x }{ f(t)dt }$ is true, which connects derivatives and areas. Then the other part of the fundamental theorem can easily be obtained from here. Do I have the right idea?

Last edited: May 22, 2017
8. May 22, 2017

### Staff: Mentor

No, that's not how the definite integral is defined. Instead, it's defined as a limit. The area under a curve is just one application of the definite integral.
They are defining F(x) to be equal to that integral. Since there is a variable (x) in one of the limits of integration, the integral $\int _{ a }^{ x }{ f(t)dt }$ is a function of x. If x = a, F(a) = 0, and for other values of x, you get different values out of the integral and the function.
No, merely defining a function as an integral doesn't make the connection between derivatives and antiderivatives (indefinite integrals). What makes this connection is showing that the derivative of F(x), i.e., F'(x), is actually f(x). In short, differentiating an antiderivative results in the same function that is the integrand. In this way, the operations of differentiation and antidifferentiation are essentially inverse operations.
Sho Kano said:
> Then the other part of the fundamental theorem can be easily attained from here.
You shouldn't think of the definite integral only as representing an area. Although that's how the definite integral is most often represented when you first learn about it, there are many, many other applications for integration that have nothing to do with area. fresh_42 listed three of them, and there are tons more.

9. May 22, 2017

### Sho Kano

Got it, it's a sum first and foremost; I shouldn't get tripped up on the area business.
I see, so they are setting $F(x)$ equal to that definite integral, so that the x's will be the x's that satisfy the relation. Should I think of it this way?

He gets the second part from the first part here:

10. May 22, 2017

### Staff: Mentor

What you wrote isn't clear. For a given value of x, you get a value of F(x), or $\int_a^x f(t)dt$. What the first part of the Fundamental Theorem of Calculus shows is that F'(x) = f(x), where f is the function in the integrand.

11. May 23, 2017

### Stephen Tashi

To illustrate the finite analog of the Fundamental Theorem of Calculus, let $F(x)$ be some function and define another function $f(x)$ by $f(x) = F(x+1) - F(x)$.

Consider the summation $\sum_{k=1}^n f(k)$.

$\sum_{k=1}^n f(k) = \sum_{k=1}^n (F(k+1) - F(k))$
$= ( F(2) - F(1)) + (F(3) - F(2)) + ...(F(n) - F(n-1)) + (F(n+1) - F(n))$
$= F(n+1) - F(1)$
by cancellation of terms in the "telescoping sum".

So a summation of $f(x)$ can be expressed in terms of $F(x)$. We can approach doing a finite sum of $f(x)$ by asking "What function $F(x)$ satisfies $F(x+1) - F(x) = f(x)$?". Instead of asking about an anti-derivative, we ask about an "anti-difference".

For example, the familiar result $\sum_{k=1}^n k = \frac{n(n+1)}{2}$ can be derived by observing that the "anti-difference" function for $f(k) = k$ is the function $F(k) = (1/2)k^2 - (1/2)k$.
So $F(n+1) - F(1) = \frac{n^2 + n}{2} - 0 = \frac{n(n+1)}{2}$
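This anti-difference identity is easy to check by direct computation; a minimal sketch:

```python
# Verify the finite analog of the FTC: summing f(k) = k over k = 1..n
# equals the anti-difference F evaluated at the endpoints, F(n+1) - F(1).
def F(k):
    return 0.5 * k ** 2 - 0.5 * k   # anti-difference of f(k) = k

for n in (5, 10, 100):
    direct = sum(range(1, n + 1))        # the sum computed term by term
    telescoped = F(n + 1) - F(1)         # the telescoping shortcut
    assert direct == telescoped == n * (n + 1) // 2
print("anti-difference identity verified")
```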

Newton and many of his predecessors were probably aware of this point of view, and also of the use of finite expressions to approximate the concepts of derivatives and integrals. So it isn't surprising that they would seek a continuous analog of "anti-differencing" to do integrals.

(If you look at examples in calculus texts where results like $\int_0^a x \ dx = a^2/2 - 0^2/2$ are worked out "the long way" by computing areas of rectangles whose bases each have length $h$, you will see that the trick to getting the answer involves knowing how to find a closed-form expression for a finite sum.)

Last edited: May 23, 2017
12. May 23, 2017

### Sho Kano

Sorry about that. What I have right now is that he is simply defining $F(x)$ as equal to that integral, and he can do that because that integral is a function. Then he goes on to show that the derivative and the integral are inverse operations. Is this right?

13. May 23, 2017

### Staff: Mentor

More or less. Strictly speaking, integration and differentiation aren't quite inverse operations. If you differentiate a function, you get a single function, but if you antidifferentiate a function, you get a whole family of functions.

For example, if $f(x) = x^2$, then $f'(x) = 2x$. But $\int 2x\,dx = x^2 + C$, where C is an arbitrary constant, so differentiating $x^2$ and then finding the antiderivative doesn't result in the function you started with, $f(x) = x^2$.
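The loss of the constant can be seen numerically: $x^2 + C$ has the same derivative for every $C$. A sketch, where `numeric_derivative` is just an illustrative finite-difference quotient:

```python
# Differentiation discards the constant C: x^2 + C has derivative 2x
# for every C, so antidifferentiation can only recover x^2 "up to C".
def numeric_derivative(g, x, h=1e-6):
    # Symmetric difference quotient approximating g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

for C in (0.0, 3.0, -7.5):
    g = lambda x, C=C: x ** 2 + C
    print(C, numeric_derivative(g, 2.0))   # always close to 4, whatever C is
```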

14. May 23, 2017

### Sho Kano

Yep, what it really is is the relationship between the derivative and the limit/sum, right?

15. May 23, 2017

### Staff: Mentor

No, the first part of the FTC shows the relationship between differentiation and antidifferentiation. I.e.,
If $F(x) = \int_a^x~f(t)~dt$, then $F'(x) = f(x)$.
The second part shows how to use the antiderivative of a function to evaluate a definite integral. I.e.,
If F is any antiderivative of f (that is, F' = f), then $\int_a^b~f(t)~dt = F(b) - F(a)$.
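Both parts can be illustrated numerically; a sketch, taking $f(t) = t^2$ as an arbitrary example and approximating the integral with a midpoint Riemann sum:

```python
# Numerically illustrate both parts of the FTC with f(t) = t^2.
def f(t):
    return t * t

def F(x, a=0.0, n=20000):
    """Definite integral int_a^x f(t) dt via a midpoint Riemann sum."""
    h = (x - a) / n
    return sum(f(a + (k + 0.5) * h) * h for k in range(n))

# Part 1: F'(x) = f(x); check with a symmetric difference quotient at x = 1.5.
h = 1e-3
deriv = (F(1.5 + h) - F(1.5 - h)) / (2 * h)
print(deriv, f(1.5))                 # both close to 2.25

# Part 2: int_0^2 f(t) dt = G(2) - G(0) for the antiderivative G(t) = t^3 / 3.
print(F(2.0), 2.0 ** 3 / 3)          # both close to 8/3
```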

16. May 23, 2017

### Sho Kano

My reasoning is that $\int _{ a }^{ x }{ f(t)dt }$ is a definite integral, so doesn't the first part show that the definite integral and the derivative are inverses? And then finally, the second part connects the definite and the indefinite integral.

17. May 23, 2017

### Staff: Mentor

The second part gives you $\int_a^xf(t)\,dt = F(x) - F(a)$, which in general is not $F(x)$, so your "theory" about inverses fails. Both are closely related processes, but to call them "inverse" has to be considered false, as inverses are precisely defined objects. The first part results in $F'(x)=f(x)$, which indicates that the differentiation process loses information (the constant) and therefore cannot be inverted.

18. May 23, 2017

### Sho Kano

Thanks, I get that differentiation and anti-differentiation aren't precisely inverses. To sum up, right now I have this: the proof connects integral and differential calculus by differentiating a definite integral to get $F'(x)=f(x)$, which ultimately means the operations are (almost) inverses. The problem I have with this is: isn't it obvious? There's the concept of the derivative, and the anti-derivative undoes differentiation. What's so special about this theorem?

https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
says "The first part of the theorem, sometimes called the first fundamental theorem of calculus, is that the indefinite integral of a function is related to its antiderivative, and can be reversed by differentiation.[Note 1] This part of the theorem guarantees the existence of antiderivatives for continuous functions.[2]"
I'm thinking the antiderivative IS the indefinite integral, and moreover the antiderivative already exists, because it is defined as undoing differentiation; why does there need to be a proof of this? Maybe I don't know the difference between an antiderivative and an indefinite integral.

19. May 23, 2017

### Stephen Tashi

The simplest example that I've been able to look-up is the Thomae function https://en.wikipedia.org/wiki/Thomae's_function which is integrable on the interval [0,1] but has no anti-derivative.

If you think the theorem is obvious, why do you think it's obvious that $F'(x) = \lim_{h \rightarrow 0} \frac{\int_0^{x+h} f(t)\ dt - \int_0^x f(t)\ dt}{h} = f(x)$?

Is there a reason the equality should hold for "nice" functions?

20. May 24, 2017

### Sho Kano

Thanks for your reply Stephen, here is where I am right now

By taking the derivative of that definite integral, and by showing that in the end, $F'=f$, means that I took a derivative of something and that something is the anti-derivative of $f$, the original function (what I started with). What does this guarantee?

Maybe it's like this. The FTC is the first "validation" that continuous functions have anti-derivatives.

Last edited: May 24, 2017
21. May 24, 2017

### PeroK

You seem to me to be looking for "more" than just the mathematics. And, no matter how many times something is explained, you want to extend what you have into more and more explanations. My advice is to focus on the maths.

For example:

Let $g(x) = \int_a^x f(t) dt$

Then $g'(x) = f(x)$

That's it. That is the mathematics. You don't need anything else about inverses or "sort of" inverses or "does that mean that ...".

If you do want an explanation, I would say that the above says: "the rate of change of area under a curve is the function value at that point". Which is fairly obvious, I guess. There's no great mystery to it in any case.

22. May 24, 2017

### Stephen Tashi

Yes, we are taking the derivative of a function $F(y) = \int_{0}^{y} f(x) \ dx$ that is defined as a definite integral of another function $f(x)$. (That's somewhat clearer than saying we are taking "the derivative of a definite integral", which makes it sound like we are taking the derivative of a single numerical value.)

The proof shows $F'(y) = f(y)$. (This is not a simple consequence of the definition of $F$. The definition of $F$ does not require that $F'(y) = f(y)$.)

Now suppose we need to compute $\int_a^b f(x) \ dx$. From the properties of integration, we have:
$\int_a^b f(x)\ dx = \int_0^b f(x)\ dx - \int_0^a f(x)\ dx$
In terms of areas, this expresses the area $\int_a^b f(x)\ dx$ as the difference between two overlapping areas.

Writing things in terms of $F(y)$ instead of $f(x)$ we have:
$\int_a^b f(x) \ dx = F(b) - F(a)$ where $F$ has the property that $F'(b) = f(b)$ and $F'(a) = f(a)$.

For example, suppose we are faced with the problem of finding $\int_a^b \sin(x)\, dx$ and nobody has given us any information about what function $F(y) = \int_0^y \sin(x)\ dx$ is. We know that $F(y)$ has the property that $F'(y) = f(y)$. So if we know rules for taking the derivatives of trigonometric functions, we may be able to find a specific function $F(y)$ whose derivative is $\sin(y)$, without actually approximating the area $\int_0^y \sin(x) \ dx$ by areas of rectangles.
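A numerical sanity check that the two routes agree, purely as an illustration: the rectangle approximation of $\int_0^\pi \sin(x)\,dx$ versus the FTC route through the antiderivative $-\cos$:

```python
import math

# Route 1: brute-force midpoint Riemann sum for int_0^pi sin(x) dx.
n = 100000
h = math.pi / n
riemann = sum(math.sin((k + 0.5) * h) * h for k in range(n))

# Route 2: FTC with the antiderivative F(y) = -cos(y), since F'(y) = sin(y).
ftc = -math.cos(math.pi) - (-math.cos(0.0))

print(riemann, ftc)   # both close to 2
```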

An intuitive way of seeing why $F'(x) = f(x)$ is to begin by thinking of the integrals involved as being approximated by Riemann sums in which the rectangles' bases all have the same length $h$. (This is not rigorous math, because the modern definition of a Riemann sum does not require the bases of the rectangles to have the same length, and it certainly does not require that, when approximating two different integrals, we use the same base length in each approximation.)

We have, by the definition of the derivative, $F'(x) = \lim_{h \rightarrow 0} \frac{ \int_0^{x+h} f(t)\ dt - \int_0^x f(t)\ dt}{h}$

Suppose we approximate each of the integrals as Riemann sums using rectangles with base $h$ and $x = nh$ for some integer $n$. Then the sum for $\int_0^{x+h} f(x)\ dx \approx \sum_{k=1}^{n+1} h f(kh) = S(n+1)$ contains one more rectangle than the sum for $\int_0^x f(x)\ dx \approx \sum_{k=1}^n h f(kh) = S(n)$.

So $\frac{ \int_0^{x+h} f(t)\ dt - \int_0^x f(t)\ dt}{h} \approx \frac{ S(n+1) - S(n)}{h}$
$= \frac { (\ S(n) + h f((n+1)h)\ ) - S(n)} {h} = \frac{ h f((n+1)h)}{h} = f((n+1)h)$.

If $f$ is a continuous function, then $f((n+1)h)$ is nearly equal to $f(nh) = f(x)$ when $h$ is small.
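This heuristic can be checked numerically; a small sketch using $f(x) = \cos x$ as an arbitrary continuous example, where adding one more rectangle to the sum changes it by exactly $h\,f((n+1)h)$:

```python
import math

# One extra rectangle changes the Riemann sum by h * f((n+1)h), so the
# quotient (S(n+1) - S(n)) / h is close to f(x) when h is small.
f = math.cos
h = 1e-3
n = 1000            # so x = n * h = 1.0

S_n  = sum(h * f(k * h) for k in range(1, n + 1))   # S(n)
S_n1 = sum(h * f(k * h) for k in range(1, n + 2))   # S(n+1): one more rectangle

quotient = (S_n1 - S_n) / h       # equals f((n + 1) * h)
print(quotient, f(1.0))           # both close to cos(1)
```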

23. May 24, 2017

### FactChecker

Some simple reasons to be interested in definite integrals: the change of velocity is the integral of acceleration, and the change of position is the integral of velocity.
As you continue studying science you will run into many other uses for the integral, but position and velocity are two immediate ones.

24. May 24, 2017

### Sho Kano

Sorry, as you guys can tell I am pretty slow when it comes to math, and I'm not sure if this thread should drag on, but my goal is to understand what the FTC actually does. Wikipedia says it proves that all continuous functions have anti-derivatives. So my question is: is there any other statement that does what the FTC says, that is, "all continuous functions have anti-derivatives"? The problem is the FTC doesn't seem all that ground-breaking to me.

Another website said the FTC actually connects differentiation and anti-differentiation... that can't be true because anti-differentiation is already defined as reversing differentiation. WHY DO WE NEED THE FTC FOR THAT? It's like saying 1+1+1-1=2, when there's already 1+1=2. Does anyone understand where I'm coming from?

WHAT does the FTC do uniquely, and WHY is it important, exactly?

To clarify I am talking about part I of the FTC, not the one for evaluating definite integrals.

Last edited: May 24, 2017
25. May 24, 2017

### FactChecker

The FTC may seem obvious, but this subject can be treacherous. There are continuous functions that increase even though their derivative is 0 almost everywhere. As long as a function F is an integral of g, it is also an antiderivative of g; for a function that cannot be obtained by integration, that guarantee is lost.
Furthermore, the FTC provides the main way to determine closed-form formulas for integrals: just make a large table of functions and their derivatives. That is much easier than trying to calculate integrals directly.