# Integrating x-squared

OK, I admit: this will be the most idiotic question I have ever asked (maybe: there could be more)

So, I am aware of the differential calculus (derivatives) and the integral calculus (integrals).

And separate from that, there is the first fundamental theorem (FFT) of the calculus which relates the two processes as inverses of each other. So far, so good.

Now I would like to integrate, say, x-squared. HOWEVER, I would like to do it without the FFT.

I mean the following: yes, I know that (1/3)x-cubed is the answer (let's not quibble over constants or boundaries, or definite or indefinite). But I know that is the answer because when I take its derivative, I get x-squared. But that is using my knowledge of the FFT.

Can someone explain to me how to integrate x-squared without using the FFT? I am lost.

How did one do integrals BEFORE the FFT revealed it to be the inverse of differentiation?

Or am I suffering from OCD and barking up the wrong tree?

Actually it's not as dumb a question as you think. (Thank goodness, right?)

From: https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#History
The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The origins of differentiation likewise predate the Fundamental Theorem of Calculus by hundreds of years; for example, in the fourteenth century the notions of continuity of functions and motion were studied by the Oxford Calculators and other scholars. The historical relevance of the Fundamental Theorem of Calculus is not the ability to calculate these operations, but the realization that the two seemingly distinct operations (calculation of geometric areas, and calculation of velocities) are actually closely related.
So, what you may be interested in is infinitesimals.

I searched for "integrate using infinitesimals" and found this, which is interesting: https://en.wikipedia.org/wiki/Non-standard_calculus

-Dave K

Whew! I had a feeling there is something there.

So can someone please integrate something for me? How about f(x) = x. I don't care. I would just like to see an integration performed without the FFT.

How was integration done before FFT?

I don't know much about it, but I did find this... it doesn't look pretty. It would be awesome to understand it though:

OK; so that video basically reduced the integration to a numerical method and presented code.

THEN it did an actual integration, but with specific bounds in order to get a NUMBER.

So now I am wondering if it was EVER possible to prove that the integral of x is 0.5*x-squared AS A FUNCTION, but without using the first fundamental theorem of the calculus.

Anyone?

Mark44
Mentor
OK, I admit: this will be the most idiotic question I have ever asked (maybe: there could be more)

So, I am aware of the differential calculus (derivatives) and the integral calculus (integrals).

And separate from that, there is the first fundamental theorem (FFT) of the calculus which relates the two processes as inverses of each other. So far, so good.

Now I would like to integrate, say, x-squared. HOWEVER, I would like to do it without the FFT.
Why? The first part of the FTC relates the operations of differentiation and antidifferentiation, and says that the two operations are essentially inverses of one another. If you have a definite integral, you can evaluate it as the limit of a Riemann sum, but you can't do this if you're working with an indefinite integral. To find an antiderivative of ##x^2## it suffices to notice that ##\frac d {dx}(\frac{x^3}{3}) = x^2##, so ##\int x^2 dx = \frac{x^3}{3} + C##.
observer1 said:
I mean the following: yes, I know that (1/3)x-cubed is the answer (let's not quibble over constants or boundaries, or definite or indefinite). But I know that is the answer because when I take its derivative, I get x-squared. But that is using my knowledge of the FFT.

Can someone explain to me how to integrate x-squared without using the FFT? I am lost.

How did one do integrals BEFORE the FFT revealed it to be the inverse of differentiation?

Or am I suffering from OCD and barking up the wrong tree?
BTW, FFT is the acronym for Fast Fourier Transform; FTC is the one usually used for the Fundamental Theorem of Calculus.
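Mark44's check ("notice that ##\frac d {dx}(\frac{x^3}{3}) = x^2##") is also easy to mimic numerically. A minimal Python sketch, where the central-difference helper is my own, not anything from the thread:

```python
def deriv(g, x, h=1e-6):
    """Central-difference approximation to g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

F = lambda x: x ** 3 / 3   # candidate antiderivative
f = lambda x: x ** 2       # integrand

# At each sample point, the numerical derivative of F matches f.
for x in (0.5, 1.0, 2.0):
    print(x, deriv(F, x), f(x))
```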

OK... so you seem to be saying that to perform a definite integration, you can do it as a Riemann sum, and that avoids the FTC. But it is NOT possible to do the indefinite integration without being cognizant of, at least, the result of the FTC:

##\frac d {dx}(\frac{x^3}{3}) = x^2##, so ##\int x^2 dx = \frac{x^3}{3} + C##

OK; so that video basically reduced the integration to a numerical method and presented code.

THEN it did an actual integration, but with specific bounds in order to get a NUMBER.

So now I am wondering if it was EVER possible to prove that the integral of x is 0.5*x-squared AS A FUNCTION, but without using the first fundamental theorem of the calculus.

Anyone?
Wish I knew more. I would look into it if I had time, because I'm interested in historical math. But often it was ugly.

Mark44
Mentor
OK... so you seem to be saying that to perform a definite integration, you can do it as a Riemann sum, and that avoids the FTC. But it is NOT possible to do the indefinite integration without being cognizant of, at least, the result of the FTC:

##\frac d {dx}(\frac{x^3}{3}) = x^2##, so ##\int x^2 dx = \frac{x^3}{3} + C##
Right.

Thank you both, Mark44 and dkotschessaa.

This really helped me put things in order.

Thank you!

pasmith
Homework Helper
So now I am wondering if it was EVER possible to prove that the integral of x is 0.5*x-squared AS A FUNCTION, but without using the first fundamental theorem of the calculus.
You can, using basic algebra, find a Riemann sum for $\int_a^b x\,dx$ which telescopes to $\tfrac12(b^2 - a^2)$: On the subinterval $[x_i,x_{i+1}]$ take $\zeta_i = \frac12(x_i + x_{i+1})$ so that $$\sum_{i=0}^{n-1} f(\zeta_i)(x_{i+1} - x_i) = \sum_{i=0}^{n-1} \tfrac12 (x_{i+1} + x_i)(x_{i+1} - x_i) = \tfrac12\sum_{i=0}^{n-1} (x_{i+1}^2 - x_i^2) = \frac12 (x_n^2 - x_0^2) = \tfrac12(b^2 - a^2).$$ Then you can have the insight that you can define a function $F: \mathbb{R} \to \mathbb{R}$ by $$F(t) = \int_0^t x\,dx = \tfrac12t^2.$$ (And it's then easy to show from the formal definition that $F'(t) = t$.)

But of course that's just an application of the idea behind the proof that $\int_a^b F'(x)\,dx = F(b) - F(a)$: By the mean value theorem there's a $\zeta_i \in (x_i,x_{i+1})$ such that $$F'(\zeta_i) = \frac{F(x_{i+1}) - F(x_i)}{x_{i+1} - x_i}.$$

And that is quite aside from the geometric proof: the graph of $y = x$, the $x$-axis, and the line $x = t$ define a triangle, whose area is half that of a square of side $t$.
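pasmith's midpoint-tag telescoping is easy to check numerically. The following Python sketch (function names are my own) builds a random, uneven partition of [a, b] and confirms that the tagged sum for f(x) = x gives ##\tfrac12(b^2 - a^2)## regardless of the partition:

```python
import random

def midpoint_riemann_sum_for_x(partition):
    """Riemann sum for f(x) = x with each tag at the subinterval midpoint.

    With zeta_i = (x_i + x_{i+1}) / 2, each term equals
    (x_{i+1}^2 - x_i^2) / 2, so the sum telescopes to (b^2 - a^2) / 2."""
    total = 0.0
    for x_lo, x_hi in zip(partition, partition[1:]):
        zeta = 0.5 * (x_lo + x_hi)        # midpoint tag
        total += zeta * (x_hi - x_lo)     # f(zeta) * width, with f(x) = x
    return total

a, b = 1.0, 4.0
# A random (uneven) partition of [a, b]; the identity holds for any partition.
partition = [a] + sorted(random.uniform(a, b) for _ in range(10)) + [b]
print(midpoint_riemann_sum_for_x(partition))   # (b^2 - a^2)/2 = 7.5, up to rounding
```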

Very interesting....

It all seems logical.

But this works in only a few cases and cannot be generalized, can it?

I think students are taught that differentiation and integration are reverse processes.
I think students are taught how to differentiate.
However, I think calculus books then USE: 1) differentiation and 2) FTC to make integration useful and tractable.

And this process, I wonder, hobbles a learner.

I think when calculus is taught with the idea that differentiation and integration are reverses of each other, that textbooks should more actively discuss how integration was really done in the past with boundaries (definite integrals), producing numbers, not functions (indefinite integrals). For if such care is not taken, then the learner feels as if they were not given a proper treatment of integration. For I have noticed that students have more difficulty learning integration than differentiation.

Well, you're actually somewhat unique here. I don't think most students would be interested in this. Most of the time they complain about even having to learn about limits and take derivatives the long way, because they heard about this thing called the power rule which makes everything so much easier. If we tried to teach infinitesimals....

It would be more appropriate for doing math history, which is a "hobby" of mine, and which involves not just learning history, but knowing how things were *done* historically. (Try doing arithmetic with Egyptian unit fractions some time. FUN!)

If you are really interested in that, you should pursue it, but it's a somewhat lonely venture. If it's OK to share this here, I moderate a math history community on Google+ that you might want to check out. It's got a lot of people in it, but it's actually fairly quiet, and usually the posts are more biographical/historical than mathematical.

A good math history textbook will actually have exercises in it, so you can do things the way they were done in the past.

I personally am an advocate for putting historical stuff into math because it makes things more dramatic, but not everyone is into drama.

-Dave K

Mark44
Mentor
I think students are taught that differentiation and integration are reverse processes.
I think students are taught how to differentiate.
However, I think calculus books then USE: 1) differentiation and 2) FTC to make integration useful and tractable.

And this process, I wonder, hobbles a learner.
I don't think it does.
observer1 said:
I think when calculus is taught with the idea that differentiation and integration are reverses of each other, that textbooks should more actively discuss how integration was really done in the past with boundaries (definite integrals), producing numbers, not functions (indefinite integrals).
Many textbooks do this, using various techniques to approximate integrals by rectangles, trapezoids, and other shapes.

Courses in numerical methods go into the details of numerical integration more deeply.
observer1 said:
For if such care is not taken, then the learner feels as if they were not given a proper treatment of integration. For I have noticed that students have more difficulty learning integration than differentiation.
That's because integration is objectively harder than differentiation. For differentiation, there are a variety of rules (product rule, quotient rule, chain rule, etc.), and these make differentiation basically a step-by-step process. For integration, there aren't as many rules -- integration by parts is essentially the reverse of the product rule of differentiation, and integration by substitution is the reverse of the differentiation chain rule.

How to integrate ##x^2## without using the FFT? Let's go back to the definition of Riemann integration, in even the simplest case:

$$\int_0^b f(x)\,dx = \lim_{n\to\infty} \sum_{k=1}^n f(x_k)\,\Delta x_n,$$

where

$$x_k = 0 + \frac{k(b-0)}{n}$$

(note that this is k/n-ths of the way between 0 and b) and

$$\Delta x_n = \frac{b-0}{n}$$

(1/n-th of the distance between 0 and b). Now let's see what the right-hand side is for ##f(x) = x^2##:

$$\lim_{n\to\infty} \sum_{k=1}^n \left(\frac{kb}{n}\right)^2 \frac{b}{n},$$

and doing a little algebra we get:

$$\lim_{n\to\infty} \frac{b^3}{n^3} \sum_{k=1}^n k^2.$$

At this point it is helpful to know a formula for the sum of the first n square numbers. Let's take it on faith that, by mathematical induction, this can be shown to be

$$\sum_{k=1}^n k^2 = 1^2 + 2^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6}.$$

(How this is obtained can be easily googled, if you are interested.)

Substituting, we get:

$$\int_0^b f(x)\,dx = \lim_{n\to\infty} \frac{b^3}{n^3} \cdot \frac{n(n+1)(2n+1)}{6}.$$

At this point, I suspect you can take the limit of the right-hand side as ##n \to \infty## to see why the answer is ##b^3/3##. (In any case, it would be a good exercise.)

We have found a certain *definite* integral of ##x^2##. But if you know the relationship between definite integrals and antiderivatives (also known as indefinite integrals), then you can see from this that

$$\int x^2\,dx = \frac{x^3}{3} + C,$$

where C is an arbitrary constant.
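If it helps, the convergence above can be watched numerically. A minimal Python sketch of the right-endpoint Riemann sum (the function name is mine):

```python
def riemann_sum_x_squared(b, n):
    """Right-endpoint Riemann sum for f(x) = x^2 on [0, b], n equal pieces."""
    dx = b / n
    return sum((k * dx) ** 2 * dx for k in range(1, n + 1))

b = 2.0
exact = b ** 3 / 3
for n in (10, 100, 1000, 10000):
    s = riemann_sum_x_squared(b, n)
    print(n, s, s - exact)   # the gap shrinks roughly like 1/n
```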

FactChecker
Gold Member
But this works in only a few cases and cannot be generalized, can it?
Yes. There are many functions for which it is not practical (or even possible) to find a symbolic formula for the integral. Powers of x are possible, but I have to admit that I don't think I could do them without the FTC. This gives you all polynomials and Taylor series. It is not unusual for a process to be easy to do one way and not the other, so something like the FTC is very important. Now you are seeing how important the FTC is. Even if you cannot use the FTC to solve many problems, it is extremely important for understanding the theory of integrals and derivatives and how they relate to each other.
I think when calculus is taught with the idea that differentiation and integration are reverses of each other, that textbooks should more actively discuss how integration was really done in the past with boundaries (definite integrals), producing numbers, not functions (indefinite integrals). For if such care is not taken, then the learner feels as if they were not given a proper treatment of integration. For I have noticed that students have more difficulty learning integration than differentiation.
In practice, there are many integrations that are too difficult to solve symbolically. There are entire books of integrals. Also there are symbolic manipulation tools like Maple that can do a lot. You will find that some very innocent looking functions have awful integral formulas. If you can't find your answer in books or tools like Maple, use numerical techniques.

pasmith
Homework Helper
Yes. There are many functions for which it is not practical (or even possible) to find a symbolic formula for the integral. Powers of x are possible, but I have to admit that I don't think I could do them without the FTC.
You just need to know that $b^{n+1} - a^{n+1} = (b-a)(b^n + ab^{n-1} + \dots + a^{n-1}b + a^n)$.

Now if $0 \leq x_{i} < x_{i+1}$ (as it is, since we're computing $\int_0^t x^n\,dx$ for $t > 0$ and then using substitution to handle negative $t$) then $$x_i^n < \frac{x_{i+1}^n +x_ix_{i+1}^{n-1} + \dots + x_i^{n-1}x_{i+1} + x_i^n}{n+1} < x_{i+1}^n$$ so $$x_i < \zeta_i = \left( \frac{x_{i+1}^n +x_ix_{i+1}^{n-1} + \dots + x_i^{n-1}x_{i+1} + x_i^n}{n+1}\right)^{1/n} < x_{i+1}$$ and $$\zeta_i^n(x_{i+1} - x_i) = \frac{x_{i+1}^{n+1} - x_i^{n+1}}{n+1}$$ as required.
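pasmith's choice of tag can likewise be verified numerically. This Python sketch (helper names are my own) checks that with ##\zeta_i## as above, each term of the tagged sum equals ##(x_{i+1}^{n+1} - x_i^{n+1})/(n+1)##, so any partition of ##[a, b]## with ##a \geq 0## gives the exact value of ##\int_a^b x^n\,dx##:

```python
def exact_tag(x_lo, x_hi, n):
    """pasmith's tag zeta_i: the n-th root of the average of the n + 1
    products x_lo**j * x_hi**(n - j)."""
    avg = sum(x_lo ** j * x_hi ** (n - j) for j in range(n + 1)) / (n + 1)
    return avg ** (1.0 / n)

def tagged_sum(partition, n):
    """Riemann sum for f(x) = x**n using the exact tags above; it telescopes
    to (b**(n+1) - a**(n+1)) / (n + 1) for any partition of [a, b], a >= 0."""
    total = 0.0
    for x_lo, x_hi in zip(partition, partition[1:]):
        total += exact_tag(x_lo, x_hi, n) ** n * (x_hi - x_lo)
    return total

n = 3
partition = [0.5, 0.9, 1.3, 2.0]
print(tagged_sum(partition, n))   # (2.0**4 - 0.5**4) / 4 = 3.984375
```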

FactChecker wrote: "Powers of x are possible, but I have to admit that I don't think I could do them without FTC. This gives you all polynomials and Taylor series. It is not unusual for a process to be easy to do one way and not the other ..."

There actually are formulas F(n; k) for the sum of the first n k-th powers of integers, i.e. where

$$F(n; k) = 1^k + 2^k + 3^k + \dots + n^k,$$

for any positive integers n and k, so the powers of x can be dealt with without using the Fundamental Theorem of Calculus. (This was historically a very interesting problem in the time of the Bernoullis.) E.g.,

$$F(n; 1) = \frac{n(n+1)}{2}$$

$$F(n; 2) = \frac{n(n+1)(2n+1)}{6}$$

$$F(n; 3) = \frac{n^2(n+1)^2}{4} = F(n; 1)^2$$

For the general case, see Faulhaber's formula at https://en.wikipedia.org/wiki/Faulhaber's_formula. This shows that the formula is always a polynomial in n with coefficients that involve the Bernoulli numbers.
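These closed forms are easy to check against a brute-force sum. A small Python sketch (the function names F1, F2, F3 are mine):

```python
def power_sum(n, k):
    """Brute-force F(n; k) = 1^k + 2^k + ... + n^k."""
    return sum(i ** k for i in range(1, n + 1))

# The closed forms quoted above (integer arithmetic; // is exact here).
def F1(n): return n * (n + 1) // 2
def F2(n): return n * (n + 1) * (2 * n + 1) // 6
def F3(n): return (n * (n + 1) // 2) ** 2   # F(n; 3) = F(n; 1)^2

for n in (1, 5, 10, 100):
    assert power_sum(n, 1) == F1(n)
    assert power_sum(n, 2) == F2(n)
    assert power_sum(n, 3) == F3(n)
print("closed forms agree for n = 1, 5, 10, 100")
```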

* * *

The question of why it's relatively easy to differentiate but so hard to integrate is a fascinating one. I believe that it is related to the general principle that it is difficult to create but easy to destroy. In a sense, integration, being a generalization of addition, is a form of building something up, i.e., creation. By contrast, differentiation focuses on isolating an infinitesimal portion of a graph and so is in a way a form of destruction.

So, I like this analogy. Now let me ask you...

How does the FTC make integration the INVERSE of differentiation?

Now I do realize that the integral of a function is the one whose derivative is the integrand... yes, Obvious.

But by casting the word "inverse" so casually, do we undermine student learning?

For I can say adding is the inverse of subtraction.... that is VISIBLE to me...

If I add x to y, I get z
If I subtract x from z, I get y back.

I sense and feel that as an inverse.

I do not get the FEELING that differentiation and integration are inverses.... YOUR answer helps a bit.

I am not so sure I even understand what I am saying.

FactChecker
Gold Member
The question of why it's relatively easy to differentiate but so hard to integrate is a fascinating one. I believe that it is related to the general principle that it is difficult to create but easy to destroy. In a sense, integration, being a generalization of addition, is a form of building something up, i.e., creation. By contrast, differentiation focuses on isolating an infinitesimal portion of a graph and so is in a way a form of destruction.
I would put that another way. The derivative is determined by just the local slope at a single x value whereas the integral is determined by function values along an entire interval in the X axis. You see that the derivative is just the limit of a simple ratio with two evaluations of the function. The integral is the limit of a summation involving a growing number of function values all along the x interval.
How does the FTC make integration the INVERSE of differentiation?
Now I do realize that the integral of a function is the one whose derivative is the integrand... yes, Obvious.
But by casting the word "inverse" so casually, do we undermine student learning?
The use of the word "inverse" is precisely defined. Differentiation is an operation on functions; so is integration. Suppose we denote the differentiation operator as D and the integration operator as I, and a function by f. The FTC says that D( I( f ) ) = f. That is, starting with the function f, if we integrate it to get I( f ), and then we differentiate that result to get D( I( f ) ), we end up back at the original function f. You would say that D and I are inverse operators. There are some small issues to deal with, but that is the idea of calling them inverses of each other.
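FactChecker's D( I( f ) ) = f can be illustrated numerically. A rough Python sketch, where the midpoint-rule integrator I and the central-difference derivative D are my own stand-ins for the operators, not exact versions of them:

```python
def I(f, x, steps=10000):
    """Numerical stand-in for the integration operator:
    I(f)(x) = integral of f from 0 to x, by the midpoint rule."""
    dx = x / steps
    return sum(f((k + 0.5) * dx) * dx for k in range(steps))

def D(g, x, h=1e-5):
    """Numerical stand-in for the differentiation operator (central difference)."""
    return (g(x + h) - g(x - h)) / (2 * h)

f = lambda t: t ** 2
x = 1.5
# Integrating f, then differentiating the result, lands back near f(x).
print(D(lambda s: I(f, s), x), f(x))
```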


thank you!

FactChecker
Gold Member
The very fact that it is not intuitively obvious that differentiation and integration are inverses of each other shows that the FTC is a profound statement.

"There are some small issues to deal with ..."

Mainly the fact that indefinite integration comes with (at least one) arbitrary constant, so must be defined as something like

$$I(f)(x) = \int_0^x f(t)\,dt + C.$$

No matter how the constant C is chosen, we always have

$$D(I(f)) = f,$$

but not necessarily

$$I(D(f)) = f$$

(where here I depends on C, so ought to be called ##I_C##).

There are infinitely many arbitrary constants to choose from, so maybe this issue isn't small!

According to the above, we may assert that ##I_C## is a right inverse for D. But it is not usually a left inverse. (And when we do find a constant C that makes ##I_C## a left inverse for some particular function, that C won't work for almost any other function.)

For example, let C = 0. Then ##I_0(D(f))## for the function f given by ##f(x) \equiv x^2## is

$$I_0(D(f))(x) \equiv x^2,$$

but for the function given by ##f(x) \equiv 1## we get

$$I_0(D(f))(x) \equiv 0 \neq f(x).$$

To fix this for the function ##f(x) \equiv 1## we'd need to set C = 1:

$$I_1(D(f))(x) \equiv 1 \equiv f(x).$$

But then going back to ##f(x) \equiv x^2## we get

$$I_1(D(f))(x) \equiv x^2 + 1 \neq f(x).$$
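The right-inverse/left-inverse distinction can also be watched numerically. A rough Python sketch, with I_C and D as my own numerical stand-ins for the operators above:

```python
def I_C(f, x, C, steps=2000):
    """Numerical stand-in for I_C(f)(x) = integral_0^x f(t) dt + C (midpoint rule)."""
    dx = x / steps
    return sum(f((k + 0.5) * dx) * dx for k in range(steps)) + C

def D(g, x, h=1e-5):
    """Numerical stand-in for differentiation (central difference)."""
    return (g(x + h) - g(x - h)) / (2 * h)

f_sq = lambda t: t ** 2
f_one = lambda t: 1.0
x = 2.0

# Right inverse: D(I_C(f)) recovers f no matter what C is.
print(D(lambda s: I_C(f_sq, s, C=7.0), x))    # close to f_sq(2) = 4
# Not a left inverse: I_0(D(f_one)) is identically 0, but f_one is 1.
print(I_C(lambda t: D(f_one, t), x, C=0.0))   # 0.0, not 1.0
```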