
I Integrating x-squared

  1. Dec 15, 2016 #1
    OK, I admit: this will be the most idiotic question I have ever asked (maybe: there could be more)

    So, I am aware of the differential calculus (derivatives) and the integral calculus (integrals).

    And separate from that, there is the first fundamental theorem (FFT) of the calculus which relates the two processes as inverses of each other. So far, so good.

    Now I would like to integrate, say, x-squared. HOWEVER, I would like to do it without the FFT.

    I mean the following: yes, I know that (1/3)x-cubed is the answer (let's not quibble over constants or boundaries, or definite or indefinite). But I know that is the answer because when I take its derivative, I get x-squared. But that is using my knowledge of the FFT.

    Can someone explain to me how to integrate x-squared without using the FFT? I am lost.

    How did one do integrals BEFORE the FFT revealed it to be the inverse of differentiation?

    Or am I suffering from OCD and barking up the wrong tree?
     
  3. Dec 15, 2016 #2
    Actually it's not as dumb a question as you think. (Thank goodness, right?)

    From: https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#History
    So, what you may be interested in is infinitesimals.

    I searched for "integrate using infinitesimals" and found this, which is interesting: https://en.wikipedia.org/wiki/Non-standard_calculus

    -Dave K
     
  4. Dec 15, 2016 #3
    Whew! I had a feeling there is something there.

    So can someone please integrate something for me? How about f(x) = x. I don't care. I would just like to see an integration performed without the FFT.

    How was integration done before the FFT?
     
  5. Dec 15, 2016 #4
    I don't know much about it, but I did find this... it doesn't look pretty. It would be awesome to understand it, though:

     
  6. Dec 15, 2016 #5
    OK; so that video basically reduced the integration to a numerical method and presented code.

    THEN it did an actual integration, but with specific bounds in order to get a NUMBER.

    So now I am wondering if it was EVER possible to prove that the integral of x is 0.5 * x-squared AS A FUNCTION, but without using the first fundamental theorem of the calculus.

    Anyone?
     
  7. Dec 15, 2016 #6

    Mark44

    Staff: Mentor

    Why? The first part of the FTC relates the operations of differentiation and antidifferentiation, and says that the two operations are essentially inverses of one another. If you have a definite integral, you can evaluate it as the limit of a Riemann sum, but you can't do this if you're working with an indefinite integral. To find an antiderivative of ##x^2## it suffices to notice that ##\frac d {dx}(\frac{x^3}{3}) = x^2##, so ##\int x^2 dx = \frac{x^3}{3} + C##.
    BTW, FFT is the acronym for Fast Fourier Transform; FTC is the one usually used for the Fundamental Theorem of Calculus.
     
  8. Dec 15, 2016 #7
    OK... so you seem to be saying that to perform a definite integration, you can do it as a Riemann sum, and that avoids the FTC. But it is NOT possible to do the indefinite integration without being cognizant of, at least, the result of the FTC:

    ##\frac d {dx}(\frac{x^3}{3}) = x^2##, so ##\int x^2 dx = \frac{x^3}{3} + C##
     
  9. Dec 15, 2016 #8
    Wish I knew more. I would look into it if I had time, because I'm interested in historical math. But often it was ugly.
     
  10. Dec 15, 2016 #9

    Mark44

    Staff: Mentor

    Right.
     
  11. Dec 15, 2016 #10
    Thank you both Mark44 and dkotscessaa

    This really helped me put things in order.

    Thank you!
     
  12. Dec 15, 2016 #11

    pasmith

    User Avatar
    Homework Helper

    You can, using basic algebra, find a Riemann sum for [itex]\int_a^b x\,dx[/itex] which telescopes to [itex]\tfrac12(b^2 - a^2)[/itex]: On the subinterval [itex][x_i,x_{i+1}][/itex] take [itex]\zeta_i = \frac12(x_i + x_{i+1})[/itex] so that [tex]
    \sum_{i=0}^{n-1} f(\zeta_i)(x_{i+1} - x_i) = \sum_{i=0}^{n-1} \tfrac12 (x_{i+1} + x_i)(x_{i+1} - x_i) = \tfrac12\sum_{i=0}^{n-1} (x_{i+1}^2 - x_i^2) = \frac12 (x_n^2 - x_0^2) =
    \tfrac12(b^2 - a^2).[/tex] Then you can have the insight that you can define a function [itex]F: \mathbb{R} \to \mathbb{R}[/itex] by [tex]
    F(t) = \int_0^t x\,dx = \tfrac12t^2.[/tex] (And it's then easy to show from the formal definition that [itex]F'(t) = t[/itex].)

    But of course that's just an application of the idea behind the proof that [itex]\int_a^b F'(x)\,dx = F(b) - F(a)[/itex]: By the mean value theorem there's a [itex]\zeta_i \in (x_i,x_{i+1})[/itex] such that [tex]F'(\zeta_i) = \frac{F(x_{i+1}) - F(x_i)}{x_{i+1} - x_i}.[/tex]

    And that is quite aside from the geometric proof: the graph of [itex]y = x[/itex], the [itex]x[/itex]-axis, and the line [itex]x = t[/itex] define a triangle, whose area is half that of a square of side [itex]t[/itex].
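    Here is a small numerical sketch of the telescoping sum above (plain Python; the function and variable names are mine, just for illustration). With the midpoint tags, *any* partition of [a, b] gives exactly (b² − a²)/2, with no limit needed:

```python
import random

def midpoint_riemann_sum_for_x(partition):
    """Riemann sum for f(x) = x, tagging each subinterval [x_i, x_{i+1}]
    at its midpoint zeta_i = (x_i + x_{i+1}) / 2, as in the post above."""
    total = 0.0
    for lo, hi in zip(partition, partition[1:]):
        zeta = 0.5 * (lo + hi)     # midpoint tag
        total += zeta * (hi - lo)  # f(zeta) * subinterval width
    return total

# Each term is (x_{i+1}^2 - x_i^2)/2, so the sum telescopes exactly.
a, b = 1.0, 4.0
partition = sorted([a, b] + [random.uniform(a, b) for _ in range(10)])
print(midpoint_riemann_sum_for_x(partition))  # (16 - 1)/2 = 7.5, up to rounding
```

    Rerunning with a different random partition gives the same value, which is the whole point of the telescoping argument.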
     
  13. Dec 15, 2016 #12

    Very interesting....

    It all seems logical.

    But this works in only a few special cases and cannot be generalized, can it?

    Please consider the reason I am even asking this question.

    I think students are taught that differentiation and integration are reverse processes.
    I think students are taught how to differentiate.
    However, I think calculus books then USE: 1) differentiation and 2) FTC to make integration useful and tractable.

    And I wonder whether this process hobbles a learner.

    I think when calculus is taught with the idea that differentiation and integration are reverses of each other, textbooks should more actively discuss how integration was really done in the past with boundaries (definite integrals), producing numbers, not functions (indefinite integrals). If such care is not taken, the learner feels as if they were not given a proper treatment of integration. I have noticed that students have a more difficult time learning integration than differentiation.
     
  14. Dec 15, 2016 #13
    Well, you're actually somewhat unique here. I don't think most students would be interested in this. Most of the time they complain about even having to learn about limits and take derivatives the long way, because they heard about this thing called the power rule which makes everything so much easier. If we tried to teach infinitesimals....

    It would be more appropriate for doing math history, which is a "hobby" of mine, and which involves not just learning history, but knowing how things were *done* historically. (Try doing arithmetic with Egyptian unit fractions some time. FUN!)

    If you are really interested in that, you should pursue it, but it's a somewhat lonely venture. If it's OK to share this here, I moderate a math history community on Google+ that you might want to check out. It's got a lot of people in it, but it's actually fairly quiet, and usually the posts are more biographical/historical than mathematical.

    A good math history textbook will actually have exercises in it, so you can do things the way they were done in the past.

    I personally am an advocate for putting historical stuff into math because it makes things more dramatic, but not everyone is into drama.

    -Dave K
     
  15. Dec 15, 2016 #14

    Mark44

    Staff: Mentor

    I don't think it does.
    Many textbooks do this, approximating integrals with rectangles, trapezoids, and other shapes.

    Courses in numerical methods go into the details of numerical integration more deeply.
    That's because integration is objectively harder than differentiation. For differentiation, there are a variety of rules (product rule, quotient rule, chain rule, etc.), and these make differentiation basically a step-by-step process. For integration, there aren't as many rules -- integration by parts is essentially the reverse of the product rule of differentiation, and integration by substitution is the reverse of the differentiation chain rule.
     
  16. Dec 15, 2016 #15
    How to integrate ##x^2## without using the FTC? Let's go back to the definition of Riemann integration, even in the simplest case:

    $$\int_0^b f(x)\,dx = \lim_{n\to\infty} \sum_{k=1}^n f(x_k)\,\Delta x_n,$$

    where

    $$x_k = 0 + \frac{k(b-0)}{n}$$

    (note that this is ##k/n##-ths of the way between ##0## and ##b##) and

    $$\Delta x_n = \frac{b-0}{n}$$

    (##1/n##-th of the distance between ##0## and ##b##). Now let's see what the right-hand side is for ##f(x) = x^2##:

    $$\lim_{n\to\infty} \sum_{k=1}^n \left(\frac{kb}{n}\right)^2 \frac{b}{n},$$

    and doing a little algebra we get:

    $$\lim_{n\to\infty} \frac{b^3}{n^3} \sum_{k=1}^n k^2.$$

    At this point it is helpful to know a formula for the sum of the first ##n## square numbers. Let's take it on faith that, by mathematical induction, this can be shown to be

    $$\sum_{k=1}^n k^2 = 1^2 + 2^2 + \dots + n^2 = \frac{n(n+1)(2n+1)}{6}.$$

    (How this is obtained can easily be googled, if you are interested.)

    Substituting, we get:

    $$\int_0^b f(x)\,dx = \lim_{n\to\infty} \frac{b^3}{n^3} \cdot \frac{n(n+1)(2n+1)}{6}.$$

    At this point, I suspect you can take the limit of the right-hand side as ##n \to \infty## to see why the answer is ##b^3/3##. (In any case, it would be a good exercise.)

    We have found a certain *definite* integral of ##x^2##. But if you know the relationship between definite integrals and antiderivatives (also known as indefinite integrals), then you can see from this that

    $$\int f(x)\,dx = \frac{x^3}{3} + C,$$

    where ##C## is an arbitrary constant.
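    For what it's worth, this limit can be watched numerically. A minimal Python sketch (the function name is mine, not from any library):

```python
def riemann_sum_x_squared(b, n):
    """Right-endpoint Riemann sum for f(x) = x^2 on [0, b]
    with n equal subintervals, as set up in the post above."""
    dx = b / n
    return sum((k * dx) ** 2 * dx for k in range(1, n + 1))

b = 2.0
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum_x_squared(b, n))
# Since x^2 is increasing on [0, b], the right-endpoint sums
# overestimate and decrease toward b^3/3 = 8/3 ≈ 2.6667 as n grows.
```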
     
  17. Dec 15, 2016 #16

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    Yes. There are many functions for which it is not practical (or even possible) to find a symbolic formula for the integral. Powers of x are possible, but I have to admit that I don't think I could do them without the FTC. This gives you all polynomials and Taylor series. It is not unusual for a process to be easy to do one way and hard the other, so something like the FTC is very important. Now you are seeing how important the FTC is. Even if you cannot use the FTC to solve many problems, it is extremely important for understanding the theory of integrals and derivatives and how they relate to each other.
    In practice, there are many integrals that are too difficult to solve symbolically. There are entire books of integrals. There are also symbolic manipulation tools like Maple that can do a lot. You will find that some very innocent-looking functions have awful integral formulas. If you can't find your answer in books or in tools like Maple, use numerical techniques.
     
  18. Dec 16, 2016 #17

    pasmith

    User Avatar
    Homework Helper

    You just need to know that [itex]b^{n+1} - a^{n+1} = (b-a)(b^n + ab^{n-1} + \dots + a^{n-1}b + a^n)[/itex].

    Now if [itex]0 \leq x_{i} < x_{i+1}[/itex] (as it is, since we're computing [itex]\int_0^t x^n\,dx[/itex] for [itex]t > 0[/itex] and then using substitution to handle negative [itex]t[/itex]) then [tex]x_i^n < \frac{x_{i+1}^n +x_ix_{i+1}^{n-1} + \dots + x_i^{n-1}x_{i+1} + x_i^n}{n+1} < x_{i+1}^n[/tex] so [tex]
    x_i < \zeta_i = \left( \frac{x_{i+1}^n +x_ix_{i+1}^{n-1} + \dots + x_i^{n-1}x_{i+1} + x_i^n}{n+1}\right)^{1/n} < x_{i+1}[/tex] and [tex]\zeta_i^n(x_{i+1} - x_i) = \frac{x_{i+1}^{n+1} - x_i^{n+1}}{n+1}[/tex] as required.
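    A quick numerical check of this tag choice (a Python sketch with hypothetical names; I use p for the power, to avoid clashing with the subinterval index). With these tags the Riemann sum for ##x^p## telescopes exactly to ##t^{p+1}/(p+1)## on any partition of ##[0, t]##:

```python
import random

def exact_tag(xi, xj, p):
    """The tag zeta from the post: the p-th root of the average of the
    p + 1 terms xj^p, xi*xj^(p-1), ..., xi^p."""
    s = sum(xi ** k * xj ** (p - k) for k in range(p + 1))
    return (s / (p + 1)) ** (1.0 / p)

def riemann_sum_power(partition, p):
    """Riemann sum for f(x) = x^p with the exact tags; each term equals
    (xj^(p+1) - xi^(p+1)) / (p + 1), so the sum telescopes."""
    return sum(exact_tag(xi, xj, p) ** p * (xj - xi)
               for xi, xj in zip(partition, partition[1:]))

p, t = 3, 2.0
partition = sorted([0.0, t] + [random.uniform(0, t) for _ in range(8)])
print(riemann_sum_power(partition, p))  # t^4/4 = 4.0, up to rounding
```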
     
    Last edited: Dec 17, 2016
  19. Dec 16, 2016 #18
    FactChecker wrote: "Powers of x are possible, but I have to admit that I don't think I could do them without FTC. This gives you all polynomials and Taylor series. It is not unusual for a process to be easy to do one way and not the other ..."

    There actually are formulas ##F(n; k)## for the sum of the first ##n## ##k##-th powers of integers, i.e.,

    $$F(n; k) = 1^k + 2^k + 3^k + \dots + n^k,$$

    for any positive integers ##n## and ##k##, so the powers of ##x## can be dealt with without using the Fundamental Theorem of Calculus. (This was historically a very interesting problem in the time of the Bernoullis.) E.g.,

    $$F(n; 1) = \frac{n(n+1)}{2}$$

    $$F(n; 2) = \frac{n(n+1)(2n+1)}{6}$$

    $$F(n; 3) = \frac{n^2(n+1)^2}{4} = F(n; 1)^2$$

    For the general case, see Faulhaber's formula at https://en.wikipedia.org/wiki/Faulhaber's_formula. This shows that the formula is always a polynomial in ##n## with coefficients that involve the Bernoulli numbers.
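    These closed forms are easy to check against brute-force sums, e.g. in Python (the helper name is mine):

```python
def power_sum(n, k):
    """Brute-force F(n; k) = 1^k + 2^k + ... + n^k."""
    return sum(i ** k for i in range(1, n + 1))

n = 50
print(power_sum(n, 1) == n * (n + 1) // 2)                # True
print(power_sum(n, 2) == n * (n + 1) * (2 * n + 1) // 6)  # True
print(power_sum(n, 3) == n ** 2 * (n + 1) ** 2 // 4)      # True
print(power_sum(n, 3) == power_sum(n, 1) ** 2)            # True
```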

    * * *

    The question of why it's relatively easy to differentiate but so hard to integrate is a fascinating one. I believe it is related to the general principle that it is difficult to create but easy to destroy. In a sense, integration, being a generalization of addition, is a form of building something up, i.e., creation. By contrast, differentiation focuses on isolating an infinitesimal portion of a graph and so is in a way a form of destruction.
     
  20. Dec 18, 2016 #19
    So, I like this analogy. Now let me ask you...

    How does the FTC make integration the INVERSE of differentiation?

    Now I do realize that the integral of a function is the one whose derivative is the integrand... yes, Obvious.

    But by casting the word "inverse" so casually, do we undermine student learning?

    For I can say addition is the inverse of subtraction.... that is VISIBLE to me...

    If I add x to y, I get z
    If I subtract x from z, I get y back.

    I sense and feel that as an inverse.

    I do not get the FEELING that differentiation and integration are inverses.... YOUR answer helps a bit.

    I am not so sure I even understand what I am saying.
     
  21. Dec 18, 2016 #20

    FactChecker

    User Avatar
    Science Advisor
    Gold Member

    I would put that another way. The derivative is determined by just the local slope at a single x value, whereas the integral is determined by function values along an entire interval on the x-axis. You see that the derivative is just the limit of a simple ratio with two evaluations of the function. The integral is the limit of a summation involving a growing number of function values all along the x interval.
    The use of the word "inverse" is precisely defined. Differentiation is an operation on functions; so is integration. Suppose we denote the differentiation operator as D, the integration operator as I, and a function by f. The FTC says that D( I( f ) ) = f. That is, starting with the function f, if we integrate it to get I( f ), and then we differentiate that result to get D( I( f ) ), we end up back at the original function f. You would say that D and I are inverse operators. There are some small issues to deal with, but that is the idea of calling them inverses of each other.
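    You can even watch D( I( f ) ) = f happen numerically. A rough Python sketch (both operators are crude numerical stand-ins of my own, and the names D and I are just shorthand for the operators discussed above):

```python
def I(f, t, n=2000):
    """Numerical stand-in for integration: midpoint-rule
    approximation of the integral of f from 0 to t."""
    dx = t / n
    return sum(f((k + 0.5) * dx) * dx for k in range(n))

def D(g, x, h=1e-5):
    """Numerical stand-in for differentiation: central difference."""
    return (g(x + h) - g(x - h)) / (2 * h)

f = lambda x: x ** 2
x = 1.5
print(D(lambda t: I(f, t), x))  # close to f(1.5) = 2.25
```

    Differentiating the (approximate) integral of f hands back (approximately) f itself, which is the FTC statement D( I( f ) ) = f in action.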
     