
Interpretation of the integral

  1. Jul 18, 2014 #1
    So I'm struggling with whether I should interpret the integral as a sum of infinitesimally small quantities or just as the antiderivative. I know these two things are equivalent, but I can't hold both pictures in mind at the same time when I'm doing an integral.

    The reason I find this troublesome is that it gives me a headache to mentally transform every integral I see into an antiderivative, instead of just thinking of it as an operation that adds up differentials.

    Specifically I'm calculating work done by variable forces along non-straight lines using integrals. If I consider these integrals as just sums then they're no big deal. But if I see them as antiderivatives of some mysterious function then it gives me a headache.
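
    To make the "sum" reading concrete, here is the kind of thing I mean, as a numerical sketch. The force field and the path are made up purely for illustration: ##\vec F(x,y)=(-y,x)## along a quarter of the unit circle.

    ```python
    import math

    # Hypothetical force field F(x, y) = (-y, x) and a quarter-circle path,
    # chosen only to illustrate the "sum of small works" picture of W = ∫ F · dr.
    def F(x, y):
        return (-y, x)

    N = 100_000                 # number of small steps along the path
    W = 0.0
    for k in range(N):
        t0 = (math.pi / 2) * k / N        # parameter at the start of the segment
        t1 = (math.pi / 2) * (k + 1) / N  # parameter at the end of the segment
        x0, y0 = math.cos(t0), math.sin(t0)
        x1, y1 = math.cos(t1), math.sin(t1)
        fx, fy = F(x0, y0)
        W += fx * (x1 - x0) + fy * (y1 - y0)   # F · dr for this small segment

    print(W)   # approaches pi/2 as N grows
    ```

    No antiderivative of a "mysterious function" appears anywhere: the work is literally a sum of tiny contributions ##\vec F\cdot\mathrm d\vec r##.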

    What do you think about this? What's your interpretation of the integral? If I don't think of it as an antiderivative, then the limits of my integral seem somewhat (though not entirely) arbitrary.

    Thanks


    EDIT: Also, put simply: why does Leibniz notation allow algebra-like manipulations, and why do those manipulations produce valid statements?
     
    Last edited: Jul 18, 2014
  3. Jul 18, 2014 #2

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Both interpretations are useful, but not both are useful all the time. When doing physics problems, seeing an integral as a sum of infinitesimal quantities is usually the best interpretation and the only one that really matters. However, for some other problems (usually more mathematically inclined), seeing it as an antiderivative or an area is more useful than seeing it as an infinite sum.

    The key is knowing which interpretation to follow when. It can be confusing at first, but you'll get used to it.
     
  4. Jul 18, 2014 #3

    mathman

    Science Advisor
    Gold Member

    One way to clarify: Think of an antiderivative as a means of calculating an integral.
     
  5. Jul 18, 2014 #4
    Here's a way to link derivatives, infinitesimals, and integrals (if you have a strong intuition for derivatives, you can extend it to integrals):

    [Attached image: graph of a curve ##f##, with the area under it from ##0## to ##x## shaded brown, and a thin gray strip of width ##\mathrm dx## between ##x## and ##x+\mathrm dx##]

    ##{\rm A}(x)## is the area under the curve from ##0## to ##x##, the brown region. It can be interpreted as the sum of all those infinitesimally small quantities.

    ##{\rm A}{(x+\mathrm dx)}## is the area under the curve from ##0## to ##x+\mathrm dx##, the brown + gray.

    For really small ##\mathrm dx##, we can consider the gray region to be a rectangle of width ##\mathrm dx## and height ##f(x)##.

    Thus ##\dfrac{{\rm A}(x+\mathrm dx)-{\rm A}(x)}{\mathrm dx}\approx f(x)##.

    As ##\mathrm dx\to0##, we see that ##{\rm A}^\prime(x)=f(x)## by the definition of the derivative.

    It's very intuitive if you think about it: the rate of change (or derivative) of the area under the curve is just the curve itself! And this area is just the sum of all those infinitesimal quantities, i.e. the sum of all those infinitesimally thin rectangles that make up the area.

    Further insight can be gained through the study of the Riemann integral.
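
    The picture above is easy to check numerically. In this sketch ##f(x)=x^2## is an arbitrary example, and ##{\rm A}(x)## is built directly as a sum of thin rectangles:

    ```python
    # Numerical check that A'(x) = f(x), using f(x) = x**2 as an arbitrary example.
    def f(x):
        return x * x

    def A(x, n=100_000):
        """Area under f from 0 to x, computed as a sum of n thin rectangles."""
        dx = x / n
        return sum(f(k * dx) * dx for k in range(n))

    x, dx = 1.0, 1e-4
    rate = (A(x + dx) - A(x)) / dx   # slope of the area function at x
    print(rate, f(x))                # both close to 1.0
    ```

    The difference quotient of the area function lands right on top of ##f(x)##, exactly as the rectangle argument predicts.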
     
    Last edited: Jul 18, 2014
  6. Jul 18, 2014 #5
    In response to your last question: look into non-standard calculus and non-standard analysis (NSA), which deal with algebraic manipulations of infinitesimals based on Leibniz's notation. This framework was first rigorously developed by Abraham Robinson, who showed that Leibniz's notation never produces contradictions (under NSA).

    I haven't studied non-standard analysis, let alone Robinson's proof, so I can't say anything precise. That's why I can only give a very broad example based on standard analysis that shows how I see it, hoping it clears things up for you. If you have two functions, say ##f:x\mapsto f(x)## and ##g:x\mapsto g(x)##, and you are given their increments ##\Delta f## and ##\Delta g##, then for instance: $$\dfrac{\Delta f}{\Delta g}=\dfrac{\Delta f}{\Delta x}\dfrac{\Delta x}{\Delta g}=\text{whatever manipulations you can and are allowed to do,}$$ and as you take the limit ##\Delta x\to0##, you get differentials, represented by Leibniz's notation. If this interpretation is ambiguous, please correct me.
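
    Here's a small numerical check of that manipulation, with ##f(x)=\sin x## and ##g(x)=x^2## as arbitrary examples:

    ```python
    import math

    # Finite-difference check that the "fraction" rearrangement
    # (Δf/Δx)(Δx/Δg) = Δf/Δg survives the limit: df/dg = f'(x)/g'(x).
    # f(x) = sin(x) and g(x) = x**2 are arbitrary example functions.
    f = math.sin
    g = lambda x: x * x

    x, dx = 1.0, 1e-6
    df = f(x + dx) - f(x)
    dg = g(x + dx) - g(x)

    ratio = df / dg                   # Δf/Δg directly
    chain = (df / dx) * (dx / dg)     # the Leibniz-style rearrangement
    exact = math.cos(x) / (2 * x)     # f'(x) / g'(x)
    print(ratio, chain, exact)        # all three agree closely
    ```

    The two finite-difference expressions are algebraically identical, and both converge to ##f'(x)/g'(x)## as ##\Delta x\to0##, which is what the quotient manipulation promises.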
     
    Last edited: Jul 18, 2014
  7. Jul 19, 2014 #6

    verty

    Homework Helper

    I think you are talking about something like this: ##\int_C P \, dx + Q \, dy## and how to think about it. You should think of it as defining a problem to be solved. The problem is to calculate the line integral. Don't think of it as some kind of sum; think of it as the name of a problem. Once you know what the problem is, you apply the method and solve it.

    What I'm saying is: focus on the structure of the problem, how these problems relate to each other or are constituted, rather than on the notation, which is less important. If you know what problem is being described and can always relate the notation to the problem in a systematic way, you are good to go.

    In fact, I believe this sum notation is just the most convenient way of writing down the vector field and the variables, which is the information needed to solve the problem. It may be as simple as that: it is just the simplest way of naming the problem.
     
    Last edited: Jul 19, 2014
  8. Jul 19, 2014 #7

    verty

    Homework Helper

    Just to follow up, this notation ##\int_C P \; dx + Q \; dy## packs in several pieces of information: the curve ##C##, the vector field ##\langle P, Q\rangle##, the variables ##x## and ##y##, and the plus sign as a reminder that the axes must be orthogonal. So it is a very compact notation for the problem it describes, and that is what is important.
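
    Once the pieces are identified, solving the named problem is mechanical: parameterize ##C## and add up ##P\,dx + Q\,dy## along it. A sketch with made-up choices, ##C: y=x^2## from ##(0,0)## to ##(1,1)##, ##P(x,y)=xy##, ##Q(x,y)=x##:

    ```python
    # Evaluating ∫_C P dx + Q dy by parameterizing C as (t, t**2), t in [0, 1].
    # P, Q, and C are made-up choices for illustration.
    def P(x, y):
        return x * y

    def Q(x, y):
        return x

    N = 200_000
    total = 0.0
    for k in range(N):
        t = (k + 0.5) / N              # midpoint of each parameter step
        dt = 1.0 / N
        x, y = t, t * t                # point on C
        dx, dy = 1.0 * dt, 2 * t * dt  # dx = x'(t) dt, dy = y'(t) dt
        total += P(x, y) * dx + Q(x, y) * dy

    print(total)   # exact value is 1/4 + 2/3 = 11/12
    ```

    The notation named the problem; the parameterization turned it into an ordinary one-variable sum.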
     
    Last edited: Jul 19, 2014
  9. Jul 19, 2014 #8

    Claude Bile

    Science Advisor

    Essentially, it boils down to calculating a sum either (1) by summing all the individual elements, or (2) by summing all the CHANGES in value (starting from the initial value).

    Antiderivatives are an elegant shortcut when using option #2.

    Claude.
     
  10. Jul 19, 2014 #9

    pasmith

    Homework Helper

    Line integrals are not "antiderivatives" unless the integrand is a conservative field.

    Of course, when it comes to evaluating integrals between arbitrary endpoints, we can really only do so analytically by recognising the integrand as the derivative of some function.
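
    To illustrate the conservative case numerically: if ##\vec F = \nabla\varphi##, the sum along any path collapses to ##\varphi(\text{end})-\varphi(\text{start})##. Here ##\varphi(x,y)=x^2y## (so ##\vec F=(2xy,\,x^2)##) is a made-up example:

    ```python
    # For a conservative field F = grad(phi), the line integral depends only on
    # the endpoints: it equals phi(end) - phi(start). Example potential
    # phi(x, y) = x**2 * y, so F = (2xy, x**2).
    def F(x, y):
        return (2 * x * y, x * x)

    def line_integral(path, N=50_000):
        """Sum F · dr along path(t), t in [0, 1], using midpoint steps."""
        total = 0.0
        for k in range(N):
            t, dt = (k + 0.5) / N, 1.0 / N
            x0, y0 = path(t - dt / 2)       # segment start
            x1, y1 = path(t + dt / 2)       # segment end
            fx, fy = F(*path(t))            # field at the segment midpoint
            total += fx * (x1 - x0) + fy * (y1 - y0)
        return total

    straight = line_integral(lambda t: (t, t))       # (0,0) -> (1,1) directly
    curved = line_integral(lambda t: (t, t ** 3))    # (0,0) -> (1,1) along y = x^3
    print(straight, curved)   # both close to phi(1,1) - phi(0,0) = 1
    ```

    Two very different paths give the same sum, which is exactly why the "antiderivative" (potential) shortcut works here and only here.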
     
    Last edited: Jul 19, 2014
  11. Jul 20, 2014 #10

    TSC


    Hakim: right!
     
  12. Jul 20, 2014 #11
    Hakim: If Abraham Robinson proved that Leibniz notation never produces contradictions, why do I keep hearing physicists say things like "Now, a mathematician won't like this, but I'm going to treat this as a quotient" (talking about dy/dx), if treating it as a quotient is in line with mathematics?

    Edit: What did you mean by "under NSA" ?
    Thanks.
     
  13. Jul 22, 2014 #12
    Because most mathematicians stick with standard analysis. And "under NSA" is short for "under non-standard analysis". In other words, if manipulated correctly, Leibniz notation will never produce contradictions.
     
  15. Jul 24, 2014 #14
    I always just thought of the integral as the limit of a sum, with the anti-derivative being just a way to calculate it.

    It appears that you might be missing a good intuition for the fundamental theorem of calculus. Half of it has been covered already. The other half is that when you do an integral by taking an anti-derivative, you can think of it as adding stuff up to find the total change.

    For a constant rate of change, you just multiply the rate of change by time. For example, for constant velocity, velocity times time gives the change in position. So when you do the anti-derivative, in essence, you are approximating the velocity as nearly constant on each small time interval, jumping to the next value in the next interval. When you add up velocity times the length of each time interval, you get the total change in position. And we know velocity is the derivative of position. So, somewhat counter-intuitively, we can work backwards: the problem is to "add up" (integrate) the velocities, and we can do that by just finding the change in position, which we know because the position is the anti-derivative.
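
    This velocity picture is easy to check numerically. The position ##s(t)=t^3## (so ##v(t)=3t^2##) is an arbitrary example:

    ```python
    # Adding up velocity * (small time step) recovers the total change in
    # position -- the "other half" of the fundamental theorem. Arbitrary
    # example: s(t) = t**3, so v(t) = 3*t**2, over the interval [0, 2].
    def v(t):
        return 3 * t * t

    N = 100_000
    total_change = 0.0
    for k in range(N):
        t = 2.0 * (k + 0.5) / N     # midpoint of each small time interval
        dt = 2.0 / N
        total_change += v(t) * dt   # distance covered in this interval

    print(total_change)   # close to s(2) - s(0) = 8
    ```

    The brute-force sum of velocities lands on ##s(2)-s(0)=8##, the same number the anti-derivative shortcut hands you in one step.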

    Still, in the end, if I were doing a problem involving a line integral, I would really just think of the integral as a summation to set up the problem and from there, I'd just turn the crank and do the integral, just thinking of the anti-derivative as a way to calculate that sum. If you don't want to just turn the crank, the idea is what I've explained.
     