Integrals are harder than derivatives, why?

wolly
I understand the concept of derivatives, but when it comes to integrals and their uses I do not understand what they do and where you use them. With derivatives you can understand how a function changes, but in integration everything seems so illogical. Can someone explain to me the use of integrals in calculus? All I could understand is that there is some +C, which is a constant, but I have no idea where that comes from. What does this +C even mean? When I look at a derivative I can see how the function changes, but when I look at an integral I have no idea what the function is doing. All I know is what I learned (more memorized), and I couldn't understand the complexity of them.
I have a math book full of exercises and it doesn't explain at all how an integral works. It just shows me some integrals that I learned in high school, and most of them don't even show the proof behind them.
 
It is just the opposite direction.
If f'(x) is the derivative of f(x), then f(x) is an integral function of f'(x).

You already encountered cases of this much earlier: If 5+6=11, then 11-6=5. Subtracting 6 is the inverse action of adding 6. If 3*4=12, then 12/4=3. Dividing by 4 is the inverse of multiplying by 4.

If you add a constant to a function, its derivative does not change: f(x) and f(x)+3 have the same derivative. If you are only given the derivative, you don't know if it is the derivative of f(x), of f(x)+6423, or something else. Therefore the +c is added to keep all options. Every value of c leads to a possible integral function.
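A quick numerical sketch makes this concrete (plain Python; the helper name is made up for illustration): shifting a function by a constant leaves its difference-quotient derivative unchanged, which is exactly why the +c cannot be recovered from the derivative alone.

```python
def numerical_derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2
g = lambda x: x**2 + 3  # the same function plus a constant

# the +3 vanishes under differentiation, so both derivatives agree everywhere
for x in [-2.0, 0.5, 3.0]:
    assert abs(numerical_derivative(f, x) - numerical_derivative(g, x)) < 1e-5
```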
 
wolly said:
when it comes to integrals and their uses I do not understand what they do and where you use them.

Begin by making a distinction between "antiderivative" and "definite integral". Some people use the term "integral" to refer to both those concepts, but the two concepts have different definitions. Informally, a "definite integral" of a function refers to an area "under" its graph. The antiderivative of a function f(x) is another function F(x) such that F'(x) = f(x). For example, if f(x) = 2x, then both F_a(x) = x^2 and F_b(x) = x^2 + 6 are antiderivatives of f(x).

The connection between definite integrals and antiderivatives is given by the Fundamental Theorem of Calculus, which you should know if you have taken a calculus course. It's normal for a student not to have an intuitive understanding of why the Fundamental Theorem of Calculus should be true, but there's no excuse for not knowing what the theorem says and how it connects the idea of a definite integral with the idea of an antiderivative. Do you understand what the Fundamental Theorem of Calculus says?
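The theorem can be watched numerically (a sketch in plain Python; `riemann_sum` is an invented helper, and f(x) = 2x is an arbitrary example): the limit-of-sums area under f on [0, 3] matches F(3) − F(0) for any antiderivative F, with the +C cancelling in the difference.

```python
def riemann_sum(f, a, b, n=100000):
    # midpoint-rule approximation of the definite integral of f on [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 2 * x
F = lambda x: x**2       # one antiderivative; x**2 + 6 gives the same difference
area = riemann_sum(f, 0.0, 3.0)
assert abs(area - (F(3.0) - F(0.0))) < 1e-6
```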
 
That is not what I asked. I asked why integration is harder than differentiation. I said that in derivatives the function f(x) changes over time, df(x)/dx. That is easy to understand, but understanding why the integral of some functions are the derivatives of that function is not.
 
wolly said:
the integral of some functions are the derivatives of that function is not.

You probably mean "...are the antiderivatives...".

You didn't say whether you know what the Fundamental Theorem Of Calculus says.
 
The truth is that you can derivate functions but not all functions are the derivative of a function. How does that make sense? For ex the integral of e^x/x has no derivative. Now I'm really confused. I just don't see the logic of integrals in derivatives. Can you explain that to me?
 
wolly said:
That is not what I asked. I asked why integration is harder than differentiation. I said that in derivatives the function f(x) changes over time, df(x)/dx. That is easy to understand, but understanding why the integral of some functions are the derivatives of that function is not.
If you dig to the bottom of this question, the answer will be: because multiplication is harder than addition.
To build a derivative, we attach a ruler at some point of the object, which is a linear, i.e. an additive, thing. To integrate, we have to calculate a volume, which is a multiplication, or many small ones in this case. The formula ##e^x \cdot e^y = e^{x+y}## illustrates the difficulty: the derivative corresponds to the addition in the exponent, while the integration is the multiplication on the base line. That's why most differential equations are solved by searching for something like ##c(x)e^{a(x)}##.
 
I do understand calculus, but when it comes to integrals the theory doesn't make any sense. In differentiation you use the product rule, for example, but in integration by parts you don't. How does that even work?
 
wolly said:
I do understand calculus, but when it comes to integrals the theory doesn't make any sense. In differentiation you use the product rule, for example, but in integration by parts you don't. How does that even work?
Differentiation for many functions is very straightforward -- you use the product rule, chain rule, etc., and you might have to use several rules in succession in a certain order. Going backward (antidifferentiating) is much less straightforward, since the procedure is much less "cookbook" than differentiation.

Whether you know it or not, integration by parts is the reverse operation of the product rule. If ##h(x) = f(x) g(x)##, then ##h'(x) = f(x) g'(x) + f'(x) g(x)##. This equation can be rearranged to ##f(x) g'(x) = h'(x) - f'(x) g(x)##. If you multiply both sides by dx and antidifferentiate both sides, you get ##\int f(x) g'(x)\,dx = h(x) - \int f'(x) g(x)\,dx##, which is integration by parts.
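A numeric spot-check of that identity (a sketch; the midpoint-rule integrator is an invented helper, and f(x) = x, g(x) = sin x on [0, π] are arbitrary choices): both sides of the parts formula agree.

```python
import math

def midpoint_integral(f, a, b, n=200000):
    # midpoint-rule approximation of a definite integral
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 0.0, math.pi
# left side: integral of f * g' = x * cos(x)
lhs = midpoint_integral(lambda x: x * math.cos(x), a, b)
# right side: [f * g] at the endpoints minus integral of f' * g = sin(x)
boundary = b * math.sin(b) - a * math.sin(a)
rhs = boundary - midpoint_integral(lambda x: math.sin(x), a, b)
assert abs(lhs - rhs) < 1e-6
```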
 
  • #10
wolly said:
The truth is that you can derivate functions
In English we say you can differentiate functions.
Derivate is a word, but not one that's used in discussions of calculus.
 
  • #11
wolly said:
The truth is that you can derivate functions but not all functions are the derivative of a function.

Have you taken a calculus course? You seem unfamiliar with standard terminology. We "differentiate" functions, not "derivate" them.

For ex the integral of e^x/x has no derivative.
If you understood the concepts of integral and derivative, you'd realize you haven't given a specific example.

Now I'm really confused. I just don't see the logic of integrals in derivatives. Can you explain that to me?

I have to guess at your level of understanding. The fact that you are careless with terminology - such as writing "derivative" when you mean "antiderivative" - suggests that you are approaching calculus as a set of procedures for manipulating symbols - i.e., as a more sophisticated version of the manipulations that are done in elementary algebra. You appear frustrated because the manipulations permitted for antidifferentiation are more complicated than those permitted for differentiation. You won't fully understand why certain procedures for manipulating symbols are permitted unless you understand the verbal concepts that underlie the manipulations. If you are using a standard calculus text, there are verbal explanations for the concepts of definite integral and antiderivative. So my guess is that you are ignoring the explanations in your text materials. People on this forum can present their own versions of what's already in your textbooks, and perhaps the personal attention will motivate you to pay closer attention to these ideas.
 
  • #12
Mark44 said:
In English we say you can differentiate functions.
Derivate is a word, but not one that's used in discussions of calculus.
I struggle between derivative and derivation. It's the same at its core, but then again different in context.
 
  • #13
Sorry I meant differentiate.
 
  • #14
fresh_42 said:
I struggle between derivative and derivation. It's the same at its core, but then again different in context.
I'm sure it's confusing, but then English is not known for being logical.

As far as derivative and derivation, it's just something you have to remember, since the usage isn't logical at all.

Differentiation is the operation used to calculate a derivative. Or, you can differentiate a function to get its derivative, but you don't derive a function to get its derivative.

The word derivate is a noun, so it's not an action word (you can't derivate something). From Oxford Dictionaries, "something derived, especially a product obtained chemically from a raw material." I've never seen "derivate" used in any calculus textbook, at least any written by English speakers.

You can start with a quadratic equation, and use completing the square to derive the Quadratic Formula. Or, the English word "cotton" is derived from the Spanish word "algodon," which is almost identical to the Arabic word "al godon."
 
  • #15
What you say may be true, but when it comes to integration from -inf to +inf, integrals can be hard. I mean, even in limits you can solve that, but can you solve integrals that go from -inf to 0 and from 0 to +inf? Is that even possible?
 
  • #16
wolly said:
Sorry I meant differentiate.
No problem and no need for an apology. I wasn't trying to be critical. I know you're not a native speaker of English. I was just trying to help you out.
 
  • #17
What I meant in the thread is that I want to understand the complexity of integrals compared to derivatives. If you look at integrals that tend to inf, just like limits, you can see that integrals can't be solved like limits. Another thing I do not understand is why a limit is placed in the solution of an integral. At least in limits I remember that you couldn't use a rule like this one.
 
  • #18
Uh no one knows?
 
  • #19
Mark44 said:
I'm sure it's confusing, but then English is not known for being logical.
No that isn't it. I've studied a lot about Lie algebras, and they have derivations, which are linear mappings defined by ##d([a,b])=[d(a),b]+[a,d(b)]##. This is just a version of the Leibniz rule, as the Jacobi identity is, because it is deduced - or should I say derived - from derivatives. It's basically the same thing, just not calculus. And as I'm more used to writing "derivation", it occasionally slips into "derivatives". It would be equally difficult in German, but the German word for derivative is "Ableitung". It's the literal translation of the Latin one, so it's a bit easier because of that.
 
  • #20
wolly said:
Uh no one knows?
I do. Post number seven.
 
  • #21
wolly said:
What you say may be true, but when it comes to integration from -inf to +inf, integrals can be hard. I mean, even in limits you can solve that, but can you solve integrals that go from -inf to 0 and from 0 to +inf? Is that even possible?
Sure, these kinds of definite integrals come up fairly often. For example, ##\int_0^\infty e^{-x}dx## is a fairly simple example of a convergent improper integral whose value is 1.
In contrast, the improper integral ##\int_{-\infty}^0 e^{-x}dx## diverges.
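Both behaviors can be seen numerically by truncating the infinite limit at a finite b (a sketch; the midpoint-rule helper is an invented name, not a library function):

```python
import math

def midpoint_integral(f, a, b, n=100000):
    # midpoint-rule approximation of a definite integral on [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# convergent: the integral from 0 to b of e^(-x) approaches 1 as b grows
tails = [midpoint_integral(lambda x: math.exp(-x), 0.0, b) for b in (5.0, 10.0, 30.0)]
assert abs(tails[-1] - 1.0) < 1e-5

# divergent: the integral from -b to 0 of e^(-x) equals e^b - 1 and blows up
blowups = [midpoint_integral(lambda x: math.exp(-x), -b, 0.0) for b in (5.0, 10.0, 30.0)]
assert blowups[0] < blowups[1] < blowups[2]
```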
 
  • #22
But how can someone use limits in integrals (antidifferentiation functions)? I can understand that in limits but not in integrals. I thought that integrals can't have limits, because that would make no sense and break the rules of calculus.
 
  • #23
wolly said:
But how can someone use limits in integrals (antidifferentiation functions)? I can understand that in limits but not in integrals. I thought that integrals can't have limits, because that would make no sense and break the rules of calculus.
I don't quite understand. Integrals are limits: the limit of the sums of tiny rectangular elements needed to fill a curved region. The tinier the rectangles, the better the approximation. That's where integrals came from.
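That limiting process can be watched directly (a sketch; the function x² on [0, 1] is an arbitrary choice): as the rectangles shrink, the sum closes in on the exact area.

```python
def left_riemann(f, a, b, n):
    # left-endpoint rectangles of width (b - a) / n
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

exact = 1.0 / 3.0  # area under x^2 on [0, 1]
errors = [abs(left_riemann(lambda x: x**2, 0.0, 1.0, n) - exact) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]  # tinier rectangles, better approximation
```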
 
  • #24
There are many math problems that become a lot more difficult when turned into inverse problems. As an example not related to calculus, think about finding the prime factors of a large integer, compared to multiplying known prime factors to get a large number.
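The asymmetry is easy to demonstrate (a toy sketch, nothing cryptographic; the two primes are arbitrary): multiplying is a single operation, while recovering the factors by naive trial division walks up to the square root of n.

```python
def factor(n):
    # naive trial division: the "inverse" problem costs roughly sqrt(n) steps
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p, q = 104729, 1299709      # two primes; multiplying them is one step
n = p * q
assert factor(n) == [p, q]  # undoing that one step takes ~100,000 trial divisions
```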
 
  • #25
Regarding the use of integrals:
Some real-world examples may be helpful. The integral of a function is the accumulation of the function values over a range of its inputs. If you know the velocity of a car as a function of time, v(t), then you can integrate it to keep track of the position of a car whose initial position at time 0 is ##p_0##, ##p(t) = p_0 + \int_0^t v(s)ds##. (You would need to do that for every dimension that the car can travel in.) One step further, you can accumulate the accelerations of a car to keep track of its velocity. This is a very common application because accelerometers are very easy to include in a vehicle.
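That accumulation is easy to mimic with sampled data (a sketch; the trapezoid accumulator is an invented helper, and v(t) = 2t is an arbitrary test signal, so p(t) = p0 + t²):

```python
def accumulate(samples, dt):
    # cumulative trapezoidal integration of evenly spaced samples
    total, out = 0.0, [0.0]
    for prev, cur in zip(samples, samples[1:]):
        total += 0.5 * (prev + cur) * dt
        out.append(total)
    return out

dt = 0.001
ts = [i * dt for i in range(1001)]       # one second of samples
v = [2.0 * t for t in ts]                # v(t) = 2t, so p(t) = p0 + t^2
p0 = 5.0
p = [p0 + s for s in accumulate(v, dt)]
assert abs(p[-1] - (p0 + 1.0)) < 1e-9    # position after 1 s is p0 + 1
```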

Regarding the relative difficulty of determining a closed-form equation for an integral:
A function with a derivative at a point is very well behaved at that point and around it. The derivative is only a local property, and the behavior of the function some distance from that point is irrelevant. The function will be continuous at the point where it has a derivative.
Very bizarre functions have integrals. The functions may be extremely discontinuous. Furthermore, since the integral is an accumulation of the function values over a range of input values, it depends on the behavior of the function over the entire range. That is a much more difficult thing to handle than the local property of a derivative.
 
  • #26
wolly said:
I understand the concept of derivatives, but when it comes to integrals and their uses I do not understand what they do and where you use them. With derivatives you can understand how a function changes, but in integration everything seems so illogical. Can someone explain to me the use of integrals in calculus? All I could understand is that there is some +C, which is a constant, but I have no idea where that comes from. What does this +C even mean? When I look at a derivative I can see how the function changes, but when I look at an integral I have no idea what the function is doing. All I know is what I learned (more memorized), and I couldn't understand the complexity of them.
I have a math book full of exercises and it doesn't explain at all how an integral works. It just shows me some integrals that I learned in high school, and most of them don't even show the proof behind them.

There are two basic problems in calculus. (1) Find the slope of a line tangent to a curve at a given point on that curve, and (2) find the area under the curve between two points. Both problems arose out of practical work done by Newton and others in dealing with the physics of motion, including planetary motion. Euclidean geometry was inadequate to deal with such problems.

Both problems involve limits. (1) To calculate the slope of the tangent line, we start with the slope of a line between two points that are close to each other on the curve, then we calculate the limit of the slope of the line as one point approaches the other. (2) To calculate the integral, you could say we divide the area under the curve into narrow strips and add up the areas of those strips. As the width of the strips becomes smaller, and approaches zero, the sum of the areas approaches the integral.
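For problem (1), the limiting process is easy to watch numerically (a sketch; the curve x² and the point x = 1 are arbitrary choices): secant slopes through nearby points settle toward the tangent slope.

```python
def secant_slope(f, x, h):
    # slope of the line through (x, f(x)) and (x + h, f(x + h))
    return (f(x + h) - f(x)) / h

f = lambda x: x**2
slopes = [secant_slope(f, 1.0, h) for h in (0.1, 0.01, 0.001)]
# the secants approach the tangent slope f'(1) = 2 as h shrinks
assert abs(slopes[-1] - 2.0) < abs(slopes[0] - 2.0)
assert abs(slopes[-1] - 2.0) < 2e-3
```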

If you can get a copy of Bob Miller's Calc for the Clueless, volume I, you will find a good introduction to this. He was (and maybe still is?) a high school math teacher, and he knows how to explain basic calculus.

He also explains how (1) and (2) are related, as well as what they are good for. Best wishes.
 
  • #27
you are confusing integrals with antiderivatives. integrals are actually pretty intuitive and natural, but antiderivatives, the trick for calculating them, is hard for the same reason that square roots are harder than squares and dividing is harder than multiplying and subtracting is harder than adding.
 
  • #28
mathwonk said:
you are confusing integrals with antiderivatives.
Integrals are not antiderivatives? What does that mean? Do you even understand what you're saying?
 
  • #29
wolly said:
Integrals are not antiderivatives? What does that mean? Do you even understand what you're saying?
The meteorological version is somehow strange, too: There is always some place on Earth where there is no wind.
 
  • #30
Well, can someone prove that integrals are not antiderivatives? Please explain!
 
  • #31
wolly said:
Well, can someone prove that integrals are not antiderivatives? Please explain!
I only can guess, too. Maybe @mathwonk meant by integrals the linear operator ##\int ## on the vector space of integrable functions, and by antiderivative the result of an integration. Thus ##\int (\alpha f + \beta g) = \alpha \int f +\beta \int g ## is easy, while ##\int \sqrt{1-x^3} \cos x \,dx ## is hard.
 
  • #32
wolly said:
Integrals are not antiderivatives? What does that mean? Do you even understand what you're saying?
I would appreciate some clarification too, but my impression is that @mathwonk certainly knows what he is talking about.
 
  • #33
fresh_42 said:
I only can guess, too.
As explained in post #3, in the USA, the term "integral" is often used to mean "definite integral". So a definite integral is a number (computed in a particular way), not an operator. e.g. ##\int_0^1 x^2 dx## is an "integral", but not an antiderivative.
 
  • #34
Also, going back to the OP, derivatives are easy to calculate, but very hard to use in physics because they are extremely sensitive to noise in the data (example: take a look at ##\cos(t)+\frac{\sin(1000t)}{100}##. Looks nice and smooth, doesn't it? Then check out its derivative).

Integrals, however, tend to remove noise - even very noisy functions can have a smooth integral:
[Attached image: a white-noise data set]

The integral of this data set is very close to 0!
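The amplitudes make the point (a sketch tracking only the wiggle term of the example above; its derivative and integral are written out in closed form by hand): differentiation multiplies the wiggle by 1000, integration divides it by 1000.

```python
import math

# the wiggle sin(1000 t) / 100 has amplitude 0.01
noise = lambda t: math.sin(1000.0 * t) / 100.0
noise_deriv = lambda t: 10.0 * math.cos(1000.0 * t)          # amplitude 10
noise_integral = lambda t: -math.cos(1000.0 * t) / 100000.0  # amplitude 1e-5

ts = [i * 1e-4 for i in range(10000)]  # sample one second finely
peak = lambda g: max(abs(g(t)) for t in ts)
assert peak(noise_deriv) > 100.0 * peak(noise)     # differentiation amplifies it
assert peak(noise_integral) < peak(noise) / 100.0  # integration suppresses it
```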
 

  • #35
Long topic or thread, but this quote gets to the more basic idea:
hilbert2 said:
There are many math problems that become a lot more difficult when turned into inverse problems. As an example not related to calculus, think about finding the prime factors of a large integer, compared to multiplying known prime factors to get a large number.
I remember a time when a young remedial mathematics student asked me, "Why is division so complicated but multiplication is so easy to understand?" We were dealing with long division of whole numbers and of decimal numbers. For sure, calculus was to be a long, long way off from this; but there is the general idea.
 
  • #36
this is for @wolly, in particular the question in post #30.
sorry for the confusion. i forgot that in many elementary books the word "integrate" is used as a synonym for the word "antidifferentiate", and "integral" is equated with "antiderivative". this in my opinion is very harmful to the student. the reason of course for this practice is the theorem that if a function f is continuous, then its integral, in the correct sense of a limit of sums, is a differentiable function of the upper limit, and that derivative is f. An integral however is by definition a limit of sums, and the antiderivative is merely a tool, or trick, for calculating it. For this reason, we all start out early using only this trick and forgetting largely the definition of the integral as a limit. The one exception is the excellent book of Apostol where integrals are treated first and at length, before introducing the derivative and its use in computing integrals.

The harm comes for several reasons. First of all, the theorem has a hypothesis, continuity of f. So what do we do when f is not continuous? In that case the integral may not be differentiable as a function of the upper limit. E.g. it is a basic theorem following from the mean value theorem that if f is a derivative, then it has the intermediate value property. In particular a step function is not a derivative, hence its integral is not an antiderivative, at least not in the usual sense, but the integral of a step function is easily computed using sums. Thus the most basic Riemann sums used to approximate integrals, although certainly integrable, are not themselves directly antidifferentiable.

This problem raises its head again in complex variables, where again one defines path integrals in terms of limits of sums, and then proceeds to prove many fundamental theorems like the Cauchy integral theorem and residue theorem, which may be lost on most students who think of integrals merely in terms of antiderivatives. I.e. in complex variables most integrands do not have antiderivatives, or else all path integrals would equal zero. E.g. the first interesting integral one meets is that of dz/z taken around the unit circle. One wants the antiderivative to equal the logarithm but there is no way to define the log function on any set containing the unit circle. This same situation comes up in vector calculus, since most differentials are not exact, and even closed differentials are exact only locally. indeed dz/z is a closed, locally exact, differential that is not exact on any neighborhood of the origin. The problem of course is that given a (starting point p and a) point q, the antiderivative can only make sense if it has a unique value at q, whereas the path integral is defined in terms of a path from p to q. The integral makes sense for all paths, but the antiderivative only makes sense if the values for all choices of paths are the same. I think now this is why my complex class just stared uncomprehendingly, throughout the discussion of path integrals and their properties. The word "integral" may have just had no meaning for them without the crutch of antiderivatives.

Another matter, which really is at the heart of the question here I think, is that differentiation is an operation that strictly shrinks the class of functions one is working on, i.e. it takes functions and tends to make them more elementary. Thus a very abstract and sophisticated function, like the logarithm, can have a very elementary looking function like 1/x. This makes it hard to go backwards, since the antiderivative of an easy function tends to be more difficult, or more abstract. Indeed according to the fundamental theorem quoted above, the only reason we believe that a continuous function should have an antiderivative is that one can be constructed, or at least approximated, by its limiting sums. Thus this direction, starting from the integral as a limit of sums, and using that to try to find an antiderivative, is the only direction that will always work. I.e. trying to work backwards, and just guess or somehow cook up an antiderivative, and use that to compute an integral, will only work in special very easy cases.

oh yes, the presence of the C that is worrying the student comes from the theorem that the derivative of a constant C is zero, so the antiderivative of zero is only pinned down at best to being some constant C. Thus for any continuous function f, since f = f+0, its antiderivative can only be pinned down to within a constant C. I.e. if g is one antiderivative, then g+C is another one for every constant C.

Notice this only applies to continuous functions, so e.g. if a book claims that the general antiderivative of 1/x is ln(|x|) +C, this is wrong, since 1/x is not continuous. I.e.
we could take ln(|x|) + C1 for x<0, and ln(|x|) +C2, for x>0, where C1 and C2 are different constants.
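A quick numerical check of that remark (a sketch; the constants 7 and -3 are arbitrary stand-ins for C1 and C2): ln|x| plus *different* constants on the two sides of 0 still differentiates to 1/x everywhere it is defined.

```python
import math

def F(x, c_neg=7.0, c_pos=-3.0):
    # ln|x| plus a different constant on each side of the break at x = 0
    return math.log(abs(x)) + (c_neg if x < 0 else c_pos)

def deriv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, -0.5, 0.5, 3.0):
    assert abs(deriv(F, x) - 1.0 / x) < 1e-6
```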
Note too that in the complex domain this C is what saves you in some cases. I.e. for the various different choices of a logarithm on the punctured plane, any 2 differ by a constant, so they all have the same derivative! Thus even though the antiderivative is not well defined, its derivative is! Or backwards, even though the integrand is well defined, hence also its (path) integral, nonetheless the antiderivative may not be.

so this is roughly what i was thinking of, and i hope it helps someone; apologies if it does not.

Remark: If you are aware of Lebesgue integration you know that continuity of the integrand can be dispensed with at the cost of dealing with functions which are differentiable almost everywhere. e.g. in the case of the step functions, we can "antidifferentiate" them by using piecewise linear functions, in the sense that a suitable piecewise linear function is continuous, has a derivative at all but a finite set of points, where it is still continuous, and at all other points it is differentiable and its derivative equals the step function. Such an "almost everywhere" antiderivative can then be used to compute the (definite) integral of a step function. For example, the absolute value function is a good antiderivative of the function that equals -1 for negative x and +1 for positive x, and anything at x=0. (We don't care about the value at zero since it cannot affect the value of the integral.) I.e. that step function has a definite integral on any interval and the absolute value function can be used to calculate it. But the point again is that one really uses here the definition of the integral as a limit of sums to find the antiderivative, not the other way around.

Svein's post too is very interesting, to me especially the point that the integral is a "smoothing operation".
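That step-function example can be checked numerically (a sketch; the sum-based integrator is a made-up helper): the limit-of-sums integral of the ±1 step function over [a, b] matches |b| − |a|, so abs(x) really does act as an almost-everywhere antiderivative.

```python
def sign_step(x):
    # -1 for negative x, +1 otherwise; the value at 0 does not affect the integral
    return -1.0 if x < 0 else 1.0

def midpoint_integral(f, a, b, n=100000):
    # midpoint-rule approximation of the definite integral on [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = -2.0, 3.0
# abs(x) serves as an almost-everywhere antiderivative of the step function
assert abs(midpoint_integral(sign_step, a, b) - (abs(b) - abs(a))) < 1e-3
```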

As some of you probably know, integrals can even be used to define derivatives of functions that are not differentiable in the usual sense anywhere! i.e. if f is any locally integrable function, then it acts on smooth (infinitely differentiable) compactly supported functions g by integrating their product fg. And integration is so sensitive that knowing these integrals for all g determines f almost everywhere. So even if f is nowhere differentiable in the usual sense, we can still determine what Df should be by telling what its value on every smooth compactly supported g is. For by the formula for integration by parts, we should have that the integral of gDf + fDg should be zero (since g and hence Dg are supported on a finite interval), hence we can define the integral of gDf to be minus the integral of fDg. This is called the "distribution derivative" of f. We don't get it immediately as a function, but we do know how a function representing it should act on all smooth (compactly supported) functions. This is useful even in the case of functions f that do have a derivative; in fact one can solve some differential equations in two stages, first by finding the distribution derivative or distribution solution, and then proving that solution is actually represented by a function.

Note that the basis for this use of integrals to define derivatives is that precisely the opposite of the original complaint is true, i.e. integrals are far easier, at least theoretically, than derivatives, e.g. far larger classes of functions can be integrated than can be differentiated, and as Svein observed, the integrals have better properties. E.g. a locally integrable function has of course an integral by definition, even though it may be very rough or noisy, but it takes this very clever stratagem to even begin to define its derivative.

I will try now to stop adding to this very pregnant discussion topic. But I suggest for wolly, that a perusal, or better, a careful study, of the first part of Apostol, where he does integrals before derivatives, could be very instructive, since you seem to seek understanding as opposed to memorizing.
 
Last edited:
  • #37
can't resist one more example of why integrals are "easier than" derivatives. in fact among all continuous functions, most of them do not have derivatives anywhere, but they all have integrals, and it is easy to approximate the values of those integrals by estimating them by step functions. hence most are "easy to integrate" but impossible to differentiate. the most famous example, due to weierstrass, was even given as a Fourier series, in fact as a limit of just cosines:

https://en.wikipedia.org/wiki/Weierstrass_function

Note too that since this function is continuous, hence locally integrable, it does have a "distribution derivative", in the sense that it operates on smooth compactly supported functions.
 
Last edited:
  • #38
to elaborate the basic idea of symbolipoint's post: recall that subtraction is defined in terms of addition, i.e. a-b is defined as that number c such that b+c = a. similarly a/b is defined as that number c such that bc = a, and the antiderivative of f is defined as that function g such that g' = f, etc... this means that to calculate a-b from the definition, you have to add all possible numbers to b and wait until you get a as an answer, and similarly in the other cases. it is amazing when there is any procedure at all that helps find the "opposite" of an operation. that's why solving equations is hard. these tiny examples amount to solving the equations bx = a, or b+x = a, or y' = f ...

If anyone takes the time to process these posts I will enjoy any feedback. the main point is that although it is nice to use an antiderivative to compute an integral, in general this is impossible and the only way to find most antiderivatives is to use an integral, i.e. a "definite" integral, or limit of sums, but computed with a variable upper limit. moreover, most functions can be integrated but not differentiated, and even for almost all elementary functions that one can write down, the antiderivative given by the integral, is a totally unfamiliar function you cannot write down any other way than as a limit of sums, and have never seen before or heard of.

Just for fun, you know the absolute value function is continuous, so it must have an antiderivative. Can you write it down? Hint: it's not hard, and you can work on one side of the y-axis at a time. Just make sure your final answer is continuous.
 
Last edited:
  • #39
wolly said:
Well, can someone prove that integrals are not antiderivatives? Please explain!
EDIT: I don't know if this is what you are looking for, but there is, e.g., the Volterra function V, which is everywhere differentiable but ##\int V' \neq V##. The concept that comes into play here is absolute continuity. A related standard example is the Cantor function C. Since C is locally constant away from the Cantor set, C' = 0 a.e., but, as with the Volterra function, ##\int C' \neq C##. And V' is not even Riemann integrable, so we do not always recover the function as the integral of its derivative; these operations are not, strictly speaking, inverses of each other. EDIT: What I mean is that only within the class of absolutely continuous functions are the two operations inverses of each other.
 
Last edited:
  • #40
these are very nice and illustrative examples. they may be examples of the logically opposite statement however, since they are examples of antiderivatives that are not integrals. in all cases of lebesgue integrable functions f, the integral of f is an antiderivative (almost everywhere) of f, by lebesgue's theorem. of course it depends on your definition of antiderivative. i.e. even in the riemann case, unless f is continuous, it is not necessarily true that an integral is an everywhere differentiable antiderivative. but for every riemann integrable f, it is a (Lipschitz continuous, hence also absolutely continuous) function which is a differentiable antiderivative of f almost everywhere.

obviously this is a somewhat complicated topic, and i am not an expert, i.e. i am a geometer and not an analyst.

but from the combination of the posts of WWGD and these remarks, it seems that using lebesgue integration, every (lebesgue) integral is an antiderivative (a.e.), but some antiderivatives are not integrals, only absolutely continuous ones are. thank you WWGD, this clarifies things at least for me. In particular, integrals are not the same thing as antiderivatives. I.e. an integrable function can have many antiderivatives, but only those which are also absolutely continuous can occur as integrals.

His Cantor example shows this, since it is a continuous (but not absolutely continuous) antiderivative of the zero function. The only integral of the zero function, however, is the zero function. To put it another way, among all the antiderivatives of an integrable function, the integral picks out the unique (up to a constant) absolutely continuous one. The key lemma is the one that in the classical case follows from the mean value theorem: if a continuous function has derivative zero almost everywhere, is it a constant? The answer is: not necessarily, unless the function is also absolutely continuous.

To @wolly, this relates to your question about the constant C. If you define an antiderivative of f as a function that is differentiable everywhere with derivative equal to f everywhere, then any two antiderivatives differ by a constant C. But if we define an antiderivative of f as a continuous function with derivative almost everywhere equal to f, then two of these can differ by a function like the Cantor function! That difference is not necessarily a constant, but it does have derivative zero almost everywhere. If instead we define an antiderivative as an absolutely continuous function whose derivative equals f almost everywhere, then any two of these do differ by a constant C.
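To make the Cantor function less mysterious, here is a short numerical sketch (my own Python, not from the thread) that computes it from the ternary digits of its argument and checks that it climbs from 0 to 1 even though a difference quotient taken inside any removed middle third is exactly 0:

```python
def cantor(x, depth=40):
    """Approximate the Cantor function C(x) for x in [0, 1] from the ternary digits of x."""
    if x >= 1.0:
        return 1.0
    total, weight = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:                      # x lies in a removed middle third: value is fixed here
            return total + weight
        total += weight * (digit // 2)      # ternary digit 0 contributes 0, digit 2 contributes weight
        weight /= 2
    return total

# C rises from 0 to 1 ...
print(cantor(0.0), cantor(1.0))            # 0.0 and 1.0
print(cantor(1/3), cantor(2/3))            # both 0.5: C is flat on (1/3, 2/3)
# ... yet its slope, sampled inside a removed interval, is exactly zero:
print((cantor(0.5) - cantor(0.4)) / 0.1)   # 0.0
```

So C gains a full unit of height while having derivative zero almost everywhere, which is exactly why it can separate two "antiderivatives" of the zero function.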

In the Riemann case it is harder to provide inverse operations. Every Riemann integrable function f is continuous almost everywhere, and its integral is Lipschitz continuous and differentiable wherever f was continuous, hence almost everywhere, and the value of that derivative equals f almost everywhere. But if we start with a Lipschitz continuous function g, which is also differentiable a.e. (in fact that is guaranteed), it is not always true that its derivative g' is Riemann integrable, although it will be Lebesgue integrable, with Lebesgue integral equal to g + C for some constant C. So in the Riemann case we do not even know a condition that guarantees a function is the integral of some Riemann integrable function; Lipschitz continuity is necessary but apparently not sufficient. In the Lebesgue case, we do know that every absolutely continuous function is the integral of its derivative, hence is both an integral and an antiderivative.

Since I am not an expert, I recommend reading a book by an analyst, such as Sterling Berberian's on integration and measure theory, or, for the ambitious, Functional Analysis by Riesz and Nagy.
 
Last edited:
  • Like
Likes WWGD
  • #41
By the way, it is not expected that someone with a basic question on this can understand all the fairly sophisticated material that has been posted. It is only meant as an attempt to provoke discussion. Any and all questions on any aspect of it are welcome, so just take a sentence or two, process them, and ask away.

My goal is achieved if someone has been given more examples to think about whether an integral is or is not the same thing as an antiderivative, and what that question means, which is the same as the OP's question in #30.
 
  • #42
Just to follow up on what I said earlier about integration, it may help to understand it from a computational viewpoint. It's actually amazingly simple in that sense.

Here is a reference from my library which is a bit dated but still quite useful. "BASIC Programs for Scientists and Engineers" by Alan Miller. It's a very easy introduction to many useful topics, including numerical integration. (To be followed up perhaps by those who need it with the classic Numerical Recipes in C (or if you must C++) by Press et al.)

Miller provides three different ways of computing the integral in just 27 pages (Ch. 9). Compare the simplicity of writing a program to do numerical integration with a program to solve partial differential equations!

In general, I've found that programming something, or at least learning how it's done, makes some things much clearer. This is especially true when you are writing the program, because, as the saying goes, you really understand something when you can tell a computer how to do it.
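Miller's listings are in BASIC; here is a sketch of the same idea in Python (my own code, not the book's), showing how little machinery basic numerical integration needs:

```python
import math

def trapezoid(f, a, b, n=1000):
    """Composite trapezoid rule with n panels on [a, b]."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# The area under sin(x) on [0, pi] is exactly 2.
print(trapezoid(math.sin, 0, math.pi))   # ≈ 2, error on the order of 1e-6
print(simpson(math.sin, 0, math.pi))     # ≈ 2, error on the order of 1e-12
```

A dozen lines each, versus pages of analysis for a symbolic antiderivative: this is the computational simplicity the post is pointing at.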
 
  • Like
Likes mathwonk
  • #43
Going from mathematics to physics (electronics) - here is an integrator:
[image: op-amp integrator circuit]

This integrator has a frequency response like this:
[image: integrator frequency response]

You can also create a differentiator:
[image: op-amp differentiator circuit]

The frequency response of such a circuit is something like this:
[image: differentiator frequency response]
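A rough numerical model of the ideal op-amp integrator (component values here are made up for illustration, not taken from the circuit shown): with RC = 1 ms, a ±1 V square wave with 1 ms half-period comes out as a triangle wave, i.e. the circuit really does integrate its input.

```python
R, C = 10e3, 100e-9          # hypothetical values: 10 kΩ, 100 nF, so RC = 1 ms
dt = 1e-6                    # 1 µs simulation step
vout, outputs = 0.0, []
for n in range(4000):        # two full periods of the input
    vin = 1.0 if (n // 1000) % 2 == 0 else -1.0   # ±1 V square wave, 1 ms half-period
    vout -= vin * dt / (R * C)                    # ideal integrator: Vout = -(1/RC) ∫ Vin dt
    outputs.append(vout)

# the output ramps at -/+1 V/ms: a triangle wave between 0 V and -1 V
print(min(outputs), max(outputs))   # ≈ -1.0 and ≈ 0.0
```

The inversion (the minus sign) and the slope -Vin/RC are the defining behavior of the inverting integrator topology.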
 

  • #44
mathwonk said:
by the way, it is not expected that someone with a basic question on this can understand all this fairly sophisticated stuff that has been posted. it is only meant as an attempt to provoke discussion. all and any questions on any aspect of it are welcome. so just take a sentence or two, process them and ask away.

That is my usual approach: to answer the big questions, start early. You will likely not understand it the first time around, but you will start breaking it down in the back of your mind. But I have been chided here by some moderators for this; they seem to think I am too over the top.
 
  • #45
The best professors I have had answered questions in ways I did not understand, sometimes for years. They really give you a lot, but you have to make an effort.
 
  • #46
I didn't really understand these subtleties until a couple of years after my first calculus sequence. Applications in physics helped, but what helped most was when I finally got to a numerical analysis course and learned how to compute just about any derivative or integral numerically (and also how to integrate differential equations numerically). Somehow the computational approach (as opposed to the pencil-and-paper analytical approach) was the missing piece for my conceptual understanding.
 
  • Like
Likes Auto-Didact
  • #47
wolly said:
I understand the concept of derivatives but when it comes to integrals and their uses I do not understand what they do and where you use them.In derivatives you can understand how a function changes but in integration everything is so illogical.Can someone explain me the use of integrals in calculus?I mean all I could understand is that there is some +C which is a constant but I have no idea where that come from.What does this +C even mean?When I look at derivatives I can see that the function changes but when I look at a integral I have no idea what a function does in that specific function.All I know is that I learned(more memorized) and I couldn't understand the complexity of them.
I have a math book full of exercises and it doesn't explain at all how a integral works.It just shows me some integrals that I learned in high school and most of them don't even show the proof behind them.
Two examples might clear it up.

Say you know you are filling a tank at 1 gallon per minute. How much water do you have after 15 minutes? You have added 15 gallons to the tank +C, the amount that was in the tank.

Say you are traveling West at 10 miles per hour. How far West of Washington are you after 15 hours? You have traveled 150 miles west, +C, the miles west of Washington you started.

In both cases the rate was dx/dt = k, a constant. Integrating lets you quantify the total, but only if you know the starting condition C.

I integrate times and distances in my head when I drive. But I use my assumed average speed. If I'm 60 miles from home and expect to drive at 40 mph on average, I know I'll be home in an hour and a half. I have integrated my velocity function.
v=dx/dt=40 miles per hour
x= integral of dx/dt = 40t + C.
And I know C is 0 miles and x is 60 miles. I am arbitrarily setting my location as the origin, and my home as the destination, and solving for the time.

As to why it is harder to integrate than differentiate ... it just is. It is easy to break the egg, and impossible to put it together again. Things don't have to be identical in effort in the reverse direction.
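The driving example above, written out as a trivial sketch (the numbers are the post's own):

```python
v = 40.0        # mph, assumed average speed
C = 0.0         # starting position: my location is set as the origin
x_home = 60.0   # miles to home
# x(t) = v*t + C, so solve v*t + C = x_home for the arrival time t
t = (x_home - C) / v
print(t)        # 1.5 hours: home in an hour and a half
```

Change C and the same formula answers the question from any other starting point, which is exactly the role the constant of integration plays.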
 
  • #48
The equation for the derivative is the limit of a simple divided difference, and as ##\Delta x## goes to zero it remains in a simple form. The equation for the integral is the limit of a sum of many terms, where the number of terms increases as the interval is divided more finely. That is much more difficult. The vast majority of integrals with a closed-form expression are those where the integral and integrand form an antiderivative/derivative pair (i.e. the integrand is obtained from a relatively simple formula through differentiation).
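That asymmetry is visible even numerically (a sketch, with ##f(x) = x^2## as the example): one divided difference approximates a derivative, while an integral needs a sum whose number of terms grows as the partition is refined.

```python
def f(x):
    return x * x

# Derivative at x = 1: a single divided difference suffices.
h = 1e-6
dfdx = (f(1 + h) - f(1)) / h               # ≈ f'(1) = 2

# Integral over [0, 1]: a Riemann sum with ever more terms.
for n in (10, 100, 1000):
    s = sum(f((i + 0.5) / n) for i in range(n)) / n   # midpoint rule
    print(n, s)                            # approaches 1/3 as n grows
```

One quotient on the one hand, a growing sum and a limit on the other: the definitions themselves are not symmetric in effort.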
 
  • #49
I don't know if this explains why integration is harder than differentiation, or is just another way of saying it, but...

In calculus, we tend to create complicated functions by composition of simpler functions. For example, from ##e^x## and ##\sin(x)## we can get ##e^{\sin(x)}##. If you have two functions ##f(x)## and ##g(x)## and you know ##f'## and ##g'##, then you can combine that knowledge to get ##\frac{d}{dx} f(g(x))##: it's equal to ##g'(x) f'(g(x))##.

In contrast, if you know the integral of ##f## and you know the integral of ##g##, there is no simple way to combine those to get the integral of ##f(g(x))##.

So in the case of differentiation, it's enough to know how to differentiate the basic functions, and that tells us how to differentiate much more complex functions. Integration doesn't work that way.
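A quick numerical check of that point (a sketch using only the standard library): knowing ##f'## and ##g'## mechanically gives ##(f \circ g)'##, but no analogous recipe turns the integrals of ##f## and ##g## into the integral of ##f(g(x))##.

```python
import math

f, g = math.exp, math.sin
f_prime, g_prime = math.exp, math.cos   # (e^x)' = e^x, (sin x)' = cos x

x0, h = 0.7, 1e-6
numeric = (f(g(x0 + h)) - f(g(x0))) / h   # brute-force difference quotient for (f∘g)'
chain = g_prime(x0) * f_prime(g(x0))      # chain rule, assembled from f' and g'
print(abs(numeric - chain))               # tiny: the two agree

# No such assembly exists for integrals: knowing ∫e^x dx and ∫sin x dx
# tells you nothing mechanical about ∫e^{sin x} dx (which has no elementary form).
```

Differentiation is closed under composition; integration is not, and that is much of why tables of integrals are long and tables of derivatives are short.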
 
Last edited:
  • Like
Likes FactChecker
  • #50
stevendaryl said:
If you have two functions ##f(x)## and ##g(x)## and you know ##f'## and ##g'##, then you can combine that knowledge to get ##\frac{d}{dx} f(g(x))##: It's equal to ##g'(x) f'(g(x))##.

In contrast, if you know the integral of ##f## and you know the integral of ##g##, there is no simple way to combine those to get the integral of ##f(g(x))##.
That's a very good point which may get to the heart of the matter.
That, together with the fact that integration raises the power of ##x^n## rather than lowering it, gives differentiation a great advantage when applied to these two very basic operations.
 
