# Can I get the differential out of the integration

1. Jun 1, 2012

### Marwan_H

Hi all..

If I can think of the differential dt in the integral ∫f(t)dt as a constant infinitesimal element, can I pull it out of the integral (i.e. write dt∫f(t))??

And if this is not valid, please explain why (is it still invalid when we work with infinitesimals??)

2. Jun 1, 2012

### theorem4.5.9

Because the $\int$ symbol doesn't mean anything without an infinitesimal.

I think you are purposely trying to avoid analysis with this question. It's the role of analysis to take all ambiguity out of calculus.

Another route would be to study nonstandard analysis, but I know only an $\epsilon$ of this.

3. Jun 2, 2012

### Marwan_H

I'd thought the answer would be intuitive until someone told me the same thing as your answer: "analysis is required to resolve the ambiguities in calculus, and the concepts we'd taken for granted in high school and even college".. Another of these ambiguities is what happens to the differential dx when an equation is translated to a discrete form..

I've found many subfields within mathematical analysis. So, which ones do you think I should refer to (besides nonstandard analysis)?

4. Jun 2, 2012

### algebrat

the integral of f(x)dx. Let's see...

The integral is a sum, in a sense. You add up all the infinitesimal contributions f(x)dx. Each one is like height times width, and we add one at each x value. dx is sort of a constant width, like in the Riemann sum approximation to the integral. So in a sense, you could pull it out, but there is no need, and no one ever does. Additionally, in some situations dx is perhaps not a constant (if you are changing variables).

Outside of having some conceptual understanding of what dx is, you should follow everyone else's convention so that we can all understand each other's work more easily. I don't think you'll really get much out of moving it; or let us know what practicality you have found in doing this.

In fact, I've seen many physicists write it differently: they write ∫dx f(x), instead of
∫f(x) dx

so i don't think it's a matter of finding an area of analysis where you'd understand this.

The ∫ symbol is analogous to Σ, the capital sigma, which is like the letter S, short for "sum". So ∫ is a request to add up all the f(x) dx's, one at each x.

5. Jun 2, 2012

### theorem4.5.9

I don't really agree with algebrat. Even in the Riemann sum where you use constant width, you shouldn't think it's OK to pull the $dx$ (or more precisely, the $\Delta x$) outside of the sum. The reason is that once you take the limit of the sum, you have zero times infinity, which is undefined.

Perhaps it is best to stop thinking about $dx$ as an infinitesimal number, and instead think of it as the density at some point. If you take $dx$ to be constant sections of the real line, you have constant density. If you want to use a change of variables, it's kind of like you started with some nonuniform density, wrote it as a function with respect to some constant-density line, and then rewrote your function in terms of that constant-density line. This is of course just interpretation, and your mileage may vary.

I like thinking about $dx$ as a density because it ties in nicely with measure theory. In my humble opinion, the Lebesgue integral is much more intuitive than the Riemann-Stieltjes integral, though it should be first presented intuitively (just like the Riemann integral is first presented intuitively as the sum of rectangles). Really the only difference is that in Lebesgue integration you chop up the range, drop lines down to form lots of (non-uniform width) rectangles, and multiply by their "density" $dx$. I find this far more intuitive than the Riemann-Stieltjes integral.
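That range-chopping picture can be made concrete numerically. The sketch below (my own illustration, not from the thread) computes ∫₀¹ x² dx two ways: the Riemann way, chopping the domain into slabs of constant width dx, and a "layer cake" Lebesgue-style way, chopping the range into levels and weighting each level y by the measure (length) of the set where f exceeds y.

```python
import numpy as np

# Sketch: the same integral of f(x) = x^2 on [0, 1], computed two ways.
# Exact value is 1/3.

def f(x):
    return x ** 2

n = 2000
xs = np.linspace(0.0, 1.0, n + 1)
dx = 1.0 / n
fx = f(xs)

# Riemann left sum: height times constant width dx at each grid point.
riemann = np.sum(fx[:-1]) * dx

# Lebesgue-style ("layer cake") sum: for each level y in the range,
# estimate the measure of {x : f(x) > y} by counting grid points,
# then add up measure * dy over all levels.
ys = np.linspace(0.0, 1.0, n + 1)
dy = 1.0 / n
lebesgue = sum(dy * dx * np.count_nonzero(fx > y) for y in ys[:-1])

print(riemann, lebesgue)  # both close to 1/3
```

Both sums approach 1/3 as the grids are refined; the second one is exactly the "chop up the range, weight by density" picture described above.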

6. Jun 2, 2012

### algebrat

I didn't want someone to think they'd found a counterexample in a finite Riemann sum approximation, because computationally it's often convenient to factor out the $\Delta x$. That is probably the only time I find it practical to pull the "constant" out.

On the other hand, the limit is not necessarily 0 times infinity just because you pulled the Δx out.
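A quick numerical illustration of that point (my own sketch): for a finite Riemann sum, the constant width Δx factors out of the sum harmlessly, and both forms converge to the same integral; the dispute above is only about what happens in the limit.

```python
import math

# Sketch: for a FINITE Riemann sum, the constant width dx factors out,
#   sum_i f(x_i) * dx  ==  dx * sum_i f(x_i),
# and both forms approach the same integral as n grows.

def riemann_sums(f, a, b, n):
    dx = (b - a) / n
    inside = sum(f(a + i * dx) * dx for i in range(n))   # dx inside each term
    outside = dx * sum(f(a + i * dx) for i in range(n))  # dx factored out
    return inside, outside

inside, outside = riemann_sums(math.sin, 0.0, math.pi, 100_000)
print(inside, outside)  # both close to the exact value 2
```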

7. Jun 2, 2012

### chiro

Hey Marwan_H and welcome to the forums.

The integration itself is a sum, as others have pointed out, and the dt term refers more or less to how the general summation is calculated; it is often written in relation to a measure.

The thing about dt is that it is not a constant but a limit, and you can't treat it as a constant. There are measures used in integration that are treated as constants, and in those situations you can simplify the integrals under these measures to summations (like sigma summations), but in general this is not the case.

The reason we get the anti-derivative we get when using infinitesimal measure is related to the idea of summing things using the infinitesimal limit.

So think about the integral of f(x)dx from a to b = [F(a+h) - F(a)] + [F(a+2h) - F(a+h)] + ... + [F(b) - F(b-h)] = F(b) - F(a), using the Riemann sum. You can see this geometrically as well as algebraically, and this is why we get our result.
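That telescoping collapse can be checked numerically (a sketch of my own, taking F(x) = x³/3 as an antiderivative of x²): each interior F value cancels, so the sum equals F(b) - F(a) exactly, for any number of steps.

```python
# Sketch: the telescoping sum
#   [F(a+h)-F(a)] + [F(a+2h)-F(a+h)] + ... + [F(b)-F(b-h)]
# collapses to F(b) - F(a) for ANY step count n, since every interior
# term appears once with a plus sign and once with a minus sign.

def F(x):
    return x ** 3 / 3.0   # an antiderivative of f(x) = x^2

a, b, n = 1.0, 2.0, 1000
h = (b - a) / n
telescoped = sum(F(a + (i + 1) * h) - F(a + i * h) for i in range(n))
print(telescoped, F(b) - F(a))  # equal up to rounding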

When we have more general measures, we need to consider their definitions, and similar kinds of arguments will be used with respect to that measure if a simplification (like we have in the anti-derivative case) does exist: it will be proved from the definition of the measure and the integral, just like the Riemann integration formula has been.

8. Jun 3, 2012

### Marwan_H

So the answer isn't so naive after all!!

I think I should look into Lebesgue integration, measure theory, and this concept of "density"..

thanks all,,

9. Jun 3, 2012

### theorem4.5.9

I'm not aware of any textbooks that approach the Lebesgue integral on an intuitive level. You may want to ask a professor or someone who knows the theory.

The concept of "density" is something I made up to describe how I think of things; indeed, "density" means something entirely different in measure theory. And finally, measure theory is pretty tough, as it has to come to terms with the Banach-Tarski paradox (which is not a paradox, just an unintuitive truth).

10. Jun 3, 2012

### Skrew

Your question is literally meaningless.

The integral notation is notation and nothing else. dx * integral means nothing unless you have defined it to mean something.

11. Jun 3, 2012

### theorem4.5.9

I disagree with this as well. Especially with Leibniz, the notation suggests manipulations that make sense on some level. The integral isn't just a number assigned to certain functions (in which case the notation would indeed be pure definition); it's a mechanical method for determining something physical. Furthermore, the question is probing the limits of intuition, which in my opinion is the most important kind of question one can ask in math. I think the question is very meaningful given the qualifications Marwan_H made, and worth answering.

12. Jun 3, 2012

### pwsnafu

I don't think the question is meaningless, especially as we have the theory of differential forms to fall back on.

There's a book by an author whose name is Burk (or was it Burke?). I remember finding it easy to follow, even though I was only in first year at the time.

13. Jun 3, 2012

### Skrew

Define what dx * integral is, because no one has done so yet in this thread, and I suspect the OP is simply referring to Riemann integration in R.

14. Jun 3, 2012

### theorem4.5.9

I think this thread has answered and resolved the OP's questions.

15. Jun 4, 2012

### Marwan_H

I asked this question because I'd seen that Leibniz treated the derivative as a ratio of differentials, and so I thought I could treat the differential as a separate entity (not just part of the notation of the integral ∫..dx)..
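For what it's worth, the one place where first-year calculus already lets the differential move around like a separate quantity is the substitution rule; written in Leibniz's style (a standard identity, spelled out here for concreteness):

```latex
u = g(x), \qquad \frac{du}{dx} = g'(x)
\quad\Longrightarrow\quad du = g'(x)\,dx,
\qquad\text{so}\qquad
\int f\bigl(g(x)\bigr)\,g'(x)\,dx = \int f(u)\,du .
```

Here the formal manipulation "multiply both sides by dx" gives exactly the right answer, which is part of why Leibniz's notation survived.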

16. Jun 4, 2012

### algebrat

Yes, there are subjects devoted to treating dx as a separate entity, for sure: smooth manifolds, and maybe measure theory in real analysis as well.

In smooth manifolds, I'd say the dx element is more straightforward (it looks pretty much like it does in calculus), though abstracting it to several variables gets a bit involved. For instance, in any calculus book, look up surface-area and volume integrals, and you'll find integrals of dS and dV, which can involve cross products and determinants.

In measure theory, things get very strange: dx comes from a very weird measure on the x-axis; by that I mean we can measure subsets of the x-axis much more complicated than just intervals like (1,3). This is needed to treat integrals of f dx where, for instance, f may be nowhere continuous. But I can't remember if dx is considered separately from the integral, as it is in smooth manifolds (measure theory may be considering spaces too general to do this).

But to reiterate all the above discouragements, I can't think of a time when your colleagues would be okay with writing the dx in front of the command to add, that is, ∫ (except for the case I mentioned a few days ago, above).

Last edited: Jun 4, 2012
17. Aug 15, 2012

### pondhockey

Follow-up question:

Thermodynamics texts routinely/commonly (maybe even universally) use expressions like:

W = ∫P dV

In the contexts that I see this same expression, it looks to me like P is often a function of V.

One such expression continues like this:

W = ∫P dV (lower limit "state1" upper limit "state2")
= ∫P(t) (dV/dt) dt (lower limit t1 upper limit t2)

I presume that the rigorous way to defend this statement is to invoke the substitution formula, in which case it seems to me that it really should be written

= ∫P(V(t)) (dV/dt) dt (lower limit t1 upper limit t2)

Am I reading correctly? More to the point, is it common practice in thermodynamics to hide the variable dependency in formulas such as W = ∫P dV ?
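As a sanity check of that reading (my own illustration, with a made-up process: an isothermal-like path P(V) = c/V, and volume parametrized as V(t) = 1 + t on t in [0, 1]), the two integrals agree numerically, which is just the substitution formula with the dependency P(V(t)) written out:

```python
import math

# Sketch: check that W = ∫ P dV over V in [1, 2] equals
# ∫ P(V(t)) * (dV/dt) dt over t in [0, 1], for a made-up process.
c = 1.0
P = lambda V: c / V           # hypothetical pressure-volume relation
V = lambda t: 1.0 + t         # hypothetical parametrization of the path
dVdt = lambda t: 1.0          # V'(t)

n = 200_000

# Left-hand side: ∫_1^2 P(V) dV, directly in the V variable.
dV = 1.0 / n
lhs = sum(P(1.0 + i * dV) * dV for i in range(n))

# Right-hand side: ∫_0^1 P(V(t)) V'(t) dt, via the substitution formula.
dt = 1.0 / n
rhs = sum(P(V(i * dt)) * dVdt(i * dt) * dt for i in range(n))

print(lhs, rhs)  # both close to ln 2, the exact work for P = c/V
```

So yes: W = ∫P dV hides the dependency P = P(V) (or P = P(V(t))), and the rigorous justification of the change to a t-integral is the substitution formula, exactly as you wrote it in the last display.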