# Rationale behind separation and integration

1. Jun 2, 2004

### Gza

Ever since my first physics class, I've noticed many apparently sacred rules of calculus being broken before my innocent eyes. Since Calc I, I've been told that dy/dx is not a ratio, so don't treat it like one. Yet when solving various simple differential equations, it's common to separate and integrate the little differentials by "multiplying through" by the denominator of the dy/dx ratio (yeah, I said ratio, so sue me). I was wondering where the mathematical basis for this comes from.

Last edited: Jun 2, 2004
2. Jun 2, 2004

### Hurkyl

Staff Emeritus
The process of "multiplying through by dx" is actually that of passing to the land of "differential forms". A differential form is, more or less, something that you can integrate.

For example,

$$\frac{dy}{dx} dx = dy$$

seems strange and mysterious, but if we stick an integral sign out front, we get

$$\int \frac{dy}{dx} dx = \int dy$$

which we recognize as simply the substitution rule for integrals
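The substitution rule can be checked numerically. In this sketch (the function y(x) = x² is an illustrative choice of mine, not from the thread), integrating dy/dx over an interval agrees with the net change in y, which is all that "multiplying through by dx" asserts under an integral sign:

```python
# Numerical sanity check of the substitution rule behind
# "multiplying through by dx":
#   integral of (dy/dx) dx over [a, b]  equals  integral of dy = y(b) - y(a).
# The example y(x) = x**2 is an illustrative choice, not from the thread.

def y(x):
    return x**2

def dydx(x):
    return 2*x

def riemann(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) for i in range(n)) * h

a, b = 0.0, 2.0
lhs = riemann(dydx, a, b)   # integral of (dy/dx) dx
rhs = y(b) - y(a)           # integral of dy, by the fundamental theorem

print(lhs, rhs)  # both come out to approximately 4.0
```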

3. Jun 2, 2004

### e(ho0n3

They aren't magic tricks. There is a reason why they can be done, although I'm still not sure what it is, since I haven't done any differential geometry myself. Most books I've read tell you that you can do these kinds of manipulations, but they don't explain why, since the explanation would have you wondering "why am I even bothering with this in the first place?" You'll have to take it as is for the most part (be very careful, though).


4. Jun 4, 2004

### arildno

This technique is merely a shorthand notation for exploiting the chain rule of differentiation.
Consider the differential equation:
$$\frac{dy}{dx}=\frac{g(x)}{f(y(x))},\qquad \int f(y)\,dy=F(y)+C$$

That is, F is an antiderivative of f.
Then we have:
$$\int_{x_{0}}^{x_{1}}f(y(x))\frac{dy}{dx}dx=\int_{x_{0}}^{x_{1}}g(x)dx$$

Invoking the chain rule, we have:
$$\int_{x_{0}}^{x_{1}}f(y(x))\frac{dy}{dx}dx=\int_{x_{0}}^{x_{1}}\frac{d}{dx}F(y(x))dx=$$
$$F(y(x_{1}))-F(y(x_{0}))=F(y_{1})-F(y_{0})$$
by the fundamental theorem of calculus.
But this yields the same as the formal trick:
$$\int_{x_{0}}^{x_{1}}f\frac{dy}{dx}dx=\int_{y_{0}}^{y_{1}}fdy$$
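arildno's identity can be verified numerically on a concrete separable equation. The equation dy/dx = x/y (so f(y) = y, g(x) = x) and its solution y(x) = √(x² + 1) through y(0) = 1 are illustrative choices of mine, not from the thread:

```python
# Numerical check of the separation identity
#   integral over [x0, x1] of f(y(x)) (dy/dx) dx
#     equals  integral over [y0, y1] of f(y) dy
# for the illustrative separable equation dy/dx = x / y
# (f(y) = y, g(x) = x), whose solution through y(0) = 1
# is y(x) = sqrt(x**2 + 1).

import math

def y(x):
    return math.sqrt(x**2 + 1)

def dydx(x):
    return x / y(x)

def riemann(f, a, b, n=200000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) for i in range(n)) * h

x0, x1 = 0.0, 3.0
y0, y1 = y(x0), y(x1)

lhs = riemann(lambda x: y(x) * dydx(x), x0, x1)  # before "separating"
rhs = riemann(lambda t: t, y0, y1)               # after "separating"

print(lhs, rhs)  # both approximately (y1**2 - y0**2) / 2 = 4.5
```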

5. Jun 4, 2004

### Gza

Thank you arildno; now I no longer have to wash my hands after separating and integrating.

6. Jun 4, 2004

### reilly

dy/dx: not to worry

A derivative is the limit of a ratio as the denominator goes to zero. All the operations that effectively treat a derivative as a ratio are fine, as long as the appropriate limits are OK, which they usually are. For all practical purposes, think of the dx as 10 to the minus a zillion; a derivative is a ratio. If you review your deltas and epsilons you'll see that you do not have to worry. R. Atkinson

7. Jun 5, 2004

### Icarus

Berkeley famously castigated differentials as "the ghosts of departed quantities". In his time, there was no solid mathematical basis for them. Today we have a number of justifications such as those mentioned above that work in various limited circumstances, but none that completely covers the cavalier treatment you will see them subjected to in your physics classes.

Differential forms are the most common means of modeling differentials, but they have severe restrictions. For instance, the differential of arc length

$$ds = \sqrt{dx^2 + dy^2 + dz^2}$$

is a nonsensical equation in differential forms: the squaring of differential forms does not behave like simple multiplication, and the square root can only be interpreted as a notational convention for the equation $$ds^2 = dx^2 + dy^2 + dz^2$$. In particular, you cannot define a differential form $$ds$$ by the equation above. But in physics, this sort of thing is done all the time.

There are other models for differentials than differential forms. I once heard about a definition developed by Solomon Leader, based on the Generalized Riemann Integral, that allowed such algebraic manipulations in a natural fashion. It was interesting, but I never heard any more about it.

The fact is, you will see many hard and fast mathematical rules violated freely by physicists, who get away with it, and leave mathematicians scrambling to figure out why. Another classic example is the Dirac delta function. This is a "function", $$\delta(x)$$, defined by the following properties:

$$\delta(x) = 0, x \neq 0$$
$$\int_{-\infty}^\infty \delta(x)dx = 1$$

Clearly no actual function has these properties. The first is sufficient to force the integral to equal zero, even if $$\delta(0)$$ is taken to be infinitely large. Yet the uses to which the delta function is put in physics work, and are so useful that the function is also widely used in mathematics. But it took some fifty years for mathematicians to come up with a fully rigorous justification for it.
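One standard way to make the delta function respectable is as a limit of ordinary functions. This sketch (the Gaussian "nascent delta" and the names in it are my own illustration, not from the thread) shows numerically that a narrow Gaussian keeps total integral 1 while its integral against a smooth test function approaches the function's value at 0:

```python
# The "delta function" as a limit of ordinary functions (nascent deltas).
# Here a narrow Gaussian of width eps:
#   delta_eps(x) = exp(-x**2 / (2*eps**2)) / (eps * sqrt(2*pi))
# As eps -> 0 its integral stays 1, while the integral of
# delta_eps(x) * g(x) tends to g(0) (the "sifting" property).

import math

def delta_eps(x, eps):
    return math.exp(-x**2 / (2*eps**2)) / (eps * math.sqrt(2*math.pi))

def riemann(f, a, b, n=200000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) for i in range(n)) * h

g = math.cos  # any smooth test function; g(0) = 1

for eps in (0.5, 0.1, 0.01):
    total = riemann(lambda x: delta_eps(x, eps), -5, 5)
    sift = riemann(lambda x: delta_eps(x, eps) * g(x), -5, 5)
    print(eps, total, sift)  # total stays near 1; sift approaches g(0) = 1
```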

Some other tricks you will see if you follow physics long enough: the derivation of finite values for quantities defined by divergent series, and an integral over an infinite-dimensional space which cannot possibly exist but nonetheless still provides useful answers. Last I heard, not even a weak mathematical justification has been found for these.

8. Jun 10, 2004

### JohnDubYa

If it bothers you, just begin with finite size elements at the very beginning, then take the limit once you finish all the mathematical manipulations.
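This suggestion can be made concrete: with finite increments, "multiplying through" is ordinary algebra, and the limit is taken only at the end. The sketch below (the equation dy/dx = x/y, its solution, and all names are my illustrative choices) sums the finite steps Δy = (x/y)Δx, which is just Euler's method, and shows the result converging to the true solution as Δx shrinks:

```python
# Keeping everything finite: with increments, "multiplying through" is algebra:
#   dy_step = (x / y) * dx_step     (no differentials involved)
# Summing these steps is Euler's method; letting the step size go to 0
# recovers the true solution y(x) = sqrt(x**2 + 1) of dy/dx = x/y, y(0) = 1.
# The equation and solution are illustrative choices, not from the thread.

import math

def euler(dx, x_end=3.0):
    """Forward Euler for dy/dx = x/y starting from y(0) = 1."""
    x, y = 0.0, 1.0
    while x < x_end - 1e-12:
        y += (x / y) * dx   # finite-increment algebra, perfectly legitimate
        x += dx
    return y

exact = math.sqrt(3.0**2 + 1)  # = sqrt(10)

for dx in (0.1, 0.01, 0.001):
    approx = euler(dx)
    print(dx, approx, abs(approx - exact))  # error shrinks as dx shrinks
```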

9. Jun 10, 2004

### Gza

Doesn't doing that somehow "break" the definition of the derivative, since no limits are involved in that procedure? I mean this philosophically; I'm sure it would work computationally, but I'm trying to learn why it works at a deeper level.

10. Jun 11, 2004

### JohnDubYa

This procedure works precisely because no limits are involved. I am not sure what philosophy you are referring to. After all, you can choose to shrink the finite-size increments to infinitesimal size at any step you wish. What compels you to do so at the outset?

11. Jun 11, 2004

### arildno

You can't just go about switching the order in which you apply limiting processes. In general, it's just wrong.
That it works under fairly mild restrictions is beside the issue.

12. Jun 11, 2004

### JohnDubYa

I tell you what: why not provide a concrete example where this notion fails? If it's convincing, I'll concede.

13. Jun 11, 2004

### master_coda

This depends on what you mean by changing the limiting process. Given a situation where you have something like:

$$\lim_{x\rightarrow a}[\text{stuff}]$$

If you mean that you can rearrange and manipulate things inside of "stuff" and then just apply the limit to the rearranged version of stuff, then you are right.

What you can't do is take something like the above limit and arbitrarily move stuff inside and outside of the limit. So for example you can't rearrange stuff, apply the limit to part of stuff, then rearrange the rest of stuff and apply the limit to that, except within certain restrictions.

14. Jun 11, 2004

### JohnDubYa

RE: "If you mean that you can rearrange and manipulate things inside of "stuff" and then just apply the limit to the rearranged version of stuff, then you are right."

That is exactly what I mean. And in physics that is essentially what we do when we multiply an equation through by $$dx$$.

Physicists can get sloppy with the notation, but the mathematics is almost always sound.

15. Jun 12, 2004

### arildno

One example in which switching the order of limiting processes can't be done.
Define a sequence of functions
$$f_{n}=\frac{1}{n}\sin(n^{2}x)$$
Clearly, we have, for all x:
$$\lim_{n\to\infty}f_{n}(x)=0=f(x)$$

Hence, the following expression yields the derivative of f(x):
$$\lim_{h\to{0}}\lim_{n\to\infty}\frac{f_{n}(x+h)-f(x)}{h}=0$$

Try switching the h and n limiting processes; it doesn't work.
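The failure of the swapped order can be seen numerically. Taking n → ∞ first gives the zero function, whose difference quotient is 0; taking h → 0 first gives f_n'(0) = n·cos(0) = n, which grows without bound. A small sketch (function names are my own):

```python
# Numerical illustration of the counterexample f_n(x) = sin(n**2 x) / n.
# Inner limit n -> infinity first: f_n -> 0 pointwise, so the
# difference quotient of the limit function is 0.
# Swapped order (h -> 0 first): for fixed n the quotient tends to
# f_n'(0) = n * cos(0) = n, which diverges as n grows.

import math

def f(n, x):
    return math.sin(n**2 * x) / n

def diff_quotient(n, h, x=0.0):
    return (f(n, x + h) - f(n, x)) / h

for n in (10, 100, 1000):
    h = 1e-9  # small enough that the quotient is close to f_n'(0)
    print(n, diff_quotient(n, h))  # approximately n: grows without bound
```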

16. Jun 12, 2004

### arildno

Note that differentiation and integration are two limiting processes.
What you're saying is that a "dx" in the differentiation process can always be cancelled by a "dx" in the integration process.

17. Jun 12, 2004

### JohnDubYa

Well, that IS the Fundamental Theorem of Calculus. But what I am really saying is that physicists manipulate $$dx$$ when in fact they are manipulating $$\Delta x$$. The method is sound; only the notation is sloppy. They just hate taking explicit limits of every mathematical statement because it bogs down the discussion.

18. Jun 12, 2004

### arildno

I am perfectly aware that the rationale behind the technique is the fundamental theorem of calculus.
I would like to remind you that the original poster asked for an explanation of the separation technique.
To call anything other than an EXPLICIT reference to the FToC an EXPLANATION is simply a misnomer.

19. Jun 12, 2004

### JohnDubYa

Sorry, but I cannot parse your last sentence. No matter, because I think we are now off-topic.

20. Jul 24, 2004

### mathwonk

The usual mathematician's idea of a differential is that it represents a function whose values are linear functions. For example, the differential dx represents the function whose value at any point a is the identity linear function. The differential f(x)dx has value at a equal to the linear function: f(a) times the identity function.

Then if we have two differentials, say f(x)dx and g(x)dx, we can divide them and get a function whose value at a is f(a)dx/g(a)dx = f(a)/g(a). This is a number so f(x)dx/g(x)dx is a function.

Hence the quotient of two differentials is a function, and a function times a differential is a differential.

Since the quotient of two differentials is a function, each differential is a function times any other differential (except where the one on the bottom vanishes). In particular, df is df/dx times dx, or f'(x)dx. This is the justification for saying that if dy/dx = g(x) then dy = g(x)dx, where we think of dy here as df.
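This picture can be sketched as a toy model in code: a differential is determined by its coefficient function, the quotient of two differentials is an ordinary function, and df = f'(x)dx falls out. The class and function names here are my own illustration, and the derivative is approximated numerically rather than computed symbolically:

```python
# Toy model of the "differential = point -> linear map" picture.
# A differential c(x)dx assigns to each point a the linear map t -> c(a)*t,
# so it is determined by its coefficient function c.

class Differential:
    """Represents the differential c(x) dx via its coefficient function c."""
    def __init__(self, coeff):
        self.coeff = coeff          # c: point -> number

    def __call__(self, a, t):
        return self.coeff(a) * t    # the linear map this differential has at a

    def __truediv__(self, other):
        # The quotient of two differentials is a function, not a differential:
        # at each point a it is the number c1(a) / c2(a).
        return lambda a: self.coeff(a) / other.coeff(a)

def d(f, h=1e-6):
    """df = f'(x) dx, with f' approximated by a central difference."""
    return Differential(lambda a: (f(a + h) - f(a - h)) / (2 * h))

dx = Differential(lambda a: 1.0)    # coefficient 1 at every point

f = lambda x: x**3
ratio = d(f) / dx                   # the function df/dx
print(ratio(2.0))                   # approximately f'(2) = 12
```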