# Playing around with dt, dx

1. Feb 19, 2006

### cliowa

Physicists do it all the time: playing around with those symbols like $$\frac{dx}{dt}$$. They just treat them like ordinary variables, which they certainly are not. Let me give you an example: If you want to change an integration variable from, say, x to r and you know that $$r^{2}=a^{2}+R^{2}-2Rx$$ they would simply differentiate, rearrange the stuff and write down $$r\cdot dr=-R\cdot dx$$.
Now, I don't believe this to be wrong, but I don't understand why (under what circumstances) one is allowed to operate like that. I didn't find any good answers so I thought I'd ask here.
Thanks a lot in advance. Best regards

Cliowa

2. Feb 19, 2006

### Pseudo Statistic

Hmm... maybe I haven't gotten far enough into physics to see the real playing around, but my physics teacher does things like multiplying one side by dx/dx to manipulate equations.
I guess nothing like that r dr = -R dx thing you did, though. :)

3. Feb 19, 2006

### arildno

A lot of the time, they are using the implicit function theorem, along with the inverse function theorem of differentiation.
Let us rewrite what you have a bit, assuming a and R to be constants:
Define a function of two variables $F(x,r)=r^{2}-(a^{2}+R^{2})+2Rx$

Now, regard the equation for the zeroes of F:
$$F(x,r)=0$$
Evidently, only a few of the points in the (x,r)-PLANE will be the solutions of this equation. If we try to change the x-value from a known zero $(x_{0},r_{0})$ we must in general ALSO change the value of "r" in order for F(x,r)=0 to remain true.

Now, the implicit function theorem states that (under fairly general conditions) in the vicinity of a zero $(x_{0},r_{0})$ there will exist a zero-point curve $(x,\hat{r}(x))$, so that we have:
$$F(x,\hat{r}(x))=0$$ over some x-interval.
But on this interval, this equation holds for ALL x, and we may differentiate it, with respect to x:
$$\frac{\partial{F}}{\partial{x}}+\frac{\partial{F}}{\partial{r}}\frac{d\hat{r}}{dx}=0,\to\frac{d\hat{r}}{dx}=-\frac{\frac{\partial{F}}{\partial{x}}}{\frac{\partial{F}}{\partial{r}}}$$
That is, we've managed to express the slope of the zero-point curve $\hat{r}$ in terms of the negative ratio of F's partial derivatives!
$$\frac{\partial{F}}{\partial{x}}=2R,\frac{\partial{F}}{\partial{r}}=2r=2\hat{r}(x),\to\frac{d\hat{r}}{dx}=-\frac{R}{\hat{r}(x)}$$

The condition that this is legitimate in the vicinity of a zero $(x_{0},r_{0})$ is that
$$\frac{\partial{F}}{\partial{r}}|_{(x_{0},r_{0})}\neq{0}$$
In your case, it is therefore seen that this is legitimate at any zero where r is different from zero.
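The implicit differentiation above can be checked symbolically. Here is a minimal sketch using SymPy (treating a and R as positive constants is an assumption made for illustration):

```python
import sympy as sp

# Symbols: a, R are constants, x and r are the variables tied by the constraint.
x, r, a, R = sp.symbols('x r a R', positive=True)

# F(x, r) = 0 encodes the constraint r^2 = a^2 + R^2 - 2*R*x.
F = r**2 - (a**2 + R**2) + 2*R*x

# Implicit function theorem: dr/dx = -(dF/dx)/(dF/dr), valid where dF/dr != 0.
dr_dx = -sp.diff(F, x) / sp.diff(F, r)

print(dr_dx)  # -R/r, matching dr̂/dx = -R/r̂(x)
```

The condition dF/dr ≠ 0 shows up concretely here: the result has r in the denominator, so the formula breaks down exactly at zeros where r = 0.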

I'll let you digest this before proceeding.

Last edited: Feb 19, 2006
4. Feb 19, 2006

### HallsofIvy

Staff Emeritus
Strictly speaking, $\frac{dy}{dx}$ is not a fraction. But it is the limit of a fraction: $\frac{f(x+h)-f(x)}{h}$. That means we can often treat it like a fraction: for example, the chain rule $\frac{dz}{dx}= \frac{dz}{dy}\frac{dy}{dx}$ looks like we are "cancelling" the "dy" terms. We aren't really, but we can go back before the limit, cancel terms there, and then take the limit. That's why, after defining the derivative $\frac{dy}{dx}$, we then define the differentials dx and dy separately: so that we have a notation allowing us to treat the derivative as if it were a fraction.
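The "cancelling" picture of the chain rule can be verified symbolically. A minimal sketch, with y = x² and z = sin(y) chosen purely as illustrative examples:

```python
import sympy as sp

x = sp.Symbol('x')

# Example functions (assumptions for illustration): y = x**2, z = sin(y).
y = x**2
z = sp.sin(y)

# Compute the two factors of the chain rule separately.
dz_dy = sp.cos(y)        # dz/dy, evaluated at y = x**2
dy_dx = sp.diff(y, x)    # dy/dx = 2*x

# Chain rule: dz/dx equals the product, as if the "dy"s cancelled.
print(sp.diff(z, x))             # 2*x*cos(x**2)
print(sp.simplify(dz_dy*dy_dx))  # 2*x*cos(x**2)
```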

5. Feb 19, 2006

### arildno

Now, let us regard the above from a somewhat different point of view, closer to how this has been presented to you:
$$r^{2}=a^{2}+R^{2}-2Rx\qquad(A)$$

If we look at two close-lying solutions of (A), how is the change in "x" value from solution 1 to solution 2 related to the change in "r" value from solution 1 to solution 2?

Let solution 1 be called $(x,r)$ (for simplicity), solution 2 $(x+\bigtriangleup{x},r+\bigtriangleup{r})$

Since, by assumption, BOTH are solutions to (A), we have:
$$r^{2}=a^{2}+R^{2}-2Rx\qquad(1)$$
and:
$$(r+\bigtriangleup{r})^{2}=a^{2}+R^{2}-2R(x+\bigtriangleup{x})\qquad(2)$$
(2)-(1) yields, when ignoring the quadratic term in the change of r:
$$2r\bigtriangleup{r}=-2R\bigtriangleup{x}$$
Or, in the "limit", we get $2rdr=-2Rdx$
This is seen to reproduce your cited result.
It is crucial that you see that this follows when we restrict ourselves to a curve of solutions to (A), which is basically the same as saying there must exist a zero-point curve $\hat{r}$ so that $F(x,\hat{r})=0$

The "physicists'" view gets away with slightly less notation than what I used in the previous post.
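The finite-difference argument can also be checked numerically. A minimal sketch (the values a = 3, R = 5 and the choice of the positive root are assumptions for illustration):

```python
import math

# Illustrative constants.
a, R = 3.0, 5.0

def r_of(x):
    # Solve r^2 = a^2 + R^2 - 2*R*x for the positive root r(x).
    return math.sqrt(a**2 + R**2 - 2*R*x)

x, dx = 1.0, 1e-6          # a base solution and a small change in x
r = r_of(x)
dr = r_of(x + dx) - r      # the corresponding change in r

# 2*r*dr should agree with -2*R*dx up to the ignored quadratic term (dr)^2.
print(2*r*dr, -2*R*dx)
```

The discrepancy between the two printed numbers is exactly the (Δr)² term that was dropped, which vanishes faster than Δx in the limit.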

6. Feb 19, 2006

### cliowa

Wow, thanks a lot guys. That was quite helpful.

I think I understand most of this, except for one thing: the existence of a zero-point curve is only guaranteed in a vicinity of $$(x_{0},r_{0})$$, right? When I'm doing this change of variables later on, i.e. when integrating, I might not confine my integration limits to a vicinity of $$(x_{0},r_{0})$$. How can I be sure the whole thing still works then?

7. Feb 19, 2006

### cliowa

You know, I get the general picture, but I feel what we're doing isn't quite as easy as you described it: we're not looking at the fraction, rearranging it, and then taking the limit. Instead, we're taking the limit on one side of the equals sign and then rearranging the other side as though it weren't a limit but ordinary variables. That's what bothers me: we don't seem to be treating the two sides of the equals sign the same way! Please do correct me if I'm wrong on this one.

8. Feb 19, 2006

### Hurkyl

Staff Emeritus
Since this thread is to bring out what we're "really doing" in this field, I feel it's necessary to nitpick this: the dx and dy in the expression dy/dx are not differentials. (and / isn't division either: it's just one giant symbol)

Although it is true that df = f' dx, that's just because df happens to be a multiple of dx. You can't really "divide" them. (very much like one vector can happen to be a multiple of another vector)
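One concrete way to read df = f' dx is that the differential is the linear part of the change in f. A minimal numerical sketch, with f(x) = x² chosen purely for illustration:

```python
# For f(x) = x**2, the actual change f(x+dx) - f(x) equals 2*x*dx
# plus a remainder of order dx**2; the differential keeps only the linear part.
def f(x):
    return x**2

x0, dx = 3.0, 1e-6
df_exact = f(x0 + dx) - f(x0)   # actual change in f
df_linear = 2*x0*dx             # f'(x0)*dx, the differential

print(df_exact, df_linear)      # agree up to a term of order dx**2
```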