Hey Mathwonk, just a quickie. When I think of a differential, it is written as y = y(t), and dy = (dy/dt) dt. When I see this, I think of it as wanting to deal with only the variable y. But y(t) is a function of t, so we set y = y(t) to make y a variable in its own right; as a consequence, we usually want to integrate with respect to y. Now, if we take the derivative of y(t), we get dy/dt. But we want the change in y itself, not the rate of change of y with respect to t, which is why we multiply by an increment dt. This leaves just the change dy. I know dy/dt is not literally a fraction, but in a sense this is what is happening. Is this interpretation wrong? I also know that dy/dt is the slope, and multiplying it by dt gives you (approximately) the change in y over a step of size dt, but it seems like these are two ways of looking at the same thing.
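To make this concrete, here's a quick numerical sketch of the idea (my own example, not from the thread, using y(t) = t² as an assumed test function): the differential dy = y'(t)·dt agrees with the actual change y(t + dt) − y(t) up to an error of order dt².

```python
# Compare the differential dy = y'(t) * dt with the actual change
# in y, for the example function y(t) = t**2 (my choice, for illustration).

def y(t):
    return t ** 2          # example function y(t)

def dy_dt(t):
    return 2 * t           # its derivative dy/dt

t, dt = 3.0, 0.01
dy = dy_dt(t) * dt                  # differential: slope times increment
actual_change = y(t + dt) - y(t)    # true change in y over the same step

print(round(dy, 6))                 # 0.06
print(round(actual_change, 6))      # 0.0601 -- differs from dy by dt**2
```

Shrinking dt makes the two numbers agree to more and more digits, which is the precise sense in which "dy/dt times dt gives the change in y" is correct: exactly in the limit, approximately for a finite increment.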
Thanks,
Cyrus