Treating the Derivative like a Quotient

In summary, the thread asks how Wikipedia's discussion of the Principle of Virtual Work can manipulate the derivative ##d\mathbf r/dt## as if it were a quotient, inserting and cancelling ##dt##, when the standard advice is to regard the derivative as a single symbol rather than a ratio. The replies show that, given enough smoothness, the manipulation is justified by Riemann-sum, substitution and chain-rule arguments.
  • #1
Trying2Learn
TL;DR Summary
When can you split the derivative?
Good Morning

I have read that it is not justified to split the "numerator" and "denominator" in the symbol for, say, dx/dt

However, when I look at Wikipedia's discussion on the Principle of Virtual Work, they do just that. (See picture, below).

I was told it is OK in 1D cases, but note the bold terms (vectors) and dot product: this is not assumed to be 1D

How do they get away with inserting a dt in the numerator (on the third term) and inserting a dt at the tail?

See picture, below...

Just to be clear: I do understand what is happening with the physics -- I am just confused by how they "almost whimsically" treat the derivative dr/dt as a quotient when I have read, many times, to see the derivative as ONE symbol, not as a ratio.
[Attachment: Skjermbilde.JPG, a screenshot of the virtual work derivation from the Wikipedia article]
 
  • #3
Thanks, but unless I am reading too much into this, it did not address the question...

Yes, I see that the first equation is from the line integral. I have no issue with that. I even said I understand where the terms come from

I clicked on the link you provided and saw, in the blue box at the top, the EXACT question I have.

How do they get away with treating the derivative as a quotient? It is a single notation.

That is my question. How do they reconcile these two contradictory items:
  • The blue box in your link
  • The statement to never treat the derivative symbol as a quotient where you can cancel differentials
 
  • #4
For non-negative integers ##i\le n## define ##t_i=t_0 +\frac in (t_1-t_0)## and
##\mathbf r_i = \mathbf r(t_i)##. Then

\begin{align*}
W&=
\int_{\mathbf r(t_0)}^{\mathbf r(t_1)}
\mathbf F\cdot d\mathbf r
=
\lim_{n\to\infty}
\sum_{i=1}^n
\mathbf F\cdot
\left(\mathbf r_i - \mathbf r_{i-1}\right)
=
\lim_{n\to\infty}
\sum_{i=1}^n
(t_i - t_{i-1})\,
\mathbf F\cdot
\frac{\left(\mathbf r_i - \mathbf r_{i-1}\right)}{t_i - t_{i-1}}
\end{align*}

Given certain smoothness properties, we can copy that limit operator to inside the dot product, sum and multiplication operators, to apply it to the factor
##\frac{\left(\mathbf r_i - \mathbf r_{i-1}\right)}{t_i - t_{i-1}}## and thereby replace it by ##\mathbf v(t_i)##

We then reverse the above process to write that as
$$\int_{t_0}^{t_1}
\mathbf F\cdot \mathbf v(t)\,dt
$$

I don't know off-hand what smoothness properties are needed, but I expect they include the facts that the dot product, multiplication and addition functions through which we are moving the limit operator are all smooth (infinitely differentiable), and that both the old and new limits exist at all points in the domain. Formally, we would then be applying the various first-year calculus theorems about limits to validate the manipulations.

Generally, where all functions are smooth (we may also need them to be analytic, but the above functions are also analytic) and nothing goes to infinity, we can move limit operators about almost wherever we like, and that enables us to treat the derivative symbol as a ratio.

But one should always pause to consider whether all relevant smoothness conditions hold, before charging ahead.

In physics problems they are more likely to hold than in maths problems, where we often work with discontinuous functions. But even in physics discontinuities can arise - eg singularities in general relativity.
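If it helps to see this numerically, here is a quick sketch (a toy example with an assumed force field and path, not anything from the Wikipedia page) comparing the line-integral Riemann sum with the reparameterised sum that has the ##dt## "inserted":

```python
import numpy as np

# Toy check of the argument above: the line-integral sum
#   sum_i F(r_i) . (r_i - r_{i-1})
# and the reparameterised sum
#   sum_i F(r(t_i)) . v(t_i) (t_i - t_{i-1})
# approach the same value as n grows.
# Assumed field F(x, y) = (-y, x) and path r(t) = (cos t, sin t).

def F(r):
    return np.array([-r[1], r[0]])

def r(t):
    return np.array([np.cos(t), np.sin(t)])

def v(t):  # dr/dt
    return np.array([-np.sin(t), np.cos(t)])

t0, t1, n = 0.0, np.pi / 2, 10_000
ts = np.linspace(t0, t1, n + 1)
dt = (t1 - t0) / n

line_integral_sum = sum(F(r(ts[i])) @ (r(ts[i]) - r(ts[i - 1]))
                        for i in range(1, n + 1))
reparameterised = sum(F(r(ts[i])) @ v(ts[i]) * dt
                      for i in range(1, n + 1))

print(line_integral_sum, reparameterised)  # both are close to pi/2
```

Both numbers converge to ##\pi/2## as ##n## grows, which is the content of the limit-swapping step.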
 
  • #5
andrewkirk said:
For non-negative integers ##i\le n## define ##t_i=t_0 +\frac in (t_1-t_0)## and
##\mathbf r_i = \mathbf r(t_i)##. Then

\begin{align*}
W&=
\int_{\mathbf r(t_0)}^{\mathbf r(t_1)}
\mathbf F\cdot d\mathbf r
=
\lim_{n\to\infty}
\sum_{i=1}^n
\mathbf F\cdot
\left(\mathbf r_i - \mathbf r_{i-1}\right)
=
\lim_{n\to\infty}
\sum_{i=1}^n
(t_i - t_{i-1})\,
\mathbf F\cdot
\frac{\left(\mathbf r_i - \mathbf r_{i-1}\right)}{t_i - t_{i-1}}
\end{align*}

Given certain smoothness properties, we can copy that limit operator to inside the dot product, sum and multiplication operators, to apply it to the factor
##\frac{\left(\mathbf r_i - \mathbf r_{i-1}\right)}{t_i - t_{i-1}}## and thereby replace it by ##\mathbf v(t_i)##

We then reverse the above process to write that as
$$\int_{t_0}^{t_1}
\mathbf F\cdot \mathbf v(t)\,dt
$$

I don't know off-hand what smoothness properties are needed, but I expect they include the facts that the dot product, multiplication and addition functions through which we are moving the limit operator are all smooth (infinitely differentiable), and that both the old and new limits exist at all points in the domain. Formally, we would then be applying the various first-year calculus theorems about limits to validate the manipulations.

Generally, where all functions are smooth (we may also need them to be analytic, but the above functions are also analytic) and nothing goes to infinity, we can move limit operators about almost wherever we like, and that enables us to treat the derivative symbol as a ratio.

But one should always pause to consider whether all relevant smoothness conditions hold, before charging ahead.

In physics problems they are more likely to hold than in maths problems, where we often work with discontinuous functions. But even in physics discontinuities can arise - eg singularities in general relativity.
WOW! Thank you! This is exactly what I am looking for!

You did an excellent job showing me how I CAN go from dr to dt. Great.

Now can I ask you to supplement what you just wrote with a statement like this (which could be wrong, but I am flailing now because I am so close)? Please reword what I write below and make it precise...

Normally, we should never treat the derivative like a quotient, but in some cases of smooth behavior over time, we can "manipulate" the terms, and, unfortunately, the manipulation APPEARS as if we are treating the derivative like a quotient, but we must never do that.
 
  • #6
Trying2Learn said:
Thanks, but unless I am reading too much into this, it did not address the question...

Yes, I see that the first equation is from the line integral. I have no issue with that. I even said I understand where the terms come from

I clicked on the link you provided and saw, in the blue box at the top, the EXACT question I have.

How do they get away with treating the derivative as a quotient? It is a single notation.

That is my question. How do they reconcile these two contradictory items:
  • The blue box in your link
  • The statement to never treat the derivative symbol as a quotient where you can cancel differentials
At some stage you will have to learn the calculus of line integrals. See the above link. You need to go through the previous two pages in that section as well if you want to prove the equation. Basically, it's the line integral version of the usual integration by substitution rule (and is proved in a similar way):
$$\int_a^b f(r(t))r'(t)dt = \int_{r(a)}^{r(b)} f(r)dr$$
And, if you write that in differential notation, you have the heuristic "derivative as a quotient":
$$\int_a^b f(r(t))\frac{dr}{dt}dt = \int_{r(a)}^{r(b)} f(r)dr$$
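For a concrete instance (a toy example, not from the linked notes): take ##f(r) = r^2## and ##r(t) = \sin t## on ##[0, \pi/2]##. Then
$$\int_0^{\pi/2} \sin^2 t \,\cos t\, dt = \left[\frac{\sin^3 t}{3}\right]_0^{\pi/2} = \frac{1}{3} = \int_0^1 r^2\, dr,$$
which is the substitution rule in action, with the ##dt## appearing to cancel against the denominator of ##\frac{dr}{dt}##.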
 
  • #7
PeroK said:
At some stage you will have to learn the calculus of line integrals. See the above link. You need to go through the previous two pages in that section as well if you want to prove the equation. Basically, it's the line integral version of the usual integration by substitution rule (and is proved in a similar way):
$$\int_a^b f(r(t))r'(t)dt = \int_{r(a)}^{r(b)} f(r)dr$$
And, if you write that in differential notation, you have the heuristic "derivative as a quotient":
$$\int_a^b f(r(t))\frac{dr}{dt}dt = \int_{r(a)}^{r(b)} f(r)dr$$
OK...

So I have studied the line integral before, and winced whenever I saw this. But I approach it as a mechanical engineer who wants to get it down.

So are you saying that the very subtle issue of treating the derivative notation as if it were a quotient is part and parcel of performing a line integral?

  1. I have seen line integrals done before.
  2. I have read NEVER to treat the derivative like a quotient

SO why is it that when I read about line integrals, I have NEVER read a statement like this...

"OK, my children, we are now walking into a new area. Yes, we know not to treat the derivative like a quotient where terms cancel, but in some cases, such as the line integral (or even a 1D integral over ONE axis), with given smoothness, we can rework the integral so it APPEARS as if we are cancelling terms."

You see, I follow what is being written here. But no one inserts the crystal-clear caveat that a too-casual interpretation could lead to errors. I am not sure of what I am saying. I just cannot reconcile 1 and 2, above, in my comment.
 
  • #8
Here is another example from dynamics

##a = \frac{dv}{dt}, \qquad v = \frac{ds}{dt}##

And then we can solve some problems using

$$a\,ds = v\,dv$$

And this sends shivers down my spine. But it is possible. It is possible in 1D motion.

Are you all saying that in the special case of a line integral (where 1D motion is like a line integral: parameterized by some other variable), you CAN do nonsense like this, above?

I cannot reconcile this.
 
  • #9
Trying2Learn said:
OK...

So I have studied the line integral before, and winced whenever I saw this. But I approach it as a mechanical engineer who wants to get it down.

So are you saying that the very subtle issue of treating the derivative notation as if it were a quotient is part and parcel of performing a line integral?

  1. I have seen line integrals done before.
  2. I have read NEVER to treat the derivative like a quotient

SO why is it that when I read about line integrals, I have NEVER read a statement like this...

"OK, my children, we are now walking into a new area. Yes, we know not to treat the derivative like a quotient where terms cancel, but in some cases, such as the line integral (or even a 1D integral over ONE axis), with given smoothness, we can rework the integral so it APPEARS as if we are cancelling terms."

You see, I follow what is being written here. But no one inserts the crystal-clear caveat that a too-casual interpretation could lead to errors. I am not sure of what I am saying. I just cannot reconcile 1 and 2, above, in my comment.

1) You can prove the integration by substitution rule. It'll be in Paul's online notes somewhere. More formal proofs will be in any real analysis textbook.

2) You can likewise prove the similar rule for parameterised line integrals. Again, there is an informal proof in Paul's notes.

Once you have proved these, then you are free to use the "derivative as a quotient" in those cases. I don't know about "never treat the derivative as a quotient". I would say: it isn't really a quotient but in cases like integration by substitution it may act like one. So you have to be careful and know what you are doing.

One final thing. The equation in the blue box seems to me so natural that it feels like something you must be able to prove. Thinking about the properties of the tangent vector it shouldn't look like something that is pulled out of a hat. For physically meaningful vector fields it must be true.
 
  • #10
PeroK said:
1) You can prove the integration by substitution rule. It'll be in Paul's online notes somewhere. More formal proofs will be in any real analysis textbook.

2) You can likewise prove the similar rule for parameterised line integrals.

Once you have proved these, then you are free to use the "derivative as a quotient" in those cases. I don't know about "never treat the derivative as a quotient". I would say: it isn't really a quotient but in cases like integration by substitution it may act like one. So you have to be careful and know what you are doing.

One final thing. The equation in the blue box seems to me so natural that it feels like something you must be able to prove. Thinking about the properties of the tangent vector it shouldn't look like something that is pulled out of a hat. For physically meaningful vector fields it must be true.
THANK YOU!

This statement of yours is what I was looking for:

Once you have proved these, then you are free to use the "derivative as a quotient" in those cases. I don't know about "never treat the derivative as a quotient". I would say: it isn't really a quotient but in cases like integration by substitution it may act like one. So you have to be careful and know what you are doing.

Could you extend your thoughts to my other post, above?

How do mechanical engineers get away with this: ##a\,ds = v\,dv##?
 
  • #11
Trying2Learn said:
How do mechanical engineers get away with this: ##a\,ds = v\,dv##?
All these things ultimately come back to the chain rule. Let's assume that everything here is a well defined function of time. First, we define a new function:
$$f(t) = v(s(t))$$ And by the chain rule we have:
$$f'(t) = v'(s(t))s'(t)$$ Now we do two things. We identify ##f## with ##v## (this is the dodgy bit that physicists and engineers get away with if they are careful) and rewrite this equation in differential form: $$\frac{dv}{dt} = \frac{dv}{ds}\frac{ds}{dt}$$ Finally, we identify the differential $$dv = \frac{dv}{ds}ds$$ so that: $$ ads = \frac{dv}{dt}ds = \frac{ds}{dt}\frac{dv}{ds}ds = \frac{ds}{dt}dv = vdv$$
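As a quick sanity check with an assumed motion (a toy example): take ##s(t) = t^2##, so ##v = \frac{ds}{dt} = 2t## and ##a = \frac{dv}{dt} = 2##. Then
$$a\,ds = 2\,(2t\,dt) = 4t\,dt, \qquad v\,dv = 2t\,(2\,dt) = 4t\,dt,$$
and integrating both sides from ##t = 0## gives ##as = \tfrac{1}{2}v^2##, i.e. ##v^2 = 2as##, the familiar constant-acceleration result.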
 
  • #12
OK, this is helping a LOT

I can now see that while I "appreciated" the line integral, I did not "understand" it. I can now accept that facet of the explanation.

But I still need some help on ##a\,ds = v\,dv##.

I do not follow the statement in red below, ##dv = \frac{dv}{ds}\,ds##. Could you elaborate on it? As it stands, by itself, it seems you are treating the derivative like a quotient. How did you "cancel" the ##dt##? (Unless you mean something more when you say "identify".)

PeroK said:
All these things ultimately come back to the chain rule. Let's assume that everything here is a well defined function of time. First, we define a new function:
$$f(t) = v(s(t))$$ And by the chain rule we have:
$$f'(t) = v'(s(t))s'(t)$$ Now we do two things. We identify ##f## with ##v## (this is the dodgy bit that physicists and engineers get away with if they are careful) and rewrite this equation in differential form: $$\frac{dv}{dt} = \frac{dv}{ds}\frac{ds}{dt}$$ Finally, we identify the differential $$dv = \frac{dv}{ds}ds$$ so that: $$ ads = \frac{dv}{dt}ds = \frac{ds}{dt}\frac{dv}{ds}ds = \frac{ds}{dt}dv = vdv$$
 
  • #14
PeroK said:
The line in red is a definition. There was a post about this yesterday:

https://www.physicsforums.com/threads/basic-doubts-in-vector-and-multi-variable-calculus.995320/
Oh wow! Of course. Now I see it.

Thank you so much everyone. This tiny little issue has caused me so much grief.

I feel really good right now.

I do KNOW that as an engineer, there are still things I am assuming, like when you wrote "this is the dodgy bit that physicists and engineers get away with", but I find I can put some of my ignorance in a box, when I know what the box resembles.

Thank you!
 
  • #15
Trying2Learn said:
Oh wow! Of course. Now I see it.

Thank you so much everyone. This tiny little issue has caused me so much grief.

I feel really good right now.

I do KNOW that as an engineer, there are still things I am assuming, like when you wrote "this is the dodgy bit that physicists and engineers get away with", but I find I can put some of my ignorance in a box, when I know what the box resembles.

Thank you!
The dodgy bit is that when you use ##v##, say, as a function of ##t## and a function of ##s##, then it is the same physical quantity, but the mathematical functions are different. For example: $$v(t) = A\omega\cos(\omega t), \ \ s(t) = A\sin(\omega t)$$ Then expressing ##v## as a function of ##s## gives: $$v(s) = \pm A\omega\sqrt{1 -\frac{ s^2}{A^2}}$$ And you can see that we have two different mathematical functions for ##v## in this case: ##v(t)## is a very different function from ##v(s)##.

I'll leave it to you as an exercise to confirm that in this case we do indeed have: $$\frac{dv}{dt} = \frac{dv}{ds}\frac{ds}{dt}$$
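If it helps, here is a quick numerical spot-check of that identity at a single instant (a sketch with assumed numbers ##A = 2##, ##\omega = 3##, ##t = 0.1##, on the branch where ##\cos\omega t > 0##):

```python
import numpy as np

# Spot-check dv/dt = (dv/ds)(ds/dt) for s(t) = A sin(wt), v(t) = A w cos(wt),
# at a time on the branch where cos(wt) > 0 (assumed numbers below).
A, w, t = 2.0, 3.0, 0.1

s = A * np.sin(w * t)
ds_dt = A * w * np.cos(w * t)        # this is v as a function of t
dv_dt = -A * w**2 * np.sin(w * t)    # time derivative of v(t)

# dv/ds from v(s) = A w sqrt(1 - s^2/A^2) on this branch
dv_ds = -w * s / (A * np.sqrt(1 - s**2 / A**2))

print(dv_dt, dv_ds * ds_dt)          # the two values agree
```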
 
  • #16
Thank you!

This has been great!
 

What is the concept of treating the derivative like a quotient?

Treating the derivative like a quotient means manipulating Leibniz notation such as ##\frac{dy}{dx}## as if it were an ordinary fraction, with ##dy## and ##dx## as separate quantities that can be multiplied, divided and cancelled. Formally the derivative is a single symbol defined as the limit of a difference quotient, but in many one-variable situations the fraction-like manipulations give correct results.

Why is treating the derivative like a quotient useful?

This heuristic is useful because several standard results then look like ordinary fraction algebra: the chain rule ##\frac{dy}{dx} = \frac{dy}{du}\frac{du}{dx}##, integration by substitution, separation of variables in differential equations, and the reparameterisation of line integrals discussed in this thread.
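For example, in separation of variables the fraction-style manipulation is
$$\frac{dy}{dx} = ky \;\Longrightarrow\; \frac{dy}{y} = k\,dx \;\Longrightarrow\; \int\frac{dy}{y} = \int k\,dx \;\Longrightarrow\; \ln|y| = kx + C,$$
and the result can be verified by differentiating, even though the middle steps treat ##dy## and ##dx## as separate quantities.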

What are the steps for treating the derivative like a quotient?

The honest justification mirrors the definition of the derivative: work with the genuine quotient of finite changes ##\frac{\Delta y}{\Delta x}##, do the fraction algebra while the changes are finite, and then take the limit as ##\Delta x \to 0##. Provided the relevant limits exist and can be moved past the operations involved (as in the Riemann-sum argument earlier in the thread), the result is the same as if ##\frac{dy}{dx}## had been a genuine fraction all along.
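For instance, the chain rule can be motivated this way, with the caveat that the intermediate change ##\Delta u## must be nonzero for the finite step:
$$\frac{\Delta y}{\Delta x} = \frac{\Delta y}{\Delta u}\cdot\frac{\Delta u}{\Delta x} \;\longrightarrow\; \frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx} \quad\text{as } \Delta x \to 0.$$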

Can any function be treated like a quotient to find its derivative?

Not in every situation. For differentiable functions of a single variable, the manipulations can usually be justified by the chain rule, the inverse function rule, or the substitution rule. More care is needed when the required derivatives fail to exist, when a substitution is not invertible on the interval in question, or when the functions are not smooth enough for the limit arguments above to go through.

Are there any limitations to treating the derivative like a quotient?

Yes. Higher-order derivatives do not behave like fractions: the symbol ##\frac{d^2y}{dx^2}## cannot be split into pieces that cancel. And with partial derivatives of functions of several variables, naive cancellation can give outright wrong answers, as the example below shows.
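A standard cautionary example: for three variables related by ##x + y + z = 0##, naive cancellation of the ##\partial##'s would predict the product below equals ##1##, but each factor equals ##-1##:
$$\left(\frac{\partial x}{\partial y}\right)_{z}\left(\frac{\partial y}{\partial z}\right)_{x}\left(\frac{\partial z}{\partial x}\right)_{y} = (-1)(-1)(-1) = -1.$$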
