Integration operation and its relation to differentials

In summary: in practice, letting the differentials act as separate quantities in a quotient leads to correct answers, as in the example discussed below, because the derivative is the limit of a ratio, and manipulations that are valid on that ratio carry through the limit.
  • #1
Mr Davis 97
I need to get a few things straight about the integration operation (as an intro calc student). I understand that integration is a process that takes a function and returns its antiderivative. We can think of it as an operator, where ##\displaystyle \int...dx## is kind of like an opening and a closing bracket for an input function. This is how I interpret integration: the summa and the differential (the closing "bracket") are inseparable, since they are part of the same notation. However, my professor confused me with his derivation of velocity under constant acceleration:

$$\displaystyle \frac{dv}{dt} = a$$
$$\displaystyle v\frac{dv}{dt} = va$$
$$\displaystyle v\frac{dv}{dt} = a\frac{dx}{dt}$$
Then he "cancels" the $dt$
$$vdv = adx$$
The next part is what confuses me:
$$\int_{v_1}^{v_2}vdv = a\int_{x_1}^{x_2}dx$$
which comes out to be
$$\frac{1}{2}v_2^2 - \frac{1}{2}v_1^2 = a(x_2 - x_1)$$

My primary question is, how did the two summas appear, if there were no corresponding differentials at the time of application? This is disconcerting because the same operation is supposed to be applied to both sides of the equation, and those look like two different operations in terms of two different variables. Shouldn't he have done something like ##\displaystyle \int_{t_1}^{t_2}vdv~dt = \int_{t_1}^{t_2}adx~dt##?

If it's no trouble, I have two additional questions. Why does he use ##\displaystyle \int_{v_1}^{v_2}...dv## rather than ##\displaystyle \int...dv##? Also, what justifies that he "cancels" the dt differentials?
 
  • #2
Mr Davis 97 said:
$$\displaystyle v\frac{dv}{dt} = a\frac{dx}{dt}$$
Then he "cancels" the ##dt##
$$vdv = adx$$
...
what justifies that he "cancels" the dt differentials?
That last formula ##vdv = adx## is, strictly speaking, not meaningful. It's just a shorthand for a formula that is meaningful, which is :
$$\int_{v_1}^{v_2}vdv = \int_{x_1}^{x_2}adx $$
provided that ##v_1\equiv v(t_1); v_2\equiv v(t_2);x_1\equiv x(t_1); x_2\equiv x(t_2)##.

The justification for this is that, since ## v\frac{dv}{dt} = a\frac{dx}{dt}##, we have

$$\int_{t_1}^{t_2}v\frac{dv}{dt}\,dt=
\int_{t_1}^{t_2}a\frac{dx}{dt}\,dt$$

We then apply the rule for change of integration variable, ##t\to v## on the LHS and ##t\to x## on the RHS, to obtain
$$\int_{v_1}^{v_2}vdv = \int_{x_1}^{x_2}adx $$
He then pulls the ##a## outside the integral, which is valid here since the original post specifies constant acceleration.
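Spelled out, the change-of-variable rule being used is the standard substitution rule: for continuously differentiable ##g## and continuous ##f##,
$$\int_{t_1}^{t_2}f\big(g(t)\big)\,g'(t)\,dt=\int_{g(t_1)}^{g(t_2)}f(u)\,du$$
Taking ##g=v## with ##f(u)=u## on the left-hand side, and ##g=x## with ##f(u)=a## (constant ##a##) on the right-hand side, reproduces the two integrals above.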
 
  • #3
andrewkirk said:
That last formula ##vdv = adx## is, strictly speaking, not meaningful. It's just a shorthand for a formula that is meaningful, which is :
$$\int_{v_1}^{v_2}vdv = \int_{x_1}^{x_2}adx $$
provided that ##v_1\equiv v(t_1); v_2\equiv v(t_2);x_1\equiv x(t_1); x_2\equiv x(t_2)##.

The justification for this is that, since ## v\frac{dv}{dt} = a\frac{dx}{dt}##, we have

$$\int_{t_1}^{t_2}v\frac{dv}{dt}\,dt=
\int_{t_1}^{t_2}a\frac{dx}{dt}\,dt$$

We then apply the rule for change of integration variable, ##t\to v## on the LHS and ##t\to x## on the RHS, to obtain
$$\int_{v_1}^{v_2}vdv = \int_{x_1}^{x_2}adx $$
He then pulls the ##a## outside the integral, which is valid here since the original post specifies constant acceleration.
So is it better to do it like he did, since it is faster, or do it the rigorous way, since it acts as a sanity check?
 
  • #4
Mr Davis 97 said:
Why does he use ##\displaystyle \int_{v_1}^{v_2}...dv## rather than ##\displaystyle \int...dv##?
The former - the 'definite integral' - is a real number. The latter - the 'indefinite integral' - is an equivalence class of functions. In physics one generally needs to get to a number sooner or later rather than just a function.

The sense in which integration is an 'inverse' of differentiation is that:

$$\frac{d}{dx}\bigg(\int_a^x f(u)\,du\bigg)=f(x)$$

provided that ##a\leq x##. Note that we needed to use a definite integral to write this.
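For example, with ##f(u)=u^2## and ##a=0##:
$$\frac{d}{dx}\bigg(\int_0^x u^2\,du\bigg)=\frac{d}{dx}\left(\frac{x^3}{3}\right)=x^2$$
The definite integral gives a number (##x^3/3##) for each choice of upper limit ##x##, and differentiating with respect to that upper limit recovers ##f##.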
 
  • #5
Mr Davis 97 said:
So is it better to do it like he did, since it is faster, or do it the rigorous way, since it acts as a sanity check?
It depends on the context. Personally I think it is poor practice to do it like that when teaching, because it sows confusion and encourages sloppy thinking. On the other hand, when doing your own derivations it does no harm, and can speed things up, as long as you keep awareness of what you're about. You can always go back and make such steps rigorous later on, if the branch you are exploring turns out to be fruitful.
 
  • #6
andrewkirk said:
It depends on the context. Personally I think it is poor practice to do it like that when teaching, because it sows confusion and encourages sloppy thinking. On the other hand, when doing your own derivations it does no harm, and can speed things up, as long as you keep awareness of what you're about. You can always go back and make such steps rigorous later on, if the branch you are exploring turns out to be fruitful.
Okay, that makes sense. I have another question. In introductory calculus we are told that ##dy/dx## is one entity, a derivative, and not a quotient of differentials. However, in practice, why does it turn out that letting the differentials act as separate entities in a quotient leads to correct answers, like in the above case?
 
  • #7
Mr Davis 97 said:
However, in practice, why does it turn out that letting the differentials act as separate entities in a quotient leads to correct answers, like in the above case?
I think it's because, while ##\frac{dy}{dx}## is not a ratio, it is the limit of a ratio (as the denominator tends to zero).

So any manipulation that could be performed on the ratio inside the limit and then validly moved outside the limit - using the many useful properties of limits - ends up making it look as if ##\frac{dy}{dx}## were a ratio.

A nice example of this is the chain rule: ##\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}## where ##y## is a function of ##x##, which is a function of ##t##.

We write

$$\frac{dy}{dt}=\lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{h}\right)
= \lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{x(t+h)-x(t)}\cdot \frac{x(t+h)-x(t)}{h}\right)\\
= \lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{x(t+h)-x(t)}\right)\cdot \lim_{h\to 0}\left(\frac{x(t+h)-x(t)}{h}\right)\\
=\frac{dy}{dx}\frac{dx}{dt}
$$
We wrote the derivatives as limits of ratios, manipulated the ratios inside the limit, then moved the multiplication outside the limit, using the 'product of limits' theorem.
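As a concrete check, take ##y(x)=\sin x## and ##x(t)=t^2##:
$$\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}=\cos(x)\cdot 2t=2t\cos(t^2)$$
which agrees with differentiating ##y=\sin(t^2)## directly.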
 
  • #8
andrewkirk said:
I think it's because, while ##\frac{dy}{dx}## is not a ratio, it is the limit of a ratio (as the denominator tends to zero).

So any manipulation that could be performed on the ratio inside the limit and then validly moved outside the limit - using the many useful properties of limits - ends up making it look as if ##\frac{dy}{dx}## were a ratio.

A nice example of this is the chain rule: ##\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}## where ##y## is a function of ##x##, which is a function of ##t##.

We write

$$\frac{dy}{dt}=\lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{h}\right)
= \lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{x(t+h)-x(t)}\cdot \frac{x(t+h)-x(t)}{h}\right)\\
= \lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{x(t+h)-x(t)}\right)\cdot \lim_{h\to 0}\left(\frac{x(t+h)-x(t)}{h}\right)\\
=\frac{dy}{dx}\frac{dx}{dt}
$$
We wrote the derivatives as limits of ratios, manipulated the ratios inside the limit, then moved the multiplication outside the limit, using the 'product of limits' theorem.
Excellent answer. Thanks.
 
  • #9
andrewkirk said:
I think it's because, while ##\frac{dy}{dx}## is not a ratio, it is the limit of a ratio (as the denominator tends to zero).

So any manipulation that could be performed on the ratio inside the limit and then validly moved outside the limit - using the many useful properties of limits - ends up making it look as if ##\frac{dy}{dx}## were a ratio.

A nice example of this is the chain rule: ##\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}## where ##y## is a function of ##x##, which is a function of ##t##.

We write

$$\frac{dy}{dt}=\lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{h}\right)
= \lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{x(t+h)-x(t)}\cdot \frac{x(t+h)-x(t)}{h}\right)\\
= \lim_{h\to 0}\left(\frac{y(x(t+h))-y(x(t))}{x(t+h)-x(t)}\right)\cdot \lim_{h\to 0}\left(\frac{x(t+h)-x(t)}{h}\right)\\
=\frac{dy}{dx}\frac{dx}{dt}
$$
We wrote the derivatives as limits of ratios, manipulated the ratios inside the limit, then moved the multiplication outside the limit, using the 'product of limits' theorem.
I actually have one last question. If we can consider the differential at the end of an integral as simply a closing bracket that conveniently tells us with respect to which variable we are integrating, why is it so important when it comes to u-substitution? Why does it seem to be just a placeholder with simple integrals, yet become part of the mathematics with more complex ones?
 
  • #10
Mr Davis 97 said:
I actually have one last question. If we can consider the differential at the end of an integral as simply a closing bracket that conveniently tells us with respect to which variable we are integrating, why is it so important when it comes to u-substitution? Why does it seem to be just a placeholder with simple integrals, yet become part of the mathematics with more complex ones?

I think you're assuming that just because the variable of integration comes at the end of the expression, it doesn't serve any real purpose. That's not so.

When the integrand (that's the stuff between the integral sign and the differential) is written out, the differential serves to indicate the variable with respect to which the integration is being performed. Now, when using u-substitution to simplify finding an antiderivative, you want the original integral to satisfy ##\int f(x)\,dx = \int g(u)\,du##. Since integration is the limit of a sum, you're pretty much stuck with ensuring that ##f(x)\,dx = g(u)\,du## as well.
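A simple illustration of why the differential carries real information: to find ##\int 2x\cos(x^2)\,dx##, set ##u=x^2##, so ##du=2x\,dx##, and
$$\int 2x\cos(x^2)\,dx=\int \cos u\,du=\sin u+C=\sin(x^2)+C$$
If the ##dx## were just a closing bracket, nothing would force us to account for the factor ##2x## when switching to ##u##; it is precisely the requirement ##f(x)\,dx=g(u)\,du## that makes the substitution bookkeeping work.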
 
  • #11
Mr Davis 97 said:
I actually have one last question. If we can consider the differential at the end of an integral as simply a closing bracket that conveniently tells us with respect to which variable we are integrating, why is it so important when it comes to u-substitution? Why does it seem to be just a placeholder with simple integrals, yet become part of the mathematics with more complex ones?
It's not just a closing bracket. That becomes really clear when you do Riemann-Stieltjes integrals, which are important in measure theory and especially in probability theory.

Again it helps to think in terms of limits (of sums in this case, rather than ratios), to which end recall that the integral sign is just an elongated 'S' for 'sum'. A simplified version of the Riemann integral can be written as:

$$\int_a^b f(x)dx\equiv \lim_{n\to\infty}\sum_{k=1}^n f(x(k))\delta x$$

where ##\delta x\equiv \frac{b-a}{n}## and ##x(k)\equiv a+(b-a)\cdot\frac{k}{n}##.

The ##dx## in the integral corresponds to the ##\delta x## in the sum inside the limit.

If we substitute ##u=2x## in these then, for a given value of ##n##, we have ##\delta u= \frac{2b-2a}{n}=2\frac{b-a}{n}=2\delta x##. This is reflected in the integral as follows:

$$\int_a^b f(x)dx\equiv \lim_{n\to\infty}\sum_{k=1}^n f(x(k))\delta x\\
=\lim_{n\to\infty}\sum_{k=1}^n f(\frac{u(k)}{2})\frac{\delta u}{2}$$
[where ##u(k)\equiv 2x(k)##]
and this is equal to
$$
\int_{2a}^{2b} \frac{1}{2}f(\frac{u}{2})du
$$
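As a quick numerical sanity check (my own sketch, not part of the thread), the following short Python snippet evaluates the simplified right-endpoint Riemann sum above for a sample ##f(x)=x^2## on ##[a,b]=[1,3]## and for the substituted integrand ##\tfrac{1}{2}f(u/2)## on ##[2a,2b]=[2,6]##; both converge to the same value, ##26/3\approx 8.667##.

Code:
# Right-endpoint Riemann sum matching the simplified definition above:
# x(k) = lo + (hi - lo) * k / n, with step (hi - lo) / n.
def riemann_sum(g, lo, hi, n):
    step = (hi - lo) / n
    return sum(g(lo + (hi - lo) * k / n) for k in range(1, n + 1)) * step

def f(x):
    return x ** 2

a, b, n = 1.0, 3.0, 100_000
original = riemann_sum(f, a, b, n)                                    # ~ 26/3
substituted = riemann_sum(lambda u: 0.5 * f(u / 2), 2 * a, 2 * b, n)  # ~ 26/3
print(original, substituted)  # both print approximately 8.6667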
 
  • #12
OP: you really want to do nonstandard calculus! Check out Keisler's free book: https://www.math.wisc.edu/~keisler/calc.html
It will explain what ##dx## really is and why it behaves so nicely. It will definitely open your eyes and convince you that the nonrigorous proof in your OP actually is pretty rigorous!
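Very roughly (a sketch of the flavour only, not Keisler's actual development): there ##dt## is a genuine infinitesimal number and the differentials are literal products,
$$dv = v'(t)\,dt = a\,dt,\qquad dx = x'(t)\,dt = v\,dt,\qquad\text{so}\qquad v\,dv = va\,dt = a\,dx,$$
with the definite integrals recovered as standard parts of infinite sums of such infinitesimal terms.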
 

1. What is the definition of integration?

Integration is a mathematical operation that inverts differentiation: it finds a function, called an antiderivative, whose derivative is the given function.

2. How does integration relate to the area under a curve?

The definite integral of a function over an interval equals the signed area under its curve, because the integral represents the accumulation of the function's values over that interval. The fundamental theorem of calculus connects this area interpretation to antiderivatives: differentiating the area function recovers the original function.
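For example, the region under ##y=x## between ##x=0## and ##x=1## is a triangle of area ##\tfrac{1}{2}##, and indeed
$$\int_0^1 x\,dx=\frac{1}{2}$$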

3. What are the different types of integration?

There are two main types of integration: indefinite and definite. Indefinite integration involves finding the general antiderivative of a function, while definite integration involves finding the specific numerical value of the area under a curve.

4. How is integration used in real-world applications?

Integration has a wide range of applications in various fields such as physics, engineering, economics, and biology. It is used to calculate areas, volumes, work, and many other quantities that are essential in these disciplines.

5. What is the relationship between integration and differentials?

Differential calculus involves finding the derivative of a function, while integral calculus involves finding an antiderivative of a function, so the two operations are inverses of each other. The differential (such as ##dx##) appearing in an integral indicates the variable with respect to which that operation is carried out.
