Problem with the concept of differentiation

In summary: It's difficult to give a physical interpretation to this, but the derivative, the instantaneous change, is the slope of the tangent line at that point. If you look at your graph, the tangent line has a slope, and that slope is the instantaneous change (or velocity, or rate of change, as you prefer) at that point. You don't get to pick the point; the point is fixed and you're looking at the value at that point. But the slope of the tangent line at that point is a well-defined number. We never take h = 0, we take the limit as h approaches zero: we are not interested in what happens at h = 0, but in what the difference quotient approaches as h approaches zero.
  • #1
rudransh verma
TL;DR Summary
What is the change actually at some time ##t_0##?
We define differentiation as the limit of ##\frac{f(x+h)-f(x)}{h}## as ##h \to 0##. We find the instantaneous velocity at some time ##t_0## using differentiation and call it the change at ##t_0##. We show the tangent on the graph of the function at ##t_0##. But after taking h, the time interval, as zero to find the instantaneous change, there are no two points left for a change to occur. Change at ##t_0## is meaningless in the same way as the slope at a point. How can there be a change at, say, 5 sec? We have to take two points to talk about change between them.
 
  • #2
We never take h=0, we take the limit as h approaches zero. This is a well-defined process which is easy to understand. You say, "there is no two points left for change to occur". Remember that no matter how small the interval between two points, there are always an infinite number of points between them. So we can continue calculating f(x+h) - f(x) no matter how small h is.
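To make this concrete, here is a minimal numerical sketch (added for illustration; the function ##f(x)=x^2## and the step sizes are not from the original post). The difference quotient settles toward the derivative as h shrinks, and h is never actually set to 0:

```python
# Difference quotients of f(x) = x^2 at x = 3 for a sequence of shrinking h.
# Here (f(x+h) - f(x))/h equals 6 + h, so it tends to 6, the derivative at x = 3,
# without h ever being set to zero.

def f(x):
    return x * x

x = 3.0
for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    quotient = (f(x + h) - f(x)) / h
    print(f"h = {h:<7}  (f(x+h) - f(x))/h = {quotient:.6f}")
```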
 
  • #3
phyzguy said:
We never take h=0, we take the limit as h approaches zero.
https://www.feynmanlectures.caltech.edu/I_08.html
See the part about velocity at t from the link above, attached below. Clearly, when finding the limit, we take ##\epsilon = 0##.
Let’s take an example. Find ##\lim_{x\to 3} \frac{x^2-9}{x-3}##.
##\lim_{x\to 3} \frac{(x-3)(x+3)}{x-3}##
Taking ##x=3##, the limit is 6.
 

Attachments

  • [Image attachment: the quoted passage from the Feynman Lectures on velocity at time t]
  • #4
phyzguy said:
We never take h=0, we take the limit as h approaches zero.
Don’t get me wrong, but I am not getting it. We do put in the value, say ##\epsilon=0##, to find the limit. Why do we do that?

Does it have to do with what you said, that we can take smaller and smaller values of h and find a more and more accurate value of the rate? So we can see where this value tends if we make h very, very small. That will be the value of the rate at precisely one point.
 
  • #5
Another definition would simply be the slope of the tangent to f(x) at the point x.
 
  • #6
rudransh verma said:
Let’s take an example. Find ##\lim_{x\to 3} \frac{x^2-9}{x-3}##.
##\lim_{x\to 3} \frac{(x-3)(x+3)}{x-3}##
Taking ##x=3##, the limit is 6.
Here you can do that, because the function ##f(x) = \frac{x^2-9}{x-3}## has a removable singularity at ##x=3##. So, in fact, you still take a limit (albeit somewhat implicitly) to show that ##f##, at first not defined at ##x=3##, can be extended to a continuous function ##\tilde{f}## that is defined on the whole real line. Next, you indeed just evaluate ##\tilde{f}## at ##x=3##.

Of course, normally you do not spend this much prose and just write it as you did in the first place.
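As a small numerical illustration of the removable singularity (an added sketch, not from the original post): the expression ##\frac{x^2-9}{x-3}## is undefined exactly at ##x=3##, yet its values approach 6 from both sides, which is what the extension ##\tilde f## with ##\tilde f(3)=6## captures.

```python
# (x^2 - 9)/(x - 3) is undefined at x = 3, but its values approach 6
# as x approaches 3 from either side; the extended function takes the value 6 there.

def g(x):
    return (x * x - 9) / (x - 3)

for x in [2.9, 2.99, 2.999, 3.001, 3.01, 3.1]:
    print(f"x = {x:<6}  g(x) = {g(x):.6f}")
```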
 
  • #7
phyzguy said:
We never take h=0, we take the limit as h approaches zero. This is a well-defined process which is easy to understand. You say, "there is no two points left for change to occur". Remember that no matter how small the interval between two points, there are always an infinite number of points between them. So we can continue calculating f(x+h) - f(x) no matter how small h is.
This is of course true, but maybe it is good for the OP to know that it took mathematicians quite a bit of time to make this step from the difference quotient to the differential quotient. So, conceptually it is not at all a trivial step.
 
  • #9
To the OP: you are entirely correct, change does not make sense at one point. The derivative is a concept useful for approximating actual changes between 2 points, but without knowing exactly which two points will be chosen. Given one point, we want a number which will give us the approximate change from that point to any nearby second point, so that the approximation will get better as the second point is chosen closer to the first (in a precise sense encoded by the definition of limit). I.e. if you decide how good an approximation you want, there will be an interval of choices for the second point, such that the approximation will satisfy your requirement for every point in this interval. Then the derivative is the unique number doing this job.
 
  • #11
vela said:
https://www.physicsforums.com/threa...antaneous-rate-of-change.1011395/post-6588946
S.G. Janssens said:
Here you can do that, because the function ##f(x)=\frac{x^2-9}{x-3}## has a removable singularity at ##x=3##. So, in fact, you still take a limit (albeit somewhat implicitly) to show that ##f##, at first not defined at ##x=3##, can be extended to a continuous function ##\tilde{f}## that is defined on the whole real line. Next, you indeed just evaluate ##\tilde{f}## at ##x=3##.
This was missing from my book and nobody tells you this. You are just told to go and solve the problem.
mathwonk said:
if you decide how good an approximation you want, there will be an interval of choices for the second point, such that the approximation will satisfy your requirement for every point in this interval. then the derivative is the unique number doing this job.
Can you show me an example? Because all we are saying, for example in my post #3 (see the attached image), is ##v(t_0)=32t_0##. We are screaming that the rate of change at ##t_0## is ##32t_0##.
 
  • #12
rudransh verma said:
We define differentiation as the limit of ##\frac{f(x+h)-f(x)}h## as ##h->0##. We find the instantaneous velocity at some time ##t_0## using differentiation and call it change at ##t_0##.
Actually, we call this the velocity or instantaneous velocity at time ##t_0##. A good way to think about this is that the speedometer in a car shows you the instantaneous velocity -- the velocity at a particular moment. This is different from the average velocity, which is calculated by the distance traveled divided by the elapsed time. IOW, like this:
$$\frac{s(t_0 + h) - s(t_0) }h $$

rudransh verma said:
We show tangent on the graph of the function at ##t_0##. But after taking h or time interval as zero to find the instantaneous change there is no two points left for change to occur. Change at ##t_0## is meaningless in the same way as the slope at a point. How can there be a change at say 5 sec? We have to take two points to talk about change in between them.
Slope at a point is not meaningless. Imagine you are a small insect traveling along the path of some curve. The tangent line at some point on the curve points in exactly the direction you would be looking from that point.
 
  • #13
rudransh verma said:
But after taking h, the time interval, as zero to find the instantaneous change, there are no two points left for a change to occur. Change at ##t_0## is meaningless in the same way as the slope at a point.
You're not looking for a change, though. You're looking for the rate of change. While it doesn't make sense to talk about a change at a single point in time, it does make sense to talk about the rate of change at an instant.
 
  • #14
Let ##x_1< a<x_2## and ##f:(x_1,x_2)\backslash\{a\}\to\mathbb{R}## be a function.

Def. We shall say that ##\lim_{x\to a}f(x)=A## iff for any ##\varepsilon >0## there exists ##\delta>0## such that
$${\bf 0<}|x-a|<\delta\Longrightarrow |f(x)-A|<\varepsilon$$
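As a worked illustration of how this definition is applied (an added example, reusing the limit computed earlier in the thread): for ##f(x)=\frac{x^2-9}{x-3}##, ##a=3## and ##A=6##, the choice ##\delta=\varepsilon## works, since for ##x\neq 3##
$$0<|x-3|<\delta \;\Longrightarrow\; \left|\frac{x^2-9}{x-3}-6\right| = |(x+3)-6| = |x-3| < \delta = \varepsilon.$$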
 
  • #15
wrobel said:
Let me say it again: never never use physics textbooks for studying math.
There's nothing wrong with Feynman here. This is what Feynman says:
The true velocity is the value of this ratio, x/ϵ, when ϵ becomes vanishingly small. In other words, after forming the ratio, we take the limit as ϵ gets smaller and smaller, that is, approaches 0.
He says that ϵ becomes vanishingly small, he says that ϵ gets smaller and smaller, he says that ϵ approaches 0. He never, ever, ever says that we set ϵ equal to 0, so @rudransh verma how is it possible for you to read these words and say:
rudransh verma said:
Clearly, when finding the limit, we take ##\epsilon = 0##
?
 
  • #16
rudransh verma said:
mathwonk said:
if you decide how good an approximation you want, there will be an interval of choices for the second point, such that the approximation will satisfy your requirement for every point in this interval.
Can you show me an example? Because all we are saying, for example in my post #3 (see the attached image), is ##v(t_0)=32t_0##.

Feynman provides two examples for ##t_0 = 5\ \mathrm{s}## in the passage you linked:

We know where the ball was at 5 sec. At 5.1 sec, the distance that it has gone all together is ##16(5.1)^2=416.16## ft (see Eq. 8.1). At 5 sec it had already fallen 400 ft; in the last tenth of a second it fell ##416.16-400=16.16## ft. Since 16.16 ft in 0.1 sec is the same as 161.6 ft/sec, that is the speed more or less, but it is not exactly correct. Is that the speed at 5, or at 5.1, or halfway between at 5.05 sec, or when is that the speed? Never mind—the problem was to find the speed at 5 seconds, and we do not have exactly that; we have to do a better job. So, we take one-thousandth of a second more than 5 sec, or 5.001 sec, and calculate the total fall as
$$s=16(5.001)^2=16(25.010001)=400.160016\ \text{ft}.$$
In the last 0.001 sec the ball fell 0.160016 ft, and if we divide this number by 0.001 sec we obtain the speed as 160.016 ft/sec.
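The same shrinking-interval computation can be sketched numerically (an added sketch following the quoted passage; it reproduces 161.6 ft/s and 160.016 ft/s up to floating-point rounding and tends to the instantaneous 160 ft/s):

```python
# Average speed of s(t) = 16 t^2 over [5, 5 + dt] for shrinking dt.
# Reproduces Feynman's 161.6 ft/s (dt = 0.1 s) and 160.016 ft/s (dt = 0.001 s),
# approaching the instantaneous speed of 160 ft/s as dt shrinks.

def s(t):
    return 16 * t * t

t0 = 5.0
for dt in [0.1, 0.001, 0.00001]:
    avg_speed = (s(t0 + dt) - s(t0)) / dt
    print(f"dt = {dt:<8}  average speed = {avg_speed:.6f} ft/s")
```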
 
  • #17
pbuk said:
There's nothing wrong with Feynman here. This is what Feynman says:

I did not say "wrong". I just gave advice not to study math from physics textbooks. If this is not obvious, compare what (and how) Feynman says with what (and how) Serge Lang says:
 

Attachments

  • [Image attachment: screenshot of the corresponding passage from Serge Lang]
  • #18
wrobel said:
Let ##x_1< a<x_2## and ##f:(x_1,x_2)\backslash\{a\}\to\mathbb{R}## be a function.

Def. We shall say that ##\lim_{x\to a}f(x)=A## iff for any ##\varepsilon >0## there exists ##\delta>0## such that
$${\bf 0<}|x-a|<\delta\Longrightarrow |f(x)-A|<\varepsilon$$
I understand that ##0<|x-a|## means x is not equal to a, but what is the meaning of ##|x-a|<\delta##?
 
  • #19
rudransh verma said:
I understand that ##0<|x-a|## means x is not equal to a
Not really, it is equivalent to ## x \ne a ## (when x and a are real numbers) but its meaning is "0 is less than the magnitude of ##x - a##".

rudransh verma said:
what is the meaning of ##|x-a|<\delta##.
"the magnitude of ## x - a ## is less than ## \delta ##".
 
  • #20
pbuk said:
Not really, it is equivalent to ##x \ne a## (when x and a are real numbers) but its meaning is "0 is less than the magnitude of ##x - a##" ... "the magnitude of ##x - a## is less than ##\delta##".
I mean I am unable to correlate it to the definition of limit.
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
 
  • #21
rudransh verma said:
I mean I am unable to correlate it to the definition of limit.

Are you referring to this definition of a limit?

wrobel said:
Def. We shall say that ##\lim_{x\to a}f(x)=A## iff for any ##\varepsilon >0## there exists ##\delta>0## such that
$${\bf 0<}|x-a|<\delta\Longrightarrow |f(x)-A|<\varepsilon$$

What this definition says is that ##A## is the limit of ##f(x)## at ##x = a## if and only if, for every value of ##\varepsilon > 0##, you can find an interval of radius ##\delta## around ##x = a## within which (excluding ##x = a## itself) ##|f(x) - A| < \varepsilon##.
 
  • #22
pbuk said:
What this definition says is that ##A## is the limit of ##f(x)## at ##x=a## if and only if for every value of ##\varepsilon>0## you can find an interval of radius ##\delta## around ##x=a## within which ##|f(x)-A|<\varepsilon##.
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
 
  • #23
rudransh verma said:
I understand that ##0<|x-a|## means x is not equal to a, but what is the meaning of ##|x-a|<\delta##?
In other words, this means the following. Let $$B_r(x_0):=\{x\mid |x-x_0|<r\};\quad \dot B_r(x_0)=B_r(x_0)\backslash\{x_0\}.$$ Equivalently, the definition sounds as follows:
For any ##\varepsilon>0## there exists ##\delta>0## such that
$$f(\dot B_\delta(a))\subset B_\varepsilon(A).$$
 
  • #24
wrobel said:
I did not say "wrong". I just gave advice not to study math from physics textbooks. If this is not obvious, compare what (and how) Feynman says with what (and how) Serge Lang says:
I think it depends what aspect about the math you're interested in. If you're trying to get an intuitive understanding of what the math means, a physics book may be better. The emphasis on proofs and rigor in math books can obscure what the math means in an intuitive sense.

I started taking a graduate math course in differential geometry when I was in grad school. The professor was trying to convey an intuitive understanding of what he was talking about to the students, and at one point, he stopped and asked if the students understood what he was trying to say. All the math students shook their heads. Then he looked at the physics students and asked if we got what he meant, and we all nodded.

Frankly, I think a combination is the best approach. Physicists can appear to play fast and loose with the math at times, but you do get a sense of what the point of the math is. Then seeing the rigor of mathematicians can help you learn to avoid playing too fast and too loose.
 
  • #25
vela said:
I think it depends what aspect about the math you're interested in. If you're trying to get an intuitive understanding of what the math means, a physics book may be better. The emphasis on proofs and rigor in math books can obscure what the math means in an intuitive sense.
Yes it can. But I think that if one has an intuitive understanding but does not have a formal definition then one has nothing.
 
  • #26
vela said:
Frankly, I think a combination is the best approach. Physicists can appear to play fast and loose with the math at times, but you do get a sense of what the point of the math is. Then seeing the rigor of mathematicians can help you learn to avoid playing too fast and too loose.
I thought that this was a mistake, but now I think it will be OK. I do that sometimes to avoid the boringness of maths. It is always fun to watch how we get to prove something in physics, and to do that we just have to know how the maths is done; no need to go into its derivations. I don’t like solving maths problems very much because they don’t take you anywhere the way physics does.
 
  • #27
wrobel said:
In other words, this means the following. Let $$B_r(x_0):=\{x\mid |x-x_0|<r\};\quad \dot B_r(x_0)=B_r(x_0)\backslash\{x_0\}.$$ Equivalently, the definition sounds as follows:
For any ##\varepsilon>0## there exists ##\delta>0## such that
$$f(\dot B_\delta(a))\subset B_\varepsilon(A).$$
Frankly, that definition was better. Can you verify whether what I am trying to say is correct?
rudransh verma said:
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
 
  • #28
@rudransh verma: as an example, if f(t) = 16t^2, then what you are calling the rate of change "at a", is 32a. I am saying that this simply means that for all t near a, the actual rate of change between the 2 points a and t, namely [16t^2-16a^2]/(t-a) = 16(t+a), is approximately 32a. And this approximation has an error of 16(t+a) - 32a = 16(t-a), and this error is as small as you like, for t near enough to a. I.e. for all t near a, the actual rate of change between t and a, is approximately 32a. And 32a is the only number that approximates all those actual rates of change, with an error that goes to zero as t goes to a.
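Plugging in the numbers from Feynman's example earlier in the thread (##a=5## s, ##t=5.001## s; this check is added for illustration):
$$\frac{16(5.001)^2-16(5)^2}{5.001-5}=160.016\ \text{ft/s},\qquad 160.016-32(5)=0.016=16\,(5.001-5).$$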
 
  • #29
rudransh verma said:
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
No. You're missing the key part of it. It says that as we keep decreasing ε such that f(x) gets closer and closer to A, we can always find a δ such that |x-a| < δ implies that |f(x)-A| < ε.
 
  • #30
rudransh verma said:
I understand that ##0<|x-a|## means x is not equal to a, but what is the meaning of ##|x-a|<\delta##?
It means what it says: that the absolute value of the difference between the numbers ##x## and ##a## is less than the number ##\delta##.
 
  • #31
@rudransh verma: another way to look at the derivative of f at a, is as the coefficient of the best linear approximation to the actual change function f(t)-f(a). I.e. if you can find a number D such that the change function f(t)-f(a) is well approximated by the linear function D.(t-a), then D is the derivative of f at a. By "well approximated" we mean the error term is a function vanishing to higher order than one, which means the change function f(t)-f(a) equals D.(t-a) + e(t).(t-a), where e(t) approaches zero as t approaches a. (Then the product e(t).(t-a) approaches zero faster than order one, as t approaches a, since both factors approach zero.)

I.e. if we can write the change function f(t)-f(a) = (D+e(t)).(t-a), where D is a number and e(t) is a function such that e(t)-->0 as t-->a, then the change function f(t)-f(a) is well approximated by the linear function D.(t-a), for points t near a, and D is the derivative of f at a. I.e. the derivative of f at a, is the coefficient of the best linear approximation to the changes in f, between a and all points t near a.

Since f(t)-f(a) is well approximated by D.(t-a), also f(t) is well approximated by f(a) + D.(t-a), and the graph of f is well approximated by the graph of f(a) + D.(t-a), which is nothing but the tangent line to the graph of f, at a. So also the derivative of f at a, is the (lead) coefficient of the unique linear function whose graph is tangent to the graph of f, at a.
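As a concrete instance of this description (an added illustration, reusing ##f(t)=16t^2## from earlier in the thread):
$$f(t)-f(a)=16t^2-16a^2=32a\,(t-a)+16(t-a)^2=\bigl(32a+e(t)\bigr)(t-a),\qquad e(t)=16(t-a)\to 0 \text{ as } t\to a,$$
so ##D=32a## and the tangent line at ##a## is the graph of ##f(a)+32a\,(t-a)##.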

I have given several precise and meaningful descriptions of the derivative, but one may also ask: what is the physical meaning? To me there is in fact no physical meaning to the derivative; I believe it has only mathematical meaning. But some authors, wanting to give it some intuitive physical meaning by extending the familiar meaning of rate of change, may ask us to imagine an infinitely small change in t, and the corresponding infinitely small change in f(t), and call the derivative the corresponding (finite) "infinitesimal rate of change of f at a". To me this is mostly nonsense, but I do not think one should entirely dismiss this sometimes useful "nonsense"; one should try to make some attempt to come to terms with it. I am still trying.
 
  • #32
phyzguy said:
implies that |f(x)-A| < ε.
That has already been said
phyzguy said:
as we keep decreasing ε such that f(x) gets closer and closer to A
 
  • #33
In spite of my comments above, here is an attempt to give some physical meaning to the derivative of the motion of a particle. Imagine whirling a rock tied to a string, around your head. Then at a certain instant release the string from your hand. In some sense, the derivative of the motion, at the instant of release, is given by the, roughly straight, i.e. linear, path of the rock after being released. I.e. after removing the accelerating force, the path is determined by the "instantaneous velocity" of the rock at the instant of release. (Of course there is still friction, and gravity, but this is the best I can do to render the "instantaneous velocity" visible.)
 
  • #34
mathwonk said:
the derivative is a concept useful for approximating actual changes between 2 points, but without knowing exactly which two points will be chosen.
You mean like ##v=32t##. Here v at 5 sec is 160 ft/s, and it will be approximately 160 around 5 sec, with some error. So we will be able to calculate approximate actual changes between two nearby points. We say the velocity of the body from, say, 5 sec to 5.000001 sec is approximately 160 ft/sec. In this way we can calculate the approximate change of position between them as 0.00016 ft.
We can take any value of t and approximate the actual change between t and some nearby point, assuming nothing unexpected happens.
mathwonk said:
given one point, we want a number which will give us the approximate change from that point to any nearby second point,
because the velocity keeps on changing. Right? That’s why “approximate change” and not exact change.
mathwonk said:
I.e. if you decide how good an approximation you want, there will be an interval of choices for the second point, such that the approximation will satisfy your requirement for every point in this interval. Then the derivative is the unique number doing this job.
Let’s say 160.016 ft/s is good enough. I got that speed by taking my other point as 5.001 sec. So now this approximation works for many points near 5.001, hoping there are no sudden major changes in speed, for points like 5.01 sec, 5.03 sec, etc.
In the same way, if we take the limit we get precisely 160 at 5. This is more exact than 160.016 ft/s. Now this works for points near 5. Right?
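As a quick check of the position change quoted above (added arithmetic, not from the original post):
$$\Delta s \approx v(5)\,\Delta t = 160\ \text{ft/s}\times(5.000001-5)\ \text{s} = 1.6\times 10^{-4}\ \text{ft} = 0.00016\ \text{ft}.$$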
 
  • #35
Yes, I think you are getting it. I tried to think of another way, inspired by your example, of the physical meaning of instantaneous velocity. Suppose an object is dropped on you from rest and from a height of 400 feet above your head. Then when it hits your head 5 seconds later, its instantaneous velocity will be exactly 32t = 32(5) = 160 feet per second, which gives it momentum at that instant of 160 times the mass of the object. Thus the instantaneous velocity at that instant will be reflected in how hard it hits you!
 

What is differentiation?

Differentiation is a mathematical concept that involves finding the rate of change of a function with respect to its independent variable. It can also be thought of as finding the slope of a curve at a specific point.

Why is differentiation important?

Differentiation is important because it allows us to analyze and understand the behavior of functions. It is used in various fields such as physics, economics, and engineering to model and solve real-world problems.

What is the problem with the concept of differentiation?

One of the main problems with differentiation is that it assumes that the function is continuous and differentiable at every point. However, in reality, many functions are not continuous or have points where they are not differentiable.

How can we overcome the problem with differentiation?

To overcome the problem with differentiation, we can use other mathematical tools such as limits and approximations. We can also use numerical methods to approximate the derivative of a function at a point, even if it is not differentiable.

What are some common applications of differentiation?

Differentiation has many applications, including optimization, curve sketching, and solving differential equations. It is also used in physics to analyze motion and in economics to model supply and demand curves.
