B Problem with the concept of differentiation

Summary
Differentiation is defined as the limit of the difference quotient as h approaches zero, which allows for the calculation of instantaneous change at a specific point. The discussion highlights the misconception that change at a single point is meaningless, emphasizing that while no change occurs at one point, the rate of change can still be defined. It is clarified that the limit process involves considering infinitely small intervals, thus retaining the concept of change through the derivative. Examples are provided to illustrate how limits can be used to extend functions and find instantaneous rates, reinforcing the importance of understanding limits in calculus. Overall, the conversation underscores the nuanced relationship between change, limits, and differentiation in mathematical analysis.
rudransh verma
TL;DR
What is the change actually at some time ##t_0##?
We define differentiation as the limit of ##\frac{f(x+h)-f(x)}{h}## as ##h\to 0##. We find the instantaneous velocity at some time ##t_0## using differentiation and call it the change at ##t_0##. We draw the tangent to the graph of the function at ##t_0##. But after taking h, the time interval, as zero to find the instantaneous change, there are no two points left for change to occur between. Change at ##t_0## is meaningless in the same way as the slope at a single point. How can there be a change at, say, 5 sec? We have to take two points to talk about the change between them.
 
We never take h=0; we take the limit as h approaches zero. This is a well-defined process which is easy to understand. You say, "there are no two points left for change to occur". Remember that no matter how small the interval between two points, there are always infinitely many points between them. So we can continue calculating f(x+h) - f(x) no matter how small h is.
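This limiting process can be watched numerically. A small sketch (Python; my own illustration, not from the thread) tabulates the difference quotient of ##f(x)=x^2## at ##x=3## for ever smaller h; the quotient is never evaluated at h = 0, yet the values settle toward the derivative value 6.

```python
def f(x):
    return x * x

def difference_quotient(f, x, h):
    # Average rate of change of f over [x, x + h]; h is small but never 0.
    return (f(x + h) - f(x)) / h

for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, difference_quotient(f, 3.0, h))
# Here the quotient works out to 6 + h, so the values settle toward
# f'(3) = 6 as h shrinks, without ever setting h = 0.
```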
 
phyzguy said:
We never take h=0; we take the limit as h approaches zero.
https://www.feynmanlectures.caltech.edu/I_08.html
See the part of the link above on the velocity at time t. Clearly, when finding the limit, we take ##\epsilon = 0##.
Let's take an example. Find ##\lim_{x\to 3}\frac{x^2-9}{x-3}##.
##\lim_{x\to 3}\frac{(x-3)(x+3)}{x-3} = \lim_{x\to 3}(x+3)##
Taking x=3, the limit is 6.
 

Attachments

  • [image: the passage from the Feynman Lectures (I-8) on the velocity at time t]
phyzguy said:
We never take h=0, we take the limit as h approaches zero.
Don't get me wrong, but I am not getting it. We do put in the value ##\epsilon=0## to find the limit. Why do we do that?

Does it have to do with what you said, that we can take smaller and smaller values of h and find a more and more accurate value of the rate? So we can see where this value tends if we make h very, very small, and that will be the value of the rate at precisely one point.
 
Another definition would simply be the slope of the tangent line to ##f(x)## at the point ##x##.
 
rudransh verma said:
Let's take an example. Find ##\lim_{x\to 3}\frac{x^2-9}{x-3}##.
##\lim_{x\to 3}\frac{(x-3)(x+3)}{x-3} = \lim_{x\to 3}(x+3)##
Taking x=3, the limit is 6.
Here you can do that, because the function ##f(x) = \frac{x^2-9}{x-3}## has a removable singularity at ##x=3##. So, in fact, you still take a limit (albeit somewhat implicitly) to demonstrate that ##f##, at first not defined for ##x=3##, can be extended to a continuous function ##\tilde{f}## that is defined on the whole real line. Next, you indeed just evaluate ##\tilde{f}## at ##x=3##.

Of course, normally you do not spend this much prose and just write it as you did in the first place.
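As a sketch of this extension idea (Python; my own illustration, and the name `f_tilde` is mine): the original formula has a 0/0 gap at x = 3, and the extension fills that single gap with the limit value 6 while agreeing with the formula everywhere else.

```python
def f(x):
    # Original formula: a 0/0 gap (removable singularity) at x = 3.
    return (x * x - 9) / (x - 3)

def f_tilde(x):
    # Continuous extension: agrees with f away from 3 and plugs the
    # gap with the limit value 6.
    return 6.0 if x == 3 else f(x)

print(f_tilde(3))      # 6.0, by definition
print(f_tilde(2.999))  # close to 6, and identical to f(2.999)
```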
 
phyzguy said:
We never take h=0, we take the limit as h approaches zero. This is a well-defined process which is easy to understand. You say, "there is no two points left for change to occur". Remember that no matter how small the interval between two points, there are always an infinite number of points between them. So we can continue calculating f(x+h) - f(x) no matter how small h is.
This is of course true, but maybe it is good for the OP to know that it took mathematicians quite a bit of time to make this step from the difference quotient to the differential quotient. So, conceptually it is not at all a trivial step.
 
To the OP: you are entirely correct that change does not make sense at one point. The derivative is a concept useful for approximating actual changes between two points, but without knowing exactly which two points will be chosen. Given one point, we want a number which gives us the approximate change from that point to any nearby second point, such that the approximation gets better as the second point is chosen closer to the first (in a precise sense encoded by the definition of limit). I.e., if you decide how good an approximation you want, there will be an interval of choices for the second point such that the approximation satisfies your requirement for every point in this interval. The derivative is then the unique number doing this job.
 
  • #11
vela said:
https://www.physicsforums.com/threa...antaneous-rate-of-change.1011395/post-6588946
S.G. Janssens said:
Here you can do that, because the function ##f(x)=\frac{x^2-9}{x-3}## has a removable singularity at ##x=3##. So, in fact, you still take a limit (albeit somewhat implicitly) to demonstrate that ##f##, at first not defined for ##x=3##, can be extended to a continuous function ##\tilde{f}## that is defined on the whole real line. Next, you indeed just evaluate ##\tilde{f}## at ##x=3##.
This was missing in my book, and nobody tells you this. It's just: go and solve the problem.
mathwonk said:
if you decide how good an approximation you want, there will be an interval of choices for the second point, such that the approximation will satisfy your requirement for every point in this interval. then the derivative is the unique number doing this job.
Can you show me an example? Because all we are saying, for example in my post #3 (see attached image), is ##v(t_0)=32t_0##. We are screaming that the rate of change at ##t_0## is ##32t_0##.
 
  • #12
rudransh verma said:
We define differentiation as the limit of ##\frac{f(x+h)-f(x)}h## as ##h->0##. We find the instantaneous velocity at some time ##t_0## using differentiation and call it change at ##t_0##.
Actually, we call this the velocity, or instantaneous velocity, at time ##t_0##. A good way to think about this is that the speedometer in a car shows you the instantaneous velocity: the velocity at a particular moment. This is different from the average velocity, which is the distance traveled divided by the elapsed time. IOW, like this:
$$\frac{s(t_0 + h) - s(t_0) }h $$

rudransh verma said:
We show tangent on the graph of the function at ##t_0##. But after taking h or time interval as zero to find the instantaneous change there is no two points left for change to occur. Change at ##t_0## is meaningless in the same way as the slope at a point. How can there be a change at say 5 sec? We have to take two points to talk about change in between them.
Slope at a point is not meaningless. Imagine you are a small insect traveling along the path of some curve. The tangent line at some point on the curve points in exactly the direction you would be looking from there.
 
  • #13
rudransh verma said:
But after taking h or time interval as zero to find the instantaneous change there is no two points left for change to occur. Change at ##t_0## is meaningless in the same way as the slope at a point.
You're not looking for a change, though. You're looking for the rate of change. While it doesn't make sense to talk about a change at a single point in time, it does make sense to talk about the rate of change at an instant.
 
  • #14
Let ##x_1< a<x_2## and ##f:(x_1,x_2)\backslash\{a\}\to\mathbb{R}## be a function.

Def. We shall say that ##\lim_{x\to a}f(x)=A## iff for any ##\varepsilon >0## there exists ##\delta>0## such that
$${\bf 0<}|x-a|<\delta\Longrightarrow |f(x)-A|<\varepsilon$$
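This definition can be spot-checked numerically. A Python sketch (my own illustration): for ##f(x)=\frac{x^2-9}{x-3}## from the earlier example, ##|f(x)-6|=|x-3|## whenever ##x\neq 3##, so ##\delta=\varepsilon## works for every ##\varepsilon##.

```python
def f(x):
    # The earlier example: undefined at x = 3 itself, equal to x + 3 elsewhere.
    return (x * x - 9) / (x - 3)

def check_limit(f, a, A, eps, delta, n=1000):
    # Spot-check the implication: 0 < |x - a| < delta  =>  |f(x) - A| < eps.
    for i in range(1, n + 1):
        step = delta * i / (n + 1)          # 0 < step < delta, so x != a
        for x in (a + step, a - step):
            if abs(f(x) - A) >= eps:
                return False
    return True

# |f(x) - 6| = |x - 3| for x != 3, so choosing delta = eps always works.
for eps in [0.1, 0.01, 0.001]:
    print(eps, check_limit(f, 3.0, 6.0, eps, delta=eps))
```

Note that the sample points deliberately exclude ##x=a## itself, mirroring the ##0<|x-a|## condition in the definition.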
 
  • #15
wrobel said:
Let me say it again: never never use physics textbooks for studying math.
There's nothing wrong with Feynman here. This is what Feynman says:
The true velocity is the value of this ratio, ##x/\epsilon##, when ##\epsilon## becomes vanishingly small. In other words, after forming the ratio, we take the limit as ##\epsilon## gets smaller and smaller, that is, approaches 0.
He says that ##\epsilon## becomes vanishingly small, he says that ##\epsilon## gets smaller and smaller, he says that ##\epsilon## approaches 0. He never, ever, ever says that we set ##\epsilon## equal to 0, so @rudransh verma how is it possible for you to read these words and say:
rudransh verma said:
Clearly, when finding the limit, we take ##\epsilon = 0##.
?
 
  • #16
rudransh verma said:
mathwonk said:
If you decide how good an approximation you want, there will be an interval of choices for the second point such that the approximation satisfies your requirement for every point in this interval.
Can you show me an example? Because all we are saying, for example in my post #3 (see attached image), is ##v(t_0)=32t_0##.

Feynman provides two examples for ##t_0 = 5\,\mathrm{s}## in the passage you linked:

We know where the ball was at 5 sec. At 5.1 sec, the distance that it has gone all together is ##16(5.1)^2=416.16## ft (see Eq. 8.1). At 5 sec it had already fallen 400 ft; in the last tenth of a second it fell ##416.16-400=16.16## ft. Since 16.16 ft in 0.1 sec is the same as 161.6 ft/sec, that is the speed more or less, but it is not exactly correct. Is that the speed at 5, or at 5.1, or halfway between at 5.05 sec, or when is that the speed? Never mind—the problem was to find the speed at 5 seconds, and we do not have exactly that; we have to do a better job. So, we take one-thousandth of a second more than 5 sec, or 5.001 sec, and calculate the total fall as
##s=16(5.001)^2=16(25.010001)=400.160016\ \mathrm{ft}.##
In the last 0.001 sec the ball fell 0.160016 ft, and if we divide this number by 0.001 sec we obtain the speed as 160.016 ft/sec.
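Feynman's arithmetic above can be reproduced directly (a Python sketch of the same calculation, using his ##s=16t^2## ft fall law):

```python
def s(t):
    # Feynman's fall law: distance fallen s = 16 t^2 (feet, seconds).
    return 16 * t * t

def average_speed(t0, h):
    # Distance covered between t0 and t0 + h, divided by the elapsed time h.
    return (s(t0 + h) - s(t0)) / h

print(average_speed(5.0, 0.1))    # about 161.6 ft/sec, as in the quote
print(average_speed(5.0, 0.001))  # about 160.016 ft/sec
# As h shrinks, the averages tend to the instantaneous speed 32 * 5 = 160 ft/sec.
```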
 
  • #17
pbuk said:
There's nothing wrong with Feynman here. This is what Feynman says:

I did not say "wrong". I just gave advice not to study math from physics textbooks. If this is not obvious, compare what (and how) Feynman says with what (and how) Serge Lang says:
 

Attachments

  • [screenshot: Serge Lang's treatment of the same material, for comparison]
  • #18
wrobel said:
Let ##x_1< a<x_2## and ##f:(x_1,x_2)\backslash\{a\}\to\mathbb{R}## be a function.

Def. We shall say that ##\lim_{x\to a}f(x)=A## iff for any ##\varepsilon >0## there exists ##\delta>0## such that
$${\bf 0<}|x-a|<\delta\Longrightarrow |f(x)-A|<\varepsilon$$
I understand that ##0<|x-a|## means x is not equal to a, but what is the meaning of ##|x-a|<\delta##?
 
  • #19
rudransh verma said:
I understand that ##0<|x-a|## means x is not equal to a
Not really: it is equivalent to ##x \ne a## (when ##x## and ##a## are real numbers), but its meaning is "0 is less than the magnitude of ##x - a##".

rudransh verma said:
what is the meaning of ##|x-a|<\delta##.
"the magnitude of ## x - a ## is less than ## \delta ##".
 
  • #20
pbuk said:
Not really, it is equivalent to ##x \ne a## (when ##x## and ##a## are real numbers) but its meaning is "0 is less than the magnitude of ##x - a##" ... "the magnitude of ##x - a## is less than ##\delta##".
I mean I am unable to correlate it to the definition of the limit.
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
 
  • #21
rudransh verma said:
I mean I am unable to correlate it to the definition of limit.

Are you referring to this definition of a limit?

wrobel said:
Def. We shall say that ##\lim_{x\to a}f(x)=A## iff for any ##\varepsilon >0## there exists ##\delta>0## such that
$${\bf 0<}|x-a|<\delta\Longrightarrow |f(x)-A|<\varepsilon$$

What this definition says is that ## A ## is the limit of ## f(x) ## at ## x = a ## if and only if for every value of ## \varepsilon > 0 ## you can find a circle of radius ## \delta ## around ## x = a ## within which ## | f(x) - A | < \varepsilon ##.
 
  • #22
pbuk said:
What this definition says is that ##A## is the limit of ##f(x)## at ##x=a## if and only if for every value of ##\varepsilon>0## you can find a circle of radius ##\delta## around ##x=a## within which ##|f(x)-A|<\varepsilon##.
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
 
  • #23
rudransh verma said:
I understand that ##0<|x-a|## means ##x## is not equal to ##a##, but what is the meaning of ##|x-a|<\delta##?
In other words, this means the following. Let $$B_r(x_0):=\{x\mid |x-x_0|<r\};\quad \dot B_r(x_0)=B_r(x_0)\backslash\{x_0\}.$$ Equivalently, the definition reads as follows:
For any ##\varepsilon>0## there exists ##\delta>0## such that
$$f(\dot B_\delta(a))\subset B_\varepsilon(A).$$
 
  • #24
wrobel said:
I did not say "wrong". I just gave an advise not to study math. by physics textbooks. If this is not obvious compare what (and how) Feynman says and what (and how) Serge Lang says:
I think it depends what aspect about the math you're interested in. If you're trying to get an intuitive understanding of what the math means, a physics book may be better. The emphasis on proofs and rigor in math books can obscure what the math means in an intuitive sense.

I started taking a graduate math course in differential geometry when I was in grad school. The professor was trying to convey an intuitive understanding of what he was talking about to the students, and at one point, he stopped and asked if the students understood what he was trying to say. All the math students shook their heads. Then he looked at the physics students and asked if we got what he meant, and we all nodded.

Frankly, I think a combination is the best approach. Physicists can appear to play fast and loose with the math at times, but you do get a sense of what the point of the math is. Then seeing the rigor of mathematicians can help you learn to avoid playing too fast and too loose.
 
  • #25
vela said:
I think it depends what aspect about the math you're interested in. If you're trying to get an intuitive understanding of what the math means, a physics book may be better. The emphasis on proofs and rigor in math books can obscure what the math means in an intuitive sense.
Yes, it can. But I think that if one has an intuitive understanding but no formal definition, then one has nothing.
 
  • #26
vela said:
Frankly, I think a combination is the best approach. Physicists can appear to play fast and loose with the math at times, but you do get a sense of what the point of the math is. Then seeing the rigor of mathematicians can help you learn to avoid playing too fast and too loose.
I thought that this was a mistake, but now I think it is OK. I do that sometimes to avoid the tedium of maths. It is always fun to watch how we prove something in physics; to do that we just have to know how the maths is done, with no need to go into its derivations. I don't like solving maths problems very much, because they don't take you anywhere the way physics does.
 
  • #27
wrobel said:
In other words, this means the following. Let $$B_r(x_0):=\{x\mid |x-x_0|<r\};\quad \dot B_r(x_0)=B_r(x_0)\backslash\{x_0\}.$$ Equivalently, the definition reads as follows:
For any ##\varepsilon>0## there exists ##\delta>0## such that
$$f(\dot B_\delta(a))\subset B_\varepsilon(A).$$
Frankly, that definition was better. Can you check whether what I am trying to say is correct?
rudransh verma said:
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
 
  • #28
@rudransh verma: as an example, if ##f(t) = 16t^2##, then what you are calling the rate of change "at ##a##" is ##32a##. I am saying that this simply means that for all ##t## near ##a##, the actual rate of change between the two points ##a## and ##t##, namely ##\frac{16t^2-16a^2}{t-a} = 16(t+a)##, is approximately ##32a##. This approximation has an error of ##16(t+a) - 32a = 16(t-a)##, and that error is as small as you like for ##t## near enough to ##a##. I.e., for all ##t## near ##a##, the actual rate of change between ##t## and ##a## is approximately ##32a##. And ##32a## is the only number that approximates all those actual rates of change with an error that goes to zero as ##t## goes to ##a##.
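This error bound is easy to tabulate (a Python sketch of mathwonk's example; the variable names are mine): the average rate between ##a## and ##t## differs from ##32a## by exactly ##16(t-a)##, which shrinks linearly as ##t\to a##.

```python
def f(t):
    # The example function f(t) = 16 t^2.
    return 16 * t * t

a = 5.0
for t in [5.1, 5.01, 5.001]:
    actual_rate = (f(t) - f(a)) / (t - a)   # algebraically equals 16 * (t + a)
    error = actual_rate - 32 * a            # algebraically equals 16 * (t - a)
    print(t, actual_rate, error)
# The error 16 * (t - a) is as small as you like for t near enough to a,
# so 32a = 160 is the unique number approximating all nearby average rates.
```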
 
  • #29
rudransh verma said:
Does it mean that if we keep decreasing ##\epsilon##, so that f(x) gets closer to A, there exists a ##\delta## such that x gets closer to a?
No, you're missing the key part of it. It says that however small we make ##\varepsilon## (forcing f(x) closer and closer to A), we can always find a ##\delta## such that ##|x-a| < \delta## implies ##|f(x)-A| < \varepsilon##.
 
  • #30
rudransh verma said:
I understand that ##0<|x-a|## meaning x is not equal to a but what is the meaning of ##|x-a|<\delta##.
It means what it says: that the absolute value of the difference between the numbers ##x## and ##a## is less than the number ##\delta##.
 
