Understanding the Difference between \Delta x and dx in Differential Calculus

yuiop
Hi, Simple question here.

What is the difference between \Delta x and dx in differential calculus? Do they both mean the same thing? i.e. x_i-x_{i-1}?
 
Yes, dx is just a real number. If f:\mathbb R\rightarrow\mathbb R, then we define df:\mathbb R^2\rightarrow\mathbb R by

df(x,h)=f'(x)h

If we write dx instead of h, abbreviate df(x,h) as dy, and write f'(x)=dy/dx, the equality takes the form

dy=\frac{dy}{dx}dx

My point is that while dx is always just a real number, dy usually refers to something different: a first order estimate of how much f changes as a result of changing x (while Δy would be the actual change).
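A minimal numerical sketch of this point (the function f(x) = x^2 and the step sizes are my own example choices, not from the thread):

```python
# Compare dy = f'(x)*dx (the first-order estimate) with the actual
# change Delta_y = f(x + dx) - f(x), for an example function f(x) = x^2.

def f(x):
    return x**2

def f_prime(x):
    return 2 * x

x = 3.0
for dx in (1.0, 0.1, 0.01):
    dy = f_prime(x) * dx            # first-order estimate ("dy")
    delta_y = f(x + dx) - f(x)      # actual change ("Delta y")
    print(f"dx = {dx:<5}  dy = {dy:<8.4f}  Delta_y = {delta_y:<8.4f}  "
          f"difference = {delta_y - dy:.4f}")

# The gap Delta_y - dy shrinks like dx^2, which is why dy is called a
# first-order approximation to the actual change.
```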
 
Well, I disagree with Fredrik: they are NOT the same thing. \delta x is a number, but it is only an approximation to dx. In calculus, "dx" is essentially a "concept" rather than an actual number, and dy is defined by "dy = (dy/dx) dx", where "dy/dx" is, of course, the derivative. In more advanced mathematics we can define "differentials" in terms of "differential forms" and "infinitesimals" a little more precisely.
 
Fredrik said:
My point is that while dx is always just a real number, dy usually refers to something different: a first order estimate of how much f changes as a result of changing x (while Δy would be the actual change).

OK, as I understand you,

\frac{dx}{dy} = \frac{\Delta x}{dy}

is always true, but the following are not necessarily true:

\frac{dx}{dy} = \frac{dx}{\Delta y}

\frac{dx}{dy} = \frac{\Delta x}{\Delta y}

HallsofIvy has just gone and complicated things by introducing \delta x and saying it is not the same thing as dx. So does that mean:

\frac{dx}{dy} = \frac{\delta x}{\delta y}

is also not necessarily true?

I always thought one was just an informal shorthand for the other.

So let's take a practical example that might make things clearer to me.

We have a particle initially at rest at (x1,t1) = (0,0) that accelerates to a final instantaneous velocity of 50 m/s in 10 seconds over a distance of 200 m, so that (x2,t2) = (200,10).

The average velocity of the particle is:

\frac{\Delta x}{\Delta t} = (200/10) = 20 m/s

The initial instantaneous velocity of the particle is:

\frac{dx_1}{dt_1} = 0 m/s

The final instantaneous velocity of the particle is:

\frac{dx_2}{dt_2} = 50 m/s

Where does \frac{\delta x}{\delta t} fit into this picture?

HallsofIvy seems to be using \delta x as the small interval \Delta x that is sufficiently small that it is a good approximation to the conceptual dx. Is that correct?

I am primarily trying to get the notation right in my head, but the concepts are of interest too.
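For what it is worth, here is a small numerical sketch of that example. The position function x(t) = 0.1 t^3 + t^2 is a hypothetical choice of mine, just one function consistent with the stated numbers (x(0) = 0, x(10) = 200, initial velocity 0, final velocity 50 m/s); it shows \Delta x/\Delta t as an average and dx/dt as the limit of \Delta x/\Delta t over ever smaller intervals:

```python
# A numerical sketch of the particle example above, using a hypothetical
# position function consistent with the stated numbers.

def x(t):
    return 0.1 * t**3 + t**2   # x(0) = 0, x(10) = 200, x'(0) = 0, x'(10) = 50

# Average velocity over the whole interval: Delta_x / Delta_t
avg_v = (x(10) - x(0)) / (10 - 0)
print(f"average velocity = {avg_v} m/s")          # 20.0 m/s

# Instantaneous velocity at t = 10 as the limit of Delta_x / Delta_t
# for smaller and smaller Delta_t (this limit is what dx/dt denotes).
for dt in (1.0, 0.1, 0.001):
    v_approx = (x(10) - x(10 - dt)) / dt
    print(f"dt = {dt:<6} Delta_x/Delta_t = {v_approx:.4f} m/s")   # -> 50 m/s

# The same procedure around t = 0 gives ratios approaching 0 m/s.
```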
 
yuiop said:
The initial instantaneous velocity of the particle is:

\frac{dx_1}{dt_1} = 0 m/s

The final instantaneous velocity of the particle is:

\frac{dx_2}{dt_2} = 50 m/s
I would never write it like that. First of all, we have

x_1=x(t_1)

This is a specific number in the range of the function x. So an expression of the form dx_1/d(something) doesn't make sense. We want to take the derivative of the function x, not the number x_1; the derivative of a number isn't even defined. I would write the velocity at time t_1 as x'(t_1). I often use "D" for "the derivative of", so Dx=x', and therefore x'(t_1)=Dx(t_1). Note that "D" is an operator that takes a function to a function. Which variable symbol we use is completely irrelevant. For that reason, I hate seeing that operator written as D_t or d/dt. I will sometimes write D_t, as in x'(t)=D_tx, but when I do, it's a functional that takes a function to a number, e.g. x to x'(t).

I only use the d/dt notation in expressions like

\frac{d}{dt}\frac{at^2}{2}

where the t in the denominator adds essential information, in this case that the function we're taking the derivative of is the map

t\mapsto\frac{at^2}{2} (derivative: z\mapsto az)

rather than e.g.

a\mapsto\frac{at^2}{2} (derivative: z\mapsto t^2/2)

or

t^2\mapsto\frac{at^2}{2} (derivative: z\mapsto a/2)

or even

z\mapsto\frac{at^2}{2} (derivative: z\mapsto 0)

If I want to use that notation and specify a point in the domain of the function, I'll write something like this:

x'(t_1)=\frac{d}{dt}\bigg|_{t_1}x(t)

I don't think it's horrible to write

\frac{dx(t_1)}{dt_1}=x'(t_1)

but I don't like it, because it uses a constant symbol as a variable symbol.
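As an aside, the distinction being made here (which symbol the d/d(...) refers to) can be illustrated with a short sketch; the use of sympy and the variable names are my own choices:

```python
# The expression a*t**2/2 has different derivatives depending on which
# symbol d/d(...) refers to, which is exactly the information that the
# "t" in d/dt supplies.
import sympy as sp

a, t = sp.symbols('a t')
expr = a * t**2 / 2

print(sp.diff(expr, t))   # a*t     : derivative of the map t |-> a*t^2/2
print(sp.diff(expr, a))   # t**2/2  : derivative of the map a |-> a*t^2/2

# Evaluating the derivative at a specific point t_1 = 10 (the d/dt|_{t_1}
# notation in the post): first differentiate the function, then
# substitute the number.
t1 = 10
print(sp.diff(expr, t).subs(t, t1))   # 10*a
```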
 
Simply speaking, dx is a special limiting case of \Delta x in the limit where x_{i} is very, very, very, ... (an infinite chain of "very"s!) close to x_{i-1}.
Note that, strictly speaking, we define the forward difference operator by
\Delta x=x_{i+1}-x_{i}
and the backward difference operator by
\nabla x=x_{i}-x_{i-1}
There is also a mean difference operator.
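A minimal sketch of these two difference operators applied to a sampled sequence (the sample values are my own example):

```python
# Forward and backward differences of a sampled sequence x_0, ..., x_n.

x = [0.0, 1.0, 4.0, 9.0, 16.0]          # example samples x_i = i^2

forward  = [x[i + 1] - x[i] for i in range(len(x) - 1)]   # Delta x_i = x_{i+1} - x_i
backward = [x[i] - x[i - 1] for i in range(1, len(x))]    # nabla x_i = x_i - x_{i-1}

print(forward)   # [1.0, 3.0, 5.0, 7.0]
print(backward)  # [1.0, 3.0, 5.0, 7.0]  (same numbers, attached one index later)
```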
There are so many notations in differential calculus, and most of them have remained a source of confusion and debate. This is particularly true of dy/dx. Some regard it as (d/dx)y = Dy, treating d/dx as a single operator. Others prefer to interpret it (as the first principle of differentiation also suggests) as a quotient of two small numbers, which is historically why Leibniz invented the notation. But beware! There is a pitfall here, as pointed out by HallsofIvy: classically speaking, differentials do not represent any particular number. In fact, as Courant says in his classic, no rigorous definition can be assigned to this interpretation. Hence mathematicians avoided it for quite some time (until around 1960), although it remained popular with most engineers and physicists, who just needed some mechanical device/method for doing (or rather, using) maths. (The same is true of the variational notation \delta, as pointed out by Weinstock, who avoids it in his famous and, ironically, applied math text.)
Now, what happened in 1960? Fortunately, Abraham Robinson laid the foundation of non-standard analysis and proved that what engineers like me have been tempted to use is not as wrong as his fellow mathematicians were fond of declaring!
And hence, thank you Robinson!
 
Well, it's you engineers who keep us mathematicians on our toes! The engineer Joseph Fourier developed a lovely method of solving all sorts of problems (especially differential equations) by writing functions as sums of sines and cosines. He stated that, given a differentiable function, there existed a "Fourier series" that summed to the function and that, given reasonable restrictions on the coefficients, the "Fourier series" always summed to an integrable function.

The first statement was true, the second was NOT! Mathematicians had to work hard to extend the concepts of "integral" and "integrable function" to Lebesgue integration theory so that it would be true!
 
I've seen your post only today (I was busy with my exams). Last night, while preparing for 'Numerical methods and optimization', I chanced upon an interesting relation (which is very relevant here) between D and \Delta in a book.
\Delta \equiv e^{hD} - 1
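A quick numerical sanity check of this operator identity (the test function sin, the point x = 0.3, and the step h = 0.1 are my own example values): since e^{hD} f(x) equals the Taylor sum of h^n f^{(n)}(x)/n!, which is f(x+h), applying (e^{hD} - 1) to f should reproduce the forward difference \Delta f(x) = f(x+h) - f(x).

```python
# Check Delta = e^{hD} - 1 on f(x) = sin(x) at x = 0.3 with h = 0.1.
import math

x, h = 0.3, 0.1

# Forward difference: Delta f(x) = f(x + h) - f(x)
delta_f = math.sin(x + h) - math.sin(x)

# Truncated operator series: (e^{hD} - 1) f(x) = sum_{n>=1} h^n/n! * f^(n)(x).
# The derivatives of sin cycle through cos, -sin, -cos, sin, ...
derivs = [math.cos(x), -math.sin(x), -math.cos(x), math.sin(x)]
series = sum((h**n / math.factorial(n)) * derivs[(n - 1) % 4]
             for n in range(1, 12))

print(delta_f, series)   # the two numbers agree to machine precision
```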

And about Fourier: people of that era were multifaceted. Later on, departmentalization became too sharp. But now, fortunately (as I see it), it is receding fast!
 