Leibniz Notation

1. Dec 20, 2004

Moose352

I'm a bit confused by the Leibniz notation for the derivative, i.e. dy/dx. I've been told that the symbol is not a fraction and can't be split, but I've also seen it split for differentials and the chain rule. Can someone concisely explain what all of it means?

2. Dec 20, 2004

daster

I read this somewhere on these forums a while back:
"It is not a fraction, but can be treated as one in some cases."

3. Dec 21, 2004

NeutronStar

It actually is a fraction in the sense of a limit,...

$$\frac{dy}{dx}=f'(x)\equiv \lim_{\substack{\Delta x\rightarrow 0}} \frac{f(x+\Delta x)-f(x)}{\Delta x}$$

So in a very real sense we can think of it as $dy \doteq f(x+\Delta x)-f(x)$ (i.e. the infinitesimal change in y), and $dx \doteq \Delta x$ (i.e. the infinitesimal change in x), where the dot over the equals sign indicates that these quantities are not technically equivalent but have a strong relationship.

We only need to keep in mind that this ratio is defined within the framework of the concept of a limit. This is why we can't just claim that it is a normal fraction outright.

It seems to me though, that it should be a legitimate fraction because it is indeed a ratio between $dy$ and $dx$ which can be thought of as actual values albeit arbitrarily small ones.

We only need to be careful that all of the criteria have been satisfied with respect to the formal definitions of a limit and a derivative. But once those criteria have been met this does indeed represent a rate of change, (i.e. a ratio or fraction) by the very definition of a derivative.

Recall that I said above that we can think of it as $dy \doteq f(x+\Delta x)-f(x)$, and $dx \doteq \Delta x$.

But notice that if we do this then $dy$ is dependent on the value of $\Delta x$, and simultaneously we are thinking in terms of $dx \doteq \Delta x$.

Therefore the so-called fraction $\frac {dy}{dx}$ would have a numerator that is dependent on the value of the denominator. So hopefully you can see that this isn't a normal situation for most fractions, and this is why we can't think of it as a normal everyday fraction. We can, however, get away with using it algebraically like a fraction in many situations. In fact, we do this more often than not.
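This limit picture is easy to see numerically: the difference quotient (the "fraction" before the limit is taken) approaches the derivative as $\Delta x$ shrinks, and its numerator visibly depends on the denominator. A quick sketch in Python (the function $f(x)=x^2$ at $x=3$ is an assumed example, not from the thread):

```python
def difference_quotient(f, x, dx):
    # dy = f(x + dx) - f(x): the numerator depends on dx,
    # which is exactly why dy/dx is not a "normal" fraction
    return (f(x + dx) - f(x)) / dx

f = lambda t: t**2  # f'(x) = 2x, so f'(3) = 6
for dx in (0.1, 0.001, 0.000001):
    print(dx, difference_quotient(f, 3.0, dx))  # ratios approach 6
```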

Last edited: Dec 21, 2004
4. Dec 21, 2004

dextercioby

The only problems with the Leibniz notation come up when making a change of variable while calculating the antiderivative of a function.
Define the differential of a function f(x) (of one variable) through:
$$df =: f'(x)\,dx \qquad (1)$$
where "dx" is called the differential of the variable "x" and can be seen as an infinitesimal variation in "x". You cannot equate it with 0.
"f'(x)", the derivative of the function f(x), is defined by a process involving a limit:
$$f'(x)=:\lim_{\epsilon\rightarrow 0} \frac{f(x+\epsilon)-f(x)}{\epsilon}$$
As you can see, by taking (1) as the definition of the differential, it turns out that you can express the derivative of a function as a mere ratio of differentials. And 'ratio' is understood algebraically. This definition is very useful, as it allows a definition of antidifferentiation:
$$f(x)+C =: \int df \stackrel{(1)}{=} \int f'(x)\,dx$$

That's why 'dx' should never be missing when expressing an antidifferential/antiderivative. From (1), you can express the derivative as a ratio of two differentials, one of the function and one of the variable:
$$f'(x)=\frac{df}{dx}$$ (2)

The fact that the derivative can be seen as a regular/normal ratio of differentials allows, for example, integrating differential equations through the method of separation of variables. E.g.
$$y'(x)=u_{1}(x) u_{2}(y)$$
where $u_{1}(x),u_{2}(y)$ are arbitrary functions. Using (2), you can write:
$$\frac{dy}{dx}=u_{1}(x) u_{2}(y) \Rightarrow \frac{dy}{u_{2}(y)}=u_{1}(x) dx \Rightarrow \int \frac{dy}{u_{2}(y)}=\int u_{1}(x) dx$$
which gives the solution to the equation.
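This separation recipe can be checked symbolically; sympy's `dsolve` carries out exactly this manipulation internally. (A sketch; the ODE $y'=x\,y$, i.e. $u_{1}(x)=x$ and $u_{2}(y)=y$, is an assumed example, not from the thread.)

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Separable ODE y'(x) = u1(x) * u2(y) with u1(x) = x, u2(y) = y:
# dy/y = x dx  =>  ln y = x**2/2 + C  =>  y = C1 * exp(x**2/2)
ode = sp.Eq(y(x).diff(x), x * y(x))
sol = sp.dsolve(ode, y(x))
print(sol)  # Eq(y(x), C1*exp(x**2/2))
```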

Daniel.

5. Dec 21, 2004

dextercioby

The notation due to Leibniz is very useful. Consider the expression:
$$y(x)=y(u(x))\Rightarrow y'(x)=y'(u(x)) u'(x)$$
It's the chain rule for a function of one variable. Proving it directly would be horrible, all those limits... :yuck: Using differentials, it's immediate:
$$y(x)=y(u(x))\Rightarrow \frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$
which becomes obvious, since you can "simplify" through "du" (actually you both multiply and divide by it). This multiplication and division comes in handy for more complicated situations:
$$y(x)=y(u_{1}(u_{2}(u_{3}(...(u_{n}(x))...))))\Rightarrow \frac{dy}{dx}=\frac{dy}{du_{1}}\frac{du_{1}}{du_{2}}\frac{du_{2}}{du_{3}}...\frac{du_{n-1}}{du_{n}}\frac{du_{n}}{dx}$$
This Leibniz rule can be extended to multivariable functions, using the notation of Jacobi (the "rounded d", $\partial$). For example, for $f$ depending on $x$ through $r,\theta,\phi$:
$$\frac{\partial f}{\partial x}=\frac{\partial f}{\partial r}\frac{\partial r}{\partial x}+\frac{\partial f}{\partial \theta}\frac{\partial \theta}{\partial x}+\frac{\partial f}{\partial \phi}\frac{\partial \phi}{\partial x}$$
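The same bookkeeping can be verified symbolically. (A sketch in two variables rather than three: polar coordinates $r,\theta$ and the sample field $f=r^{2}\cos\theta$ are assumed examples, not from the thread.)

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r_s, t_s = sp.symbols('r theta', positive=True)

# Polar coordinates in terms of x, y (valid in the quadrant x, y > 0)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan(y / x)

f = r_s**2 * sp.cos(t_s)  # a sample field given in polar coordinates

# Chain rule: df/dx = (df/dr)(dr/dx) + (df/dtheta)(dtheta/dx)
chain = (sp.diff(f, r_s).subs({r_s: r, t_s: theta}) * sp.diff(r, x)
         + sp.diff(f, t_s).subs({r_s: r, t_s: theta}) * sp.diff(theta, x))

# Direct computation: substitute first, then differentiate
direct = sp.diff(f.subs({r_s: r, t_s: theta}), x)

print(sp.simplify(chain - direct))  # the two agree
```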

Daniel.

6. Dec 21, 2004

quasar987

So far, only "arguments" in favor of dy/dx being a fraction have been presented. Are there cases where dy/dx cannot be considered a fraction? Personally, I don't think so...

I remember quite clearly that my first calculus teacher told a student that dy/dx could not be considered a fraction... I think it was in the context of a student applying the chain rule like this

$$y(x)=y(u(x))\Rightarrow \frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$

and simplifying the du right away. This, of course, leads nowhere. That's probably what led the teacher to say you couldn't do that, because the expression dy/dx could not be considered a fraction. But like I said, I don't think that's true.

7. Dec 21, 2004

arildno

Chew on this delightful identity then:
$$\frac{\partial{x}}{\partial{y}}\frac{\partial{y}}{\partial{z}}\frac{\partial{z}}{\partial{x}}=-1$$

8. Dec 21, 2004

arildno

This is of course ultra-sleazy, since I suppressed the fact that this equation only holds when the variables (x,y,z) are related to each other by a CONSTRAINT, say
G(x,y,z)=0.

To see why it is true then, we take the example:
ax+by+cz=0, where a,b,c are non-zero constants.

We may then solve for one of the variables with respect to the other two:
$$x=X(y,z)=-\frac{b}{a}y-\frac{c}{a}z\to\frac{\partial{X}}{\partial{y}}=-\frac{b}{a}$$
$$y=Y(x,z)=-\frac{a}{b}x-\frac{c}{b}z\to\frac{\partial{Y}}{\partial{z}}=-\frac{c}{b}$$
$$z=Z(x,y)=-\frac{a}{c}x-\frac{b}{c}y\to\frac{\partial{Z}}{\partial{x}}=-\frac{a}{c}$$
Hence,
$$\frac{\partial{X}}{\partial{y}}\frac{\partial{Y}}{\partial{z}}\frac{\partial{Z}}{\partial{x}}=(-\frac{b}{a})(-\frac{c}{b})(-\frac{a}{c})=-1$$
as indicated.
As long as we may use the implicit function theorem on G(x,y,z)=0, the surprising identity holds for general G.
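The same check goes through for a nonlinear constraint, which sympy can replay symbolically. (A sketch; the constraint $xyz=1$ is an assumed example, not from the thread.)

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# Constraint G(x, y, z) = x*y*z - 1 = 0, solved for each variable in turn
X = 1 / (y * z)   # x = X(y, z)
Y = 1 / (x * z)   # y = Y(x, z)
Z = 1 / (x * y)   # z = Z(x, y)

product = sp.diff(X, y) * sp.diff(Y, z) * sp.diff(Z, x)

# Evaluate on the constraint surface by eliminating z via z = 1/(x*y)
on_surface = product.subs(z, 1 / (x * y))
print(sp.simplify(on_surface))  # -1, as the identity predicts
```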

Last edited: Dec 21, 2004
9. Dec 21, 2004

Hurkyl

Staff Emeritus
These sorts of algebraic manipulations with "infinitesimals" are fine for simple things like rational functions, though we already see how treacherous the terrain is, as you've overlooked the fact that the value of:

$$\frac{f(x + \epsilon) - f(x)}{\epsilon}$$

is a function of both x and $\epsilon$; in other words, the value of this "derivative" depends on your choice of infinitesimal.

Secondly, it is difficult to make sense of most functions when you try to use infinitesimals. For instance, what could $\sin (x + \epsilon)$ or $e^{\epsilon}$ mean? It's even more problematic than you think, since the trig and exponential functions are usually defined via an infinite series, but the usual forms of those are more or less useless when infinitesimals are involved!

Even worse, consider this function, one of the standard "weird" examples of calculus:

f(x) = 0 if x is irrational
f(x) = 1/q if x=p/q is rational, where p/q is in lowest terms
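For concreteness, this function is easy to write down on the rationals (a sketch using Python's `fractions`; extending it to irrational or infinitesimal arguments is exactly the hard part):

```python
from fractions import Fraction

def weird(x: Fraction) -> Fraction:
    # For rational x = p/q in lowest terms, f(x) = 1/q.
    # Fraction always stores p/q in lowest terms, so .denominator
    # is exactly the q in the definition.
    return Fraction(1, x.denominator)

print(weird(Fraction(3, 6)))   # 3/6 = 1/2 in lowest terms, so f = 1/2
print(weird(Fraction(22, 7)))  # f = 1/7
# Irrational x (where f(x) = 0) cannot even be represented here,
# which already hints at the difficulty of extending the domain.
```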

How could you possibly extend this function sensibly to a domain with infinitesimal values, let alone use that to prove it's differentiable precisely when x is irrational?

All this can be done (see nonstandard analysis) but it requires a bit of "magic" with formal logic to do right. For a (very) basic introduction to doing ordinary calculus with infinitesimals, see:

http://www.math.wisc.edu/~keisler/calc.html

Last edited: Dec 21, 2004
10. Dec 21, 2004

NeutronStar

Is that technically correct though?

Shouldn't it be,...

$$y(x)=y(u(x))\Rightarrow \frac{d}{du}\left[ y(u(x))\right] \cdot \frac{du}{dx}= \frac{dy(u(x))}{dx} \not = \frac {dy(x)}{dx} = \frac{dy}{dx}$$

I've haven't worked out a concrete example, but I think the question would come down to asking if the following is equal,...

$$\frac{d}{dx}\left[ y(x)\right]=\frac{d}{dx}\left[ y(u(x))\right] ?$$

In other words, does,...

$$\frac{dy(x)}{dx}=\frac{dy(u(x))}{dx} ?$$

This is probably more a case of sloppy shorthand notation than anything to do with fractions.

Of course if $y(x)=y(u(x))$ then $u(x)$ isn't much of a function is it?

11. Dec 21, 2004

dextercioby

Okay, genius:
$$\frac{dy(u(x))}{dx}=\frac{dy}{du}\frac{du}{dx}$$

$$y(u)=u^{2};u(x)=x^{2}\Rightarrow y(x)=y(u(x))=x^{4}$$
$$\frac{dy}{dx}=4x^{3}$$
$$\frac{dy}{du}=2u=2x^{2}$$
$$\frac{du}{dx}=2x$$
$$\frac{dy}{du}\frac{du}{dx}=4x^{3}=\frac{dy}{dx}$$
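These numbers can be confirmed mechanically with sympy (a sketch of the same worked example, $y(u)=u^2$, $u(x)=x^2$):

```python
import sympy as sp

x, u = sp.symbols('x u')
y_of_u = u**2   # y(u) = u^2
u_of_x = x**2   # u(x) = x^2

dy_du = sp.diff(y_of_u, u).subs(u, u_of_x)   # 2u = 2x**2
du_dx = sp.diff(u_of_x, x)                   # 2x
lhs = sp.diff(y_of_u.subs(u, u_of_x), x)     # d/dx of x**4

print(lhs)                  # 4*x**3
print(dy_du * du_dx - lhs)  # the chain-rule product matches dy/dx
```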

I'll let you work out something simpler:
$$y(u)=\sin u;u(x)=\arcsin x$$

Daniel.

PS. Gottfried Wilhelm Leibniz would be rolling in his grave over this... :tongue2:

12. Dec 21, 2004

matt grime

And in this case y(x) isn't equal to y(u(x)), so your point is what? NeutronStar has a valid point that you seem to have overlooked.

13. Dec 21, 2004

mathwonk

leibniz really was a genius. almost anything plausible you can say about his differentials is true and even justifiable in some way.

in manifold theory, a symbol like du or dy or df, refers to the differential of the function u,y,or f. it is an object that assigns to each point p, a linear function on the tangent space at p, i.e. the dual vector du(p) that maps a tangent vector v at p to the directional derivative of the function u in the direction v.

anyway, if the manifold is only one dimensional, then the tangent space at each point p, and also the dual space, is one dimensional. hence any two elements of that one dimensional space define a scalar, unless they are both zero. i.e. given two elements du(p) and dy(p), if say du(p) is not zero, there is a unique scalar c such that c du(p) = dy(p). we call this scalar c = (dy/du)(p).

In case u happens to be invertible, and hence defines a local coordinate system on the manifold near p, the number c equals the usual derivative of the composition y composed with u^(-1), which composition is a real valued function of a real variable.

so it does make good sense in this case (the one dimensional case) to not only define du as an independent object but also to divide two such objects. in higher dimensions one can define du the same way, but not divide by it, since two vectors in a higher dimensional space do not tend to be scalar multiples of each other.

14. Dec 21, 2004

dextercioby

Which case? Are you reading too much into what I've written above? I've shown him (and you, 'implicitly' (a key word)) an example where the chain rule works. In that case $y=y(x)$ is just a curve in the 2-dimensional plane. If I make a reparametrization
$$y=y(t);x=x(t)$$
,wouldn't the curves $y=y(x)$ and $y=y(t(x)) [/tex] be identical????????????????????????????????????.And the tangent lines would be the same,right????????? Daniel. PS.I assumed both parametrizations as diffeomorphisms. 15. Dec 21, 2004 matt grime No, the point is that if y(x)=y(u(x)) then u(x)=x, at least locally anyway assuming all manner of things we usually do in analysis, or it's periodic or something equally restrictive. Ie this isn't how the question should have been phrased. 16. Dec 21, 2004 dextercioby I can only say that implicite variable dependence of arbitrary (smooth) functions is not among your favorites...I can tell u haven't worked with Lagrange functions in your lifetime... :tongue2: I had in mind the famous diagram: $$A\rightarrow B\rightarrow C$$ where from A(where is the element "x") to B u get by the [itex] C^{\infty} (A)$ invertible function u(x) and from B to C u get by the $C^{\infty} (B)$ invertible function y(u).Then y(x) is nothing that the mapping of A and into C,not necessarily one to one.
Your dilemma assumes A=B and surjectivity of the function y. As you can see, things can be different, and most commonly are...

Daniel.

EDIT: I see you edited and completed your post. That changes nothing in what I had to say.

Last edited: Dec 21, 2004
17. Dec 21, 2004

matt grime

No, I stopped being an analyst long ago, so I don't care for any of those things. My point was not to say anything about my opinion of the question but to illustrate what Neutronstar was (I think) getting at, that it is more common to see

z(x) = y(u(x)) then we can find dz/dx by....

A quote from Tom Koerner: "After all, we are doing analysis because we aren't clever enough to be algebraists." He said this while making a very easy subject (representations of finite abelian groups) seem very hard (as a generalization of Fourier analysis).

18. Jan 4, 2005

danne89

Hey! How about the second derivative in this notation? Is this right:
$$\frac{d(dy/dx)}{dx} = \frac{ \frac{d(dy)}{d(dx)}}{dx} =\frac{d^2y}{d^2y} \frac{1}{dx}= \frac{d^2y}{d^2dx^2}$$

19. Jan 4, 2005

dextercioby

No. It should be:
$$\frac{d}{dx}\frac{dy}{dx}=\frac{d^{2}y}{dx^{2}}$$

The n-th order derivative with respect to "x" of the function "y", in the notation of Joseph Louis Lagrange, is
$$y^{(n)}(x)$$
In the notation of Gottfried Wilhelm Leibniz the same "animal" is
$$\frac{d^{n}y(x)}{dx^{n}}$$
and it can be seen as applying the operator $\frac{d}{dx}$ "n" times to the function "y(x)".

Daniel.

20. Jan 4, 2005

danne89

Hmm. I've a hard time understanding this. Can you point me in right direction?

21. Jan 4, 2005

dextercioby

Okay, let's assume for simplicity that our functions depend on only one variable and are infinitely differentiable on an arbitrary interval.
Then we adopt Lagrange's notation and say that the first-order derivative of the function called y(x) is $y'(x)$. We can look at this as applying a linear operator called the derivative, which we'll denote by "'" (prime). So applying this operator to the function y(x), we get the derivative of the function "y(x)": $y'(x)$. The same operator (called the 'derivative') is denoted by G.W. Leibniz as $\frac{d}{dx}$. Since the result is the same (namely the first derivative of "y" with respect to "x") no matter the notation, we can say
$$y'(x)=\frac{d}{dx}y(x)=\frac{dy(x)}{dx}$$

For the second derivative, you apply this operator to the first derivative and you get, in the Lagrange notation:
$$[y'(x)]'=y''(x)=y^{(2)}(x)$$
and in the Leibniz notation:
$$\frac{d}{dx}[\frac{dy(x)}{dx}]=\frac{d^{2}y(x)}{dx^{2}}$$

You equate the results, since they represent the same thing: the second-order derivative of the function "y(x)" with respect to "x".
$$y^{(2)}(x)=\frac{d^{2}y(x)}{dx^{2}}$$

And you do the same for every order of the derivative. For order "n":
$$y^{(n)}(x)=\frac{d^{n}y(x)}{dx^{n}}$$
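The "apply $\frac{d}{dx}$ n times" reading is literal, which a short sympy check makes explicit (a sketch; $y=\sin x$ and $n=4$ are assumed examples, not from the thread):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sin(x)   # sample function
n = 4

# Leibniz's d^n y / dx^n: apply the operator d/dx exactly n times
repeated = y
for _ in range(n):
    repeated = sp.diff(repeated, x)

direct = sp.diff(y, x, n)   # sympy's one-call n-th derivative

print(repeated)  # sin(x): the fourth derivative of sin is sin again
print(repeated == direct)
```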

Daniel.

Last edited: Jan 4, 2005
22. Feb 6, 2012

Calculuser

"dx" is inifinitesimal but I'm confused that it's dx>0 or dx<0 ??

23. Feb 6, 2012

chiro

dx is usually considered a positive quantity because we are considering the tangent as the function changes as a result of a positive change (i.e. Δx > 0, which means dx > 0).

You could make dx negative, but by doing that you have to adjust the other definitions, even if you just want to consider how the tangent changes when you are going 'backwards' (dx < 0).

In numerical calculus we do have situations where we consider these kinds of things, but for normal calculus it's almost always assumed that dx is an infinitesimal quantity with dx > 0.

24. Feb 6, 2012

Calculuser

Okay, thanks.

But in this notation, $\int^{b}_{a}{f(x)\,dx}$, I know it means the area under the curve on the interval [a,b]. What if I take dx<0, how can I explain it? I mean, I can take the limit of $\frac{\Delta y}{\Delta x}$ as $\Delta x$ approaches zero from either side ($\Delta x \to 0^{+}$ and $\Delta x \to 0^{-}$), so I see how dx>0 or dx<0 works for the derivative of a function, but I'm not sure about the integral.

25. Feb 6, 2012

lavinia

A differentiable function can be approximated near a point by the first-order part of its Taylor series:

$$y \approx y_0 + y'\,dx$$

This approximation gets increasingly accurate as dx approaches zero.

So the ratio $(y-y_0)/dx$ is arbitrarily close to $y'$.

I think of dy as the change in y for a small change dx in x. In physics books you will find this way of looking at it as well.
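Numerically, the first-order approximation behaves exactly as described: the error shrinks as dx does, and the ratio (y − y0)/dx closes in on y′. (A sketch; $y=e^x$ at $x_0=0$, where $y_0 = y'(x_0) = 1$, is an assumed example, not from the thread.)

```python
import math

x0 = 0.0
y0 = math.exp(x0)       # y(x0) = 1
yprime = math.exp(x0)   # y'(x0) = 1, since (e^x)' = e^x

for dx in (0.1, 0.01, 0.001):
    exact = math.exp(x0 + dx)
    approx = y0 + yprime * dx    # y ~ y0 + y' * dx
    ratio = (exact - y0) / dx    # gets arbitrarily close to y'
    print(dx, exact - approx, ratio)
```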