# Leibniz notation

I have a question I should have asked a LONG time ago.

When we see this notation being used in such formulas as i=dq/dt (definition of current) or dy/dx (etc., etc.), are we saying (in the case of current) that current is equal to the change in charge with respect to time? Or is it "current is equal to the change in charge with respect to the change in time"?

What if, hypothetically, the definition of current were written as i=d/dt. Does this mean "current is equal to the change in time"?

I have always had issues reading/saying/comprehending Leibniz notation. I know the calculus (meaning solving the problem), I just need to conceptualize what it represents. I hate solving a problem and NOT knowing what I'm doing. :(

Any help is highly appreciated.

## Answers and Replies

jhae2.718
Gold Member
I have a question I should have asked a LONG time ago.

When we see this notation being used in such formulas as i=dq/dt (definition of current) or dy/dx (etc., etc.), are we saying (in the case of current) that current is equal to the change in charge with respect to time? Or is it "current is equal to the change in charge with respect to the change in time"?
Current is equal to the change in charge w.r.t. time. A derivative is the instantaneous rate of change.

What if, hypothetically, the definition of current were written as i=d/dt. Does this mean "current is equal to the change in time"?

This has no meaning, as you are equating a value with an operator.
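To make "instantaneous rate of change" concrete, here is a small numerical sketch (my own example; the charge function q(t) = sin t is made up for illustration). A difference quotient with a tiny Δt gets very close to the exact derivative:

```python
import math

def q(t):
    # Hypothetical charge in coulombs at time t: q(t) = sin(t).
    return math.sin(t)

def current(t, h=1e-6):
    # Approximate i = dq/dt as the change in charge over a tiny change in time
    # (symmetric difference quotient).
    return (q(t + h) - q(t - h)) / (2 * h)

# The exact derivative of sin is cos, so current(t) should be close to cos(t).
print(current(1.0), math.cos(1.0))
```

Shrinking h further (up to floating-point limits) drives the quotient toward the exact limit; that limiting process is what "with respect to time" is shorthand for.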

Stephen Tashi
I have a question I should have asked a LONG time ago.

When we see this notation being used in such formulas as i=dq/dt (definition of current) or dy/dx (etc., etc.), are we saying (in the case of current) that current is equal to the change in charge with respect to time? Or is it "current is equal to the change in charge with respect to the change in time"?

You raise a good point about the sloppiness of language. For derivatives, when we say "with respect to time", we actually refer to a process where we take the limit of a change in something with respect to a change in time. So the "change in time" is implicit in saying "with respect to time". For integration, "with respect to time" has a different, specialized meaning.

In physics, there can be formulas that have expressions like dQ/T and nobody calls that the change in Q with respect to T.

What if, hypothetically, the definition of current were written as i=d/dt. Does this mean "current is equal to the change in time"?

I don't think it would be interpreted that way. i = dt might be.

I have always had issues reading/saying/comprehending Leibniz notation. I know the calculus (meaning solving the problem), I just need to conceptualize what it represents. I hate solving a problem and NOT knowing what I'm doing. :(

All I know to do is to interpret dy/dx as a derivative as long as the dy's and dx's stay nicely in a fraction. When they start to come apart, like dx sin(y) + (1-x)dy = 0, I just have to pretend that this indicates a pattern for making a numerical approximation that is more and more accurate as we make dx and dy smaller. Or I suppose you could pretend that both x and y are functions of some third variable t and that dx is a stand-in for dx/dt and that dy is a stand-in for dy/dt.

So far, the only logically rigorous way to treat dx and dy as "infinitesimals" is by using an axiom system known as "nonstandard analysis", which is quite abstract.

You should not neglect the Leibniz notation, however. It is useful for intuitive reasoning.
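The "pattern for a numerical approximation" reading above can be made concrete in a few lines of Python (the starting point, initial condition y(0) = 1, and step count are my own choices). Treat dx sin(y) + (1-x)dy = 0 as a stepping recipe: solve for dy = sin(y)/(x-1) dx and take many tiny steps.

```python
import math

# Read dx*sin(y) + (1 - x)*dy = 0 as a stepping recipe: solve for
# dy = sin(y)/(x - 1) * dx and take many tiny steps (forward Euler).
def slope(x, y):
    return math.sin(y) / (x - 1)

def euler(y0, x0, x1, n):
    # Approximate y(x1) given y(x0) = y0 using n small steps dx.
    dx = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += slope(x, y) * dx    # dy = slope * dx
        x += dx
    return y

# For comparison: the exact separable solution through y(0) = 1 is
# y(x) = 2*atan(tan(1/2) * (1 - x))  (my own working, via csc(y) dy = dx/(x-1)).
approx = euler(1.0, 0.0, 0.5, 10000)
exact = 2 * math.atan(math.tan(0.5) * (1 - 0.5))
print(approx, exact)
```

The approximation gets more accurate as dx shrinks, which is exactly the sense in which the separated differentials "indicate a pattern".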

Mark44
Mentor
d/dt is the operator for differentiation with respect to t (usually time). Being an operator, it has to have something to operate on, just as the square root symbol has to have something under it.

It's as much an error to say d/dt = <whatever> as it is to say $\sqrt{} = 2$.

Thank you. It's making a little more sense. I have always found the 'prime' notation to be much easier to interpret, but I guess with physics and differential equations the Leibniz notation is just more specific.

For math problems, if we were to see something like (dy/dx)(x+y)=(some constant), would that be the same as saying x'+y'=some constant? (Not sure if this made-up problem makes any sense.)

Stephen Tashi
if we were to see something like (dy/dx)(x+y)=(some constant), would that be the same as saying x'+y'=some constant?

No.

If we say y = f(x) then (dy/dx)(x + y) = 5 would mean f'(x) (x + f(x)) = 5.

1. A function is a mathematical machine. It consists of three things: a set consisting of the function's inputs (called the domain), a set containing its outputs (called the codomain), and a rule which associates each input to one, and only one, output. Example: $f : \mathbb{R} \rightarrow \mathbb{R}, f(x) = 5x$, this means the inputs are all of the real numbers, and so are the outputs, and the rule is "multiply the input by five". The output of a function, $f$, corresponding to a particular input, $x$, is written $f(x)$ and called "the value of $f$ at $x$."

2. Suppose the codomain (outputs) of a function $f$ is the real numbers, and its domain (inputs) some subset of the real numbers. The derivative of $f$ at (the input) $x$ is the limit

$$\lim_{h\rightarrow0} \frac{f(x+h)-f(x)}{h}$$

if it exists. (If not, we say that $f$ is not differentiable at $x$.)
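You can watch that limit converge numerically. A quick sketch (my own, for f(x) = x²): the difference quotient ((x+h)² - x²)/h simplifies algebraically to 2x + h, so as h shrinks it approaches f'(x) = 2x.

```python
def f(x):
    return x * x

x0 = 3.0
for h in (0.1, 0.01, 0.001, 0.0001):
    # The quotient ((3+h)^2 - 9)/h equals 6 + h exactly,
    # so it approaches f'(3) = 6 as h shrinks.
    print(h, (f(x0 + h) - f(x0)) / h)
```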

3. Let's define a function $D$, the derivative operator. Its codomain (outputs) is the set of functions like $f$, that is, functions whose codomain is the real numbers, and whose domain some subset of the real numbers. The domain of $D$ is the set of differentiable functions. So, given a particular input, $f$, the output, $D(f) = f'$ is a new function whose inputs and outputs, like those of $f$, are real numbers. This new function is defined, of course, by the rule

$$[D(f)](x) = f'(x) = \lim_{h\rightarrow0} \frac{f(x+h)-f(x)}{h}$$

We say that the derivative of $f$ at $x$ is $[D(f)](x) = f'(x).$ (I put square brackets around the $D(f)$ just to stress that that's one whole function whose input is $x$; all the other brackets here mean "what goes inside here is an input".)

Example: $f:\mathbb{R} \rightarrow \mathbb{R}, f(x) = x^2$, and $D(f):\mathbb{R} \rightarrow \mathbb{R}, [D(f)](t) = f'(t) = 2t$. Actually, since $x$ can be any number, and $t$ can be any number, it doesn't matter what letter we use to represent them.

"Operator" is just a name often used for a function whose inputs and outputs are other functions. It's conventional not to write brackets around the input of an operator, thus $Df$ for the derivative of $f$ and $Df(x)$ for its value at $x$.
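In programming terms, an operator of this kind is just a higher-order function: it takes a function in and hands a function back. A Python sketch (my own, using a symmetric difference quotient as a numerical stand-in for the exact limit):

```python
def D(f, h=1e-6):
    # The derivative operator: input is a function f, output is a new
    # function approximating f' via a symmetric difference quotient.
    def f_prime(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return f_prime

def square(x):
    return x * x

Dsquare = D(square)     # D(square) is itself a function...
print(Dsquare(5.0))     # ...whose value at x approximates 2x; here about 10
```

Note the two levels: D(square) is a whole function, and D(square)(5.0) is a number, mirroring the distinction between $D(f)$ and $[D(f)](x)$ above.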

4. Leibniz notation. This is used with several varieties of meaning. Sometimes an author will specify one meaning. Alternatively, it may be used ambiguously, where the author feels it doesn't matter much whether it means the function or a value of the function. One family of usages goes like this

$$\frac{\mathrm{d} f}{\mathrm{d} x} = D(f)$$

$$\frac{\mathrm{d} }{\mathrm{d} x} f = D(f)$$

$$\frac{\mathrm{d} f}{\mathrm{d} x} (x) = [D(f)](x)$$

Another makes the Leibniz derivative operator symbol say "just while it's with me, treat the expression $f(x)$, or a particular rule of a function, as equivalent to the name of the function $f$".

$$\frac{\mathrm{d} f(x)}{\mathrm{d} x} = D(f)$$

$$\frac{\mathrm{d} }{\mathrm{d} x} f(x) = D(f)$$

$$\frac{\mathrm{d} }{\mathrm{d} x} f(x) \bigg|_{x=t} = [D(f)](t)$$

where $t$ stands for some particular numerical value that the variable $x$ can take, for example

$$\frac{\mathrm{d} }{\mathrm{d} x} x^2 \bigg|_{x=3} = 2 \cdot 3 = 6$$

Another practice is to write $y=f(x)$, then

$$\frac{\mathrm{d} y}{\mathrm{d} x} = [D(f)](x)$$

or perhaps

$$\frac{\mathrm{d} y}{\mathrm{d} x} = D(f)$$

It's not always clear which is meant.

5. The chain rule. One application of Leibniz notation is as a mnemonic to make the chain rule look like cancelling of fractions, thus

$$(g \circ f)'(x)=(g' \circ f)(x)\cdot f'(x)$$

$$= g'(f(x)) \cdot f'(x)$$

may be written, less explicitly, as

$$\frac{\mathrm{d} z}{\mathrm{d} x} = \frac{\mathrm{d} z}{\mathrm{d} y} \frac{\mathrm{d} y}{\mathrm{d} x}$$

Note that $z$ on the left denotes the composite function $g \circ f$, while $z$ on the right denotes the outer function $g$.
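The fraction-cancelling mnemonic can be sanity-checked numerically. In this sketch (my own example) the outer function g is sin and the inner function f is squaring, so z = sin(x²):

```python
import math

def D(f, h=1e-6):
    # Numerical derivative operator (symmetric difference quotient).
    def f_prime(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return f_prime

def f(x):            # inner function: y = f(x) = x^2
    return x * x

g = math.sin         # outer function: z = g(y) = sin(y)

def composite(x):    # z as a function of x: (g o f)(x)
    return g(f(x))

x0 = 1.3
dz_dx = D(composite)(x0)     # dz/dx
dz_dy = D(g)(f(x0))          # dz/dy, evaluated at y = f(x0)
dy_dx = D(f)(x0)             # dy/dx
print(dz_dx, dz_dy * dy_dx)  # the two values should agree
```

Note that dz/dy must be evaluated at y = f(x0), not at x0, which is exactly the point the explicit composition notation makes and the fraction notation hides.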

Stephen Tashi
Rasalhauge,

How do you interpret the notation that old books use in stating differential equations?

For example, in one post a forum member asked about

$$y-\sin^2(x) dx + \sin(x) dy = 0$$

and, in later post, claimed that the problem was

$$(y - \sin^2(x) )dx + \sin(x) dy = 0$$

(I understand that these are different problems.) Can we define a correct syntax for such expressions? Or does almost any combination of dx's, dy's, and functions of x and y make a sensible differential equation?

Why would the first equation make sense in the context of an ODE course?

Rasalhauge,

How do you interpret the notation that old books use in stating differential equations? [...] Can we define a correct syntax for such expressions? Or does almost any combination of dx's, dy's, and functions of x and y make a sensible differential equation?

I'm very much not the person to ask about this - at least not yet - as I'm still pretty confused about differential equations. Is the second equivalent to y'(x) = (sin^2(x) - y(x))/sin(x)? If so, and supposing y is an unknown function, and x the independent variable, could we say: dy means y', and dx is a fancy, old-fashioned way of writing the number 1?!

Yes. You could write it like that, but you're not actually doing anything except reinforcing the notion that the differentials can be treated as numbers.

Note the second equation originally popped up in a thread where someone asked about how to think about the equation in the context of linear ODEs. Thus the only reasonable interpretation seems to be within the specific context of exact differential equations, which happen to be more general than linear ODEs.
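For what it's worth, that rearrangement can be sanity-checked numerically. The candidate solution y = (x - sin x + C)/tan(x/2) below is my own working (from the integrating factor tan(x/2) applied to the linear form y' + y/sin(x) = sin(x)), not from the thread; the check compares its difference quotient against the explicit first-order form.

```python
import math

def y(x, C=2.0):
    # Candidate solution of (y - sin^2 x) dx + sin(x) dy = 0, obtained
    # (my own working) via the integrating factor tan(x/2) for the
    # linear form y' + y/sin(x) = sin(x).
    return (x - math.sin(x) + C) / math.tan(x / 2)

def lhs(x, h=1e-6):
    # dy/dx by symmetric difference quotient.
    return (y(x + h) - y(x - h)) / (2 * h)

def rhs(x):
    # The explicit first-order form: y' = (sin^2 x - y)/sin x.
    return (math.sin(x)**2 - y(x)) / math.sin(x)

for x in (0.5, 1.0, 2.0):
    print(x, lhs(x), rhs(x))   # the two columns should agree
```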