# The Pantheon of Derivatives – Part II

#### Generalizations Beyond ##\mathbb{R}## and ##\mathbb{C}##

As mentioned in the section on complex functions (The Pantheon of Derivatives – Part I), the main ingredients of a differentiation process are a norm and a direction. So extending the differentiation concept to normed vector spaces seems the obvious thing to do.

##### Fréchet Derivative

**Definition:** Let ##X## and ##Y## be two Banach spaces, i.e. normed real or complex vector spaces, which are complete as normed topological spaces, and ##U \subseteq X## an open subset. A function

$$

f: (U,||.||_X) \longrightarrow (Y,||.||_Y)

$$

is Fréchet differentiable at ##x_0 \in U## if there is a continuous linear operator ##J : X \rightarrow Y## such that

\begin{equation}\label{FRL}

\lim_{v \to 0} \frac{||f(x_0 + v) - f(x_0) - J(v) ||_Y}{||v||_X} = 0

\end{equation}

The operator ##J## is called the **Fréchet derivative** of ##f## at ##x_0## and is written ##J=Df_{x_0}=Df(x_0)## indicating the dependence of the linear approximation at ##x_0##.

Sometimes ##X\; , \;Y## are only required to be normed vector spaces, but as limits are involved, it is more convenient to require Banach spaces, i.e. complete spaces. The continuity requirement on ##J## is also new here, as it does not hold automatically, and continuous functions are the natural (homo-)morphisms in the category of topological spaces. A closer look at this limit reveals that the same (equivalent) reformulation can be made as in the real case. Therefore we consider (as always)

\begin{equation}\label{Frechet}

\mathbf{f(x_{0}+v)=f(x_{0})+J(v)+r(v)}

\end{equation}

with a term ##r(v)## vanishing faster than linearly, and note that the Fréchet derivative is unique if it exists. This also means that the Fréchet derivative coincides with the usual derivative in finite dimensional spaces, where the linear operator ##J## can be represented by the Jacobian matrix. Whereas in the finite dimensional case all linear operators are continuous and hence Fréchet differentiable, in the infinite dimensional case exactly the bounded linear operators are Fréchet differentiable; unbounded ones are not.
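The defining limit can be checked numerically. The following sketch (my own illustration; the map and the point are arbitrary choices, not taken from any source) computes the Fréchet quotient ##||f(x_0+v)-f(x_0)-J(v)||/||v||## for a smooth map ##f:\mathbb{R}^2 \rightarrow \mathbb{R}^2## with its Jacobian at ##x_0## and watches it tend to zero:

```python
import math

# Fréchet quotient ||f(x0+v) - f(x0) - J(v)|| / ||v|| for a smooth map
# f: R^2 -> R^2; it should tend to 0 as v -> 0, here linearly in ||v||
# since the remainder r(v) is quadratic.

def f(x, y):
    return (x**2 + y, x * y)

X0 = (1.0, 3.0)

def J(v):
    # Jacobian of f at X0 = (1, 3): rows (2, 1) and (3, 1)
    a, b = v
    return (2 * a + b, 3 * a + b)

def frechet_quotient(h):
    v = (h, h)
    fx = f(*X0)
    fxv = f(X0[0] + v[0], X0[1] + v[1])
    Jv = J(v)
    num = math.hypot(fxv[0] - fx[0] - Jv[0], fxv[1] - fx[1] - Jv[1])
    return num / math.hypot(*v)

for h in (1e-1, 1e-3, 1e-5):
    print(h, frechet_quotient(h))
```

For this particular ##f## the remainder is exactly quadratic, so the quotient shrinks proportionally to ##||v||##.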

##### Gâteaux Derivative

#### A – The Directional Derivative

The Gâteaux derivative is likewise a generalization to normed vector spaces, this time of the directional derivative. Let ##f\, : \,\left(X,||.||_X \right) \longrightarrow \left(Y,||.||_Y \right)## be a function on Banach spaces, ##x_0## a point in an open neighborhood ##U \subseteq X##, and ##v## a directional vector in ##(X,||.||_X )##.

Unfortunately, this is where the easy part comes to a halt. I chose ##X,Y## to be Banach spaces for the sake of simplicity. Usually they are only required to be locally convex topological vector spaces. This is already an indicator of the difficulty we will face: the additivity of Gâteaux derivatives.

The English Wikipedia defines (remember that ##df## is the differential, ##J## the derivative)

“*At each point ##x_0 \in U##, the Gâteaux differential defines a function ##df(x_0,.) = J_{x_0}\, : \, X \rightarrow Y## which is homogeneous, i.e. ##J_{x_0}(\alpha\cdot v)=\alpha \cdot J_{x_0}(v).## However, this function need not be additive, so that the Gâteaux derivative may fail to be linear, unlike the Fréchet derivative. Even if linear, it may fail to depend continuously on ##v## if ##X## and ##Y## are infinite dimensional. Furthermore, for Gâteaux derivatives that are linear and continuous in ##v##, there are several inequivalent ways to formulate their continuous differentiability.*”

The German version defines

“*If ##df(x_0,.)## is a continuous, linear functional, i.e. the function ##v \mapsto J_{x_0}(v)## is homogeneous, additive and continuous, then it is called a Gâteaux derivative at ##x_0##.*”

Well, René Gâteaux was a French mathematician, so let's have a look at the French Wikipedia:

“*The Gâteaux derivative of ##f## at ##x_0## in the direction of ##v## is the limit in ##Y## (provided it exists)*”

\begin{equation}\label{Gateaux}

J_{x_0}(v) = \lim_{{t \rightarrow 0}\atop{t\neq 0}} \, \tfrac{f(x_0+tv)-f(x_0)}{t} = \left. \frac{d}{dt}\right|_{t=0} f(x_0+tv)

\end{equation}

“*where the variable ##t## is taken real … The function ##f## is Gâteaux differentiable at ##x_0## if there is a linear, continuous operator ##J_{x_0}: X \rightarrow Y## such that the limit ##J_{x_0}(v)## exists for all ##v \in X##*”

Maybe it’s best to handle it like nLab [18], which links directly to their definition of directional derivatives, or as in a paper from Texas Tech [13], which doesn’t bother with linearity within the definition either. What all versions have in common is the fact that the Gâteaux derivative definition isn’t unique, but it always generalizes the concept of a directional derivative to infinite dimensional normed vector spaces of some kind, with local convexity as the minimal requirement.

#### B – Definitions and Examples

We assume the same conditions as in the previous section. Let ##f:X \rightarrow Y## be a function on Banach spaces, ##x_0 \in U \subseteq X## a point at which we differentiate and ##v \in X## a direction in which we differentiate.

**Definition [Weierstraß]:** A linear function ##J:X \rightarrow Y## such that

\begin{equation}\label{Weierstrass}

\mathbf{f(x_{0}+v)-f(x_{0})=J(v)+r({||v||}_{X})}

\end{equation}

with ##r(t)## vanishing faster than linearly is called the Gâteaux derivative ##J_{x_0}## of ##f## at ##x_0##.

**Definition [Variational Derivative]:** The Gâteaux derivative of ##f## at ##x_0## in the direction of ##v## is the limit in ##Y## with real ##t##

\begin{equation}\label{Var1}

J_{x_0}(f)(v) = \lim_{{t \rightarrow 0}\atop{t\neq 0}} \, \frac{f(x_0+tv)-f(x_0)}{t} = \left. \frac{d}{dt}\right|_{t=0} f(x_0+tv)

\end{equation}

**Second Variation**

\begin{equation}\label{Var2}

d^2\,f(x_0;v) = \left. \frac{d^2}{dt^2}\right|_{t=0} f(x_0+t\cdot v)

\end{equation}

Derivatives of higher orders are defined accordingly. Also derivatives from the left or from the right are sometimes distinguished when dealing with Gâteaux derivatives.
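The variational definition can be made concrete on a function space. As a sketch (my own illustrative example: the functional ##F(u)=\int_0^1 u(x)^2\,dx##, its known Gâteaux derivative ##dF(u;v)=2\int_0^1 u v\,dx##, and a simple trapezoidal discretization are all assumptions of this illustration), the difference quotient in ##t## reproduces the exact variational derivative:

```python
import math

# Gâteaux (variational) derivative of F(u) = ∫_0^1 u(x)^2 dx in direction v:
# dF(u; v) = 2 ∫_0^1 u(x) v(x) dx.  We compare the symmetric difference
# quotient (F(u+tv) - F(u-tv)) / 2t against the exact formula, both
# evaluated with the same trapezoidal rule.

N = 1000
XS = [i / N for i in range(N + 1)]

def integral(values):
    # trapezoidal rule on [0, 1] with N subintervals
    return sum((values[i] + values[i + 1]) / 2 for i in range(N)) / N

def F(u):
    return integral([u(x) ** 2 for x in XS])

def gateaux(F, u, v, t=1e-6):
    up = lambda x: u(x) + t * v(x)
    um = lambda x: u(x) - t * v(x)
    return (F(up) - F(um)) / (2 * t)

u, v = math.sin, math.cos
numeric = gateaux(F, u, v)
exact = 2 * integral([u(x) * v(x) for x in XS])
print(numeric, exact)
```

Because ##F## is quadratic, the symmetric quotient matches the exact derivative up to floating-point rounding.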

**Linearity and Continuity**

Let’s consider the function ##f: \mathbb{R}^2\rightarrow\mathbb{R}##

$$
f(x,y) =
\begin{cases}
\frac{x^3}{x^2+y^2} & \text{if } (x,y) \neq (0,0)\\
0 & \text{if } (x,y) = (0,0)
\end{cases}
$$

Then ##f## is Gâteaux differentiable with the derivative

$$
J_{(0,0)}((a,b)) =
\begin{cases}
\frac{a^3}{a^2+b^2} & \text{if } (a,b) \neq (0,0)\\
0 & \text{if } (a,b) = (0,0)
\end{cases}
$$

at ##(0,0)## in the direction ##(a,b)## according to the variational definition. ##J_{(0,0)}## is even continuous, however, ##f## is not Gâteaux differentiable in the Weierstraß’ sense, because ##J_{(0,0)}## is not linear. Note that it is still homogeneous, i.e. ##J_{(0,0)}(\alpha v)=\alpha J_{(0,0)}(v).##
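These claims are easy to verify directly. The sketch below (the numbers are arbitrary test directions of my choosing) computes the directional limit, and checks homogeneity and the failure of additivity:

```python
# f(x,y) = x³/(x²+y²):  the Gâteaux derivative at (0,0) in direction (a,b)
# is J(a,b) = a³/(a²+b²).  J is homogeneous but not additive.

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x**3 / (x**2 + y**2)

def directional(a, b, t=1e-8):
    # variational definition: (f(x0 + t v) - f(x0)) / t at x0 = (0,0)
    return (f(t * a, t * b) - f(0.0, 0.0)) / t

def J(a, b):
    return 0.0 if (a, b) == (0.0, 0.0) else a**3 / (a**2 + b**2)

print(directional(1, 1), J(1, 1))          # limit matches the formula
print(J(2, 2), 2 * J(1, 1))                # homogeneous: equal
print(J(1, 1), J(1, 0) + J(0, 1))          # 0.5 vs 1.0: not additive
```

The last line shows the failure of linearity: ##J(1,1)=\tfrac{1}{2}##, while ##J(1,0)+J(0,1)=1##.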

Next consider the space of real, smooth functions on ##[0,1] \subseteq \mathbb{R}##. That is

$$

X=C^\infty_\mathbb{R}([0,1])

$$

equipped with the uniform norm, the supremum norm

$$

||f|| = \sup_{x \in [0,1]} \{|f(x)|\}

$$

and ##Y=(\mathbb{R},|.|)##. (Note that ##X## with the supremum norm is a normed, but not a complete, hence not a Banach space.) Then the derivative-at-zero operator ##T(f):=f'(0)## is linear and closed, but not continuous. (Consider the sequence ##f_{n}(x)=\frac{\sin(n^2x)}{n}##, which converges uniformly to ##f \equiv 0##, while ##T(f_n)=n## diverges.)
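The unboundedness is visible numerically. This sketch (sampling the supremum norm on a finite grid and using a symmetric difference quotient are simplifications of my own) shows ##||f_n|| \to 0## while ##T(f_n)=f_n'(0)=n## grows:

```python
import math

# f_n(x) = sin(n²x)/n converges to 0 in the sup norm on [0,1],
# yet T(f_n) = f_n'(0) = n diverges: T is linear but unbounded.

def sup_norm(f, samples=10000):
    return max(abs(f(i / samples)) for i in range(samples + 1))

def deriv_at_zero(f, h=1e-7):
    return (f(h) - f(-h)) / (2 * h)

for n in (1, 10, 100):
    fn = lambda x, n=n: math.sin(n**2 * x) / n
    print(n, sup_norm(fn), deriv_at_zero(fn))
```

The sup norm shrinks like ##1/n## while the image under ##T## grows like ##n##, so no constant ##C## with ##|T(f)| \leq C\,||f||## can exist.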

As the completeness condition on the normed vector spaces plays an important role here, too, the general advice when using the Gâteaux derivative has to be: make sure which definition you use and what the exact nature of the normed vector spaces is (locally convex, complete, Banach, etc.).

**Connection to the Fréchet Derivative**

If ##f## is Fréchet differentiable at ##x_0## with Fréchet derivative ##J_{x_0,F}##, then ##f## is also Gâteaux differentiable in all directions ##v##, and for the Gâteaux differential ##df(x_0;v)## we have

\begin{equation}\label{FG}

df(x_0;v)=J_{x_0,G}(f)(v)=J_G(f)(v)=J_{x_0,F}(f)(v)

\end{equation}

In particular ##J_F(f)=J_G(f)##. In general, the converse does not hold, i.e. Gâteaux differentiability does not imply Fréchet differentiability. Since the finite dimensional real case is a special case of both concepts, this was to be expected.

If ##f## is Gâteaux differentiable in a neighborhood ##U## of ##x_0##, such that ##J_{x,G}(v)## is continuous (in ##x##) and linear, and the operator

$$

J_G(f) : U \rightarrow \mathcal{L}(X,Y)

$$

\begin{equation}\label{Gateaux-local}

J_G(f) : x \mapsto (v \mapsto J_{x,G}(f)(v))

\end{equation}

is continuous with respect to the operator norm on the space of linear functions ##\mathcal{L}(X,Y)##, then ##f## is also Fréchet differentiable. This is not a necessary condition, so we again have the real finite dimensional case as a special case.

**Lagrange Formalism**

Let us define the function

\begin{equation}\label{LF}

f(\varepsilon) := \int dt\, L(q(t)+\varepsilon \delta q\, , \,\dot q(t)+\varepsilon \delta \dot q\, , \,t)

\end{equation}

For the Gâteaux differential we get, as a first order approximation (*) and by integration by parts (**) (with fixed endpoints of integration ##t_i## and thus a vanishing ##\delta q(t_i)## boundary term):

\begin{equation}\label{LD-I}

\begin{aligned}

\delta f &= J_{\delta q}(f)\\

& = \int dt \, J_{\delta q}(L)\\

&=\int dt \, \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left(L(q(t)+\varepsilon \delta q\, , \,\dot q(t)+\varepsilon \delta \dot q\, , \,t)-L(q(t)\, , \,\dot q(t)\, , \,t)\right)\\

& {{(*)}\atop{=}}\int dt \, \left( \frac{\partial L}{\partial q} \delta q + \frac{\partial L}{\partial \dot q} \delta \dot q \right)\\

& {{(**)}\atop{=}} \int dt \,\frac{\partial L}{\partial q} \delta q \,-\, \int dt \, \left(\frac{d}{dt}\frac{\partial L}{\partial \dot q} \right) \delta q

\end{aligned}

\end{equation}

By the variational principle this means

\begin{equation}\label{LD-II}

J_{\delta q}(L) = \frac{\partial L}{\partial q} \delta q \,-\, \left(\frac{d}{dt}\frac{\partial L}{\partial \dot q} \right) \delta q

\end{equation}

or

\begin{equation}\label{LD-III}

\frac{\delta L}{\delta q} = \frac{\partial L}{\partial q} \,-\, \left(\frac{d}{dt}\frac{\partial L}{\partial \dot q} \right)

\end{equation}

#### Lie Derivative – Preliminaries

In this section we go even further with our generalizations. The main reason to consider differentiation processes is to calculate linear approximations of non-linear objects. So far we have regarded functions on normed linear spaces, i.e. (non-linear) equations which described curves and other analytic varieties. What they all had in common was that they took place in an outer frame: normed vector spaces like ##\mathbb{R}^n## or Banach spaces. The frame brought with it the coordinates in which points and directions were expressed. It was one of the greatest achievements of differential geometry to abandon this restriction: what if there is no outer frame, as in General Relativity? Carl Friedrich Gauß served as land surveyor of the Kingdom of Hanover. The earth isn’t flat, nor is it naturally placed in an outer Euclidean frame. So mathematicians started to consider the analytic varieties, which they called manifolds, by themselves. Coordinates became local properties of the manifold, which is sufficient, as we deal with local phenomena in differential geometry anyway. Outer frames were no longer needed to solve problems within or on the manifolds.

#### Manifolds

**Definition:** An **m-dimensional manifold** (sometimes shortly **m-manifold**) is a set ##M##, together with a countable collection of subsets ##U_i \subseteq M##, called the **coordinate charts**, and ##1:1## functions ##\chi_i : U_i \rightarrow V_i## onto connected open subsets ##V_i## of ##\mathbb{R}^m##, called **local coordinate maps**, which satisfy the following properties:

The **coordinate charts cover ##M##**:

\begin{equation}\label{Charts-I}

\bigcup_i \; U_i = M

\end{equation}

On the **overlap of any pair of coordinate charts** ##U_i \cap U_j## the composite map

\begin{equation}\label{Charts-II}

\chi_j \circ \chi_i^{-1}\, : \,\chi_i(U_i\cap U_j) \rightarrow \chi_j(U_i\cap U_j)

\end{equation}

is a smooth (infinitely differentiable) function.

If ##x_i\in U_i\, , \,x_j\in U_j## are distinct points of ##M##, then there exist open neighborhoods ##W_i \subseteq V_i## of ##\chi_i(x_i)## and ##W_j \subseteq V_j## of ##\chi_j(x_j)## such that

\begin{equation}\label{Charts-III}

\chi_i^{-1}(W_i) \cap \chi_j^{-1}(W_j) = \emptyset

\end{equation}

The coordinate charts endow the manifold ##M## with the structure of a topological space. Equation (\ref{Charts-III}) is basically a restatement of the Hausdorff separation axiom.

The overlap functions ##\chi_j \circ \chi_i^{-1}## determine the degree of differentiability of the manifold. If they are smooth (##C^\infty##) diffeomorphisms on open subsets of the corresponding Euclidean space ##\mathbb{R}^m##, then the manifold is called **smooth**; if they are real analytic functions, then the manifold is called **analytic**. The same holds for the other differentiability classes ##C^k##.

It is important to note that the manifold ##M## isn’t part of ##\mathbb{R}^m##. As a set it is defined without any reference to a Euclidean space in which it might or might not be embedded. With the usual standard examples (##M=\mathbb{R}^m##, ##M=S^n##, or ##M## a torus) there might be some surrounding Euclidean space in our imagination, but things change if we consider **Lie groups** as manifolds instead, i.e. manifolds which carry an analytic group structure, meaning inversion and multiplication are analytic functions. Or, if you like, the universe. The role of ##\mathbb{R}^m## in the definition is therefore not to characterize the manifold globally, but locally. A manifold behaves locally, in an open neighborhood of a point, like an open set in the Euclidean space ##\mathbb{R}^m##, where we can use charts of ##M## as we use flat roadmaps to find our routes through a mountainous countryside.
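The chart conditions can be made concrete on the circle ##S^1##. In the sketch below (stereographic projection from the two poles is a standard choice, but the concrete functions here are my own illustration), two charts cover ##S^1## and the transition map on the overlap is ##u \mapsto 1/u##, a smooth function on ##\mathbb{R}\setminus\{0\}##:

```python
# Two stereographic charts on the circle S¹ = {x² + y² = 1}:
# χ_N projects from the north pole (0, 1), χ_S from the south pole (0, −1).
# On the overlap the transition map χ_S ∘ χ_N⁻¹ is u ↦ 1/u.

def chi_N(x, y):
    return x / (1 - y)

def chi_S(x, y):
    return x / (1 + y)

def chi_N_inv(u):
    # inverse of χ_N: back onto the circle
    return (2 * u / (u**2 + 1), (u**2 - 1) / (u**2 + 1))

x, y = chi_N_inv(2.0)
print(x, y)           # (0.8, 0.6): a point on S¹
print(chi_S(x, y))    # 0.5 = 1/2.0: the transition map is u ↦ 1/u
```

The two charts miss only one point each (the respective pole), so together they cover ##S^1##, and the smoothness of ##u \mapsto 1/u## away from ##0## makes ##S^1## a smooth 1-manifold.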

#### Vector Fields

**Definition:** A **vector field** ##X## on an m-dimensional manifold ##M## is a mapping that assigns to each point ##p \in M## a vector ##X(p)=X_p##. If the ##X_p## are tangent to ##M##, then ##X## is called a **tangent vector field**; if the ##X_p## are perpendicular to ##M##, then ##X## is called a **normal vector field**. Unless defined otherwise, the term vector field always refers to the tangent field.

In physics we often distinguish between vector fields and scalar fields. The difference is that in the case of a **scalar field**, a scalar (number) is assigned to each point of the manifold. Temperatures on earth are a standard example of a scalar field, whereas a meteorological wind chart represents a vector field, because at each point on earth there is a wind vector with a direction and a magnitude attached. Well, at least almost everywhere, according to the hairy ball theorem. This leads us directly to some important vector fields.

The gradient of a real valued function ##f: U \rightarrow \mathbb{R}\, , \,U \subseteq \mathbb{R}^n##

\begin{equation}\label{Grad-I}

\nabla f(x_0) = \operatorname{grad} (f)(x_0) = \left(\frac{\partial}{\partial x_1}f(x_0),\ldots , \frac{\partial}{\partial x_n}f(x_0) \right)

\end{equation}

defines a **gradient (vector) field** ##F=\nabla f##. The mapping ##x_0 \mapsto \nabla f(x_0)## determines the linear function

\begin{equation}\label{Grad-II}

f'(x_0)(v)=\sum_{i=1}^n v_i f'(x_0)(e_i)=\sum_{i=1}^n v_i \partial_i f(x_0)=\partial_v f(x_0)=\langle \nabla f(x_0),v \rangle

\end{equation}

The gradient field is a special case of a tangent field. Basically all derivatives introduced so far have been tangent fields: ##x_0 \mapsto J_{x_0}(v)##.
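The identity ##\langle \nabla f(x_0),v \rangle = \partial_v f(x_0)## is easy to verify numerically. In this sketch (the function ##f(x,y)=x^2y##, the point, and the direction are my own arbitrary test choices), the inner product with the gradient matches the directional difference quotient:

```python
# For f(x, y) = x²·y the gradient is ∇f = (2xy, x²), and
# ⟨∇f(x0), v⟩ equals the directional derivative lim (f(x0+tv) − f(x0))/t.

def f(x, y):
    return x**2 * y

def grad_f(x, y):
    return (2 * x * y, x**2)

def directional(x0, v, t=1e-7):
    return (f(x0[0] + t * v[0], x0[1] + t * v[1]) - f(*x0)) / t

x0, v = (1.5, -2.0), (0.3, 0.4)
g = grad_f(*x0)
inner = g[0] * v[0] + g[1] * v[1]
print(inner, directional(x0, v))   # both ≈ −0.9
```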

Another important vector field in the Euclidean three-dimensional space is given by the **curl operator** or **rotation**. Let ##F: U \rightarrow \mathbb{R}^3\, , \, U \subseteq \mathbb{R}^3## be a partially differentiable vector field. Then the curl defines a new vector field

\begin{equation}\label{curl}

\operatorname{curl} F = \operatorname{rot} F = \nabla \times F

\end{equation}

An example of a scalar field in this context is the **divergence** of a vector field ##F##, defined by the scalar or dot product

\begin{equation}\label{div}

\operatorname{div} F = \nabla \cdot F

\end{equation}

Combined, i.e.

\begin{equation}\label{Laplace}

\Delta f = \nabla^2 f= \nabla \cdot \nabla f

\end{equation}

they define the **Laplace operator**: the divergence of the gradient field.
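The composition ##\Delta f = \nabla \cdot \nabla f## can be checked with central differences. This sketch (the test function, evaluation point, and step size are my own illustrative choices) builds the gradient and divergence operators separately and composes them:

```python
# Verify Δf = div(grad f) numerically with central differences
# for f(x, y) = x² + y², whose Laplacian is the constant 4.

H = 1e-4

def f(x, y):
    return x**2 + y**2

def grad(f, x, y, h=H):
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def div(F, x, y, h=H):
    return ((F(x + h, y)[0] - F(x - h, y)[0]) / (2 * h) +
            (F(x, y + h)[1] - F(x, y - h)[1]) / (2 * h))

laplacian = div(lambda x, y: grad(f, x, y), 1.3, -0.7)
print(laplacian)   # ≈ 4
```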

To define the Lie derivative, we could put it briefly: it’s the multiplication in a Lie algebra. No manifolds, no vector fields. Of course this wouldn’t meet the requirements to actually understand what it is, because it would mean defining Lie derivatives by one aspect of the resulting function rather than by its motivation. Therefore we need some more terminology.

#### Flows

A curve ##\gamma : [a,b] \rightarrow X## along a vector field ##V## on a set ##X## is defined by the property ##\left. \frac{d}{dt}\right|_{t=t_0}\gamma(t)=V(\gamma(t_0))##. If ##V## is Lipschitz continuous, then for each point ##x \in X## there is a unique differentiable curve ##\gamma_x## such that for some ##\varepsilon > 0##

\begin{equation}\label{flow-I}

\gamma_{x}(0)=x \; \wedge \; \left. \frac{d}{dt}\right|_{t=t_0}\gamma_x(t)=V(\gamma_x(t_0)) \, , \, t\in (-\varepsilon ,+\varepsilon) \subseteq \mathbb{R}

\end{equation}

These curves ##\gamma_x## are called **integral curves** or **trajectories** or **flow lines** of the vector field ##V## and they partition ##X## into equivalence classes.

We speak of the **flow of a vector field** as the set of all these curves, and

\begin{equation}\label{flow-II}

\gamma_{\gamma_x(t)}(s) = \gamma_x(s+t)

\end{equation}

or, more conveniently,

\begin{equation}\label{flow-III}

\gamma(\gamma(x,t),s)=\gamma(x,s+t)

\end{equation}

holds, i.e. it doesn’t matter whether we first move by ##t## and then by ##s## along the curve or vice versa. Flows are usually required to be compatible with the structures endowed on ##X##, which means in our case that the curves ##\gamma_x(t)## must be continuous (in both arguments). If ##X## is equipped with a differentiable structure, then they are required to be differentiable as well. In these cases the flow forms a one-parameter group of homeomorphisms or diffeomorphisms, respectively. **Local flows** are the curves in an open neighborhood of a certain point ##x_0##.
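The group law \eqref{flow-III} can be illustrated numerically. In this sketch (the rotation field ##V(x,y)=(-y,x)## and the step count are my own illustrative choices; the exact flow is rotation by the angle ##t##), a small Runge–Kutta integrator traces the integral curves and confirms ##\gamma(\gamma(x,t),s)=\gamma(x,s+t)## within numerical accuracy:

```python
import math

# For the rotation field V(x, y) = (−y, x) the flow is rotation by angle t.
# An RK4 integrator illustrates the group law γ(γ(x, t), s) = γ(x, s + t).

def V(p):
    x, y = p
    return (-y, x)

def flow(p, t, steps=1000):
    h = t / steps
    for _ in range(steps):
        k1 = V(p)
        k2 = V((p[0] + h / 2 * k1[0], p[1] + h / 2 * k1[1]))
        k3 = V((p[0] + h / 2 * k2[0], p[1] + h / 2 * k2[1]))
        k4 = V((p[0] + h * k3[0], p[1] + h * k3[1]))
        p = (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return p

p0 = (1.0, 0.0)
a = flow(flow(p0, 0.4), 0.9)   # first move by t = 0.4, then by s = 0.9
b = flow(p0, 1.3)              # move by s + t = 1.3 at once
print(a, b)                    # both ≈ (cos 1.3, sin 1.3)
```

The two results agree because the trajectories form a one-parameter group, here a subgroup of rotations of the plane.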

#### Sources
