
Measure-preserving flow

  1. Feb 13, 2006 #1



    Consider a system of first-order ODEs:

[tex]x_1^{'}=f_1(x_1, \cdots ,x_n)[/tex]

[tex]x_2^{'}=f_2(x_1, \cdots ,x_n)[/tex]

[tex]\vdots[/tex]

[tex]x_n^{'}=f_n(x_1, \cdots ,x_n)[/tex]

or written more compactly just as:

[tex]\mathbf{X}^{'}=\mathbf{F}(\mathbf{X})[/tex]

If I start with a set of initial conditions comprising some volume in phase space and evolve them according to the equations of motion above (run the equations for a whole bunch of initial conditions all contained in some region of phase space and keep track of where the final values of each x_i land after some time t), what must F be in order that the volume containing all the final points be the same as the volume containing all the initial points?

Well, I worked on this a long time ago and didn't quite understand it, but someone in here recently brought up symplectic algorithms, which are related, and that prompted me to revisit it. See, that's how science works, you know.

We can calculate this volume as an n-fold integral over the n dimensions of phase space. That is, the initial volume would be:

    [tex]\int\cdots\int_{\Omega(0)} dx_1\cdots dx_n[/tex]

with [itex]\Omega(0)[/itex] being the initial domain containing the set of vectors, each of which is a single initial condition for the system. For example, if the system only involved x(t) and y(t), that integral would simply be the area of some selected part of the x-y plane chosen to contain all the initial points [itex](x_0,y_0)[/itex] under study. It's a bit messy in n dimensions. The final volume is then:

    [tex]\int\cdots\int_{\Omega(t)} dx_1\cdots dx_n[/tex]

    with [itex]\Omega(t)[/itex] being the region containing all the final points of the simulation after time t.

Thus, what must F be in order for the following to hold?

    [tex]\int\cdots\int_{\Omega(0)} dx_1\cdots dx_n=\int\cdots\int_{\Omega(t)} dx_1\cdots dx_n[/tex]

    That's an interesting problem don't you guys think? This is in essence Liouville's Theorem as it pertains to Poincare's Integral Invariants. That is, the integral above is an Integral Invariant (French you know) if it doesn't change with time.
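If it helps to see the question numerically before the proof, here is a rough sketch (Python; the particular field is my own choice, a simple divergence-free shear, and the convex-hull area is only a fair stand-in because a linear flow keeps convex regions convex): evolve a cloud of initial conditions filling the unit square and compare the measure of the region before and after.

[code]
# Sketch: track a cloud of initial conditions under the divergence-free
# field F(x,y) = (y, 0) (a simple shear) and compare the area of the
# region before and after evolving for time t = 1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial import ConvexHull

def F(t, state):
    x, y = state
    return [y, 0.0]                    # x' = y, y' = 0, so div F = 0

xs = np.linspace(0.0, 1.0, 21)
initial = np.array([[x, y] for x in xs for y in xs])  # grid filling the unit square

final = np.array([
    solve_ivp(F, (0.0, 1.0), p, rtol=1e-9, atol=1e-12).y[:, -1]
    for p in initial
])

# For this linear flow the image of the square stays convex, so the
# convex hull area is a fair estimate of the measure of each region.
print("area of Omega(0):", ConvexHull(initial).volume)  # 'volume' is area in 2-D
print("area of Omega(1):", ConvexHull(final).volume)    # should also be ~1
[/code]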

Anyway, I've just learned the proof and would like to follow up with the details unless someone else takes the initiative.
    Last edited: Feb 14, 2006
  3. Feb 15, 2006 #2



First I wish to work a concrete, simple example:

[tex]x^{'}=y[/tex]

[tex]y^{'}=-x[/tex]
    (I ain't proud).

    Anyway, I can solve this one exactly:

[tex]x(t)=x_0 \cos(t)+y_0 \sin(t)[/tex]

[tex]y(t)=y_0 \cos(t)-x_0 \sin(t)[/tex]

Now, suppose I choose a regular array of points [itex](x_0,y_0)[/itex] in the unit square as the initial conditions and evolve them according to the equations above for 1 second. The first plot below shows this unit square as the initial domain [itex]\Omega(0)[/itex] and the second plot shows where the points land after one second, [itex]\Omega(1)[/itex]. For this simple example the areas look pretty much the same, an area of 1. But how would you prove this? That is, how would you calculate the area of [itex]\Omega(1)[/itex]?

Well, that's where the change of variables for double integrals comes in. If I'm given the second plot and asked to find the area, I'd make a suitable change of variables, hopefully converting the domain to an easier one to integrate over. The change of variables in this case is determined by the solutions above, with [itex]x_0, y_0[/itex] being the new coordinate system. Naturally, this just converts the domain back into the unit square, so the area of the second plot becomes:

[tex]\int\int_{\Omega(1)} dxdy=\int\int_{\Omega(0)} \frac{\partial(x,y)}{\partial(x_0,y_0)} dx_0 dy_0[/tex]

with

[tex]\frac{\partial(x,y)}{\partial(x_0,y_0)}=\left|\begin{array}{cc}\frac{\partial x}{\partial x_0} & \frac{\partial x}{\partial y_0}\\ \frac{\partial y}{\partial x_0} & \frac{\partial y}{\partial y_0}\end{array}\right|[/tex]

being the Jacobian of the transformation. In this case, the Jacobian is equal to 1 so that we have:

    [tex]\int\int_{\Omega(1)} dxdy=\int\int_{\Omega(0)} \frac{\partial(x,y)}{\partial(x_0,y_0)} dx_0 dy_0=\int_0^1\int_0^1 dx_0 dy_0=1[/tex]

One can see immediately that a sufficient condition for the transformation to be measure-preserving is that the Jacobian should equal one. The problem with this is that in order to calculate the Jacobian, the solution (transformation) has to be known. There is, however, another way to determine if the flow is measure-preserving without knowing the solution.
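As a sanity check on that Jacobian claim, here is a small symbolic sketch (Python/SymPy, my own addition, not part of the original post): differentiate the exact solution above with respect to the initial conditions and confirm the determinant is identically 1.

[code]
# Sketch: the Jacobian of the exact rotation solution with respect to the
# initial conditions (x_0, y_0) is identically 1.
import sympy as sp

t, x0, y0 = sp.symbols('t x_0 y_0')
x = x0*sp.cos(t) + y0*sp.sin(t)
y = y0*sp.cos(t) - x0*sp.sin(t)

J = sp.Matrix([x, y]).jacobian([x0, y0])
print(sp.simplify(J.det()))   # prints 1, so the flow preserves area
[/code]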


    Last edited: Feb 15, 2006
  4. Feb 17, 2006 #3
Hey saltydog, I was reading through your post, trying to understand it. I like your example, but I wonder if it is not general enough. What I mean is that your eqns. of motion for x and y in the example come from a unitary transformation of the initial state, i.e. a rotation by angle t. Wouldn't the area in (phase? configuration?) space always be preserved in this case?

When I first read your post, I was wondering what the form of F(X) was...in the example it would seem to be linear in X, while in more general situations it could be nonlinear. Perhaps those scenarios lead to non-equivalent volumes.

    Just my two cents...
  5. Feb 17, 2006 #4



Hey Beau. This is the general case. Keep the following in mind: for the system

[tex]x_1^{'}=f_1(x_1,x_2)[/tex]

[tex]x_2^{'}=f_2(x_1,x_2)[/tex]

note that [itex]x_1^{'},\;x_2^{'}[/itex] are functions of [itex]x_1,\;x_2[/itex], and t, but they are also "implicit" functions of the initial conditions, right? The flow of [itex]x_1[/itex] and [itex]x_2[/itex] depends on the starting conditions, so I should say something like:

[tex]x_1^{'}=f_1\left(x_1(x_1^0,x_2^0,t),\;x_2(x_1^0,x_2^0,t)\right)[/tex]

[tex]x_2^{'}=f_2\left(x_1(x_1^0,x_2^0,t),\;x_2(x_1^0,x_2^0,t)\right)[/tex]
    Knowing this is important when using the chain-rule below.

    So as above we define the Poincare' integral:

    [tex]I(t)=\int\cdots\int_{\Omega(t)}dx_1\cdots dx_n [/tex]

    and to evaluate it we use a change-of-variables back to the initial conditions like I did above:

[tex]I(t)=\int\cdots\int_{\Omega(t)} dx_1\cdots dx_n =\int\cdots\int_{\Omega(0)}
\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}\,dx_1^0\cdots dx_n^0 [/tex]

In order to determine under what conditions the integral is invariant with time, we take its derivative and set it to zero:

[tex]\frac{dI}{dt}=\frac{d}{dt}\int\cdots\int_{\Omega(0)}
\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}\,dx_1^0 \cdots dx_n^0=0[/tex]

Now, we can use Leibniz's rule and differentiate across the integrals (the domain [itex]\Omega(0)[/itex] does not depend on t):

[tex]\int\cdots\int_{\Omega(0)}\frac{d}{dt}\left[
\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}\right]dx_1^0\cdots dx_n^0=0[/tex]

So, we wish to evaluate the derivative of the Jacobian:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=\frac{d}{dt}\,
\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}[/tex]

where T is the solution transformation as described in the post above. This is very messy for n dimensions, so I'll do it for just n = 2.

So then we have:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=\frac{d}{dt}
\left|\begin{array}{cc}\frac{\partial x_1}{\partial x_1^0} & \frac{\partial x_1}{\partial x_2^0} \\
\frac{\partial x_2}{\partial x_1^0} & \frac{\partial x_2}{\partial x_2^0}\end{array}\right|[/tex]

    Expanding the determinant:

    [tex]=\frac{d}{dt}\left(\frac{\partial x_1}{\partial x_1^0} \frac{\partial x_2}{\partial x_2^0}-
    \frac{\partial x_2}{\partial x_1^0} \frac{\partial x_1}{\partial x_2^0}\right)[/tex]

So using the product rule, this becomes:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}
=\frac{\partial x_1}{\partial x_1^0}\frac{\partial^2 x_2}{\partial t \partial x_2^0}+
\frac{\partial x_2}{\partial x_2^0}\frac{\partial^2 x_1}{\partial t \partial x_1^0}
-\frac{\partial x_2}{\partial x_1^0}\frac{\partial^2 x_1}{\partial t \partial x_2^0}-
\frac{\partial x_1}{\partial x_2^0}\frac{\partial^2 x_2}{\partial t \partial x_1^0}[/tex]

Now, look at the mixed partials and reverse the order of differentiation. For the first one, this becomes:

[tex]\frac{\partial^2 x_2}{\partial t \partial x_2^0}=\frac{\partial}{\partial x_2^0}\left(\frac{\partial x_2}{\partial t}\right)[/tex]

    Now, isn't:

    [tex]\frac{\partial x_2}{\partial t}=f_2[/tex]

from the differential equations above? Thus we can rewrite the derivative of the Jacobian as:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}
=\frac{\partial x_1}{\partial x_1^0}\frac{\partial f_2}{\partial x_2^0}+
\frac{\partial x_2}{\partial x_2^0}\frac{\partial f_1}{\partial x_1^0}
-\frac{\partial x_2}{\partial x_1^0}\frac{\partial f_1}{\partial x_2^0}-
\frac{\partial x_1}{\partial x_2^0}\frac{\partial f_2}{\partial x_1^0}[/tex]

Now, remember the implicit functional relationship of [itex]\mathbf{F}[/itex] to the initial
variables (the ones with the superscript 0). Thus using the chain rule:

[tex]\begin{array}{rl}\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=
&\frac{\partial x_1}{\partial x_1^0}\left[\frac{\partial f_2}{\partial x_1}\frac{\partial x_1}{\partial x_2^0}+\frac{\partial f_2}{\partial x_2}\frac{\partial x_2}{\partial x_2^0}\right]
+\frac{\partial x_2}{\partial x_2^0}\left[\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_1^0}+\frac{\partial f_1}{\partial x_2}\frac{\partial x_2}{\partial x_1^0}\right]\\
&-\frac{\partial x_2}{\partial x_1^0}\left[\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_2^0}+\frac{\partial f_1}{\partial x_2}\frac{\partial x_2}{\partial x_2^0}\right]
-\frac{\partial x_1}{\partial x_2^0}\left[\frac{\partial f_2}{\partial x_1}\frac{\partial x_1}{\partial x_1^0}+\frac{\partial f_2}{\partial x_2}\frac{\partial x_2}{\partial x_1^0}\right]\end{array}[/tex]

Now, that's 8 partials. Four of them drop out, leaving (after some re-arranging):

[tex]\begin{array}{rl}\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=
&\frac{\partial f_2}{\partial x_2}\frac{\partial x_1}{\partial x_1^0}\frac{\partial x_2}{\partial x_2^0}-\frac{\partial f_2}{\partial x_2}\frac{\partial x_1}{\partial x_2^0}\frac{\partial x_2}{\partial x_1^0} \\
+&\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_1^0}\frac{\partial x_2}{\partial x_2^0}-\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_2^0}\frac{\partial x_2}{\partial x_1^0}\end{array}[/tex]

    Well . . . after studying this for awhile, that last expression can be encapsulated nicely in terms of the Jacobian and the divergence of [itex]\mathbf{F}[/itex]. Look at it carefully and see that:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=\nabla\cdot\left(\begin{array}{c} f_1 \\ f_2\end{array}\right)
\left|\begin{array}{cc}\frac{\partial x_1}{\partial x_1^0} & \frac{\partial x_1}{\partial x_2^0} \\ \frac{\partial x_2}{\partial x_1^0} & \frac{\partial x_2}{\partial x_2^0} \end{array}\right|[/tex]

    So we have finally:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=\left(\nabla \cdot \mathbf{F}\right)\mathcal{J}\left\{\mathbf{T}\right\}[/tex]

Therefore, a sufficient condition for the flow to be measure-preserving is:

[tex]\nabla \cdot \mathbf{F}=0[/tex]
    That is, the divergence of F is zero. That this is true in the n-dimensional case can be shown by induction.
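For anyone who wants to see [itex]d\mathcal{J}/dt=(\nabla\cdot\mathbf{F})\,\mathcal{J}[/itex] on a case where the divergence is not zero, here's a small symbolic sketch (Python/SymPy; the example field is my own choice, picked only because its exact flow is known):

[code]
# Sketch: check dJ/dt = (div F) J for F(x, y) = (x, -2y), whose exact
# flow is x = x0*exp(t), y = y0*exp(-2t) and whose divergence is -1.
import sympy as sp

t, x0, y0, X, Y = sp.symbols('t x_0 y_0 X Y')

f1, f2 = X, -2*Y
div_F = sp.diff(f1, X) + sp.diff(f2, Y)          # = -1

x = x0*sp.exp(t)                                 # exact solution of x' = x
y = y0*sp.exp(-2*t)                              # exact solution of y' = -2y

J = sp.Matrix([x, y]).jacobian([x0, y0]).det()   # = exp(-t)
print(sp.simplify(sp.diff(J, t) - div_F*J))      # prints 0: dJ/dt = (div F) J
[/code]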

The divergence of the simple case above was zero, as it is for all Hamiltonian systems in general.

I know, it's a lot, and some may find it odd I spent so much time formatting it in LaTeX, but we all have our most beautiful problems we've ever worked on. This is one of mine. In the beginning . . . there was Poincare'.:smile:
    Last edited: Feb 17, 2006
  6. Feb 20, 2006 #5
    Hey saltydog,

    I really like the derivation you posted and especially how nice the final result appears, that the divergence of F needs to be zero to conserve the phase space volume (n.b. it looks like gradient in my browser).

    Your remark

    "The divergence of the simple case above was zero as are all Hamiltonians in general."

prompted me to look into that. I found that the divergence of the phase-space vector field [tex](\dot{p},\dot{q})[/tex] is zero for Hamiltonian dynamics, since with [itex]\dot{p}=-\partial H/\partial q[/itex] and [itex]\dot{q}=\partial H/\partial p[/itex],

[tex]\nabla \cdot (\dot{p},\dot{q}) = \frac{\partial \dot{p}}{\partial p} + \frac{\partial \dot{q}}{\partial q} = -\frac{\partial^{2} H}{\partial p \,\partial q} + \frac{\partial^{2} H}{\partial q \,\partial p} = 0[/tex]

    which I think agrees with your more general result.
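For what it's worth, that mixed-partials cancellation can also be checked symbolically for an arbitrary smooth H(p, q). A small Python/SymPy sketch of my own (not from the posts above):

[code]
# Sketch: Hamilton's equations give a divergence-free phase-space field
# for an arbitrary (sufficiently smooth) H(p, q).
import sympy as sp

p, q = sp.symbols('p q')
H = sp.Function('H')(p, q)

p_dot = -sp.diff(H, q)      # dp/dt = -dH/dq
q_dot =  sp.diff(H, p)      # dq/dt =  dH/dp

div = sp.diff(p_dot, p) + sp.diff(q_dot, q)
print(sp.simplify(div))     # prints 0, by equality of mixed partials
[/code]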
  7. Feb 20, 2006 #6



That's exactly right. Now how might we show experimentally (other than changing coordinates via the Jacobian) that the flow of a 2-DOF Hamiltonian like:

    [tex]H(p_1,q_1,p_2,q_2)=1/2(p_1^2+p_2^2)+p_1 q_2-p_2 q_1[/tex]

preserves the 4-space hypervolume, i.e. is measure-preserving? Suppose we need some way of doing the (4-fold) integration directly. The problem, though, is how to "contain" the final points in some 4-space volume and then numerically integrate over that space.
We can reduce it to this problem:

    Suppose I have a set of (dense) points in n-space. What is the minimum 4-space volume containing all those points? How about just 3-space first? Well, how about just a set of dense points in the x-y plane first? How can I calculate the minimum area containing those points?
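One crude way to attack this is a convex-hull estimate. Here is a rough sketch (Python with scipy's qhull wrapper; my own illustration, and it assumes the evolved cloud stays convex, which does hold here since the flow of this quadratic Hamiltonian is linear): evolve a cloud of points in 4-space under the Hamiltonian above and compare the hull's 4-volume before and after.

[code]
# Sketch: evolve a cloud of points in 4-space under the 2-DOF Hamiltonian
# H = (p1^2 + p2^2)/2 + p1*q2 - p2*q1 and compare the 4-volume of the
# convex hull of the cloud before and after.  (Assumes the region stays
# convex, which holds here since the flow is linear.)
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial import ConvexHull

def hamilton(t, s):
    p1, q1, p2, q2 = s
    return [p2,          # dp1/dt = -dH/dq1 =  p2
            p1 + q2,     # dq1/dt =  dH/dp1 =  p1 + q2
            -p1,         # dp2/dt = -dH/dq2 = -p1
            p2 - q1]     # dq2/dt =  dH/dp2 =  p2 - q1

rng = np.random.default_rng(0)
cloud0 = rng.random((600, 4))     # initial conditions in the unit 4-cube

cloud1 = np.array([
    solve_ivp(hamilton, (0.0, 1.0), s, rtol=1e-9, atol=1e-12).y[:, -1]
    for s in cloud0
])

print("hull 4-volume of Omega(0):", ConvexHull(cloud0).volume)
print("hull 4-volume of Omega(1):", ConvexHull(cloud1).volume)
[/code]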
  8. Feb 21, 2006 #7
    Can we think about a probability distribution? In that case, it seems that an isoenergy contour would trace the boundary to whatever level of accuracy you wanted. And if you have the Hamiltonian you could just set H to this constant to put a constraint on p and q.

    But if they are really points, then I'm not sure...
  9. Feb 22, 2006 #8



Well I think it's doable. Would be a good programming exercise for someone in Numerical Methods. Consider some "simply-connected" region in the x-y plane and a set of points in the region like the plot below. Yea, I know it's a circle but I wish to start easy. Now, I don't suppose it's too tough to run through the set of points and reduce it to just a set of "peripheral" points on the perimeter (haven't done that yet). Once I get that set, then I can use Green's Theorem and numerically estimate the line integral around the perimeter to approximate the area:

[tex]A=\frac{1}{2}\oint_C x\,dy-y\,dx\approx\frac{1}{2}\sum_{i=1}^N \left[x_i(y_{i+1}-y_i)-y_i(x_{i+1}-x_i)\right][/tex]

(with the boundary points ordered counter-clockwise and [itex](x_{N+1},y_{N+1})=(x_1,y_1)[/itex])

    At least I think that would work. Need to check it first with the circle. Maybe a few others. Ok, now in the case of some arbitrary "simply-connected" volume in x-y-z space, just integrate the line integral over z:

    [tex]V=\frac{1}{2}\int_{z_0}^{z_1}\oint_{C(z)} (xdy-ydx)dz[/tex]

Can I say that? Intuitively that makes sense to me. And to estimate it numerically, I would need to reduce the set of points into a set of z-slices ("shells"), calculate the area of each, multiply each by [itex]\Delta z[/itex], and sum.

    Oh yea. That'll work. Who's taking Numerical Methods in here and needs a project?:smile:
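If somebody wants a starting point for the 2-D step, here's a rough sketch (Python; my own, and it cheats by taking the "peripheral" points from the convex hull, so it only handles convex clouds, but it exercises the Green's Theorem sum above):

[code]
# Sketch: estimate the area of a planar point cloud via Green's theorem,
# summing (1/2)*(x dy - y dx) around the boundary (the shoelace formula).
# The boundary points are taken from the convex hull (counter-clockwise),
# so this version only handles convex regions.
import numpy as np
from scipy.spatial import ConvexHull

def shoelace_area(boundary):
    x, y = boundary[:, 0], boundary[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)    # next vertex, wrapping around
    return 0.5*abs(np.sum(x*yn - xn*y))

# Test on points filling the unit disc: the answer should be close to pi.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 2))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)      # on the unit circle
pts *= np.sqrt(rng.random((5000, 1)))                  # fill the disc uniformly

hull = ConvexHull(pts)
print(shoelace_area(pts[hull.vertices]))               # ~3.14
[/code]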


    Last edited: Feb 22, 2006