What is the relationship between volume in phase space and Liouville's Theorem?

  • Thread starter saltydog
  • Tags: Flow
In summary, a first-order ODE system is measure-preserving when the volume of the region containing all the final points after some time t equals the volume of the region containing all the initial points.
  • #1
saltydog
Science Advisor, Homework Helper
Consider a system of first-order ODEs:

[tex]x_1^{'}=f_1(x_1, \cdots ,x_n)[/tex]

[tex]x_2^{'}=f_2(x_1, \cdots ,x_n)[/tex]

[tex]\cdots[/tex]

[tex]x_n^{'}=f_n(x_1, \cdots ,x_n)[/tex]

or written more compactly just as:

[tex]X^{'}=F,\;X\in\mathbb{R}^n[/tex]

If I start with a set of initial conditions comprising some volume in phase space and evolve them according to the equations of motion above (run the equations for a whole bunch of initial conditions all contained in some region of phase space and keep track of where the final value of each x_i lands after some time t), what must F be in order that the volume containing all the final points equals the volume containing all the initial points?

Well, I worked on this a long time ago and didn't quite understand it, but someone here recently brought up symplectic algorithms, which are related, and that caused me to revisit it. See, that's how science works, you know.

We can calculate this volume as an n-fold integral over the n dimensions of phase space. That is, the initial volume would be:

[tex]\int\cdots\int_{\Omega(0)} dx_1\cdots dx_n[/tex]

with [itex]\Omega(0)[/itex] being the initial domain containing the set of vectors each being a single initial condition for the system. For example, if the system only involved x(t) and y(t), that integral would simply be the area of some selected part of the x-y plane chosen to contain all the initial points [itex](x_0,y_0)[/itex] under study. It's a bit messy for n-dimensions. The final volume in that case is:

[tex]\int\cdots\int_{\Omega(t)} dx_1\cdots dx_n[/tex]

with [itex]\Omega(t)[/itex] being the region containing all the final points of the simulation after time t.

Thus, what must F be in order for:

[tex]\int\cdots\int_{\Omega(0)} dx_1\cdots dx_n=\int\cdots\int_{\Omega(t)} dx_1\cdots dx_n[/tex]

That's an interesting problem, don't you guys think? This is in essence Liouville's Theorem as it pertains to Poincaré's integral invariants. That is, the integral above is an integral invariant (French, you know) if it doesn't change with time.

Anyway, I've just learned the proof and would like to follow up with the details unless someone else takes the initiative.
 
  • #2
First, I wish to work a simple concrete example:

[tex]x^{'}=y[/tex]

[tex]y^{'}=-x[/tex]

(I ain't proud).

Anyway, I can solve this one exactly:

[tex]x(t)=x_0 \cos(t)+y_0 \sin(t)[/tex]

[tex]y(t)=y_0 \cos(t)-x_0 \sin(t)[/tex]

Now, suppose I choose a regular array of points [itex](x_0,y_0)[/itex] in the unit square as the initial conditions and evolve them according to the equations above for 1 second. The first plot below shows this unit square as the initial domain [itex]\Omega(0)[/itex], and the second plot shows where the points land after one second, [itex]\Omega(1)[/itex]. For this simple example the areas look pretty much the same, an area of 1. But how would you prove this? That is, how would you calculate the area of [itex]\Omega(1)[/itex]?

Well, that's where change of variables for double integrals comes in. If I'm given the second plot and asked to find the area, I'd make a suitable change of variable, hopefully converting the domain to an easier one to integrate over. The change of variable in this case is determined by the solutions above, with [itex]x_0, y_0[/itex] being the new coordinate system. Naturally, this just converts the domain back into the unit square, so the area of the second plot becomes:

[tex]\int\int_{\Omega(1)} dxdy=\int\int_{\Omega(0)} \frac{\partial(x,y)}{\partial(x_0,y_0)} dx_0 dy_0[/tex]

with:

[tex]\frac{\partial(x,y)}{\partial(x_0,y_0)}[/tex]

being the Jacobian of the transformation. In this case, the Jacobian is equal to 1 so that we have:

[tex]\int\int_{\Omega(1)} dxdy=\int\int_{\Omega(0)} \frac{\partial(x,y)}{\partial(x_0,y_0)} dx_0 dy_0=\int_0^1\int_0^1 dx_0 dy_0=1[/tex]

One can see immediately that a sufficient condition for the transformation to be measure-preserving is that the Jacobian equal one. The problem is that in order to calculate the Jacobian, the solution (the transformation) has to be known. There is, however, another way to determine whether the flow is measure-preserving without knowing the solution.
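As a quick numerical sanity check (my own sketch, not part of the argument above), the unit Jacobian of this flow can be confirmed by differentiating the exact solution with respect to the initial conditions using central differences; the function names and step size h are just illustrative choices:

```python
import math

def flow(x0, y0, t):
    # Exact solution of x' = y, y' = -x (a rotation by angle t)
    return (x0 * math.cos(t) + y0 * math.sin(t),
            y0 * math.cos(t) - x0 * math.sin(t))

def jacobian_det(x0, y0, t, h=1e-6):
    # Central differences of the flow map with respect to (x0, y0)
    xp, yp = flow(x0 + h, y0, t); xm, ym = flow(x0 - h, y0, t)
    dxdx0, dydx0 = (xp - xm) / (2 * h), (yp - ym) / (2 * h)
    xp, yp = flow(x0, y0 + h, t); xm, ym = flow(x0, y0 - h, t)
    dxdy0, dydy0 = (xp - xm) / (2 * h), (yp - ym) / (2 * h)
    return dxdx0 * dydy0 - dxdy0 * dydx0

# The determinant is 1 at every point and every time
print(jacobian_det(0.3, 0.7, 1.0))   # ≈ 1.0
print(jacobian_det(-1.2, 2.5, 3.7))  # ≈ 1.0
```

Since the flow map is linear in (x_0, y_0), the finite differences here are essentially exact; for a nonlinear system the same check works approximately for small h.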
 

Attachments

  • omega 0.JPG (the initial domain)
  • omega 1.JPG (the domain after one second)
  • #3
Hey saltydog, I was reading through your post, trying to understand it. I like your example, but I wonder whether it is general enough. What I mean is that your equations of motion for x and y in the example come from a unitary transformation of the initial state, i.e. a rotation by angle t. Wouldn't the area in (phase? configuration?) space always be preserved in this case?

When I first read your post, I was wondering what the form of F(X) was... in the example it would seem to be linear in X, while in more general situations it could be nonlinear. Perhaps those scenarios lead to non-equivalent volumes.

Just my two cents...
 
  • #4
Hey Beau. This is the general case. Keep the following in mind: For the system:

[tex]x_1^{'}=f_1(x_1,x_2)[/tex]

[tex]x_2^{'}=f_2(x_1,x_2)[/tex]

Note that [itex]x_1^{'},\;x_2^{'} [/itex] are functions of [itex]x_1[/itex] and [itex]x_2[/itex], but [itex]x_1[/itex] and [itex]x_2[/itex] are also "implicit" functions of the initial conditions, since the flow depends on where it starts. So I should really write something like:

[tex]x_1^{'}=f_1(x_1,x_2),\;x_1=T_1(x_1^0,x_2^0,t)[/tex]

[tex]x_2^{'}=f_2(x_1,x_2),\;x_2=T_2(x_1^0,x_2^0,t)[/tex]

Knowing this is important when using the chain-rule below.

So, as above, we define the Poincaré integral:

[tex]I(t)=\int\cdots\int_{\Omega(t)}dx_1\cdots dx_n [/tex]

and to evaluate it we use a change-of-variables back to the initial conditions like I did above:

[tex]I(t)=\int\cdots\int_{\Omega(t)} dx_1\cdots dx_n =\int\cdots\int_{\Omega(0)}
\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}\,dx_1^0\cdots dx_n^0 [/tex]

In order to determine under what conditions the integral is invariant with time, we take its derivative and set it to zero:

[tex]\frac{dI}{dt}=\frac{d}{dt}\left\{\int\cdots\int_{\Omega(0)}
\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}\,dx_1^0 \cdots dx_n^0\right\}=0[/tex]

Now, we can use Leibniz's rule and differentiate under the integral sign:

[tex]\frac{dI}{dt}=\int\cdots\int_{\Omega(0)}
\frac{d}{dt}\left\{\frac{\partial(x_1,\cdots,x_n)}{\partial(x_1^0,\cdots,x_n^0)}\right\}
dx_1^0\cdots dx_n^0=0[/tex]

So, we wish to evaluate the derivative of the Jacobian:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}[/tex]

where T is the solution transformation described in the post above. This is very messy in n dimensions, so I'll carry it out for just n = 2.

So then we have:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=
\frac{d}{dt}\left|
\begin{array}{cc}\frac{\partial x_1}{\partial x_1^0} & \frac{\partial x_1}{\partial x_2^0} \\
\frac{\partial x_2}{\partial x_1^0} & \frac{\partial x_2}{\partial x_2^0}
\end{array}\right|
[/tex]

Expanding the determinant:

[tex]=\frac{d}{dt}\left(\frac{\partial x_1}{\partial x_1^0} \frac{\partial x_2}{\partial x_2^0}-
\frac{\partial x_2}{\partial x_1^0} \frac{\partial x_1}{\partial x_2^0}\right)[/tex]

So using the product rule, this becomes:

[tex]\begin{align*}
&=\frac{\partial x_1}{\partial x_1^0}\frac{\partial^2 x_2}{\partial t \partial x_2^0}+
\frac{\partial x_2}{\partial x_2^0}\frac{\partial^2 x_1}{\partial t \partial x_1^0}\\
&\quad-\frac{\partial x_2}{\partial x_1^0}\frac{\partial^2 x_1}{\partial t \partial x_2^0}-
\frac{\partial x_1}{\partial x_2^0}\frac{\partial^2 x_2}{\partial t \partial x_1^0}
\end{align*}
[/tex]

Now, look at the mixed partials and reverse the order of differentiation. For the first one, this becomes:

[tex]\frac{\partial^2 x_2}{\partial t \partial x_2^0}=\frac{\partial}{\partial x_2^0}\left(\frac{\partial x_2}{\partial t}\right)[/tex]

Now, isn't:

[tex]\frac{\partial x_2}{\partial t}=f_2[/tex]

from the differential equations above? Thus we can re-write the Jacobian derivatives as:

[tex]\begin{align*}
&=\frac{\partial x_1}{\partial x_1^0}\frac{\partial f_2}{\partial x_2^0}+
\frac{\partial x_2}{\partial x_2^0}\frac{\partial f_1}{\partial x_1^0}\\
&\quad-\frac{\partial x_2}{\partial x_1^0}\frac{\partial f_1}{\partial x_2^0}-
\frac{\partial x_1}{\partial x_2^0}\frac{\partial f_2}{\partial x_1^0}
\end{align*}
[/tex]

Now, remember the implicit functional relationship of [itex]\mathbf{F}[/itex] to the initial
variables (the ones with the super-script of o). Thus using the chain-rule:

[tex]
\begin{align*}
\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}&=
\frac{\partial x_1}{\partial x_1^0}\left[\frac{\partial f_2}{\partial x_1}\frac{\partial x_1}{\partial x_2^0}+\frac{\partial f_2}{\partial x_2}\frac{\partial x_2}{\partial x_2^0}\right]\\
&+\frac{\partial x_2}{\partial x_2^0}\left[\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_1^0}+\frac{\partial f_1}{\partial x_2}\frac{\partial x_2}{\partial x_1^0}\right]\\
&-\frac{\partial x_2}{\partial x_1^0}\left[\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_2^0}+\frac{\partial f_1}{\partial x_2}\frac{\partial x_2}{\partial x_2^0}\right]\\
&-\frac{\partial x_1}{\partial x_2^0}\left[\frac{\partial f_2}{\partial x_1}\frac{\partial x_1}{\partial x_1^0}+\frac{\partial f_2}{\partial x_2}\frac{\partial x_2}{\partial x_1^0}\right]
\end{align*}
[/tex]

Now, that's 8 terms. Four of them cancel in pairs, leaving (after some rearranging):

[tex]
\begin{align*}
\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}&=\frac{\partial f_2}{\partial x_2}\frac{\partial x_1}{\partial x_1^0}\frac{\partial x_2}{\partial x_2^0}-\frac{\partial f_2}{\partial x_2}\frac{\partial x_1}{\partial x_2^0}\frac{\partial x_2}{\partial x_1^0} \\
&+
\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_1^0}\frac{\partial x_2}{\partial x_2^0}-\frac{\partial f_1}{\partial x_1}\frac{\partial x_1}{\partial x_2^0}\frac{\partial x_2}{\partial x_1^0}
\end{align*}
[/tex]

Well . . . after studying this for a while, that last expression can be written compactly in terms of the Jacobian and the divergence of [itex]\mathbf{F}[/itex]. Look at it carefully and see that:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=\left(\frac{\partial f_1}{\partial x_1}+\frac{\partial f_2}{\partial x_2}\right)
\left|\begin{array}{cc}\frac{\partial x_1}{\partial x_1^0} & \frac{\partial x_1}{\partial x_2^0} \\ \frac{\partial x_2}{\partial x_1^0} & \frac{\partial x_2}{\partial x_2^0} \end{array}\right|[/tex]

So we have finally:

[tex]\frac{d}{dt}\mathcal{J}\left\{\mathbf{T}\right\}=\left(\nabla\cdot\mathbf{F}\right)\mathcal{J}\left\{\mathbf{T}\right\}[/tex]

Therefore, a sufficient condition for the flow to be measure-preserving is:

[tex]\nabla\cdot\mathbf{F}=0[/tex]

That is, the divergence of F is zero. That this is true in the n-dimensional case can be shown by induction.

The divergence in the simple example above was zero, as it is for all Hamiltonian systems in general.
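To illustrate the criterion (a sketch of my own, not part of the derivation), the divergence can be estimated directly from F by central differences, without ever solving the system; the contracting system is a made-up contrast case:

```python
def divergence(F, x, y, h=1e-6):
    # Central-difference estimate of div F = df1/dx + df2/dy
    df1dx = (F(x + h, y)[0] - F(x - h, y)[0]) / (2 * h)
    df2dy = (F(x, y + h)[1] - F(x, y - h)[1]) / (2 * h)
    return df1dx + df2dy

rotation = lambda x, y: (y, -x)    # the example flow: measure-preserving
damped   = lambda x, y: (-x, -y)   # a contracting flow: volumes shrink

print(divergence(rotation, 0.4, -1.3))  # ≈ 0
print(divergence(damped, 0.4, -1.3))    # ≈ -2
```

For the damped system the Jacobian obeys dJ/dt = -2J, so an initial area shrinks like e^{-2t}; the divergence alone tells you this without the solution.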

I know it's a lot, and some may find it odd I spent so much time formatting it in LaTeX, but we all have our most beautiful problems we've ever worked on. This is one of mine. In the beginning . . . there was Poincaré.:smile:
 
  • #5
Hey saltydog,

I really like the derivation you posted and especially how nice the final result appears, that the divergence of F needs to be zero to conserve the phase space volume (n.b. it looks like gradient in my browser).

Your remark

"The divergence of the simple case above was zero as are all Hamiltonians in general."

prompted me to look into that. I found that the divergence of the phase space vector field [tex](\dot{p},\dot{q})[/tex] is zero for Hamiltonian dynamics, since (using [itex]\dot{p}=-\partial H/\partial q,\;\dot{q}=\partial H/\partial p[/itex]):

[tex]
\nabla \cdot (\dot{p},\dot{q}) = \frac{\partial \dot{p}}{\partial p} + \frac{\partial \dot{q}}{\partial q} = -\frac{\partial^{2} H}{\partial p\,\partial q} + \frac{\partial^{2} H}{\partial q\,\partial p} = 0
[/tex]

which I think agrees with your more general result.
 
  • #6
beautiful1 said:
I found that the divergence of the phase space vector field [tex](\dot{p},\dot{q})[/tex] is zero for Hamiltonian dynamics ... which I think agrees with your more general result.

That's exactly right. Now, how might we show experimentally (other than by changing coordinates via the Jacobian) that the flow in 4-space of a 2-DOF Hamiltonian like:

[tex]H(p_1,q_1,p_2,q_2)=1/2(p_1^2+p_2^2)+p_1 q_2-p_2 q_1[/tex]

is measure-preserving? I suppose we need some way of doing the (4-fold) integration directly. The problem, though, is how to "contain" the final points in some 4-space volume and then numerically integrate over that space.
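Before worrying about the 4-volume, the divergence criterion itself is easy to check for this Hamiltonian. The sketch below assumes Hamilton's equations [itex]\dot{q}_i = \partial H/\partial p_i,\;\dot{p}_i = -\partial H/\partial q_i[/itex] and approximates all partials by central differences; the names and step sizes are illustrative choices:

```python
def H(p1, q1, p2, q2):
    # The 2-DOF Hamiltonian from the post above
    return 0.5 * (p1**2 + p2**2) + p1 * q2 - p2 * q1

def F(p1, q1, p2, q2, h=1e-5):
    # Hamilton's equations via central differences of H
    dHdp1 = (H(p1 + h, q1, p2, q2) - H(p1 - h, q1, p2, q2)) / (2 * h)
    dHdq1 = (H(p1, q1 + h, p2, q2) - H(p1, q1 - h, p2, q2)) / (2 * h)
    dHdp2 = (H(p1, q1, p2 + h, q2) - H(p1, q1, p2 - h, q2)) / (2 * h)
    dHdq2 = (H(p1, q1, p2, q2 + h) - H(p1, q1, p2, q2 - h)) / (2 * h)
    return (-dHdq1, dHdp1, -dHdq2, dHdp2)  # (p1dot, q1dot, p2dot, q2dot)

def phase_divergence(p1, q1, p2, q2, h=1e-4):
    # d(p1dot)/dp1 + d(q1dot)/dq1 + d(p2dot)/dp2 + d(q2dot)/dq2
    s  = (F(p1 + h, q1, p2, q2)[0] - F(p1 - h, q1, p2, q2)[0]) / (2 * h)
    s += (F(p1, q1 + h, p2, q2)[1] - F(p1, q1 - h, p2, q2)[1]) / (2 * h)
    s += (F(p1, q1, p2 + h, q2)[2] - F(p1, q1, p2 - h, q2)[2]) / (2 * h)
    s += (F(p1, q1, p2, q2 + h)[3] - F(p1, q1, p2, q2 - h)[3]) / (2 * h)
    return s

print(phase_divergence(0.2, -0.5, 1.1, 0.8))  # ≈ 0
```

Since H is quadratic, the central differences are exact up to roundoff, and the phase-space divergence comes out zero at every point, as the general result demands.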
We can reduce it to this problem: suppose I have a set of (dense) points in n-space. What is the minimum n-space volume containing all those points? How about 3-space first? Well, how about just a set of dense points in the x-y plane first? How can I calculate the minimum area containing those points?
 
  • #7
saltydog said:
Well, how about just a set of dense points in the x-y plane first? How can I calculate the minimum area containing those points?

Can we think about a probability distribution? In that case, it seems that an isoenergy contour would trace the boundary to whatever level of accuracy you wanted. And if you have the Hamiltonian you could just set H to this constant to put a constraint on p and q.

But if they are really points, then I'm not sure...
 
  • #8
Well, I think it's doable. It would be a good programming exercise for someone in Numerical Methods. Consider some "simply-connected" region in the x-y plane and a set of points in the region, like the plot below. Yeah, I know it's a circle, but I wish to start easy. Now, I don't suppose it's too tough to run through the set of points and reduce it to just a set of "peripheral" points on the perimeter (haven't done that yet). Once I get that set, I can use Green's Theorem and numerically estimate the line integral around the perimeter to approximate the area:

[tex]A=\frac{1}{2}\oint_C x\,dy-y\,dx\approx\frac{1}{2}\sum_{i=1}^N \left[x_i(y_{i+1}-y_i)-y_i(x_{i+1}-x_i)\right][/tex]

At least I think that would work. I need to check it first with the circle, and maybe a few others. OK, now in the case of some arbitrary "simply-connected" volume in x-y-z space, just integrate the line integral over z:

[tex]V=\frac{1}{2}\int_{z_0}^{z_1}\oint_{C(z)} (xdy-ydx)dz[/tex]

Can I say that? Intuitively it makes sense to me. To estimate it numerically, I would need to slice the set of points into a set of "shells", calculate the area of each shell, and multiply each by [itex]\Delta z[/itex].

Oh yeah, that'll work. Who's taking Numerical Methods in here and needs a project?:smile:
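For the convex case at least, the "peripheral points" step plus the Green's Theorem sum can be sketched in pure Python (my own sketch; all names are illustrative). Note this extracts the convex hull, which gives the minimum area only when the region is convex; a concave Ω(t) would need something like alpha shapes instead:

```python
import math, random

def convex_hull(points):
    # Andrew's monotone chain: returns hull vertices in CCW order
    points = sorted(set(points))
    if len(points) <= 2:
        return points
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in points:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def shoelace_area(hull):
    # Discrete Green's theorem: A = 1/2 sum x_i(y_{i+1}-y_i) - y_i(x_{i+1}-x_i)
    s = 0.0
    for i in range(len(hull)):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % len(hull)]
        s += x0 * (y1 - y0) - y0 * (x1 - x0)
    return 0.5 * s

# Dense points in the unit disk: the hull area should approach pi
random.seed(1)
pts = []
while len(pts) < 5000:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        pts.append((x, y))
print(shoelace_area(convex_hull(pts)))  # close to math.pi
```

With 5000 random points the hull area comes out just slightly under π, since the hull is inscribed in the circle; denser point sets close the gap.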
 

Attachments

  • set of points.JPG (a set of points filling a circle)

1. What is a measure-preserving flow?

A measure-preserving flow is a mathematical concept used in dynamical systems theory that refers to a continuous transformation of a space that preserves a measure on that space. In other words, the flow does not change the overall size or shape of the space, but rather moves points around while preserving certain properties, such as volume or probability.

2. How is a measure-preserving flow different from other types of flows?

A measure-preserving flow differs from other types of flows, such as volume-preserving or area-preserving flows, in that it preserves a more general type of measure on the space. This measure can be any mathematical quantity that is conserved under the flow, such as energy, mass, or information.

3. What are some real-world examples of measure-preserving flows?

Measure-preserving flows can be found in various physical, biological, and social systems. For example, the flow of a river is measure-preserving as the total amount of water remains constant over time. The movement of particles in a gas or the spread of a disease can also be modeled as measure-preserving flows.

4. How is the concept of measure-preserving flow used in research?

Measure-preserving flows are used in various fields of research, such as physics, biology, economics, and computer science, to study and understand the behavior of complex systems. They can help identify patterns, predict future states, and analyze the stability of a system. They are also used in the development of algorithms for data analysis and machine learning.

5. What are some challenges in studying measure-preserving flows?

One of the main challenges in studying measure-preserving flows is the high dimensionality of many systems of interest. As the number of variables and parameters increases, it becomes more difficult to analyze and understand the behavior of the flow. Additionally, the chaotic nature of some systems can make it difficult to accurately predict and model the behavior of the flow over time.
