2-form and dissipative systems

In summary: for a dissipative system there is no Hamiltonian and Liouville's theorem does not hold. The equations of motion nevertheless define a vector field on phase space, and the Lie derivative of the symplectic form can still be taken along it, giving [tex]L_X\omega=-\gamma\,\omega[/tex]. The corresponding Liouville operator contains an extra probability-preserving term, and the equation reflects the shrinking of phase-space volume; the flow map converts it into an ODE in time. The thread also asks whether there is a dissipative generalization of symplectic evolution in classical mechanics, analogous to the Lindblad-GKS structure in quantum mechanics, but this is not fully resolved.
  • #1
homology
Suppose you have a dissipative system where

[tex] \dot{q}=p/m [/tex]

[tex] \dot{p}=-\gamma p -k\sin(q) [/tex]

So there isn't a Hamiltonian for this system and Liouville's theorem doesn't hold. But the equations of motion still give us a vector field on phase space, and we can still take the Lie derivative of [tex] \omega = dp\wedge dq [/tex] along it.

If I do that I get [tex] L_X \omega = -\gamma\, dp\wedge dq [/tex] (a sketch of the computation is just below), where X is the dynamical vector field. I'm still new to the geometric formulation of mechanics, so the above 'looks' like a differential equation, but I'm not sure what to make of it. I suppose I'd like to see how to use this to interpret [tex] \omega [/tex] as a function of time, but I'm stuck.
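
Sketch of that computation via Cartan's formula, [tex]L_X\omega = d(i_X\omega) + i_X\,d\omega[/tex], using [tex]d\omega=0[/tex]:

[tex] X = \frac{p}{m}\,\frac{\partial}{\partial q} + \left(-\gamma p - k\sin q\right)\frac{\partial}{\partial p}, \qquad i_X\omega = i_X\!\left(dp\wedge dq\right) = \left(-\gamma p - k\sin q\right)dq - \frac{p}{m}\,dp [/tex]

[tex] L_X\omega = d\!\left(i_X\omega\right) = \frac{\partial}{\partial p}\!\left(-\gamma p - k\sin q\right) dp\wedge dq = -\gamma\, dp\wedge dq [/tex]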

Any ideas are welcome.
 
  • #2
For this simple dissipation your Liouville operator is:

[tex]\boldsymbol{\mathcal{L}} = -\frac{p}{m} \frac{\partial}{\partial q} +k \sin(q) \frac{\partial}{\partial p} + \gamma \frac{\partial}{\partial p} p[/tex]

and so in addition to the dissipative force [tex]\gamma p \frac{\partial}{\partial p}[/tex], you have the probability preserving term [tex]\gamma[/tex]. Instead of Liouville's theorem you can apply its generalization: the method of characteristic curves.
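
As a minimal numerical sketch of the characteristic-curve method for this system (the values of m, [tex]\gamma[/tex], k below are illustrative, not taken from the thread): along a trajectory, [tex]\frac{d}{dt}\rho(z(t),t) = -\rho\,\nabla\cdot F = +\gamma\,\rho[/tex], so the density is simply transported with an exponential weight.

[code]
# Sketch: method of characteristics for the dissipative Liouville equation
# d(rho)/dt + div(F rho) = 0 with F(q, p) = (p/m, -gamma*p - k*sin(q)).
# Along a characteristic z(t), rho(z(t), t) = exp(gamma*t) * rho(z(0), 0).
# Parameter values are illustrative only.
import numpy as np

m, gamma, k = 1.0, 0.2, 1.0

def F(z):                      # dynamical vector field, z = (q, p)
    q, p = z
    return np.array([p / m, -gamma * p - k * np.sin(q)])

def rk4_step(z, dt):           # one Runge-Kutta 4 step of dz/dt = F(z)
    k1 = F(z); k2 = F(z + 0.5*dt*k1); k3 = F(z + 0.5*dt*k2); k4 = F(z + dt*k3)
    return z + dt*(k1 + 2*k2 + 2*k3 + k4)/6

def rho0(z):                   # some initial density on phase space (a Gaussian)
    q, p = z
    return np.exp(-(q**2 + p**2)/2) / (2*np.pi)

z0 = np.array([1.0, 0.0])
z, t, dt = z0.copy(), 0.0, 0.01
for _ in range(500):           # follow one characteristic up to t = 5
    z = rk4_step(z, dt)
    t += dt

rho_t = np.exp(gamma * t) * rho0(z0)   # density transported along the characteristic
print(t, z, rho_t)             # rho grows by e^{gamma t} as phase-space volume shrinks
[/code]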

I am not sure (off the top of my head) what it means to do what you are doing as dissipative mechanics is not symplectic and you are working with a symplectic form. I would also be interested if anyone knows the dissipative generalization of symplectic evolution in classical mechanics. In quantum mechanics, the group structure is much simpler and this is known.
 
  • #3
Okay, well your Liouville operator corresponds to the vector field I'm differentiating [tex]\omega[/tex] along. I think I understand your objection to using [tex]\omega[/tex]; however, I can't see why there still wouldn't be such a two-form on the phase space, Hamiltonian or no Hamiltonian. Clearly dq and dp are one-forms, so why wouldn't their wedge product exist?
 
  • #4
What you have constructed exists - I don't question that. It's the interpretation I wonder about. The fact that it doesn't vanish along your dynamical vector field is probably telling you that the evolution here is not a volume-preserving symplectic transformation.

Moreover, if you omit the probability-preserving [tex]\gamma[/tex] term from the correct Liouville operator, so as to obtain a purely differential operator, then that differential part of the Liouville operator would actually be shrinking phase-space volume. That is probably what corresponds to the Lie-derivative equation you have obtained, [tex]L_X \omega = -\gamma \omega[/tex], which looks like exponential decay.
 
  • #5
So do you know if we can 'do' anything with such a 'differential equation'?

I think we both agree that omega exists and gives area, but that area is time-dependent. I'd like to compute that. So given some time t and vectors v and w, I'd like to know how omega evaluates on v and w at time t, and also how that area changes with time.

My first thought is to evaluate the 'differential' equation on some vectors and perhaps obtain a differential equation for the area?
 
  • #6
I believe you can use the flow map to convert it to an ODE in time.
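
A rough sketch of one way to realize this numerically, assuming 'flow map' means the map [tex]\phi_t[/tex] generated by the dynamical vector field: its Jacobian [tex]J(t)=D\phi_t[/tex] satisfies the variational equation [tex]\dot{J}=F'(z)\,J[/tex], and the pulled-back form picks up the factor [tex]\det J(t)[/tex], which should come out as [tex]e^{-\gamma t}[/tex]. Parameter values are illustrative.

[code]
# Sketch: the flow map's Jacobian J(t) = D(phi_t) from the variational equation
# dJ/dt = F'(z) J.  Then (phi_t^* omega)(u, v) = omega(J u, J v) = det(J) * omega(u, v),
# and det J(t) should equal exp(-gamma * t).  Illustrative parameters.
import numpy as np

m, gamma, k = 1.0, 0.2, 1.0

def F(z):                      # z = (q, p)
    q, p = z
    return np.array([p / m, -gamma * p - k * np.sin(q)])

def dF(z):                     # Jacobian F'(z) of the vector field
    q, p = z
    return np.array([[0.0, 1.0 / m],
                     [-k * np.cos(q), -gamma]])

def rhs(state):                # combined ODE for (z, J)
    z, J = state[:2], state[2:].reshape(2, 2)
    return np.concatenate([F(z), (dF(z) @ J).ravel()])

state = np.concatenate([[1.0, 0.0], np.eye(2).ravel()])
t, dt = 0.0, 0.001
for _ in range(5000):          # crude Euler integration up to t = 5
    state = state + dt * rhs(state)
    t += dt

J = state[2:].reshape(2, 2)
print(np.linalg.det(J), np.exp(-gamma * t))   # these should nearly agree
[/code]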
 
  • #7
Do you have a favorite reference for learning about flow maps? I'm not familiar with the term; or do you mean the flow obtained from the equations of motion?

By the way, I appreciate the time you've taken to engage in this thread, thank you.
 
  • #8
I come from a dissipative quantum mechanics background, so my only good reference to what you are doing is a fine book called "The Geometry of Physics" by Theodore Frankel. But, in brief the flow map is simply the map generated by your dynamical vector field. The dynamical vector field belongs to the Lie algebra, whereas the flow map belongs to the Lie group. Perhaps you know the flow map by another name. In QM I would simply call it the propagator or transition matrix (for the density matrix).

And no problem about engagement, I am weak on exterior forms, so this is making me think about things I know in a different context.
 
  • #9
I have that book! I'll have to check it out this weekend. But it does sound like yes, the flow you're talking about is the 'solution' to the dynamical vector field.
 
  • #10
C. H. Fleming said:
I would also be interested if anyone knows the dissipative generalization of symplectic evolution in classical mechanics. In quantum mechanics, the group structure is much simpler and this is known.
What about the entry ''Dissipative dynamics and Lagrangians'' of Chapter A1 of my theoretical physics FAQ at http://arnold-neumaier.at/physfaq/physics-faq.html#dissslag ?
 
  • #11
A. Neumaier said:
What about the entry ''Dissipative dynamics and Lagrangians'' of Chapter A1 of my theoretical physics FAQ at http://arnold-neumaier.at/physfaq/physics-faq.html#dissslag ?

I am thinking of a much deeper question that I did not sufficiently specify.

In quantum mechanics, closed systems have unitary evolution with anti-Hermitian generators ([tex]-\imath \mathbf{H}[/tex]), whereas dissipative evolution is completely positive, with Lindblad-GKS generators ([tex]\mathcal{L}[/tex]). (Here I am specifically talking about the algebraic generators; the time-translation generators, i.e. the master equation, are only of Lindblad form in the Markovian regime. For unitary dynamics this distinction is irrelevant, even though the generators there are also not always equivalent.)

For classical physics the closed system evolution is symplectic, but what characterizes the dissipative evolution? Is there, or can there be, something like the Lindblad-Gorini-Kossakowski-Sudarshan theorem for the generators of classical dissipative evolution? I don't do much classical physics, so I have never attempted to derive such a thing, but I should probably sit down and try one day.
 
  • #12
C. H. Fleming said:
I am thinking of a much deeper question that I did not sufficiently specify.

In quantum mechanics, closed systems have unitary evolution with anti-Hermitian generators ([tex]-\imath \mathbf{H}[/tex]), whereas dissipative evolution is completely positive, with Lindblad-GKS generators ([tex]\mathcal{L}[/tex]). (Here I am specifically talking about the algebraic generators; the time-translation generators, i.e. the master equation, are only of Lindblad form in the Markovian regime. For unitary dynamics this distinction is irrelevant, even though the generators there are also not always equivalent.)

For classical physics the closed system evolution is symplectic, but what characterizes the dissipative evolution? Is there, or can there be, something like the Lindblad-Gorini-Kossakowski-Sudarshan theorem for the generators of classical dissipative evolution?
Yes. The classical analogue (and in many cases, a suitable classical limit) of a quantum dynamical semigroup is a Markov process.

The most general Markov processes are a combination of diffusion processes and jump processes. See the differential Chapman-Kolmogorov equation in Gardiner's Handbook of Stochastic Methods, equation (3.4.22) in the second edition. The derivation there is at the level of rigor of theoretical physics, but it is very likely that there is a fully rigorous version of this result in terms of measure-theoretic stochastic processes.
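
(For reference, the differential Chapman-Kolmogorov equation referred to here combines a drift part, a diffusion part, and a jump part; schematically, with coefficients that may also depend on time,

[tex] \frac{\partial}{\partial t} p(z,t) = -\sum_i \frac{\partial}{\partial z_i}\!\left[ A_i(z)\, p(z,t) \right] + \frac{1}{2}\sum_{i,j} \frac{\partial^2}{\partial z_i \partial z_j}\!\left[ B_{ij}(z)\, p(z,t) \right] + \int \! \left[ W(z|z')\,p(z',t) - W(z'|z)\,p(z,t) \right] dz' [/tex]

with drift vector A, diffusion matrix B, and jump kernel W.)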

Should you or someone else find a reference to such a mathematical presentation, I'd be interested.
 
  • #13
A. Neumaier said:
Yes. The classical analogue (and in many cases, a suitable classical limit) of a quantum dynamical semigroup is a Markov process.

I don't want to limit the scope to the dynamical semigroup, so the stochastic process may be non-Markovian and the quantum dynamical generator might not be of Lindblad-GKS form. The algebraic generator (which is not equivalent to the dynamical generator given time dependence) should still satisfy the Lindblad-GKS form. (I believe Kossakowski recently had a paper where he studied some of this in the Laplace domain. I also have a paper, coming out soon, looking at this perturbatively.) The reason this happens is that Lindblad-GKS derives directly from Choi's (group) theorem on CP maps, and so the Lindblad-GKS theorem is more fundamentally a theorem about algebra. The Markovian dynamics result (which is more widely appreciated) then derives from that.

So really, I am asking about the underlying Lie group and algebraic generators of dissipative classical mechanics. Choi and Lindblad-GKS characterize both for quantum mechanics. That is what I compare to the Hamiltonian and its unitary evolution. What characterizes dissipative classical mechanics? That is what I would compare to the symplectic evolution if I knew it. It would be something that is not (phase-space) volume preserving, but still probability preserving. When the question was posed about applying symplectic thinking to a dissipative system, my reaction was one of hesitance, because the evolution is not symplectic. That was why I made the comment that started this. I wish I knew the better algebra to think about.

The quantum correspondence in application between the algebraic and Markovian dynamics of the Lindblad-GKS theorem could be useful classically... if it still exists. Even then, I don't remember a Markovian classical theorem like Lindblad-GKS which says "the Liouvillian can only take the form ... in terms of [tex]x[/tex], [tex]\frac{\partial}{\partial x}[/tex], ...". Lindblad-GKS is extremely robust because it doesn't care what the model is, how you introduced the stochastic process, etc. I will look back over Gardiner though. Thanks for the recommendation.
 
  • #14
C. H. Fleming said:
I don't want to limit the scope to the dynamical semigroup, so the stochastic process may be non-Markovian and the quantum dynamical generator might not be of Lindblad-GKS form. The algebraic generator (which is not equivalent to the dynamical generator given time dependence) should still satisfy the Lindblad-GKS form. (I believe Kossakowski recently had a paper where he studied some of this in the Laplace domain. I also have a paper, coming out soon, looking at this perturbatively.) The reason this happens is that Lindblad-GKS derives directly from Choi's (group) theorem on CP maps, and so the Lindblad-GKS theorem is more fundamentally a theorem about algebra. The Markovian dynamics result (which is more widely appreciated) then derives from that.

So really, I am asking about the underlying Lie group and algebraic generators of dissipative classical mechanics. Choi and Lindblad-GKS characterize both for quantum mechanics.
Could you please refer to an online source or journal paper on ''Choi's (group) theorem on CP maps'', so that I can see what you mean by ''the underlying Lie group'' in the quantum case?

C. H. Fleming said:
I don't remember a Markovian classical theorem like Lindblad-GKS which says "the Liouvillian can only take the form ... in terms of [tex]x[/tex], [tex]\frac{\partial}{\partial x}[/tex], ...". Lindblad-GKS is extremely robust because it doesn't care what the model is, how you introduced the stochastic process, etc. I will look back over Gardiner though.
Gardiner does precisely that, though only for Markov processes. I need to see the non-semigroup quantum version to be able to connect it (perhaps) to some classical statement.
 
  • #15
A. Neumaier said:
Could you please refer to an online source or journal paper on ''Choi's (group) theorem on CP maps'', so that I can see what you mean by ''the underlying Lie group'' in the quantum case?

Consider the dynamical map:

[tex]\boldsymbol{\rho}(t) = \boldsymbol{\mathcal{G}}(t,0) \boldsymbol{\rho}(0)[/tex]

where [tex]\boldsymbol{\rho}(t)[/tex] is the density matrix of the reduced system at time [tex]t[/tex], and we have a non-unitary theory (likely with a traced-out environment) such that we can consider any initial state [tex]\boldsymbol{\rho}(0)[/tex]; we might also later include ancillary degrees of freedom (e.g. external entanglement).

With very few assumptions, [tex]\boldsymbol{\mathcal{G}}(t,0)[/tex] are completely positive (CP) maps, or semigroup elements. Then consider Choi's theorem on CP maps, which immediately characterizes them:

http://en.wikipedia.org/wiki/Choi%27s_theorem_on_completely_positive_maps

The (algebraic) generators of these semigroup elements are given by the Lindblad-GKS theorem, though it is usually only useful in the Markovian regime, where the algebraic and dynamical generators are equivalent. (Otherwise, one cannot extract very much from Lindblad-GKS.) If you refer back to the original papers of Lindblad and of Gorini, Kossakowski and Sudarshan, you will see that they refer back to Choi. Choi's theorem describes the semigroup; Lindblad-GKS then describes the algebra which generates it. It's all a very beautiful structure. (I have almost finished writing a review paper in which I try to explain these less-discussed, and less-applied, details.)
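
(For readers following along: in the Markovian regime, where the algebraic and dynamical generators coincide, the Lindblad-GKS form referred to here is the standard master-equation form, written with [tex]\hbar=1[/tex] and arbitrary operators [tex]\mathbf{L}_k[/tex]:

[tex] \frac{d\boldsymbol{\rho}}{dt} = -\imath\left[\mathbf{H},\boldsymbol{\rho}\right] + \sum_k \left( \mathbf{L}_k \boldsymbol{\rho} \mathbf{L}_k^\dagger - \tfrac{1}{2}\,\mathbf{L}_k^\dagger \mathbf{L}_k \boldsymbol{\rho} - \tfrac{1}{2}\,\boldsymbol{\rho}\, \mathbf{L}_k^\dagger \mathbf{L}_k \right) [/tex]

The first term is the unitary part; the sum is the dissipative part.)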

I think classically one would have to think along the lines of non-symplectic flow maps and their generators. (Flow would then be a misnomer.) I am unfamiliar with what kind of structure these non-symplectic maps would be constrained to have. Maybe it is something simple that every classical physicist knows. My knowledge of symplectic manifolds is very weak.
 
  • #16
C. H. Fleming said:
Then consider Choi's theorem on CP maps, which immediately characterizes them:
http://en.wikipedia.org/wiki/Choi%27s_theorem_on_completely_positive_maps
[...]
I think classically, one would have to think akin to non-symplectic flow maps and their generators. (Flow would then be a misnomer.) I am unfamiliar with what kind of structure these non-symplectic maps would be constrained to have. Maybe it is something simple that every classical physicist knows. My knowledge of symplectic manifolds is very weak.

The classical version of Choi's theorem says that a linear mapping from C^n to C^m that maps real nonnegative vectors to real nonnegative vectors is given by an m x n matrix with nonnegative entries. (The proof is straightforward.) These form a semigroup. Their infinitesimal generators are the matrices that are off-diagonally nonnegative.

The associated dynamical semigroups that preserve the trace (i.e., the sum of the entries) are the Markov chains, while the more general version you are after seems to be Markov chains with arbitrarily long memory. In infinite dimensions, and assuming appropriate topologies, you get in place of a Markov chain a combined jump-and-diffusion process, and presumably the more general version is an arbitrary stochastic process. But since you did not specify the quantum version precisely enough, I can't tell.
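
A small numerical illustration of the generator statement above, with an arbitrary 3-state generator chosen purely for the example:

[code]
# A generator Q that is off-diagonally nonnegative, with columns summing to zero,
# generates a semigroup exp(t*Q) of matrices with nonnegative entries and unit
# column sums -- a continuous-time Markov chain, i.e. positivity- and
# probability-preserving evolution.  The particular Q below is arbitrary.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.3,  0.2,  0.0],
              [ 0.3, -0.5,  0.4],
              [ 0.0,  0.3, -0.4]])   # off-diagonal entries >= 0, columns sum to 0

G = expm(2.0 * Q)                    # semigroup element at t = 2
p0 = np.array([1.0, 0.0, 0.0])       # an initial probability vector
p2 = G @ p0

print(G.min() >= 0)                  # all entries nonnegative
print(G.sum(axis=0))                 # column sums stay equal to 1
print(p2, p2.sum())                  # probabilities remain normalized
[/code]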

Symplecticity never enters. It is present only when one specifies a Heisenberg algebra of distinguished operators, which provides a symplectic phase-space structure.
 
  • #17
You will have to excuse my ignorance. Why are we mapping between nonnegative vectors? Naively I would imagine the starting point to be norm-preserving positive linear maps between positive functions of the phase-space coordinates (i.e. density functions instead of density matrices). Is there some representation that I am missing?

I also need to think about classical correlations to ancillary degrees of freedom and whatever the analog to complete positivity would be, if any. With the Markov chain that doesn't seem to matter.
 
  • #18
C. H. Fleming said:
Why are we mapping between nonnegative vectors?
Because Choi is mapping between positive semidefinite matrices. This corresponds to n-level quantum systems (whose classical analogue is an n-state probability space), not to particles moving in space. In the latter case, you'd have in place of positive semidefinite matrices integral operators on L^2(R^3) with nonnegative kernels.
I don't know whether the analogue of Choi's theorem has been proved rigorously. Certainly the corresponding Lindblad operators are used in quantum optics.

C. H. Fleming said:
Naively I would imagine the starting point to be norm-preserving positive linear maps between positive functions of the phase-space coordinates (i.e. density functions instead of density matrices). Is there some representation that I am missing?
You need to select a quantum system to start with. This requires the choice of an algebra of operators. In Choi's case the algebra is C^{n x n}. The classical version corresponds to restricting to a maximal commuting subalgebra - wlog the algebra of diagonal matrices. This is equivalent to taking the algebra C^n with pointwise operations. Whence the setting I was using.

If you want to consider classical phase space, you'd have to think of it as the classical analogue of the quantum algebra of linear operators on L^2(R^3).

C. H. Fleming said:
I also need to think about classical correlations to ancillary degrees of freedom and whatever the analog to complete positivity would be, if any. With the Markov chain that doesn't seem to matter.
This is too cryptic to make sense to me.
 
  • #19
A. Neumaier said:
Because Choi is mapping between positive semidefinite matrices. This corresponds to n-level quantum systems (whose classical analogue is an n-state probability space), not to particles moving in space. In the latter case, you'd have in place of positive semidefinite matrices integral operators on L^2(R^3) with nonnegative kernels.
I don't know whether the analogue of Choi's theorem has been proved rigorously. Certainly the corresponding Lindblad operators are used in quantum optics.

I believe the generalization of Choi's theorem is Stinespring's theorem (although I think it is a touch too general). As you already seem to know, people do apply Lindblad's theorem to systems with unbounded operators, and the end result looks the same. This is typically safe. I think Davies first worked this out in Rep. Math. Phys. '77, but ScienceDirect is down for maintenance so I can't pull up the paper. I believe the unbounded proof was incrementally fine-tuned by several other papers that I don't know off the top of my head.

A. Neumaier said:
This is too cryptic to make sense to me.

Yes, it's ubiquitous in QM, but I've never seen talk of it in classical physics, so perhaps it is irrelevant.

Say you have some positive maps [tex]\mathcal{G}[/tex] between density functions [tex]\rho[/tex] on [tex]2n[/tex]-dimensional phase-space with coordinates [tex]z[/tex], and parametrized by time:

[tex]\rho(z,t) = \mathcal{G}(t,0) \rho(z,0)[/tex]

Then say you add an additional [tex]2m[/tex] degrees of freedom to phase space and consider the trivially extended maps

[tex]\mathcal{G}(t,0) \otimes 1[/tex]

between arbitrary density functions on the [tex]2(n+m)[/tex]-dimensional phase space. Then will this map also be positive on the higher-dimensional phase space? In QM the answer is not necessarily, and that's why you have to invoke Choi's theorem in the first place. Now that I think about it more, the classical answer is more trivial: all positive maps are completely positive. You just have to invoke the fact that the density function is everywhere non-negative and that the maps are norm-preserving. So indeed, this was irrelevant for me to think about.

Also I would add that in the quantum CP generators, as you likely know, you can see the unitary part. It would seem strange to me that in the classical CP generators, you could not see the symplectic part.
 
  • #20
C. H. Fleming said:
the classical answer is more trivial: all positive maps are completely positive.
Yes.

C. H. Fleming said:
Also I would add that in the quantum CP generators, as you likely know, you can see the unitary part. It would seem strange to me that in the classical CP generators, you could not see the symplectic part.
No. The classical equivalent of a unitary operator is a bijection of the state space.

Symplecticity is classically expressed by the CCR for p and q in the Poisson bracket, and hence quantum mechanically by the Heisenberg CCR.

In general, the structure of a classical or quantum theory is determined by a distinguished Lie algebra of operators. (See my book http://lanl.arxiv.org/abs/0810.1019 for a deeper discussion.) Without that, Hilbert spaces are far too structureless - just one space for each cardinality of the basis. And most physical systems live in a separable, infinite-dimensional Hilbert space, of which there is only a single one.
 
  • #21
A. Neumaier said:
No. The classical equivalent of a unitary operator is a bijection of the state space.

So there are volume preserving flows which cannot be described by a Hamiltonian? I have been thinking about Hamiltonian motion instead of maps between pure states like I should. In QM they are equivalent, but I've never thought about it classically.
 
  • #22
C. H. Fleming said:
So there are volume preserving flows which cannot be described by a Hamiltonian?
Yes. Hamiltonian flow conserves much more than phase space volume.

The analogy is:
unitary mapping -- bijection
commutator bracket -- Poisson bracket

The Hamiltonian defines the flow only given the bracket. But the bracket is not an intrinsic part of the space. Indeed, one can have, even in the quantum case, [x,y]=xuy-yux, which satisfies the Leibniz and Jacobi identities for any u. In some systems, taking u distinct from the canonical choice i/hbar simplifies the presentation...
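
(A quick numerical check of that claim, reading 'Leibniz' as the derivation property with respect to the u-deformed product x*y := xuy, which is this sketch's assumption; u, x, y, z are random matrices standing in for operators:)

[code]
# Check that [x, y]_u = x u y - y u x satisfies the Jacobi identity for any fixed u,
# and the Leibniz rule with respect to the deformed product x*y := x u y
# (that reading of "Leibniz" is an assumption of this sketch).
import numpy as np

rng = np.random.default_rng(0)
u, x, y, z = (rng.standard_normal((4, 4)) for _ in range(4))

def bracket(a, b):            # the u-deformed bracket
    return a @ u @ b - b @ u @ a

def prod(a, b):               # the u-deformed associative product
    return a @ u @ b

jacobi = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
leibniz = bracket(x, prod(y, z)) - prod(bracket(x, y), z) - prod(y, bracket(x, z))

print(np.max(np.abs(jacobi)))   # numerically zero: Jacobi identity holds
print(np.max(np.abs(leibniz)))  # numerically zero: Leibniz rule w.r.t. the deformed product
[/code]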
 
  • #23
Great progress. I see now: it is obvious that Hamiltonian motion is not necessary for the preservation of phase-space volume in the evolution of [tex]\rho[/tex]; it is only sufficient.

I have been thinking about this with the wrong framework: phase space instead of the cotangent space of the symplectic manifold. First I think I should consider symplectomorphisms between symplectic manifolds (which automatically preserve probability) as the analogy I desire to relate to unitary transformations.

Then maybe I should consider relaxing the bijective condition of the diffeomorphism (while retaining probability preservation) as the analogy to Lindblad-GKS. I need to think about this much more.
 
  • #24
I have thought about this some more. Unitary maps are analogous to symplectomorphic maps. Neither group consists of mere bijections; they are moreover structure-preserving isomorphisms. There are bijections between state vectors in Hilbert space which are not unitary, but those do not preserve the state overlap.

However, for dissipative classical mechanics, I believe you have steered me towards the correct path. One should consider stochastic matrices which act as the positive maps between state vectors, though translating that into the context of phase space, which would naturally involve some functional analysis, seems like it might be tricky. I will think about that some more.
 
  • #25
C. H. Fleming said:
I have thought about this some more. Unitary maps are analogous to symplectomorphic maps.
But this is the wrong analogy. The classical symplectic structure is tied to the existence of conjugate observables satisfying canonical Poisson bracket relations. Thus any analogous quantum system must have corresponding conjugate observables satisfying the CCR. To any real Hilbert space V one can then canonically associate a symplectic phase space V x V with symplectic form omega(p,q) = p^*q - q^*p, and a Hilbert space L^2(V); V = R^3 gives the 1-particle case. In this _particular_ situation, unitary maps are analogous to symplectomorphic maps.

But if your Hilbert space is C^n, there are no conjugate operators, and your analogy breaks down. The corresponding classical ''phase space'' (if one may call it that) is a discrete probability space with n elementary events.
 
  • #26
I hope it's not too out of line to pop back in with something lower key. It's been great following the material you guys have been writing. I was hoping to put some more dots together with the calculation.

I was thinking about the flow [tex]\phi:T^*M\to T^*M[/tex] obtained from the vector field X (given in the original post). It induces a pullback [tex]\phi^*:T^*(T^*M)\to T^*(T^*M)[/tex], and so if [tex]\omega=dp\wedge dq[/tex] and if, at a point, we have a couple of tangent vectors [tex]\vec{u},\vec{v}\in T(T^*M)[/tex] which span an area A, then we could write:

[tex](\phi^*\omega)(\vec{u},\vec{v})=\omega(\phi_*\vec{u},\phi_*\vec{v})[/tex]

We also know that the Lie derivative of [tex]\omega[/tex] along X is given by:

[tex]L_X\omega=-\gamma \omega[/tex]

I feel close and a little dumb at the moment. The Lie derivative is clearly giving us some measure of the time evolution of [tex]\omega[/tex], just as the pullback is pushing the area spanned by the tangent vectors along the vector field, 'forward in time'. Is it correct to say that

[tex]L_X\omega=\dot{\omega}[/tex]

If so, we'd get the expression [tex]\omega(t)=e^{-\gamma t}\,dp\wedge dq[/tex], and then

[tex]A(t+\Delta t)-A(t)=A(t)(e^{-\gamma\Delta t}-1)[/tex]

divide both sides by [tex]\Delta t[/tex],


[tex]\frac{A(t+\Delta t)-A(t)}{\Delta t}=A(t)\frac{(e^{-\gamma\Delta t}-1)}{\Delta t}[/tex]

In the limit [tex]\Delta t\to 0[/tex], the quotient on the right goes to [tex]-\gamma[/tex] so we end up with,

[tex]\frac{dA}{dt}=-\gamma A[/tex], which is what we'd expect.

My goal is to do this without having to solve for the flow (which would involve the usual elliptic-integral mishmash).
 
  • #27
homology said:
if [tex]\omega=dp\wedge dq[/tex] [...]
Is it correct to say that
[tex]L_X\omega=\dot{\omega}[/tex]
Probably not. This doesn't follow from what you assumed so far, hence would be a new assumption.

It is not clear at all what you are doing and what you want to achieve in posts #1 and #26. If you work on the level of forms, you can't treat omega as time-dependent - neither are p and q. Omega just served to define the Poisson bracket, and then only a Hamiltonian (which you didn't specify at all) would define a dynamics for p(t) and q(t) - which has almost nothing to do with the p and q of the forms.

But the standard recipe then gives a conservative system while your system isn't conservative. So you don't have a Hamiltonian.

Why do you want to force your system into a symplectic framework? If you want to do so, you first need to generalize the conservative dynamical equation [tex]\dot{f}=\{f,H\}[/tex] so that it has a chance to describe your system.
 
  • #28
True, sorry about the omega definition, that was sloppy. Perhaps [tex]\omega=w(t)\,dp\wedge dq[/tex] where [tex]w(0)=1[/tex]?

I want to see how the area/volume of phase space evolves for a dissipative system. Certainly there is a two-form [tex]\omega=w(t)\,dp\wedge dq[/tex], though it's not going to be the usual 2-form. I don't see why I need a Hamiltonian; there isn't one for this system. However, we should still be able to talk about the area spanned by a set of tangent vectors. That area is going to change over time, and so the area 2-form should be time-dependent, no?
 
  • #29
homology said:
True, sorry about the omega definition, that was sloppy. Perhaps [tex]\omega=w(t)\,dp\wedge dq[/tex] where [tex]w(0)=1[/tex]?

I want to see how the area/volume of phase space evolves for a dissipative system. Certainly there is a two-form [tex]\omega=w(t)\,dp\wedge dq[/tex], though it's not going to be the usual 2-form. I don't see why I need a Hamiltonian; there isn't one for this system. However, we should still be able to talk about the area spanned by a set of tangent vectors. That area is going to change over time, and so the area 2-form should be time-dependent, no?
No. Your omega is not the area between two tangent vectors in phase space. z= (p,q) denotes a single curve in phase space following your dissipative equation. It has only a single tangent vector at each particular time.

What you are looking for is how, given a set Omega in phase space, the volume (=area) of the set of all points z(t) with z(0) in Omega changes with time.

If your equation is dz/dt=F(z), the infinitesimal volume change factor is the trace of F'(z). So an infinitesimal volume A close to a trajectory z changes according to the differential equation dA/dt = tr F'(z) A
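
A minimal sketch of this recipe for the system at hand (illustrative parameters; here tr F'(z) = -gamma is constant, but the same scheme works when the divergence depends on z):

[code]
# Integrate dz/dt = F(z) together with dA/dt = tr F'(z) * A for a small comoving area A.
# For this system tr F'(z) = d(p/m)/dq + d(-gamma*p - k*sin q)/dp = -gamma.
import numpy as np

m, gamma, k = 1.0, 0.2, 1.0

def F(z):                      # z = (q, p)
    q, p = z
    return np.array([p / m, -gamma * p - k * np.sin(q)])

def trace_dF(z):               # divergence of the vector field
    return -gamma              # constant here; in general it depends on z

z, A, t, dt = np.array([1.0, 0.0]), 1.0, 0.0, 0.001
for _ in range(5000):          # Euler integration up to t = 5
    z, A, t = z + dt * F(z), A + dt * trace_dF(z) * A, t + dt

print(A, np.exp(-gamma * t))   # these should nearly agree
[/code]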
 
  • #30
(1) Only a single tangent vector? Why aren't the tangent spaces 2D? I also don't see why omega is no longer the area when it was before. Or perhaps it's more accurate to say I don't see why the absence or presence of a Hamiltonian changes the interpretation of omega in terms of area/volume.

(2) So then for my system:

[tex]

\frac{d}{dt}\begin{pmatrix} p\\ q \end{pmatrix} =\begin{pmatrix} -\gamma & ??\\ 1/m & 0\end{pmatrix} \begin{pmatrix} p \\ q \end{pmatrix}

[/tex]

The system is nonlinear; how should I represent F?

When you say F', what is it with respect to?

Regarding the particular things you're saying at the end of your last post (taking the trace of F', etc.): is there a place I can find more on this? Otherwise I'm going to have to ask a number of questions, which may become tiring for you.

Cheers (and thanks!)
 
  • #31
homology said:
(1) only a single tangent vector? Why aren't the tangent spaces 2D?
They are. But curves in an n-dimensional manifold have tangents that are vectors in the n-dimensional tangent spaces. Here n=2.
homology said:
I also don't see why omega is no longer the area when it was before.
omega was never an area. It is a volume form, which means that (without your prefactor) omega(u,v) is the area of the parallelogram with vertices 0, u, v, and u+v.
homology said:
So then for my system: [...] The system is nonlinear, how should I represent F?
In a dynamical system, F(z) is a nonlinear map. For your system, F(z) is the vector with components -gamma z_1 - k sin z_2 and z_1/m. People also write div F or nabla dot F for trace F'.
homology said:
The particular stuff you're saying at the end of your last post (taking the trace of F' etc) is there a place I can find more on this? Otherwise I'm going to have to ask a number of questions which may become tiring for you.
I don't know where to find it; I never look up these elementary things. Maybe others can help out.
 
  • #32
A. Neumaier said:
Curves in an n-dimensional manifold have tangents that are vectors in the n-dimensional tangent spaces. Here n=2.

omega was never an area. It is a volume form, which means that (without your prefactor) omega(u,v) is the area of the parallelogram with vertices 0, u, v, and u+v.

Okay, I probably have been careless in associating omega with area/volume. But in one of my previous posts I did express the area as [tex]\omega(\vec{u},\vec{v})[/tex], where u, v are tangent vectors at some point (not tangent to the curve, just tangent to the phase space, and so giving some notion of area at that point).

A. Neumaier said:
In a dynamical system, F(z) is a nonlinear map. For your system, F(z) is the vector with components -gamma z_1 - k sin z_2 and z_1/m. People also write div F or nabla dot F for trace F'.

I don't know where to find it; I never look up these elementary things. Maybe others can help out.

Okay,

[tex]
\frac{d}{dt}\begin{pmatrix} p \\ q\end{pmatrix} = \begin{pmatrix} -\gamma p - k\sin(q) \\ p/m\end{pmatrix}
[/tex]

So the divergence, is this just [tex](\partial/\partial p, \partial/\partial q)[/tex]? If so then that would give me [tex]\nabla\cdot F = -\gamma[/tex]

If it's terribly elementary then perhaps I just know it in a different context? My department doesn't do anything geometrical, so I do this on the side, slowly, very slowly. Is there a name or term for F? Or for F', or div(F)?

I mean, div(F) is the divergence of the dynamical vector field, which, if the system were Hamiltonian, should be zero, correct?
 
  • #33
homology said:
So the divergence, is this just [tex](\partial/\partial p, \partial/\partial q)[/tex]?
No. It is dF_1/dp + dF_2/dq. You seem to have used that but what you wrote is quite different. You need to take much more care in writing formulas.
homology said:
If so then that would give me [tex]\nabla\cdot F = -\gamma[/tex]
Yes. Thus [tex]dA/dt = -\gamma A[/tex]
homology said:
I mean, div(F) is the divergence of the dynamical vector field, which, if the system was Hamiltonian, should be zero correct?
Yes. Prove it for a general Hamiltonian system, as an exercise!
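
(For later readers, the computation the exercise asks for: writing z = (q, p) and using Hamilton's equations,

[tex] F(z) = \left(\frac{\partial H}{\partial p},\; -\frac{\partial H}{\partial q}\right) \quad\Rightarrow\quad \nabla\cdot F = \frac{\partial^2 H}{\partial q\,\partial p} - \frac{\partial^2 H}{\partial p\,\partial q} = 0 [/tex]

so the divergence of any Hamiltonian vector field vanishes identically.)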
 
  • #34
Apologies for the careless verbiage and notation, and gratitude for your help. I'll work on this and then post something coherent :)
 
  • #35
homology said:
Is it correct to say that

[tex]L_X\omega=\dot{\omega}[/tex]

You want to say this

[tex]\phi^* L_X \omega = \frac{d}{dt} \phi^* \omega[/tex]

The flow has time dependence, not the symplectic form, as Arnold already mentioned.

From this relation you probably immediately see

[tex]\frac{d}{dt} \phi^* \omega = - \gamma \phi^* \omega[/tex]

which is the ODE you were looking for. I figured this out a while back, but I thought I would give you a chance to work it out yourself.
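
Integrating that ODE with the initial condition [tex]\phi_0^*\omega=\omega[/tex] gives, consistent with the area law found above,

[tex] \phi_t^*\,\omega = e^{-\gamma t}\,\omega, \qquad A(t) = (\phi_t^*\omega)(\vec{u},\vec{v}) = e^{-\gamma t}\,\omega(\vec{u},\vec{v}) = e^{-\gamma t}\,A(0). [/tex]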
 

1. What is a 2-form?

A 2-form is an antisymmetric, bilinear differential form. On a two-dimensional phase space with coordinates q and p, the basic example is [tex]\omega = dp\wedge dq[/tex]: fed two tangent vectors, it returns the signed area of the parallelogram they span.

2. How does a 2-form differ from a 1-form?

A 1-form (such as dp or dq) takes a single tangent vector and returns a number, while a 2-form takes two tangent vectors and returns an oriented area. The wedge product of two 1-forms is a 2-form.

3. What is the significance of dissipative systems?

Dissipative systems lose energy over time, typically through friction or other forms of resistance. They are important for understanding real-world behavior, since dissipation explains why many systems settle towards equilibria or attractors; as discussed in this thread, they generally lack a Hamiltonian and do not conserve phase-space volume.

4. How do dissipative systems relate to chaos theory?

Dissipative systems can still be chaotic: although phase-space volume contracts, nearby trajectories may separate exponentially on a strange attractor, so small changes in initial conditions can lead to vastly different outcomes. Volume contraction and sensitivity to initial conditions are compatible.

5. Can the 2-form [tex]dp\wedge dq[/tex] be used for dissipative systems?

Yes. The 2-form exists whether or not a Hamiltonian does; what changes is that the flow no longer preserves it. Its Lie derivative along the dissipative vector field, [tex]L_X\omega=-\gamma\,\omega[/tex], encodes the exponential contraction of phase-space area discussed in this thread.
