What causes the arrow of time ?

In summary: PP... is most likely due to the fact that Landau considered it so obvious. He argues that the interaction with a classical or macroscopic system is all that is needed to derive the PP. This is essentially what Penrose and Prigogine do, but they go further and argue that the irreversibility in classical statistical mechanics comes about from the very specific initial condition, which is highly improbable.

  • #36
Careful said:
Hi Vanesch, I do not think so. Gravitation will make matter clump together and lower the entropy of the matter degrees of freedom (unless you start out from a highly idealized stable state already).

? I don't see why this decreases entropy. The volume decreases all right, but the kinetic energy of the particles increases ; assuming there are radiation degrees of freedom (without exactly being Maxwell, because I want to stay Newtonian and leave relativity out for the sake of argument), you get emission of thermal radiation that way, and the "cloud + no radiation" might very well have a much lower entropy than the "lump of clustered matter + radiation".
But I really didn't want to observe this from a "cosmological" POV (although one is ultimately always led there).

The gravitational contraction you are talking about here could be replaced by a balloon that is stretched by many strings attached to the inside of a hollow metal sphere, so as to be "under pressure". Do you think that cutting the strings, hence having the gas inside compressed by the elasticity of the balloon (exactly as gravity does), LOWERS the entropy of the system ? Wouldn't think so!

What I mean is: where does the second law come from in classical thermodynamics ? It comes from the observation that "heat" goes from "hotter" to "colder" objects and that it is "impossible" (in fact, STATISTICALLY IMPOSSIBLE) to do otherwise without doing the same somewhere else, in a small part of the universe.

The second law (at least, I understand it that way) is not an "absolute" law ; it is almost a "tautology": "only probable things happen". So sometimes it is violated, namely when something improbable happens. The only point is that you will have to WAIT A LONG TIME for something improbable to happen.
So the second law says that MOST OF THE TIME you heat water, it will boil off. REALLY REALLY most of the time. Because it is highly improbable that, for instance, all the molecules nicely vibrate up and down but do not leave the liquid. But this *can* happen, once in a while (a LONG while, say, 10^10000 years or so :-)
Now, the point was made that conservative systems have 1) recurrence times and 2) using canonical transformations, you could make the state "not move", a bit like the Heisenberg view in QM, so the "initial state" is "the state". That's true. Concerning recurrency times, I don't think they have anything to do with the second law, because they only mean that ONCE IN A WHILE (a very very very long while) the second law will be violated. But that's exactly what the law says :-) The second law has been empirically derived in a small corner of the universe, for small amounts of time, and "close" to the initial condition (compared to any recurrency time). So it is very unlikely to have observed any violation. And you CAN BET ON IT that you won't see it (probabilistic argument).
But in order to even verify that law, you NEED to be able to *produce* hot and cold objects! So the environment of the lab can already not be in thermal equilibrium, which means it has to be in a "special macroscopic state". These macroscopic states are defined by the properties of low-order correlation functions over the phase space.
What really counts (as I understand it) is not the particularity that a certain detailled microconfiguration is on the phase space track of a specific initial condition. It is that during its evolution, it goes from smaller to larger "macrovolumes" (these macrovolumes being defined by coarse grained correlation functions between 1, 2, 3 and a *few* particles). There's nothing magical about it. It's just that it 'started off' in a small volume because the experimenter put it there (special initial condition). About just ANY evolution would soon put it in a larger volume, simply because the volume is larger. THAT is, to me, what the second law says.
Why are these macrovolumes defined by low-order correlation functions important ? Because they define the macroscopically observable things such as temperature, densities of different sorts, concentrations, reaction rates, ...
And THESE are the quantities where entropy plays a role, and which we test the second law against.
So I really think that, seen that way, the second law holds as well in a strictly Newtonian universe as in anything else as long as we had "special initial conditions" (and, you could add, that special condition occurred in a *recent past* as compared to the recurrency time, but given the VERY LONG recurrency time that doesn't really matter FAPP:smile: )
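The picture above — a microstate wandering from small coarse-grained volumes to large ones, simply because the larger volumes are larger — can be made concrete with a deliberately crude toy model (my own sketch, not something from the thread; the hop dynamics is the classic Ehrenfest urn, and the names `macro_entropy` and `simulate` are mine). The "macrostate" is just the count k of particles in the left half of a box, and the Boltzmann entropy of a macrostate is the log of the number of microstates compatible with it, ln C(N, k):

```python
import math
import random

def macro_entropy(n_total, k_left):
    """Boltzmann entropy of the macrostate 'k_left particles in the left half':
    S = ln(number of microstates) = ln C(n_total, k_left)."""
    return math.log(math.comb(n_total, k_left))

def simulate(n=200, steps=2000, seed=1):
    """Ehrenfest urn: at each step a random particle hops to the other half."""
    random.seed(seed)
    k = n                     # special initial condition: ALL particles on the left
    history = [macro_entropy(n, k)]
    for _ in range(steps):
        # the chosen particle is on the left with probability k/n
        if random.random() < k / n:
            k -= 1            # it hops to the right
        else:
            k += 1            # it hops to the left
        history.append(macro_entropy(n, k))
    return history

S = simulate()
print(f"S(start) = {S[0]:.2f}, S(end) = {S[-1]:.2f}, "
      f"S(max possible) = {macro_entropy(200, 100):.2f}")
```

The entropy climbs from the special initial condition toward the maximum and then just fluctuates near it, which is all the toy second law amounts to here. Run it with a tiny n (say 4) for long enough and you will also see the recurrences discussed above.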
 
  • #37
Careful said:
This is really a funny discussion.. Juan R is right that in the first law of thermodynamics, the Shannon - Von Neumann entropy has to be used
Yes, but even in order to be able to define it, you need to define your macrostates. If you KNOW perfectly well the microstate of a system, then the Shannon entropy of that system is zero (and all the entropy is in your head!).
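The point that a perfectly known microstate carries zero Shannon entropy really is a one-liner; here is a minimal sketch of it (my illustration, using the standard information-theoretic definition in bits):

```python
import math

def shannon_entropy(probs):
    """H = -sum p*log2(p) over a probability distribution (0*log 0 := 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 1024
known = [1.0] + [0.0] * (n - 1)   # microstate known with certainty: H = 0
uniform = [1.0 / n] * n           # total ignorance over n microstates: H = log2 n

print(shannon_entropy(known))     # zero: "all the entropy is in your head"
print(shannon_entropy(uniform))   # 10 bits, since log2(1024) = 10
```

Everything hangs on which distribution you assign, i.e. on how you carve up the state space into macrostates you can actually distinguish.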

Of course, Juan is right that IF YOU KNOW the special initial state of a CLOSED SYSTEM, and you know perfectly well the (reversible) dynamics, so that you *can calculate* the microstate after some time, then the Shannon- Von Neumann entropy is zero to start with, and zero all along. For a kind of god creature who knows this, "nothing surprisingly" happens, no "irreversible phenomena" occur etc... This is what happens when you apply the reversible evolution theorems (Liouville in CM, unitary evolution to a pure state density matrix in QM).

But the second law doesn't apply to this case (well, it does: it says that *if you know all that, then entropy zero is conserved*, so it is trivial). It applies to SUBSYSTEMS. We found empirically the second law by looking at small pieces of universe over short amounts of time, and by looking at coarse-grained properties (temperature, chemical reactions...) which only depend upon the properties of low-order correlation functions. It is THERE that the second law is valid.

Only when these coarse-grained properties are defined (the correlation functions that matter selected) can the entropy be defined, because we have now sliced up the state space in macrovolumes and can count the microstates corresponding to them (or weight them with the probabilities induced by these correlation functions). That's what the standard ensembles do, but one should realize that the very definition of entropy will depend on exactly how you choose to "slice up the state space".
And your microstate wanders happily from small volumes (small entropies) to big volumes (large entropies). So the entropy increases. Until it reaches the biggest volume, where it stays FOR MOST OF THE TIME. No matter exactly what track (what initial state), as long of course as the initial state was within a "small volume".
Whether or not it was part of a cosmic track that started out in a known state.
 
  • #38
vanesch said:
? I don't see why this decreases entropy.
Careful said:
I have to think deeper about this if I want to give you a fair answer.
Cheers,
Careful
Hi, I was just revising my answer. I will give it here; I shall read your comments later on (I have to go away for some time now).
Hi Vanesch, I revise my old answer here. As far as I remember, the first law of thermodynamics is only valid for near-equilibrium situations (slowly running processes). The Von Neumann-Shannon entropy notion is an *equilibrium* concept and therefore, by definition, should not change. The formal verification of this is a one-line check. Obviously, we can increase/decrease entropy even for reversible processes by switching on force fields which enhance/reduce the total number of degrees of freedom of the system *acted* upon (this is in a sense what happens when you release the gas from a smaller box into a larger one). The second law is something heuristic which we observe even in off-equilibrium situations (again except for black holes), so entropy there cannot be Von Neumann-Shannon entropy, but a ``dynamical entropy´´ whose definition you can realize by adaptively counting the effective number of degrees of freedom (on the other hand, any dynamical notion of entropy should also undergo a ``thermalization´´ process even if the number of degrees of freedom stays fixed). However, the universe is a closed system and in principle the total number of degrees of freedom should be known if you stick to a Newtonian picture. In GR the spatial universe might change in volume and therefore change the number of degrees of freedom. So everything I say below has to be interpreted in the following sense:
(a) S denotes a dynamical entropy which coincides with the Shannon Von Neumann equilibrium notion (b) the first law holds with respect to S.
Since in your Newtonian universe, the total energy is conserved, the first law says that
T dS = p dV
The latter expression should be SMALLER than zero since the gravitational force is going to make matter clump together. I do not think that the inclusion of radiation degrees of freedom due to the chemical binding which occurs will change this conclusion, but there is no a priori reason why you should include them (you might as well assume that all particles are neutral). It seems to me that you have to take into account the gravitational degrees of freedom in order to compensate for this (Penrose argued that too, I think). Other options which might avoid this conclusion are: (a) stick to Shannon entropy, but go over to a classical mechanics with a time-dependent Hamiltonian (so that my conservation-of-energy argument does not apply, but total entropy is still conserved); (b) stick to Shannon entropy but allow for non-Hamiltonian dynamics (not every classical Newton equation can be derived from a Hamiltonian) - this is what Juan R advocates at the quantum level (by demanding non-unitarity). Most people are convinced the gravitational degrees of freedom are important and that quantum gravity has a unitary dynamics with conserved Shannon entropy. This is of course very possible and not in conflict with observation.

Cheers,

Careful

PS: I might adapt this further.
 
  • #39
Careful said:
Hi, I was just revising my answer. I will give it here; I shall read your comments later on (I have to go away for some time now).
Hi Vanesch, I revise my old answer here. As far as I remember, the first law of thermodynamics is only valid for near-equilibrium situations (slowly running processes).

Nah, the first law is just conservation of energy. It is *always* valid in a conservative system.

Of course, writing T dS = p dV is something else: it just says that for a given system THAT CAN BE DESCRIBED BY 2 EXTENSIVE QUANTITIES S AND V (and is as such in equilibrium), any change in internal heat energy has to be brought in by mechanical work IN A THERMALLY ISOLATED SYSTEM (no heat influx). You have to add terms to the right if there is also electrical or other energy coming in, and you add dQ if heat is allowed to flow in.
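To pin down the bookkeeping in the paragraph above (a standard textbook decomposition, spelled out here for reference; nothing beyond what is being claimed):

```latex
dU = \delta Q + \delta W_{\text{other}} - p\,dV
```

For a thermally isolated system ($\delta Q = 0$) with only $pV$ work this reduces to $dU = -p\,dV$. Only for a quasi-static (equilibrium) process, where $\delta Q = T\,dS$, does one recover the usual $dU = T\,dS - p\,dV$, and only then does conservation of total energy ($dU = 0$) give the earlier $T\,dS = p\,dV$.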

I don't think that this equation can, in any way, be applied to our situation, as it is not in equilibrium, and certainly not defined by just 2 extensive quantities S and V.

The Von Neumann-Shannon entropy notion is an *equilibrium* concept and therefore, by definition, should not change.

No, it is not (look up at Wiki for instance). But you can use the entropy as an extensive variable to parametrize the "equilibrium states" you want to consider (to slice up the state space !). Nevertheless, for just ANY state ("non-equilibrium" - note that equilibrium or not depends on what you consider as macrostates: if you consider every microstate individually, then you NEVER reach equilibrium of course), and ANY way of slicing up your state space, you can calculate an entropy.

Obviously, we can increase/decrease entropy even for reversible processes by switching on force fields which enhances/reduces the total number of degrees of freedom of the system *acted* upon (This is in a sense what happens when you release the gas from a smaller box into a larger one).

You've got it :-) That's the effective use of the second law! And this comes about because of the different sizes of phase space that are affected.

The second law is something heuristic which we observe even in off equilibrium situations (again except for black holes), so entropy there cannot be Von Neumann - Shannon entropy, but a ``dynamical entropy´´ whose definition you can realize by adaptively counting the effective number of degrees of freedom (on the other hand any dynamical notion of entropy should also undergo a ``thermalization´´ process even if the number of degrees stays fixed).

But, that's the same entropy, no ? And that's what we do when we write down the second law, no ?

(a) S denotes a dynamical entropy which coincides with the Shannon Von Neumann equilibrium notion (b) the first law holds with respect to S.
Since in your Newtonian universe, the total energy is conserved, the first law says that
T dS = p dV
The latter expression should be SMALLER than zero since the gravitational force is going to make matter clump together. I do not think that the inclusion of radiation degrees of freedom due to chemical binding which occurs will change the outcome of this conclusion but there is no a priori reason why you should do this (you might as well assume that all particles are neutral).

I don't agree with your use of T dS = p dV. This only describes the (non-existing) equilibrium situation of my universe in an S/V diagram.

cheers,
Patrick.
 
  • #40
The posts scattered around in several threads relating to the arrow of time have been put all here (in chronological order).
 
  • #41
vanesch said:
Nah, the first law is just conservation of energy. It is *always* valid in a conservative system.
I don't think that this equation can, in any way, be applied to our situation, as it is not in equilibrium, and certainly not defined by just 2 extensive quantities S and V.
.
Hem, is that not a contradiction within one and the same message? :smile: (The first law is not just conservation of energy.) Moreover, you might restrict yourself to the purely mechanical situation where no radiation and particle creation/annihilation is involved (this is perfectly allowed in Newtonian mechanics).
Concerning the equilibrium, I will express myself more accurately here: entropy is constructed by making a phase-space average, which is for ergodic transformations the same as the time average over an infinite time period (independently of the initial conditions you start out from). Shannon entropy is NOT a time-dependent concept; it is constructed by making exactly this average (still using the dynamics though), and a one-line calculation confirms this. What I call adaptive counting is not entropy in the Shannon sense, it is a handyman's approach to describe (by hand) what happens when we enlarge our interest to larger systems by coupling them with another one. It is this ``by hand´´ that is not described in your reversible dynamics (it is the same issue as your FAPP reduction rule in QM, in some sense). The entropy in your line of thinking would make discrete jumps, while an appropriate notion of dynamical entropy would undergo a ``thermalization´´ process as I mentioned before. And sure: one complete microscopic description can reach equilibrium. You have to be very careful here: equilibrium is a TIME average, it is entirely meaningless to speak about temperature at one moment in time in one particular place of the box. In statistical mechanics, this time average is over the entire real line, while in the dynamical situation of thermodynamics (which is an empirical science) the latter is over some small time interval required for thermalisation. Therefore you have two options: either you kick Shannon to hell and develop some better notion of entropy (which is desirable), or you kick unitarity or Hamiltonian dynamics out of the window (which might be a bit too wild).
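The equality of phase-space and time averages for ergodic dynamics invoked here can be checked numerically on about the simplest ergodic system there is: an irrational rotation of the circle (my toy example, not anything from the thread). The time average of an observable along any single orbit converges to its space average, independently of the initial condition:

```python
# Toy check of ergodicity: for the circle map x -> (x + alpha) mod 1 with
# irrational alpha, the time average of f along any orbit tends to the
# space average  integral_0^1 f(x) dx  (here f(x) = x, so it is 1/2).
alpha = (5 ** 0.5 - 1) / 2          # golden-ratio rotation step

def time_average(x0, n_steps=100_000):
    x, total = x0, 0.0
    for _ in range(n_steps):
        total += x                  # accumulate f(x) = x along the orbit
        x = (x + alpha) % 1.0
    return total / n_steps

for x0 in (0.0, 0.3, 0.9137):       # three different initial conditions
    print(x0, time_average(x0))     # all close to the space average 0.5
```

This is exactly the sense in which the equilibrium average "forgets" the initial condition while the reversible dynamics never does.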

But you may be right in the practical sense that the universe did not have time to thermalise yet during the matter clumping and that you need to subdivide it into different areas with different macroscopic parameters so that you can still save dS_(total) >= 0. But then you propose something better.

Cheers,

Careful
 
  • #42
Careful said:
Hem, is that not a contradiction in one and the same message :smile:
Ah, you call T dS = p dV the first law of thermodynamics ?
To me, it is energy conservation, which, in the very specific case of a system in equilibrium described by two extensive variables S and V, reduces to the above expression. Matter of definition I guess. I wanted to state that the equation T dS = p dV is NOT applicable to the ENTIRE elastic ball universe because it is NOT in equilibrium during the time we are considering the application of the second law which is "shortly after the initial condition" (on the time scale of recurrency). But energy conservation IS, of course.
Moreover, you might restrict yourself to the pure mechanical situation where no radiation and particle creation/annihilation is involved (this is perfectly allowed in Newtonian mechanics).
Ok, but then you will also not see any gravitational contraction !
Concerning the equilibrium, I will express myself more accurately here: entropy is constructed by making a phase space average, which is for ergodic transformations the same as the time average over an infinite time period (independently of the intial conditions you start out from).
:approve: But you can even go further. You can slice up your phase space in smaller phase spaces of small chunks of the system (say, a container of gas), and apply ergodicity already here. So you can consider that each of these smaller chunks of the system have *their* phase space point distributed according to the time average of one such system in equilibrium.
Shannon entropy is NOT a time dependent concept, it is constructed by making exactly this average (still using the dynamics though), and a one line calculation confirms this.
Ok, I may be wrong here, but I only know Shannon entropy strictly from information theory: http://en.wikipedia.org/wiki/Shannon_entropy
So to me it describes *your state of knowledge* of the microstate of the system (namely, the amount of information you would WIN over what you know already when one would tell you the exact microstate of the system).
As such, from this point of view, the second law only tells you that at best, you can know what you know already, or you might lose knowledge, but you'll never GAIN knowledge by having your system evolve in time.
You also see that it depends on "how you described your system" (what correlation functions you consider relevant and of which you have hope to retain the knowledge through dynamical evolution). In the microcanonical ensemble for instance, you assume that you know the energy, period.
I don't see why this cannot be "instantaneous".
What I call adaptive counting is not entropy in the Shannon sense, it is a handyman's approach to describe (by hand) what happens when we enlarge our interest to larger systems by coupling it with another one. It is this ``by hand´´ what is not described into your reversible dynamics (it is the same issue as your FAPP reduction rule in QM in some sense). The entropy in your line of thinking would make discrete jumps
I don't see what's so non-Shannon about it. I just describe the knowledge I have about the system's microstate as compared to knowing entirely the microstate. It does not necessarily have to "jump", because there can be smooth weighting functions instead of "hard slices".
equilibrium is a TIME average, it is entirely meaningless to speak about temperature at one moment in time in one particular place of the box. In statistical mechanics, this time average is over the entire real line, while in the dynamical situation of thermodynamics (which is an empirical science) the latter is over some small time interval required for thermalisation.
Yes, but it is only over a small amount of time that the second law has any practical meaning. For me, the second law is entirely FAPP, as a function of what you know and are interested in in the system.
Therefore you have two options: either you kick Shannon to hell and develop some better notion of entropy (which is desirable), or you kick unitarity or Hamiltonian dynamics out of the window (which might be a bit too wild).
But you may be right in the practical sense that the universe did not have time to thermalise yet during the matter clumping and that you need to subdivide it into different areas with different macroscopic parameters so that you can still save dS_(total) >= 0. But then you propose something better.
Cheers,
Careful
I think that dS(total) doesn't make much sense if you KNOW the initial state of the universe. I think that dS/dt > 0 only has a FAPP meaning, during the first part of time evolution after that initial state, for a subsystem; and what precisely you understand by S *IS* Shannon entropy, namely your lack of information about the microstate (which I think is perfectly possible to define instantaneously!).
You can of course add together all entropies of all subsystems in your universe and call that the entropy of the entire universe, but that then simply means your lack of knowledge of the *precise* initial state of the universe ; the only thing that you know about that initial state is that it was special concerning low-order correlation functions (which are usually what you HAVE as information about a system), and that information gets "lost" during the first part of its dynamical evolution. You will win it back at the end of a cycle, when you are reaching a recurrency time. But that's far far far in the future.
In the meantime, you'll have a practical law which says dS > 0, and then a long period of equilibrium, where dS = 0 (you won't be able to do any experiments during that period - in fact, you will be dead).
 
  • #43
vanesch said:
I will comment on the details later, but if you go back to your FAPP arguments (which make me want to say PAF to you :smile: ) then I agree, but then it is also impossible to give this law a fundamental meaning (which people nowadays seem to do)

Cheers,

careful
 
  • #44
Careful said:
but then it is also impossible to give this law a fundamental meaning (which people nowadays seem to do)

I didn't realize that people wanted to make this a fundamental law - it is an almost "tautological" law! Note that there may be OTHER causes of irreversibility which DO make this law more fundamental. In the whole discussion, I wanted to point out that there is no fundamental clash between reversible microdynamics and "apparent" irreversibility described by dS > 0, in that this can also occur in the situation I proposed (a Newtonian universe with a special initial condition). So the *empirical* observation of dS > 0 DOES NOT IMPLY NECESSARILY an irreversible microprocess. *that's* the point I wanted to make.

For instance, if one person saw, once in a million years, say, one tiny violation of the second law which hasn't been repeated since, that would not "falsify" the second law - which it would, if the law had serious fundamental status. (Of course, in practice, people would doubt the mental abilities of the poor observer :-)
 
  • #45
vanesch said:
I didn't realize that people wanted to make this a fundamental law - it is an almost "tautological" law! Note that there may be OTHER causes of irreversibility which DO make this law more fundamental.
Indeed, he would be crucified for making that observation :smile: If you mean it in this practical sense then I agree with you as was clear from my very first posting on this thread. I will come back to the details of the previous one later on (it is good to elaborate on these issues, since as you might have noticed I am looking for some kind of objective dynamical notion of entropy again) but have no time for now.

Cheers,

Careful
 
  • #46
Careful said:
I am looking for some kind of objective dynamical notion of entropy again
Hi, I read your message now and what I have in mind is rather similar to what you want to say there, but I would like to have it objective and quasi-local (which I shall explain now - the quasi-local aspect is non-standard of course). The crux of what you say is that you have to adapt your notion of entropy when you notice that (on some timescale) the bunch of particles you are studying has access to a larger number of degrees of freedom. Realistic averaging time scales are a function of the temperature and are of the order hbar/(k_B T) = 10^(-11)/T seconds. However, in the Unruh effect the temperature is T = hbar a/(2 pi k_B c), implying a timescale of 2 pi c/a = 10^9/a seconds! Assuming that a rocket accelerates at 10 m/s^2, this gives around 10^8 seconds, which is roughly of the order of one year (the same goes for the Hawking effect). Of course, for a lab this is not an issue. So, I want to incorporate the idea that entropy coincides with the number of degrees of freedom the system can access in a reasonable time scale (this is not the Shannon definition). You might make a quasi-local notion of this by subdividing space (not phase space) into tiny boxes of length L (of the order of the diameter of the particles) and introducing a momentum cutoff M = n L where n is a natural number running such that M stays constant; this introduces a subdivision of phase space. Fix a timescale T, and initial conditions for the system of particles under study (you might even assume you know them exactly): follow the particles for time T and compute the logarithm of the volume in phase space the particles went through. Between time T and 2T you can do the same, and so on... I should still refine this (for example, when the dimensions of the spatial volume the particles can be in get substantially larger, you might want to increase the time scale) and you might even take an average over realistic initial conditions.
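The prescription above — subdivide into cells, fix a timescale T, and take the log of the phase-space volume swept through per window — can be turned into a few lines of code. What follows is my own rough formalization of that idea; the cell sizes, window length, and the free-expansion dynamics are all illustrative choices, not anything fixed by the post. Particles free-stream out of a tiny initial region, and the "dynamical entropy" of each time window is the log of the number of distinct (position, velocity) cells visited during it:

```python
import math

def dynamical_entropy(trajectory_windows, cell=0.5, vcell=0.25):
    """For each time window (a list of (x, v) samples), return
    log(number of distinct phase-space cells visited in that window)."""
    out = []
    for window in trajectory_windows:
        cells = {(math.floor(x / cell), math.floor(v / vcell)) for x, v in window}
        out.append(math.log(len(cells)))
    return out

# Free expansion in 1D: 100 particles start at x = 0 with spread-out velocities.
n = 100
velocities = [(i - n / 2) / n for i in range(n)]       # v in [-0.5, 0.5)
windows = []
for w in range(4):                                     # four windows of 5 time units
    samples = []
    for t in (w * 5 + dt for dt in range(5)):
        samples.extend((v * t, v) for v in velocities)  # x(t) = v * t
    windows.append(samples)

S = dynamical_entropy(windows)
print(S)   # grows from window to window as the cloud spreads over more cells
```

Note this needs no a priori partition of a "final" phase space: each window's entropy is computed from that window alone, which is the quasi-local feature being asked for.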
Anyway, you can spell out your comments, I could make this more formal if you want to.

But this goes all way beyond the standard textbook Shannon notion in the sense that you need a dynamical notion of the available degrees of freedom associated to a preferably dynamical timescale.

I know this does not coincide (in a straightforward way) with the idea that the system moves from smaller to bigger boxes (which you choose according to your notion of macroscopically distinguishable) in the ENTIRE phase space which you have fixed from the beginning (which of course you refer to and I am aware of). But I think it might capture it rather well ... it is based upon the intuition that motion becomes more chaotic when equilibrium is to be reached.

Cheers,

Careful
 
  • #47
Careful said:
I am looking for some kind of objective dynamical notion of entropy again

Entropy? Let's for now just say that high Entropy means less structure,
less organization, however you want to define this:


Attractive Forces decrease Entropy.

-Gravity organizes matter into stars and galaxies.
-The Strong Force brings us nucleo-synthesis.
-The EM field gives us atoms, molecules, solid matter.

They all turn chaotic matter into organized matter.


(Pseudo) Repulsive Forces increase Entropy

-Heat/Kinetic energy increases entropy. (Boltzmann, 2nd law)
-Pauli's exclusion principle also acts as a pseudo repulsive force.

Both together save us from becoming black holes in no time.


All Real forces are Irreversible in time

Gravity would have to be repulsive in order to organize matter into stars and
galaxies backward in time. For the time-reversal of EM fields things
become even more weird: equal charges would have to attract each other
while opposite charges would need to repel each other.

Only the pseudo repulsive forces (Heat, Pauli) seem to be symmetric
in time.



Again on Entropy:

Use Shannon? You'll get in this discussion that information is never lost
at all, not even if stuff is poured into a black hole, (Hawking...)

Use Boltzmann? A 150-year-old 2nd law intended for heat/kinetic energy.
Don't extrapolate poor, old laws outside their intended domain...


Regards, Hans
 
  • #48
Hans de Vries said:
Entropy? Let's for now just say that high Entropy
Regards, Hans
That's funny: you gave a description of what force is supposed to do what with entropy (which I already knew) without actually giving one particular definition :smile: What I am trying to address here is the following: when we observe a system S which we want to study, entropy is the logarithm of its number of degrees of freedom, which we can somehow estimate (by hand) at that moment in time through observation (for example, the particles are in a bounded region of space and there is a momentum cutoff). S is usually open and can conquer more (or fewer) ``degrees of freedom´´ per time interval as it evolves. This picture is a quasi-local one; it does not start from the a priori knowledge that S is part of a closed system, which can give rise to some a priori partition into macroscopically distinguishable configurations (as is usually done). It defines entropy dynamically by counting the ``degrees of freedom´´ the system occupies in some small time interval. This has the advantage that the time derivative of the entropy can be instantaneously calculated, while in the picture of Vanesch (with the c) I should wait until I know the ``final´´ phase space and the associated coarse-graining in order to do this. I wondered whether someone knows something about this, or has some comments.

Cheers,

Careful
 
  • #49
Careful said:
when we observe a system S which we want to study, entropy is the logarithm of its degrees of freedom which we can somehow estimate (by hand) at that moment in time through observation.

Maybe you want to involve symmetries rather than degrees of freedom.
(Think for instance atomic grids). Symmetry provides the way to describe
a system with less parameters, is less information, is less entropy.

The whole point is indeed to correctly quantify this (the number of bits needed).
Not so easy. You'll probably keep on finding tricks to reduce the number
of bits just a little bit more, just like what we see in video compression.

I somehow doubt if there's a single, simple and elegant way to do this.

Regards, Hans
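The "number of bits needed" intuition can be made concrete with an off-the-shelf compressor as a crude upper bound on description length (my illustration; zlib is of course nothing like an optimal code, exactly in the spirit of the video-compression remark above): structured data compresses far below its raw size, while pseudo-random data barely compresses at all.

```python
import hashlib
import zlib

def description_length(data: bytes) -> int:
    """Crude upper bound on information content: size after zlib compression."""
    return len(zlib.compress(data, 9))

# Highly structured ("low entropy") data: one symbol repeated 4096 times.
structured = b"A" * 4096

# Pseudo-random ("high entropy") data: a chained SHA-256 stream of 4096 bytes.
chunks, seed = [], b"seed"
for _ in range(128):
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
noisy = b"".join(chunks)           # 128 * 32 = 4096 bytes

print(description_length(structured))  # a few dozen bytes at most
print(description_length(noisy))       # close to (or above) the raw 4096 bytes
```

Any extra trick (symmetries, grids, motion prediction) just tightens the upper bound a little more, which is why a single elegant definition is so elusive.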
 
  • #50
Hans de Vries said:
Use Shannon? You'll get in this discussion that information is never lost
at all, not even if stuff is poured into a black hole, (Hawking...)

I've always understood this as a weird way of saying that this is not going to be possible with reversible (unitary) dynamics...
 
  • #51
Hans de Vries said:
Maybe you want to involve symmetries rather than degrees of freedom.
(Think for instance atomic grids). Symmetry provides the way to describe
a system with less parameters, is less information, is less entropy.

The whole point is indeed to correctly quantify this (The no. of bits needed)
Not so easy. You'll probably keep on finding tricks to reduce the number
of bits just a litle bit more, just like what we see in video compression.

I somehow doubt if there's a single, simple and elegant way to do this.


Regards, Hans

hi Hans,

My first post already provided this grid picture. I am convinced there MUST be an elegant way to (at least) DEFINE it quasi-locally (if we call it the second *law* of thermo, then I would like to see it on less heuristic grounds). From the physical point of view, a quasi-local strategy is always superior: it is exactly what Wald, Ashtekar et al. have done with the old *global* black hole concept of Hawking and Penrose (sorry, this was the obvious example which came to my mind :smile: ).

Cheers,

Careful
 
  • #52
vanesch said:
I've always understood this as a weird way of saying that this is not going to be possible with reversible (unitary) dynamics...
No, no, this is not so obvious. 't Hooft has lately written a series of papers about this in which he POSTULATES that the scattering matrix is unitary in the outer region and derives its logical consequences. There are, however, still a few subtle problems with this... :smile:

Cheers,

careful
 
  • #53
Poll: arrow of time

My opinion (today) is that there is no universal arrow of time, since all fundamental laws are free of any arrow of time (except maybe marginally?).

But this does not exclude that we are experiencing a part of the universe where the correlations of events push us to believe time goes from (our) past to (our) future.
 
  • #54
Juan, I cannot agree easily with your statement:

It is simply false that a non-unitary evolution can be derived from a unitary evolution as a kind of "good approximation". It is mathematically impossible and physically wrong. This is the reason people seriously working on the arrow of time (specialists in the topic) are proposing nonunitary evolutions. For example Prigogine's theory, CSM, etc.

You know that Poincaré was not precisely an admirer of Boltzmann. But you know also that Poincaré introduced a recurrence-time concept associated with a theorem. Unitary (reversible) evolution would imply that a system in a closed phase space could come back as close as one wants to its initial position in phase space. But Poincaré also explained that this 'recurrence time' grows very fast with the precision required of the recurrence. Is it not 'a good approximation' if one simply forgets about any precise recurrence and says the recurrence time is infinite? This is especially valid if one considers larger systems. It is classical to find in thermodynamics books the recurrence time for particles to rejoin one half of a box, and to compare this time to the age of the universe. Why should it be mathematically incorrect to say that the particles will never rejoin one half of the box (instead of saying it will take 1000 times the age of the universe)? Mathematics can accommodate meaning.

You should consider that maybe right and wrong are not words for physics. Physics is more concerned with precision. And physicists have no interest in precision that cannot be measured or experienced.

Reading about Landau damping of em waves in plasmas is also quite useful in this regard. The equations for the electromagnetic field and the charged-particle motion are reversible. Collisions are neglected. Still, a wave-damping mechanism has been highlighted by Landau. It is also called 'collisionless' damping. Here is one of the ways I picture it. Formally, the equations indeed have a solution without damping. But if you assume that a damping is possible (even an extremely small one), then the same equations tell you that charged particles can absorb energy from the waves. And the reason that makes this absorption possible is that the damping modifies (even slightly) the geometry of the wave field. This (slight) perturbation of the wave field is precisely what is needed to make the energy transfer to the particles possible. It looks as if the Landau solution is a stable solution, while the reversible solution is not. (Note: there is much more to say about Landau damping, e.g. that it can also lead to emission instead of damping, depending on the distribution function of the particles.)

Finally, let me note that I have *seriously* read Prigogine. I have a poor understanding of how his approach with non-unitary transformations changes the century-old picture of irreversibility. I am desperate to understand it, but till now I believe that this is not a new theory; it is more like a synthesis. And for me the synthesis is not so useful: I know too few examples to make any use of it. Could you suggest some readings that could illustrate the Prigogine approach? I would also like to make links with Poincaré and with Landau.
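The textbook recurrence-time comparison mentioned above can be sketched numerically. This is only a back-of-the-envelope estimate: the model (each of N non-interacting particles found in half A with probability 1/2 at each look, so an expected 2^N looks before all are back at once) and the observation rate are assumptions for illustration.

```python
# Crude Poincare-style recurrence estimate for "all N particles back in
# half A of the box": expected number of observations is 2^N.
def recurrence_observations(n_particles):
    return 2 ** n_particles

OBS_PER_SECOND = 1e12       # hypothetical: one observation per picosecond
AGE_OF_UNIVERSE_S = 4.3e17  # roughly 13.7 billion years, in seconds

n = 200                     # still a microscopically tiny system
wait_s = recurrence_observations(n) / OBS_PER_SECOND
ratio = wait_s / AGE_OF_UNIVERSE_S
print(f"waiting time ~ {ratio:.1e} ages of the universe")
```

Even for 200 particles the wait dwarfs the age of the universe by dozens of orders of magnitude, which is the sense in which "the recurrence time is infinite" is a physically harmless idealization.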
 
  • #55
lalbatros said:
My opinion (today) is that there is no universal arrow of time since all fundamental laws are exempt of any arrow of time (except maybe marginally?).
But this does not exclude that we are experiencing a part of the universe where the correlations of events push us to believe time goes from (our) past to (our) future.
Time, as we experience it, goes *by definition* from our past to our future. Now, if you want to suggest that our future may eventually be a part of our past (i.e. that the universe would not be globally hyperbolic), then the laws of nature would not be predictive anymore (and we might stop doing physics). If you want to suggest that there does not necessarily exist a globally defined, *dynamically* determined arrow of time, then I would say this is the generic situation in GR. However, our measurements indicate that the universe is homogeneous, isotropic and expanding on a sufficiently large averaging scale, so that it is reasonable to make this extrapolation. It is much more difficult to speak of a thermodynamical arrow of time, since this requires a dynamical, objective, quasi-local notion of entropy (and this does not exist, to my knowledge). Moreover, it is fairly obvious from the start that this will not give a globally well defined arrow of time. As Vanesch, Hans and I pointed out before, the second law is something heuristic for now; it depends upon the experimenter's preference as well as the temperature scale involved, and is only observed on much smaller time scales than the Poincaré recurrence times. However, one should decide whether one wants it fundamental or heuristic (and I think this is the source of confusion here).

Moreover, I think you should refrain from saying what physicists are interested in and what not, since 5 seconds of braintime will assure you that there is no consensus about this (albeit certain people might wish it to be so).

Cheers,

careful
 
  • #56
I understand your remark:

... I think you should refrain from saying what physicists are interested in ...

and sure I was not careful enough saying

You should consider that maybe right and wrong are not words for physics. Physics is more concerned with precision. ...

My intention was to say that physics is based on measurements and that no measurement has infinite precision. Physical knowledge therefore contains a part of statistics. I was particularly disappointed by the affirmation by Juan that, in short, *proving irreversibility from reversible laws must be a mathematical mistake*. For me physics is about modeling the world, and maths is about doing it logically. There is no point in "proving" irreversibility; the point is about "understanding" it or "modelling" it: this is not only mathematics.

When I first heard about the Boltzmann theorem, during a morning lecture on plasma physics, I was totally upset: I decreed that this was no mathematical proof. I still have the lecture notes where I wrote 'wrong' in bold. This is clearly an excess of French culture! This was a long time ago, and I am still not happy with my understanding of the problem. This is a sign I am getting old, maybe.

Nevertheless, the problem at hand here has huge intrinsic limits to its precision: even for tiny systems, the Poincaré recurrence time is incredibly long. So why would it be mathematically wrong to just assume the recurrence time is infinite and build a theory (model) on that? Maybe it is not worded in a mathematically correct way, but the idea is clear enough physically.

I don't want to exclude useful mathematical development on the topic.
But I fear that regarding the fundamentals it will be more cosmetics than anything else.
The fundamental understanding of irreversibility was achieved at the end of the 19th century.

What we still need today are the tools to bridge from the microscopic reversible world to the macroscopic irreversible world. Even the properties of water cannot be explained in this way today.

I must add that I have the same opinion regarding the so-called "measurement problem" in QM.
For me it is more a problem of spelling the words of physics than really doing physics.
I think the contrary for the EPR problem, as I wrote in another forum, because it deals, again, with the nature of space-time.
 
  • #57
lalbatros said:
I understand your remark:
and sure I was not careful enough saying
My intention was to say that physics is based on measurements and that no measurement has infinite precision. Physical knowledge therefore contains a part of statistics. I was particularly disappointed by the affirmation by Juan that, in short, *proving irreversibility from reversible laws must be a mathematical mistake*. For me physics is about modeling the world, and maths is about doing it logically. There is no point in "proving" irreversibility; the point is about "understanding" it or "modelling" it: this is not only mathematics.
When I first heard about the Boltzmann theorem, during a morning lecture on plasma physics, I was totally upset: I decreed that this was no mathematical proof. I still have the lecture notes where I wrote 'wrong' in bold. This is clearly an excess of French culture! This was a long time ago, and I am still not happy with my understanding of the problem. This is a sign I am getting old, maybe.
Nevertheless, the problem at hand here has huge intrinsic limits to its precision: even for tiny systems, the Poincaré recurrence time is incredibly long. So why would it be mathematically wrong to just assume the recurrence time is infinite and build a theory (model) on that? Maybe it is not worded in a mathematically correct way, but the idea is clear enough physically.
I don't want to exclude useful mathematical development on the topic.
But I fear that regarding the fundamentals it will be more cosmetics than anything else.
The fundamental understanding of irreversibility was achieved at the end of the 19th century.
What we still need today are the tools to bridge from the microscopic reversible world to the macroscopic irreversible world. Even the properties of water cannot be explained in this way today.
I must add that I have the same opinion regarding the so-called "measurement problem" in QM.
For me it is more a problem of spelling the words of physics than really doing physics.
I think the contrary for the EPR problem, as I wrote in another forum, because it deals, again, with the nature of space-time.
No, no, you are partially misunderstanding me here: all I wanted to say was that everyone should make very clear from the beginning what we expect entropy to be and to do. I myself do not worry too much about recurrence times either, but Juan R. clearly does, since I have the impression he wants the second law to be an absolute (deterministic) fact.
I disagree, however, when you say that the fundamental *understanding* of irreversibility was achieved at the end of the 19th century. That is not true: the *observation* was unmistakably made, but a *proof* should certainly be given in the end (the lack of such a thing is what pushes people to go over to non-unitary stuff and so on). For this purpose, we shall need a dynamical definition of entropy, apply it to arbitrary initial conditions, and explore whether any further constraints arise (which is probably so). The fact that you stick to your stubborn notes is not a sign of getting old :smile: , it is the honest recognition that some old problems are still jumping right in our face. I myself am a proponent of time-reversal-invariant laws, since I am a classical relativist (cannot hide it :smile: ), so I consider this to be very useful.

So, in my opinion, this problem is very real, just as the measurement problem in QM is. I can, however, just about agree with you when you say you don't mind from the handyman's point of view, since then you know which theory to apply correctly in which domain of physics. However, if one wants to construct one theory which governs both the micro and the macro world, then these are very real primary concerns.

Sorry for reacting a bit sensitively to YOU.

Cheers,

Careful
 
  • #58
ZapperZ said:
http://www.math.rutgers.edu/~lebowitz/PUBLIST/lebowitz_370.pdf

There's another, newer article related to this in Physics Today, but it's not available online.

Zz.


Lebowitz is rather famous in the arrow-of-time community for his irrelevant and flagrantly wrong work. In the past I studied an article by Lebowitz which claimed a derivation of the second law of thermodynamics from reversible microscopic dynamics. Unfortunately, Lebowitz's physics is rather wrong and his math incorrect.

Some time ago Lebowitz wrote a 'childish' paper on chaos and the arrow of time.

I contacted Prigogine (a recognized leader in the arrow of time) and explained my thoughts about that wrong article. I still remember his reply:

Lebowitz article is completely wrong

This Physics Today article is completely wrong. At first look the explanation appears reasonable, but when one works out the details, things do not fit.

That is the great problem of Lebowitz. He never worked out the details, just wrote superficial papers.

This is the reason people who seriously work on the arrow of time do not follow Lebowitz's suggestions o:)
 
  • #59
vanesch said:
The second law (at least, I understand it that way) is not an "absolute" law ; it is almost a "tautology": "only probable things happen". So sometimes it is violated, namely when something improbable happens. The only point is that you will have to WAIT A LONG TIME for something improbable to happen.

This is a common misconception. The second law is EXACT. In fact, the popular textbook explanation that the second law is probabilistic was the basis of the flagrantly wrong article published in Physical Review about an "experimental violation of the second law of thermodynamics". At least two comments, proving why it was wrong, were published by specialists.

I also wrote a work on the topic. It was available at CPW, but that closed. However, I will post the paper again on the web soon (2006) and you can read it.

People who claim that the second law of thermodynamics is probabilistic and that there is a small probability of a decrease of entropy fail to notice the difference between <S> and deltaS.

The monotonic increase of entropy applies just to the average. In fact, the deviations from that tendency follow from fluctuation theory, which is based on the second law (Einstein's formula).
 
  • #60
vanesch said:
Yes, but even in order to be able to define it, you need to define your macrostates. If you KNOW perfectly well the microstate of a system, then the Shannon entropy of that system is zero (and all the entropy is in your head!).
Of course, Juan is right that IF YOU KNOW the special initial state of a CLOSED SYSTEM, and you know perfectly well the (reversible) dynamics, so that you *can calculate* the microstate after some time, then the Shannon- Von Neumann entropy is zero to start with, and zero all along. For a kind of god creature who knows this, "nothing surprisingly" happens, no "irreversible phenomena" occur etc... This is what happens when you apply the reversible evolution theorems (Liouville in CM, unitary evolution to a pure state density matrix in QM).
But the second law doesn't apply to this case (well, it does: it says that *if you know all that, then entropy zero is conserved*, so it is trivial). It applies to SUBSYSTEMS. We found empirically the second law by looking at small pieces of universe over short amounts of time, and by looking at coarse-grained properties (temperature, chemical reactions...) which only depend upon the properties of low-order correlation functions. It is THERE that the second law is valid.
Only when these coarse-grained properties are defined, (the correlation functions selected which will matter), the entropy can be defined, because we have now sliced up the state space in macrovolumes and can count the microstates corresponding to it (or weight it with the probabities induced by these correlation functions). That's what the standard ensembles do, but one should realize that the very definition of entropy will depend on exactly how you choose to "slice up the state space".
And your microstate wanders happily from small volumes (small entropies) to big volumes (large entropies). So the entropy increases. Until it reaches the biggest volume, where it stays FOR MOST OF THE TIME. No matter exactly what track (what initial state), as long of course as the initial state was within a "small volume".
Whether or not it was part of a cosmic track that started out in a known state.

Completely wrong discussion.

The second law says that entropy increases, but this is blocked by the Liouville theorem. Therein lies the thing called the "problem of the arrow of time".

Your discussion of coarse-grained entropies is completely wrong. In fact, there is a well-known theorem which says that if one increases the level of measurement (more information), the production of entropy may decrease. Nobody has observed that effect. The production of entropy remains constant, independent of the level of detail one uses.

Moreover, nobody has proven that an increase in coarse-grained entropy follows from a constant fine-grained one. This is the reason there exists a school of research which rejects the coarse-grained interpretation.
 
  • #61
How about the following statement: "the information I have about the microstate of a freely evolving system can either remain constant or decrease, but not increase".
 
  • #62
lalbatros said:
Juan, I cannot agree easily with your statement:
You know that Poincaré was not precisely an admirer of Boltzmann. But you know also that Poincaré introduced a recurrence-time concept associated with a theorem. Unitary (reversible) evolution would imply that a system in a closed phase space could come back as close as one wants to its initial position in phase space. But Poincaré also explained that this 'recurrence time' grows very fast with the precision required of the recurrence. Is it not 'a good approximation' if one simply forgets about any precise recurrence and says the recurrence time is infinite? This is especially valid if one considers larger systems. It is classical to find in thermodynamics books the recurrence time for particles to rejoin one half of a box, and to compare this time to the age of the universe. Why should it be mathematically incorrect to say that the particles will never rejoin one half of the box (instead of saying it will take 1000 times the age of the universe)? Mathematics can accommodate meaning.
You should consider that maybe right and wrong are not words for physics. Physics is more concerned with precision. And physicists have no interest in precision that cannot be measured or experienced.
Reading about Landau damping of em waves in plasmas is also quite useful in this regard. The equations for the electromagnetic field and the charged-particle motion are reversible. Collisions are neglected. Still, a wave-damping mechanism has been highlighted by Landau. It is also called 'collisionless' damping. Here is one of the ways I picture it. Formally, the equations indeed have a solution without damping. But if you assume that a damping is possible (even an extremely small one), then the same equations tell you that charged particles can absorb energy from the waves. And the reason that makes this absorption possible is that the damping modifies (even slightly) the geometry of the wave field. This (slight) perturbation of the wave field is precisely what is needed to make the energy transfer to the particles possible. It looks as if the Landau solution is a stable solution, while the reversible solution is not. (Note: there is much more to say about Landau damping, e.g. that it can also lead to emission instead of damping, depending on the distribution function of the particles.)
Finally, let me note that I have *seriously* read Prigogine. I have a poor understanding of how his approach with non-unitary transformations changes the century-old picture of irreversibility. I am desperate to understand it, but till now I believe that this is not a new theory; it is more like a synthesis. And for me the synthesis is not so useful: I know too few examples to make any use of it. Could you suggest some readings that could illustrate the Prigogine approach? I would also like to make links with Poincaré and with Landau.

People have worked on the problem of the arrow of time for more than a century. The level of recent work on the topic is very advanced.

The use of a formally infinite recurrence time does not solve the problem of the arrow of time. Had it solved it, people would have stopped research long ago, exactly in Poincaré's epoch!

From a unitary dynamics one cannot obtain an arrow of time. This is the reason nobody has obtained the solution to the measurement problem of QM, which is obviously an irreversible phenomenon. After decades of irrelevant attempts to obtain the solution from a unitary approach, people are little by little passing to explicit nonunitary approaches, many of them related to quantum gravity. For example, Penrose clearly claims that one may use a NON-unitary approach. Therefore people who cite Penrose and his initial conditions do not understand him.

Initial conditions are not sufficient. Both the Newton and the Schrödinger equations are always solved with initial conditions, and yet both are reversible equations offering us reversible physics. The use of an initial condition does not introduce irreversibility into physics.

Landau clearly emphasized that the true basis of the second law of thermodynamics was not ignorance. He clearly stated that the solution lay in quantum measurement being a purely irreversible phenomenon. He traced the irreversibility of thermodynamics to the irreversibility of QM measurement. However, he failed to provide us with a detailed theory of this.

About Prigogine, yes, I agree with you that he made some mistakes. They are solved in my approach. For example, the relationship between the lambda transformation and U looks clear in my approach, and I found an error in one of the theorems of the Brussels School.

I do not know what you have read or what level you need.
 
  • #63
vanesch said:
How about the following statement: "the information I have about the microstate of a freely evolving system can either remain constant or decrease, but not increase".

An informational interpretation of the second law of thermodynamics is obtained via the substitution S_{thermodynamical} ----> S_{statistical}:

dS > 0 => dI > 0

where I is the ignorance in the coarse-grained school. People then claim that by using dI > 0
one is proving dS > 0.

As proven in much of the literature, if one consistently expresses the physical basis of any process of acquisition of information, one obtains (because mechanics is time symmetric):

dS >=< 0 => dI >=< 0

That is, the information I have about the microstate of a freely evolving system can remain constant, decrease, or increase.

Moreover, that is valid for coarse-grained statistical entropies. If one takes the real fine-grained entropy, then by the Liouville theorem dS = 0 => dI = 0 and the process is reversible.

The standard use of the law of increasing ignorance is one of the pieces of mathematical funambulism denounced by van Kampen.
 
  • #64
Juan R. said:
Inittial conditions are not sufficient. Both Newton or Schrödinger equations are always solved with initial conditions and, however, both are reversible equations offering us reversible physics. The use of an initial condition does not introduce reversibility into physics.

Well, I think we can now have a semantic discussion about what exactly it means to be "irreversible".

But, can you answer the following question:
Is it, or isn't it, possible to define a function, based upon the distribution of low-order correlation functions (density of particles per small volume in position/momentum space, density of distances and relative velocities of 2 particles in position/momentum space, ...), which, on average, increases or stays almost constant during a relatively short amount of time after we apply a special initial condition, even with reversible (Newtonian) dynamics?
Here, "short amount of time" means a time small compared to the recurrence time.

For instance, in a box with elastic balls, consider, as such a function, the squared integral of the difference between the distribution of particle positions and a uniform distribution. When I put all my particles in a corner, this squared integral is a big number (highly peaked distribution minus flat distribution). When I let this system evolve under REVERSIBLE dynamics, the distribution widens to become almost uniform (low value of the squared integral). This is a simple example, but it shows how VERY EASY it is to obtain an "arrow of time" function from a special initial condition and reversible mechanics. In what way does that seem problematic?
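This squared-deviation function can be sketched in a toy simulation. The setup is a simplification for illustration only: non-interacting particles in a 1D unit box with perfectly reflecting walls, rather than elastic balls, and a discrete histogram standing in for the squared integral.

```python
import random

random.seed(1)

N, NBINS = 5000, 10

def squared_deviation(xs):
    # Squared deviation of the empirical bin distribution from uniform:
    # the discrete analogue of the "squared integral" function.
    counts = [0] * NBINS
    for x in xs:
        counts[min(int(x * NBINS), NBINS - 1)] += 1
    return sum((c / N - 1.0 / NBINS) ** 2 for c in counts)

# Special initial condition: all particles in one corner of the box.
xs = [random.uniform(0.0, 0.1) for _ in range(N)]
vs = [random.uniform(-1.0, 1.0) for _ in range(N)]

d0 = squared_deviation(xs)

def evolve(xs, vs, t):
    # Reversible free flight with elastic reflection at the walls x=0, x=1:
    # unfold the reflections onto a circle of length 2, then fold back.
    out = []
    for x, v in zip(xs, vs):
        y = (x + v * t) % 2.0
        out.append(y if y <= 1.0 else 2.0 - y)
    return out

d_later = squared_deviation(evolve(xs, vs, 50.0))
print(d0, d_later)  # large at t=0, then near zero: an "arrow" function
```

The dynamics is exactly reversible, yet the function drops from a large value to fluctuation level and stays there, which is the point being made in the post above.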
 
  • #65
particles in a box

Juan,

You started this interesting thread on the origin of irreversibility.
I am a little bit puzzled by your rejection of any kind of 'simple' explanation.
Apparently, if I understand well, you would prefer to reject any explanation based on reversible microphysics.
And apparently you would prefer some new laws to explain irreversibility.

Could you please explain why you are taking this point of view?
I would propose that you take as a starting point the thought experiment dealing with particles in a box.
A box is separated into two parts A and B.
Particles are located at random places in part A with random velocities.
There are no interactions between the particles, which only reflect on the walls.
This simple dynamical system obeys reversible microphysics.
Still, as you can verify, you can use it to illustrate truly irreversible behaviour.
You will observe the 'irreversible' filling of the two parts of the box and wait an eternity before something new happens. This is just what is observed in the real world. (Forgetting about velocity thermalisation, of course.)

Then this question: is this 'particles in a box' experiment not showing clearly the origin of irreversibility?
I consider that the origin of irreversibility is quite apparent in this simple experiment.
I also consider that studying irreversibility and modelling it in a comprehensive way is still a wonderful subject where nearly everything still has to be discovered. But this will only show the origin of irreversibility in more detail, not something totally new.
And it will definitely be based on reversible micro-physics.
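The thought experiment above can be run directly as a toy 1D version (particle number, velocities and times are arbitrary choices). It also exhibits the microscopic reversibility behind the 'irreversible' filling: flipping every velocity runs the filling exactly backwards.

```python
import random

random.seed(2)

N, T = 2000, 30.0

# Random positions in half A = [0, 0.5) of a unit box, random velocities.
xs = [random.uniform(0.0, 0.5) for _ in range(N)]
vs = [random.uniform(-1.0, 1.0) for _ in range(N)]

def step(xs, vs, t):
    # Free flight with elastic reflection at x=0 and x=1 (reversible):
    # unfold onto a circle of length 2, fold back, track the velocity sign.
    new_xs, new_vs = [], []
    for x, v in zip(xs, vs):
        y = (x + v * t) % 2.0
        if y <= 1.0:
            new_xs.append(y); new_vs.append(v)
        else:
            new_xs.append(2.0 - y); new_vs.append(-v)
    return new_xs, new_vs

def frac_in_A(xs):
    return sum(1 for x in xs if x < 0.5) / len(xs)

f0 = frac_in_A(xs)                       # 1.0: everything starts in half A
xs1, vs1 = step(xs, vs, T)
f1 = frac_in_A(xs1)                      # ~0.5: the 'irreversible' filling

# Microscopic reversibility: flip all velocities and evolve for T again.
xs2, _ = step(xs1, [-v for v in vs1], T)
f2 = frac_in_A(xs2)                      # back in half A, as if time ran backwards
print(f0, f1, f2)
```

The reversed evolution does return every particle to half A, but only because we conspired to flip all velocities exactly; left alone, the filled state persists for an eternity, which is the 'irreversibility' being discussed.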
 
  • #66
vanesch said:
Well, I think we can now have a semantic discussion about what exactly it means to be "irreversible".

But, can you answer the following question:
Is it, or isn't it, possible to define a function, based upon the distribution of low-order correlation functions (density of particles per small volume in position/momentum space, density of distances and relative velocities of 2 particles in position/momentum space, ...), which, on average, increases or stays almost constant during a relatively short amount of time after we apply a special initial condition, even with reversible (Newtonian) dynamics?
Here, "short amount of time" means a time small compared to the recurrence time.

For instance, in a box with elastic balls, consider, as such a function, the squared integral of the difference between the distribution of particle positions and a uniform distribution. When I put all my particles in a corner, this squared integral is a big number (highly peaked distribution minus flat distribution). When I let this system evolve under REVERSIBLE dynamics, the distribution widens to become almost uniform (low value of the squared integral). This is a simple example, but it shows how VERY EASY it is to obtain an "arrow of time" function from a special initial condition and reversible mechanics. In what way does that seem problematic?

The truth is NO. One cannot define an irreversible evolution of a distribution density or correlation function or similar USING reversible dynamics. In fact, all equations used in the study of irreversibility are irreversible ones. A clear example is the Boltzmann kinetic equation, which is irreversible. If you choose the same initial molecular configuration and introduce it into the reversible Newton equations, you obtain a reversible dynamics. This is the reason for the use of irreversible equations. This is the reason the basic equation of kinetics (gas phase) is irreversible. Precisely the great problem of nonequilibrium statistical mechanics (which is still without foundations) is that, whereas obtaining simple irreversible equations (such as Boltzmann's in classical physics and Pauli's in quantum physics) is really trivial, the problem is obtaining more general irreversible equations of motion. For example, what is the equivalent of the Boltzmann irreversible kinetic equation for a condensed fluid?

Obviously, if it were as easy as "working with Schrödinger dynamics or Newton equations using 'special' initial conditions" (even if one knows what this means beyond the simple model of all-balls-in-one-side-of-the-box), nonequilibrium statistical mechanics would have been developed 125 years ago :!)

About your model of a box with elastic balls: as is natural for you, you always trivialize things. Of course you are NOT obtaining irreversible behavior, and of course you are not using an irreversible dynamics. For example, the entropy computed for that model does not coincide with the entropy computed from thermodynamics. This is the reason Boltzmann explicitly used an irreversible equation.
 
  • #67
Juan R. said:
About your model of a box with elastic balls: as is natural for you, you always trivialize things.
Everybody has their shortcomings, Juan. You always like to make easy things complicated; I make complicated things easy :rofl:.
But you misunderstood my example. I didn't mean to recreate any true entropy. I just wanted to show you that, from reversible dynamics, it is possible to create a function which would show you "irreversibility" (that is, which would increase with t, for values of t > 0 and not too big).
Of course the situation is symmetrical around t=0. But that doesn't matter. I had a function which *increased almost monotonically* for t > 0 after t=0, although I had reversible dynamics. Of course, I realize that after long, long, long times this function will decrease again (when we reach a recurrence time given a certain accuracy, following Poincaré). But "just after" the initial condition (say, for 10^50 years), this function will first rise and then level off.
This shows that there is no *fundamental* need for irreversible dynamics in order to obtain such a function (which is the essential function of entropy, even if it doesn't reproduce the entropy value): as long as the monotonicity is respected, the "arrow of time" is defined.
And now I ask you: how are you going to distinguish *EMPIRICALLY* this scheme from a theory where you require this arrow of time to be present for ALL times (beyond the 10^50 years), where I grant you that my scheme of things doesn't work? Because that's the only distinction, as far as I understand, between this "apparent irreversibility" and some hypothetical "true irreversibility": the "apparent irreversibility" breaks down after mindbogglingly long times, while the "true irreversibility" doesn't. But how do you distinguish that empirically?
 
  • #68
Juan could use a wormhole to go and check for himself :rofl: :rofl: I hope he is not going to use my own argument of global hyperbolicity against me now o:) But, seriously, he has a point that we should construct a definition of entropy and prove the second law for reasonable timescales and realistic setups (as I have mentioned already a few times).
 
  • #69
lalbatros said:
Juan,
You started this interesting thread on the origin of irreversibility.
I am a little bit puzzled by your rejection of any kind of 'simple' explanation.
Apparently, if I understand correctly, you would prefer to reject any explanation based on reversible microphysics.
And apparently you would prefer some new laws to explain irreversibility.
Could you please explain why you are taking this point of view?
I would propose that you take as a starting point the thought experiment dealing with particles in a box.
A box is separated in two parts A and B.
Particles are located at random places in part A with random velocities.
There are no interactions between the particles, which only reflect off the walls.
This simple dynamical system obeys reversible microphysics.
Still, as you can verify, you can use it to illustrate truly irreversible behaviour.
You will observe the 'irreversible' filling of the two parts of the box and wait an eternity before something new happens. This is just what is observed in the real world (forgetting about velocity thermalisation, of course).
Then this question: is this 'particles in a box' experiment not showing clearly the origin of irreversibility?
I consider that the origin of irreversibility is quite apparent in this simple experiment.
I also consider that studying irreversibility and modelling it in a comprehensive way is still a wonderful subject where nearly everything still has to be discovered. But it will only show the origin of irreversibility in more detail, not something totally new.
And definitely it will be based on reversible micro-physics.
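The "wait an eternity" in the thought experiment above can be made quantitative with a back-of-the-envelope sketch (not from the thread): once the gas is well mixed, each particle is in half A with probability 1/2 independently, so the chance of finding all N of them back in A at a given instant is (1/2)^N:

```python
# Probability that all N independent, well-mixed particles are simultaneously
# found back in half A of the box: (1/2)**N.  Already for modest N this is
# why the filled box "never" spontaneously unfills in practice.
for N in (10, 100, 1000):
    print(f"N = {N:4d}: P(all in A) = {0.5 ** N:.3e}")
```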
Precision: it is not 'my' rejection. It is a well-defined school of research followed by many physicists and chemists. In fact, one recent Solvay conference (1997 if I remember the year correctly, perhaps 1999) was explicitly devoted to that topic, and one heard really interesting stuff there.

The specialist Van Kampen, for example, said that any attempt to derive irreversible dynamics from reversible dynamics was based on some amount of mathematical funambulism. The Prigogine school is famous for its search for irreversible laws. Penrose is also searching for something similar via his nonunitary theory. Landau (Nobel Prize) also did. The quantum specialist Piron also claimed that one should search for irreversible laws, etc. The list of people working on this is very large. If I remember correctly, the director of the International Solvay Institutes also has his own theory on this, based on a new kind of logical calculus for states with diagonal singularity.

I am NOT rejecting

any explanation based on reversible microphysics.

I am claiming (as others do) that such an explanation is not possible and that the 'explanations' one finds in the literature are wrong.

lalbatros said:
I would propose that you take as a starting point the thought experiment dealing with particles in a box.
A box is separated in two parts A and B.
Particles are located at random places in part A with random velocities.
There are no interactions between the particles, which only reflect off the walls.
This simple dynamical system obeys reversible microphysics.

That model is exactly IRREVERSIBLE. You are not solving reversible equations of motion: there are implicit irreversible points in the model. Those irreversible points appear when you study the system with great care and mathematical detail. In fact, remember that Boltzmann initially claimed to have derived the second law of thermodynamics from the reversible Newton equations. Later, with more rigorous treatments, it was proven that he was really using an irreversible model.

Why do you believe otherwise, when every author in the published literature who has studied those points in detail (since Boltzmann's time, more than 125 years ago) has rejected any attempt to explain it from reversible dynamics plus initial conditions?

Only people like Lebowitz and similar, who never worked out the details and never offered us a complete theory, support that point.

If you have time, go to the library and look at one of Prigogine's last popular books: The End of Certainty. I have the Spanish version but the English version should be identical. Read chapter 3 (From Probabilities to Irreversibility). There the model of balls in collision is explained (to 'translate' it to your model, substitute collision with another ball by collision with a wall, but it is the same: both are collisions!). Look at figures III-2 and III-3 (the numbering should be the same in the English version). Look at the particles before the collision and after the collision. The situation is NOT symmetric, and this is because the real process of collision is not well defined in classical or quantum mechanics.

Before the collision (left on figure III-2) the particles (O) look like

O--> <--O

After the collision (right on figure III-2) they look like

<--O::::O-->

The flow of binary correlations is not time-symmetric. This is the reason that the collision operator in the Boltzmann equation is IRREVERSIBLE. Precisely, it was proven some time ago by Bogoliubov (a great specialist in statistical physics) and van Hove (a great specialist in classical and quantum physics), via very rigorous theorems, that it is the collision operator in the Boltzmann equation which cannot be obtained from the Newton equations.

What is reversible is only the motion of the particles before and after each collision. The overall motion (i.e. including the collisions) is not reversible.

For your model you would use

| <--O

and

|::::O-->

with | being the wall, but the basic idea is the same, since you would be using an irreversible wall-ball collision operator.
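For contrast (a minimal sketch, not from the thread, assuming the equal-mass Newtonian case): the elementary two-body collision law itself is perfectly time-symmetric; whatever irreversibility is claimed lives in the collision operator, not in this formula.

```python
def elastic_1d(v1, v2, m1=1.0, m2=1.0):
    """Post-collision velocities for a head-on 1D elastic collision (Newtonian)."""
    u1 = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return u1, u2

# O-->  <--O : equal masses approaching head-on
v1, v2 = 1.0, -1.0
u1, u2 = elastic_1d(v1, v2)        # equal masses simply exchange velocities

# Time-reverse the outgoing state and let it collide again:
r1, r2 = elastic_1d(-u1, -u2)
# (r1, r2) equals (-v1, -v2): the reversed collision exactly retraces the
# original one, so the collision law itself carries no arrow of time.
```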

People like Lebowitz only write flagrantly wrong popular-level papers, such as the one above in Physics Today. The understanding of people like vanesch is still poor...

lalbatros said:
I consider that the origin of irreversibility is quite apparent in this simple experiment.

Remember that there exists a 100-year-long extensive literature with very advanced studies proving just the contrary. I have counted around 12 Nobel laureates in physics who worked on this specific topic without solving it (I made a figure with their names, which will appear on the web shortly).

Remember that some of the more recent proposals - for example Prigogine's RHS for LPS - work at the level of a NEW quantum mechanics: a new evolution equation, a new mathematical space, new state vectors, etc.

Read Prigogine's book for some details. My own theory is more advanced and, I think, solves the arrow-of-time problem. My theory corrects some errors in Prigogine's and other theories available today (including non-critical string theory, Penrose's theory, Lindblad's axiomatic theory, etc.).
 
Last edited:
  • #70
vanesch said:
You always like to make easy things complicated, I make complicated things easy :rofl:.

Correction: you make complicated working stuff sufficiently easy that it obviously no longer works :rofl:

vanesch said:
This shows that there is no *fundamental* need for irreversible dynamics in order to obtain such a function - which plays the essential role of entropy even if it doesn't agree with the entropy value - as long as the monotonicity is respected, the "arrow of time" is defined.

:zzz:


vanesch said:
And now I ask you: how are you going to distinguish *EMPIRICALLY* this scheme from a theory where you require this arrow of time to be present for ALL times (after the 10^50 years) - where I grant you that my scheme of things doesn't work? Because that's the only distinction - as far as I understand - between this "apparent irreversibility" and some hypothetical "true irreversibility": that "apparent" irreversibility breaks down after mind-bogglingly long times, while "true" irreversibility doesn't. But how do you distinguish that empirically?

It is rather simple :wink:.

Advice: try, at least once, to read the literature on a topic before claiming your own irrelevant and totally wrong ideas.

It is a first step for any knowledgeable guy.
 
1. What is the arrow of time?

The arrow of time is the concept that time only moves in one direction, from the past to the present to the future. It is often described as the asymmetry of time, as it only flows in one direction and cannot be reversed.

2. What causes the arrow of time?

The exact cause of the arrow of time is still a topic of debate among scientists. Some theories suggest that it is a result of the increasing disorder or entropy in the universe, while others propose that it is a fundamental property of time itself.

3. Can the arrow of time be reversed?

Currently, there is no known way to reverse the arrow of time. While some processes may appear to reverse in time, such as melting and freezing, the overall direction of time always remains the same.

4. Does the arrow of time apply to all systems?

The arrow of time is a fundamental principle of our universe and applies to all systems, from the smallest particles to the largest galaxies. However, some systems, such as black holes, may have different perceptions of time due to their extreme gravitational forces.

5. How does the arrow of time relate to the concept of causality?

The arrow of time is closely related to the concept of causality, as it suggests that events in the past cause events in the future, but not vice versa. This is known as the cause and effect relationship, which is a fundamental principle in physics and other sciences.
