What causes the arrow of time?


  • Imperfect entanglement: The conservation laws are not exactly 100%


Juan R.
lalbatros said:
Juan wrote:
I cannot agree with that statement, although I recognize a conceptual difficulty there.
For me, this problem is similar to the problem of irreversibility seen from the classical mechanics point of view. Non-unitary evolution might be a good approximation (maybe even *exact*!) when an interaction with a huge system (huge freedom) is involved.
My favorite example is the decay of atomic states: clearly the interaction of the discrete atomic system with the continuum system of electromagnetic radiation brings the decay. This decay is very conveniently represented by a "non-hermitian" hamiltonian: this allows modeling of an atom (for the Stark effect e.g.) without including the whole field. This represents reality correctly, although the fundamental laws are unitary.

Precisely this is the reason why the problem of the arrow of time is ALSO still unsolved.

It is simply false that a non-unitary evolution can be derived from a unitary evolution as a kind of "good approximation". It is mathematically impossible and physically wrong. This is the reason why the people seriously working on the arrow of time (the specialists in the topic) are proposing non-unitary evolutions: for example, Prigogine's theory, CSM, etc.

That a non-unitary evolution cannot be obtained from a unitary evolution was addressed long ago. In the words of the specialist van Kampen: irreversibility cannot be obtained from reversibility except by an appeal to mathematical funambulism. He clearly emphasized the word funambulism. In fact, all the 'derivations' in the literature that begin from unitary physics contain wrong mathematical steps of the kind "since 2 + 2 = 5, then A > B". What people are doing is adding wrong mathematical steps in order to derive the correct answer from an incorrect starting point. That is, NOBODY is deriving irreversibility from unitarity.

All the supposed 'derivations' I know from the literature are mathematically wrong and physically unsustainable.

Your example of the decay of atomic states is simply wrong, as is well known in the literature on the problem of time. There are a couple of mistakes in the standard elementary textbook 'derivations' (I emphasize: supposed derivations). The literature on why the standard elementary approaches fail when one studies the details is far too large for me to cite all the relevant papers on the topic. But I can describe some of the typical errors.

First, the use of a continuum of radiation does not introduce irreversibility, since QFT is time-symmetric. Moreover, the quantum states involved are not well defined in standard QM and QFT: one uses the approximation that a state is described via Dirac kets, which is not true, because a Dirac state is valid only when the interaction is EXACTLY zero. Some authors are exploring more general states, such as Gamow states.

The use of a pure continuum is an approximation known as the 'thermodynamic limit'. In the standard approaches, resonances between the discrete spectrum and that ill-defined continuum spectrum are simply ignored. Strictly speaking, standard QM does not work in that continuum. In fact, as proven by Prigogine and colleagues, the Hilbert space structure of QM collapses and wavefunctions lose their probabilistic interpretation; for example, the norm of the density matrices is NOT unity (they solve this by introducing a more general RHS). The relationship between the non-hermitian 'Hamiltonian' and the original Hermitian one is NEVER addressed. One can prove that the solution chosen in textbooks is incomplete (in a similar manner to how ignoring the negative-energy states in the relativistic Schrödinger equation does not work). The total system atom + field continues to be reversible, the entropy production computed is zero, which is wrong, etc.

As said, the derivation of the non-unitary law from the unitary one is mathematically wrong. What people really DO is substitute the non-unitary law for the unitary one at some specific point of the computation, but this is 'hidden' in the usual presentations; one can prove, however, that this is what is really being done. Etc., etc.

lalbatros said:
Juan wrote:
For many people, the interaction with a 'classical' or 'macroscopic' system is all that is needed to derive the PP. I think this is the most probable explanation for the PP. Landau considered this so obvious that it comes in the first chapters in his QM book.

1) Precisely the problem with QM, as already noted by Einstein, is that QM is incompatible with classical mechanics. Born explicitly split the universe into two parts, classical and quantum, with QM applying only to the latter. The problem of quantum measurement is that people are attempting to derive measurement from QM alone, when one needs to introduce some concept of classicality from outside QM. 2) Precisely Prigogine's approach is the construction of a generalization of QM that is ALSO applicable to classical systems. This is also Penrose's approach: he argues that GR cannot be completely quantized and that the classical residue is the hidden element needed for explaining measurement.

I think that both approaches (Prigogine's and Penrose's) are good, but they are not the final answer.
 
vanesch said:
The irreversibility in classical statistical mechanics comes about from the very specific initial condition, which is highly improbable.
I don't see how this can come about.

The popularity of this 'explanation' is only surpassed by its incorrectness.

It is completely false that the arrow of time can be explained via initial conditions alone. The appeal to "improbable" states is also false. Probability is computed from wavefunctions or from classical distribution functions. Since the basic evolution law is time-symmetric, transitions from more probable to less probable states are just as theoretically permitted as the reverse.

When one solves the Schrödinger equation one uses an initial state:

Psi(t) = exp(-iHt) Psi(0)

This does NOT introduce irreversibility, because the equation is time-symmetric. In ANY application of the above equation the evolution is reversible and the production of entropy is zero.
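This is easy to check numerically. A minimal two-level sketch in Python (my own illustration, with H = omega * sigma_x so that exp(-iHt) has a closed form; the parameter values are arbitrary): the norm is conserved at all times, and evolving with -t recovers the initial state to machine precision.

```python
import math

def evolve(psi, omega, t):
    """Apply U(t) = exp(-i H t) for H = omega * sigma_x to psi = (a, b).
    For this H, U(t) = cos(omega t) I - i sin(omega t) sigma_x."""
    a, b = psi
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (c * a - 1j * s * b, c * b - 1j * s * a)

def norm(psi):
    return math.sqrt(abs(psi[0]) ** 2 + abs(psi[1]) ** 2)

psi0 = (1.0 + 0j, 0.0 + 0j)                 # start in the "up" state
psi_t = evolve(psi0, omega=1.3, t=2.7)       # evolve forward in time
psi_back = evolve(psi_t, omega=1.3, t=-2.7)  # run time backwards

print(norm(psi_t))   # stays 1: unitary evolution conserves probability
print(abs(psi_back[0] - psi0[0]) + abs(psi_back[1] - psi0[1]))  # ~0: reversible
```

Nothing in this evolution singles out a direction of time, which is the point being made above.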

However, in Prigogine's theory the basic equation is irreversible, and applied to the same initial state Psi(0) the evolution IS compatible with experimental data: irreversible, and producing entropy in accordance with the second law.

Take the irreversible process A ---> B

Irreversibility does not mean that the initial condition A explains the transition to B. Irreversibility means that once the system is in B, it never returns to A.

The process B ---> A is never observed.

Therefore the evolution is

A ---> B ---> C

if B is an equilibrium state

A ---> B ---> B

Moreover, one should remark on the paradox that those 'highly improbable' initial states are ALWAYS observed, precisely as the initial state of the irreversible evolution. If A were really highly improbable (so improbable that it would never be observed), why do we always observe it? At t = 0 the state is precisely that 'highly improbable' state A.
 
What causes the "arrow of time"?

Multiple Choice and Public.
Alternative suggestions welcome. Regards, Hans
 
I think you've forgotten the standard textbook explanation in statistical physics: the very special initial condition of the universe...
 
Juan R. said:
The popularity of this 'explanation' is only surpassed by its incorrectness.
It is completely false that the arrow of time can be explained via initial conditions alone. The appeal to "improbable" states is also false.

Do a simple simulation on a computer, with a totally reversible dynamical law: you can very simply simulate "entropy increase".
For instance, put classical elastic marbles packed in one corner of a cube, all with the same momentum, and let it evolve. You soon get a totally messy distribution which looks a lot like a classical perfect gas. The dynamics is perfectly reversible. The initial condition was special. Liouville's equation applies. No singularities in the dynamics. No magic.
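A minimal Python sketch of such a simulation (simplified for illustration: random velocities rather than equal momenta, and no ball-ball collisions, which already shows the spreading). Free flight with elastic wall reflections is exactly reversible in principle, yet a binned "entropy" of the positions rises from its corner value:

```python
import math, random
from collections import Counter

def fold(x):
    """Elastic reflection off the walls of [0, 1]: triangle-wave folding.
    This gives the exact (reversible) position at any time."""
    x = x % 2.0
    return x if x <= 1.0 else 2.0 - x

def binned_entropy(points, bins=4):
    """Shannon entropy of the occupation of a bins x bins spatial grid."""
    counts = Counter((min(int(x * bins), bins - 1), min(int(y * bins), bins - 1))
                     for x, y in points)
    n = len(points)
    return -sum(c / n * math.log(c / n) for c in counts.values())

random.seed(0)
n = 400
# assumption for illustration: marbles packed in the corner [0, 0.2] x [0, 0.2],
# random velocities, and no marble-marble collisions
pos = [(random.uniform(0, 0.2), random.uniform(0, 0.2)) for _ in range(n)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

def positions_at(t):
    return [(fold(x + vx * t), fold(y + vy * t))
            for (x, y), (vx, vy) in zip(pos, vel)]

s0 = binned_entropy(positions_at(0.0))
s1 = binned_entropy(positions_at(50.0))
print(s0, s1)  # the coarse entropy grows, although the dynamics is reversible
```

Running the positions at -t instead of +t would retrace the motion exactly, so the spreading comes purely from the special initial condition, as described above.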
 
I think the "arrow of time" is caused by the 2nd law of thermodynamics. The entropy of a system will always increase, and the way it increases is by traveling forward in time. I believe that without the 2nd law of thermodynamics we would not be able to tell the difference between forward and backward in time.
 
Thermodynamics, self-contained determinism of the evolution
 
Causality requires time to be one-dimensional and unidirectional (one could try multi-dimensional time, but the extra dimensions would have to be compact variables); otherwise one can construct scenarios that defy causality.
 
That the time dimension is one-way seems to follow from the big bang model. The concepts of causality, the past, the future, evolution, etc., have the meanings that they have, and are physically meaningful, because the universal wave front created by the big bang is moving isotropically away from its source, and the flotsam and jetsam (which constitute us and the rest of the physical universe) moving in the wake of this expansion must follow this general, universal trend (i.e., any direction of motion follows the general omnidirectional expansion).

So, disturbances move, on any scale, away from their points of origin. If the disturbance is in a more or less homogeneous, isotropic medium, like water or air or light, then the disturbance moves more or less isotropically away from the origin. It simply can't be any other way in an expanding universe. In order for phenomena to spontaneously return to previous states (e.g., the evolutionary process that led to a broken cup suddenly reversing and the cup assembling again, or the observation of advanced waves) it would seem necessary to reverse the universal, isotropic expansion, and at least one way of interpreting the available evidence suggests that this is impossible (at least in our universe).

These considerations don't depend on positing a certain set of initial conditions, but only on observations of how the universe at large (and medium and small) is behaving.

The observed expansion is the fundamental physical reason why there is any motion at all in the first place, and observations suggest that that motion is constrained in certain general ways {including the i) necessary evolutionary direction of any process, ii) inertia, iii) and a universal speed limit on any evolutionary process, any propagation}.
Maybe the speed limit hasn't been nailed down yet, and maybe it isn't defined fundamentally by electromagnetic phenomena, but if the universe had a finite beginning, is finite in extent and energy content (even if its constituents are continually evolving according to local interactions), and is expanding, then it seems to me that a universal limit on the rate of any evolutionary process is required.
 
  • #10
Isn't Time just the product of change?
NO CHANGE = NO TIME
 
  • #11
simon009988 said:
Isn't Time just the product of change?
NO CHANGE = NO TIME
Time is change. The question concerns an apparently general characteristic of, and constraint on, change. We observe an 'arrow of time'. Nature never runs in reverse of this 'arrow of time'. Why??

Statistical physics says that Nature can and will run in reverse, but that the probability of this happening is so small that FAPP it will never happen.

I would rather assume that Nature *can't* run in reverse, and consider why that must be.
 
  • #12
Sherlock said:
Time is change. The question concerns an apparently general characteristic of, and constraint on, change. We observe an 'arrow of time'. Nature never runs in reverse of this 'arrow of time'. Why??

The cause of the arrow of time may be just entropy: if a closed system is at maximum entropy and you were to, say, record it on tape and watch the tape backwards, you would not know that you're watching the tape backwards, because the entropy would not increase anymore. Thus, the arrow of time is just a system going from low entropy to high entropy, and it is just because of the second law of thermodynamics.

For example, take a box half filled with footballs and half with soccer balls, each kind on one side, then shake it up to increase the disorder (entropy). If you were to tape it on video and watch the tape backwards, you would not be able to tell whether it was running forwards or backwards; but at the beginning, when the balls were all organized, you would, because the system was going from a state of low entropy to high entropy.
 
  • #13
simon009988 said:
The cause of the arrow of time may be just entropy: if a closed system is at maximum entropy and you were to, say, record it on tape and watch the tape backwards, you would not know that you're watching the tape backwards, because the entropy would not increase anymore. Thus, the arrow of time is just a system going from low entropy to high entropy, and it is just because of the second law of thermodynamics.
For example, take a box half filled with footballs and half with soccer balls, each kind on one side, then shake it up to increase the disorder (entropy). If you were to tape it on video and watch the tape backwards, you would not be able to tell whether it was running forwards or backwards; but at the beginning, when the balls were all organized, you would, because the system was going from a state of low entropy to high entropy.
Systems tend to evolve toward equilibrium. Drop a pebble in a flat pool of water and the disturbance will propagate outward until the pool is flat again. It never happens that a, say, 50 meter diameter, wave front spontaneously appears in a flat pool, propagates inward toward a central point, gradually increasing in amplitude and decreasing in diameter, until suddenly, the pool is flat again.


Just as Newton's gravitation law doesn't tell us the physical reason why gravitating bodies behave accordingly, and just as the first law of motion doesn't tell us why there's any motion in the first place or the fundamental physical reason for inertia, the second law of thermodynamics doesn't tell us why there is an arrow of time. It's just one way to describe it.

The alternatives in the poll aren't physical reasons, per se, for the arrow of time. That "the time dimension itself is simply one way: The future does not yet exist" is simply a restatement of the arrow of time that our collective experience tells us is a fact of Nature.

The way of talking about it that I've learned is that the fundamental physical reason for the arrow of time is the isotropic expansion of the universe.
 
  • #14
vanesch said:
I think you've forgotten the standard textbook explanation in statistical physics: the very special initial condition of the universe...

That's the GR version indeed, but as you say it's an initial condition. There might be some equivalent boundary condition at the end of time as well.

I'm interested in how this keeps working each and every moment, and in what kind of processes, if any, are responsible, quantum mechanically or otherwise. Of course this poll is more of a gut-feeling kind of poll than a "what is the answer" poll.

The big bang response reminds me of an amusing answer I once read, on physics.research, to the question of where the 'missing' anti-particles are (the particle/anti-particle asymmetry in the universe). The response was:

"They all flew off in the other direction, BACKward in time" :rolleyes:


Regards, Hans

P.S. I suppose it's only Greg who can edit/patch thread titles. It's what you get when you're paying attention to a five-year-old at the same time :smile:
 
  • #15
Hans de Vries said:
P.S. I suppose it's only Greg who can edit/patch thread titles. It's what you get when you're paying attention to a five-year-old at the same time :smile:

Apparently, super mentors can do it too :smile:
(didn't know until I tried).
 
  • #16
All the other options on the poll (except the last one, which is not defined) follow directly from the non-unitarity approach.

In fact, Prigogine's theory is a non-unitary approach, Penrose's theory is non-unitary, etc.

Many people think that the projection postulate alone explains the arrow of time. Well, that is not true, and this is the reason why in more than 100 years the quantum measurement problem has not been solved and the decoherence approach is at a dead end. Prigogine has shown how the projection postulate follows from his non-unitary theory. One begins with a quantum system in a superposition state; then the system comes into contact with a measurement system (an LPS in Prigogine's theory). The theory clearly shows how the wavefunction collapses.

In fact, any other derivation of the arrow of time without the explicit use of non-unitarity is mathematically wrong and unphysical. This is the reason why Penrose has also chosen non-unitarity.

Most physicists do not like non-unitarity because there is a theorem that links unitarity with the conservation of quantum probability. The theorem, of course, is valid only in standard quantum mechanics on a Hilbert space. One can construct a non-unitary theory with conservation of probability.
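The simplest illustration of a non-unitary map that conserves probability is pure dephasing of a qubit (a standard open-systems textbook example, sketched here with arbitrary parameters; this is not Prigogine's construction): the trace of the density matrix stays exactly 1 while the von Neumann entropy grows.

```python
import math

def dephase(rho, gamma, t):
    """Pure dephasing channel: a non-unitary but trace-preserving map.
    The off-diagonal elements decay as exp(-gamma t); populations untouched."""
    a, b = rho[0][0], rho[1][1]
    c = rho[0][1] * math.exp(-gamma * t)   # coherence decays
    return [[a, c], [c, b]]

def entropy(rho):
    """Von Neumann entropy of a 2x2 real symmetric density matrix."""
    a, b, c = rho[0][0], rho[1][1], rho[0][1]
    r = math.sqrt((a - b) ** 2 + 4 * c * c)
    eigs = [(a + b + r) / 2, (a + b - r) / 2]
    return -sum(p * math.log(p) for p in eigs if p > 1e-15)

rho0 = [[0.5, 0.5], [0.5, 0.5]]       # the pure state |+><+|, entropy 0
rho_t = dephase(rho0, gamma=1.0, t=3.0)

trace = rho_t[0][0] + rho_t[1][1]
print(trace)                           # stays exactly 1: probability conserved
print(entropy(rho0), entropy(rho_t))   # entropy rises toward log 2
```

No unitary on the qubit alone can do this, yet probability is conserved at every step, which is the point of the theorem's limited scope.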

Any attempt to explain the arrow of time as a function of 'initial conditions' or ratios of probabilities is completely wrong. One should read the advanced literature before claiming that the solution is in a basic textbook, and one should read the detailed analyses of those solutions before believing that they are correct. The best assessment of those irrelevant textbook explanations was given by van Kampen, a specialist in the arrow of time and stochastic theory:

Those attempts to derive the arrow of time are plagued by any amount of mathematical funambulism.
 
  • #17
vanesch said:
Apparently, super mentors can do it too :smile:
(didn't know until I tried).

Thanks! :smile:

Regards, Hans
 
  • #18
vanesch said:
Do a simple simulation on a computer, with a totally reversible dynamical law: you can very simply simulate "entropy increase".
For instance, put classical elastic marbles packed in one corner of a cube, all with the same momentum, and let it evolve. You soon get a totally messy distribution which looks a lot like a classical perfect gas. The dynamics is perfectly reversible. The initial condition was special. Liouville's equation applies. No singularities in the dynamics. No magic.

It appears you have an increasing tendency to trivialize things. It appears you think that by reading some basic textbook you are at the cutting edge of a specific research topic and that 'all is known', or that you 'have solved the question'. Again you are wrong.

Vanesch, there are different levels of literature, from coloured books for 7-year-olds to very, very specific advanced journals (such as Chaos and Fractals). You should read published research-level material on a specific topic before making irrelevant claims. If your only basis is one elementary textbook, vanesch, you should be more 'prudent' in your claims.

If you use a unitary propagator in the simulation, the simulation is reversible at every moment and does not increase entropy.

If you use a non-unitary propagator (for example, by forcing 8-bit arithmetic), then due to the rounding errors of the simulation process, trajectories are broken, the system cannot remember its trajectory, and the simulation generates entropy. In a non-unitary simulation (which is the usual case, due to the limited memory and precision of computers), when you reverse the simulation the computer does not recover the initial state, due to the accumulation of rounding errors. Then one can prove that entropy is generated.

If you put classical elastic marbles packed in one corner of a cube, all with the same momentum, and let them evolve unitarily, entropy is, of course, conserved by Liouville's theorem. If you use a non-unitary propagator (for example, forcing 8-bit numerical arithmetic, or programming collisions probabilistically via a model of independent particles colliding at random, as in a perfect gas), then you can simulate entropy increase. In both of the latter models the simulation is not dynamical and Liouville's theorem does not hold.

Any attempt to derive irreversibility from a reversible law is subject to (in the words of van Kampen, specialist on the arrow of time)

any amount of mathematical funambulism.

It appears that you like mathematical funambulism. But people doing research on the arrow of time have proved, on the basis of rigorous published work, that simplistic approaches such as yours are completely incorrect.

As already explained above, the initial condition is not the key to irreversibility, because i) if the dynamics is unitary, then by Liouville's theorem entropy is conserved, which violates the second law of thermodynamics; and ii) if one takes the final quasi-equilibrium state B in (A ---> B), the use of initial conditions does not forbid the unphysical return to A, which is never experimentally observed.

There are many publications on this topic proving that initial conditions do not solve the arrow of time. You should read research-level literature on a specific topic before making irrelevant claims.
 
  • #19
simon009988 said:
The cause of the arrow of time may be just entropy: if a closed system is at maximum entropy and you were to, say, record it on tape and watch the tape backwards, you would not know that you're watching the tape backwards, because the entropy would not increase anymore. Thus, the arrow of time is just a system going from low entropy to high entropy, and it is just because of the second law of thermodynamics.
For example, take a box half filled with footballs and half with soccer balls, each kind on one side, then shake it up to increase the disorder (entropy). If you were to tape it on video and watch the tape backwards, you would not be able to tell whether it was running forwards or backwards; but at the beginning, when the balls were all organized, you would, because the system was going from a state of low entropy to high entropy.

If one observes a box of gas molecules tucked away in one corner, over time the gas tends to order, via a distributed thermal equilibrium?

If one now replaces the gas molecules with gravitational bodies, then things tend to evolve the other way: they tend towards collecting into clumps (like the initial location of the gas molecules in the first example). As entropy increases, bodies collect together, and finally there is a vast increase at the location of clumping as black holes form.

From Penrose's book The Road to Reality, page 707.

The 'initial' arrow of time can be manipulated if one has systems that are isolated; a black hole provides a technically isolated location. The big bang has to have had an initial state: gas, liquid, solid, or other?
 
  • #21
Juan R. said:
If you use a unitary propagator in the simulation, the simulation is reversible at every moment and does not increase entropy.
I'm not talking about quantum mechanics but about classical mechanics. The classical mechanics of elastic balls is 100% reversible; nevertheless, it reproduces many aspects of an ideal gas.
You can even change the model, and have red and blue balls, the red balls initially in one corner, the blue balls in another one, and let the computer calculate. After a while, there is no distinguishing this mixed state from any other mixed state, EVEN THOUGH if you were to calculate backward, you would of course get them back into the corners again. Just about all the statistical tests that you could perform upon this mixed state (such as n-particle correlation functions and so on) would agree perfectly with what you would have with a "high entropy state". So this IS a high-entropy state for all practical purposes.
I agree with you that if you KNEW that the state evolved from such a special "corner state" you should consider it a low-entropy state, in that you could, IN PRINCIPLE, apply an action upon the system that reversed the motions, and you would then get a violation of the second law. But that's never going to happen; FAPP, this is not feasible: in your computer it is not feasible because of roundoff errors, and in practice it is not feasible because of external disturbances. So this state DID REALLY BECOME a high-entropy state. Nevertheless, we had an in-principle reversible dynamics and we started from a low-entropy state. In other words, the clear low-entropy state evolved into a FAPP high-entropy state, with reversible dynamics.
Imagine for a moment a universe which is classical, Newtonian, and that we live in a "rubber ball particle" universe which started long ago with a big bang, when all rubber balls were densely packed and flew off radially from a "creation point". (no, I'm not going to suggest that this is what really happened !)
The dynamics in this universe is completely reversible Newtonian physics with some Newtonian interactions between the balls, such as gravity, and other interactions which allow to make such things which look like molecules and all that. After a long time, rubber ball people run around, and wonder at how their universe came about. And they do experiments in the lab and so on. Well, they will ALSO find an experimentally confirmed second law of thermodynamics.
In all their lab experiments, they will not notice that these ball configurations are in fact very special, and if they calculated everything backward, they'd arrive at the amazing conclusion that everything just fits as having them blow radially outward. They will simply notice a second law of thermodynamics.
Nevertheless, there is no deep mysterious asymmetry in time in their universe.
So such a second law of thermodynamics CAN be the result of a reversible dynamics and a special initial condition, because we only LOOK AT PART OF THE ENTIRE SYSTEM. And when we try to look at a specific isolated system, we can never avoid small disturbances.
 
  • #22
ZapperZ said:
http://www.math.rutgers.edu/~lebowitz/PUBLIST/lebowitz_370.pdf

There's another, newer article related to this in Physics Today, but it's not available online.

Zz.

Great paper, thanks... would it be 'possible' or 'improbable' that the Physics Today article will eventually evolve to be online?
 
  • #23
vanesch said:
I'm not talking about quantum mechanics but about classical mechanics. The classical mechanics of elastic balls is 100% reversible; nevertheless, it reproduces many aspects of an ideal gas.

I already said why that is wrong. Moreover, your claim that an ideal gas is a classical mechanical system is completely wrong: a nonsense! An ideal gas is a kinetic system with a well-defined concept of probability outside of pure mechanics. In fact, in an ideal gas, collisions are probabilistic. Only the evolution before and after a collision is modeled via Newton's equations. Do you know the Boltzmann equation? It contains two parts. The free part is purely Newtonian and follows from pure mechanics; the collision part, however, contains a probabilistic assumption and a non-unitary evolution operator. It was rigorously proven by Bogoliubov that the collision term does not follow from Newtonian physics. In Prigogine's theory, that collision term follows from his Lambda transformation, which is a non-unitary evolution operator that generalizes both classical and quantum mechanics.
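For reference, the two-part structure referred to here is explicit in the standard textbook form of the Boltzmann equation for the one-particle distribution f(x, v, t):

```latex
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  + \frac{\mathbf{F}}{m}\cdot\nabla_{\mathbf{v}} f
  = \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}
```

The left-hand side is the purely Newtonian free streaming; the collision term on the right rests on the molecular-chaos assumption (Stosszahlansatz), and it is that probabilistic assumption, not the streaming, that breaks the time symmetry.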

Again I remark that you should read the advanced literature instead of undergraduate textbooks, vanesch. This is friendly advice.

vanesch said:
You can even change the model, and have red and blue balls, the red balls initially in one corner, the blue balls in another one, and let the computer calculate. After a while, there is no distinguishing this mixed state from any other mixed state, EVEN THOUGH if you were to calculate backward, you would of course get them back into the corners again. Just about all the statistical tests that you could perform upon this mixed state (such as n-particle correlation functions and so on) would agree perfectly with what you would have with a "high entropy state". So this IS a high-entropy state for all practical purposes.

That is false. You cannot derive irreversible laws of motion from a reversible law, and all you are doing is 'forcing' the simulation in one direction and never in the other, which is also permitted by the mechanics. Moreover, if the evolution is unitary, then by Liouville's theorem entropy is conserved, and what people compute in those 'tests' is not the real entropy, only a coarse-grained entropy defined ad hoc for each specific simulation.

For example, in the blue and red balls case one computes only the entropy due to 'colour'. Compute the whole entropy, not only a part of it.

Moreover, those 'statistical tests' are based on an a posteriori, ad hoc introduction of 'averaging procedures' with no direct link to the underlying dynamics; strictly speaking, they violate the underlying dynamics. This is the reason for the name statistical mechanics, which means statistical procedures plus pure mechanics. Statistical procedures are alien to the pure dynamical evolution.
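The fine/coarse distinction can be made concrete with a toy model (my own sketch: the integer Arnold cat map on a 64x64 toral grid, which is an exact, measure-preserving permutation of cells). The fine-grained Shannon entropy is exactly invariant under the permutation, as Liouville's theorem demands, while the block-averaged entropy grows:

```python
import math
from collections import defaultdict

N, BLOCK = 64, 8    # a 64x64 grid of cells, coarse-grained into 8x8 blocks

def step(p):
    """One step of the integer Arnold cat map: an exact, invertible
    permutation of the cells (a discrete analogue of Liouville's theorem)."""
    return {((i + j) % N, (i + 2 * j) % N): m for (i, j), m in p.items()}

def shannon(masses):
    return -sum(m * math.log(m) for m in masses if m > 0)

def coarse_masses(p):
    """Sum the probability mass inside each BLOCK x BLOCK block."""
    blocks = defaultdict(float)
    for (i, j), m in p.items():
        blocks[(i // BLOCK, j // BLOCK)] += m
    return blocks.values()

# initial distribution: uniform over one 8x8 corner block of cells
p = {(i, j): 1.0 / 64 for i in range(8) for j in range(8)}

fine0, coarse0 = shannon(p.values()), shannon(coarse_masses(p))
for _ in range(8):
    p = step(p)
fine1, coarse1 = shannon(p.values()), shannon(coarse_masses(p))

print(fine0, fine1)      # identical: a permutation cannot change fine entropy
print(coarse0, coarse1)  # the block-averaged entropy has grown
```

The only entropy that grows here is the one obtained after the block averaging, i.e. after an averaging procedure imposed on top of the dynamics.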

vanesch said:
I agree with you that if you KNEW that the state evolved from such a special "corner state" you should consider this as a low-entropy state,

It is NOT a low-entropy state. If you claim a unitary evolution, then by Liouville's theorem entropy is conserved. The first step in trivializing irreversible phenomena is the definition of a wrong entropy.

vanesch said:
in that you could, IN PRINCIPLE, apply an action upon the system that reversed the motions, and you would then get a violation of the second law. But that's never going to happen; FAPP, this is not feasible: in your computer it is not feasible because of roundoff errors, and in practice it is not feasible because of external disturbances. So this state DID REALLY BECOME a high-entropy state.

If the computer is making roundoff errors, then it is NOT doing dynamics. Dynamics implies conservation of the number of trajectories. If the computer is making roundoff errors, then you are doing non-unitary dynamics.

It is false that 'external disturbances' are the cause of the arrow of time. IF you take the environment into the dynamical description, the whole system continues to be time-reversible, and by Liouville's theorem the whole entropy (system + environment) is conserved.

vanesch said:
Nevertheless, we had an in principle reversible dynamics and we started from a low entropy state. In other words, the clear low-entropy state evolved into a FAPP high entropy state, with reversible dynamics.

Of course, completely wrong. Your future may not lie in research on the arrow of time. You should begin by reading the relevant literature before claiming to have solved 'some' question. This is the usual step in scientific methodology.

vanesch said:
Imagine for a moment a universe which is classical, Newtonian, and that we live in a "rubber ball particle" universe which started long ago with a big bang, when all rubber balls were densely packed and flew off radially from a "creation point". (no, I'm not going to suggest that this is what really happened !)
The dynamics in this universe is completely reversible Newtonian physics with some Newtonian interactions between the balls, such as gravity, and other interactions which allow to make such things which look like molecules and all that. After a long time, rubber ball people run around, and wonder at how their universe came about. And they do experiments in the lab and so on. Well, they will ALSO find an experimentally confirmed second law of thermodynamics.

Of course wrong: in Newtonian physics entropy is of course conserved, by Liouville's theorem. The experimenter would never have found the second law...

vanesch said:
In all their lab experiments, they will not notice that these ball configurations are in fact very special,

The appeal to initial conditions is wrong, as proved in the published literature. I always find it curious that those 'highly improbable' initial conditions are ALWAYS there, making their real probability exactly 1. Remember, probability 1 is for an outcome that is always measured. Since we always measure the initial state, the initial state is always there with probability 1.

vanesch said:
Nevertheless, there is no deep mysterious asymmetry in time in their universe.

Of course there is no deep mysterious asymmetry in time in their universe.

There is just a beautiful asymmetry in time in our universe.

vanesch said:
So such a second law of thermodynamics CAN be the result of a reversible dynamics and a special initial condition, because we only LOOK AT PART OF THE ENTIRE SYSTEM. And when we try to look at a specific isolated system, we can never avoid small disturbances.

A completely wrong argument. Nobody has made progress along this mistaken route in more than 100 years. Observing only part of an entire system does not introduce irreversibility. This is easily proven with rigorous mathematics (remember mathematical funambulism).

In fact, if the whole system is reversible, then any part of it is, by definition, reversible as well.
 
Last edited:
  • #24
Juan R. said:
Of course wrong: in Newtonian physics entropy is of course conserved by the Liouville theorem. The experimenter would never find the second law...

Of course he would find a second law of thermodynamics, and yes he would know also Liouville's theorem. Both are not contradictory, as you seem to imply. They WOULD find evolutions of correlation functions suggesting an increase in a number they could call entropy.

For example, if you were to release the balls from a corner in a box, let it evolve, and give that box to someone else, not telling him about what you did, do you think that the other one would notice that peculiarity ? He would do some statistical tests on the average density of balls in space, and the fluctuations of the hits of the balls on the wall and so on, and that would correspond statistically exactly to what a RANDOM configuration does with maximal entropy.
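The corner-of-the-box thought experiment is easy to sketch numerically. The toy script below is my own illustration (all names and parameters are invented for this example): non-interacting particles start in one corner, evolve under perfectly reversible free flight with wall reflections, and the Shannon entropy of a coarse cell-occupation histogram climbs toward its maximum even though the microdynamics is reversible.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, CELLS = 2000, 1.0, 4          # particles, box side, coarse cells per axis

# Special initial condition: all particles in one corner (low coarse entropy).
pos = rng.uniform(0.0, L / 4, size=(N, 2))
vel = rng.normal(0.0, 1.0, size=(N, 2))

def coarse_entropy(pos):
    """Shannon entropy of the coarse-grained cell-occupation histogram."""
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                                  bins=CELLS, range=[[0, L], [0, L]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) + 0.0    # +0.0 avoids printing -0.0

def step(pos, vel, dt=0.01):
    """Reversible free flight with specular reflection at the walls."""
    pos = pos + vel * dt
    over, under = pos > L, pos < 0.0
    pos = np.where(over, 2 * L - pos, pos)
    pos = np.where(under, -pos, pos)
    vel = np.where(over | under, -vel, vel)
    return pos, vel

S0 = coarse_entropy(pos)
for _ in range(500):
    pos, vel = step(pos, vel)
S1 = coarse_entropy(pos)

print(f"coarse entropy: start {S0:.3f} -> later {S1:.3f} (max {np.log(CELLS**2):.3f})")
```

Note that the fine-grained phase-space density stays constant throughout; only the coarse-grained number rises, which is exactly the point being argued over.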

This makes me think: do you ever do Monte Carlo simulations ?
If so, do you use a pseudo-random generator or a "real random" generator based, I don't know, upon cosmic radiation ? Because the pseudo-random generator corresponds to your "low entropy" state. Nevertheless, a monte carlo with a pseudo-random generator works very well. Even though its numbers are not "random" at all, but given by an (of course reversible) algorithm, because it counts down a long list.
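To make the pseudo-random point concrete, here is a minimal sketch of my own (the generator constants are the well-known Numerical Recipes LCG values; nothing here comes from the thread). The stream is fully deterministic and invertible, yet the Monte Carlo estimate comes out fine.

```python
def lcg(seed=12345, a=1664525, c=1013904223, m=2**32):
    """A minimal linear congruential generator: deterministic and invertible
    (a is odd, so x -> (a*x + c) mod m is a bijection on [0, m))."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m                      # uniform-ish float in [0, 1)

def mc_pi(n=200_000):
    """Monte Carlo estimate of pi: fraction of points in the quarter disk."""
    u = lcg()
    hits = sum(1 for _ in range(n) if next(u) ** 2 + next(u) ** 2 < 1.0)
    return 4.0 * hits / n

print(mc_pi())   # close to 3.14159 despite the fully deterministic stream
```

The "randomness" used by the simulation is thus entirely a matter of the coarse statistics of a reversible sequence, which is the analogy being drawn above.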

Concerning your ad hominem statements, I don't think that your aggressive tone is a good idea to further discussions.
 
  • #25
ZapperZ said:
http://www.math.rutgers.edu/~lebowitz/PUBLIST/lebowitz_370.pdf

There's another, newer article related to this in Physics Today, but it's not available online.

Zz.

Ah, Zapperz, you save me from Juan R.'s infantilizing comments :smile: in the thread which was originally about my paper on the Born rule...
Couldn't find a better paper than yours here!
 
  • #26
vanesch said:
Of course he would find a second law of thermodynamics, and yes he would know also Liouville's theorem. Both are not contradictory, as you seem to imply. They WOULD find evolutions of correlation functions suggesting an increase in a number they could call entropy.

This, of course, is false, but you continue trivializing the issue. Entropy is defined on rho, and if rho is conserved then entropy is conserved as well. In fact, I remark again (even if you ignore what I am writing) that you are not computing the real entropy. You are just computing an ad hoc coarse-grained entropy, which coincides neither with the entropy of the dynamical state nor with the thermodynamic entropy.

If the dynamics is reversible, the computed 'correlation functions' are compatible with both dS > 0 and dS < 0!

vanesch said:
For example, if you were to release the balls from a corner in a box, let it evolve, and give that box to someone else, not telling him about what you did, do you think that the other one would notice that peculiarity ? He would do some statistical tests on the average density of balls in space, and the fluctuations of the hits of the balls on the wall and so on, and that would correspond statistically exactly to what a RANDOM configuration does with maximal entropy.

If the simulation follows the rules of dynamics, there is no irreversibility: entropy is conserved. Those statistical tests of 'average' density and 'fluctuations' are introduced ad hoc, from outside pure mechanics. In fact, unless one breaks the pure dynamical evolution (for example via a nonunitary contribution), the system never correctly thermalizes.

vanesch said:
This makes me think: do you ever do Monte Carlo simulations ?
If so, do you use a pseudo-random generator or a "real random" generator based, I don't know, upon cosmic radiation ? Because the pseudo-random generator corresponds to your "low entropy" state. Nevertheless, a monte carlo with a pseudo-random generator works very well. Even though its numbers are not "random" at all, but given by an (of course reversible) algorithm, because it counts down a long list.
Concerning your ad hominem statements, I don't think that your aggressive tone is a good idea to further discussions.

Are you claiming that in Monte Carlo simulations one is doing mechanics? Or is one just using statistical methods, even though the random generator is not truly random?

vanesch said:
Nevertheless, a monte carlo with a pseudo-random generator works very well. Even though its numbers are not "random" at all, but given by an (of course reversible) algorithm, because it counts down a long list.

Is MC perfect, so that one can simulate everything, or are there, precisely, well-known problems with pseudo-random generators?

Does MC work in irreversible physics, or only in the simulation of equilibrium ensembles, i.e. precisely when there is no irreversibility and entropy is constant?

Remember van Kampen (a specialist who knew a bit more than you about random methods):

mathematical funambulism
 
  • #28
Spin_Network said:
Great paper linked, thanks... would it be 'possible' or 'improbable' that the Physics Today article will eventually evolve to be online?

Physics Today IS available on line (for subscribers), but not the complete archive.

Zz.
 
  • #29
Hans de Vries said:
That's the GR version indeed, but as you say it's an initial condition.
There might be some equivalent border condition at the end of time as well.
I'm interested in how this keeps working each and every moment, what kind
of processes, if any, are responsible... quantum mechanically or other.
Of course this poll is more of a gut-feeling kind of poll rather than a "what is the answer" poll.
You won't like my answer but I will give it anyway: it's the special initial condition of the universe combined with classical black hole thermodynamics. If you want to stick to some form of QM, then you will indeed need a nonunitary formalism; in that respect I agree with Juan R (Sorkin has written a nice paper about that recently: ``ten theses on [quantum] black hole thermodynamics´´, although 't Hooft is prepared to die for unitarity, and many other physicists too - contrary to the claim of Juan R). Moreover, I should mention that many physicists take the Hartle-Hawking proposal seriously, but I don't.
 
Last edited:
  • #30
ZapperZ said:
Physics Today IS available on line (for subscribers), but not the complete archive.
Zz.

Zapper, I found an article by Joel Lebowitz on arxiv from 2000 that may be a partial substitute for what some of us can't get either from his site or from Physics Today. No guarantees but here it is:

http://arxiv.org/abs/math-ph/0010018

Statistical Mechanics: A Selective Review of Two Central Issues

Joel L. Lebowitz
36 pages, in TeX, 1 figure
Reviews of Modern Physics, 71, (1999), S346

"I give a highly selective overview of the way statistical mechanics explains the microscopic origins of the time asymmetric evolution of macroscopic systems towards equilibrium and of first order phase transitions in equilibrium. These phenomena are emergent collective properties not discernible in the behavior of individual atoms. They are given precise and elegant mathematical formulations when the ratio between macroscopic and microscopic scales becomes very large."

for some reason I cannot download the article you mentioned from his site so this is basically all I have from him on the topic
 
  • #31
marcus said:
Zapper, I found an article by Joel Lebowitz on arxiv from 2000 that may be a partial substitute for what some of us can't get either from his site or from Physics Today. No guarantees but here it is:
http://arxiv.org/abs/math-ph/0010018
Statistical Mechanics: A Selective Review of Two Central Issues
Joel L. Lebowitz
36 pages, in TeX, 1 figure
Reviews of Modern Physics, 71, (1999), S346
"I give a highly selective overview of the way statistical mechanics explains the microscopic origins of the time asymmetric evolution of macroscopic systems towards equilibrium and of first order phase transitions in equilibrium. These phenomena are emergent collective properties not discernible in the behavior of individual atoms. They are given precise and elegant mathematical formulations when the ratio between macroscopic and microscopic scales becomes very large."
for some reason I cannot download the article you mentioned from his site so this is basically all I have from him on the topic

There have been disruptions at: http://citebase.eprints.org/offline.php?id=oai:arXiv.org:math-ph/0010018

and news of: http://news.bbc.co.uk/1/hi/england/hampshire/4390048.stm

which should not have affected your linked paper search, but I thought it needed to be posted.
 
Last edited by a moderator:
  • #32
Careful said:
You won't like my answer but I will give it anyway: it's the special initial condition of the universe combined with classical black hole thermodynamics.

I like the first part of that statement :smile:

A question: do you think that a strictly Newtonian universe, with perfectly elastic particles, which starts out in a special condition, would also show a 'second law of thermodynamics' to its inhabitants (even though the mechanics is entirely reversible) ?
 
  • #33
vanesch said:
I like the first part of that statement :smile:
A question: do you think that a strictly Newtonian universe, with perfectly elastic particles, which starts out in a special condition, would also show a 'second law of thermodynamics' to its inhabitants (even though the mechanics is entirely reversible) ?

Hmmm, I cannot immediately answer this. A quick worry would be that you will have to take into account Poincare recurrence times if you put the universe in a box. It might of course be that in the infinite volume limit this is not an issue, but on the other hand Poincare recurrence times are usually dealt with by suitably coarse graining in *classical* statistical physics (something which you cannot do here). On the other hand, if you define entropy by counting degrees of freedom on the event horizons of black holes, then the second law of thermodynamics is a *deterministic* statement following from the dynamical rules themselves (which is after all much more powerful). I have to think deeper about this if I want to give you a fair answer.
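The recurrence worry can be made concrete in a toy reversible system. Arnold's cat map on a discrete n x n torus (my choice of example, not anything discussed above) is invertible and measure-preserving, yet it returns exactly to the identity after finitely many steps, so every configuration recurs:

```python
def cat_map_period(n):
    """Smallest k with A^k = I (mod n) for Arnold's cat map A = [[2, 1], [1, 1]].
    After k steps, every point of the n x n discrete torus is back where it started."""
    a, b, c, d = 2 % n, 1 % n, 1 % n, 1 % n   # current power A^k, starting at k = 1
    k = 1
    while (a, b, c, d) != (1, 0, 0, 1):
        # multiply by A on the left: A * [[a, b], [c, d]]
        a, b, c, d = (2 * a + c) % n, (2 * b + d) % n, (a + c) % n, (b + d) % n
        k += 1
    return k

for n in (2, 5, 101):
    print(n, cat_map_period(n))   # e.g. period 10 for the 5 x 5 torus
```

The point of the illustration: exact recurrence is compatible with thoroughly "mixing-looking" behaviour in between, which is why recurrence times matter for the boxed-universe question.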

Cheers,

Careful
 
  • #34
Careful said:
I have to think deeper about this if I want to give you a fair answer.
Cheers,
Careful
Hi Vanesch, I do not think so. Gravitation will make matter clump together and lower the entropy of the matter degrees of freedom (unless you start out from a highly idealized stable state already). Moreover, in such a Newtonian universe (with elastic particles), total energy will be conserved; therefore the first law of thermodynamics - which always holds - (assuming that all processes run sufficiently slowly, and the number of particles is conserved, which is the case when elastic classical particles scatter) gives:
T dS = p dV < 0 (since the matter clumps.)
I did also take into account radiation degrees of freedom here, which play a part when chemical bonds are formed (however, all these processes are conservative and not relevant during the ``clumping´´ process; that is the point). So I really think you need the gravitational degrees of freedom in order to get a second law out (I think Penrose argues something similar).

Cheers,

Careful
 
  • #35
Juan R. said:
Remember van Kampen (that specialists who knew a bit more than you about random methods)
This is really a funny discussion.. Juan R is right that in the first law of thermodynamics, the Shannon - Von Neumann entropy has to be used, although this one certainly does not have the final word yet (since it is an equilibrium concept), and people are searching for dynamical (off-equilibrium) notions of entropy. The Liouville theorem in classical mechanics and unitarity in QM obviously imply a conserved entropy of the ENTIRE closed system (you do not have to look into advanced textbooks for this, it is just a calculation of two lines), and indeed these coarse-grained notions are just ad hoc concepts serving to avoid these problems - as far as I know this does not even work at the unitary quantum level. But now opinions are again divided: there is a good bunch of people who think the entire universe conserves Shannon - Von Neumann entropy and that there does not exist a global future-pointing thermodynamical arrow of time. This is logical, since localized entropy-lowering phenomena are observed every day and still we perceive ourselves as going to the future. :smile:
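The "calculation of two lines" alluded to here is just that S(rho) = -Tr rho ln rho depends only on the eigenvalues of rho, and U rho U† has the same eigenvalues as rho, so unitary evolution conserves the von Neumann entropy exactly. A numerical spot check (my own illustration, plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                        # Hilbert space dimension

def random_density_matrix(d):
    """rho = A A† / Tr(A A†): positive semidefinite with unit trace."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho ln rho, computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

rho = random_density_matrix(d)
h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (h + h.conj().T) / 2                     # a random Hermitian "Hamiltonian"
w, v = np.linalg.eigh(H)
U = v @ np.diag(np.exp(-1j * w)) @ v.conj().T   # U = exp(-i H), unitary

rho_t = U @ rho @ U.conj().T                 # unitary evolution of the state
print(von_neumann_entropy(rho), von_neumann_entropy(rho_t))  # equal up to rounding
```

This conservation is exact for the whole closed system; nothing in it forbids subsystem (reduced) entropies from changing, which is where the real dispute lies.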
 
  • #36
Careful said:
Hi Vanesch, I do not think so. Gravitation will make matter clump together and lower the entropy of the matter degrees of freedom (unless you start out from a highly idealized stable state already).

? I don't see why this decreases entropy. The volume decreases all right, but the kinetic energy of the particles increases ; assuming there are radiation degrees of freedom (without exactly being Maxwell, because I want to stay Newtonian and leave relativity out for the sake of argument), you get emission of thermal radiation that way, and the "cloud + no radiation" might very well have a much lower entropy than the "lump of clustered matter + radiation".
But I really didn't want to observe this from a "cosmological" POV (although one is ultimately always led there).

The gravitational contraction you are talking about here could be replaced by a balloon, that is stretched by many strings attached to the inside of a hollow metal sphere to be in "under pressure". Do you think that cutting the strings, hence have the gas inside being compressed by the elasticity of the balloon, (exactly as gravity does), LOWERS the entropy of the system ? Wouldn't think so!

What I mean is: where does the second law come from in classical thermodynamics ? It comes from the observation that "heat" goes from "hotter" to "colder" objects and that it is "impossible" (in fact, STATISTICALLY IMPOSSIBLE) to do otherwise without doing the same somewhere else. In a small part of the universe.

The second law (at least, I understand it that way) is not an "absolute" law ; it is almost a "tautology": "only probable things happen". So sometimes it is violated, namely when something improbable happens. The only point is that you will have to WAIT A LONG TIME for something improbable to happen.
So the second law says that MOST OF THE TIME you heat water, it will boil off. REALLY REALLY most of the time. Because it is highly improbable that, for instance, all the molecules nicely vibrate up and down but do not leave the liquid. But this *can* happen, once in a while (a LONG while, say, 10^10000 years or so :-)
Now, the point was made that conservative systems have 1) recurrence times and 2) using canonical transformations, you could make the state "not move" a bit like the Heisenberg view in QM, so the "initial state" is "the state". That's true. Concerning recurrency times, I don't think it has anything to do with the second law, because it only means that ONCE IN A WHILE (a very very very long while) the second law will be violated. But that's exactly what she says :-) The second law has been empirically derived in a small corner of the universe, for small amounts of time, and being "close" to the initial condition (compared to any recurrency time). So it is very unlikely to have observed any violation. And you CAN BET ON IT that you won't see it (probabilistic argument).
But in order to even verify that law, you NEED to be able to *produce* hot and cold objects! So the environment of the lab can already not be in thermal equilibrium, which means it has to be in a "special macroscopic state". These macroscopic states are defined by the properties of low-order correlation functions over the phase space.
What really counts (as I understand it) is not the particularity that a certain detailed microconfiguration is on the phase space track of a specific initial condition. It is that during its evolution, it goes from smaller to larger "macrovolumes" (these macrovolumes being defined by coarse grained correlation functions between 1, 2, 3 and a *few* particles). There's nothing magical about it. It's just that it 'started off' in a small volume because the experimenter put it there (special initial condition). Just about ANY evolution would soon put it in a larger volume, simply because the volume is larger. THAT is, to me, what the second law says.
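The disparity of "macrovolumes" can be put into numbers with elementary counting (standard combinatorics; the left/right-half example is mine, not from the post above). With N particles, each in the left or right half of a box, the macrostate "k on the left" contains C(N, k) microstates:

```python
from math import comb, log

N = 100                                    # particles, each in the Left or Right half

corner   = comb(N, 0)                      # macrovolume "all on one side": 1 microstate
balanced = comb(N, N // 2)                 # macrovolume "evenly split": ~1.01e29 microstates

# Boltzmann-style entropy difference S = ln W between the two macrovolumes:
print(f"W = {corner} vs {balanced:.3e};  Delta S = {log(balanced) - log(corner):.1f}")
```

Just about any reversible trajectory that starts in the tiny macrovolume soon finds itself in the enormous one, which is all the second law asserts on this reading.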
Why are these macrovolumes defined by low-order correlation functions important ? Because they define the macroscopically observable things such as temperature, densities of different sorts, concentrations, reaction rates, ...
And THESE are the quantities where entropy plays a role, and which we test the second law against.
So I really think that, seen that way, the second law holds as well in a strictly Newtonian universe as in anything else as long as we had "special initial conditions" (and, you could add, that special condition occurred in a *recent past* as compared to the recurrency time, but given the VERY LONG recurrency time that doesn't really matter FAPP:smile: )
 
Last edited:
  • #37
Careful said:
This is really a funny discussion.. Juan R is right that in the first law of thermodynamics, the Shannon - Von Neumann entropy has to be used
Yes, but even in order to be able to define it, you need to define your macrostates. If you KNOW perfectly well the microstate of a system, then the Shannon entropy of that system is zero (and all the entropy is in your head!).

Of course, Juan is right that IF YOU KNOW the special initial state of a CLOSED SYSTEM, and you know perfectly well the (reversible) dynamics, so that you *can calculate* the microstate after some time, then the Shannon- Von Neumann entropy is zero to start with, and zero all along. For a kind of god creature who knows this, "nothing surprisingly" happens, no "irreversible phenomena" occur etc... This is what happens when you apply the reversible evolution theorems (Liouville in CM, unitary evolution to a pure state density matrix in QM).

But the second law doesn't apply to this case (well, it does: it says that *if you know all that, then entropy zero is conserved*, so it is trivial). It applies to SUBSYSTEMS. We found empirically the second law by looking at small pieces of universe over short amounts of time, and by looking at coarse-grained properties (temperature, chemical reactions...) which only depend upon the properties of low-order correlation functions. It is THERE that the second law is valid.

Only when these coarse-grained properties are defined (the correlation functions that will matter are selected) can the entropy be defined, because we have now sliced up the state space in macrovolumes and can count the microstates corresponding to it (or weight it with the probabilities induced by these correlation functions). That's what the standard ensembles do, but one should realize that the very definition of entropy will depend on exactly how you choose to "slice up the state space".
And your microstate wanders happily from small volumes (small entropies) to big volumes (large entropies). So the entropy increases. Until it reaches the biggest volume, where it stays FOR MOST OF THE TIME. No matter exactly what track (what initial state), as long of course as the initial state was within a "small volume".
Whether or not it was part of a cosmic track that started out in a known state.
 
Last edited:
  • #38
vanesch said:
? I don't see why this decreases entropy.
Careful said:
I have to think deeper about this if I want to give you a fair answer.
Cheers,
Careful
Hi, I was just revising my answer. I will give it here; I shall read your comments later on (I have to go away for some time now).
Hi Vanesch, I revise my old answer here. As far as I remember, the first law of thermodynamics is only valid for near-equilibrium situations (slowly running processes). The Von Neumann-Shannon entropy notion is an *equilibrium* concept and therefore, by definition, should not change. The formal verification of this is a check of one line. Obviously, we can increase/decrease entropy even for reversible processes by switching on force fields which enhance/reduce the total number of degrees of freedom of the system *acted* upon (this is in a sense what happens when you release the gas from a smaller box into a larger one). The second law is something heuristic which we observe even in off-equilibrium situations (again, except for black holes), so entropy there cannot be Von Neumann - Shannon entropy, but a ``dynamical entropy´´ whose definition you can realize by adaptively counting the effective number of degrees of freedom (on the other hand, any dynamical notion of entropy should also undergo a ``thermalization´´ process even if the number of degrees stays fixed). However, the universe is a closed system, and in principle the number of degrees of freedom should be known if you stick to a Newtonian picture. In GR the spatial universe might change volume and therefore change the number of degrees of freedom. So everything I say below has to be interpreted in the following sense:
(a) S denotes a dynamical entropy which coincides with the Shannon Von Neumann equilibrium notion (b) the first law holds with respect to S.
Since in your Newtonian universe, the total energy is conserved, the first law says that
T dS = p dV
The latter expression should be SMALLER than zero since the gravitational force is going to make matter clump together. I do not think that the inclusion of radiation degrees of freedom due to chemical binding which occurs will change the outcome of this conclusion, but there is no a priori reason why you should do this (you might as well assume that all particles are neutral). It seems to me that you have to take into account the gravitational degrees of freedom in order to compensate for this (Penrose argued that also, I think). Other options which might avoid this conclusion are: (a) stick to Shannon entropy, but go over to a classical mechanics with a time-dependent Hamiltonian (so that my conservation of energy argument does not apply here, but total entropy is still conserved); (b) stick to Shannon entropy but allow for non-Hamiltonian dynamics (not every classical Newton equation can be derived from a Hamiltonian) - this is what Juan R adheres to at the quantum level (by demanding non-unitarity). Most people are convinced the gravitational degrees of freedom are important and that quantum gravity has a unitary dynamics with conserved Shannon entropy. This is of course very possible and not in conflict with observation.

Cheers,

Careful

PS: I might adapt this further.
 
  • #39
Careful said:
Hi, I was just revising my answer. I will give it here; I shall read your comments later on (I have to go away for some time now).
Hi Vanesch, I revise my old answer here. As far as I remember, the first law of thermodynamics is only valid for near-equilibrium situations (slowly running processes).

Nah, the first law is just conservation of energy. It is *always* valid in a conservative system.

Of course, writing T dS = p dV is something else: it just says that for a given system THAT CAN BE DESCRIBED BY 2 EXTENSIVE QUANTITIES S AND V (and is as such in equilibrium), any change in internal heat energy has to be brought in by mechanical work IN A THERMALLY ISOLATED SYSTEM (no heat influx). You have to add terms to the right if there is also electrical or other energy coming in, and you add dQ if heat is allowed to flow in.

I don't think that this equation can, in any way, be applied to our situation, as it is not in equilibrium, and certainly not defined by just 2 extensive quantities S and V.

The Von Neumann-Shannon entropy notion is an *equilibrium* concept and therefore, by definition, should not change.

No, it is not (look it up on Wiki, for instance). But you can use the entropy as an extensive variable to parametrize the "equilibrium states" you want to consider (to slice up the state space !). Nevertheless, for just ANY state ("non-equilibrium" - note that equilibrium or not depends on what you consider as macrostates: if you consider every microstate individually, then you NEVER reach equilibrium of course), and ANY way of slicing up your state space, you can calculate an entropy.

Obviously, we can increase/decrease entropy even for reversible processes by switching on force fields which enhance/reduce the total number of degrees of freedom of the system *acted* upon (this is in a sense what happens when you release the gas from a smaller box into a larger one).

You've got it :-) That's the effective use of the second law! And this comes about because of the different sizes of phase space that are affected.

The second law is something heuristic which we observe even in off equilibrium situations (again except for black holes), so entropy there cannot be Von Neumann - Shannon entropy, but a ``dynamical entropy´´ whose definition you can realize by adaptively counting the effective number of degrees of freedom (on the other hand any dynamical notion of entropy should also undergo a ``thermalization´´ process even if the number of degrees stays fixed).

But, that's the same entropy, no ? And that's what we do when we write down the second law, no ?

(a) S denotes a dynamical entropy which coincides with the Shannon Von Neumann equilibrium notion (b) the first law holds with respect to S.
Since in your Newtonian universe, the total energy is conserved, the first law says that
T dS = p dV
The latter expression should be SMALLER than zero since the gravitational force is going to make matter clump together. I do not think that the inclusion of radiation degrees of freedom due to chemical binding which occurs will change the outcome of this conclusion but there is no a priori reason why you should do this (you might as well assume that all particles are neutral).

I don't agree with your use of T dS = p dV. This only describes the (non-existing) equilibrium situation of my universe in an S/V diagram.

cheers,
Patrick.
 
  • #40
The posts scattered around in several threads relating to the arrow of time have all been put here (in chronological order).
 
  • #41
vanesch said:
Nah, the first law is just conservation of energy. It is *always* valid in a conservative system.
I don't think that this equation can, in any way, be applied to our situation, as it is not in equilibrium, and certainly not defined by just 2 extensive quantities S and V.
.
Hem, is that not a contradiction in one and the same message :smile: (the first law is not just conservation of energy) Moreover, you might restrict yourself to the pure mechanical situation where no radiation and particle creation/annihilation is involved (this is perfectly allowed in Newtonian mechanics).
Concerning the equilibrium, I will express myself more accurately here: entropy is constructed by making a phase space average, which is for ergodic transformations the same as the time average over an infinite time period (independently of the initial conditions you start out from). Shannon entropy is NOT a time-dependent concept; it is constructed by making exactly this average (still using the dynamics though), and a one-line calculation confirms this. What I call adaptive counting is not entropy in the Shannon sense; it is a handyman's approach to describe (by hand) what happens when we enlarge our interest to larger systems by coupling it with another one. It is this ``by hand´´ that is not described by your reversible dynamics (it is the same issue as your FAPP reduction rule in QM, in some sense). The entropy in your line of thinking would make discrete jumps, while an appropriate notion of dynamical entropy would undergo a ``thermalization´´ process, as I mentioned before. And sure: one complete microscopic description can reach equilibrium. You have to be very careful here: equilibrium is a TIME average; it is entirely meaningless to speak about temperature at one moment in time in one particular place of the box. In statistical mechanics this time average is over the entire real line, while in the dynamical situation of thermodynamics (which is an empirical science) the latter is over some small time interval required for thermalisation. Therefore you have two options: either you kick Shannon to hell and develop some better notion of entropy (which is desirable), or you kick unitarity or Hamiltonian dynamics out of the window (which might be a bit too wild).

But you may be right in the practical sense that the universe did not have time to thermalise yet during the matter clumping, and that you need to subdivide it into different areas with different macroscopic parameters so that you can still save dS_(total) >= 0. But then you should propose something better.

Cheers,

Careful
 
  • #42
Careful said:
Hem, is that not a contradiction in one and the same message :smile:
Ah, you call T dS = p dV the first law of thermodynamics ?
To me, it is energy conservation, which, in the very specific case of a system in equilibrium described by two extensive variables S and V, reduces to the above expression. Matter of definition I guess. I wanted to state that the equation T dS = p dV is NOT applicable to the ENTIRE elastic ball universe because it is NOT in equilibrium during the time we are considering the application of the second law which is "shortly after the initial condition" (on the time scale of recurrency). But energy conservation IS, of course.
Moreover, you might restrict yourself to the pure mechanical situation where no radiation and particle creation/annihilation is involved (this is perfectly allowed in Newtonian mechanics).
Ok, but then you will also not see any gravitational contraction !
Concerning the equilibrium, I will express myself more accurately here: entropy is constructed by making a phase space average, which is for ergodic transformations the same as the time average over an infinite time period (independently of the initial conditions you start out from).
:approve: But you can even go further. You can slice up your phase space in smaller phase spaces of small chunks of the system (say, a container of gas), and apply ergodicity already here. So you can consider that each of these smaller chunks of the system have *their* phase space point distributed according to the time average of one such system in equilibrium.
Shannon entropy is NOT a time dependent concept, it is constructed by making exactly this average (still using the dynamics though), and a one line calculation confirms this.
Ok, I may be wrong here, but Shannon entropy I only know strictly in information theory http://en.wikipedia.org/wiki/Shannon_entropy
So to me it describes *your state of knowledge* of the microstate of the system (namely, the amount of information you would WIN over what you know already when one would tell you the exact microstate of the system).
As such, from this point of view, the second law only tells you that at best, you can know what you know already, or you might loose knowledge, but you'll never GAIN knowledge by having your system evolve in time.
You also see that it depends on "how you described your system" (what correlation functions you consider relevant and of which you have hope to retain the knowledge through dynamical evolution). In the microcanonical ensemble for instance, you assume that you know the energy, period.
I don't see why this cannot be "instantaneous".
What I call adaptive counting is not entropy in the Shannon sense; it is a handyman's approach to describe (by hand) what happens when we enlarge our interest to larger systems by coupling them with another one. It is this "by hand" step that is not described by your reversible dynamics (it is the same issue as your FAPP reduction rule in QM, in some sense). The entropy in your line of thinking would make discrete jumps
I don't see what's so non-Shannon about it. I just describe the knowledge I have about the system's microstate as compared to knowing entirely the microstate. It does not necessarily have to "jump", because there can be smooth weighting functions instead of "hard slices".
equilibrium is a TIME average; it is entirely meaningless to speak about temperature at one moment in time in one particular place of the box. In statistical mechanics, this time average is over the entire real line, while in the dynamical situation of thermodynamics (which is an empirical science) the latter is over some small time interval required for thermalisation.
Yes, but it is only over a small amount of time that the second law has any practical meaning. For me, the second law is entirely FAPP, a function of what you know about the system and what interests you in it.
Therefore you have two options: either you kick Shannon to hell and develop some better notion of entropy (which is desirable), or you kick unitarity or Hamiltonian dynamics out of the window (which might be a bit too wild).
But you may be right in the practical sense that the universe did not have time to thermalise yet during the matter clumping and that you need to subdivide it into different areas with different macroscopic parameters so that you can still save dS_(total) >= 0. But then you propose something better.
Cheers,
Careful
I think that dS(total) doesn't make much sense if you KNOW the initial state of the universe. I think that dS/dt > 0 only has a FAPP meaning, during the first part of time evolution after that initial state, for a subsystem, and that what precisely you understand by S *IS* Shannon entropy, namely your lack of information about the microstate (which I think is perfectly possible to define instantaneously!).
You can of course add together all entropies of all subsystems in your universe and call that the entropy of the entire universe, but that then simply means your lack of knowledge of the *precise* initial state of the universe; the only thing that you know about that initial state is that it was special concerning low-order correlation functions (which are usually what you HAVE as information about a system), and that information gets "lost" during the first part of its dynamical evolution. You will gain it back at the end of a cycle, when you are reaching a recurrence time. But that's far, far, far in the future.
In the mean time, you'll have a practical law which says dS > 0, and then a long period of equilibrium, where dS = 0 (you won't be able to do any experiments during that period - in fact you will be dead).
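This picture of a special initial condition, a rise of coarse-grained entropy, and an exact return at the recurrence time can be played out with a toy reversible dynamics (my own illustration: the Arnold cat map on a finite lattice stands in for the reversible microdynamics; the lattice size and cell size are arbitrary choices):

```python
import math

N = 50        # lattice size; the cat map is an exact bijection on Z_N x Z_N
BLOCK = 10    # coarse-graining cell size -> a 5 x 5 grid of coarse cells

def cat(point):
    """Arnold cat map: a strictly reversible (invertible) toy dynamics."""
    x, y = point
    return ((2 * x + y) % N, (x + y) % N)

def coarse_entropy(points):
    """Shannon entropy (bits) of the coarse-cell occupation numbers."""
    counts = {}
    for x, y in points:
        cell = (x // BLOCK, y // BLOCK)
        counts[cell] = counts.get(cell, 0) + 1
    n = len(points)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Special initial condition: all points inside ONE coarse cell -> S = 0.
pts = [(x, y) for x in range(BLOCK) for y in range(BLOCK)]

traj = [coarse_entropy(pts)]
state = pts
for t in range(1, 2000):
    state = [cat(p) for p in state]
    traj.append(coarse_entropy(state))
    if state == pts:           # exact Poincare recurrence of the whole set
        break

print(f"S(0) = {traj[0]:.2f} bits, max S = {max(traj):.2f} bits, "
      f"recurrence after {len(traj) - 1} steps, S(back) = {traj[-1]:.2f}")
```

The coarse-grained entropy starts at its special low value, rises under perfectly reversible microdynamics, and returns exactly to it at the recurrence time: the empirical observation of dS > 0 in the early part of the cycle does not require any irreversible microprocess.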
 
  • #43
vanesch said:
I will comment on the details later, but if you go back to your FAPP arguments (which make me want to say PAF to you :smile: ) then I agree, but then it is also impossible to give this law a fundamental meaning (which people nowadays seem to do)

Cheers,

careful
 
Last edited:
  • #44
Careful said:
but then it is also impossible to give this law a fundamental meaning (which people nowadays seem to do)

I didn't realize that people wanted to make this a fundamental law - it is an almost "tautological" law! Note that there may be OTHER causes of irreversibility which DO make this law more fundamental. In the whole discussion, I wanted to point out that there is no fundamental clash between reversible microdynamics and "apparent" irreversibility described by dS > 0, in that this can also occur in the situation I proposed (a Newtonian universe with a special initial condition). So the *empirical* observation of dS > 0 DOES NOT IMPLY NECESSARILY an irreversible microprocess. *that's* the point I wanted to make.

For instance, it is not because one person has seen once, in a million years, say, one tiny violation of the second law which hasn't been repeated since, that the second law would be "falsified", which would be the case if it had serious fundamental status. (Of course, in practice, people would doubt the mental ability of the poor observer :-)
 
  • #45
vanesch said:
I didn't realize that people wanted to make this a fundamental law - it is an almost "tautological" law! Note that there may be OTHER causes of irreversibility which DO make this law more fundamental.
Indeed, he would be crucified for making that observation :smile: If you mean it in this practical sense then I agree with you as was clear from my very first posting on this thread. I will come back to the details of the previous one later on (it is good to elaborate on these issues, since as you might have noticed I am looking for some kind of objective dynamical notion of entropy again) but have no time for now.

Cheers,

Careful
 
  • #46
Careful said:
I am looking for some kind of objective dynamical notion of entropy again
Hi, I read your message now, and what I have in mind is rather similar to what you want to say there, but I would like to have it objective and quasi-local (which I shall explain now - the quasi-local aspect is non-standard of course). The crux of what you say is that you have to adapt your notion of entropy when you notice that (on some timescale) the bunch of particles you are studying has access to a larger number of degrees of freedom.

Realistic averaging time scales are a function of the temperature and are of the order hbar/k_B T = 10^(-11)/T seconds. However, in the Unruh effect the temperature is T = hbar a/(2 pi k_B c), implying a timescale of 2 pi c/a = 10^9/a seconds! Assuming that a rocket accelerates at 10 m/s^2, this gives around 10^8 seconds, which is of the order of a few years (the same goes for the Hawking effect). Of course, for a lab this is not an issue.

So, I want to incorporate the idea that entropy counts the number of degrees of freedom the system can explore on a reasonable time scale (this is not the Shannon definition). You might make a quasi-local notion of this by subdividing space (not phase space) into tiny boxes of length L (of the order of the diameter of the particles) and introducing a momentum cutoff M = n L, where n is a natural number running such that M stays constant; this introduces a subdivision of phase space. Fix a timescale T and initial conditions for the system of particles under study (you might even assume you know them exactly): follow the particles for time T and compute the logarithm of the volume in phase space the particles went through. Between time T and 2T you can do the same, and so on... I should still refine this (for example, when the dimensions of the spatial volume the particles can occupy get substantially larger, you might want to increase the time scale) and you might even take an average over realistic initial conditions.
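The arithmetic in this post can be checked numerically (a sketch with CODATA constants; the rocket acceleration of 10 m/s^2 is the value assumed above):

```python
import math

# CODATA constants
hbar = 1.054571817e-34   # J s
k_B  = 1.380649e-23      # J / K
c    = 2.99792458e8      # m / s

# Thermal averaging timescale hbar / (k_B T): the "10^-11 / T seconds" estimate.
print(f"hbar / k_B = {hbar / k_B:.2e} s K")   # ~ 7.6e-12, i.e. of order 1e-11

def unruh_temperature(a):
    """Unruh temperature T = hbar a / (2 pi k_B c) for proper acceleration a."""
    return hbar * a / (2 * math.pi * k_B * c)

a = 10.0                  # rocket at roughly 1 g, as assumed above
T = unruh_temperature(a)
tau = hbar / (k_B * T)    # algebraically this equals 2 pi c / a

print(f"T_Unruh = {T:.2e} K")
print(f"timescale = {tau:.2e} s = {tau / 3.156e7:.1f} years")
```

This gives tau = 2 pi c / a of about 1.9e8 seconds, roughly six years, confirming the order-of-magnitude estimate above (and an Unruh temperature of order 1e-20 K, far too cold to matter in a lab).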
Anyway, you can spell out your comments, I could make this more formal if you want to.

But this goes well beyond the standard textbook Shannon notion, in the sense that you need a dynamical notion of the available degrees of freedom, associated with a (preferably dynamical) timescale.

I know this does not coincide (in a straightforward way) with the idea that the system moves from smaller to bigger boxes (which you choose according to your notion of macroscopically distinguishable) in the ENTIRE phase space which you have fixed from the beginning (which of course you refer to and I am aware of). But I think it might capture it rather well ... it is based upon the intuition that motion becomes more chaotic when equilibrium is to be reached.

Cheers,

Careful
 
Last edited:
  • #47
Careful said:
I am looking for some kind of objective dynamical notion of entropy again

Entropy? Let's for now just say that high Entropy means less structure,
less organization, however you want to define this:


Attractive Forces decrease Entropy.

-Gravity organizes matter into stars and galaxies.
-The Strong Force brings us nucleo-synthesis.
-The EM field gives us atoms, molecules, solid matter.

They all turn chaotic matter into organized matter.


(Pseudo) Repulsive Forces increase Entropy

-Heat/Kinetic energy increases entropy. (Boltzmann, 2nd law)
-Pauli's exclusion principle also acts as a pseudo repulsive force.

Both together save us from becoming black holes in no time.


All Real forces are Irreversible in time

Gravity would have to be repulsive in order to organize matter into stars and
galaxies backward in time. For the time-reversal of EM fields, things
become even weirder: equal charges would have to attract each other
while opposite charges would need to repel each other.

Only the pseudo repulsive forces (Heat, Pauli) seem to be symmetric
in time.



Again on Entropy:

Use Shannon? You'll get in this discussion that information is never lost
at all, not even if stuff is poured into a black hole, (Hawking...)

Use Boltzmann? A 150-year-old 2nd law intended for heat/kinetic energy.
Don't extrapolate poor, old laws outside their intended domain...


Regards, Hans
 
  • #48
Hans de Vries said:
Entropy? Let's for now just say that high Entropy means less structure, less organization
That's funny: you gave a description of what force is supposed to do what with entropy (which I already knew) without actually giving one particular definition :smile: What I am trying to address here is the following: when we observe a system S which we want to study, entropy is the logarithm of the number of degrees of freedom which we can somehow estimate (by hand) at that moment in time through observation (for example, the particles are in a bounded region of space and there is a momentum cutoff). S is usually open and can conquer more and more (or fewer) "degrees of freedom" per time interval as it evolves.

This picture is a quasi-local one: it does not start from the a priori knowledge that S is part of a closed system, which can give rise to some a priori partition into macroscopically distinguishable configurations (as is usually done). It defines entropy dynamically, by counting the "degrees of freedom" the system occupies in some small time interval. This has the advantage that the time derivative of the entropy can be calculated instantaneously, while in the picture of Vanesch (with the c) I should wait until I know the "final" phase space and the associated coarse graining in order to do this. I wondered whether someone knows something about this, or has some comments.
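A minimal sketch of this "count the cells visited per time window" entropy (my own illustration; the Chirikov standard map, the cell number and the window length are arbitrary stand-ins for the dynamics and the timescale discussed above):

```python
import math

K = 2.0        # standard-map kick strength (fairly chaotic regime)
CELLS = 50     # cells per axis -> CELLS^2 phase-space cells in total
WINDOW = 200   # steps per averaging window (the "small time interval")

def step(x, p):
    """One step of the Chirikov standard map on the torus [0, 2 pi)^2."""
    p = (p + K * math.sin(x)) % (2 * math.pi)
    x = (x + p) % (2 * math.pi)
    return x, p

def window_entropy(x, p, steps):
    """log of the number of distinct phase-space cells the trajectory
    visits during one window; returns the entropy and the end state."""
    visited = set()
    for _ in range(steps):
        x, p = step(x, p)
        visited.add((int(x * CELLS / (2 * math.pi)),
                     int(p * CELLS / (2 * math.pi))))
    return math.log(len(visited)), x, p

x, p = 0.5, 0.1     # arbitrary initial condition
entropies = []
for _ in range(5):  # one entropy value per window, available "instantaneously"
    S, x, p = window_entropy(x, p, WINDOW)
    entropies.append(S)
print(entropies)
```

Each window produces an entropy value without any reference to a final equilibrium phase space or an a priori coarse graining: only the cells the trajectory actually occupied during that window are counted, which is the quasi-local aspect of the proposal.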

Cheers,

Careful
 
Last edited:
  • #49
Careful said:
when we observe a system S which we want to study, entropy is the logarithm of its degrees of freedom which we can somehow estimate (by hand) at that moment in time through observation.

Maybe you want to involve symmetries rather than degrees of freedom.
(Think for instance of atomic grids.) Symmetry provides a way to describe
a system with fewer parameters: less information, less entropy.

The whole point is indeed to correctly quantify this (the number of bits needed).
Not so easy. You'll probably keep on finding tricks to reduce the number
of bits just a little bit more, just like what we see in video compression.

I somehow doubt there's a single, simple and elegant way to do this.

Regards, Hans
 
Last edited:
  • #50
Hans de Vries said:
Use Shannon? You'll get in this discussion that information is never lost
at all, not even if stuff is poured into a black hole, (Hawking...)

I've always understood this as a weird way of saying that this is not going to be possible with reversible (unitary) dynamics...
 