
Time and the 2nd law of thermodynamics

  1. Feb 20, 2008 #1



    Time, or rather the arrow of time, is defined within the 2nd law of thermodynamics: the future is in a higher entropy state than the present, which in turn is in a higher entropy state than the past.

    But when the universe, our isolated system, reaches the maximum entropy state - heat death - what happens to the arrow of time? If the entropy is at its maximum, time doesn't flow in any direction anymore, because we've defined the future as "the place where the entropy is higher than now." Time simply stops.
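    To make this concrete, here is a toy sketch in Python (my own illustration, using the standard Ehrenfest urn model, not a model of the universe): the entropy S = ln C(N, n_left) climbs steadily while the system is far from equilibrium, but once it reaches the maximum it merely fluctuates, and no direction of time is singled out any more.

```python
# Ehrenfest urn model: N balls in two urns; each step a randomly picked ball
# switches urns. Entropy of the macrostate rises to its maximum, then only
# fluctuates - there is no longer an "uphill" direction for time to point.
import math, random

random.seed(1)
N = 100
left = N                      # start far from equilibrium: all balls in the left urn

def S(n_left):                # Boltzmann entropy of the macrostate, S = ln C(N, n_left)
    return math.log(math.comb(N, n_left))

for step in range(1, 2001):
    if random.randrange(N) < left:   # the picked ball was in the left urn
        left -= 1
    else:
        left += 1
    if step % 400 == 0:
        print(f"step={step:4d}  n_left={left:3d}  S={S(left):6.2f}  (S_max={S(N // 2):.2f})")
```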

    What is your view on this subject? This problem clearly arises because we've defined time through the 2nd law of thermodynamics, so is our definition faulty?
  3. Feb 20, 2008 #2
    Don't believe everything they say!

    Don't believe all the hypotheses that scientists present! Of course I have not read all the
    theories and thoughts about thermodynamics and time - but I never thought the definition of time you mention was established; it is just a kind of thought experiment. But of course they could define the flow of time that way - it is hard to prove them wrong by experiment.

    I don't know that definition exactly. But if they mean entropy derived from the classical definition of temperature in terms of the kinetic energy of molecules, you cannot help suspecting
    that they must have forgotten something, like internal energy fluctuations within atoms and other things we don't know much about.

    Not everything is just molecular velocities, while time must be expected to be involved in everything, whatever it is.
  4. Feb 20, 2008 #3


    Staff Emeritus
    Science Advisor
    Education Advisor

    You might want to read this article from Physics Today from a while back:


  5. Feb 20, 2008 #4



    There are a lot of different formulations of the 2LT (2nd law of thermodynamics). I think the arrow of time is derived from the macroscopic, phenomenological 2LT:
    "In an isolated system, the entropy has a tendency to increase."
    The "has a tendency" part accounts for fluctuations. I think this formulation of the 2LT is the most precise (compared to, for instance, "It is impossible to convert heat into work if that is the only result of the process").
    So given this formulation of the 2LT, we can easily define the arrow of time. However, if you use the 2LT derived from statistical thermodynamics, the formulation might not be completely
    rigorous, because fluctuations are far more common on a molecular scale than on a macroscopic scale - see the sketch below.
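    To see why the statistical formulation is safe macroscopically, here is a minimal Python sketch (my own illustration, assuming a toy system of N two-state molecules): the relative size of the fluctuations shrinks like 1/sqrt(N), so for macroscopic N the "tendency to increase" is effectively a certainty.

```python
# Toy model: the fraction of n molecules found in the left half of a box,
# sampled over many snapshots. The spread shrinks like 1/sqrt(N), so
# molecular-scale systems fluctuate visibly while macroscopic ones do not.
import numpy as np

rng = np.random.default_rng(0)

for n in (10, 1_000, 1_000_000):
    fractions = rng.binomial(n, 0.5, size=10_000) / n
    print(f"N={n:>9,}: relative fluctuation = {fractions.std():.2e}  "
          f"(1/sqrt(N) = {n ** -0.5:.2e})")
```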

    It seems like the author is talking about 2LT as derived from statistical thermodynamics - read the above.
  6. Feb 21, 2008 #5



    I think these are interesting questions.

    My view is, first of all, that the notion of entropy is somewhat ambiguous. Entropy is a measure of missing information. But how do we choose such a measure? It seems that the choice is not unique: there are many different measures of supposedly the same thing.
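    As a small sketch of this ambiguity (my own illustration in Python, not something from the literature): Shannon entropy and min-entropy are both legitimate measures of missing information, yet they can rank the same pair of distributions in opposite order.

```python
# Two information measures disagreeing about which state is "more uncertain":
# Shannon entropy ranks p above q, while min-entropy ranks q above p.
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p)))   # missing information in bits

def min_entropy(p):
    return float(-np.log2(np.max(p)))       # Renyi entropy of order infinity

p = [0.50, 0.25, 0.25]
q = [0.45, 0.45, 0.10]

print(f"Shannon:     H(p)={shannon(p):.3f}  H(q)={shannon(q):.3f}")
print(f"Min-entropy: H(p)={min_entropy(p):.3f}  H(q)={min_entropy(q):.3f}")
```

    So even before asking about heat death, two perfectly reasonable measures already disagree about which state carries less information.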

    When you think about this task, we are trying to estimate the chance that we are right, given what we don't know. But here is where it starts smelling: this isn't a trivial task, because it seems to suggest that you need to know the structure of what you don't know in order to do this - and does that make sense?

    The way I think of this, I think of entropy as a kind of measure of the a priori probability that our information is correct. My point of view is also that of relational information, and in that perspective entropy and probability are relative.

    So entropy is a rating system for states. I think that this measure is bound to be defined by the observer, in all its incompleteness. This also suggests to me that "heat death" (maximum entropy) is a relative thing, because it is not obvious that everybody will agree on the maximum! The result of this should be that heat death doesn't occur; rather, we may see ongoing dynamics, oscillations or similar.

    I think the logic of this naturally takes us to discuss changes: rather than only looking for a measure rating the probability of information states, we should also look for a measure rating transformations of the same states. Here I think interpreting the action as a natural extension of entropy suggests itself, and the action is similarly relative.

    The idea of heat death suggests that there is a universal, objective, physical realisation of the measure of disorder. The usual semiclassical use of entropy, taken right from classical stat mech with minimal changes, isn't convincing IMO. In classical stat mech there is an implicit background reference, or universal equiprobability hypothesis, behind everything, and this is the annoying arbitrariness that I think must be controlled.

    So in summary, my view is that the key to understanding and resolving this issue is to focus on the construction of the magic measure here: What is a proper definition of entropy? With special attention to whether the construction itself contains speculations. This means we are constructing a measure of disorder based on underlying information that is NOT similarly questioned.

    A first idea here is to iterate the same idea, and now try to construct a measure of the missing information OF the entropy measure itself. But this suggests an expansion - and what about convergence? Does this expansion even have a physical meaning in evolution? If so, could its progression relate to time? Could nature itself constrain this expansion by some self-regulation?

    I don't have the answers but I think this is exciting stuff.

    Last edited: Feb 21, 2008
  7. Feb 21, 2008 #6



    I agree with this preference. In almost the same spirit, my favourite is this rewriting:

    "In the absence of unexpected feedback, our state of information evolves as expected"

    In the absence of unexpected feedback ~ "isolated" system

    My reason for the rewriting is: how do you KNOW a system is isolated? Clearly you don't. You can guess at best, which is often, of course, good enough. But still, it's an important difference in principle.

    "Evolving as expected" implies a probability measure over the expected changes. The default expectation is that, given that we have certain information, it is more likely that we will find out we were wrong in favour of something that was a priori more likely in the first place.

    This should mean that instead of assuming that only the most probable thing happens, everything is expected to happen, but only in proportion to its rated probability. And of course our entropy is related to this probability - but in disguise.

    In this formulation, the second law, saying that we expect things to happen in proportion to how we rate their probability, seems self-evident - and it is.

    I think the KEY, or the magic behind this, is the choice of the measure, and how the probability is calculated. I.e., HOW are the expectations computed? Once they are computed, it's self-evident that our expectations follow them. And one may thus say that if we have complete information about the possibilities (an isolated system), then these expectations should be met.

    I can't resist noting that actions and entropies can be interpreted on almost the same footing in this slightly abstracted view. The second law seems to somehow connect actions with entropy gradients for simple types of dynamics (diffusion) - very loosely speaking, with details missing.

  8. Feb 21, 2008 #7



    It's also interesting to note how similar this is to the notion of a geodesic in geometry!

    Not to link to other forums, but here is a reflection I made in another forum; it's my own post, so I'll just paste it in here as it relates to this.
    ---------- In line with the goal of this thread - to develop more conceptual intuition about QM and GR and their similarities rather than their differences, as a guide to learning more about this - here is another angle of reflection along the same lines.

    Anyone with further reflections on my reflections, from a different viewpoint, is appreciated.

    In short, in general relativity the dynamics is described at two levels.

    (I) The dynamics in - or relative to - spacetime.

    This is usually expressed by saying that a particle subject to no non-gravitational forces follows a geodesic in spacetime.

    (II) The dynamics of spacetime itself.

    This is usually expressed by Einstein's field equations, which are a relation between the geometry of spacetime and the matter and energy distribution in the stress-energy tensor.

    If the mass of the test particle is small enough not to distort spacetime, the dynamics is effectively that of a particle moving in a fixed but curved background. This background is determined by the energy and mass distribution of the environment.

    For example, this is the case when a stellar dust particle circles the Earth in space.

    But the nontrivial things happen when the particle is massive enough to significantly distort its own environment. That means that for each infinitesimal change of position of this particle, there is an infinitesimal change of the entire geometry of spacetime!

    This means that, as the system evolves, the "geodesics" keep changing too.
    This happens, for example, in many-body problems involving several extremely massive neutron stars, where all of the participants make massive contributions to curving spacetime.

    If we let that be a simple picture of classical GR, how can we now rethink, or reinterpret, those principles in terms of something that is easier to merge with the informational nature of QM?

    I propose the following conceptual analogy as an alternative to "visualisations".

    In short, dynamics is described at two levels.

    (I') The dynamics in - or relative to - prior expectations.
    This can be trivially expressed so that a particle subject to no unexpected feedback evolves as expected.

    Put this way (I') appears almost trivial.

    (II') The dynamics of the expectations themselves.
    The dynamics of expectations, can be decomposed into two parts.

    a) expected dynamics (as induced by constraints)
    b) unexpected feedback

    Obviously the unexpected dynamics is inherently unpredictable. So our best bet is to base decisions on the expected dynamics, but leave the door open for unexpected events, because given insight into our own incompleteness, the unexpected is still somehow expected.

    For me at least, this gives a fairly clear conceptual vision, suggesting some deeply interesting parallels between QM's information perspective and GR's relative views.

    Some key conceptual issues I personally see are:

    - The association of the movement along a geodesic with the "expected change". And if we, from a purely informational view, can induce such an "expected change", then this defines a geometry of our information.

    - The identification of dynamical geometry with evolving expectations: as changes occur, our set of knowledge changes, and this updates our expectations.


    - In descriptions of the theory of general relativity there is a bird's-eye view present, which can IMO also be seen as a background expectation that isn't induced from a real observer - it's somehow an external observer, or god. This is the sense in which GR is deterministic, and I also think that such a superobserver qualifies as a kind of background.

    - In a sense one might be tempted to say that unexpected feedback is related to "open systems". But IMO, closed vs. open is not a valid initial condition, because how would you know in advance whether the system is closed or open? It must clearly be an idealisation.

    So is this a "problem"? It seems so, but I think this problem may also be part of the solution to the problem of time. At first it seems we are just drifting and drifting, and it's not possible to come to a certain conclusion - frustrating! But maybe this is nothing but the drive for time?

    I think this is extremely interesting.

    - What can one say about structure formation in such an open, crazy world? Is it possible to make any generic predictions about probable structures, given some minimal assumptions? It may seem that anything is possible, but that isn't necessarily a problem at all, as long as everything isn't equally "probable". To make gigantic jumps from this chaos to everyday life probably isn't possible, because it would be a logical jump. Perhaps the first and simplest thing to elaborate is the microstructure of reality. What are the simplest, nontrivial structures that we would expect in such a crazy world?

    We talk about information, but where is this information encoded? How does even the concept of probability make sense at this point? At the same time as it gets crazy and unpredictable, it seems to get _simpler_: because the complex things don't exist, the "chaos" seems self-restraining.

    I.e., how does this very theory relate to a hypothetical, simplistic prehistoric observer who might be nothing but a flip-flop device? If you are a flip-flop device, how can you improve and evolve?

    Einstein fought to establish the relation between spacetime geometry and the coupling to the matter and energy distribution. His answer was the Einstein field equations.

    Now we are in a similar situation: to establish the relation between the subjective expectations (defining expected "generalised geodesics") and the intrinsic information of an observer.

    I.e., how does an observer with incomplete information compute the optimum strategy for his actions? This contains many subquestions indeed, and one question is to what extent Einstein's field equations can be exploited to solve this directly, or whether we simply have to invent a new fundamental relation that would be the generalisation of Einstein's field equations, taking the step from the mechanistic spacetime view to a generic information space.

  9. Mar 27, 2008 #8
    Entropy is just an information measure: it measures the amount of ignorance about the microscopic state of the system, given some macroscopic state defined by means of the values of some functions of
    the microscopic states.

    If, for example, you have the average energy and the average magnetization as the given macroscopic parameter values, then there are (maybe) a lot of microscopic states that match those macroscopic parameter values.

    And from the entropy maximization principle (the same one that maximizes ignorance in the Shannon formulation), you get, in thermodynamic equilibrium, the Boltzmann probability distribution for the
    microscopic states.
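    A minimal numerical sketch of that principle (my own illustration; the toy energy levels and the SciPy-based optimization are assumptions for the example, not anything from this thread): maximizing the Shannon entropy subject only to normalization and a fixed average energy recovers the Boltzmann form p_i proportional to exp(-beta * E_i).

```python
# Maximize Shannon entropy subject to sum(p) = 1 and <E> fixed; the result
# should match the analytic Boltzmann distribution exp(-beta*E)/Z.
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0, 3.0])    # toy energy levels (assumed units)
E_mean = 1.0                           # the only macroscopic datum we "know"

def neg_entropy(p):                    # minimize -H = sum p ln p
    return float(np.sum(p * np.log(p)))

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},        # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, E) - E_mean},  # fixed average energy
)
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(1e-9, 1.0)] * 4, constraints=constraints)
p = res.x

beta = -np.polyfit(E, np.log(p), 1)[0]     # fit ln p = -beta*E + const
Z = np.sum(np.exp(-beta * E))
print("max-entropy p :", np.round(p, 4))
print("exp(-bE)/Z    :", np.round(np.exp(-beta * E) / Z, 4), f" beta ~= {beta:.3f}")
```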

    You can reformulate the Shannon point of view for systems out of thermodynamic equilibrium.

    Then you can see that if you have a system that is not isolated but in contact with a heat reservoir (a source of noise), then even though you may know the microscopic state exactly, your ignorance about the microscopic state of the system will rise with time (even if you know the "real" dynamical equations
    and not some approximation of them, such as Newton's equations or Einstein's field equations), because non-isolation leads to perturbations of the microscopic dynamics that tend to destroy the initial information that you have about the system. That is, the system entropy, defined in a time window, will rise.

    An isolated system has "constant entropy" if it is calculated over the infinite time range (obviously, because that entropy measurement is independent of time, since you averaged over time). But if you want to measure a time-window entropy (a local-in-time measurement of entropy), then it depends on the
    system in consideration.

    In the case of the universe you must take sharper considerations into account, if you think of the universe
    as an isolated system.

    Partial information about the microscopic state of the system (the universe), given because you only know some macroscopic parameter values averaged in a window of time, means that you know the microscopic state of the system with limited accuracy; that is, you have some error in the microscopic state of the system.

    The chaotic nature of the universe implies that this error grows exponentially with time, so your ignorance about the future (or past) microscopic state of the system grows with time evolution (forward or backward), so your local-time entropy also grows.
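    A hedged sketch of that exponential error growth (my own illustration, using the fully chaotic logistic map as a stand-in for a chaotic system, not for the actual universe): a 1e-12 error in the initial microstate grows roughly like exp(t ln 2), since ln 2 is this map's Lyapunov exponent.

```python
# Two nearby microstates of the logistic map x -> 4x(1-x): their separation
# grows, on average, like exp(t * ln 2) until it saturates at order one.
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12    # two microstates differing by a tiny "measurement error"
for t in range(1, 31):
    x, y = logistic(x), logistic(y)
    if t % 5 == 0:
        print(f"t={t:2d}  actual separation={abs(x - y):.3e}  "
              f"1e-12 * exp(t*ln2)={1e-12 * math.exp(t * math.log(2)):.3e}")
```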

    best regards
    Last edited: Mar 27, 2008