
Is the universe a recycling bin? (Entropy)

  1. Jan 2, 2007 #1
    It would seem to me that the universe is made up of certain pockets of low entropy here and there that could potentially turn into a lot more low entropy, with humans, for instance, becoming more intelligent and organizing the universe. Right now we view the universe as mainly high entropy and moving further in that direction, but how does intelligent life factor in? Life seemingly gets more and more organized without any logical explanation, even though entropy tends toward a maximum.

    The Big Bang theory says that the universe is going toward high entropy from low entropy, but there are a lot of clues that there is a lot of low entropy that evolves toward even lower entropy, as on Earth with animals and humans, which could possibly overtake the mainly high entropy throughout the universe.

    Example: if humans spread throughout the universe and created more low entropy by organizing it and tapping into natural high-entropy resources, could that eventually prove the universe to have not one linear time arrow but several balanced ones, creating a more multidirectional entropic universe?

    Is the universe creating low-entropy potentials, or something like that, by growing planets?
    Last edited: Jan 3, 2007
  3. Jan 2, 2007 #2
    Even though humans appear to be at a lower state of entropy, seemingly defying the laws of thermodynamics, that is an illusion. The truth is in how you define your system. If one uses the Earth as the system, then the entropy of the Earth is increasing as a result of human waste: global warming, pollution, and heat given off by industry and cars, escaping to open space. So from a system standpoint, entropy is increasing in the Universe.
  4. Jan 2, 2007 #3
    But if galactic formations have a cyclical lifecycle, being compressed into dark matter through a singularity before re-emerging at a Schwarzschild white hole, could the overall entropy of the Universe be said to be decreasing over time?
  5. Jan 2, 2007 #4
    It is conjecture that multiverses and white holes (entrances to other universes from a singularity or black hole in this Universe) exist. Wouldn't that defy the law of conservation of energy? Our Universe would be losing energy to another universe through a white hole without replacement mass. Wouldn't that result in increased entropy?
  6. Jan 2, 2007 #5
    Why does a white hole imply other universes?

    If matter that enters a black hole re-emerges at a white hole, that wouldn't violate energy conservation.

    You'd just have to accept 2nd Law Neutrality on the Universal Scale.

    And the Universe would become a big perpetual motion machine powered by gravity and infinity.
  7. Jan 2, 2007 #6
    It could be that White Holes and Black Holes co-exist from the same singularity in this Universe but I would guess that they do not exist in the same brane. For any given singularity in this Universe the corresponding white hole may open up to a different brane or dimension. Again, this is all conjecture. I have not seen any info or papers that support this.

    In regard to the original question about entropy and the human race: as far as we know right now, the entropy of the Earth is increasing as a result of heat lost to the atmosphere from man's waste. Even the sun will run out of energy eventually, and the human race will end if it can't advance enough to travel and colonize the galaxies. So all indicators point to increasing entropy unless something intervenes.

    If one can determine that we live in a cyclic Universe that expands and contracts indefinitely, that would change a lot of things.
  8. Jan 3, 2007 #7
    Entropy is proportional to the amount of energy that is not stored in mediums consisting of massive particles. It happens when matter is converted into radiation. Granted, conversion of radiation into chemical energy is possible via photosynthesis, and leads to more inertia (mass). Several processes also lead to the formation of fuels. Lowering of entropy corresponds with net gauge boson (e.g. photon) absorption, which is the case for the trees in growing Canadian forests. On the other hand, increasing of entropy occurs when embedded energy is being used up and released to a radiative rather than a mass form. It's rather simple really. The net work done to a vacuum is the entropy (simply the radiation emitted into clear space empty of massive particles), and it corresponds to a net loss in inertia (mass).

    Net heat flows go from a hotter region to a colder region. A very cold region akin to a Bose-Einstein condensate can occur at regions of intense pressure (e.g. close to inactive black holes) due to gravitational trapping of light. However, black holes are thought to have really large entropy because the heat is considered unavailable, though large black holes tend to be colder than even the cosmic background radiation! They can eat up heat by the smidgeons.


    All substance-powered engines can exist when there is maintenance of available energy through their food/fuel. In the end, however, they decrease entropy. The power ejected radiatively by star systems throughout the universe would have to be removed from the vacuum. It is easier to imagine this if the universe were arranged fractally, such that higher (less inertial) potentials are always around and about lower (more inertial) potentials which can toss back the radiation. However, we would have to have something like dark-energy stars billions of light years wide to catch the excess from star systems (the photons). Certainly, for this to work, the angular size of these systems would have to be very large. They would have a gravitational lens surrounding them which would tend to smear the radiation from objects significantly inside them (a la the cosmic background radiation).
  9. Jan 3, 2007 #8

    Chris Hillman

    Science Advisor

    low entropy humans?

    Hi, Flux,

    Whoa! Back up a minute.

    Humans are hardly "organizing the universe". And a low entropy human (in the sense most commonly used in physics) is probably more like a frozen corpse than a functional cosmologist...

    Lurking in this I sense the idea that "higher entropy states exhibit, in some sense, greater disorganization", which, one might think, implies that conversely, "more organized states should have lower entropy". But to try to make sense of this, you need to know what you mean by "state", "entropy of a state" and "organization of a state". And once you try to become precise, it all gets a lot more complicated--- and a lot more interesting!

    In fact, there are many possible definitions of "entropy", and not all are equivalent. This is particularly true when you start mixing up biology with physics.

    Some quite different looking definitions of "entropy" do turn out to have close relationships under various circumstances. For example, Shannon entropy [itex]H({\mathcal A}) = -\sum_{j=1}^r \, \mu(A_j) \, \log \mu(A_j)[/itex]
    can be formulated (following Kolmogorov) in terms of a "finite measurable partition" [itex]{\mathcal A}[/itex], i.e. [itex]X = \uplus_{j=1}^r \, A_j[/itex] where the [itex]A_j[/itex] are measurable subsets of a probability space [itex](X,\mu)[/itex]. Another, Boltzmann entropy, is formulated in terms of a finite partition of a set, namely as the log of the obvious multinomial coefficient, i.e. the size of the orbit under a suitable group action by the symmetric group [itex]S_r[/itex]. Yet these turn out to be closely related quantities. Indeed, as von Neumann pointed out to Shannon, by a strange historical accident, Shannon entropy originally arose in statistical physics as an approximation to Boltzmann entropy, even though most now agree that if history were logical, information theory could and should have predated twentieth century physics. Also falling into this group of close relatives is another important entropy from dynamical systems theory, the topological entropy.
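    To make the formula concrete, here is a minimal sketch (my own illustration, not from the thread; the function name and example values are invented) of computing Shannon entropy for a finite distribution:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p_j log p_j (natural log), skipping zero terms."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over r outcomes maximizes H at log(r).
uniform = [0.25] * 4
print(shannon_entropy(uniform))  # log(4), about 1.386

# A certain outcome (all probability on one point) carries zero entropy.
print(shannon_entropy([1.0]) == 0.0)  # True
```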

    But some similar looking definitions turn out to capture rather different intuitive notions; for example, compare the notion of Shannon entropy--- I swear I'll scream if anyone calls this "Shannon-Wiener entropy", or even worse, "Shannon-Weaver entropy"--- with the notion of "Kullback-Leibler divergence" (aka "cross-entropy", aka "discrimination", etc.): [itex]D({\mathcal A}, \mu | \nu) = \sum_{j=1}^r \, \mu(A_j) \, \log \left( \mu(A_j)/\nu(A_j) \right) [/itex]
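    The contrast is easy to see numerically. This sketch (mine, with invented names and numbers) computes the Kullback-Leibler divergence of one distribution from another; unlike entropy, it takes two distributions and vanishes exactly when they coincide:

```python
import math

def kl_divergence(mu, nu):
    """Kullback-Leibler divergence D(mu || nu) = sum mu_j log(mu_j / nu_j)."""
    return sum(m * math.log(m / n) for m, n in zip(mu, nu) if m > 0)

mu = [0.5, 0.5]
nu = [0.9, 0.1]
print(kl_divergence(mu, nu))  # positive: the distributions differ
print(kl_divergence(mu, mu))  # 0.0: a distribution diverges from itself by nothing
```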

    Some definitions have few if any known mathematical relations, but appear to be trying to capture somewhat related intuitive ideas. And some appear to have little relation to each other.

    (Similar remarks hold for "state" and "organization".)

    Let me try to elaborate a bit on my claim that biological notions of "complexity" might not be related in any simple way to the notions from dynamical systems theory/information theory which I mentioned above.

    There are many different definitions of entropy used in statistical mechanics, which certainly cannot define "the same quantity", if for no other reason than that they are not defined on the same domain; in addition, these quantities are often numerically different even when both are defined, hence they are distinct.
    These entropies belong to the group clustering around Shannon entropy which I very roughly described above, and they do to some extent conform to the slogan "higher entropy states exhibit, in some sense, greater disorganization". As others have already pointed out, however, this should be taken to refer to a "closed system", and the Earth is not a closed system; rather, we have an energy flux Sun -> Earth -> deep space. But the point I am trying to get at here is that the intended sense of "organization" is probably different from what you had in mind when you spoke of human activity allegedly "lowering entropy".

    Now think about this: how much information does it take to define a bacterium? A redwood tree? A human? More than a decade ago I used to argue with biologists that the then-common assumption that the complexity of an organism is simply something like "the Shannon entropy of its genome" is highly questionable. From what I've already said you can probably see that this isn't even well-defined as stated, but there are reasonable ways to fix this. The real problem is: is this Shannon entropy an appropriate measure of "biocomplexity"?

    I struggled to explain my expectation that Shannon entropies are inadequate to capture biological intuition about the kind of "complexity" which often interests, let us say, evolutionary biologists. My point then as now was that depending upon context, there are many things one might mean by "biocomplexity" or "biotic organization" and there is no reason to expect that these notions must all be measured by the same mathematical quantity. Quite the opposite--- one should expect quite different theories to emerge once one has found appropriate definitions.

    For example, perhaps without consciously realizing it, many people think of complexity as superadditive, which means simply that "the complexity of the whole is greater than the sum of the complexities of its parts". But Shannon entropy (and Boltzmann entropy) is subadditive: "the entropy of the whole is less than the sum of the entropies of its parts" (roughly speaking). This is a feature which these entropies share with classical Galois theory (the lemma we need is a triviality concerning indices of subgroups, which is sometimes attributed to none other than Henri Poincaré), and this is not a coincidence.
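    Subadditivity can be checked numerically. Here is a small sketch (my own, with invented numbers): for a joint distribution of two correlated binary variables, the joint entropy stays below the sum of the marginal entropies, with equality only under independence:

```python
import math

def H(probs):
    """Shannon entropy (natural log), skipping zero-probability terms."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Joint distribution of two correlated binary variables (rows: X, columns: Y).
joint = [[0.4, 0.1],
         [0.1, 0.4]]

h_joint = H([p for row in joint for p in row])
h_x = H([sum(row) for row in joint])         # marginal distribution of X
h_y = H([sum(col) for col in zip(*joint)])   # marginal distribution of Y

print(h_joint <= h_x + h_y)  # True: subadditivity
```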

    At the level of a single organism, I also pointed out that biologically speaking, it seems that a genome by itself does not define a typical modern organism (not even a virus), because it requires rather complicated "cellular machinery" to transcribe the DNA (or RNA) into protein. If we admit that our mathematical theory should not presume to accomplish anything unnatural, it follows that our theory should not "define" the biocomplexity of an organism in terms of the genome alone. Presumably one must also take account of the "overhead" associated with having a working instance of all that complex cellular machinery before you can even start transcribing, i.e. "living" (at the level of a cell).

    And as you have probably noticed, defining the complexity of a biosphere is probably a rather different enterprise from defining the complexity of a single organism!

    I also used to caution biologists against assuming that in a typical biosphere, under natural selection we should expect a biosphere to become more and more "complex". For one thing, this doesn't mean much if one hasn't offered a well-motivated mathematical theory with a notion of complexity which can be applied to construct mathematical models of evolving biospheres. For another, there is really no reason to expect a monotonic increase of "biotic complexity". Much earlier, the noted biologist George C. Williams expressed some similar caveats.

    BTW, Claude Shannon's Ph.D. thesis applied abstract algebra to population genetics! (In his highly original master's thesis, he had previously applied mathematical logic to found the theory of switching circuits.)

    Again, it's not nearly that simple. In fact, I have often said that I know of no subject more vexed in modern science. Even worse, with the rise of political movements masquerading as fringe science, such as "intelligent design", this already vexed area has been further burdened with unwanted (and entirely spurious) political baggage.

    A good place to begin reading might be an old and often cited essay by Freeman Dyson on the notion of "heat death" and its malign implications for thought processes rather generally defined. Some of the specifics have been overtaken by subsequent revolutions in cosmology, but this still makes excellent and thought provoking reading. Much recent work traces its roots back to this essay, or even earlier. See http://prola.aps.org/abstract/RMP/v51/i3/p447_1

    Curious readers may also consult Peter Walters, An Introduction to Ergodic Theory, Springer, 1982, or Karl Petersen, Ergodic Theory, Cambridge University Press, 1983, for the ergodic theory formulations of Shannon entropy used above and its relationship to topological entropy. Compare this with Cover and Thomas, Elements of Information Theory, Wiley, 1991. (There are many excellent books on ergodic theory and on classical information theory, but these are perhaps the best for the purpose at hand.)
    Last edited: Jan 3, 2007
  10. Jan 3, 2007 #9

    Chris Hillman

    Science Advisor

    Uh oh!

    Hi, kmarinas86,

    That's not really true. But maybe you were just speaking loosely?
  11. Jan 3, 2007 #10
    So is there a basic definition of entropy in terms of possibilities, etc? And can all these definitions find common connection with each other?

    For example, would the entropy of the event horizon of a black hole be equivalent to ALL forms of entropy inside? I think the suggestion is that black hole entropy is derived from the microstates of spacetime itself, from which everything else is made. So I would think that the BH entropy does encompass all types of entropy inside the BH.
  12. Jan 3, 2007 #11
    Yeah, I'm not well versed in statistical entropy, and I'm not clear on how much non-thermodynamic entropies have to do with the recycling of matter. That's why I jumped in and considered only the thermodynamic type of entropy without labeling it as such. However, I made no mention of the absolute temperature of the system, which would have to be specified when actually determining the entropy. Entropy has units of joules per kelvin, not joules.

    Surely, there is not a majority of processes known to be able to reverse the work being done by fusion in stars. Since radiation is produced in fusion reactions, the binding energy gained by the merging of light nuclei would certainly have to be lost for the "reverse of the fusion reaction" to occur. I'm not even sure whether massive doses of radiation would break these nuclei down (photodissociation):



    Before supernovae occur, the binding energy of nuclear matter may decrease in the process of making heavier elements from elements heavier than iron, and lighter elements from elements lighter than iron. These involve endothermic reactions (going down either side of the nuclear binding energy per nucleon curve).

    In fact, one may characterise the increase of thermodynamic entropy as the result of exothermic events at a given absolute temperature (divergence of matter and radiation), which humanity surely produces, and the decrease of thermodynamic entropy as the result of endothermic events at a given absolute temperature (convergence of matter and radiation), such as photosynthesis and the absorption of solar radiation. Of course there is a point where the endothermic capability of a planet, such as early Venus with its blanket greenhouse atmosphere, prevents the formation of life as we know it. Without heating from the sun, Venus' atmosphere would be more uniform and would be describable by more microscopic configurations; hence, it would have higher configurational entropy.


    The entropy of mixing (configurational entropy) caused by colliding nebulae may be undone by the gravitational sorting of particles of different density. But the materials would then be subjected to different temperatures and pressures. Work would be done, and about half of that work would be released as heat. An exothermic event, no doubt.

    The reason why the universe appears irreversible in nature is the lack of endothermic events comparable to the exothermic events which are so common. More heat is being released than is being captured. Any entity that picks this "net heat" up must end up increasing its amount of matter and therefore its inertia (and even its moment of inertia). When heat flows from regions of lower pressure to regions of higher pressure, that is when we have endothermic events taking place. But this cannot be a net heat flow if the region of higher pressure is hotter. With the effects discovered by Einstein, however, black holes (and perhaps, more probably, theoretical equivalents such as gravastars) may be the high-pressure, low-temperature objects needed for converting large amounts of radiation into matter. If such objects do not exist, then I am at a loss to justify the idea of a recycling universe.
    Last edited: Jan 3, 2007
  13. Jan 4, 2007 #12
    Hi, all and Chris Hillman,
    I am reading your post, and I am also up to decoherence in the book I'm reading. This will lead to superstring theory at the end. I will post more later.

    As far as black holes, they could probably show that there are many different levels of entropy throughout the universe (different black hole sizes, from tiny to huge, thus relating to a holistic entropic approach), but as far as this large universe of ours, we would have to discover how black holes play into it first to understand exactly which way its time arrow is headed, I think. If we really don't get out there (in a timely fashion), I don't see how math alone could predict its total purpose.
  14. Jan 13, 2007 #13
    It's rather difficult to talk about entropy on the scale of the universe.
    The 2nd law of thermodynamics applies only when the system of concern is closed in the thermodynamical sense.
    Does that apply to the universe as a whole?
  15. Jan 13, 2007 #14
    And as a question: suppose we have a rectangular box consisting of exactly two squares, with an imaginary line dividing the box into the two sections.
    Suppose we have only two kinds of balls, white and red.
    Now one configuration is: all red balls in the left square, all white balls in the right square.
    Another configuration is: a perfectly symmetric arrangement of white and red balls.

    What is the measure of the ordering in both cases, and how does that relate to entropy?
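    One way to make this concrete is Boltzmann-style state counting: take "how many red balls sit in the left half" as the macrostate and count the arrangements realizing it. This sketch is my own (the function name and the 10-vs-10 numbers are invented, not from the question):

```python
import math

def log_multiplicity(n_red, n_white, red_left):
    """Log of the number of ways to realize the macrostate in which
    red_left of the n_red red balls sit in the left half (whites fill
    the remaining left-half slots). Each half holds half of all balls."""
    half = (n_red + n_white) // 2
    white_left = half - red_left
    # ways to choose which reds go left, times which whites go left
    return math.log(math.comb(n_red, red_left) * math.comb(n_white, white_left))

n = 10  # 10 red and 10 white balls, halves of 10 slots each
print(log_multiplicity(n, n, n))      # fully sorted: log(1) = 0, lowest entropy
print(log_multiplicity(n, n, n // 2)) # evenly mixed: about 11.06, the maximum
```

    Note that a single, exactly specified symmetric pattern is also just one microstate; it is the coarse-grained macrostate "evenly mixed" that has the high multiplicity, and hence the high Boltzmann entropy.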
  16. Jan 13, 2007 #15
    I have to wonder if the constraint relating the entropy inside an imaginary sphere to the entropy calculated for the surface of the sphere is not related to the degree of interconnection between all things inside the sphere. If everything inside a volume is connected (through ZPE perhaps, or perhaps through quantum entanglement), then things do not have as much freedom to arrange themselves arbitrarily as they would if they were totally disconnected from each other. So it might be that the larger the volume, the more the network of interconnections grows, so that freedom is restricted further and the entropy per volume decreases with the radius of the sphere as 1/r. So the question becomes: how does entanglement reduce entropy for a given volume and/or density? Or, given a constant density, how would the number of connections grow with volume? I suppose that if we don't have to worry about how things outside the sphere affect what's inside, then entropy would not be restricted by a connection to the outside.
    Last edited: Jan 13, 2007
  17. Jan 13, 2007 #16

    Chris Hillman

    Science Advisor

    Classical thermodynamical entropy, yes. There are many kinds of entropies; this is just one.

    I agree when you counsel caution about drawing rash conclusions without knowing much about "entropy" (such as, knowing enough to know why that term requires extensive qualification), only I would put this much more strongly.
  18. Jan 13, 2007 #17

    Chris Hillman

    Science Advisor


    Hi all,

    First, I hope it is clear that I was discussing mathematical definitions of quantities called (by the inventor/discoverer) "entropies" or something similar (these include quantities which are commonly applied in various ways in physics, but also include hundreds more).

    The answer depends upon context. Many information theorists would consider Shannon entropy to be the basic definition, with considerable justice, in that this quantity lies at the heart of their field, has been extensively developed therein, and has proven enormously useful and flexible, with important applications throughout applied mathematics.

    However, a subtle feature of entropies which can be difficult to convey in a short space is that some of these notions are so general that they in some sense "include each other", without being in any true sense "completely equivalent"! For example, the "inclusion" might involve limiting cases.

    So mathematicians who are fond of the various notions of "algorithmic entropy" could say (with justification) that Shannon's notion of entropy is in some sense encompassed within algorithmic entropy. And information theorists will tell you (correctly) that algorithmic entropy, in a specific sense, can be said to arise from Shannon's probabilistic entropy. Yet no one, I warrant, would claim that these are "logically equivalent" notions!

    As a simple example of how distinct notions of entropy can be quantitatively related to each other, consider Boltzmann's "combinatorial approach", in which we assign an "entropy" to a partition of a finite set, [itex]n = n_1 + n_2 + \ldots + n_r[/itex] (where [itex]n, n_1, n_2, \ldots n_r[/itex] are nonnegative integers), by writing
    [tex] H(\pi) = \log \frac{n!}{n_1! \, n_2! \ldots n_r!}[/tex]
    This turns out to have many of the same formal properties which make Shannon's entropy so useful, which might not seem so surprising when you realize that (applying Stirling's approximation to each term, when we expand the above expression as a sum of logarithms) [itex]H(\pi) \approx n \, H(p)[/itex], where
    [tex] H(p) = -\sum_{j=1}^r p_j \, \log p_j [/tex]
    and where we set [itex]p_j = n_j/n[/itex]. Here, in terms of probability theory, we might say that Boltzmann's entropy approximates Shannon's entropy when we use "counting measure". (Interestingly enough, historically, Shannon's entropy first appeared in physics as an approximation to Boltzmann's entropy, which in turn had arisen in statistical mechanics, in connection with the attempts by Boltzmann and others to reduce classical thermodynamics to statistical phenomena arising in the atomic theory of matter. Later, Jaynes applied Shannonian ideas to put statistical mechanics on a "Bayesian" foundation.)
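    The approximation [itex]H(\pi) \approx n \, H(p)[/itex] is easy to verify numerically. This sketch (my own; the proportions are invented) computes the exact Boltzmann entropy via log-factorials and watches [itex]H(\pi)/n[/itex] converge to the Shannon entropy as [itex]n[/itex] grows:

```python
import math

def boltzmann_H(counts):
    """Boltzmann entropy: log of the multinomial coefficient n!/(n_1!...n_r!),
    computed exactly via log-gamma (lgamma(k+1) = log k!)."""
    n = sum(counts)
    return math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in counts)

def shannon_H(probs):
    """Shannon entropy with natural log, skipping zero terms."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# As n grows with fixed proportions p_j = n_j / n, H(pi)/n approaches H(p).
p = [0.5, 0.3, 0.2]
for n in (10, 100, 10000):
    counts = [int(n * pj) for pj in p]
    print(n, boltzmann_H(counts) / n, shannon_H(p))
```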

    Boltzmann's entropy is a special case of an algebraic formulation in terms of actions by some group G, in which we replace numerical quantities ("entropies") with algebraic objects (certain sets equipped with a transitive action by G), and these algebraic objects (which Planck called "complexions") also satisfy the same formal properties. This approach relates the "trivial part" of classical Galois theory (the so-called Galois correspondence between stabilizers and fixsets) to classical information theory. This might interest the budding category theorists amongst you since the category of G-sets (sets equipped with an action by G) forms an elementary topos, which implies for example that the "space" of "morphisms" from one G-set to another automatically is itself a G-set, and roughly speaking guarantees that in the case of groups G and G-sets with extra structure (e.g. if G is a Lie group and we consider smooth actions by G on smooth manifolds), good things will happen.

    If this intrigues you, I'd recommend Cover and Thomas, Elements of Information Theory, Wiley, 1991, which offers a fine survey of some of the most important notions (including Shannon and algorithmic entropy), as well as a good indication of why Shannon's notion of entropy has been so hugely successful. (Indeed, IMO classical information theory is without doubt one of the most successful theories in all of applied mathematics--- Shannon's notion of entropy is right up there with the notion of a differential equation as one of the most applicable ideas in mathematics.)

    Some pairs of the most important notions of "entropy" are not obviously related to one another, but turn out to have rather specific quantitative relationships to each other (when both are defined). Other pairs appear very similar but are actually quite different.

    The vast majority have few known relationships to the others.

    The question of black hole entropy and the so-called "information paradox" is one of the most vexed questions in physics. It would take a book to begin to explain why. Actually many books. Some of which have been published and can be found in good physics libraries.
    Last edited: Jan 13, 2007
  19. Jan 16, 2007 #18
    If something is 100% certain, then there is no information gained by knowing that it occurred. Is this the same as complete thermodynamic equilibrium?
  20. Jan 29, 2007 #19
    The second law of thermodynamics does not state that entropy must increase everywhere. It states that the overall entropy of the whole system increases, while entropy can still increase in some places and decrease in others. The second law is not a certainty but a statement of probability: there is a greater probability of entropy increasing than decreasing, therefore overall it increases, even though in some places it decreases.

    For example, take a box with a pendulum swinging freely, without resistance from air or friction. Let's say there are 3 particles of gas in the box and 11 units of energy in total, all of it initially in the pendulum. If we have 1 unit of energy in the gas and 10 in the pendulum, there is 1 possible state, (1,0,0), counting distributions of energy units without regard to which particle holds them. If we have 2 units in the gas and 9 in the pendulum, there are 2 possible states: (2,0,0) and (1,1,0). But if all 11 units are in the gas and none in the pendulum, there are 16 possible states. The total number of states works out to 83. So 1 state out of 83 has 1 unit in the gas, while 16 out of 83 have all 11 units in the gas. Therefore, the most probable final result is 11 units in the particles and none in the pendulum. It is still possible for the pendulum to have more energy at the end, but no real volume contains only 3 gas particles: the more particles there are, the greater the probability that the gas ends up with all the energy and the pendulum with none.
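    The counting in the post can be reproduced directly. Note that it tallies unordered distributions of energy units among the gas particles, so (1,0,0) and (0,1,0) count as one state. This sketch (function name invented) enumerates those distributions and confirms the 16 and the 83:

```python
def states(energy, particles=3):
    """List the distinct unordered ways to split `energy` whole units
    among `particles` gas particles, as nonincreasing tuples."""
    out = []
    def rec(remaining, parts, cap):
        if len(parts) == particles:
            if remaining == 0:
                out.append(tuple(parts))
            return
        for v in range(min(cap, remaining), -1, -1):
            rec(remaining - v, parts + [v], v)
    rec(energy, [], energy)
    return out

# Reproducing the numbers in the post: 11 total units of energy.
print(len(states(1)))                          # 1 state: (1, 0, 0)
print(len(states(2)))                          # 2 states: (2,0,0) and (1,1,0)
print(len(states(11)))                         # 16 states
print(sum(len(states(k)) for k in range(12)))  # 83 states in total
```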
  21. Feb 1, 2007 #20
    The second law is based on small systems with a thermodynamic boundary.
    For example, a box which contains a gas: if we release the gas from a smaller box inside it, the gas spreads out, filling the whole container. It is very unlikely that the gas at some later point in time would assemble itself again into clusters.

    This is, however, totally different on the scale of the universe, since the universe started out as unordered, and due to gravity we see local clustering of matter, which is contrary to what the second law would predict. And another thing, of course, is that the size of the system is part of the dynamics of the system itself.

    See also this lecture by Penrose:
    http://streamer.perimeterinstitute.ca/mediasite/viewer/?peid=a5af1a59-b637-4974-8eb8-c55ef34b9d7f [Broken]
    Last edited by a moderator: May 2, 2017