How Does the Universe Use Temperature Differences to Create Structures?

  • #51
PeterDonis said:
More precisely, differences in gradients/flows would be different "microstates" (detailed states of the universe) that correspond to the same "macrostate" (average values of the intensive quantities).

That makes sense.

PeterDonis said:
This is still an open question, because, as I said before, we don't know how to count the "possible universes", so we don't know how to quantitatively estimate how "fine-tuned" our universe really is.

Thanks. Just knowing that it is a real question helps. I couldn't tell if I was being conned. Some of the more vocal proponents of fine-tuning have motivations that are at best unrelated to scientific understanding.

In your opinion, is the question "why are there heat engines" a real question?
 
  • #52
techmologist said:
In your opinion, is the question "why are there heat engines" a real question?

Well, it's led to a real thread. :wink:

I think the answer is "sort of". It's certainly true that our local observation that there are heat engines must be consistent with what we know of the universe as a whole, so in that sense it's a real question.

But our concept of a "heat engine" is based on our concept of "useful work", and that's not really a physics concept; it depends on what we find to be "useful", so it's more of a subjective concept. Physically, something we call a "heat engine" is no different from any other system; it obeys all the same laws. It just happens to have an output that we consider "useful". So in that sense, "why are there heat engines" isn't a real question, or at least not a real physics question; it's a question about how we choose to describe certain portions of reality, not a question about the laws that govern reality.
 
  • #53
Thanks for making it a real thread Peter! I'm trying to read G. Crooks' paper from 1999 talking about the fluctuation theorem. This after realizing J. England is sort of starting with that. Very interesting. He is generalizing the work done by a heat-bath-coupled classical system in transitioning over a path in configuration space, whether the path exchanges heat with the bath or is isothermal but selects between microstates (I may be botching that) - so I got to thinking about your statement that "useful work" is observer dependent.
 
Last edited:
  • #54
PeterDonis said:
But our concept of a "heat engine" is based on our concept of "useful work", and that's not really a physics concept; it depends on what we find to be "useful", so it's more of a subjective concept. Physically, something we call a "heat engine" is no different from any other system; it obeys all the same laws. It just happens to have an output that we consider "useful". So in that sense, "why are there heat engines" isn't a real question, or at least not a real physics question; it's a question about how we choose to describe certain portions of reality, not a question about the laws that govern reality.

I think you're right that the "useful" in "useful" work is not strictly a physics concept. But what I have in mind is not completely subjective, either. I definitely do not mean only useful to humans. I would say "usefulness" has a certain objectivity in the context of organization. The "purpose" of any organization is simply to persist, to keep producing itself. How it does this depends on how it fits into a larger network of relations among organizations. This larger network of relations is itself a higher-order organization. Within the context of that higher-order organization, the organization performs a "function". But it is only performing this "function" because by doing so, it directs resources to itself and persists--produces itself, renews itself, repairs itself. So to an organization, "useful work" is self-repair.

As an economic example, a steel-producing firm performs an essential function as part of a larger economy. But the owners of the firm aren't doing it out of the goodness of their hearts, or patriotism, or whatever. To the extent they have an interest in the continuation of that business, they will consider "useful" any action that tends to grow the business, or at least maintain it. Actually it is more complicated than that, because in any modern firm of that type, management and labor also have their own interests, all pulling in somewhat different directions. So the organization, the firm, ends up "acting" as if it had a personality of its own, not identical to that of any of its constituents. Its actions are useful to the extent that they tend to keep that organization going.

At the physics level, useful work performed by a Benard cell is work that overcomes viscous drag, keeping the Benard cell from fizzling out. Similar things can be said of a thunderstorm or hurricane. These may or may not have some direct use to humans, but the usefulness referred to here is from the perspective of the organization itself.

I realize that the second law of thermodynamics doesn't explicitly refer to "engines" in the sense of "useful to somebody"--to power their car. It just says that if you have a system and two heat baths at different temperatures, it is possible to arrange a cyclic process with the result that thermal energy is absorbed from the hotter bath, some of which energy is used by the system to do work on its environment, and some of which is passed on as thermal energy to the colder bath. The second law is completely agnostic about whether such a work-producing cyclic process will ever happen. It only puts limits on what such a process could achieve, should it happen. That's where my question is coming from. Is it just a case of anything that can happen will happen?
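(For concreteness, the limit I have in mind is just the Carnot bound for a cycle run between baths at ##T_h## and ##T_c##:

$$W \le Q_h\left(1 - \frac{T_c}{T_h}\right),$$

which caps the work extracted per unit of heat drawn from the hot bath, but says nothing about whether any such cycle ever actually gets set up.)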

Jimster41 said:
Thanks for making it a real thread Peter!
Seconded. :)
 
  • #55
Jimster41 said:
Thanks for making it a real thread Peter! I'm trying to read G. Crooks' paper from 1999 talking about the fluctuation theorem. This after realizing J. England is sort of starting with that. Very interesting. He is generalizing the work done by a heat-bath-coupled classical system in transitioning over a path in configuration space, whether the path exchanges heat with the bath or is isothermal but selects between microstates (I may be botching that) - so I got to thinking about your statement that "useful work" is observer dependent.

Everything in those papers seems to hinge on the condition of microscopic reversibility relating the probability of a forward process to the probability of its reverse process.

$$\frac{P(A \to B)}{P(B \to A)} = e^{\beta Q}$$

where Q is the heat delivered to the surrounding bath during the forward process.

This idea is new to me. I am familiar with detailed balance, which applies at equilibrium, but this microscopic reversibility condition is claimed to apply away from equilibrium. How do they know that? Is there some way to see why it must be so?
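Here's a toy single-step check I ran to convince myself of at least the one-step version (a minimal sketch, assuming Metropolis dynamics with a symmetric proposal; the energies are made up):

Python:
import math

def metropolis_accept(dE, beta):
    # Acceptance probability for a proposed move that changes the
    # system energy by dE, for a bath at inverse temperature beta.
    return min(1.0, math.exp(-beta * dE))

beta = 2.0                # inverse temperature of the bath
E_a, E_b = 0.3, 1.1       # made-up energies of states a and b
dE = E_b - E_a            # system energy change on the forward move
Q = -dE                   # heat delivered to the bath (energy conservation)

ratio = metropolis_accept(dE, beta) / metropolis_accept(-dE, beta)
print(ratio, math.exp(beta * Q))  # both ~0.2019

For a single step the ratio is exactly ##e^{\beta Q}##, which is just detailed balance in disguise; the part I can't see yet is why the same form survives for whole driven, non-equilibrium paths.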
 
  • #56
techmologist said:
I would say "usefulness" has a certain objectivity in the context of organization.

But what counts as an "organization" is subjective. There's no law of physics that says what an "organization" is; it's just a particular piece of reality that someone picks out as being of interest.

techmologist said:
At the physics level, useful work performed by a Benard cell is work that overcomes viscous drag, keeping the Benard cell from fizzling out. Similar things can be said of a thunderstorm or hurricane.

True, but again, it is not physics that picks out the Benard cell or the thunderstorm or hurricane; it's us. True, these systems are usually thought of as being "natural", whereas a refrigerator or an engine is thought of as "artificial"; but even those are distinctions made by us, not physics.

techmologist said:
The second law is completely agnostic about whether such a work-producing cyclic process will ever happen. It only puts limits on what such a process could achieve, should it happen. That's where my question is coming from. Is it just a case of anything that can happen will happen?

Not every possible work-producing process that could happen, actually does happen. Since the underlying microscopic physics is chaotic (i.e., it has a sensitive dependence on initial conditions), we really have no way of knowing what picks out which work-producing processes actually happen (except in the obvious cases where somebody deliberately arranged for a particular process to happen).
 
  • #57
PeterDonis said:
But what counts as an "organization" is subjective. There's no law of physics that says what an "organization" is; it's just a particular piece of reality that someone picks out as being of interest.

Right, there's no law of physics that says so. But who says physics is all there is? Everything that happens is founded in physics, in the sense that the underlying laws of physics provide the background for everything. But most things aren't objects of physics. Like algorithms, for example. At some level, it's physics that makes your graphing calculator work. But it isn't physics that makes it give you the right answer. The same physics governs a calculator that gives you the wrong answer.

And while it's true that we do pick out things of interest, we aren't totally at liberty to pick out just anything, or ignore just anything. Our minds organize around a real world that we find ourselves in. They have to or we wouldn't be here.
 
  • #58
techmologist said:
who says physics is all there is?

It isn't, but it's all that's on topic for this forum. :wink: If your question "why are there heat engines" wasn't a question about physics, then it's off topic. I was assuming it was a question about physics.
 
  • #59
techmologist said:
Our minds organize around a real world that we find ourselves in.

Quite true. But there's still a difference between our models of reality, and the reality that is being modeled.
 
  • #60
PeterDonis said:
But what counts as an "organization" is subjective. There's no law of physics that says what an "organization" is; it's just a particular piece of reality that someone picks out as being of interest.
Chaisson's breakdown of "complexity" as "energy flux density" is pretty objective, isn't it?
 
Last edited:
  • #61
Jimster41 said:
Chaisson's breakdown of "complexity" as "energy flux density" is pretty objective, isn't it?

I'm not familiar with Chaisson's work, so I can't really comment on it. But a definition of "complexity" in terms of some physical observable is not the same as picking out a system as a "heat engine" or an "organization" and separating it from the rest of reality. That's the part that is subjective.
 
  • #62
PeterDonis said:
I'm not familiar with Chaisson's work, so I can't really comment on it. But a definition of "complexity" in terms of some physical observable is not the same as picking out a system as a "heat engine" or an "organization" and separating it from the rest of reality. That's the part that is subjective.

I agree with that for the most part...

Maybe the fact that there are multiple ways of decomposing the same set of differentiable things we can see, as a "complex dissipative structure" or an "organized heat engine", is because those terms are pure subjective projection: totally anthropocentric, or personally subjective. But it may also be because everything we see, including ourselves, is one big "complex dissipative structure" or "organized heat engine", one with multiple kinds of symmetry. That seems like an equally consistent explanation, and better in some respects.

And I think it's hard to argue it is completely subjective, which is why a physical observable such as "energy rate (or flux) density" is available at all, even if only roughly quantifiable (Chaisson takes great care to say this). He positions the term as having useful qualitative meaning over a very broad landscape of interest.
 
Last edited:
  • #63
PeterDonis said:
It isn't, but it's all that's on topic for this forum. :wink: If your question "why are there heat engines" wasn't a question about physics, then it's off topic. I was assuming it was a question about physics.

I couldn't very well post it in General Discussion, could I? They have very stringent guidelines there...

I would like to suggest two lines of thought. First, that some organizations really are more "physical" than others, and belong to physics if they belong to any discipline at all. Second, that most of the objects of physics that we take for granted don't meet the strict requirement of objectivity that you are using to rule out all organizations as objects of physical study.

To come back to the example of the Benard convection cells, there really is a physical reason for their organization, whether you want to call it organization or not. Some patterns in nature are objectively better than others at getting themselves amplified. Once the critical temperature difference is reached, the static, conducting configuration of the water in the dish is unstable. The solutions to the linearized approximate equation for the stream function are swirling modes. Any small perturbation has components of these modes, and they get amplified. For reasons I don't entirely understand, viscosity damps out the higher modes, leaving the first one. This can be used to predict the width and velocity distribution of the cells, but their exact configuration in the dish is random. This is all true regardless of whether there is somebody there to say "hey, look at that!". Benard cells form and maintain themselves in a pretty physical way.
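To make the mode-selection part less hand-wavy, here is the standard free-free-boundary result from linear stability theory, sketched numerically (my script and my numbers, so treat it as a sanity check):

Python:
import numpy as np

# Rayleigh-Benard convection, free-free boundaries (classical result):
# a perturbation mode with horizontal wavenumber k starts to grow when
# the Rayleigh number exceeds Ra(k) = (pi^2 + k^2)^3 / k^2.
k = np.linspace(0.5, 8.0, 100001)
Ra = (np.pi**2 + k**2)**3 / k**2

i = np.argmin(Ra)
print("k_c  ~", k[i], "(exact: pi/sqrt(2) =", np.pi / np.sqrt(2), ")")
print("Ra_c ~", Ra[i], "(exact: 27*pi^4/4 =", 27 * np.pi**4 / 4, ")")
# Both very long and very short wavelengths need a larger temperature
# difference to grow, so the first unstable mode, and hence the cell
# width, is picked out by the physics, not by the observer.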

Then there are atoms. Are atoms objective enough? Don't they also have to be picked out as things of interest? They represent solutions to the Schrodinger equation with a very idealized Hamiltonian, one that typically ignores the existence of most everything else in the universe. But this simplified picture helps us understand things like atomic spectra, so it is our explanation of what we see. The fact that we pick out things of interest doesn't make them arbitrary. There might be a good rationale for picking out certain things rather than others. Dan Dennett uses the expression, "carving nature at its joints," which I think he got from Plato.
 
  • Like
Likes Jimster41
  • #64
"Carving nature at it's joints" Love that. I have a book by Dennet On the way.

That G. Crooks paper just sent my head spinning. It clarifies a few things I feel I do understand and don't quite understand. I'd like to dive into the first few equations here, maybe relate them to the first page of England's paper. (If only tomorrow wasn't Monday).

My understanding of the Benard cells is consistent with the way @techmologist describes them. They represent a non-linear reconfiguration that allows a step change in convection efficiency, and there is the puzzle of what triggers the sudden change, what drives and constrains the re-configuration to become what it does, rather than something else.

The book "Why Stock Markets Crash: critical events in complex financial systems" by Didier Sornette (a geo-physicist, turned market analyst) really left an impression on me. Specifically w/respect to the role "re-normalization" under scaling operations and discrete scale invariance (power laws) play in emergence. Not physics per se, but believe it's relevant, in that it is the same general process mathematically (and so arguably at some level, to some dgree - a similar process "physically"). More strongly than that, I think it's an example of the symmetry and scale invariance of the "emergence" process in and of itself. Sornette proposes a "log periodic" model of approach to the critical points in the market price signal case. Interestingly the "condensate" past the critical point re-configuration is essentially a "stampede". The market becomes superconductive to fear. Generally, the market it is not well organized over short Time periods and large price ranges. Rather prices are stablized by disorganized individual responses to what is considered ambiguous market information.
 
Last edited:
  • Like
Likes techmologist
  • #65
Jimster41 said:
That G. Crooks paper just sent my head spinning. It clarifies a few things I feel I do understand and don't quite understand. I'd like to dive into the first few equations here, maybe relate them to the first page of England's paper. (If only tomorrow wasn't Monday).

If I could understand how the condition of microscopic reversibility is arrived at, I think that goes part way toward answering my question. It actually talks about the relative probabilities of a process and its reverse in terms of the entropy produced in the surroundings. This is more than you can get from the SLOT, which doesn't talk about the probability or rate of any process.

I messed up the equation earlier. I should have written P(forward)/P(reverse) rather than P(A->B)/P(B->A), because it matters that it is the time reversed path. Detailed balance is where you only have to consider the initial and final state.

That Didier Sornette book sounds like a winner. He gets a mention in Per Bak's book, How Nature Works. He sounds like my kind of scientist. According to Bak, he generates all sorts of crazy ideas, and thus has a very low batting average. But it only takes one good one.
 
  • Like
Likes Jimster41
  • #66
techmologist said:
some organizations really are more "physical" than others, and belong to physics if they belong to any discipline at all.

...

The fact that we pick out things of interest doesn't make them arbitrary.

True. I'm just pointing out that our models of reality are not the same as reality.

Take your example of atoms. You correctly point out that our model of an atom is greatly oversimplified. But even in that oversimplified model, atoms have no boundaries; there is no sharp line where an atom "ends" and the rest of the universe "begins". Any such line we might pick out is arbitrary, even though the atom itself is not. And once atoms start interacting, forming molecules, forming crystals, forming metals, etc., the boundaries we draw get even more arbitrary, even in our oversimplified models.

techmologist said:
There might be a good rationale for picking out certain things rather than others.

Yes; the rationale is that we want to explain and predict things, and we need models to do that, and the models we have come up with that make good predictions require us to draw boundaries and pick out particular systems and interactions and ignore everything else. But is that because those models are really the best possible models, the ones that really do "carve nature at the joints"? (Btw, I think you're right that that phrase originated with Plato.) Or are they just the best models we have come up with thus far? Could there be other even better models, that we just haven't conceived of yet, that carve nature at different "joints"?

Before you answer "how could that happen?", think carefully, because that's exactly what did happen when we discovered many of our current models. Take GR as an example. In GR, gravity is not even a force; it's spacetime curvature. So many questions that a Newtonian physicist would want to ask about gravity aren't even well formed in GR--at least not if you look at the fundamentals of the theory. Of course we can build a model using GR in the weak field, slow motion limit and show that in that limit, Newton's description of gravity works well enough. But conceptually, GR carves gravity at very different "joints" than Newtonian physics does. The same thing might happen to GR when we have a theory of quantum gravity; we might find that theory carving nature at different "joints" yet again, and explaining why GR works so well within its domain of validity by deriving it in some limit.

What I get from all this is that we should be very careful not to get overconfident about the "reality" of the objects that we pick out in our models. That doesn't mean our models are bad--after all, they make good predictions. Newtonian gravity makes good predictions within its domain of validity. But it does mean that the fact that a model makes good predictions should not be taken as a reliable indication that the entities in the model must be "real". One saying that expresses this is "all models are wrong but some are useful".
 
  • Like
Likes techmologist
  • #67
techmologist said:
If I could understand how the condition of microscopic reversibility is arrived at, I think that goes part way toward answering my question. It actually talks about the relative probabilities of a process and its reverse in terms of the entropy produced in the surroundings. This is more than you can get from the SLOT, which doesn't talk about the probability or rate of any process.

I messed up the equation earlier. I should have written P(forward)/P(reverse) rather than P(A->B)/P(B->A), because it matters that it is the time reversed path. Detailed balance is where you only have to consider the initial and final state.

That Didier Sornette book sounds like a winner. He gets a mention in Per Bak's book, How Nature Works. He sounds like my kind of scientist. According to Bak, he generates all sorts of crazy ideas, and thus has a very low batting average. But it only takes one good one.
Yeah, those equations. I'm looking at eq. 1 from G. Crooks,

$$\frac{P(+\sigma)}{P(-\sigma)} = e^{t\sigma},$$

just an exponential function of time, to see positive entropy production. But I feel like thinking about it. Is it more interesting, connotatively, when read left to right or right to left? I like it read right to left. Suddenly I see that entropy is not fundamental. It is just a "quality" describing microscopic change, via a comparison of the likelihood of any two events. The arrow of time. Of course I've heard that more or less. I'm just stating it pretty typically, and it's obvious mathematically, but it seems like we often describe entropy as a thing, something fundamental. So forgetting entropy, in the context of this discussion, what term could be placed "equal" to the left of the left side, to say how else "the difference between any two candidate events" can be defined? Maybe "relative dissipative efficiency" would be one candidate. Or maybe "synchronistic identity" with some associated but separate events (entanglement?). Minimization of space-time curvature (a la Verlinde) in the presence of "bulk pressure", "dark energy", "Lambda", etc.
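Just to see the scale of the asymmetry (my arithmetic, not Crooks'): at ##t\sigma = 10## the ratio is already

$$\frac{P(+\sigma)}{P(-\sigma)} = e^{10} \approx 2.2\times 10^{4},$$

so for macroscopic times and entropy production rates, the entropy-consuming trajectories are effectively never seen.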

Per Bak. I was going to get Per Bak.
 
Last edited:
  • #68
Now I see you are talking about eq. 5 in Crooks. And after a third read I follow the distinction between the forward path probability and the reverse path probability.

$$\frac{P[x(+t)\mid\lambda(+t)]}{P[\bar{x}(-t)\mid\bar{\lambda}(-t)]} = \exp\{-\beta Q[x(+t),\lambda(+t)]\}$$

I am confused a few paragraphs later by the "Entropy change of the bath = ##-\beta Q##" (I thought it would be positive, though I am guessing it's negative because ##\beta## is an "inverse temperature"), and by the expression "odd under time reversal". I have a lame bucket I throw that in, labeled "matrix nomenclature, basically like a minus sign or conjugate", but then later I think I missed something really important about "odd".

More dumb questions that betray my mathlessness: ##\exp## just means "expectation value", right? I get confused as to how interchangeable that term is with powers of ##e##.
 
Last edited:
  • #69
link to the paper http://arxiv.org/abs/cond-mat/9901352
The Entropy Production Fluctuation Theorem and the Nonequilibrium Work Relation for Free Energy Differences
Gavin E. Crooks
(Submitted on 29 Jan 1999 (v1), last revised 29 Jul 1999 (this version, v4))
There are only a very few known relations in statistical dynamics that are valid for systems driven arbitrarily far-from-equilibrium. One of these is the fluctuation theorem, which places conditions on the entropy production probability distribution of nonequilibrium systems. Another recently discovered far-from-equilibrium expression relates nonequilibrium measurements of the work done on a system to equilibrium free energy differences. In this paper, we derive a generalized version of the fluctuation theorem for stochastic, microscopically reversible dynamics. Invoking this generalized theorem provides a succinct proof of the nonequilibrium work relation.

Then in eq. (6) he says

$$\omega = \ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) - \beta Q[x(+t),\lambda(+t)]$$

which I understand as combining the entropy terms associated with an "isothermal" configuration change and the term associated with heat exchange. At some level this seems like a circularity, or a redundancy, or something, since it doesn't seem clear to me that the entropy change due to heat exchange isn't the same thing/process as the entropy change due to an isothermal configuration change. But then maybe that's why it's fair to "add them up".
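Writing out how I currently read it (my reading, so take it with salt):

$$\omega = \underbrace{\ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau})}_{\text{system's distribution}} + \underbrace{(-\beta Q)}_{\text{bath's entropy change}}$$

so the first two terms track the system and the last tracks the bath, and "adding them up" is just totaling two separate ledgers rather than counting the same thing twice.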

Right after that he references the importance of "odd under time reversal" and I realize I'm pretty confused about what "odd" means. The following section seems pretty crucial, and I don't feel confident I am taking all the implications of the setup into the parts after (eq7). It seems like he's just claiming that "microscopic entropy production is symmetric under time reversal". At some level that seems simple (simple enough to suggest the possibility I don't get it at all).

"This condition is equivalent to requiring that the final distribution of the forward process { \rho }_{ F }\left( { x }_{ +\tau } \right), is the same (after a time reversal) as the initial phase space distribution of the reverse process, { \rho }_{ R }\left( { \overline { x } }_{ -\tau } \right)... two broad types of work process that fulfill this condition. Either the system begins and ends in equilibrium, or the system begins and ends in the same time symmetric nonequilibrium steady state."
 
Last edited:
  • #70
Also, since I can't get Verlinde out of my head, I keep wondering about the relationship between Crooks' "work relation" and the Unruh temperature/holographic principle invoked in his paper below.

http://arxiv.org/abs/1001.0785
On the Origin of Gravity and the Laws of Newton
Erik P. Verlinde
(Submitted on 6 Jan 2010)
Starting from first principles and general assumptions Newton's law of gravitation is shown to arise naturally and unavoidably in a theory in which space is emergent through a holographic scenario. Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies. A relativistic generalization of the presented arguments directly leads to the Einstein equations. When space is emergent even Newton's law of inertia needs to be explained. The equivalence principle leads us to conclude that it is actually this law of inertia whose origin is entropic.

These are both somewhat old papers at this point and there appears to be a lot of work discussing each respectively. But they both seem somewhat pivotal in separate threads - having generated a lot of discussion. Maybe someone is connecting them.
 
Last edited:
  • #71
Jimster41 said:
##\exp## just means "expectation value", right?

I think it means the exponential function.

Jimster41 said:
I am confused a few paragraphs later by the "Entropy change of the bath = ##-\beta Q## "

As I understand it, the particular process being modeled in this example is reversible, so the total entropy change is zero. That means the entropy change of the bath must be minus the entropy change of the gas. But I may be misunderstanding, since I've only skimmed the paper.

Jimster41 said:
and by the expression "odd under time reversal".

As I understand it, that just means that, if some process has a given entropy change, the time reverse of that process must have minus that entropy change. But again, I have only skimmed the paper so there may be subtleties I'm missing.
 
  • #72
Thanks Peter. Much appreciated. It's encouraging to know they were understandable questions.

I was on board with the entropy conservation between the system and bath. I was just confused about the sign convention. I was expecting negative entropy change for the system, and positive entropy change for the bath. But I realize I am imagining that the energy change to the system is decreasing disorder. It seems like it could be described either way...
 
Last edited:
  • #73
Jimster41 said:
I was expecting negative entropy change for the system, and positive entropy change for the bath.

Assuming I'm correct that the process being modeled is reversible, then this will be true for one direction of the process. The entropy changes for the other direction of the process would have the opposite sign, positive for the system and negative for the bath.
 
  • Like
Likes Jimster41
  • #74
When I look at G. Crooks' "generalized formulation of the fluctuation theorem",

$$\frac{P_F[+\omega]}{P_R[-\omega]} = e^{+\omega}$$

$$\omega = \ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) - \beta Q[x(+t),\lambda(+t)]$$

and ask what it would take for that ##e^{+\omega}## to vary in a way that would "select" some transitions over others, in a non-linear (perhaps periodic) way, due to the selectee having a smaller or larger ##\omega##, I can't help wondering about those natural logarithms and ##e^{i^n}##. I know that ##i## is discrete scale invariant (the pattern repeats under exponentiation). If the "entropy" terms due to the configuration probability of the initial and final configurations were to sum to zero, or to a lower value than some random pair of states or transitions, in some periodic way, then transition probabilities would support non-linearly varying likelihoods of configuration selections - a population of favored configurations producing significantly "more" or "less" entropy than random selections. Just a whack thought. And it seems to be consistent with what "larger coarse graining regions" means. But it's a slightly different perspective on why those regions might be what they seem to be. Granted, the equation above is looking at forward and reverse transition probabilities, but it seems like a special case of any old transition probability comparison.

Then I read something over in the thread below, where ##i## is some sort of candidate for the value of the spooky Immirzi parameter ##\gamma##.

https://www.physicsforums.com/threa...ntropy-from-three-dimensional-gravity.810372/

Whiiiich, I do not understand, though it sounds tantalizingly related, if gravity is entropic.
http://en.wikipedia.org/wiki/Immirzi_parameter

Random googly-eye connections?:)... but time and scale periodic invariance is in there somewhere... just got to be... else where in the heck does it all come from?
 
Last edited:
  • #75
PeterDonis said:
True. I'm just pointing out that our models of reality are not the same as reality.
Absolutely.

PeterDonis said:
there is no sharp line where an atom "ends" and the rest of the universe "begins". Any such line we might pick out is arbitrary, even though the atom itself is not.
I like that way of putting it.

PeterDonis said:
Yes; the rationale is that we want to explain and predict things, and we need models to do that, and the models we have come up with that make good predictions require us to draw boundaries and pick out particular systems and interactions and ignore everything else.

Yes. That's why I don't think picking out organizations as objects of interest is fundamentally different from anything else done in physics. In fact, organizations tend to suggest themselves to the observer, because one of their main self-repair tasks is boundary maintenance. They maintain a boundary that is necessarily permeable to the outside world: they must be able to take in "food" and get rid of "waste". But they have to maintain some distinction between outside and inside, because otherwise they would wear themselves out trying to control the entire world around them. In the social-political context, this is why control freaks tend to crack up, or at least cause lots of trouble for the rest of us.

PeterDonis said:
But is that because those models are really the best possible models, the ones that really do "carve nature at the joints"? (Btw, I think you're right that that phrase originated with Plato.) Or are they just the best models we have come up with thus far? Could there be other even better models, that we just haven't conceived of yet, that carve nature at different "joints"?

Before you answer "how could that happen?", think carefully, because that's exactly what did happen when we discovered many of our current models. Take GR as an example. In GR, gravity is not even a force; it's spacetime curvature. So many questions that a Newtonian physicist would want to ask about gravity aren't even well formed in GR--at least not if you look at the fundamentals of the theory. Of course we can build a model using GR in the weak field, slow motion limit and show that in that limit, Newton's description of gravity works well enough. But conceptually, GR carves gravity at very different "joints" than Newtonian physics does. The same thing might happen to GR when we have a theory of quantum gravity; we might find that theory carving nature at different "joints" yet again, and explaining why GR works so well within its domain of validity by deriving it in some limit.

I really like these paragraphs. Very well put.

After my passionate defense of a version of commonsense realism, you might be surprised to hear me say this: I very much doubt that we will ever carve nature perfectly at the joints. To my mind, it must always be a work in progress. But to me, the important part is that it is progress. Newton's physics really is an improvement on Aristotle's physics. Einstein's physics really is an improvement on Newton's. By improvement, I mean that it captures more of reality--makes more of it available to perception.

When thinking about how it is we can know things, an idea that I like is that of a stable perception. A perception that you keep coming back to, even after actively trying to get multiple points of view, multiple opportunities to disconfirm it, is a perception that you can't help holding on to. It is a stable perception. Models that provide a more stable perception of the world are better than ones that don't. They may not be the most stable possible, but they have something of reality in them.

I am also open to the idea that there could be several distinct but equally good ways of carving nature at the joints. It is hard to picture how it would work in the sciences, but I can draw an analogy with mathematics. There are mathematical structures that can be axiomatized in several different ways, each system having its own benefits and drawbacks. Each axiom system is a window on the underlying mathematical object, but the object is distinct from any one of these systems.

PeterDonis said:
What I get from all this is that we should be very careful not to get overconfident about the "reality" of the objects that we pick out in our models. That doesn't mean our models are bad--after all, they make good predictions. Newtonian gravity makes good predictions within its domain of validity. But it does mean that the fact that a model makes good predictions should not be taken as a reliable indication that the entities in the model must be "real". One saying that expresses this is "all models are wrong but some are useful".

Agreed.
 
  • Like
Likes PeterDonis
  • #76
Jimster41 said:
Now I see you are talking about eq. 5 in Crooks. And after a third read I follow the distinction between the forward path probability and the reverse path probability.

$$\frac{P[x(+t)\mid\lambda(+t)]}{P[\bar{x}(-t)\mid\bar{\lambda}(-t)]} = \exp\{-\beta Q[x(+t),\lambda(+t)]\}$$

Yes, that's the "condition of microscopic reversibility" that I was talking about. I wish I knew where that came from. That is awesome because it is claimed to apply to non-equilibrium processes.

I don't think the forward and reverse paths are "reversible" in the thermodynamic sense of not producing any net entropy. I think in this context, "reversibility" is just referring to the fact (?) that at the lowest level, everything is reversible. That's what I understand Lochschmidt's paradox to be about--how do you get macroscopic irreversibility out of microscopic reversibility? Couldn't you just play the tape backwards without violating the laws of physics? I still don't have a good answer to that question.

So that equation gives quantitative form to the intuitive notion that even though some process can in principle go in both forward and reverse directions, you will more often see it go in the one that generates positive entropy in the surroundings. At least I think that's what it's saying. It sounds great, but how do they know that?
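The only route I can see to it (my own sketch, not from the paper, assuming the dynamics are Markovian and each elementary step is a bath-mediated jump obeying detailed balance) is to chain the single-step ratios along the whole trajectory:

$$\frac{P[\text{forward path}]}{P[\text{reverse path}]} = \prod_i \frac{P(x_i \to x_{i+1})}{P(x_{i+1} \to x_i)} = \prod_i e^{\beta Q_i} = e^{\beta Q},$$

where ##Q_i## is the heat delivered to the bath at step ##i## (my earlier sign convention). If that's right, the condition holds arbitrarily far from equilibrium because it only requires the bath to stay equilibrated at each step; the system itself never has to be.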

Oh, the reason for the minus sign in front of the Q is that Crooks is using a different sign convention for heat. He is counting Q as heat absorbed from the surroundings, while England is taking Q to be heat rejected to the surroundings, if I recall correctly. That's how I was using it.

EDIT: which Dennett book are you getting? I've read several and I haven't been disappointed.
 
Last edited:
  • #77
"Darwin's Dangerous Idea"
 
  • #78
techmologist said:
Yes, that's the "condition of microscopic reversibility" that I was talking about. I wish I knew where that came from. That is awesome because it is claimed to apply to non-equilibrium processes.

I don't think the forward and reverse paths are "reversible" in the thermodynamic sense of not producing any net entropy. I think in this context, "reversibility" is just referring to the fact (?) that at the lowest level, everything is reversible. That's what I understand Lochschmidt's paradox to be about--how do you get macroscopic irreversibility out of microscopic reversibility? Couldn't you just play the tape backwards without violating the laws of physics? I still don't have a good answer to that question.

So that equation gives quantitative form to the intuitive notion that even though some process can in principle go in both forward and reverse directions, you will more often see it go in the one that generates positive entropy in the surroundings. At least I think that's what it's saying. It sounds great, but how do they know that?

Oh, the reason for the minus sign in front of the Q is that Crooks is using a different sign convention for heat. He is counting Q as heat absorbed from the surroundings, while England is taking Q to be heat rejected to the surroundings, if I recall correctly. That's how I was using it.

EDIT: which Dennett book are you getting? I've read several and I haven't been disappointed.

Yeah, it bears a lot of thought... One minute I think I get it, then I'm not sure...

I took his argument to be something like:

  1. The vanilla fluctuation theorem applied to macroscopic states describes the probability of transitions between those macroscopic states as a signed real value, proportional to the relative frequency of states indistinguishable from the start and end states in the total phase space, and to the energy dissipated over the transition. Just good old entropy: observing that although macroscopic states are reversible, they have a probabilistic tendency to do some things rather than others.
  2. If you assume the microscopic domain of some controlled macroscopic transition is a stochastic Markovian one, and that the phase space distributions of state and control parameter are "the same" at the start and end of the state transition, then according to all available observables, they are reversible (I think this is his big point).
  3. Two types of systems obey the rules of indistinguishability at the start and end of a transition (and so reversibility): 1) a process traveling from equilibrium back to equilibrium, 2) a system traveling from a non-equilibrium steady state back to the same non-equilibrium steady state. (Also a big claim he's making that needs support, but I can't see any big flaw in it.)

I think he's kind of saying, "What's the difference between macroscopic and microscopic when identifying a reversible process? Same rules apply." (This is all talking about classical systems.) And to me it all seems to make sense. I guess I buy it.

The part that intrigues me is the path-work definition (equivalent to the entropy production), which applies scale and direction to these transition or path probabilities. This evokes the opening lines of Verlinde's paper on entropic gravity, where he uses the Unruh temperature and the example of polymer elasticity to claim that entropy is a force that does work. The question then, I think, is nicely set at the microscopic level, to wonder - what is doing that work? What is the cause of the "force" that does work, which we call entropy?

and his (Crooks', that is) equation,

$$\omega = \ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) - \beta Q[x(+t),\lambda(+t)]$$

looked at now down in the microscopic "path-work" context, is saying that the configuration terms (along with dissipation) are part of what entropy is. This sounds obvious, but here we are talking about microscopic system paths, not about macroscopic ensembles. What is it about one microscopic configuration path that makes it a path of less work? It does not seem remotely sufficient in this context, where we are defining the mechanics of entropy itself, to say it's because the path is "more probable". Rather, these are the terms that define that statement. The question here is why it is more probable, and how? It is because it requires, or is, a different amount of work. Configuration differences themselves contain and require work. Information is energy, or rather energy is information. This is just so... Verlinde.

BTW, I just started this: https://www.amazon.com/dp/0786887214/?tag=pfamazon01-20 - holy crap is it interesting.

[Edit] I'm not familiar with that paradox, but if I had to guess how you get macroscopic irreversibility, which is only probabilistic, from microscopic reversibility: it's because whatever it is that is "choosing" some paths and not others, whatever it is that is assigning "cost" to microscopic paths, is distributable, assigning that work (unevenly) over the microscopic parts that make up the macroscopic ensembles. There are LQG-ish notions to this, I think.
 
Last edited:
  • #79
Jimster41 said:
"Darwin's Dangerous Idea"

You couldn't have picked a better place to start.

Jimster41 said:
If you assume the microscopic domain of some controlled macroscopic transition is a stochastic Markovian one, and that the phase space distributions of state and control parameter are "the same" at the start and end of the state transition, then according to all available observables, they are reversible (I think this is his big point).

I wasn't taking it to mean the start and end distributions were the same, just that the system starts in equilibrium and is then allowed to relax to equilibrium again after being driven for a finite time. It could be a different equilibrium state. Since the start and end states are both equilibrium states, you can meaningfully define ##\Delta F##. And then he was able to relate this to the work done on the system during the finite time it was driven.

I would write the equation but the procedure for using latex has changed since I used it last. Have to get up to date.
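(Trying it anyway: if I'm remembering right, the relation he proves is the nonequilibrium work relation from the abstract,

$$\left\langle e^{-\beta W} \right\rangle = e^{-\beta\,\Delta F},$$

with the average taken over many repetitions of the same finite-time driving protocol. Don't hold me to the notation.)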

Jimster41 said:
This evokes the opening lines of Verlinde's paper on entropic gravity, where he uses the Unruh temperature and the example of polymer elasticity to claim that entropy is a force that does work. The question then, I think, is nicely set at the microscopic level, to wonder - what is doing that work? What is the cause of the "force" that does work, which we call entropy?

Could be he is just talking about the way it appears in the thermodynamic potential (i.e. free energy):

$$G = U + pV - TS$$

or

$$F = U - TS$$

A process that increases the internal entropy of a system decreases its thermodynamic potential, and that thermodynamic potential can be converted into work done on the environment. I haven't read Verlinde's paper but it looks neat. Possibly a little over my head, but worth taking a look at.
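To spell out the bookkeeping (standard isothermal thermodynamics, nothing specific to Verlinde): the maximum work extractable at constant temperature is

$$W_{\text{max}} = -\Delta F = -(\Delta U - T\,\Delta S),$$

so at fixed internal energy, a larger entropy term lowers ##F## and frees up exactly that much energy for work on the environment. I take that to be the sense in which an "entropic force" can do work.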

Jimster41 said:
looked at now down in the microscopic "path-work" context, is saying that the configuration terms (along with dissipation) are part of what entropy is. This sounds obvious, but here we are talking about microscopic system paths, not about macroscopic ensembles. What is it about one microscopic configuration path that makes it a path of less work?

The system is in contact with a heat bath, so it is getting random bumps and jolts from outside. That can affect how much work it takes to drive it from one state to another. I might be missing your point.

Jimster41 said:
BTW, I just started this: https://www.amazon.com/dp/0786887214/?tag=pfamazon01-20 - holy crap is it interesting.

Hey...now there's one. Anything by Strogatz is bound to be reliable. You don't have to worry that he's just some crank throwing around jargon. Thanks! I have so many new books for my reading list :) I will get to them "in the fulness of time", as I used to hear growing up.
 
  • #80
I think your observation that it could be a "different" equilibrium is correct. I'm a bit confused, to be honest.

I see that his precise claim is that the two groups of applications are both "odd under time reversal", which is clearly a technical concept, and I don't quite feel I understand it well enough. Reading again, I see he clarified it to just mean that entropy production would be equal but inverse if run from the other direction. So I think you are more correct. I don't think it affects his claim that the transitions contain equal but opposite amounts of work? Do you?

I think the meaning is the same as in the thermodynamic potential. But what I was trying to convey earlier is that I find it most interesting that he is saying the path selection of the system does work, is a term in the total value of entropy. I know this is obvious at some level. We define entropy as a property of a state, in relationship to the frequency of states like it in the phase space of a system, and more importantly how likely those states are to occur over time evolution of the system. But that is in some sense a post hoc observation used as a definition (why entropy is so slippery, sort of). What I think England is getting ready to talk about (I have only started his paper) is the way that path selection is a causal term of work production. This opens up types of path selection dynamics that support "improbable structure"... which must be constructed without violating the second law. Which is arguably what we have.

In other words, the way to read it is more like:

$$\ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) = \omega + \beta Q[x(+t),\lambda(+t)]$$

$$\ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) = \Delta\mathrm{path}_{x+\tau}$$

$$\Delta\mathrm{path}_{x+\tau} = \omega + \beta Q[x(+t),\lambda(+t)]$$

In other words, here is an "entropic potential energy" that literally does work through path selection. The reason I read it this way is that I'm interested in the idea (of Verlinde and others) that quantum mechanical gravity may be sort of configurationally specific: sensitive to, or varying through, configuration or "information"? This is, I think, what Verlinde is getting at with holographic entropic gravity.

And oh yeah, this is all over my head, but that doesn't stop me one bit (in the ensemble average anyway) :woot:. Actually, Verlinde's paper is pretty readable for the first bits. But it is conceptually a twistor. :confused: Pretty controversial, I think. But there is a lot going on on the Loop Quantum Gravity side that I am of a beer-betting mind is going to crack the mystery of entropy, at least in half.

I'm making a concerted effort to get better with LaTeX, because I want to understand the actual equations straight from the source - not translations of them, or clarifications of translations.

This is probably all just me getting a better, or at least fuller, understanding of the subtleties of thermodynamics o_O
 
Last edited:
  • #81
Jimster41 said:
I don't think it affects his claim that the transitions contain equal but opposite amounts of work? Do you?

That sounds right to me. Crooks is just talking about pairs of processes, forward and reverse, where the reverse is the complete time-reversed version of the forward path. So the reverse path starts in the final state of the forward path, and ends in the initial state of the forward path. If the forward path releases heat Q to the bath, the reverse path absorbs Q from the bath. If it required work W from outside to drive the system along the forward path, then the reverse path does work W on its surroundings. All the quantities change sign in the reverse process.

The two types of scenarios he is talking about are 1) A system starts in equilibrium state A, is driven for a finite time, then relaxes to equilibrium state B, and 2) A system starts and ends in the same non-equilibrium, stationary state, and is driven in a time-symmetric way.
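In symbols, restating those sign flips in the thread's notation (my shorthand, not Crooks'):

$$Q_R = -Q_F, \qquad W_R = -W_F, \qquad \omega_R = -\omega_F,$$

which is, as far as I can tell, all that "odd under time reversal" means for these quantities.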

I still can't get my head around the "condition of microscopic reversibility". I need to learn some more statistical mechanics.

Jimster41 said:
In other words, here is an "entropic potential energy" that literally does work through path selection.

I'm unfamiliar with this stuff about path selection, which you have referred to several times. For example, I'm not sure what you're getting at here...

Jimster41 said:
I'm not familiar with that paradox, but if I had to guess how you get macroscopic irreversibility, which is only probabilistic, from microscopic reversibility: it's because whatever it is that is "choosing" some paths and not others, whatever it is that is assigning "cost" to microscopic paths, is distributable, assigning that work (unevenly) over the microscopic parts that make up the macroscopic ensembles. There are LQG-ish notions to this, I think.

Can you explain it a little more? Oh yeah, I meant Loschmidt's paradox, not Lochschmidt. Ha ha.

Jimster41 said:
I'm making a concerted effort to get better with LaTeX, because I want to understand the actual equations straight from the source - not translations of them, or clarifications of translations.

Yep, it's better to be able to have direct access to what's being said. When I come across something that looks important, like in a technical paper, I'm willing to put in some work to understand the math.

I think England is using the standard quantitative definition of fitness, the net growth rate ##g-\delta## (births minus deaths). So he is assuming replication as a given. Based on the article about him, I was thinking he was going to tell us why we should expect there to be things that replicate. Maybe I read it with wishful thinking. But with the assumption that things do replicate, he puts a lower bound on the amount of heat they must produce in the process. Then, making the plausible assumption that there is pressure on living organisms to get the most bang for their thermodynamic buck--to approach the bound--this bound can itself be thought of as a thermodynamic measure of fitness.
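(Going from memory of England's self-replication paper, the bound has roughly the form

$$\beta\langle q \rangle + \Delta s_{\text{int}} \;\ge\; \ln\frac{g}{\delta},$$

i.e. the heat dumped to the bath per replication event, plus the replicator's internal entropy change, is at least the log of the growth-to-decay ratio, so faster net growth demands more dissipation. Don't hold me to the exact symbols.)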
 