Memory, Entropy and the Arrow of Time

In summary: Sean Carroll has stated several times that the reason we can remember the past and not the future is that entropy is increasing, i.e. that there is an arrow of time. Is this statement justifiable? One argument discussed below is that the more complex the system, the greater the chance that some part of it will have to be erased in order to establish the correlation that constitutes a memory.
  • #36
I'm sorry but you're really not making any sense here. I've explained logically and in detail why you are wrong, and you haven't even bothered to respond.

"Then it's a good thing I didn't say that. I said the cases where a memory is reliably stored can't be random fluctuations. Random fluctuations can't create stable correlations, which is what is required for memory storage."

Ok... As I said, and as you agreed, the case in which entropy decreases is no more or less a random fluctuation than the case in which it increases, so this is irrelevant.

"Yes, it does. I've already explained why, repeatedly."

You haven't at all. I have explained (and sourced) why entropy within the memory circuit must decrease, and why entropy within the environment may increase or decrease without influencing the success of this memory storage. Your only recourse is to a poorly defined concept of "random fluctuations", without any reference to what is fluctuating, why it would influence memory storage, or why it would differ between realisations of the stochastic system which happen to decrease or increase entropy. After addressing your misunderstanding of the microscopic and macroscopic basis of entropy and fluctuations, you just repeated the same point which had already been refuted.
 
  • #37
madness said:
I've explained logically and in detail why you are wrong, and you haven't even bothered to respond.

Since my response to you would be exactly the same, I'm not sure there's any point in continuing the discussion. Just one possible misunderstanding to clear up:

madness said:
entropy within the environment may increase or decrease without influencing the success of this memory storage

Does the "environment" include whatever the memory is a memory of? I should have asked that before. If it doesn't, what you say here is true, but the "environment" doesn't include everything besides the memory storage. If it does (which is what I was assuming in my previous post, but I should have asked instead of assuming), then what you say here is not true, because the entropy of memory storage + thing being remembered must increase for the interaction to be a valid memory storage, for the reasons I already gave.
 
  • #38
"Since my response to you would be exactly the same, I'm not sure there's any point in continuing the discussion."

In that case, I would agree.

"Does the "environment" include whatever the memory is a memory of? I should have asked that before. If it doesn't, what you say here is true, but the "environment" doesn't include everything besides the memory storage. If it does, then what you say here is not true, because the entropy of memory storage + thing being remembered must increase for the interaction to be a valid memory storage, for the reasons I already gave."

It includes everything outside the memory circuit. We could also consider a simplified model in which there is only a presented pattern (the environment) and a memory circuit which stores this pattern. The success of the pattern storage is not determined by the direction of entropy change in the environment - there is no valid reason why it would be.
 
  • #39
madness said:
If you still don't believe me, read this paper titled "Self-organization and entropy decreasing in neural networks" http://ptp.oxfordjournals.org/content/92/5/927.full.pdf and this paper, titled "Pattern recognition minimizes entropy production in a neural network of electrical oscillators" http://www.sciencedirect.com/science/article/pii/S0375960113007305.

I don't think that the "entropy" being calculated there is the same as thermodynamic entropy. In that paper, for the Traveling Salesman problem, the entropy is defined as (equation 8, page 930)

[itex]S_{en} = - \sum_p f(n,p) \ln f(n,p)[/itex]

where [itex]f(n,p)[/itex] is the probability of pattern [itex]p[/itex] at discrete time step [itex]n[/itex]. Similarly, the "energy" being discussed there is not thermodynamic energy. On that same page, it says:

The energy of each pattern depends on the total length of its path, and it takes its minimum if the path is the shortest one (best solution).

I just glanced at the paper, but I don't think it is saying what you seem to be claiming it is saying. The author (I think) is not talking about thermodynamics, at all. He's describing the learning process as the process of homing in on one pattern from a set of possible patterns. That process can be described as a kind of decrease of entropy in the information sense, but it isn't thermodynamic entropy.

What is the relationship between the two notions of entropy? I'm not exactly sure, but expecting them to be exactly the same seems naive to me. I don't see an immediate connection between the two. The idea behind pattern recognition is that initially, before the machine has received any data, the pattern could be anything. That's a high-entropy situation in the space of possible patterns. As the machine gets more data, it homes in on one particular pattern. That's a low-entropy situation in the space of possible patterns.
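To make that "homing in" picture concrete, here is a minimal toy sketch (my own, not from the paper, with made-up patterns and an arbitrary noise level): a Bayesian observer starts with a uniform belief over a handful of candidate patterns and updates it as noisy bits arrive, and the entropy [itex]-\sum_p f(p) \ln f(p)[/itex] of that belief typically falls toward zero as it homes in on the presented pattern.

[code]
import math
import random

PATTERNS = [(0, 0, 1, 1), (1, 0, 1, 0), (1, 1, 0, 0), (0, 1, 0, 1)]  # candidate patterns
TRUE = PATTERNS[2]          # the pattern actually being presented
FLIP = 0.2                  # probability that an observed bit is flipped (noise)

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

belief = [1.0 / len(PATTERNS)] * len(PATTERNS)   # uniform prior: maximal entropy
print(f"step 0: S = {entropy(belief):.3f} nats")

random.seed(1)
for step in range(1, 11):
    i = random.randrange(len(TRUE))                       # look at one bit position
    bit = TRUE[i] if random.random() > FLIP else 1 - TRUE[i]
    # Bayes update: likelihood of the observed bit under each candidate pattern
    likelihood = [(1 - FLIP) if pat[i] == bit else FLIP for pat in PATTERNS]
    belief = [b * l for b, l in zip(belief, likelihood)]
    total = sum(belief)
    belief = [b / total for b in belief]
    print(f"step {step}: S = {entropy(belief):.3f} nats")
[/code]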

To relate this to thermodynamics, you have to know what is the cost (in thermodynamic entropy) for the processors to run. It does not seem to me that the author of that paper is trying to do that.
 
  • #40
The rough idea behind the claim that memory is connected with an increase of entropy is that in order to make a permanent record of something, you have to assume that the initial state was something special. When we take notes on a pad of paper, the pad starts off in a very special state, the state which initially has no pencil or pen marks. As you take notes, the pages fill up with marks. When it is filled, it is no longer useful for recording memories. If you start off with a pad that is already filled with marks, then you can't tell which new marks are yours, and which ones were already there.

The "blank" state has to be special, but does it have to be thermodynamically lower in entropy than the state after marks are made? I'm not 100% sure.
 
  • #41
I don't think storing memory necessarily has anything to do with entropy. Memory formation without an increase in entropy is discussed in https://www.cs.princeton.edu/courses/archive/fall06/cos576/papers/bennett03.pdf.

At the fine grained level, entropy does not increase. Roughly, predicting the future is remembering the future. If I have fine grained knowledge of the system's past, I will be able to remember (predict) its future exactly. However, the second law says entropy increases. Again roughly, this means that if I remember the past with a particular resolution, my ability to remember (predict) the future with the same resolution will be worse. This is the rough idea behind saying that the second law of thermodynamics explains why we remember the past more than we remember (predict) the future. However, there is a finer distinction between predicting the future and remembering the past that has been proposed by Mlodinow and Brun, who discuss the issue extensively.

http://physics.aps.org/articles/v7/47
Why We Can’t Remember the Future
Philip Ball

http://arxiv.org/abs/1310.1095
http://dx.doi.org/10.1103/PhysRevE.89.052102
On the Relation between the Psychological and Thermodynamic Arrows of Time
Leonard Mlodinow, Todd A. Brun
(Submitted on 18 Sep 2013)
In this paper we lay out an argument that generically the psychological arrow of time should align with the thermodynamic arrow of time where that arrow is well-defined. This argument applies to any physical system that can act as a memory, in the sense of preserving a record of the state of some other system. This result follows from two principles: the robustness of the thermodynamic arrow of time to small perturbations in the state, and the principle that a memory should not have to be fine-tuned to match the state of the system being recorded. This argument applies even if the memory system itself is completely reversible and non-dissipative. We make the argument with a paradigmatic system, then formulate it more broadly for any system that can be considered a memory. We illustrate these principles for a few other example systems, and compare our criteria to earlier treatments of this problem.
 
  • #42
Thanks for the comments guys, it's good to get some more inputs.

Stevendaryl:

We discussed briefly why the Shannon entropy and Gibbs entropy are the same (up to a constant) for a Hopfield network. It's just a kind of Ising model, and since the probability in the paper is the probability of a particular pattern (microstate of the system), the Shannon entropy is equivalent to the Gibbs entropy.
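Spelled out (my paraphrase of that earlier discussion, not a quote from the paper): if [itex]P(p)[/itex] is the probability of the network being in pattern (microstate) [itex]p[/itex], then

[tex]S_\text{Gibbs} = -k_B \sum_p P(p) \ln P(p) = k_B \, S_\text{Shannon},[/tex]

so the two differ only by the factor [itex]k_B[/itex] (and by a further factor of [itex]\ln 2[/itex] if the Shannon entropy is measured in bits rather than nats).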

You are correct that the author is discussing the memory retrieval ("homing in on a pattern" as you say) rather than storage. You are also correct when you say that he does not consider the thermodynamic cost for these processes to run, which presumably result in an entropy increase in the environment (statistically speaking).

I think it's helpful to explain how the Hopfield model of memory works, since it seems to generate a different intuitive picture of memory than the one which seems to be arising in this thread.

The Hopfield model is a set of units which can be in two states - 0 or 1. These units are coupled to each other, so that the state of each unit depends partially on the states of the other units. Note that so far this is just an Ising model, but instead of spin we interpret the states as active or inactive neurons, and instead of magnetic coupling we think of synaptic coupling. So how does this relate to memory? A memory of a particular pattern is stored as a set of couplings between units in the network. When the network is initialised in a state which partially overlaps with this stored pattern, the network will evolve towards the full stored pattern - this is memory retrieval.
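To make that concrete, here is a minimal sketch of the storage-and-retrieval loop (my own toy code, not taken from any of the papers linked earlier; the network size, number of patterns and noise level are arbitrary choices, and I use the usual ±1 Ising convention rather than 0/1):

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # number of units ("neurons")
patterns = rng.choice([-1, 1], size=(3, N))  # three random patterns to store

# Hebbian storage: the couplings are a sum of outer products of the stored patterns
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)                       # no self-coupling

def retrieve(state, sweeps=5):
    """Asynchronous updates: each unit aligns with its local field."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Cue the network with a corrupted version of pattern 0 (25% of its bits flipped)
cue = patterns[0].copy()
cue[rng.choice(N, size=N // 4, replace=False)] *= -1

recalled = retrieve(cue)
print("overlap with stored pattern:", (recalled @ patterns[0]) / N)  # close to 1.0
[/code]

The stored memory lives entirely in the couplings W; retrieval is the relaxation of the network state onto the stored attractor.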

Now, let's compare your analogy of a blank slate to the Hopfield model of memory storage.

"When we take notes on a pad of paper, the pad starts off in a very special state, the state which initially has no pencil or pen marks."

To begin with, the network has completely random couplings between its units, so that no information is stored.

"As you take notes, the pages fill up with marks."


As memories are stored, certain patterns are stored in these couplings.

"When it is filled, it is no longer useful for recording memories."

After too many memories are stored, interference occurs resulting in retrieval of the wrong patterns. It is no longer useful for retrieving stored memories.

"If you start off with a pad that is already filled with marks, then you can't tell which new marks are yours, and which ones were already there."

The network is "full of marks" to begin with, but these organise into meaningful patterns during memory storage. In a sense, the network with far too many patterns and the network with no patterns are the same thing, implying that entropy in the network may have a minimum at some intermediate number of stored patterns.

Atyy:

Thanks for the links, they look like just what I'm looking for. I'll check them out soon.
 
  • #43
Atyy:

As a general point, I think there is something missing from these generic physicists' treatments of memory which is important for psychological memory. For psychological memory, it is important not only that a record be kept of something which happened previously, but that there should be some retrieval mechanism by which the system can dynamically reactivate these stored patterns given some inputs. This is what it means to remember something. For example, a water wave is clearly not a memory in the psychological sense, although it could be in the physicists' sense (they use it as an example in the paper you linked to). In the physicists' view, the stored memory is reconstructed by an external observer with knowledge of the system's states and the laws of physics, so that all of the remembering is done outwith the memory system. It's not at all obvious to me that this treatment is relevant to the psychological arrow of time.
 
  • #44
"For psychological memory, it is important not only that a record be kept of something which happened previously, but that there should be some retrieval mechanism by which the system can dynamically reactivate these stored patterns given some inputs. This is what it means to remember something."

For a potential causal mechanism for record keeping and retrieval, the mathematical model in the following paper is intriguing and may provide fertile ground for your thoughts.

http://lsa.colorado.edu/papers/plato/plato.annote.html
A Solution to Plato's Problem:
The Latent Semantic Analysis Theory of Acquisition, Induction and Representation of Knowledge
Thomas K. Landauer


The concept is essentially that we can model our brains as keeping track of co-occurrences--things/elements of perception that tend to happen together. It seems a nice starting place, since co-occurrence is an intuitively simple concept, seems to provide a nice fundamental unit of measurement for quantifying and building up theories of pattern-recognition, and was also able to successfully model (in the author's study) learning in higher dimensions. A model of the brain as having an instinct for accretion of these co-occurrences also seems to feel natural. The more I think about it, perhaps inevitable.
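To give a flavour of the model, here is a minimal sketch (my own toy example with a made-up five-"document" corpus, not Landauer's actual data or parameters): build a word-by-document co-occurrence matrix, take a truncated SVD, and compare words in the reduced "latent semantic" space.

[code]
import numpy as np

docs = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "stocks fell as markets closed",
    "markets rallied and stocks rose",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-document co-occurrence counts
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# Truncated SVD: keep k latent dimensions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]

def similarity(w1, w2):
    a, b = word_vecs[index[w1]], word_vecs[index[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("cat", "mouse"))    # words from the same topic: relatively high
print(similarity("cat", "stocks"))   # words from different topics: much lower
[/code]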

I was exposed to this article in a Psychology of Reasoning and Problem Solving class. I found it quite fascinating.

Edit: the discussion of the math is limited in the article proper. There is an appendix that touches upon the matrices used. I'm sure if you like the article, you could get more details by contacting the author!
 
  • #45
I think there are two different ways of thinking about this. One is to look at the physical mechanisms of how memory is implemented. But a second way to think about it is in terms of the reliability of prediction versus retrodiction.

Remembering the past means that, given our limited, approximate knowledge of the present state of the universe, we can make an accurate deduction of what the previous states of the universe must have been. Of course, we can also use our knowledge of physics to predict the future state of the universe, at least approximately, given the present. But for retrodiction to count as memory of the past in a way that prediction doesn't count as memory of the future, it must be that retrodiction is much more reliable than prediction. To give an example: We have much more certain knowledge of who was President 20 years ago than we do of who will be President 20 years from now.

So the question is: what makes the present state of the universe much more firmly connected to a unique past than it is to a unique future? Is having a lower entropy in the past a sufficient explanation?

Mathematically, it seems that it has to do with coarse graining and branching. We don't know the exact state of the universe today, we only know it to a certain approximation. Presumably (at least, with classical physics, which is deterministic) if we knew the current state with perfect precision, we could predict the future just as precisely as we can retrodict the past. But if we only know the present approximately, then there are a number of possible futures that are consistent with that knowledge, and similarly there are a number of possible pasts that are consistent with that knowledge. So it seems to me that if the present is more strongly connected to a unique past than to a unique future, it must be because the forward branching of possibilities is much "bushier" than the backwards branching. Back to the earlier example, there are more possibilities for who will be President 20 years from now than there are for who was President 20 years ago. (The latter is uniquely determined.)

I don't think entropy by itself implies the bushiness of forward branching, or at least, I don't immediately see it. Being low entropy now means that there are very few microscopic states that are consistent with our approximate knowledge of the state of the universe (the macroscopic state). So it seems clear that if the current state is high entropy, then it means that the branching is bushy in both directions, because there are so many microscopic states that are consistent with our macroscopic state, and presumably, different microscopic states are associated with different pasts and futures. So high entropy would imply that neither the past nor the future can be known with much certainty. But in the case of low entropy, it's not clear why the past would be known more certainly than the future.
 
  • #46
stevendaryl said:
So it seems clear that if the current state is high entropy, then it means that the branching is bushy in both directions, because there are so many microscopic states that are consistent with our macroscopic state, and presumably, different microscopic states are associated with different pasts and futures. So high entropy would imply that neither the past nor the future can be known with much certainty.

This is interesting.

In order to model psychological memory with physics, one must consider the behavior of systems where these psychological phenomena happen. It goes without saying that this is human beings. Now, while much is left wanting in our current vast store of experimental observations of human beings' psychological manifestations (i.e., experimental psychology), there is also much to gain in terms of a starting ground for a physics approach. Stevendaryl, your last musings there, on the current state being high entropy, elicit a concept that captures what actually happens in experimental psychology labs: people consistently overestimate how well they remember the past. Our memories of the past appear crystal clear to us, but more often than not, they are fuzzy and full of distortions.

If two people remember the same event in two different ways, and we model these two people as a "system of N psychological particles" (I am not sure if that verbiage sounds ludicrous or not), can we then say the divergence in their memories of a mutually experienced event causes entropy to increase in our system of N psychological particles (where N=2)?

Some other interesting questions would be: How would entropy explain short versus long term memories? Does entropy play a role in determining whether or not a memory encoded into short term memory subsequently becomes encoded in long term memory? Does attention/lack of attention respectively reduce entropy/increase entropy? Is there a critical condition at which the encoded short term memory is lost after enough noise accumulates? Is there a critical threshold for which sustained attention finally encodes a short term memory into long term memory, and if so, can we say that entropy has been reduced by such sustained attention?
 
  • #47
"I think there are two different ways of thinking about this. One is to look at the physical mechanisms of how memory is implemented. But a second way to think about it is in terms of the reliability of prediction versus retrodiction."

This is certainly interesting, but I wonder how relevant this is to memory in the psychological sense. Specifically, the prediction or retrodiction should be done within the memory system, otherwise you are neglecting the most important components of memory.

Models of psychological memory (such as the Hopfield model) consider a system which forms certain dynamical patterns in response to certain inputs (http://en.wikipedia.org/wiki/Encoding_(memory)), undergoes some structural changes based on these dynamical patterns (http://en.wikipedia.org/wiki/Storage_(memory), http://en.wikipedia.org/wiki/Hebbian_theory), which in turn cause the system to revisit those same dynamical patterns whenever a somewhat similar input occurs in the future (http://en.wikipedia.org/wiki/Recall_(memory)).

The kinds of systems you (and others) are considering here at best do the first two (and this is questionable), but certainly do not do the third.
 
  • #48
madness said:
"I think there are two different ways of thinking about this. One is to look at the physical mechanisms of how memory is implemented. But a second way to think about it is in terms of the reliability of prediction versus retrodiction."

This is certainly interesting, but I wonder how relevant this is to memory in the psychological sense. Specifically, the prediction or retrodiction should be done within the memory system, otherwise you are neglecting the most important components of memory.

The prediction vs. retrodiction accuracy to me is a necessary condition for being able to remember the past. If the past is not deducible from the present, then it doesn't matter what model of memory you use. It's not sufficient, however. There were billions of years after the Big Bang during which there were no memories in the modern sense.
 
  • #49
stevendaryl said:
The rough idea behind the claim that memory is connected with an increase of entropy is that in order to make a permanent record of something, you have to assume that the initial state was something special. When we take notes on a pad of paper, the pad starts off in a very special state, the state which initially has no pencil or pen marks. As you take notes, the pages fill up with marks. When it is filled, it is no longer useful for recording memories. If you start off with a pad that is already filled with marks, then you can't tell which new marks are yours, and which ones were already there.

The "blank" state has to be special, but does it have to be thermodynamically lower in entropy than the state after marks are made? I'm not 100% sure.

That is what PeterDonis described by his example here: https://www.physicsforums.com/threads/memory-entropy-and-the-arrow-of-time.773493/#post-4868659 . A physical memory is never "a blank slate" to begin with.

atyy said:
I don't think storing memory necessarily has anything to do with entropy. Memory formation without an increase in entropy is discussed in https://www.cs.princeton.edu/courses/archive/fall06/cos576/papers/bennett03.pdf.
At the fine grained level, entropy does not increase.

It makes no sense to speak of entropy "at the fine grained level". Classical entropy is macroscale, statistical physics entropy is microscale but statistical. In both cases we need large enough systems, including a large environment, that we can define sensible thermodynamic observables.

The OP is asking about biological memory re an arrow of time. Inasmuch as we replace the cosmological expansion arrow of time with its correlate in local entropy increase and ask how biological systems, such as those that contain memories, work, they (the whole organism or the brain) have to increase entropy (well, duh) since they are disequilibrium systems. See https://www.physicsforums.com/threads/memory-entropy-and-the-arrow-of-time.773493/#post-4868465

The rest of the thread veers into a discussion of philosophical questions that are erroneous and/or irrelevant re the OP question. As long as we look at systems large enough to be thermodynamic disequilibrium systems, they will increase entropy in order to function, and they can contain memories. For memories specifically there is an auxiliary condition, related to the irreversible operation that must increase entropy in computers, i.e. erasing memory: from dynamical systems theory we can derive that they need to have signal amplification > 1. And that is all we need to remember. ;)
 
  • #50
madness said:
Atyy:

As a general point, I think there is something missing from these generic physicists' treatments of memory which is important for psychological memory. For psychological memory, it is important not only that a record be kept of something which happened previously, but that there should be some retrieval mechanism by which the system can dynamically reactivate these stored patterns given some inputs. This is what it means to remember something. For example, a water wave is clearly not a memory in the psychological sense, although it could be in the physicists' sense (they use it as an example in the paper you linked to). In the physicists' view, the stored memory is reconstructed by an external observer with knowledge of the system's states and the laws of physics, so that all of the remembering is done outwith the memory system. It's not at all obvious to me that this treatment is relevant to the psychological arrow of time.

Yes, these don't address the psychological arrow of time. Presumably that could be delusional, and it's hard to say that there are no organisms that delude themselves into remembering the future. Rather, these scenarios are "optimal" scenarios, and argue that even with reversible memories, a physical memory cannot remember the future. Biological memories are more limited, and they too cannot remember the future more than they remember the past, except delusionally.
 
  • #51
madness said:
I wasn't referring to the energy. The Hopfield network before the memory is embedded has random connections, and after the memory is embedded it has a specific pattern of connections which generate attractor dynamics within the network. This is what constitutes a decrease in entropy.
madness said:
If you still don't believe me, read this paper titled "Self-organization and entropy decreasing in neural networks" http://ptp.oxfordjournals.org/content/92/5/927.full.pdf and this paper, titled "Pattern recognition minimizes entropy production in a neural network of electrical oscillators" http://www.sciencedirect.com/science/article/pii/S0375960113007305.

Here you seem to be referring to memory formation. But is there any memory formation in these articles? It seems the weights are fixed, and the articles are discussing recall.

Also, what you are arguing is that the initial state of the Hopfield net is a high entropy state (which I don't think the articles you link to show). Even if that turns out to be true, that does not mean that the Hopfield net is remembering a high entropy past, since after training, the memories stored in the network are not of the network's initial state. Rather, the memories in the network are of the examples presented during its training. The training period requires a "teacher" who turns on Hebbian plasticity, presents a limited selection of examples, and then turns off the Hebbian plasticity. This period of training seems much more like a low entropy period, so it would seem the network is remembering a low entropy past.
 
  • #52
I wrote a paper on this topic: "We do have memories of the future; We just cannot make sense of them": http://philsci-archive.pitt.edu/11303/

I show the way the arrow of time, through global increase of entropy in processes associated with memories, impacts the theoretical possibility of remembering the future given the reversibility of fundamental laws of nature.

Stephane
 
  • #53
Stephr said:
I wrote a paper on this topic: "We do have memories of the future; We just cannot make sense of them": http://philsci-archive.pitt.edu/11303/

I show the way the arrow of time, through global increase of entropy in processes associated with memories, impacts the theoretical possibility of remembering the future given the reversibility of fundamental laws of nature.

Stephane

I'm not sure I understand. It is an observational fact that there are no future light cones moving reverse-causally that would make "remembering the future" (and why it isn't happening) a process to look at.
 
  • #54
Torbjorn_L said:
I'm not sure I understand. It is an observational fact that there are no future light cones moving reverse-causally that would make "remembering the future" (and why it isn't happening) a process to look at.

Well, the usual way to see this problem is the following: laws of physics are reversible in time, so states in the present (our memory) can be seen as the result of future events the same way they can be seen as the result of future events, with regard to the laws of nature. The distinction of the future part of a light cone you are referring to and its past part can be made because there is an "arrow of time". This arrow of time is the result of the statistical nature of entropy and the fact that the big bang was a low entropy event. The remembering process, to be effective, has to follow this arrow of time (and that's why we can't remember the future). In my paper, I just analyze a bit deeper the reasons why it has to be that way.
 
  • #55
Stephr said:
Well, the usual way...

Sorry, I made a typo: "states in the present (our memory) can be seen as the result of future events the same way they can be seen as the result of PAST events"
 
  • #56
Stephr said:
The distinction of the future part of a light cone you are referring to and its past part can be made because there is an "arrow of time". This arrow of time is the result of the statistical nature of entropy and the fact that the big bang was a low entropy event. The remembering process, to be effective, has to follow this arrow of time

That correlates with my understanding of the physics. There is no remembering 'of the future' since there is no such information flow (no such light cones) in physics. It isn't the brain-body chemistry that has to be predicted any more than chemistry in general, it is the thermodynamic arrow (that EM obeys, so light cones and chemistry both) that needs a prediction.

Stochasticity, which fuzzifies the thermodynamic arrow, is part of the reason why active memory requires an amplification > 1. (Um, I saw the claim and its proof some years back, it applies for electronics. But I seem to have forgotten where I put it... =D So take it for what it is worth.) That is, it requires energy, same as one can show that semi-static memory requires energy to be erased as well.
 
  • #57
Here's something I do not understand...going back to the fundamentals of entropy. I have been reading Julian Barbour's obscenely fascinating book "The End of Time". He discussed Ludwig Boltzmann and his definition of entropy. Barbour states that Boltzmann defined entropy as the probability of a given thermal state. The greater the order within a system, the lower the probability of that state occurring, and therefore the lower that system's entropy. Barbour likened it to a rectangular formation of 100 deep holes onto which we drop 1000 marbles. The probability that all 1000 marbles will drop into one hole is very slim, so Barbour likens such a result to very low entropy.

Now perhaps Barbour used a very misleading metaphor, but my problem with his example is that any given result of this marble-dropping business will have the exact same probability. If this marble situation is truly translatable to the thermal state of a system, then it would seem to me that any given thermal state of some defined closed system would have the exact same probability as some other arbitrary thermal state of that same system.

My guess is - as I've said - that Barbour chose an unfortunate and misleading metaphor. Could someone please enlighten a mere un-physics-educated mortal such as I?
 
  • #58
David Carroll said:
my problem with his example is that any given result of this marble-dropping business will have the exact same probability

If you look only at each individual configuration, yes. But that's not what Barbour was suggesting. What he was suggesting was: compare the number of states where all 1000 marbles are in one hole (one state) with the number where, say, there are 500 marbles in each of two holes (many more than one state because of all the different ways you can distribute the individual marbles), all the way up to the number of states in which there are 10 marbles in each hole (vastly more states still, because of the vastly greater number of possible permutations).

Each of those groups of states corresponds to a single "macroscopic state" of the system--i.e., we lump together microstates according to some macroscopic property that they all share (such as the overall distribution of marbles in holes). The more microstates there are that all share a given macroscopic property, the higher the entropy of the system when it has that particular macroscopic property.
 
  • #59
David Carroll said:
Now perhaps Barbour used a very misleading metaphor, but my problem with his example is that any given result of this marble-dropping business will have the exact same probability. If this marble situation is truly translatable to the thermal state of a system, then it would seem to me that any given thermal state of some defined closed system would have the exact same probability as some other arbitrary thermal state of that same system.

That metaphor captures the essence of statistical mechanics and the thermodynamic principles that are derived from it.

You are right that every single configuration of the marbles (marble one in a particular one of the one hundred holes, marble two in another of the one hundred holes, and so forth) is equally likely. Thus, there is exactly one configuration in which all one thousand marbles end up in hole number one, and exactly one hundred configurations in which all the marbles end up in one hole. But what is the total number of configurations? Marble one can go into any of one hundred holes, and then marble two can go into any of one hundred holes, and ... The total number of configurations is ##(100)^{1000}##, a number that is so huge as to defy our imagination. So what are the odds of finding all the marbles in one hole? 100 chances in ##(100)^{1000}## is a chance as imagination-defyingly small as ##(100)^{1000}## is large... Every atom in the universe doing that experiment every nanosecond since the big bang... still a negligible chance of getting that outcome.

On the other hand, how many different ways of putting marble one in a particular one of the hundred holes, marble two in another, and so forth will end up with between eight and twelve marbles in each hole? Calculate it, and you'll find that there are enormously many ways of doing that... so many more than the single all-in-one-hole configuration that an outcome with the marbles spread out roughly evenly is overwhelmingly more likely than anything like finding them all in one hole.
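For anyone who wants to see the sizes involved, here is a minimal sketch of that counting (my own arithmetic, not part of the original post; it works in logarithms so the numbers stay manageable, and uses "exactly ten marbles in every hole" as a stand-in for a typical spread-out outcome):

[code]
import math

def log10_factorial(n: int) -> float:
    """log10(n!) via the log-gamma function, so we never build the huge integers."""
    return math.lgamma(n + 1) / math.log(10)

N_MARBLES, N_HOLES = 1000, 100

# All configurations of 1000 distinguishable marbles over 100 holes: 100**1000
log10_total = N_MARBLES * math.log10(N_HOLES)             # 2000.0

# "All marbles in a single hole": one configuration per hole, 100 holes
log10_all_in_one_hole = math.log10(N_HOLES)               # 2.0

# "Exactly 10 marbles in every hole": multinomial coefficient 1000! / (10!)**100
log10_ten_each = log10_factorial(N_MARBLES) - N_HOLES * log10_factorial(10)   # about 1911.6

print(log10_total, log10_all_in_one_hole, log10_ten_each)
print("log10 P(all in one hole) =", log10_all_in_one_hole - log10_total)      # -1998.0
[/code]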
 
  • #60
Okay. So we're basically making both the holes and the marbles anonymous (i.e. interchangeable, lacking particular identity)?
 
  • #61
David Carroll said:
Okay. So we're basically making both the holes and the marbles anonymous (i.e. interchangeable, lacking particular identity)?

My description is based on the assumption that the marbles are distinguishable, so that marble one in hole one and marble two in hole two (and the other 998 in the same places) is a different configuration than marble two in hole one and marble one in hole two (and the other 998 in the same places). If you drop this assumption, then the statistics become interestingly different. The problem changes from "How many different ways are there to arrange 1000 different marbles in 100 holes?" to "Suppose I toss 1000 identical marbles at random onto a surface with 100 holes. How likely is it that they'll all end up in the same hole?". The answer is still unimaginably small, but it's an interestingly different unimaginably small number.
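For comparison, here is a minimal sketch of the second count (again my own arithmetic, not part of the original post; it uses the "stars and bars" formula, in which each occupancy pattern of identical marbles is counted exactly once):

[code]
import math

N_MARBLES, N_HOLES = 1000, 100

# Distinct occupancy patterns of 1000 identical marbles in 100 holes:
# C(N_MARBLES + N_HOLES - 1, N_HOLES - 1)  ("stars and bars")
patterns = math.comb(N_MARBLES + N_HOLES - 1, N_HOLES - 1)
print("log10(# occupancy patterns) =", round(math.log10(patterns)))   # about 143

# Exactly 100 of these patterns have all the marbles in one hole, so if every
# occupancy pattern were equally likely (Bose-Einstein-style counting), then
# P(all in one hole) would be about 100 / 10**143 -- still unimaginably small,
# but a very different number from the 100 / 100**1000 of the distinguishable case.
print(N_HOLES / patterns)
[/code]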
 
  • #62
Okay. I see now. I was imagining that Barbour was suggesting that second quoted question, when he was really suggesting the first. Thanks.
 
  • #63
David Carroll said:
So we're basically making both the holes and the marbles anonymous (i.e. interchangeable, lacking particular identity)?

From the standpoint of defining macroscopic states, yes. It's worth noting, though, that the details of the statistics involved actually do depend on whether or not the "marbles" and "holes" are distinguishable or not at a microscopic level. Distinguishable particles give Boltzmann statistics, which is what is standardly assumed classically. Quantum mechanically, particles of the same type are considered indistinguishable, so you get either Bose-Einstein or Fermi-Dirac statistics, depending on whether the particles have integer or half-integer spin. (In quantum field theories, there are even more kinds of statistics possible.)
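A tiny worked example of how the three counting rules differ (my own toy numbers, not from the thread: two particles and three single-particle states):

[code]
from itertools import product, combinations_with_replacement, combinations

n_states, n_particles = 3, 2

# Boltzmann (distinguishable particles): ordered assignments of particles to states
boltzmann = len(list(product(range(n_states), repeat=n_particles)))                     # 3**2 = 9

# Bose-Einstein (indistinguishable, any occupancy): multisets of states
bose_einstein = len(list(combinations_with_replacement(range(n_states), n_particles)))  # 6

# Fermi-Dirac (indistinguishable, at most one particle per state): subsets of states
fermi_dirac = len(list(combinations(range(n_states), n_particles)))                     # 3

print(boltzmann, bose_einstein, fermi_dirac)   # 9 6 3
[/code]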
 
  • #64
PeterDonis said:
so you get either Bose-Einstein or Fermi-Dirac statistics, depending on whether the particles have integer or half-integer spin. (In quantum field theories, there are even more kinds of statistics possible.)

Respectively?

So if we have a closed system in which one atom of each of the first 105 elements (each with integer spin) is bouncing around off the other atoms, does any arbitrary thermal state of this closed system have lower entropy than that of some other closed system in which 105 atoms, all of which are, say, hydrogen, are bouncing around...according to Bose-Einstein statistics?
 
  • #65
David Carroll said:
Respectively?

Yes.

David Carroll said:
if we have a closed system in which one atom of each of the first 105 elements (each with integer spin) is bouncing around off the other atoms, does any arbitrary thermal state of this closed system have lower entropy than that of some other closed system in which 105 atoms, all of which are, say, hydrogen, are bouncing around...according to Bose-Einstein statistics?

I'm not sure how you're imagining these two scenarios. If the 105 atoms are all of different elements, they're distinguishable, so you would use Boltzmann statistics. If they're all hydrogen atoms, they're not, so you would use Bose-Einstein statistics. This would result in a different count of microstates for the two cases, yes. Is that what you mean?

(I believe the count of microstates would be lower for the Bose-Einstein case, i.e., the 105 hydrogen atoms. But I haven't done the calculation to confirm that.)
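A quick sanity check of that guess (my own sketch, not from the thread; it compares only the two counting rules for a few arbitrary numbers M of available single-particle states, and ignores all of the actual atomic physics):

[code]
import math

n_atoms = 105
for M in (2, 10, 1000):          # M = number of available single-particle states (arbitrary)
    distinguishable = M ** n_atoms                            # Boltzmann counting
    indistinguishable = math.comb(M + n_atoms - 1, n_atoms)   # Bose-Einstein counting
    print(M, distinguishable > indistinguishable)             # True for every M here
[/code]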
 
  • #66
Yeah. That's what I meant. Thanks.
 
