# Some confusion re entropy

In summary, according to statistical mechanics, a system can have a finite but indefinite number of microstates, each with a specific energy. The number of microstates grows with the number of gas particles in the system and with the number of positions that the gas particles can occupy.

#### Buzz Bloom

I find myself quite confused about some aspects of the concept of entropy. I will try to explain my confusion using a sequence of examples.

All of the references I cite are from Wikipedia.

Ex 1. Does the following isolated system have a calculable value for its entropy?

Assume a spherical volume of radius R1 containing N1 moles of hydrogen H2 molecules at a temperature T1 (Kelvin), at which the hydrogen has a gaseous state (except perhaps for an extremely small fraction of molecules). Also assume that the boundary of the sphere consists of a perfect insulator which maintains the temperature at T1, and that sufficient time has elapsed for the system to be in a state of equilibrium. To the extent that H2 is close to being an ideal gas, the temperature, volume, and pressure are related by
https://en.wikipedia.org/wiki/Ideal_gas_law
PV=nRT
where P, V and T are the pressure, volume and absolute temperature; n is the number of moles of gas; and R is the ideal gas constant.
The volume is
V1 = (4/3) π R1^3.
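As a concreteness check, the setup above can be evaluated numerically; the values chosen below for n, T1, and R1 are illustrative assumptions, not part of the example.

```python
import math

R_GAS = 8.314      # ideal gas constant, J/(mol K)
n1 = 1.0           # assumed moles of H2
T1 = 300.0         # assumed temperature, K
R1 = 0.5           # assumed sphere radius, m

V1 = (4.0 / 3.0) * math.pi * R1**3   # volume of the sphere, m^3
P1 = n1 * R_GAS * T1 / V1            # pressure from PV = nRT, Pa

print(f"V1 = {V1:.4f} m^3, P1 = {P1:.1f} Pa")
```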
https://en.wikipedia.org/wiki/Entropy
In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is closely related to the number Ω of microscopic configurations (known as microstates) that are consistent with the macroscopic quantities that characterize the system (such as its volume, pressure and temperature). Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant kB.
My understanding is that the answer to (1) is YES. My confusion at this point is that the concept of "microstate" is unclear. Here is a definition.
https://en.wikipedia.org/wiki/Microstate_(statistical_mechanics)
Treatments of statistical mechanics define a macrostate as follows: a particular set of values of energy, the number of particles, and the volume of an isolated thermodynamic system is said to specify a particular macrostate of it. In this description, microstates appear as different possible ways the system can achieve a particular macrostate.​

This definition does not help me understand how many microstates exist in the (1) system. One interpretation might be that because the (1) system is assumed to be in equilibrium, there is only one microstate, the equilibrium state, and therefore its entropy is zero. This implies that for all non-equilibrium macrostates the entropy is always negative, since as the system moves closer to equilibrium its entropy will increase towards zero. (Deeper considerations that I plan to discuss later have convinced me that this interpretation makes plausible sense. Although I find this interpretation plausible, I have no confidence that it is correct.)

If my YES answer above is incorrect and/or my interpretation that the entropy of an equilibrium system is zero is wrong, I would much appreciate someone helping me understand the correct answers, especially regarding microstates.

This definition does not help me understand how many microstates exist in the (1) system.

@PeterDonis referred me to some statistical mechanics reading a while back, and based on what I have studied so far and recollect -

The entropy is a measure of the count of microstates that would result in a given macrostate. Any real system is in only one such microstate at any given instant, but that does not mean it has an entropy of zero.

Doesn't the coin example from the same Wikipedia article that you linked help?

https://en.wikipedia.org/wiki/Microstate_(statistical_mechanics) With 32 coins, you have ##2^{32}## microstates, but still only one macrostate with all H and one with all T.

With gas in a room, the number of microstates grows enormously with the number of gas particles. Now consider the thermal equilibrium macrostate and how many different ways there are to arrange those particles that satisfy that state.
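The coin analogy can be made quantitative: count how many of the ##2^{32}## microstates fall into each head-count macrostate. A short sketch:

```python
from math import comb

N = 32
total_microstates = 2 ** N           # every coin independently H or T

# The microstate counts for all head-count macrostates sum to the total:
assert sum(comb(N, k) for k in range(N + 1)) == total_microstates

print("all-heads macrostate:", comb(N, 0), "microstate")
print("16-heads macrostate: ", comb(N, 16), "microstates")
```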

Now consider the thermal equilibrium macrostate and how many different ways there are to arrange those particles that satisfy that state.
Hi anorlunda:

Thank you for your response. I may be dense, but I am unable to make a reasonable guess about how to count the microstates for the Example 1 system. I can count the number of H2 molecules, but all I know now is that each definitely does not have the two microstates of heads and tails. How many energy levels are relevant? There are an infinite number of possibilities, but that can be made finite by arbitrarily setting a finite range for each defined energy "microstate". Is that the way the counting of energy microstates should be done? If so, what criteria should be used to set these ranges? Also, how about a molecule's position? It seems to be a similar problem as with energy. If a larger volume is assumed, does that mean that there are more position states? (BTW, the examples I expect to introduce in a few days will suggest that this is not so about volume, but I have little confidence this reasoning is correct.)

Regards,
Buzz

Let me recommend an excellent way to learn. It is a course on Statistical Mechanics taught by Leonard Susskind.
He is a good teacher, the math is not very difficult, and the videos are available free on Youtube. In the first 45 minutes of lecture 1, he explains entropy in principle. In later lectures he does it for a thermodynamic gas.

That is a much better way to learn than simple questions and answers consisting of a few sentences on an Internet forum.

Any real system is only in one such microstate at any given instant, but that does not mean it has an entropy of zero.
Hi Grinkle:

Thank you for your response. I get the concept of the above quote. My "guess" that a system which is in a state of equilibrium has only one microstate is intended to mean that this single microstate is its state at all times, and it does not change with time. This means there is only one microstate range for energy to exist in (the entire range from zero to infinity), and also only one microstate range for where a molecule is (the entire volume). I hope to present an explanation for this odd interpretation when I prepare my next examples.

Regards,
Buzz

Let me recommend an excellent way to learn. It is a course on Statistical Mechanics taught by Leonard Susskind.
Hi anorlunda:

Thank you for your suggestion. I will take some time to look at this lecture. However, I do not expect it to help me understand the confusion I now have which leads me to the odd interpretation I presented (one microstate, with some minimum elaboration in my post #6). My deep confusion comes from considering a more complex example involving a hypothetical GR-based finite spatially curved universe in thermodynamic equilibrium with only protons and electrons as matter particles (no neutrons). More details to come later.

Regards,
Buzz

is intended to mean that this single microstate is its state at all times, and it does not change with time.

No, it means that the thermodynamic state of the system is not changing with time. But since there are many microstates that are all equivalent with respect to the thermodynamic state, the system can and does change from one microstate to another. For instance, a gas that is at equilibrium still has its molecules moving around. This movement does not change any of the thermodynamic state variables of the overall system if it is at thermodynamic equilibrium with its environment.

No, it means that the thermodynamic state of the system is not changing with time.
Hi Grinkle:

If I abandon my interpretation and accept (for the purpose of this discussion) that the microstate does change with time, can you help me qualitatively understand the consequences of a simple change to Example 1?

Ex 2. The system for this example is identical to system (1) with the exception of the following changes:
R2 = 2 R1
T2 = (1/2) T1
V2 = 8 V1
Using the Ideal Gas Law to calculate approximate values for pressure P:
P2 ~= (1/16) P1
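The scaling above can be verified directly from PV = nRT; the reference values below are arbitrary.

```python
import math

# Reference (system 1) values -- arbitrary for the check.
R1, T1 = 1.0, 300.0
V1 = (4.0 / 3.0) * math.pi * R1**3
P1 = 1.0 * 8.314 * T1 / V1           # n = 1 mol

# System 2: double the radius, halve the temperature.
R2, T2 = 2 * R1, T1 / 2
V2 = (4.0 / 3.0) * math.pi * R2**3   # = 8 V1
P2 = 1.0 * 8.314 * T2 / V2

print(P2 / P1)   # close to 1/16 = 0.0625
```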
Does system (2) have more, or less, or the same entropy as compared with system (1)? I understand that this question may not have an answer based on the laws of thermodynamics, but if that is the case, then I would like to understand the reason why it does not.

When I consider the system I will describe at a later time, involving a hypothetical universe (briefly introduced in my post #7) a violation of the second law occurs unless the above answer is "the same".

The Wikipedia presentation of the 2nd law omits any statement regarding reversible processes:
https://en.wikipedia.org/wiki/Laws_of_thermodynamics
Second law of thermodynamics: In a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases. Equivalently, perpetual motion machines of the second kind (machines that spontaneously convert thermal energy into mechanical work) are impossible.​
The universe example I am thinking about involves a single dynamic system undergoing a reversible equilibrium process, so I found another reference.
The second law of thermodynamics states that the total entropy of an isolated system can never decrease over time. The total entropy can remain constant in ideal cases where the system is in a steady state (equilibrium), or is undergoing a reversible process.​

Perhaps this is a sufficient clue about where I am headed with the universe example.

Regards,
Buzz

Assume a spherical volume of radius R1 containing N1 moles of hydrogen H2 molecules at a temperature T1 (Kelvin) ... My confusion at this point is that the concept of "microstate" is unclear.

One of the microstates of your system is: all molecules packed at the middle.

That microstate looks like a non-equilibrium state, and it looks like a low entropy state too. But the odd looking microstate has the same energy, number of particles and volume as the more normal looking microstates. So the odd looking state is a microstate of the same macrostate as the normal looking microstates.

Does the system have low entropy when all molecules are packed on one point? No. A large number of microstates is available to the system, so the entropy is large.

Is the system in equilibrium when all molecules are packed on one point? No.

So don't say your system is in equilibrium, if you are planning to calculate entropy by counting all available microstates of your system. Or don't assume the system stays in equilibrium.

Does system (2) have more, or less, or the same entropy as compared with system (1)?

I don't think so. Any two state variables will fix entropy (and all other thermodynamic state variables) for an ideal gas. You definitely changed volume, and you did something I don't understand the implications of by changing temperature and the ideal gas constant inversely at the same time. I can't say for sure that you changed the thermodynamic state, because it's been too long since my undergrad thermo class, but I suspect you did.

Edit:

Also, I should note that your changes are macrostate changes, not microstate changes.

Perhaps this is a sufficient clue about where I am headed with the universe example.

I don't see where you are headed.

I may be dense, but I am unable to make a reasonable guess about how to count the microstates for the Example 1 system.

The only way you can have a finite number of microstates is to divide position, velocity, and energy into a finite number of "compartments", each of finite size. So if you are considering position, velocity, and energy to be continuous quantities, you don't have a finite number of microstates. The general pattern of thermodynamic arguments based on microstates is that certain formulae are true when we use a finite number of microstates, and we take the limit of these formulae as the finite size of the compartments approaches zero, to get the thermodynamic laws.
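One way to see why the arbitrary compartment size does little harm: for positions alone, Ω ≈ (V/ΔV)^N for N distinguishable particles, so the cell volume ΔV enters S = k ln Ω only as an additive constant, which cancels in entropy differences. A sketch under that simplification (velocities and indistinguishability ignored):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def position_entropy(V, N, cell_volume):
    """S = k ln(Omega) with Omega = (V / cell_volume)**N  (positions only)."""
    return k_B * N * math.log(V / cell_volume)

N = 1000
diffs = []
for cell in (1e-30, 1e-33, 1e-36):               # ever-smaller compartments
    dS = position_entropy(8.0, N, cell) - position_entropy(1.0, N, cell)
    diffs.append(dS)
    print(cell, dS)                               # difference is cell-independent
```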

One interpretation might be that because the (1) system is assumed to be in equilibrium, there is only one microstate, the equilibrium state, and therefore its entropy is zero.
That's not the right interpretation. Equilibrium is one macrostate: we have a static cloud of gas with ##PV=nRT##.

A microstate is what you have when you specify the position and velocity of each and every molecule in the gas - and note @Stephen Tashi's post above about discretizing these continuous variables and then taking the limit as the compartment size goes to zero. Each molecule has six degrees of freedom (position and velocity along three axes), and if any of these change (for example, two molecules collide changing their direction of motion) the system moves from one microstate to another. Clearly a huge number of microstates are consistent with any macrostate; and the system will continuously evolve from one microstate to another as the molecules bounce around at random. The number of microstates consistent with the equilibrium macrostate is so enormous that statistically the system will always be in one of these microstates, hence will stay in equilibrium.

Don't write off the coin example mentioned by @anorlunda too quickly. The coins have one degree of freedom with only two possible values (heads or tails) instead of six degrees of freedom with many possible values, but the logic is the same. A macrostate is something like "There are fifteen heads and seventeen tails" and a microstate is something like "coin one is heads, coin two is tails, coin three is...". Random shaking means we'll get a random microstate, but the odds heavily favor always being in a macrostate with roughly equal numbers of heads and tails.
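The "odds heavily favor roughly equal numbers" claim is easy to check for the 32-coin case:

```python
from math import comb

N = 32
total = 2 ** N

p_all_heads = comb(N, 0) / total
p_near_half = sum(comb(N, k) for k in range(12, 21)) / total  # 12..20 heads

print(f"P(all heads)    = {p_all_heads:.2e}")
print(f"P(12..20 heads) = {p_near_half:.3f}")
```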

I don't think so.
Hi Grinkle:

I do not understand your statement which I quoted above. You seem to be saying:
(a) you do not think that system (2) has greater entropy than system (1),
AND
(b) you do not think that system (2) has less entropy than system (1),
AND
(c) you do not think that system (2) has the same entropy as system (1).

You definitely changed volume, and you did something I don't understand the implications of by changing temperature and the ideal gas constant inversely at the same time. I can't say for sure that you changed the thermodynamic state, because it's been too long since my undergrad thermo class, but I suspect you did.

My intention is that system (2) does not have the same macrostate as system (1). The Boltzmann constant kB is not changed. I used the equation
PV=nkBT.​
if we put this into the form
P=nkBT/V,​
then
P2/P1 = (T2/T1) / (V2/V1).​
The assumptions I specified regarding the system (2) R and T variables in terms of the system (1) R and T variables lead to the given V ratio, and then the ideal gas law leads to the ratio for the P variable.

Systems (1) and (2) are both in equilibrium states, but these are not the same macrostate. I have in the back of my mind that it should be possible to define a reversible process by which system (1) can be transformed into system (2), and back again. If this is so, then this would imply (by the 2nd law statement as I quoted it from google.com) that the entropy of system (1) and the entropy of system (2) are the same.

I hope this clarifies what was unclear to you.

I don't see where you are headed.
I apologize for suggesting that the "hint" I indicated might be sufficient to suggest the role of the universe examples I will describe in a few days. What I bolded above is my reason for intending to post examples about a hypothetical universe. The examples will be analogues of system (1) and (2) involving the hypothetical universe. The analogy will attempt to provide a reason to believe that there is a theoretical possibility that there is a reversible process between (1) and (2).

Regards,
Buzz

The only way you can have a finite number of microstates is to divide position, velocity, and energy into a finite number of "compartments", each of finite size.
A microstate is what you have when you specify the position and velocity of each and every molecule in the gas
Hi Stephen and Nugatory:

Thank you for your posts. I do understand the general process of establishing a series of finite ranges for the position and velocity of each molecule (see my post #4), and also that the prediction method of Statistical Mechanics takes the limit of this series (as the ranges converge to zero) when calculating predictions.

The problem that arises for me for systems (1) and (2) is the question of comparing their respective entropy values. (See my post #9.) If the values are the same (which I tend to believe, and will explain why in later posts), then how is it possible for the method of using finite ranges for position and velocity to result in equal numbers of microstates for the two systems, which have different values for temperature and volume?

Regards,
Buzz

One of the microstates of your system is: all molecules packed at the middle.
Hi jartsa:

Thank you for your post. I do not mean to nit pick, but this example of a microstate introduces some factors that I think need to be taken into account. As I understand the Statistical Mechanics methods, each microstate has a probability of occurrence associated with it, and if the probability of a state is extremely small compared with others, then for the purposes of making predictions, this microstate can be ignored as insignificant. If my understanding is correct, then your example seems to be of this type.

Regards,
Buzz


Well, every microstate has the same probability, for the same simple reason that every lottery result has the same probability.

I claimed that all molecules packed together is a state with large entropy (when there is a lot of available volume for the molecules). Well, this fluctuation theorem seems to take the view that this is a situation where the entropy has fluctuated to a low value: https://en.wikipedia.org/wiki/Fluctuation_theorem.

I have in the back of my mind that it should be possible to define a reversible process by which system (1) can be transformed into system (2), and back again. If this is so, then this would imply (by the 2nd law statement as I quoted it from google.com) that the entropy of system (1) and the entropy of system (2) are the same.

I see. A process that does not change the entropy of the system is called "isentropic". Reversibility is one requirement and no heat transfer across the system boundary is another requirement. The no-heat-transfer part is "adiabatic". You sent me back to my old thermo textbook. If you Google isentropic process, you will get the background you are looking for I think.

Edit: And I was meaning to say in my previous post that I don't think the two entropies are the same, sorry for the garbled answer. If you are still interested in that example after looking at isentropic on the internet, I will give you a more thought out response.

Well, every microstate has the same probability, for the same simple reason that every lottery result has the same probability.
Hi jartsa:

I should have thought of that concept, but it skipped my mind. However, a conceptual difficulty still remains. Are there restrictions on the way microstates are counted for a particular choice of variable ranges? For example:
https://en.wikipedia.org/wiki/Molecule#Molecular_size
The smallest molecule is the diatomic hydrogen (H2), with a bond length of 0.74 Å.​
https://en.wikipedia.org/wiki/Ångström
[An] angstrom [Å] is a unit of length equal to 10−10 m.​
Consider a cube with 1 meter edges. In angstrom units, the volume is 10^30 Å³. Suppose we choose a position microstate for a particular H2 molecule as being a specific one of the 10^30 small cubes, each with a 1 Å edge. For the sake of simplification, let us also assume that each small cube can hold in its interior exactly one H2 molecule.

The Avogadro constant ... has the value 6.022140857(74)×10^23 mol⁻¹
Therefore one mole of H2 consists of approximately 6×10^23 molecules.

https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)
The entropy S is defined as
S = kB ln Ω
where kB is Boltzmann's constant and Ω is the number of microstates consistent with the given macrostate.
This implies that the number of positional microstates would be (approximately)
Ω = C(10^30, 6×10^23)
= 10^30! / ( (6×10^23)! × (10^30 − 6×10^23)! )

Unfortunately I do not have a tool to conveniently calculate this hypothetical approximate value of Ω. However, I hope you can tell me if the approach I have presented seems in principle to be correct.
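For what it's worth, the log-gamma function evaluates the logarithm of such huge factorials without overflow, and for N ≪ M the Stirling form ln C(M, N) ≈ N(ln(M/N) + 1) gives a quick cross-check. A sketch, taking M = 10^30 cells and N ≈ 6×10^23 molecules (one mole):

```python
from math import lgamma, log

k_B = 1.380649e-23   # Boltzmann constant, J/K
M = 1e30             # number of 1-angstrom^3 cells in 1 m^3
N = 6.022e23         # molecules in one mole

# ln C(M, N) = ln M! - ln N! - ln (M - N)!  via the log-gamma function
ln_omega = lgamma(M + 1) - lgamma(N + 1) - lgamma(M - N + 1)

# Stirling shortcut, valid when N << M
ln_omega_approx = N * (log(M / N) + 1)

S = k_B * ln_omega
print(f"ln Omega ~ {ln_omega:.3e},  S ~ {S:.1f} J/K")
```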

If this is OK, then what happens when we take a step in the process of making the granularity of the position microstates smaller? If we made the small cubes have a side of 1/10 Å, the number of possible positions would increase by a factor of 1000, BUT now each possible location is too small to hold any H2 molecule. So how does the concept of calculating the limit actually work as the size of the positional microstates converges to zero?

Regards,
Buzz

Well, every microstate has the same probability, for the same simple reason that every lottery result has the same probability.

Given a system of gas particles in thermal equilibrium with the surroundings, is there an equal probability of -

1. Spontaneously transitioning to a specifically identified state where all molecules are compressed to a volume 1/10 the equilibrium volume

or

2. Spontaneously transitioning to a specifically identified state that is thermodynamically indistinguishable from the prior state

and the only reason equilibrium is sustained at a macro level is because the population of possible states contains so many more (2) states than (1) states?

If so, that is not intuitive to me - I want to think that there is a greater probabilistic barrier to transitioning to a far-from-equilibrium state than only counting the number of available states. Is my intuition just wrong?

Edit:

I think I see the answer. A transition to (1) in my example is not allowed in classical systems. An allowed transition that is away from equilibrium is only a little distance away, and the odds of taking successive little steps away get very long very quickly.
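That "successive little steps" picture can be simulated with a toy model (the Ehrenfest-urn style sketch below is an illustration, not a physical calculation): start 32 coins at 16 heads, flip one randomly chosen coin per step, and track how often the head count stays near the middle.

```python
import random

random.seed(42)                       # deterministic run
N, steps = 32, 100_000
heads = 16                            # start at the equilibrium macrostate
near_equilibrium = 0

for _ in range(steps):
    # Flip one randomly chosen coin: the head count moves by +-1.
    if random.randrange(N) < heads:
        heads -= 1                    # flipped a head to a tail
    else:
        heads += 1                    # flipped a tail to a head
    if 12 <= heads <= 20:
        near_equilibrium += 1

print(near_equilibrium / steps)       # stays near 16 heads almost always
```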

If you are still interested in that example after looking at isentropic on the internet, I will give you a more thought out response.
Hi Grinkle:

Thank you very much for your offer. I have started to read the Wikipedia article on isentropic process, and I expect to finish it in a day or so. After that I expect to be prepared for a further discussion with you about my examples.

Regards,
Buzz

BUT now each possible location is too small to hold any H2 molecule
The position of the molecule is determined by the position of its center of mass, and that is a point, so it will always fit in a cell, no matter how small. The spatial extent of the molecule is irrelevant until it starts to affect the probability of collisions with other molecules.

Given a system of gas particles in thermal equilibrium with the surroundings, is there an equal probability of -

1. Spontaneously transitioning to a specifically identified state where all molecules compressed to a volume 1/10 the equilibrium volume

or

2. Spontaneously transitioning to a specifically identified state that is thermodynamically indistinguishable from the prior state

and the only reason equilibrium is sustained at a macro level is because the population of possible states contains so many more (2) states than (1) states?

If so, that is not intuitive to me - I want to think that there is a greater probabilistic barrier to transitioning to a far-from-equilibrium state than only counting the number of available states. Is my intuition just wrong?

(2) is correct.

Transition from a specified high entropy microstate to a specified low entropy microstate is exactly as likely as a transition from a specified high entropy microstate to another specified high entropy microstate. The transitions are all very unlikely, because we specified the microstates.

Hmm, I should say that we are talking about transitions that take a second or so; only then is the transition a random jump from one microstate to another. Short times mean short jumps.

I'm assuming that every microstate has an associated entropy, because everyone else seems to assume that. So, is there such a thing as entropy of a microstate?

is there such a thing as entropy of a microstate?

I wouldn't say so, myself. Systems have entropy, and a system is in a specific microstate at any snapshot in time, and the entropy of the system comes from counting the number of other microstates the system might be in that would be thermodynamically indistinguishable from the current microstate.

I think one needs to define the system and the microstates, and from there entropy is a quantity that applies to the system.

I'm assuming that every microstate has an associated entropy, because everyone else seems to assume that. So, is there such a thing as entropy of a microstate?
There is not, and I'm not seeing anyone assuming that there is. When you hear someone speaking of a "low-entropy microstate" or "high-entropy microstate", that's just a convenient natural-language shortcut for the more precise "a possible microstate of a high(low) entropy macrostate".

The spatial extent of the molecule is irrelevant until it starts to affect the probability of collisions with other molecules.
Hi Nugatory:

Thank you very much for your post. The concept you described is quite useful, and definitely helps me better understand the process of using a series of microstates to calculate entropy. The following is what I now understand that I did not understand before.

When the size of the volume units for a microstate is chosen to be too small for a molecule to fit into these small units, then there are constraints: the immediately neighboring spatial units will not be able to hold any molecules, and this constraint might also apply to some neighboring small units which are not immediate neighbors. Is this correct? If so, then this also implies that a larger system volume has more position microstates than a smaller system volume. Right?

Regards,
Buzz

I'm afraid I don't have a direct answer to your query. However, the topic has the flavor of the Maxwell-Boltzmann kinetic theory of (ideal) gases. What does a good reference have to say about the entropy of a M-B ideal gas?

All your thermodynamic variables - the observable macrostate properties - are constant. No work is being done by the system because of fixed V. All energy is the kinetic energy of the molecules. Only the speeds of the molecules can vary. Therefore, the way I see it, the most important result of the theory was the derivation of the distribution function of molecular speeds, which can be determined from molecular weights and temperature.

Knowledge of the distribution gives you the distribution of momenta (m ) and kinetic energies (1/2 m v^2). The sum of kinetic energies (i.e. the integral of the kinetic energy distribution) equals the total energy of the system, which is conserved in an ideal gas. That's because the internal energy is a function only of T and S in a system of constant volume (no work).

Just a guess, but can the entropy be related to the variance of the speed distribution? That may be true because the 'spread' in the distribution is a measure of how many distinguishable states (speeds) the gas has. When the gas is heated, as I recall, the variance of the speed distribution increases. That's reassuring because you'd expect the entropy to increase along with the increased kinetic energy (a result of increased T (since T>0)). BTW, I'm assuming that you're thinking in terms of classical, not quantum, ideal gases.
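The connection between the speed distribution and the energy can be checked numerically: integrating the Maxwell-Boltzmann speed distribution should recover ⟨(1/2)mv²⟩ = (3/2)kT. The mass and temperature below are illustrative.

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
m = 3.35e-27          # mass of an H2 molecule, kg (approx.)
T = 300.0             # K

def mb_density(v):
    """Maxwell-Boltzmann speed distribution f(v)."""
    a = m / (2 * math.pi * k * T)
    return 4 * math.pi * a**1.5 * v**2 * math.exp(-m * v**2 / (2 * k * T))

# Simple Riemann sum over 0..6*v_thermal (covers essentially all the mass).
v_max = 6 * math.sqrt(2 * k * T / m)
n_steps = 200_000
dv = v_max / n_steps
norm = sum(mb_density(i * dv) * dv for i in range(1, n_steps))
mean_ke = sum(0.5 * m * (i * dv)**2 * mb_density(i * dv) * dv
              for i in range(1, n_steps))

print(norm)                      # should be ~ 1.0
print(mean_ke / (1.5 * k * T))   # should be ~ 1.0
```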

What does a good reference have to say about the entropy of a M-B ideal gas?
Hi Mark:

That is a good question which I am unable to answer since I have no convenient access to any such reference except Wikipedia.
Neither of these gives any detailed explanation of the method of specifying finite ranges for microstates and the process of convergence as the ranges approach zero (discussed in other posts of this thread).
I still have not even finished watching the almost two hour lecture included in post #5.

Knowledge of the distribution gives you the distribution of momenta (m )
I noticed a typo in this quote. I believe you intended "momenta (mV)".

Regards,
Buzz

What does a good reference have to say about the entropy of a M-B ideal gas?

If I understand the question correctly, one should search for the "Sackur-Tetrode formula" or "Sackur-Tetrode equation".

Yes - Look at the Wikipedia article "Sackur-Tetrode equation" (https://en.wikipedia.org/wiki/Sackur–Tetrode_equation).

You can get into trouble defining cells which are of the order of the size of a gas molecule, etc. because you are entering the quantum realm, and your conclusions can be wrong or nonsensical. The Sackur-Tetrode equation approaches things from a somewhat simplistic quantum viewpoint and gives a very good estimate of the entropy of an ideal monatomic gas, even down to close to absolute zero. Another good article is the thermal wavelength (https://en.wikipedia.org/wiki/Thermal_de_Broglie_wavelength) which tells you when you are entering the quantum realm.

If I understand the question correctly, one should search for the "Sackur-Tetrode formula" or "Sackur-Tetrode equation".
The Sackur-Tetrode equation approaches things from a somewhat simplistic quantum viewpoint and gives a very good estimate of the entropy of an ideal monatomic gas, even down to close to absolute zero
Hi @Lord Jestocost and @Rap:

Thank you both for your posts. Wikipedia gives the Sackur-Tetrode equation in the following form:
S/kN = ln V + (3/2) ln(2πemkT) - 3 ln h - (1/N) ln N!
≈ (5/2) + ln( (V/N) × (2πmkT/h^2)^(3/2) )​
where V is the volume of the gas, N is the number of particles in the gas, ..., k is Boltzmann's constant, m is the mass of a gas particle, h is Planck's constant, and ln is the natural logarithm [and T is the temperature in kelvin].​

The approximation is based on Stirling's approximation:
ln N! ≈ N × (-1 + ln N) = N ln N - N.​
(One detail I don't get with confidence is whether "e" is intended to be the base of the natural logarithms. )
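As a quick sanity check on the Stirling approximation, one can compare it against an exact ln N! (via `math.lgamma`, since lgamma(N+1) = ln N!):

```python
import math

def stirling_error(N):
    """Relative error of the approximation ln N! ≈ N ln N - N."""
    exact = math.lgamma(N + 1)            # ln(N!)
    approx = N * (math.log(N) - 1.0)      # Stirling's approximation
    return abs(exact - approx) / exact

# The relative error shrinks as N grows, which is why the
# approximation is safe for thermodynamic particle numbers:
err_small = stirling_error(100)
err_large = stirling_error(10**6)
```

For N of order 10^23 the neglected 0.5*ln(2πN) term is utterly negligible compared to N ln N.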

If we take into account that h, m, and k are natural constants, and N is an assumed constant for both systems (1) and (2), the approximation for S can be written in the following form:
S = C1 + C2 × ln( C3 × V × T^(3/2) ).​
where C1, C2, and C3 are constants.
Since V = const × R^3, if the entropies S1 and S2 (corresponding to systems (1) and (2) respectively) are equal, then
R1 × T1^(1/2) = R2 × T2^(1/2).​
This was not what I was expecting. I was expecting:
R1 × T1 = R2 × T2
because the corresponding universe models require this in order to avoid a conflict with the second law of thermodynamics. I will explain this further in a later post about the universe models.
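The equal-entropy condition R1 × T1^(1/2) = R2 × T2^(1/2) can be verified directly against the approximate Sackur-Tetrode form quoted above. A minimal sketch (the radii, temperatures, particle number, and H2 mass are illustrative values; radii in meters, temperatures in kelvin):

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
m = 3.347e-27       # mass of an H2 molecule, kg (illustrative value)
N = 6.022e23        # one mole of particles (illustrative value)

def sackur_tetrode(R, T):
    """Entropy of N ideal-gas particles in a sphere of radius R at
    temperature T, via S/kN ≈ 5/2 + ln((V/N)*(2*pi*m*k*T/h^2)^(3/2))."""
    V = (4.0 / 3.0) * math.pi * R**3
    return k * N * (2.5 + math.log((V / N) * (2.0 * math.pi * m * k * T / h**2)**1.5))

# If R is doubled and T is quartered, R*sqrt(T) is unchanged,
# so the entropies should agree:
S1 = sackur_tetrode(1.0, 400.0)
S2 = sackur_tetrode(2.0, 100.0)
```

The check works because S depends on R and T only through V × T^(3/2) = const × R^3 × T^(3/2) = const × (R × T^(1/2))^3... divided out per particle, any pair (R, T) with the same R × T^(1/2) gives the same entropy.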

Regards,
Buzz

Hi @Grinkle, @anorlunda, @jartsa, @Stephen Tashi, @Nugatory, @Mark Harder, @Lord Jestocost, and @Rap:

Thank you all for participating in this thread. I was able to learn from all of you enough so that I could properly complete this post.

Below is my description of a hypothetical GR based universe as a thermodynamic system. I am a bit shaky regarding technical nomenclature, so I will try to be extra careful in explaining the properties of this universe.
1. The shape of the hypothetical universe system is nearly exactly a finite 3D hypersphere. That is, the ratio of the density of stuff (matter and energy, including dark stuff) to the critical density satisfies ΩS > 1.
I use the subscript "S" (for stuff) here to distinguish this ratio from the sum of the four Ω ratios in the Friedmann equation.
2. The Friedmann equation, in density-parameter form, is H^2/H0^2 = ΩR a^(-4) + ΩM a^(-3) + Ωk a^(-2) + ΩΛ. For a scale factor of a = 1, H = H0, so
Ω = ΩR + ΩM + Ωk + ΩΛ = 1.​
Now, Ωk is not related to stuff. It is related to the curvature of the universe. So,
ΩS = ΩR + ΩM + ΩΛ
and
Ω = ΩS + Ωk = 1​
Since for the closed universe,
ΩS > 1,​
this means that
Ωk < 0.​
3. Calculating the radius of curvature
The radius of curvature for the universe corresponding to a = 1 is
R_{a=1} = c / ( H0 × sqrt(-Ωk) )​
4. For the purpose of this hypothetical universe, I am interested in the calculation of total entropy in the universe for the values a = 1 and a = 2. I assume the following initial conditions:
ΩM << 1
ΩR ≈ 1 + |Ωk|
Ωk = - c^2 / (R_{a=1}^2 × H0^2)
ΩΛ << 1​
Thus the hypothetical universe's expansion and contraction are assumed to depend almost entirely on the radiation and curvature, with the matter and dark energy contributions negligible. The time it takes for the hypothetical universe to expand from a=1 to a=2 depends on the value assumed for H0, and it can be calculated, but I will omit that calculation since I assume it to be irrelevant.
5. I choose initial conditions such that at a=2 the universe will stop expanding and begin its collapse. This requires that
H_{a=2} = 0,​
which implies
ΩR = 4 × |Ωk|.​
This combined with the above relationship between ΩR and Ωk implies that
Ωk = -1/3,​
and
ΩR = 4/3.​
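These numbers can be checked directly against the Friedmann equation with matter and dark energy neglected, i.e. H^2/H0^2 = ΩR/a^4 + Ωk/a^2: both the flatness constraint at a = 1 and the turnaround condition H = 0 at a = 2 should hold. A minimal sketch:

```python
# Density parameters derived in step 5:
Omega_k = -1.0 / 3.0
Omega_R = 4.0 / 3.0   # equals 4*|Omega_k|, and also 1 + |Omega_k|

def H2_over_H02(a):
    """(H/H0)^2 for a radiation-plus-curvature universe
    (matter and dark energy neglected)."""
    return Omega_R / a**4 + Omega_k / a**2

flatness = Omega_R + Omega_k      # should equal 1 at a = 1
H2_turnaround = H2_over_H02(2.0)  # should equal 0 at the turnaround
```

At a = 2 the radiation term has fallen by a factor of 16 while the curvature term has fallen only by 4, which is exactly why ΩR = 4|Ωk| puts the turnaround there.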
6. I assume the universe between a=1 and a=2 to be filled with H2 gas and photons in thermodynamic equilibrium. The relevant variables for calculating entropy changes depend only on R (radius of curvature) and T (temperature) as discussed in my previous post. Here the useful forms are:
R_{a=2} = 2 R_{a=1}
T_{a=2} = (1/2) T_{a=1}​
The difference between entropy at a = 2 and at a = 1 is
S_{a=2} - S_{a=1} = C2 × ln( (R_{a=2}/R_{a=1})^3 × (T_{a=2}/T_{a=1})^(3/2) )
= C2 × ln( 8 × (1/8)^(1/2) ) = C2 × (1/2) ln 8 ≈ C2 × 1.0397​
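A quick numerical check of the arithmetic in step 6:

```python
import math

# Ratio factors between a = 2 and a = 1:
radius_factor = 2.0 ** 3    # (R_{a=2}/R_{a=1})^3 = 8
temp_factor = 0.5 ** 1.5    # (T_{a=2}/T_{a=1})^(3/2) = (1/8)^(1/2)

# Entropy difference divided by the constant C2:
delta_S_over_C2 = math.log(radius_factor * temp_factor)
```

The product 8 × (1/8)^(1/2) = 8^(1/2), so the logarithm reduces to (1/2) ln 8 ≈ 1.0397 as stated.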

CONCLUSION
Let t* be the time between a=1 and a=2. Then
t' (> t_{a=2}) = t_{a=1} + 2 t*​
is the time when a is again 1. Therefore, as time increases from t_{a=2} to t', entropy decreases from S_{a=2} back to S_{a=1}. This seems to violate the second law.
However,
Post #10 by @kimbyd
Gravity changes entropy dramatically. In fact, nobody knows how to actually calculate the entropy for a generic gravitational system. We know a few extreme cases (like black holes), but that's about it.​
So perhaps the apparent violation of the second law does not actually occur, because the assumed hypothetical universe system has a significant gravitational component.

Regards,
Buzz


Classically-speaking, the entropy of a comoving volume of radiation is constant.

You can see this here:
https://en.wikipedia.org/wiki/Photon_gas

The entropy of a photon gas is proportional to ##V T^3##. As volume increases proportional to ##a^3##, and temperature decreases proportional to ##1/a##, the entropy of the expanding (or contracting) volume does not change. This shouldn't really be a surprise since a uniform expanding (or contracting) photon gas doesn't change over time in any way other than the change in redshift (or blueshift) of the photons that make it up.
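That cancellation is easy to sketch numerically: with V scaling as a^3 and T as 1/a, the product V T^3 (and hence the entropy) is independent of a. A minimal check, with arbitrary normalized initial values:

```python
def photon_entropy_factor(a, V0=1.0, T0=1.0):
    """V * T^3 for a comoving photon gas; the entropy is proportional
    to this product. V scales as a^3 and T as 1/a, so the result
    should not depend on the scale factor a."""
    V = V0 * a**3
    T = T0 / a
    return V * T**3

s1 = photon_entropy_factor(1.0)
s2 = photon_entropy_factor(2.0)
s5 = photon_entropy_factor(5.0)
```

Doubling or quintupling the scale factor leaves V T^3 unchanged, so the entropy of the comoving radiation volume is constant through the expansion and the subsequent contraction.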

Classically-speaking, the entropy of a comoving volume of radiation is constant.
Hi @kimbyd: