# Interpretation of temperature in liquids/solids

In summary, the temperature of a mass of liquid or a solid can be interpreted as a measure of the tendency of an object to spontaneously give up energy to its surroundings. This is captured by the thermodynamic definition of temperature as the reciprocal of the slope of the entropy vs. energy graph. In more complex systems such as liquids and solids, however, the energy is distributed among many different degrees of freedom, so temperature cannot be defined simply as an average kinetic energy. In these systems, temperature is better understood through its role in heat flow and thermoelectric phenomena.
Philip Koeck said:
The distribution among kinetic energies contains the factor √E,
This is a consequence of ##d^3p\propto \sqrt{E}\,dE##. There is no ##\sqrt{E}## when you consider probability density in the 3-momentum space, but it appears if you transform it to the probability density in the kinetic energy space.
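A quick numerical sketch of this transformation (units chosen so ##m = kT = 1##; all values are illustrative, not from the thread): sampling Maxwell-Boltzmann momenta in 3D and histogramming the kinetic energies reproduces the ##\sqrt{E}\,e^{-E/kT}## form.

```python
import numpy as np

rng = np.random.default_rng(0)
m, kT = 1.0, 1.0                      # natural units: m = kT = 1
n = 200_000

# Each momentum component is Gaussian with variance m*kT (Maxwell-Boltzmann).
p = rng.normal(0.0, np.sqrt(m * kT), size=(n, 3))
E = (p**2).sum(axis=1) / (2 * m)      # kinetic energies

# Equipartition check: <E> = (3/2) kT
print(E.mean())                        # ~1.5

# The energy distribution carries the sqrt(E) density-of-states factor:
# f(E) dE ∝ sqrt(E) exp(-E/kT) dE.  Compare a histogram to that form.
hist, edges = np.histogram(E, bins=50, range=(0, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = 2 * np.sqrt(centers / np.pi) * np.exp(-centers)  # normalized for kT = 1
print(np.max(np.abs(hist - analytic)))  # small
```

In 3-momentum space the sampled density is a plain Gaussian; the ##\sqrt{E}## only appears after the change of variables to energy, exactly as stated above.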

Demystifier said:
What if the gravitational potential energy was proportional to ##z^2##, would you see the connection in that case?
Very good point!

Demystifier said:
The heat also goes to the ability of particles to go up in the gravitational field.
But then some of the heat is used to do work. That's a different story, isn't it?

My mind is being blown a little bit more. I just realized that the Joule heat depends on the sign of ##S##, the Seebeck coefficient. It's clearly written in Domenicali's paper I quoted above (and trivial to derive starting from the generalized Ohm's law), but I had never realized it before. I'm having fun with finite elements right now. Lots of crazy stuff occurs even without considering the Thomson effect or anisotropy in general. The program computes anything I ask it to.

Philip Koeck said:
But then some of the heat is used to do work.
No. The work is ##PdV##, but ##dV=0## because the volume is not changed. Moving up only means that density at high ##z## becomes larger. In the probability distribution proportional to ##e^{-\beta mg z}##, adding heat means changing ##\beta##, so that the particles are redistributed while the volume remains the same. In particular, for the infinite temperature we have
$$e^{-\beta m g z}=e^{-0\cdot mgz}=1,$$
which means that the density no longer depends on ##z##.
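A small numerical illustration of this point (the molecular mass and heights below are my own illustrative choices): the barometric factor ##e^{-\beta mgz}## flattens out as ##\beta \to 0##.

```python
import numpy as np

k = 1.380649e-23       # Boltzmann constant, J/K
m = 4.65e-26           # mass of an N2 molecule, kg (illustrative choice)
g = 9.81               # m/s^2
z = np.array([0.0, 1e3, 5e3, 1e4])   # heights in m

def density_ratio(T):
    """n(z)/n(0) for the barometric distribution exp(-m g z / kT)."""
    return np.exp(-m * g * z / (k * T))

print(density_ratio(300.0))    # noticeable drop with height
print(density_ratio(3e6))      # nearly uniform: beta -> 0 flattens the profile
```

Adding heat changes ##\beta## and redistributes the particles in ##z##, with no change of container volume, which is the point being made above.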

Demystifier said:
No. The work is ##PdV##, but ##dV=0## because the volume is not changed. Moving up only means that density at high ##z## becomes larger. In the probability distribution proportional to ##e^{-\beta mg z}##, adding heat means changing ##\beta##, so that the particles are redistributed while the volume remains the same. In particular, for the infinite temperature we have
$$e^{-\beta m g z}=e^{-0\cdot mgz}=1,$$
which means that the density no longer depends on ##z##.
I was referring to the work for lifting the atoms in a gravitational field, not PV-work.

I think we might be talking about different things here.

What I'm really trying to get at is whether temperature is connected to an equilibrium distribution of molecules or atoms among internal energy levels (translation, rotation and vibration) only or whether one should also include external energies such as the potential energy in a gravitational field.

If we take the more general definition of ##T## from post 3, what is the meaning of ##U##? Is it just the inner energy, or does it also include potential energies in external fields?

Philip Koeck said:
What I'm really trying to get at is whether temperature is connected to an equilibrium distribution of molecules or atoms among internal energy levels (translation, rotation and vibration) only or whether one should also include external energies such as the potential energy in a gravitational field.

If we take the more general definition of ##T## from post 3, what is the meaning of ##U##? Is it just the inner energy, or does it also include potential energies in external fields?
It's all energy, defined by the full Hamiltonian ##H## of the system. After all, the notion of "inner energy" is not even well defined in general.

Perhaps the following will convince you. The temperature ##T## is a parameter that defines a state of statistical equilibrium. Why is the statistical equilibrium related to the energy in the first place? Because the energy is conserved, so energy is a quantity which does not change while the closed system evolves from non-equilibrium to equilibrium. But only the full energy, defined by the full Hamiltonian, is conserved. The "inner energy" alone (whatever that means) is not conserved.

Demystifier said:
It's all energy, defined by the full Hamiltonian ##H## of the system. After all, the notion of "inner energy" is not even well defined in general.

Perhaps the following will convince you. The temperature ##T## is a parameter that defines a state of statistical equilibrium. Why is the statistical equilibrium related to the energy in the first place? Because the energy is conserved, so energy is a quantity which does not change while the closed system evolves from non-equilibrium to equilibrium. But only the full energy, defined by the full Hamiltonian, is conserved. The "inner energy" alone (whatever that means) is not conserved.
Just to see what this leads to "in practice":
Suppose I have two identical containers filled with a monatomic ideal gas, the only difference being that one is in zero gravity while the other is on the surface of a planet.
That should mean they have different values of ##C_V##, right?
How different would depend on the value of ##g## on the planet, I would think.

Philip Koeck said:
Just to see what this leads to "in practice":
Suppose I have two identical containers filled with a monatomic ideal gas, the only difference being that one is in zero gravity while the other is on the surface of a planet.
That should mean they have different values of ##C_V##, right?
How different would depend on the value of ##g## on the planet, I would think.
Yes, but not only on ##g##. It also depends on the vertical size of the volume. If the volume has a small height ##h## so that ##mgh\ll kT##, then the effect of gravity is negligible. For typical atom mass ##m## at room temperature on Earth, I think this inequality is something like ##h\ll 1##km (you can check it by yourself).
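The inequality is easy to check numerically. A sketch (the atomic masses are my own illustrative choices): the crossover height is ##h \sim kT/mg##.

```python
k = 1.380649e-23   # Boltzmann constant, J/K
g = 9.81           # m/s^2
T = 300.0          # room temperature, K
u = 1.66054e-27    # atomic mass unit, kg

# Height at which m*g*h = k*T, i.e. where gravity starts to matter.
for A in (4, 28, 131):          # He, N2, Xe (illustrative)
    m = A * u
    h = k * T / (m * g)
    print(A, round(h))          # He ~64 km, N2 ~9 km, Xe ~2 km
```

So the crossover is on the kilometer scale for typical gases at room temperature, in line with the estimate above.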

Demystifier said:
Yes, but not only on ##g##. It also depends on the vertical size of the volume. If the volume has a small height ##h## so that ##mgh\ll kT##, then the effect of gravity is negligible. For typical atom mass ##m## at room temperature on Earth, I think this inequality is something like ##h\ll 1##km (you can check it by yourself).
For fun, one can also compute how large the temperature must be so that the thermal fluctuations can significantly lift up a stone with ##m=1##kg.
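A one-liner along those lines (the 1 m lift height is my own illustrative choice): setting ##kT \sim mgh## for a 1 kg stone gives an absurdly large temperature.

```python
k = 1.380649e-23           # Boltzmann constant, J/K
m, g, h = 1.0, 9.81, 1.0   # a 1 kg stone lifted 1 m (illustrative)

# Thermal fluctuations become comparable to the lifting energy when kT ~ mgh.
T = m * g * h / k
print(f"{T:.1e}")          # ~7e23 K
```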

A little update. In this simple model, there's a wire (1 cm long, 0.01 cm × 0.01 cm cross section) whose two ends are kept at different temperatures (300 K and 360 K), while an electric current of 6 A is injected near those two ends (but not quite at the same surfaces). I assumed a Seebeck coefficient of −1100 μV/K and took it to be independent of temperature (otherwise the temperature profile would get more complicated, and similarly for the voltage profile). I assumed reasonable values for the thermal conductivity of a metal (around 100 W/(m·K)) and so on and so forth. No temperature dependence of any parameter.

It's a finite element model with over 8.5k nodes. Mesh picture: [image not reproduced here]

The total computed Joule heat is 0.032 W.
The temperature profile along the center of the wire looks like so: [plot not reproduced here]

The temperature gradient (direction of "cold to hot") looks like so: [plot not reproduced here]

It looks like the gradient points along the wire's length, changing direction and magnitude and passing through 0, as suggested by the temperature profile.

Now the long-awaited heat flux: [plot not reproduced here]

It is dominated by the electric current in this case. Its direction is not from "hot to cold"; it is more complicated than that, and it depends on the direction of the electric current (and the sign of the Seebeck coefficient). What you cannot see in the picture above is that in the tiny region between the hot reservoir and where the current enters the material, the heat flux actually reverses direction.

I could plot more things, such as the current density, energy flux, entropy flux, voltage profile, etc. I could also deal with the case where I short circuit 2 spots on this wire, without passing any extra current but the one generated thanks to the voltage created by the Seebeck effect, to leave the system as unperturbed as possible.
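For readers who want to play with this, here is a heavily simplified 1D sketch of the same physics. This is not the finite-element model described above: constant coefficients are assumed throughout, and the resistivity value is my own guess, chosen so the total Joule heat lands near the quoted 0.032 W. With a temperature-independent Seebeck coefficient the Thomson term drops out, the steady-state temperature profile is analytic, and the Peltier term ##STJ## dominates the heat flux, so the flux does not simply point from hot to cold.

```python
import numpy as np

# 1D steady-state sketch of a current-carrying wire between two reservoirs.
# All values illustrative (not taken from the actual finite-element model).
L, N = 1e-2, 201                  # 1 cm wire, number of grid points
A = 1e-4 * 1e-4                   # 0.01 cm x 0.01 cm cross section, m^2
x = np.linspace(0, L, N)
kappa = 100.0                     # thermal conductivity, W/(m*K)
rho = 1e-9                        # resistivity, ohm*m (guessed; gives ~0.036 W)
S = -1100e-6                      # Seebeck coefficient, V/K (sign matters!)
J = 6.0 / A                       # current density for 6 A

# With constant S the Thomson term vanishes and the steady state solves
#   kappa T'' + rho J^2 = 0,  T(0) = 300 K, T(L) = 360 K.
# Analytic solution: linear profile plus a parabolic Joule-heating bump.
T = 300 + 60 * x / L + rho * J**2 / (2 * kappa) * x * (L - x)

# Heat flux: Fourier conduction plus the Peltier (S*T*J) convective term.
dTdx = np.gradient(T, x)
q = -kappa * dTdx + S * T * J
print(q[0], q[-1])   # same sign at both ends: direction set by S and J,
                     # not simply by "hot to cold"
```

Flipping the sign of ##S## or of the current reverses the dominant part of the heat flux, which is the sign dependence discussed above.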

Personally, I like the phenomenological approach of Caratheodory: https://eclass.uoa.gr/modules/document/file.php/CHEM105/Άρθρα/Caratheodory-1909.pdf
This has been brought to modern mathematical language by Lieb and Yngvason: https://www.sciencedirect.com/scien...n6FXF-_Ey0CR_eTbjMqwnJUsOSf349E__koQ6XIoC7w0A

There, empirical temperature is derived from the zeroth law, i.e. the transitivity of equilibrium. This allows one to define equivalence classes of equilibrium states, which are labeled by an empirical temperature ##\theta##. Heat flow is already a topic beyond equilibrium thermodynamics. As remarked before, heat can also flow from cold to hot if other driving forces dominate (e.g. chemical potential differences). The exception is reversible heat exchange between closed systems. The absolute temperature ##T## can be shown to be an integrating factor of the reversible heat exchanged between closed systems (which yields the entropy), and it is a function of the empirical temperature alone, ##T = T(\theta)##. The sign of a temperature difference is then by definition linked to the direction of reversible heat exchange.
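In symbols, the integrating-factor statement reads:

```latex
% \delta Q_{rev} is an inexact one-form, but dividing by the absolute
% temperature T = T(\theta) > 0 turns it into an exact differential dS:
dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad
\oint \frac{\delta Q_{\mathrm{rev}}}{T} = 0
\quad \text{around every reversible cycle.}
```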

dextercioby said:
TL;DR Summary: What is the physical interpretation of temperature of a mass of liquid or a solid?

Usually, the mental image of temperature is: an internal property of a bulk of matter, which typically describes the average kinetic plus rotational/vibrational energy of molecules. So we imagine a gas in which temperature is a measure of how fast the molecules are and how frequently they collide with one another. The higher the temperature, the more energy the molecules have.

Let's switch to a liquid at room temperature (water) or a crystalline solid (a bar of pure iron). How would you define their temperature? Would it, for example, be a measure of the average energy of the "electron gas" in the conduction band? What about liquid water, which lacks a rigid crystalline structure, being just a scatter of molecules held together by van der Waals forces and hydrogen bonds?
I'll be honest. As a condensed matter physicist who worked at cryogenic temperatures, to me, temperature is what a thermocouple measures. But let me see if I can say a few things you might find helpful.

In crystalline solids, temperature modifies the Fermi-Dirac distribution of the electrons and the Bose-Einstein distribution of the phonons. Magnetism is a thing, but it's not my jam. Both electrons and phonons are travelling waves, so they have kinetic energy. The change in the distributions lets me know that electrons and phonons move up to higher states.

Is it just the conduction electrons? The slope of the band determines the velocity of the wave. The core electrons will not overlap much at all, so those bands are essentially flat, and flat bands will not have a velocity. They are much like core electrons in atoms. The conduction and valence electrons will have non-zero slopes, so they will have kinetic energy.

Now here's where I start getting confused. Not all electron bands of higher energy have greater slopes (though this is true in the free-electron model), so I can imagine a pathological example where the electrons are in an s-like conduction band, which is highly dispersive (high slope), and transition to d-like bands of lesser slope. The kinetic energy would go down, and with it the temperature, with an input of energy to the system. I have never heard of that. Is there some principle or symmetry at work that ensures this doesn't happen? I don't know. Maybe I need to think about how internal energy changes with entropy.

That's it for my rough interpretation.

If we were to do a calculation, on the other hand, we would use the well-known thermodynamic and statistical relations. This would be the realm of computer calculations.

As for liquids, in my view, temperature is a measure of the average kinetic energy of the particles. Any quantum mechanical goings-on would act in a manner similar to the classical degrees of freedom we learn about, acting as energy sinks that increase the heat capacity (recall the ##\tfrac{1}{2}kT## per quadratic degree of freedom).

Dr_Nate said:
I'll be honest. As a condensed matter physicist who worked at cryogenic temperatures, to me, temperature is what a thermocouple measures. But let me see if I can say a few things you might find helpful.
Thermocouples measure a voltage, which gives information about the temperature difference between the junction (the tip, where you want to know the temperature) and the reference temperature of the apparatus/voltmeter. Only then is this voltage translated into a temperature. This is alright, unless you want to measure cryogenic temperatures (near absolute zero, or even below 1.5 K with liquid He). The reason is that the Seebeck coefficient of materials drops to zero near absolute zero, so the voltage reading drops to zero as well. This is no good for accuracy; other kinds of thermometers are better suited there.
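The low-temperature point can be made quantitative with a toy model (the linear ##S(T)## below is my own hypothetical choice, not real thermocouple data): the measured voltage is ##V=\int_{T_{\rm ref}}^{T_{\rm tip}} S(T)\,dT##, so the sensitivity ##dV/dT_{\rm tip}=S(T_{\rm tip})## collapses as ##S \to 0##.

```python
import numpy as np

# Hypothetical Seebeck coefficient that vanishes linearly near T = 0
# (a crude stand-in for the real low-temperature behavior).
def S(T):
    return 40e-6 * (T / 300.0)     # V/K; ~40 uV/K at room temperature (illustrative)

def thermocouple_voltage(T_tip, T_ref=300.0, n=1000):
    """V = integral of S(T) dT from T_ref to T_tip (trapezoid rule)."""
    T = np.linspace(T_ref, T_tip, n)
    s = S(T)
    return float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(T)))

# Per-kelvin signal dV/dT ~ S(T_tip): usable at 77 K, hopeless near 1 K.
for T_tip in (77.0, 4.2, 1.0):
    dV = thermocouple_voltage(T_tip + 1) - thermocouple_voltage(T_tip)
    print(T_tip, dV)   # the signal per kelvin shrinks as S -> 0
```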

