I Interpretation of temperature in liquids/solids

  • #51
Philip Koeck said:
Is that because those effects would lead to perpetual motion machines?
Of course not. I honestly do not know the full reason(s).
Here is what I can think of off the top of my head:
I would guess that early scientists (Thomson, Bridgman, Altenkirch, Callen, Domenicali and Ioffe, amongst others) considered cases that made sense with the kinds of materials available in their time. They did not consider more general, or "exotic", cases that we have nowadays. And today's scientists are still missing, due to inertia (and possibly the lack of a good, up-to-date textbook on thermoelectricity), what they could achieve with thermoelectricity. Some of them (e.g. Uchida's https://aip.scitation.org/doi/full/10.1063/5.0077497, and Snyder's https://journals.aps.org/prb/abstract/10.1103/PhysRevB.86.045202) are hinting at nice novelties, but they are still missing a lot (that I can think of).

Also, what you may be missing is that maybe you shouldn't focus on heat (or heat flux/transfer) alone when you deal with the thermoelectric case; maybe you should look at the energy flux ##\vec J_U = \vec J_Q+\overline{\mu}/e\vec J_e##. Energy as in the "internal energy U" of thermodynamics. I'll repeat myself, hopefully for the last time: there's a very quick way to see that heat can flow from cold to hot, just by looking at the generalized Fourier's law that appears in most thermoelectric books (and Wikipedia), ##\vec J_Q =-\kappa \nabla T + ST \vec J_e##. Here ##S##, the Seebeck coefficient, can have either sign, and ##\vec J_e## can have any direction, since it's the current density and you can engineer the current to go in any direction you want, with any reasonable magnitude. It is clear that you can make a system where ##ST \vec J_e## outweighs ##-\kappa \nabla T##. When this happens, heat flows from cold to hot (or in another direction, if you arrange it that way). If you are unable to convince yourself by looking at this simple relationship, there's nothing else I can do to convince you. This doesn't break the laws of thermodynamics. There's still an increase in entropy over time, and no energy is created out of thin air.
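To make the "outweighs" claim concrete, here is a minimal numeric sketch (all values are illustrative assumptions, not data for any real material):

```python
# Hedged numeric sketch: plug illustrative numbers (assumptions, not data for
# any real material) into J_Q = -kappa * grad(T) + S * T * J_e in 1-D.
KAPPA = 100.0     # thermal conductivity, W/(m K), metal-like
S = 200e-6        # Seebeck coefficient, V/K (large, semiconductor-like)
T = 300.0         # local temperature, K
GRAD_T = 1000.0   # temperature gradient, K/m (hot side toward +x)

def heat_flux(j_e):
    """1-D heat flux J_Q = -kappa * dT/dx + S * T * J_e, in W/m^2."""
    return -KAPPA * GRAD_T + S * T * j_e

print(heat_flux(0.0))   # ≈ -1.0e5: without current, heat flows hot to cold
print(heat_flux(3e6))   # ≈ +8.0e4: a strong current makes heat flow cold to hot
```

Once the ##ST\vec J_e## term exceeds conduction in magnitude, the net heat flux points up the temperature gradient.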
 
  • #52
The Seebeck coefficient is the proportionality factor between a temperature gradient and the gradient of the electrochemical potential of the charge carriers it causes, when no electrical current is flowing.
 
  • Like
Likes russ_watters
  • #53
Lord Jestocost said:
The Seebeck coefficient is the proportionality factor between a temperature gradient and the gradient of the electrochemical potential of the charge carriers it causes, when no electrical current is flowing.
Not sure to whom this is directed. What you state is just a special case of the expression I wrote in post #49, for instance: the generalized Ohm's law with ##\vec J_e## set to 0.
Your statement is not universally true; as I wrote earlier, this quantity is in fact a tensor in the most general case. But your statement is mostly correct.
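For concreteness (sign conventions for ##S## and ##\overline{\mu}## vary between texts), here is the reduction being described: take one common form of the generalized Ohm's law and set ##\vec J_e = 0##,
$$\vec J_e = -\sigma \nabla\!\left(\frac{\overline{\mu}}{e}\right) - \sigma S \nabla T
\;\;\Longrightarrow\;\;
\nabla\!\left(\frac{\overline{\mu}}{e}\right) = -S\,\nabla T \qquad (\vec J_e = 0),$$
i.e. under open-circuit conditions a temperature gradient alone sustains a gradient of the electrochemical potential, with ##S## the proportionality factor.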
 
  • Like
Likes dextercioby
  • #54
fluidistic said:
... there's a very quick way to see that heat can flow from cold to hot, just by looking at the generalized Fourier's law that appears in most thermoelectric books (and Wikipedia), ##\vec J_Q =-\kappa \nabla T + ST \vec J_e##. Here ##S##, the Seebeck coefficient, can have either sign, and ##\vec J_e## can have any direction, since it's the current density and you can engineer the current to go in any direction you want, with any reasonable magnitude. It is clear that you can make a system where ##ST \vec J_e## outweighs ##-\kappa \nabla T##. When this happens, heat flows from cold to hot (or in another direction, if you arrange it that way).
Could you send the Wikipedia link with the generalized Fourier law you mention above?

The electric current density Je in the second term on the right of this equation, isn't that due to the Seebeck effect if we exclude external sources, such as a battery?
In that case it should be proportional to S, which means that the second term on the right doesn't change sign when you change the sign of S. (I'm thinking of a scalar S now, but a similar argument should hold for a tensor).
To me it looks like the second term cannot be varied as freely as you're saying.
Have you checked in detail that the second term can actually exceed the first and reverse the direction of JQ?
 
Last edited:
  • #55
Philip Koeck said:
Could you send the Wikipedia link with the generalized Fourier law you mention above?

The electric current density Je in the second term on the right of this equation, isn't that due to the Seebeck effect if we exclude external sources, such as a battery?
In that case it should be proportional to S, which means that the second term on the right doesn't change sign when you change the sign of S. (I'm thinking of a scalar S now, but a similar argument should hold for a tensor).
To me it looks like the second term cannot be varied as freely as you're saying.
Have you checked in detail that the second term can actually exceed the first and reverse the direction of JQ?
You're right; I had overlooked that the sign of S is directly linked to the direction of ##\vec J_e##. I cannot do the math now (no time), but you can check the details yourself of what happens to ##\vec J_Q##.
 
  • Like
Likes Philip Koeck
  • #56
Philip Koeck said:
I meant the second example where you discuss anisotropic materials.
("... consider anisotropic materials, where kappa is a tensor. Write down Fourier's law in matrix form and you'll get that it's possible for a material to develop a transverse thermal gradient (and so a heat flux in that direction) even though it isn't the hot to cold direction.")

The example with an external current is understandable. It's just like a heat pump.
But in the above case I don't see any external work being done on the solid.
If heat flows spontaneously in a direction that's not hot to cold, I would expect an increase in entropy somewhere, for example a phase change.
@Philip Koeck

Your understanding is correct. Thermal energy is naturally transferred in materials in one direction: from hot toward cold - unless people interfere.
 
  • Like
Likes hutchphd and Philip Koeck
  • #57
Lord Jestocost said:
@Philip Koeck

Your understanding is correct. Thermal energy is naturally transferred in materials in one direction: from hot toward cold - unless people interfere.
I... interfere. I'm actually glad this discussion occurred, as I hadn't thought deeply enough about the case of no external energy source, so I will take the time to do it later.
However, what I am certain about is that the direction of ##\vec J_Q## (the heat flux) need not be the one of ##-\nabla T## (the direction from hot to cold), especially in the case of anisotropic materials, which is ironically the text you quote. It's as if you discarded everything I wrote about it.

Similarly for thermoelectrics. What apparently cannot be done, without an external energy source, is to fully reverse the direction of the heat flux. But you can certainly alter its direction. The claim that heat flows from hot to cold does not hold universally.
 
  • Like
Likes Philip Koeck
  • #58
Philip Koeck said:
So in the first case an increase of average potential energy is related to an increase in temperature whereas in the second it's not, I would say.

I also get the impression that there's a qualitative difference between the potential energy in vibrations and that due to altitude, for example.
You are, of course, wrong. When the gas is cooled down, it falls down, losing its gravitational potential energy. That's, for example, how black holes are formed when massive stars become colder.
 
  • #59
Philip Koeck said:
However I do see the difficulties with defining T as average kinetic energy.
What's not clear to me is how small the objects have to be that have this energy.
For example if you had a mechanical construction with billions of small steel balls on springs, how small would the steel balls have to be so that you would assign a temperature to the vibration of these balls?
They don't need to be small at all. For example, the balls in a lottery machine have a Maxwell distribution of velocities. However, the "temperature" of these ball velocities is different from the "temperature" of the atom velocities. This means that the system has two temperatures, one for the microscopic degrees of freedom and another for the macroscopic ones, i.e. that the system is not in full thermal equilibrium.
 
  • Like
Likes hutchphd, TeethWhitener and Philip Koeck
  • #60
Demystifier said:
You are, of course, wrong. When the gas is cooled down, it falls down, losing its gravitational potential energy. That's, for example, how black holes are formed when massive stars become colder.
This idea is very strange for me.
So two volumes of helium with the same average kinetic energy at different altitudes in a helium atmosphere would have different temperatures?
How would a thermometer be able to measure the potential energy of the helium atoms in the gravitational field?
Also there's the problem that temperature has a definite zero point whereas potential energy in a gravitational field doesn't.
I just don't get it.
 
  • #61
Philip Koeck said:
This idea is very strange for me.
So two volumes of helium with the same average kinetic energy at different altitudes in a helium atmosphere would have different temperatures?
How would a thermometer be able to measure the potential energy of the helium atoms in the gravitational field?
Also there's the problem that temperature has a definite zero point whereas potential energy in a gravitational field doesn't.
I just don't get it.
The thermometer measures the kinetic energy, but kinetic and potential energy are distributed according to the same temperature, see my post #21.

Note, however, that the formula in #21 is just a proportionality, one also needs to fix a constant (not depending on ##v## and ##z##) in front of it, which is to be determined from the normalization of the probability distribution. So when you redefine the zero of the potential energy, it is just absorbed into this normalization constant.

Note also that in that formula the probability function is a product of a velocity-dependent and a position-dependent function, which means that velocity is statistically independent of position. Hence, the average kinetic energy does not depend on position, so if two volumes have the same average kinetic energy at different altitudes, then they also have the same average kinetic energy at the same altitudes, which just means that they have the same temperature.
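A quick numerical illustration of this factorization (a toy Metropolis sampler in arbitrary units; all parameter values are assumptions): sampling the joint Boltzmann distribution for one velocity component and a height with a hard floor at ##z=0##, the mean kinetic energy comes out the same in a low-altitude and a high-altitude bin.

```python
import math, random

random.seed(0)
BETA, M, G = 1.0, 1.0, 1.0   # illustrative units (assumptions)

def energy(v, z):
    """1-D toy Hamiltonian: kinetic term plus linear gravity, floor at z = 0."""
    return 0.5 * M * v * v + M * G * z

# Metropolis sampling of the joint distribution p(v, z) ∝ exp(-beta * H(v, z)).
v, z = 0.0, 1.0
low_ke = high_ke = 0.0
n_low = n_high = 0
for _ in range(400_000):
    v_new = v + random.uniform(-1.0, 1.0)
    z_new = z + random.uniform(-1.0, 1.0)
    # proposals below the floor z = 0 are rejected (infinite energy there)
    if z_new >= 0.0 and random.random() < math.exp(-BETA * (energy(v_new, z_new) - energy(v, z))):
        v, z = v_new, z_new
    ke = 0.5 * M * v * v
    if z < 0.5:
        low_ke += ke; n_low += 1
    elif z > 2.0:
        high_ke += ke; n_high += 1

# Both bin averages come out ≈ kT/2 = 0.5: velocity is independent of height.
print(low_ke / n_low, high_ke / n_high)
```

The sampler never enforces the factorization by hand; the equality of the two bin averages is the statistical independence of velocity and position in action.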
 
  • #62
Thank you for this discussion. @fluidistic Can you name some books which helped you build knowledge on this topic of heat transfer? Thanks!
 
  • #63
Demystifier said:
The thermometer measures the kinetic energy, but kinetic and potential energy are distributed according to the same temperature, see my post #21.
I can't make sense of what you write.

Let's assume a monoatomic ideal gas without gravity. The probability for any atom to have a kinetic energy between E and E+dE will be proportional to ##\sqrt{E}\, e^{-\beta E}##.
In an isothermal atmosphere (with gravity) this should still be true.

However, the pressure in an isothermal atmosphere is proportional to ##e^{-\beta E_{\rm pot}}##.
And pressure is proportional to the number of atoms per volume in an ideal gas with constant T, which should be proportional to the probability of finding an atom with a potential energy between E and E+dE.
 
  • #64
Philip Koeck said:
I can't make sense of what you write.

Let's assume a monoatomic ideal gas without gravity. The probability for any atom to have a kinetic energy between E and E+dE will be proportional to ##\sqrt{E}\, e^{-\beta E}##.
In an isothermal atmosphere (with gravity) this should still be true.

However, the pressure in an isothermal atmosphere is proportional to ##e^{-\beta E_{\rm pot}}##.
And pressure is proportional to the number of atoms per volume in an ideal gas with constant T, which should be proportional to the probability of finding an atom with a potential energy between E and E+dE.
The temperature is related to the total average energy, which is the sum of average potential and average kinetic energy. Roughly,
$$T \propto \langle E_{\rm kin} \rangle + \langle E_{\rm pot} \rangle$$
The average is taken over all phase space, so ##\langle E_{\rm kin} \rangle## and ##\langle E_{\rm pot} \rangle## do not depend on velocity or position of the particle. Furthermore, if additionally ##\langle E_{\rm kin} \rangle \propto \langle E_{\rm pot} \rangle##, then both
$$T \propto \langle E_{\rm kin} \rangle
\;\;\; {\rm and} \;\;\;
T \propto \langle E_{\rm pot} \rangle$$
are true, so there is no contradiction. Indeed, when the potential energy is quadratic in canonical positions, then ##\langle E_{\rm kin} \rangle \propto \langle E_{\rm pot} \rangle## is guaranteed by the equipartition theorem. The gravitational potential energy is not quadratic, but for general potential energies there is a generalized equipartition theorem with similar properties.
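A numeric check of both proportionalities under the stated assumptions (units with ##m = g = kT = 1##; direct sampling from the factorized Boltzmann distribution): ordinary equipartition gives ##\langle E_{\rm kin}\rangle = \tfrac32 kT##, and the generalized version gives ##\langle E_{\rm pot}\rangle = kT## for a potential linear in ##z## on a half-line.

```python
import random, statistics

random.seed(1)
KT, M, G = 1.0, 1.0, 1.0   # illustrative units (assumptions)

# Direct sampling from the factorized Boltzmann distribution: each velocity
# component is Gaussian with variance kT/m, and the height z >= 0 follows the
# barometric (exponential) distribution with scale kT/(m*g).
N = 200_000
e_kin = [0.5 * M * sum(random.gauss(0.0, (KT / M) ** 0.5) ** 2 for _ in range(3))
         for _ in range(N)]
e_pot = [M * G * random.expovariate(M * G / KT) for _ in range(N)]

# Quadratic degrees of freedom: <E_kin> ≈ (3/2) kT by ordinary equipartition.
# Linear potential: <E_pot> ≈ kT, the generalized-equipartition analogue.
print(statistics.mean(e_kin), statistics.mean(e_pot))
```

Both averages scale linearly with ##T##, which is all the proportionality argument above needs; the two constants of proportionality just differ.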
 
Last edited:
  • Like
Likes Philip Koeck
  • #65
dextercioby said:
Thank you for this discussion. @fluidistic Can you name some books which helped you build knowledge on this topic of heat transfer? Thanks!
Not many books. There is "Physical Properties of Crystals: Their Representation by Tensors and Matrices" by Nye. I think it's a very nice book; it even has an accurate part on the Bridgman effect (a manifestation of thermoelectricity as heat generation/absorption in materials in which S is anisotropic). Such a treatment is very rare, even in books which are solely about thermoelectricity.
There is Carslaw and Jaeger's "Conduction of Heat in Solids", as a reference.

There is "Non-Equilibrium Thermodynamics" by de Groot and Mazur. This one is hard to grasp, but it contains a lot of valuable information about fluxes and thermoelectricity.

Then there is a plethora of papers. There is a famous one by Callen (1948): https://journals.aps.org/pr/abstract/10.1103/PhysRev.73.1349
And what I consider a masterpiece (very informative and accurate), Domenicali's: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.26.237
Then, solving problems with finite elements. Compare the solutions with analytical results when possible before trusting the results. I use the FEniCSx Python library for this task. I find it extremely powerful and accurate.
 
  • Like
Likes dextercioby
  • #66
Philip Koeck said:
I can't make sense of what you write.

Let's assume a monoatomic ideal gas without gravity. The probability for any atom to have a kinetic energy between E and E+dE will be proportional to ##\sqrt{E}\, e^{-\beta E}##.
In an isothermal atmosphere (with gravity) this should still be true.

However, the pressure in an isothermal atmosphere is proportional to ##e^{-\beta E_{\rm pot}}##.
And pressure is proportional to the number of atoms per volume in an ideal gas with constant T, which should be proportional to the probability of finding an atom with a potential energy between E and E+dE.
I noticed the last sentence above is ambiguous.

What I meant was:
Pressure is proportional to the number of atoms per volume in an ideal gas with constant T and the number of atoms per volume should be proportional to the probability of finding an atom with a potential energy between E and E+dE.

What I wanted to point out was that the distribution of atoms among kinetic energy levels doesn't follow the same mathematical function as the distribution among potential energy levels in a gravitational field.
So there's a qualitative difference somewhere (although both do contain a Boltzmann factor with the same T).
I'll continue in another reply.
 
  • #67
Philip Koeck said:
the number of atoms per volume should be proportional to the probability of finding an atom with a potential energy between E and E+dE.
I have no idea how you arrived at that conclusion.

Philip Koeck said:
What I wanted to point out was that the distribution of atoms among kinetic energy levels doesn't follow the same mathematical function as the distribution among potential energy levels in a gravitational field.
Yes it does, according to Boltzmann and Gibbs.
 
  • #68
Demystifier said:
The temperature is related to the total average energy, which is the sum of average potential and average kinetic energy. Roughly,
$$T \propto \langle E_{\rm kin} \rangle + \langle E_{\rm pot} \rangle$$
The average is taken over all phase space, so ##\langle E_{\rm kin} \rangle## and ##\langle E_{\rm pot} \rangle## do not depend on velocity or position of the particle. Furthermore, if additionally ##\langle E_{\rm kin} \rangle \propto \langle E_{\rm pot} \rangle##, then both
$$T \propto \langle E_{\rm kin} \rangle
\;\;\; {\rm and} \;\;\;
T \propto \langle E_{\rm pot} \rangle$$
are true, so there is no contradiction. Indeed, when the potential energy is quadratic in canonical positions, then ##\langle E_{\rm kin} \rangle \propto \langle E_{\rm pot} \rangle## is guaranteed by the equipartition theorem. The gravitational potential energy is not quadratic, but for general potential energies there is a generalized equipartition theorem with similar properties.
That's what I would have said too. Degrees of freedom are related to quadratic canonical coordinates.

Potential energy in a gravitational field is something different.
Interesting information about the generalized equipartition theorem!

The way I've seen it so far:
If I don't allow any work to be done (keep the volume of a gas or solid constant, for example) then all heat added goes into the available degrees of freedom, such as translation, rotations and vibrational potential energy. This is directly related to a change in temperature.
I can't see the same direct connection to temperature if the gas changes its position in an external potential such as gravity.
 
  • #69
Demystifier said:
I have no idea how you arrived at that conclusion.

I'm approximating the potential energy as ##mgh##, so that ##E_{\rm pot}## is proportional to ##h## and therefore ##dE## is proportional to ##dh##.

That should mean that the number of atoms per volume in a slab of thickness ##dh## at some height ##h## must be proportional to the probability of finding an atom with a potential energy between ##E## and ##E + dE## (if ##E = mgh##).

Demystifier said:
Yes it does, according to Boltzmann and Gibbs.
The distribution among kinetic energies contains the factor ##\sqrt{E}##, whereas the barometric formula doesn't.
 
  • #70
Philip Koeck said:
Degrees of freedom are related to quadratic canonical coordinates.
No, they are related to all canonical coordinates, quadratic or not.

Philip Koeck said:
all heat added goes into the available degrees of freedom, such as translation, rotations and vibrational potential energy.
The heat also goes to the ability of particles to go up in the gravitational field.

Philip Koeck said:
I can't see the same direct connection to temperature if the gas changes its position in an external potential such as gravity.
What if the gravitational potential energy was proportional to ##z^2##, would you see the connection in that case?
 
  • Like
Likes Philip Koeck
  • #71
Philip Koeck said:
The distribution among kinetic energies contains the factor ##\sqrt{E}##,
This is a consequence of ##d^3p\propto \sqrt{E}\,dE##. There is no ##\sqrt{E}## when you consider probability density in the 3-momentum space, but it appears if you transform it to the probability density in the kinetic energy space.
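A sketch verifying this Jacobian argument numerically (##kT = m = 1## are assumed units): sample 3-D velocities from a plain Gaussian, with no ##\sqrt{E}## anywhere in the sampler, and check that the kinetic energies nevertheless follow the ##\sqrt{E}\,e^{-E}## density.

```python
import math, random

random.seed(2)

# Sample 3-D velocities from a plain Gaussian Boltzmann factor (kT = m = 1,
# assumed units) and look at the kinetic energy E = v^2 / 2. The sqrt(E)
# factor never appears in the sampler; it emerges from the Jacobian
# d^3p -> 4*pi*p^2 dp when changing variables to energy.
N = 200_000
energies = [0.5 * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
            for _ in range(N)]

def cdf(x):
    """Analytic CDF of p(E) ∝ sqrt(E) * exp(-E), i.e. chi-squared(3) in 2E."""
    return math.erf(math.sqrt(x)) - 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

frac_below = sum(e < 1.0 for e in energies) / N
print(frac_below, cdf(1.0))   # sampled fraction below E = 1 vs analytic value
```

The sampled fraction below ##E = 1## matches the analytic CDF of the ##\sqrt{E}\,e^{-E}## distribution to within sampling noise, even though only Gaussians were drawn.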
 
  • #72
Demystifier said:
What if the gravitational potential energy was proportional to ##z^2##, would you see the connection in that case?
Very good point!
 
  • #73
Demystifier said:
The heat also goes to the ability of particles to go up in the gravitational field.
But then some of the heat is used to do work. That's a different story, isn't it?
 
  • #74
I'm getting my mind blown a little bit more. I just realized that Joule heat depends on the sign of S, the Seebeck coefficient. It's clearly written in Domenicali's paper I quoted above (and trivial to derive starting from the generalized Ohm's law), but I had never realized it before. I'm having fun with finite elements right now. Lots of crazy stuff occurs even without considering the Thomson effect and anisotropy in general. The program computes anything I ask it to.
 
  • #75
Philip Koeck said:
But then some of the heat is used to do work.
No. The work is ##PdV##, but ##dV=0## because the volume is not changed. Moving up only means that the density at high ##z## becomes larger. In the probability distribution proportional to ##e^{-\beta mgz}##, adding heat means changing ##\beta##, so that the particles are redistributed while the volume remains the same. In particular, for infinite temperature we have
$$e^{-\beta mgz}=e^{-0\cdot mgz}=1$$
which means that the density no longer depends on ##z##.
 
  • #76
Demystifier said:
No. The work is ##PdV##, but ##dV=0## because the volume is not changed. Moving up only means that the density at high ##z## becomes larger. In the probability distribution proportional to ##e^{-\beta mgz}##, adding heat means changing ##\beta##, so that the particles are redistributed while the volume remains the same. In particular, for infinite temperature we have
$$e^{-\beta mgz}=e^{-0\cdot mgz}=1$$
which means that the density no longer depends on ##z##.
I was referring to the work for lifting the atoms in a gravitational field, not PV-work.

I think we might be talking about different things here.

What I'm really trying to get at is whether temperature is connected to an equilibrium distribution of molecules or atoms among internal energy levels (translation, rotation and vibration) only or whether one should also include external energies such as the potential energy in a gravitational field.

If we take the more general definition of T from post #3, what is the meaning of U? Is it just the inner energy, or does it also include potential energies in external fields?
 
  • #77
Philip Koeck said:
What I'm really trying to get at is whether temperature is connected to an equilibrium distribution of molecules or atoms among internal energy levels (translation, rotation and vibration) only or whether one should also include external energies such as the potential energy in a gravitational field.

If we take the more general definition of T from post #3, what is the meaning of U? Is it just the inner energy, or does it also include potential energies in external fields?
It's all energy, defined by the full Hamiltonian ##H## of the system. After all, the notion of "inner energy" is not even well defined in general.

Perhaps the following will convince you. The temperature ##T## is a parameter that defines a state of statistical equilibrium. Why is the statistical equilibrium related to the energy in the first place? Because the energy is conserved, so energy is a quantity which does not change while the closed system evolves from non-equilibrium to equilibrium. But only the full energy, defined by the full Hamiltonian, is conserved. The "inner energy" alone (whatever that means) is not conserved.
 
  • Like
Likes Philip Koeck and fluidistic
  • #78
Demystifier said:
It's all energy, defined by the full Hamiltonian ##H## of the system. After all, the notion of "inner energy" is not even well defined in general.

Perhaps the following will convince you. The temperature ##T## is a parameter that defines a state of statistical equilibrium. Why is the statistical equilibrium related to the energy in the first place? Because the energy is conserved, so energy is a quantity which does not change while the closed system evolves from non-equilibrium to equilibrium. But only the full energy, defined by the full Hamiltonian, is conserved. The "inner energy" alone (whatever that means) is not conserved.
Just to see what this leads to "in practice":
Suppose I have 2 identical containers filled with a monoatomic ideal gas, and the only difference is that one is in zero gravity whereas the other is on the surface of a planet.
That should mean that they have different values for ##C_v##, right?
How different would depend on the value of g on the planet, I would think.
 
  • Like
Likes Demystifier
  • #79
Philip Koeck said:
Just to see what this leads to "in practice":
Suppose I have 2 identical containers filled with a monoatomic ideal gas, and the only difference is that one is in zero gravity whereas the other is on the surface of a planet.
That should mean that they have different values for ##C_v##, right?
How different would depend on the value of g on the planet, I would think.
Yes, but not only on ##g##. It also depends on the vertical size of the volume. If the volume has a small height ##h## so that ##mgh\ll kT##, then the effect of gravity is negligible. For typical atom mass ##m## at room temperature on Earth, I think this inequality is something like ##h\ll 1##km (you can check it by yourself).
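Taking up the "check it yourself" invitation, a quick estimate (the N2 molecule at room temperature is an assumed illustrative choice) puts the crossover height where ##mgh = kT## at roughly 10 km, consistent with the kilometre order of magnitude above:

```python
# Quick check of the m*g*h << k*T criterion. Constants are standard; taking an
# N2 molecule at room temperature is an illustrative assumption.
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_N2 = 4.65e-26      # mass of an N2 molecule, kg
G = 9.81             # m/s^2
T = 300.0            # K

h_scale = K_B * T / (M_N2 * G)   # height at which m*g*h equals k*T
print(h_scale)   # ≈ 9e3 m: gravity matters only for roughly km-tall columns
```

For any laboratory-sized container, ##mgh/kT## is then of order ##10^{-4}## or less, so the gravitational correction to ##C_v## is negligible.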
 
  • Like
Likes Philip Koeck
  • #80
Demystifier said:
Yes, but not only on ##g##. It also depends on the vertical size of the volume. If the volume has a small height ##h## so that ##mgh\ll kT##, then the effect of gravity is negligible. For typical atom mass ##m## at room temperature on Earth, I think this inequality is something like ##h\ll 1##km (you can check it by yourself).
For fun, one can also compute how large the temperature must be so that the thermal fluctuations can significantly lift up a stone with ##m=1##kg.
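The suggested fun computation, sketched with illustrative numbers (the 1 kg stone and the ~1 m lift height are assumptions):

```python
# Temperature at which a thermal fluctuation of size k*T matches the energy
# needed to lift a 1 kg stone by about 1 m (mass and height are illustrative).
K_B = 1.380649e-23   # Boltzmann constant, J/K
M, G, H = 1.0, 9.81, 1.0

T_lift = M * G * H / K_B
print(T_lift)   # ≈ 7.1e23 K, far beyond anything physically attainable
```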
 
  • Like
Likes Philip Koeck
  • #81
A little update. In this simple model, there's a wire (1 cm long, 0.01 cm x 0.01 cm cross-section) whose 2 ends are kept at different temperatures (300 K and 360 K), while an electric current of 6 amperes is injected near those 2 ends (but not quite at the same surfaces). I assumed a -1100 µV/K Seebeck coefficient that does not depend on temperature (if this weren't the case, the temperature profile would get more complicated, and similarly for the voltage profile). I assumed reasonable values for the thermal conductivity of a metal (around 100 W/(m K)) and so on and so forth. No temperature dependence of any parameter.

It's a finite element model with over 8.5 k nodes. Mesh picture:

mesh_pic.jpg


The total computed Joule heat is 0.032 W.
The temperature profile along the center of the wire looks like so:
temp_profile.jpg


The temperature gradient (direction of "cold to hot") looks like so:
grad_T.jpg


It looks like it points along the wire's length, changing direction and magnitude, passing through 0 as suggested by the temperature profile.

Now the long awaited heat flux:
heat_flux.jpg

It is dominated by the electric current in this case. Its direction is not from "hot to cold"; it is more complicated than that, and it depends on the direction of the electric current (and the sign of the Seebeck coefficient). What you cannot see in the picture above is that in the tiny region between the hot reservoir and where the current enters the material, the heat flux actually reverses direction.

I could plot more things, such as the current density, energy flux, entropy flux, voltage profile, etc. I could also deal with the case where I short circuit 2 spots on this wire, without passing any extra current but the one generated thanks to the voltage created by the Seebeck effect, to leave the system as unperturbed as possible.
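A hedged 1-D analogue of this setup (constant properties, no Thomson effect; this is a closed-form toy model, not the FEniCSx computation above, and the resistivity value is an assumption chosen to keep Joule heating modest):

```python
# 1-D analogue of the wire model: with constant kappa, rho and S, the steady
# state solves -kappa * T'' = rho * J^2 with fixed end temperatures, giving a
# parabolic T(x). Then evaluate J_Q = -kappa * T' + S * T * J along the wire.
KAPPA = 100.0            # thermal conductivity, W/(m K)
RHO = 1e-9               # resistivity, ohm m (assumed value)
S = -1100e-6             # Seebeck coefficient, V/K (sign as in the post)
L_WIRE = 0.01            # wire length, m
T0, T1 = 300.0, 360.0    # end temperatures, K
J = 6.0 / (1e-4) ** 2    # current density: 6 A through 0.01 cm x 0.01 cm

A = RHO * J * J / (2.0 * KAPPA)   # curvature of the temperature profile

def temp(x):
    """T(x) solving -kappa T'' = rho J^2 with T(0) = T0, T(L) = T1."""
    return T0 + (T1 - T0 + A * L_WIRE ** 2) * x / L_WIRE - A * x * x

def dtemp(x):
    return (T1 - T0 + A * L_WIRE ** 2) / L_WIRE - 2.0 * A * x

def heat_flux(x):
    """Generalized Fourier law J_Q = -kappa T' + S T J."""
    return -KAPPA * dtemp(x) + S * temp(x) * J

# At both ends the S*T*J term dwarfs conduction, so the heat-flux direction is
# set by the current (and the sign of S), not by the hot-to-cold direction.
print(heat_flux(0.0), heat_flux(L_WIRE))
```

Even in this toy version the temperature profile develops an internal maximum (so ##\nabla T## flips sign inside the wire) while the heat flux keeps the direction imposed by the ##ST\vec J_e## term.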
 
Last edited:
  • #82
Personally, I like the phenomenological approach of Caratheodory: https://eclass.uoa.gr/modules/document/file.php/CHEM105/Άρθρα/Caratheodory-1909.pdf
This has been brought to modern mathematical language by Lieb and Yngvason: https://www.sciencedirect.com/scien...n6FXF-_Ey0CR_eTbjMqwnJUsOSf349E__koQ6XIoC7w0A

There, empirical temperature is derived from the zeroth law, i.e. transitivity of equilibrium. This allows one to define equivalence classes of equilibrium states which are labeled by an empirical temperature ##\theta##. Heat flow is already a topic beyond equilibrium thermodynamics. As remarked before, it can also be from cold to hot if other driving forces dominate (e.g. chemical potential differences). The exception is reversible heat exchange between closed systems. The absolute temperature ##T## can be shown to be an integrating factor of the reversible heat exchanged between closed systems (which yields entropy), and it is a function of the empirical temperature alone, ##T = T(\theta)##. The sign of a temperature difference is then by definition linked to the direction of reversible heat exchange.
 
  • #83
dextercioby said:
TL;DR Summary: What is the physical interpretation of temperature of a mass of liquid or a solid?

Usually, the mental image of temperature is: an internal property of a bulk of matter, which typically describes the average kinetic plus rotational/vibrational energy of molecules. So we imagine a gas in which temperature is a measure of how quick the molecules are and how frequently they collide with one another. The higher the temperature, the more energy the molecules have.

Let's switch to a liquid at room temperature (water) or a crystalline solid (a bar of pure iron). How would you define their temperature? Would it for example be a measure of the average energy of the "electron gas" in the conduction band? What about liquid water, which lacks a rigid crystalline structure, being just a scatter of molecules kept together by vdW forces and "hydrogen bonds"?
I'll be honest. As a condensed matter physicist who worked at cryogenic temperatures, to me, temperature is what a thermocouple measures. But let me see if I can say a few things you might find helpful.

In crystalline solids, temperature modifies the Fermi-Dirac distribution of the electrons and the Bose-Einstein distribution of the phonons. Magnetism is a thing, but it's not my jam. Both electrons and phonons are travelling waves, so they have kinetic energy. The change in distribution lets me know that electrons and phonons move up to higher states.

Is it just the conduction electrons? The slope of the band determines the velocity of the wave. The core electrons will not overlap much at all, so those bands are essentially flat, and flat bands will not have a velocity. They are much like core electrons in atoms. The conduction and valence electrons will have non-zero slopes, so they will have kinetic energy.

Now here's where I start getting confused. It's not that all electron bands of higher energy have greater slopes (it is true in the free electron model, though), so I can imagine a pathological example where the electrons are in an s-like conduction band, which is highly dispersive (high slope), and transition to d-like valence bands of lesser slope. The kinetic energy would go down, and also the temperature, with an input of energy to the system. I have never heard of that. Is there some principle or symmetry at work that ensures this doesn't happen? I don't know. Maybe I need to think about changing internal energy with entropy.

That's it for my rough interpretation.

If we were to do a calculation, on the other hand, we would use the well-known thermodynamic and statistical relations. This would be the realm of computer calculations.

As for liquids, in my view, temperature is a measure of the average kinetic energy of the particles. Any quantum mechanical goings-on would act in a similar manner to the classical degrees of freedom we learn about and act like energy sinks that increase heat capacity (recall 1/2 kT).
 
  • #84
Dr_Nate said:
I'll be honest. As a condensed matter physicist who worked at cryogenic temperatures, to me, temperature is what a thermocouple measures. But let me see if I can say a few things you might find helpful.
Thermocouples measure a voltage, which gives information about the temperature difference between the junction (the tip, where you want to know the temperature) and the reference temperature of the apparatus/voltmeter. Only then is this voltage translated into a temperature. This is alright, unless you want to measure cryogenic temperatures (near absolute zero, or even below 1.5 K with liquid He). The reason is that the Seebeck coefficient of materials drops to 0 near absolute zero, so the voltage reading drops to zero too. This is no good for accuracy. Other kinds of thermometers are better suited.
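A minimal sketch of the readout point, assuming a toy linear ##S(T)## that vanishes at absolute zero (the ~41 µV/K room-temperature magnitude is loosely type-K-like, an assumption, not a calibration):

```python
# Toy thermocouple readout: the measured voltage is the integral of the
# relative Seebeck coefficient between the reference and tip temperatures.
# The linear S(T) below is an assumed toy model, not real calibration data.
def seebeck(t):
    """Assumed relative Seebeck coefficient S(T) in V/K, vanishing at T = 0."""
    return 41e-6 * t / 300.0

def thermo_voltage(t_ref, t_tip, steps=10_000):
    """Trapezoidal integral V = int_{t_ref}^{t_tip} S(T) dT."""
    dt = (t_tip - t_ref) / steps
    return sum(0.5 * (seebeck(t_ref + i * dt) + seebeck(t_ref + (i + 1) * dt))
               for i in range(steps)) * dt

print(thermo_voltage(293.0, 373.0))   # a few millivolts near room temperature
print(thermo_voltage(1.0, 2.0))       # sub-microvolt per kelvin near 0 K
```

Because ##S(T)\to 0## as ##T\to 0##, the voltage produced per kelvin of temperature difference collapses near absolute zero, which is exactly why thermocouples lose accuracy there.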
 
  • Like
Likes dextercioby