Stationary states -- Boltzmann distribution

In summary: the question was whether the Boltzmann distribution is equivalent to the ##|c_n|^2## of a pure state. The answer is that a system in thermodynamic equilibrium cannot be described by a ket vector at all; it requires a density matrix, and no assignment of phases turns the thermal density matrix into an equivalent pure state.
  • #1
Konte
Hello everybody,

- In quantum mechanics, the state ## | \psi \rangle ## of a system that is in thermodynamic equilibrium can be expressed as a linear combination of its stationary states ## | \phi _n \rangle ## : $$ | \psi \rangle = \sum_n c_n | \phi _n \rangle $$
This permits us to express the mean value of the energy as:
$$ \langle E \rangle _{\psi}= \sum_n E_n | c_n |^2 $$

- In another approach, the mean value of the energy can be expressed using the Boltzmann distribution. So my question is:
As the system is in thermodynamic equilibrium, is it allowed to think that the Boltzmann distribution ## \frac{N_i}{N} = \frac{g_i e^{-\frac{E_i}{k_BT}}}{Z(T)} ## is equivalent to the ## | c_n |^2 ##?
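To make the question concrete, here is a minimal Python/NumPy sketch; the three energy levels, degeneracies, and temperature are illustrative values only, not from any particular system. It computes the Boltzmann weights and the mean energy they yield, which has the same form as ##\sum_n |c_n|^2 E_n##:

```python
import numpy as np

# Hypothetical three-level system; all numbers are illustrative only.
E = np.array([0.0, 1.0, 2.5])   # energy levels E_i (units of k_B*T)
g = np.array([1, 2, 1])         # degeneracies g_i
beta = 1.0                      # 1/(k_B*T), with k_B*T = 1

# Boltzmann weights N_i/N = g_i exp(-beta E_i) / Z
w = g * np.exp(-beta * E)
Z = w.sum()
p = w / Z

# Mean energy <E> = sum_i p_i E_i -- the same form as sum_n |c_n|^2 E_n
E_mean = np.dot(p, E)
print(p, E_mean)
```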

Thank you everybody.

Konte
 
  • #2
Konte said:
In quantum mechanics, the state ## | \psi \rangle ## of a system that is in thermodynamic equilibrium can be expressed as a linear combination of its stationary states ## | \phi _n \rangle ## : $$ | \psi \rangle = \sum_n c_n | \phi _n \rangle $$
That's not correct: a system in thermodynamic equilibrium can't be described by a ket vector. Where did you get this idea? What we need is the density matrix; see Wikipedia.

Konte said:
So my question is:
As the system is in thermodynamic equilibrium, is it allowed to think that the Boltzmann distribution ## \frac{N_i}{N} = \frac{g_i e^{-\frac{E_i}{k_BT}}}{Z(T)} ## is equivalent to the ## | c_n |^2 ##?
If you use the density matrix, a similar relation is essentially correct. Note, however, that for a system of bosons or fermions, the Boltzmann distribution is replaced by the Bose-Einstein or Fermi-Dirac distribution, respectively.
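To illustrate this remark, a brief NumPy sketch of the three mean occupation numbers; the energy grid, inverse temperature, and chemical potential below are made-up illustrative values:

```python
import numpy as np

eps = np.linspace(0.1, 5.0, 50)  # single-particle energies (units of k_B*T)
beta, mu = 1.0, 0.0              # illustrative inverse temperature and chemical potential

n_boltzmann = np.exp(-beta * (eps - mu))          # Maxwell-Boltzmann
n_bose = 1.0 / (np.exp(beta * (eps - mu)) - 1.0)  # Bose-Einstein
n_fermi = 1.0 / (np.exp(beta * (eps - mu)) + 1.0) # Fermi-Dirac

# All three agree in the dilute limit exp(beta*(eps - mu)) >> 1
```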
 
  • #3
Thank you for the answer,

kith said:
That's not correct, a system in thermodynamic equilibrium can't be described by a ket vector. Where did you get this idea? What we need is the density matrix

Because I lack knowledge about the density matrix, I would like to be sure:
Let's say I want to describe and study a single ammonia molecule at room temperature. Even in this case, is the correct approach to use a density matrix?

Thanks.
Konte
 
  • #4
Konte said:
Because I lack knowledge about the density matrix, I would like to be sure:
Let's say I want to describe and study a single ammonia molecule at room temperature. Even in this case, is the correct approach to use a density matrix?
If you have a single molecule, then you can't really talk about its temperature. If it is exchanging energy with a reservoir at a given T, then yes, the density matrix formalism is necessary.
 
  • #5
Konte said:
Hello everybody,

- In quantum mechanics, the state ## | \psi \rangle ## of a system that is in thermodynamic equilibrium can be expressed as a linear combination of its stationary states ## | \phi _n \rangle ## : $$ | \psi \rangle = \sum_n c_n | \phi _n \rangle $$
This permits us to express the mean value of the energy as:
$$ \langle E \rangle _{\psi}= \sum_n E_n | c_n |^2 $$

- In another approach, the mean value of the energy can be expressed using the Boltzmann distribution. So my question is:
As the system is in thermodynamic equilibrium, is it allowed to think that the Boltzmann distribution ## \frac{N_i}{N} = \frac{g_i e^{-\frac{E_i}{k_BT}}}{Z(T)} ## is equivalent to the ## | c_n |^2 ##?
In QM, the state describes your knowledge about the system. Since ##c_n=| c_n |e^{i\varphi_n}##, knowing ##c_n## involves more information than knowing ##| c_n |##. If you know the ##c_n##, then you can write the expansion
$$ | \psi \rangle = \sum_n c_n | \phi _n \rangle $$
in which case you don't necessarily need to work with a density matrix. But often you know only ##| c_n |## and not ##c_n##, in which case you cannot write the expansion above. In this case, it is necessary to use the density matrix. Thermodynamic equilibrium is a special example of the case in which you know only ##| c_n |## and not ##c_n##.
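This distinction can be made concrete in a small sketch (the moduli and phases below are arbitrary illustrative numbers): knowing the phases yields a pure-state density matrix with off-diagonal coherences, while knowing only the moduli yields a diagonal mixed state; both give the same energy statistics.

```python
import numpy as np

# Illustrative two-level example (all numbers made up).
absc = np.array([np.sqrt(0.7), np.sqrt(0.3)])  # known moduli |c_n|
phi = np.array([0.0, 1.2])                     # phases, IF they are known

# Knowing c_n = |c_n| e^{i phi_n}: a pure state and its density matrix
c = absc * np.exp(1j * phi)
rho_pure = np.outer(c, c.conj())

# Knowing only |c_n|: the best description is the diagonal (mixed) matrix
rho_mixed = np.diag(absc**2)

# Same diagonal (same energy statistics), different off-diagonal coherences
print(np.diag(rho_pure).real, np.diag(rho_mixed).real)
print(rho_pure - rho_mixed)  # nonzero off-diagonal entries
```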
 
  • #6
Demystifier said:
In QM, the state describes your knowledge about the system. Since ##c_n=| c_n |e^{i\varphi_n}##, knowing ##c_n## involves more information than knowing ##| c_n |##. If you know the ##c_n##, then you can write the expansion
$$ | \psi \rangle = \sum_n c_n | \phi _n \rangle $$
in which case you don't necessarily need to work with a density matrix. But often you know only ##| c_n |## and not ##c_n##, in which case you cannot write the expansion above. In this case, it is necessary to use the density matrix. Thermodynamic equilibrium is a special example of the case in which you know only ##| c_n |## and not ##c_n##.
That doesn't seem right. You can of course write the energy expectation value of a thermal state ##\rho=\frac{e^{-\beta \hat H}}{Z}## as a sum ##\sum_n |c_n|^2 E_n##, but there is no pure state ##\left|\psi\right>=\sum_n c_n \left|E_n\right>## that reproduces the statistics of ##\rho## exactly, no matter what phase information you supply. There is, of course, a state that reproduces the energy expectation value, but the density matrix allows you to compute more than just the expectation value of the energy observable. If you want to reproduce the statistics of all observables, then no pure state can be found that matches the thermal state.
 
  • #7
DrClaude said:
If it is exchanging energy with a reservoir at a given T, then yes, the density matrix formalism is necessary.

Thanks for your answer. So, for the case of this single molecule, how can I model this exchange of energy with a reservoir at a given T?
 
  • #8
rubi said:
There is, of course, a state that reproduces the energy expectation value,
How do we construct that state? Maybe in the same way as I mentioned in my first post?

rubi said:
If you want to reproduce the statistics of all observables, then no pure state can be found that matches the thermal state.

That is really interesting. Is there a mathematical argument or formal demonstration that proves it once and for all?

Thanks a lot.
 
  • #9
Konte said:
How do we construct that state? Maybe in the same way as I mentioned in my first post?
One example would be ##\rho_\text{pure}=\left|\Psi\right>\left<\Psi\right|## with ##\left|\Psi\right>=\frac{1}{\sqrt{Z}}\sum_n e^{-\frac{1}{2}\beta E_n}\left|E_n\right>##, but you can also add arbitrary phases as Demystifier has mentioned. However, this state is not statistically equivalent to the density matrix ##\rho_\text{mixed}=\frac{1}{Z}e^{-\beta \hat H}##. It only reproduces the energy expectation value.

Konte said:
That is really interesting. Is there a mathematical argument or formal demonstration that proves it once and for all?
Take the projector ##P=\left|\theta\right>\left<\theta\right|## onto the state ##\left|\theta\right>=\frac{1}{\sqrt{2}}\left(\left|E_1\right>+\left|E_2\right>\right)##. You get ##\left<P\right>_\text{pure}=\frac{1}{2 Z}\left|e^{-\frac{1}{2}\beta E_1}+e^{-\frac{1}{2}\beta E_2}\right|^2## but ##\left<P\right>_\text{mixed}=\frac{1}{2 Z}\left(e^{-\beta E_1}+e^{-\beta E_2}\right)##. I did the calculation in my head, so the factors may not be exactly right. Anyway, this means that the states predict different probabilities for finding the state ##\left|\theta\right>## in a measurement. If you give me a different state ##\left|\Psi\right>## with different phases to fix the problem for this particular ##P##, I will just add a relative phase factor in ##\left|\theta\right>## to make the problem reappear.
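The factors above can be checked numerically; a minimal two-level sketch with made-up energies:

```python
import numpy as np

# Two-level check of the expectation values above (illustrative energies).
E1, E2, beta = 0.0, 1.0, 1.0
w = np.exp(-beta * np.array([E1, E2]))
Z = w.sum()

psi = np.sqrt(w / Z)                       # |Psi> = sum_n e^{-beta E_n/2}/sqrt(Z) |E_n>
rho_mixed = np.diag(w / Z)                 # thermal density matrix

theta = np.array([1.0, 1.0]) / np.sqrt(2)  # |theta> = (|E_1> + |E_2>)/sqrt(2)
P = np.outer(theta, theta)                 # projector |theta><theta|

exp_pure = abs(theta @ psi)**2             # <P>_pure
exp_mixed = np.trace(rho_mixed @ P).real   # <P>_mixed
print(exp_pure, exp_mixed)                 # different -> states are inequivalent
```

For this choice ##\left<P\right>_\text{mixed}=1/2## while ##\left<P\right>_\text{pure}=\frac{1}{2}(1+2\sqrt{p_1 p_2})\ge 1/2##, so the two states are distinguishable whenever both levels are populated.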
 
  • #10
rubi said:
That doesn't seem right.
What exactly "doesn't seem right"? I don't see any contradiction between my statements and your statements.
 
  • #11
Demystifier said:
What exactly "doesn't seem right"? I don't see any contradiction between my statements and your statements.
Maybe I have misunderstood you, but it seemed to me as if you were claiming that in a thermal state we know ##\left|c_n\right|## and once you supply the phase information, i.e. a list of numbers ##\varphi_n## such that ##c_n=\left|c_n\right|e^{\mathrm i\varphi_n}##, we know the "real" state of the system. But that can't be right, because no matter what list of numbers ##\varphi_n## you give me, I will always find an observable such that its statistics in the thermal state doesn't match its statistics in the state with the additional phase information incorporated. Hence, it can't be true that there is additional phase information and we just don't know it. The ##c_n## can't be the coefficients of a pure state whose phase information we have dropped.
 
  • #12
rubi said:
Maybe I have misunderstood you, but it seemed to me as if you were claiming that in a thermal state we know ##\left|c_n\right|## and once you supply the phase information, i.e. a list of numbers ##\varphi_n## such that ##c_n=\left|c_n\right|e^{\mathrm i\varphi_n}##, we know the "real" state of the system. But that can't be right, because no matter what list of numbers ##\varphi_n## you give me, I will always find an observable such that its statistics in the thermal state doesn't match its statistics in the state with the additional phase information incorporated. Hence, it can't be true that there is additional phase information and we just don't know it. The ##c_n## can't be the coefficients of a pure state whose phase information we have dropped.
Well, one should distinguish what one can (in principle) know about a physical system from what one actually knows about that system. It is certainly possible that someone does not know something which in principle can be known. For example, there can be a system in which the ##\varphi_n## can be known, but someone does not know them. In this case, that person with incomplete information will describe the system with a mixed (e.g. thermal) state, despite the fact that more complete information is also possible. Of course, that person may perform additional measurements and obtain additional information about the phases, after which he can describe the system with a pure state rather than a mixed state. But before that, the best he can do is describe the system with a mixed state.
 
  • #13
Demystifier said:
Of course, that person may perform additional measurements and obtain additional information about the phases, after which he can describe the system with a pure state rather than a mixed state.
But that new state would be statistically inconsistent with the thermal state, and as far as we know, the statistics predicted by the thermal state are consistent with experiments, so collecting additional information should not require us to modify the state.
 
  • #14
rubi said:
But that new state would be statistically inconsistent with the thermal state, and as far as we know, the statistics predicted by the thermal state are consistent with experiments, so collecting additional information should not require us to modify the state.
i) Suppose that I have a machine with two buttons: button A and button B. When I press A, the machine prepares a particle in the state ##|A\rangle##. When I press B, the machine prepares the state ##|B\rangle##. I press one of these buttons, but I don't tell you which one I pressed. What can you say about the state before you make any measurement? Can your knowledge at that time be expressed as
$$\rho=\frac{1}{2}|A\rangle \langle A| + \frac{1}{2}|B\rangle \langle B| ?$$

ii) Now consider a variation of the experiment in which I have 1000 machines of the kind above. For each of them I press either A or B. (I am not obliged to press the same button each time.) In this way I prepare 1000 particles, each of which is either in the state ##|A\rangle## or ##|B\rangle##. You pick one of these particles. What can you say about the state of that particle before you make any measurement? Can your knowledge of that particle at that time be expressed as
$$\rho=\frac{1}{2}|A\rangle \langle A| + \frac{1}{2}|B\rangle \langle B| ?$$
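A sketch of the button experiment, taking ##|A\rangle## and ##|B\rangle## to be orthogonal qubit basis states (an assumption made purely for illustration); it contrasts the proper mixture above with a coherent superposition:

```python
import numpy as np

# Illustrative qubit realisation of the button experiment: |A> and |B>
# are taken as the computational basis states (a made-up choice).
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])

# Knowledge before any measurement: an equal-weight proper mixture
rho = 0.5 * np.outer(A, A) + 0.5 * np.outer(B, B)

# Contrast with the superposition (|A> + |B>)/sqrt(2), which is a pure state
psi = (A + B) / np.sqrt(2)
rho_super = np.outer(psi, psi)

print(rho)        # diag(1/2, 1/2) -- no coherences
print(rho_super)  # has off-diagonal 1/2 entries
```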
 
  • #15
Neither of these states is likely to describe the experiments correctly. In the first case, I can only make one experiment, and in the second case I don't know the probabilities for pressing A or B. The states are just best guesses based on the maximum entropy method. However, this method only makes sense if you can improve your knowledge by Bayesian inference. In physics, this makes no sense, since the microscopic dynamics of a system is governed by the laws of physics, and the approach to equilibrium is a physical process that doesn't depend on our knowledge. The laws of physics are supposed to single out a definite state. Only after the laws of physics have made the system approach some equilibrium state can we measure it, compute its entropy, and find that it happens to correspond to a maximum entropy distribution. However, the maximum entropy method doesn't explain the state. A MaxEnt advocate would expect that the prior state would have to be adjusted using Bayesian inference, but that would require one to modify it. Experimentally, however, thermal states don't seem to require any modification. The only explanation can be that the theory generically predicts an approach to equilibrium.
 
  • #16
rubi said:
Neither of these states is likely to describe the experiments correctly. In the first case, I can only make one experiment, and in the second case I don't know the probabilities for pressing A or B. The states are just best guesses based on the maximum entropy method. However, this method only makes sense if you can improve your knowledge by Bayesian inference. In physics, this makes no sense, since the microscopic dynamics of a system is governed by the laws of physics, and the approach to equilibrium is a physical process that doesn't depend on our knowledge. The laws of physics are supposed to single out a definite state. Only after the laws of physics have made the system approach some equilibrium state can we measure it, compute its entropy, and find that it happens to correspond to a maximum entropy distribution. However, the maximum entropy method doesn't explain the state. A MaxEnt advocate would expect that the prior state would have to be adjusted using Bayesian inference, but that would require one to modify it. Experimentally, however, thermal states don't seem to require any modification. The only explanation can be that the theory generically predicts an approach to equilibrium.
If I understood you correctly, you claim that the quantum state is an objective thing that does not depend on our knowledge. I would agree that this is so for pure states, but I don't agree that it is so for mixed states. In addition, many physicists would even disagree that pure states are objective. This is a somewhat controversial topic, which depends on the interpretation of QM, so perhaps it's better to stop further discussion.
 
  • #17
No, I'm not making any interpretational statements. The situation is analogous to classical statistical mechanics. In principle, you could also use the MaxEnt method to arrive at the classical canonical ensemble, but it's not physically correct to do so. The statistical ensembles must arise from a relaxation into equilibrium, since the laws of physics are defined by the microscopic theory and no additional input should be required. That's why people study ergodic theory, the Fokker-Planck equation, and so on. The same reasoning applies to quantum statistical mechanics.

In other words: Do you think the MaxEnt formalism is a valid way to derive the canonical ensemble in classical statistical mechanics? If not, why would it be valid in quantum mechanics? If yes, how do you explain that it describes the correct statistics even though no Bayesian updating is needed?
 
  • #18
Well, according to the H-theorem, the equilibrium state is the state of maximum entropy under the constraints set by conservation laws, and this leads to the (grand-)canonical statistical operators
$$\hat{\rho}=\frac{1}{Z} \exp[-\beta (\hat{H}-\sum_j \mu_j \hat{Q}_j)], \quad Z=\mathrm{Tr} \exp(\ldots).$$
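As a toy illustration of this operator: a single fermionic mode with a made-up energy and chemical potential (SciPy is assumed to be available for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# Toy grand-canonical operator for a single fermionic mode (illustrative).
# H = eps * n and Q = n, with n = diag(0, 1) in the occupation basis.
eps, beta, mu = 1.0, 1.0, 0.2  # made-up parameters
n = np.diag([0.0, 1.0])
H = eps * n
Q = n

K = expm(-beta * (H - mu * Q))
Z = np.trace(K)
rho = K / Z
print(rho)  # diagonal occupation probabilities of the mode
```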
 
  • #19
rubi said:
Do you think the MaxEnt formalism is a valid way to derive the canonical ensemble in classical statistical mechanics?
Yes I do.

In addition, let me also note that I think approaches based on (quasi)ergodicity are not a valid way to derive the canonical ensemble in classical statistical mechanics. If you want, I can explain why I think so.

rubi said:
If yes, how do you explain that it describes the correct statistics even though no Bayesian updating is needed?
Sometimes it is needed. For instance, if you measure the velocity of a single molecule, then the probability distribution for this molecule is no longer given by the Maxwell-Boltzmann distribution.

I am glad that you translated the problem into a discussion of classical statistical mechanics, where the concepts are much clearer.
 
  • #20
vanhees71 said:
Well, according to the H-theorem, the equilibrium state is the state of maximum entropy under the constraints set by conservation laws, and this leads to the (grand-)canonical statistical operators
$$\hat{\rho}=\frac{1}{Z} \exp[-\beta (\hat{H}-\sum_j \mu_j \hat{Q}_j)], \quad Z=\mathrm{Tr} \exp(\ldots).$$
Yes, but the H-theorem assumes that some information about the system is ignored. The ignored information is information which can be known in principle, but not in practice. It usually corresponds to fine details which are ignored by coarse-graining. If nothing is ignored, i.e. if all microscopic degrees of freedom are taken into account, then the unitarity of QM implies that a pure state evolves into a pure state and the von Neumann entropy does not change. The ignorance of information is a subjective thing, so in this sense the increase of entropy via the H-theorem can be considered subjective.
 
  • #21
Of course, a lot of information is ignored. The equilibrium assumption is in fact the minimal information about the system you can have. If you have complete information, you don't need a maximum entropy principle, since then the pure state of the system is determined (implying determined values for a complete set of compatible observables).
 
  • #22
Let me also make a quote from the book L.D. Landau and E.M. Lifshitz "Statistical Physics" (3rd edition), page 18:
"The averaging by means of the statistical matrix according to (5.4) has a twofold nature. It comprises both the averaging due to the probabilistic nature of the quantum description (even when as complete as possible) and the statistical averaging necessitated by the incompleteness of our information concerning the object considered. For a pure state only the first averaging remains, but in statistical cases both types of averaging are always present."

I think this confirms my claims about the density matrix (called the statistical matrix by Landau and Lifshitz) and contradicts some claims by @rubi in #15.
 
  • #23
If you know that the system is prepared in a pure state, you have the most complete knowledge about this system possible. It implies that the system has determined values for a complete set of compatible observables, and you can't know more about the system than that. Any observable not compatible with this set of observables (almost always) does not have a determined value, and you know only the probabilities of finding a certain value when measuring it.

If you have a mixed state (that is not a pure state), you don't have the most complete possible knowledge about the system. Usually you then know only the probabilities of finding a possible value of any observable you measure. Of course, there are exceptions.

You can, however, have a determined value for one observable. Then the statistical operator is of the form
$$\hat{\rho}=\frac{1}{N} \sum_{j=1}^N |a,j \rangle \langle a,j|,$$
where ##\{|a,j \rangle\}_{j \in \{1,2,\ldots,N \}}## is a complete orthonormal set of eigenvectors of the observable ##\hat{A}## with eigenvalue ##a##, and ##j## just labels the eigenvectors. In such a case there is at least one more observable that is compatible with ##\hat{A}## but whose value you don't know. The above state is then the state of maximum entropy compatible with knowing the value of ##A## (the principle of least prejudice à la Shannon and Jaynes).

Another funny thing about QT is that, if you have complete knowledge of a composite system, it may be that you can never have complete knowledge about parts of the system. One example is the polarization-entangled two-photon state usually used in Bell-test experiments. It's a pure state
$$|\Psi \rangle=\frac{1}{\sqrt{2}}(|HV \rangle-|VH \rangle),$$
i.e., you have the most complete possible knowledge concerning the polarization state of the entire system of two photons. Yet for each of the single photons the state is that of maximum entropy: ##\hat{\rho}_A=\mathrm{Tr}_B |\Psi \rangle \langle \Psi|=\frac{1}{2} \hat{1}##. Each photon has maximally uncertain polarization, although we have complete information about the polarization state of the system as a whole.
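This reduced state can be verified with a short partial-trace computation, written in the basis ##\{|HH\rangle, |HV\rangle, |VH\rangle, |VV\rangle\}##:

```python
import numpy as np

# The polarization-singlet state |Psi> = (|HV> - |VH>)/sqrt(2),
# written in the basis {|HH>, |HV>, |VH>, |VV>}.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over photon B: reshape to (2,2,2,2) and sum over the B indices
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
print(rho_A)  # 0.5 * identity: maximally mixed single-photon polarization
```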
 

1. What is the Boltzmann distribution and how does it relate to stationary states?

The Boltzmann distribution is a probability distribution that describes how particles are distributed over energy levels in a system at thermal equilibrium. It relates to stationary states by giving the equilibrium population of each stationary state (energy level) in a steady state, where there is no net flow of energy between levels.

2. How is the Boltzmann distribution derived?

The Boltzmann distribution is derived from the principles of statistical mechanics, specifically the Boltzmann factor which relates the energy of a state to its probability. It is also derived from the concept of entropy and the principle of maximum entropy.
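A sketch of that maximum-entropy derivation: maximizing ##S = -k_B \sum_i p_i \ln p_i## subject to normalization and a fixed mean energy, with Lagrange multipliers ##\lambda## and ##\beta##, gives

$$\frac{\partial}{\partial p_i}\left[-k_B \sum_j p_j \ln p_j - \lambda \Big( \sum_j p_j - 1 \Big) - k_B \beta \Big( \sum_j p_j E_j - \langle E \rangle \Big) \right] = 0 \quad \Rightarrow \quad p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},$$

where ##\beta## is identified with ##1/k_B T## by comparison with thermodynamics.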

3. What are the assumptions made in the Boltzmann distribution?

The assumptions made in the Boltzmann distribution include that the system is in thermal equilibrium, the particles in the system are non-interacting, and the energy levels are discrete.

4. What is the significance of the Boltzmann distribution in thermodynamics?

The Boltzmann distribution is significant in thermodynamics as it provides a way to calculate the most probable distribution of energy in a system at thermal equilibrium. It is also used to calculate thermodynamic quantities such as the internal energy, entropy, and free energy.
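For example, once the partition function ##Z(T)## is known, the standard relations give these quantities directly:

$$U = -\frac{\partial \ln Z}{\partial \beta}, \qquad F = -k_B T \ln Z, \qquad S = \frac{U - F}{T} = k_B \left( \ln Z + \beta U \right).$$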

5. How is the Boltzmann distribution used in practical applications?

The Boltzmann distribution is used in practical applications such as in the study of gas behavior, chemical reactions, and electronic systems. It is also used in fields such as astrophysics, where it can be used to model the distribution of energy in stars and galaxies.
