Calculating the number of energy states using momentum space

  • #51
BvU said:
It's the density of states as a function of E (or ##|\vec p|##) that is of interest here. Not how grainy it is for infinitesimal ##dp##.

But isn't the density of states deduced from ##dp## through ##\frac{dN}{dp}##, which carries the inaccuracy further?

Also, probability calculations using formulas such as the Maxwell-Boltzmann distribution are based on the number of states within an infinitesimally small increment like ##dv##.
 
  • #52
Look at the numbers in your link, e.g. in example 2.3 and figure 2.4
 
  • #53
BvU said:
Look at the numbers in your link, e.g. in example 2.3 and figure 2.4
Sorry, I'm not sure which link and example you're talking about. Can't find any example 2.3 or figure 2.4 in the link I gave in post #45.
 
  • #54
Post #4, way back when. It works out your whole conundrum ...
 
  • #55
BvU said:
Post #4, way back when. It works out your whole conundrum ...

Sorry for the very late reply. Just checked the link and noticed that the density of states actually increases as the energy ##E## increases. This would mean that the approximation of the number of states per ##dE## by n-space geometry becomes increasingly accurate. The approximation of the number of states per ##dE## is relatively least accurate at very low energies ##E##, since there are relatively few quantum states there.

Please correct me if I'm wrong.

Also, one other thing: I read that the density of states ##N'## in terms of energy (without the factor of 2 for the two possible spins) is:
$$N' = \frac{V \cdot 4\pi \cdot \sqrt{2} \cdot m^{1.5}}{h^3} \cdot \sqrt{E}$$
I also read that the number of states density in terms of momentum is:
$$N' = \frac{V \cdot 4\pi \cdot p^2}{h^3}$$
I can't seem to derive them from one another.
 
  • #56
From (2.4.3) with ##p= \hbar k##: $$
N = 2 \times{1\over 8} \times \left(L\over \pi\right)^3 \times {4\over 3}\pi k^3 ={{4\over 3}\pi V p^3\over h^3 }\Rightarrow dN ={4\pi V p^2\over h^3 }\ dp
$$ With ##\quad E= \displaystyle {{ p^2\over 2m} \Rightarrow dE = { p\over m} dp \Rightarrow { dp\over dE } = { m\over p} }\quad## you use ##\quad\displaystyle {{dN\over dE } = { dN\over dp} {dp\over dE}}\quad ## to get $$
dN ={4\pi V p^2\over h^3 }\ {m\over p}\ dE = {4\pi\; V \sqrt{2mE} \over h^3 } m \ dE $$
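As a sanity check on this counting argument, here is a small numeric sketch (Python, natural units; the cutoff radii are arbitrary illustrative values): it counts the actual lattice points ##(n_x, n_y, n_z)## in the octant and compares with the volume formula ##N = 2\cdot\frac{1}{8}\cdot\frac{4}{3}\pi R^3##.

```python
import numpy as np

# Count quantum states (n_x, n_y, n_z >= 1, times 2 for spin) inside an
# octant of radius R in n-space and compare with the volume estimate
# N(R) = 2 * (1/8) * (4/3) * pi * R^3 used in the derivation above.
def count_states(R):
    n = np.arange(1, int(R) + 2)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    return 2 * np.count_nonzero(nx**2 + ny**2 + nz**2 <= R**2)

for R in (10, 30, 100):
    exact = count_states(R)
    approx = 2 * (np.pi / 6) * R**3
    print(R, exact, round(approx), exact / approx)
```

The ratio tends to 1 as ##R## grows; the deviation is a surface term of relative order ##1/R##, which is why the continuum approximation is worst at low energies.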
 
  • #57
BvU said:
From (2.4.3) with ##p= \hbar k##: $$
N = 2 \times{1\over 8} \times \left(L\over \pi\right)^3 \times {4\over 3}\pi k^3 ={{4\over 3}\pi V p^3\over h^3 }\Rightarrow dN ={4\pi V p^2\over h^3 }\ dp
$$ With ##\quad E= \displaystyle {{ p^2\over 2m} \Rightarrow dE = { p\over m} dp \Rightarrow { dp\over dE } = { m\over p} }\quad## you use ##\quad\displaystyle {{dN\over dE } = { dN\over dp} {dp\over dE}}\quad ## to get $$
dN ={4\pi V p^2\over h^3 }\ {m\over p}\ dE = {4\pi\; V \sqrt{2mE} \over h^3 } m \ dE $$

Thanks a lot. I wrongly assumed it could be done merely by rewriting ##\sqrt{E}## in terms of momentum, thinking this would somehow transform the derivative into the density of states per unit momentum.

Is my statement in my previous post before the question about the formula more or less correct?
 
  • #58
I'd say yes.
 
  • #59
BvU said:
I'd say yes.

Thanks for verifying. I noticed something peculiar that I hope you could help me with.

I know that for each increment ##dp##, a shell containing a certain number of quantum states gets added to an 8th of a sphere in n-space, increasing its radius. I concluded that the radius of that 8th sphere in n-space in terms of momentum is:
$$R=\frac{p \cdot 2L}{h} = (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$
I want to calculate the thickness ##ΔR## of each n-shell that gets added to the 8th n-sphere when each increment ##dp## is added to a certain momentum ##p##. According to the above formula, this should be:
$$ΔR = \frac{(p+dp) \cdot 2L}{h} - \frac{p \cdot 2L}{h} = \frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$
According to this formula, each n-shell that gets added to the n-sphere decreases in thickness as each ##dp## is added to a larger value of ##p##.
I find this very weird, because the derivative ##\frac{dN}{dp}## actually shows that the number of quantum states per ##dp## increases exponentially as ##p## gets larger. How can an exponentially increasing number of quantum states per ##dp## fit into an n-shell with decreasing thickness per ##dp##? I'm aware that each added n-shell also increases in surface area, but that does not compensate enough for the decreasing thickness to make the number of quantum states in each n-shell grow exponentially. For an exponentially increasing number of quantum states per ##dp##, I would expect n-shells of at least a fixed thickness.

How is this possible? Is there something wrong in my calculation?
 
  • #60
You already had $$ dN ={4\pi V p^2\over h^3 }\ dp $$ You know how to differentiate $$R = \frac{ 2Lp}{h}\Rightarrow dR = \frac{ 2L}{h} \;dp$$ the volume in there (between ##R+dR## and ##R## in n-space) increases with ##R^2## -- as discussed.

All clean and consistent. Why not move on to the next chapter ?
 
  • #61
JohnnyGui said:
the derivative ##\frac{dN}{dp}## actually shows that the number of quantum states per ##dp## increases exponentially as ##p## gets larger. How can an exponentially increasing number of quantum states per ##dp## fit into an n-shell with decreasing thickness per ##dp##
That is not exponentially but quadratically. And the 'delta-thickness' is constant.
 
  • #62
BvU said:
That is not exponentially but quadratically. And the 'delta-thickness' is constant.
Apologies, I indeed meant quadratically the whole time. And I expected the thickness should be constant but there's a problem, please see below.

BvU said:
You know how to differentiate $$R = \frac{2Lp}{h}\Rightarrow dR = \frac{ 2L}{h} \;dp$$

This differentiation is consistent with the formula that I wrote in my previous post:
$$dR = \frac{(p+dp) \cdot 2L}{h} - \frac{p \cdot 2L}{h} = \frac{2L}{h} dp$$
This indeed shows a fixed thickness of the n-shells. But when I simply rewrite this equation in terms of the corresponding n-sphere radii...
$$dR=\frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$
...then it shows that the thickness decreases with higher ##p## values.
What is exactly wrong with this rewrite of the formula? Doesn't the n-sphere's radius get increased with a factor of ##\frac{p+dp}{p}## every time an n-shell gets added to it? You can see that by ##\frac{p+dp}{p} \cdot \frac{2Lp}{h} = \frac{(p+dp)2L}{h}##
 
  • #63
I suppose you meant to place brackets around $$dR=\frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$like this$$dR=\frac{p+dp}{p}\cdot\left( (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}} \right) $$which means $$
dR=\frac{p+dp}{p}\cdot 0 \quad ?$$In short: you forgot to work out ## n(|p+dp|)## for the ##n_i## in the first term.
 
  • #64
BvU said:
I suppose you meant to place brackets around $$dR=\frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$like this$$dR=\frac{p+dp}{p}\cdot\left( (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}} \right) $$which means $$
dR=\frac{p+dp}{p}\cdot 0 \quad ?$$In short: you forgot to work out ## n(|p+dp|)## for the ##n_i## in the first term.

No, that's not how I meant it because only the first term ##(n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}## is a factor ##\frac{p+dp}{p}## larger with respect to the second one, in order to get the difference, i.e. the thickness. So the thickness is still decreasing when ##p## increases. Besides, putting them both in brackets would give a thickness of ##dR =0 ## which is incorrect, right?
 
  • #65
##n## depends on ##p##
 
  • #66
BvU said:
##n## depends on ##p##
So according to that, although the following is correct:
$$\frac{2Lp}{h} = (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$
This still means that the following is incorrect?
$$\frac{p + dp}{p} \cdot \frac{2Lp}{h} =\frac{p + dp}{p} \cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$
 
  • #67
No, that's correct :smile:.
 
  • #68
JohnnyGui said:
But when I simply rewrite this equation in terms of the corresponding n-sphere radii...
$$dR=\frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$
...then it shows that the thickness decreases with higher ##p## values.
No it does not:$$dR=\frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}} = \frac{dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$and in #66 your first equation shows that this is equal to $$dR=\frac{2L}{h} dp$$I repeat: bottom line of #60. There's much more interesting stuff ahead.
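For what it's worth, the identity can also be checked numerically; a minimal sketch with arbitrary placeholder values for ##L##, ##h## and ##dp##:

```python
# Check that the "rewritten" shell thickness (p+dp)/p * R - R equals the
# constant (2L/h) * dp for any p; L, h and dp are arbitrary placeholder values.
L, h, dp = 1.0, 1.0, 1e-3
for p in (1.0, 5.0, 50.0):
    R = 2 * L * p / h                 # n-sphere radius at momentum p
    dR = (p + dp) / p * R - R         # equals R * dp/p = (2L/h) * dp
    assert abs(dR - 2 * L / h * dp) < 1e-9
print("shell thickness per dp is constant:", 2 * L / h * dp)
```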
 
  • #69
BvU said:
No it does not:$$dR=\frac{p+dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}- (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}} = \frac{dp}{p}\cdot (n_x^2 + n_y^2+n_z^2)^{\frac{1}{2}}$$and in #66 your first equation shows that this is equal to $$dR=\frac{2L}{h} dp$$I repeat: bottom line of #60. There's much more interesting stuff ahead.

Sorry for the late reply. I finally got it and can't believe I was actually missing something so obvious. I kept considering the ##(n^2_x + n_y^2 + n_z^2)## parameter to be constant no matter what the value of ##p## is. o:)

I am now combining the states density with Boltzmann statistics to understand the Maxwell-Boltzmann distribution. Sorry if this is a bit off-topic, but one thing that bothers me is the following. For the derivation it is assumed that the collisions between particles are perfectly elastic and that the system is in thermal equilibrium. Furthermore, the particles are of a single gas and thus all have the same mass.

But if this is the case, how is it assumed that particles in a container can have different kinetic energies? What other factors than elasticity, mass and temperature can change the kinetic energy of a colliding particle?
 
  • #70
What makes you think they should all have the same kinetic energy ?
JohnnyGui said:
What other factors than elasticity, mass and temperature can change the kinetic energy of a colliding particle?
The collisions themselves !
 
  • #71
BvU said:
What makes you think they should all have the same kinetic energy ?
The collisions themselves !

I thought that perfectly elastic collisions among identical particles, which is assumed for the derivation, would keep a particle's kinetic energy more or less constant. Please elaborate if this is incorrect.
 
  • #72
JohnnyGui said:
Please elaborate if this is incorrect
Very incorrect !
Experiment with sliding coins over a smooth table
 
  • #73
BvU said:
Very incorrect !
Experiment with sliding coins over a smooth table

I think it depends on the starting scenario of the system with a certain equilibrium temperature. If each particle has the same kinetic energy at the very start, then I can't conclude anything other than the kinetic energy of each particle staying constant, because of the perfectly elastic collisions. If, however, the particles differ in kinetic energy at the very start (the temperature has yet to reach equilibrium), then I would understand why particles can have different kinetic energies in the system, even in the presence of perfectly elastic collisions.
 
  • #74
Did you try the coins? Did the kinetic energy of each and every coin remain constant?
Did you ever have to do an exercise with hard ball elastic collisions ? What is conserved ?
 
  • #75
JohnnyGui said:
If each particle has the same kinetic energy initially at the very start

Which they won't. A given equilibrium temperature only means the average kinetic energy of the particles is a certain value. It does not mean that every single particle has that kinetic energy.

I think you need to read the article on the kinetic theory of gases more carefully.
 
  • #76
PeterDonis said:
Which they won't. A given equilibrium temperature only means the average kinetic energy of the particles is a certain value. It does not mean that every single particle has that kinetic energy.

Two questions arise from this.

1. So if each particle does have the same kinetic energy initially at the very start, is it correct that each particle's kinetic energy stays constant after perfectly elastic collisions?

2. The reason that they don't have the same kinetic energy at the very start is because the final equilibrium temperature is yet to be reached?
 
  • #77
JohnnyGui said:
if each particle does have the same kinetic energy initially at the very start

This is much, much too improbable to have any chance of being observed. Remember we're talking about something like ##10^{23}## particles in a typical container of gas.

JohnnyGui said:
is it correct that each particle's kinetic energy stays constant after perfectly elastic collisions?

In the center of mass frame of the collision, yes, this will be true. But kinetic energy is frame-dependent, so it will not, in general, be true in the rest frame of the gas as a whole.

JohnnyGui said:
The reason that they don't have the same kinetic energy at the very start is because the final equilibrium temperature is yet to be reached?

No. Go read my post #75 again, carefully.
 
  • #78
PeterDonis said:
No. Go read my post #75 again, carefully.

I did, but I don't see how this post answers my question. It states that an equilibrium temperature corresponds to a certain average kinetic energy of the particles, not to every single particle having that same kinetic energy. This is clear to me.

My question is more directed towards why particles don't have the same kinetic energy at the very start even if perfect elastic collisions are considered. I have a hard time grasping "rest frame of the gas as a whole" because a gas consists of particles going in different directions and thus each particle having its own rest frame.
 
  • #79
BvU said:
Experiment with sliding coins over a smooth table
 
  • #80
BvU said:
Experiment with sliding coins over a smooth table

My posted conclusion and question were deduced from this experiment. I have difficulty choosing the starting scenario; in the case of 2 coins, should I give both coins the same velocity before the collision, or should one stay still? If it's the latter, then I would conclude that the reason particles don't have the same kinetic energy at the equilibrium temperature is that they had different kinetic energies before that equilibrium temperature was reached.
 
  • #81
Either. Only precisely head-on collisions of equal coins with equal but opposite velocities conserve the kinetic energies of both coins. Chance of one in very, very many.
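To make the coin picture concrete: in a one-dimensional head-on elastic collision of equal masses, conservation of momentum and kinetic energy forces the two velocities to be exchanged. A tiny sketch (the velocities are arbitrary illustrative numbers):

```python
# Equal-mass 1D elastic collision: momentum and kinetic-energy conservation
# leave only the trivial solution (no collision) and the exchange solution.
def elastic_1d(v1, v2):
    return v2, v1  # velocities are exchanged

print(elastic_1d(3.0, 0.0))   # the moving coin stops; the other takes all its KE
print(elastic_1d(2.0, -2.0))  # equal but opposite speeds: each coin's KE unchanged
```

So each coin's individual kinetic energy changes in every collision except when the two speeds happen to be equal.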
 
  • #82
JohnnyGui said:
It states that a characteristic of an equilibrium temperature is having an average kinetic energy and not every particle having that same kinetic energy. This is clear to me.

Ok, good.

JohnnyGui said:
My question is more directed towards why particles don't have the same kinetic energy at the very start even if perfect elastic collisions are considered.

Because elastic collisions conserve the total kinetic energy of the two colliding particles. They don't conserve the kinetic energies of the two particles individually except in the very rare case where the combined momentum of the two particles is zero.

JohnnyGui said:
I have a hard time grasping "rest frame of the gas as a whole" because a gas consists of particles going in different directions and thus each particle having its own rest frame.

You're confused about frames. I can pick any frame I like to analyze the situation; there is no need to use a different frame for every particle just because each particle has a different velocity. The rest frame of the gas as a whole is the frame in which the center of mass of the gas as a whole is at rest. When we talk about the temperature of a gas being the average kinetic energy of its particles, we mean the average kinetic energy in that frame, the frame in which the center of mass of the gas is at rest. And in that frame, virtually all collisions will change the kinetic energies of both particles.
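This can be illustrated with a toy simulation (a Python sketch; the pairwise collision rule, particle count, and number of steps are all arbitrary modelling choices): start every particle with the same speed, apply random equal-mass elastic collisions that conserve total momentum and kinetic energy, and watch the individual kinetic energies spread out.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400
theta = rng.uniform(0, 2 * np.pi, N)
v = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # same speed, random directions

E0 = 0.5 * (v**2).sum()                 # total kinetic energy at the start
for _ in range(12_000):
    i, j = rng.choice(N, 2, replace=False)
    n = rng.normal(size=2)
    n /= np.linalg.norm(n)              # random impact direction
    vi_n, vj_n = v[i] @ n, v[j] @ n
    v[i] += (vj_n - vi_n) * n           # equal masses: exchange the normal
    v[j] += (vi_n - vj_n) * n           # velocity components (elastic)

E = 0.5 * (v**2).sum(axis=1)
print("total KE conserved:", np.isclose(E.sum(), E0))
print("relative spread of individual KEs:", E.std() / E.mean())
```

The total kinetic energy stays at its initial value, while the individual energies end up broadly distributed, even though every particle started with the same energy.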
 
  • #83
@PeterDonis : Thank you for the clear explanation. I think I understand it now.

BvU said:
Either. Only precisely head-on collisions of equal coins with equal but opposite velocities conserve the kinetic energies of both coins. Chance of one in very, very many.

Ah, this explains it for me. I was not aware of this.

So, the number of states ##n_s(p)## for a particular momentum ##p## is given by:
$$n_s(p) =\frac{2\pi p^2 \cdot L^2}{h^2}$$
I have read about Boltzmann's and Maxwell's derivations for the number of particles with a particular momentum when the allowed momenta are discrete. If the allowed momenta are very closely packed together, is it also correct to deduce the number of particles ##n_i## having a particular momentum ##p_i## to be:
$$n_i = F(p_i) \cdot n_s(p_i) = F(p_i) \cdot \frac{2\pi p_i^2 \cdot L^2}{h^2}$$
Where ##F(p_i)## is the number of particles at a particular momentum ##p_i## but per 1 microstate.
I am aware it is usually written in the form of a State Density, but I was wondering if this approach is also correct.
 
  • #84
BvU said:
You'll have a hard time finding solutions for the Schroedinger equation in this funny case !

I don't see why that would be difficult in many cases.

First, solve the Schrodinger equation for a box of Lx2, Ly2, and record the wave-number constants ##k## in x and y; e.g. record ##k## for the lowest state ##n## in each direction.

So long as the differences in length, Lx - Lx2 and Ly - Ly2, are multiples of the recorded wavelength (for each respective axis), then I think the same wavelength must correctly solve the extended box in each axis.

The reason is simple: sine-wave solutions for standing waves are zero at the walls, and happen to be zero at points where the walls "might" have existed if the box were reduced to dimensions Lx2 by Ly2.

Therefore, I'm sure any infinite well/rigid box can be extended by an integer multiple of wavelengths at points where the sine waves are naturally zero, without making a solution to the Schrodinger equation impossible.

The notation of 'n' can be confounded by the different lengths of the box, but the ability to solve the Schrodinger equation is not made impossible just because traditional notation can be confounded.
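The node argument above is easy to check numerically in one dimension (a sketch; the well width ##L2##, the state label ##n##, and the extension counts are arbitrary illustrative values). Nodes of ##\sin(kx)## recur every half wavelength, so in particular every whole wavelength:

```python
import numpy as np

# A standing wave of the infinite well of width L2 has nodes every half
# wavelength, so it also vanishes at the wall of a well widened by an
# integer number of half wavelengths.
L2 = 1.0
n = 3                           # a state of the narrower well
k = n * np.pi / L2              # its wave number
lam = 2 * np.pi / k
for m in range(5):              # widen the well by m half wavelengths
    L = L2 + m * lam / 2
    assert abs(np.sin(k * L)) < 1e-12   # still a node at the new wall
print("sin(kL) vanishes at every widened wall")
```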

vanhees71 said:
To the contrary! In this container (i.e., the one with rigid boundary conditions) the position is well defined as a self-adjoint operator, but momentum is not. There are thus also no momentum eigenstates.

vanhees71, I pointed out in another thread that your proof appears to depend on over-specifying the number of boundary conditions used to compute the domain of the operator. I suspect your complaint is probably a mathematical fiction caused by over-specifying the boundary conditions?

When we only require that the value at the wall be the same as at the opposing wall, psi(+a,y)=psi(-a,y), we have already given enough boundary conditions to determine that the momentum is a self-adjoint operator for the specified axis. The boundary condition can be repeated for each axis, showing each one to be independently self-adjoint. That is to say, when we only *require* that the wave function be periodic, and not that it is also zero, we get a bigger domain than if we try to restrict the wave function to having a specific value at a periodic boundary. When we solve the *general* case for *any* value at the periodic boundary (the wall is one such boundary), the proof will come out with the momentum operator being self-adjoint. But the proof will fail if we try to specify a particular value at the wall (even if we *know* what it should be).

Again, by analogy: we *know* that in any test of Young's double-slit experiment, if we try to specify mathematically that the particle must have a "probability" of *zero* to be found in one of the slits, we would destroy the solution to the interference pattern that is the well-known result of the experiment. E.g.: you can put in mathematical boundary conditions that you are *sure* are true (when tested) that will destroy the ability of the Schrodinger equation to produce results consistent with experiment.

My understanding of the idea of self-adjointness is that it essentially proves the imaginary part of psi is canceled out when computing expectation values.

Operators work on psi by multiplication after differentiation, and self-adjointness is required for the final product(s) to sum up to a purely real expectation value.

If only a single point's product (somewhere on psi) is computed, the idea of self-adjointness is demonstrated by the fact that, given real constants a and b, the complex product on the left side of this next equation is always real:

e.g.: ##( a - ib )^{1/2} \cdot ( a + ib )^{1/2} = ( a^2 + b^2 )^{1/2}##

I've chosen to represent psi as the square root of a complex number because, in some sense, psi is the momentum of the particle, and its square is the kinetic energy in classical physics: ##p^2 / 2m = T##.

For self-adjointness of functions, it is not required that the result of the multiplication be purely real at every point, but only that the *sum* (or integral) of the results cancels out the imaginary portion. However, the condition of self-adjointness is trivially met when b=0 everywhere.

Since I can give a time invariant solution to Schrodinger's that has a psi that is purely *real* (b=0), in the case of an infinite well box; Where exactly does your claim of failure to be self adjoint come from?

If I naively compute the momentum operator on an infinite well and get an integral of a product that has a purely real result when evaluated, why should I believe that self-adjointness is not true? E.g.: as opposed to believing you've over-specified a problem, and thereby made it insoluble, by a mathematical proof that is perhaps flawed in cases having more boundary conditions than there are unknowns that *must* be solved for?

To solve for N unknowns in linear equations, I only need N independent equations. If I put in N+1 equations then, depending on the textbook, the proofs for an algorithm solving a linear set of equations may or may not be valid. We need to know the chain of reasoning used in the proofs whenever working with more equations than we have unknowns to solve for, in order to know the proof is valid.
 
  • #85
I have a question about calculating the number of particles at a particular energy level using Boltzmann statistics in the case of discrete energy levels.

For the number of particles ##n_i## at a particular discrete energy level ##E_i##, I understand that according to Boltzmann this is given by:
$$n_i = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
My question is: does this formula take into account the number of possible quantum states at that particular energy level ##E_i##, or does it only give the number of particles for just one quantum state at that energy level?
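For concreteness, the distribution above can be evaluated for a toy set of discrete levels (a sketch in units where ##k_B T = 1##; the level energies and ##N## are made-up values):

```python
import numpy as np

# Boltzmann occupation numbers for hypothetical discrete levels, k_B*T = 1.
E_levels = np.array([0.0, 1.0, 2.0, 3.0])
N = 10_000
weights = np.exp(-E_levels)          # Boltzmann factors
n = N * weights / weights.sum()      # particles per level
print(np.round(n, 1))                # lower levels are more populated
print(n.sum())                       # the n_i add back up to N
```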
 
  • #86
JohnnyGui said:
I have a question about calculating the number of particles at a particular energy state using Boltzmann Statistics

Boltzmann statistics are classical, not quantum.

JohnnyGui said:
does this formula take into account the number of possible quantum states at that particular energy state ##E_i##?

No; it can't, because, as above, Boltzmann statistics are classical, not quantum.
 
  • #87
PeterDonis said:
Boltzmann statistics are classical, not quantum.
No; it can't, because, as above, Boltzmann statistics are classical, not quantum.

Does this mean that the mentioned formula for ##n_i## can be multiplied by the number of quantum states at that energy level in order to get the "true" number of particles at that energy level?
 
  • #88
JohnnyGui said:
Does this mean that the mentioned formula for ##n_i## can be multiplied by the number of quantum states at that energy level in order to get the "true" number of particles at that energy level?

No. Apparently you didn't grasp what "Boltzmann statistics are classical, not quantum" means. Not only that, but ##n_i## is, by definition, the number of particles with energy ##E_i##, as you yourself said in your previous post, so I have no idea why you would think you can get a "true" number of particles by multiplying it by something else.
 
  • #89
PeterDonis said:
No. Apparently you didn't grasp what "Boltzmann statistics are classical, not quantum" means. Not only that, but ##n_i## is, by definition, the number of particles with energy ##E_i##, as you yourself said in your previous post, so I have no idea why you would think you can get a "true" number of particles by multiplying it by something else.

Because you said it can't take into account the number of quantum states at a particular energy level, letting me think that the classical approach would give an erroneous number of particles compared with a quantum approach, for which it should be corrected somehow. Furthermore, the Boltzmann factor is combined with the number of quantum states to derive a formula when energy levels are considered continuous, making me think that perhaps ##n_i(E_i)## should be corrected that way.

This video shows that (part of) the Boltzmann formula is multiplied by the number of states at a particular energy level ##\rho(\epsilon)## (the ##\rho(\epsilon)## is discussed in his previous video).
 
  • #90
JohnnyGui said:
Because you said it can't take into account the number of quantum states at a particular energy level

Can you give a specific quote? It's been a while.

JohnnyGui said:
letting me think that the classical approach would give an erroneous number of particles

If by "erroneous" you mean "different than the number that quantum statistics would give", of course it does. That's why we don't use Boltzmann statistics when the difference between them and the correct quantum statistics is important.

JohnnyGui said:
for which it should be corrected somehow

You don't "correct" Boltzmann statistics if you want correct answers when quantum effects are important. You just use the correct quantum statistics instead.

JohnnyGui said:
the Boltzmann factor is combined with the number of quantum states to derive a formula when energy levels are considered continuous

Can you give a reference? (Preferably a written one, not a video; it takes a lot more time to extract the relevant information from a video than it does from a written article or paper.)
 
  • #91
PeterDonis said:
Can you give a specific quote? It's been a while
I was referring to your answer "No, it can't" in your previous post #86, when I asked "Does this formula take into account the number of possible quantum states at that particular energy state ##E_i##?"

PeterDonis said:
Can you give a reference? (Preferably a written one, not a video; it takes a lot more time to extract the relevant information from a video than it does from a written article or paper.)

Ok, I couldn't find on paper the exact way the lecturer did it, but I'll try to write a summary of what he did, since I'm curious whether his method is correct or not. His method does result in the correct Maxwell distribution formula.

Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is:
$$n_i = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
I was able to derive this one.

Furthermore, I tried to derive by myself the number of particles if energy is considered continuous; let's call this number ##n## to separate it from Boltzmann's ##n_i##, which is used for discrete energy levels. I deduced that ##n## is equal to the density of quantum states ##D(E)## times ##dE##, multiplied by some function ##F'(E)## times ##dE##. Here ##F'(E)## is the number of particles per one quantum state per unit ##E##; it's basically the particle number density at a particular ##E## per quantum state of that ##E##. Both ##D(E)## and ##F'(E)## are derivatives of cumulative functions.
We already discussed that ##D(E) \cdot dE = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot dE##. So that ##n## would be:
$$n = D(E) \cdot dE \cdot F'(E) \cdot dE = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot dE \cdot F'(E) \cdot dE$$
Here comes the part that I don't get. The lecturer in the video states all of a sudden that:
$$F'(E) \cdot dE = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
So according to him, the number of particles in a continuous energy spectrum is given by:
$$n = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}} \cdot dE$$
Notice how he basically combined Boltzmann's classical formula (with discrete energy levels) with the density of quantum states ##D(E)##.
You can also see in http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf (on sheet number 8) that this is done in more or less the same way, combining the Boltzmann factor with the states density.

I have continued working with that formula nonetheless. Integrating it over all energies gives me a complicated constant ##C## that should be equal to the total number of particles ##N##. The probability of finding a particle with energy between ##E## and ##E + dE## is equal to ##\frac{n}{N}##. Writing ##n## in terms of the previous formula and ##N## in terms of ##C##, and then simplifying, gives me the probability density as a function of ##E## that is exactly the same as Wiki states:

[Attached image "Distribution.png": the Maxwell-Boltzmann probability density as a function of energy.]


I'd really like to understand how it is allowed to substitute the classical Boltzmann formula, in which energy levels are considered discrete, for the continuous function ##F'(E)##, combine it with the quantum density of states formula, and then get a valid formula out of it. Is there a way to explain this?
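For what it's worth, the end result can be checked numerically (a sketch in units where ##k_B T = 1##; on normalization all the constants ##V##, ##m##, ##h##, ##N## drop out, so only ##\sqrt{E}\,e^{-E}## matters):

```python
import numpy as np

# Normalizing D(E) * Boltzmann factor ~ sqrt(E) * exp(-E) reproduces the
# Maxwell-Boltzmann energy density (2/sqrt(pi)) * sqrt(E) * exp(-E),
# whose mean energy is (3/2) k_B T.
dE = 2.5e-4
E = (np.arange(200_000) + 0.5) * dE           # midpoint grid on [0, 50]
w = np.sqrt(E) * np.exp(-E)
f = w / (w.sum() * dE)                        # normalized probability density
target = (2 / np.sqrt(np.pi)) * np.sqrt(E) * np.exp(-E)
print("max deviation from MB density:", np.abs(f - target).max())
print("mean energy in units of k_B T:", (E * f).sum() * dE)
```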
 

  • #92
JohnnyGui said:
was referring to your answer "No, it can't" in your previous post #86 when I asked "Does this formula take into account the number of possible quantum states at that particular energy state ##E_i##?"

Ok, but that's just because the Boltzmann formula is classical. Obviously a classical formula can't take into account a quantum phenomenon. But you also can't get a correct answer by just multiplying the classical formula by the number of quantum states; why would you expect that to work?
 
  • #93
PeterDonis said:
Ok, but that's just because the Boltzmann formula is classical. Obviously a classical formula can't take into account a quantum phenomenon. But you also can't get a correct answer by just multiplying the classical formula by the number of quantum states; why would you expect that to work?

Perhaps you are already reading and replying; but as for your last question, please see the second part of my previous post. Also, perhaps my question is better formulated as: is the number of particles at a particular energy level that is calculated by the Boltzmann formula divided over the possible quantum states of that energy level?
 
  • #94
JohnnyGui said:
You can also see http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf (on sheet number 8) that this is done more or less the same way, combining the Boltzmann factor with the States Density.

That's not what is being done. The continuous state density is substituted for the discrete Boltzmann factor, not multiplied by it. That's what the right arrow in equation (13) means. Basically the assumption is that the energies of the states are close enough together that they can be approximated by a continuum. This is a common assumption for systems with very large numbers of particles (for example, a box of gas one meter on a side at room temperature has something like ##10^{25}## particles in it).
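Written out, the replacement is (a sketch, using the ##D(E)## derived earlier in this thread):
$$\sum_i e^{-E_i/(k_B T)} \;\longrightarrow\; \int_0^\infty D(E)\, e^{-E/(k_B T)}\, dE, \qquad D(E) = \frac{2^{2.5}\,\pi\, V\, m^{1.5}}{h^3}\,\sqrt{E}$$
So the density of states takes the place of the discrete counting of levels, rather than being multiplied onto the intact discrete sum.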
 
  • #95
JohnnyGui said:
Is the number of particles at a particular energy level, calculated by the Boltzmann formula, divided over the possible quantum states of that energy level?

No. The two numbers have nothing to do with each other. One is a classical approximation. The other is a quantum result. You can't just mix them together. As I said before, if you want a correct quantum answer, you should not be using the classical Boltzmann formula at all. You should be using the correct quantum distribution (Bose-Einstein or Fermi-Dirac, depending on what kind of particles you are dealing with).
 
  • #96
JohnnyGui said:
Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is:

$$
n_i = \frac{N}{\sum_{i=0}^\infty e^{\frac{-E_i}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}
$$

I was able to derive this one.

How did you derive it? And what makes you think the derivation is classical? Discrete energy levels indicate a quantum system (more precisely, a quantum system that is bound, i.e., confined to a finite region of space), not a classical one.
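Whatever the derivation, the formula itself is easy to sanity-check numerically. A minimal sketch with an assumed toy spectrum of evenly spaced levels (the spacing, temperature, and particle number are arbitrary example values): evaluate ##n_i = N e^{-E_i/kT} / Z## and confirm the occupations sum back to ##N##.

```python
# Minimal sketch of the discrete Boltzmann occupation formula with an
# assumed toy level spectrum (all numbers here are arbitrary examples).
import math

kB = 1.380649e-23          # Boltzmann constant, J/K
T  = 300.0
N  = 1.0e6                 # total number of particles (arbitrary)
# Hypothetical evenly spaced levels, spacing of half kT:
levels = [i * 0.5 * kB * T for i in range(6)]

Z = sum(math.exp(-E / (kB * T)) for E in levels)       # partition function
n = [N * math.exp(-E / (kB * T)) / Z for E in levels]  # occupations n_i

for i, (E, ni) in enumerate(zip(levels, n)):
    print(f"E_{i} = {E / (kB * T):.1f} kT  ->  n_{i} = {ni:.1f}")
print(f"sum of n_i = {sum(n):.1f}  (should equal N = {N:.1f})")
```

By construction the normalization by ##Z## guarantees the ##n_i## add up to ##N##, and lower levels hold more particles.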
 
  • #97
PeterDonis said:
That's not what is being done. The continuous state density is substituted for the discrete Boltzmann factor, not multiplied by it. That's what the right arrow in equation (13) means. Basically the assumption is that the energies of the states are close enough together that they can be approximated by a continuum. This is a common assumption for systems with very large numbers of particles (for example, a box of gas one meter on a side at room temperature has something like ##10^{25}## particles in it).

A part of the continuous state density is substituted by the Boltzmann factor (see also my previous post, in which ##F(E) \cdot dE## is substituted). The Boltzmann factor is then multiplied by the Density of States within the integration. I can't see how part of a classical approach can be mixed with part of a quantum approach (the density of states), when you said it is not possible to mix them.

Edit: Typing a reply to your latest post, just a moment..
 
  • #98
PeterDonis said:
How did you derive it? And what makes you think the derivation is classical? Discrete energy levels indicate a quantum system (more precisely, a quantum system that is bound, i.e., confined to a finite region of space), not a classical one.

This is the Boltzmann formula that I was talking about the whole time. You made me think it was classical since you said that Boltzmann statistics are classical in your post #86. I'm not sure now which Boltzmann statistics you were referring to as classical.
 
  • #99
JohnnyGui said:
A part of the continuous state density is substituted by the Boltzmann factor (see also my previous post in which ##F(E) \cdot dE## is substituted). The Boltzmann factor is then multiplied by the Density of States within the integration.

That's not what's being done in the reference you linked to. You need to read it more carefully. See below.

JohnnyGui said:
This is the Boltzmann formula that I was talking about the whole time.

And that formula does not appear at all in the reference you linked to after equation (13). Equation (13) in that reference describes removing that formula, which involves a sum over discrete energy levels, and putting in its place a continuous integral; this amounts to ignoring quantum effects (which are what give rise to discrete energy levels) and assuming the energy per particle is continuous. There is no "Boltzmann factor" involving a sum over discrete energy levels anywhere in the distribution obtained from the integral.

JohnnyGui said:
You made me think it was classical since you said that Boltzmann statistics are classical in your post #86. I'm not sure now which Boltzmann statistics you were referring to as classical.

That's because we've been using the term "Boltzmann" to refer to multiple things. To be fair, that is a common thing to do, but it doesn't help with clarity.

Go back to this statement of yours:

JohnnyGui said:
Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is

This can't be right as you state it, because, as I've already said, classically there are no discrete energy levels. The only way to get discrete energy levels is to assume a bound system and apply quantum mechanics. So any derivation that results in the formula you give cannot be classical.

Here's what the reference you linked to is doing (I've already stated some of this before, but I'll restate it from scratch for clarity):

(1) Solve the time-independent Schrodinger Equation for a gas of non-interacting particles in a box of side ##L## to obtain an expression for a set of discrete energy levels (equations 10 and 11).

(2) Write down the standard partition function for the system with those discrete energy levels in terms of temperature (equation 12).

(3) Realize that that partition function involves a sum that is difficult to evaluate, and replace the sum with an integral over a continuous range of energies (equation 13 expresses this intent, but equation 22 is the actual partition function obtained, including the integral, after the density of states function ##g(\varepsilon)## is evaluated).

Step 3 amounts to partly ignoring quantum effects; but they're not being completely ignored, because the density of states ##g(\varepsilon)## is derived assuming that the states in momentum space (##k## space) are a discrete lattice of points, which is equivalent to assuming discrete energies. But the replacing of the sum by the integral does require that the energies are close enough together that they can be approximated by a continuum, which, again, amounts to at least partly ignoring quantum effects.
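The quality of that sum-to-integral replacement is easy to check numerically. Here is a sketch for one Cartesian direction, using an artificially large level spacing ##E_1/kT = 10^{-6}## so the discrete sum is directly computable (a real 1 m box at room temperature would need on the order of ##10^{10}## terms; the value of ##\alpha## below is an assumed illustration):

```python
# Sketch: compare the discrete one-direction partition-function sum
# (step 2) against its continuum replacement (step 3). alpha = E1/kT
# is chosen artificially large so the sum converges quickly.
import math

alpha = 1e-6   # E1 / kT for one Cartesian direction (assumed value)

# Discrete sum over one direction's quantum numbers n = 1, 2, 3, ...
# (terms beyond n = 20000 are negligible for this alpha)
Z1_sum = sum(math.exp(-alpha * n**2) for n in range(1, 20001))

# Continuum replacement: integral_0^inf exp(-alpha n^2) dn = sqrt(pi/alpha)/2.
# Cubing this one-direction result reproduces Z = V (2 pi m k T)^{3/2} / h^3.
Z1_int = 0.5 * math.sqrt(math.pi / alpha)

print(f"discrete sum : {Z1_sum:.4f}")
print(f"integral     : {Z1_int:.4f}")
print(f"relative diff: {abs(Z1_sum - Z1_int) / Z1_int:.2%}")
```

Even at this exaggerated spacing the two agree to better than a tenth of a percent, and the discrepancy shrinks as ##\sqrt{\alpha}##, so for a macroscopic box it is utterly negligible.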

However, note equation 25 in the reference, which is an equation for the number of particles with a particular energy:

$$
n_j = \frac{N}{Z} e^{\frac{- \varepsilon_j}{kT}}
$$

This formula actually does not require the energies to be discrete; the subscript ##j## is just a way of picking out some particular value of ##\varepsilon## to plug into the formula. The formula can just as easily be viewed as defining a continuous function ##n(\varepsilon)## for the number of particles as a function of energy; or, as is often done, we can divide both sides by ##N##, the total number of particles, to obtain the fraction of particles with a particular energy, which can also be interpreted as the probability of a particle having a particular energy:

$$
f(\varepsilon) = \frac{1}{Z} e^{\frac{- \varepsilon}{kT}}
$$

Then you can just plug in whatever you obtain for ##Z## (for example, equation 24 in the reference). This kind of function is what Boltzmann worked with in his original derivation, and he did not know how to derive a specific formula for ##Z## from quantum considerations, as is done in the reference you give, because, of course, QM had not even been invented yet when he was doing his work. As far as I know, he and others working at that time used the classical formula for ##Z## in terms of the free energy ##F##:

$$
Z = e^{\frac{-F}{kT}}
$$

which of course looks quite similar to the above; in fact, you can use this to rewrite the function ##f## from above as:

$$
f(\varepsilon) = e^{\frac{F - \varepsilon}{kT}}
$$

which is, I believe, the form in which it often appears in the literature from Boltzmann's time period. Note that this form is purely classical, requiring no quantum assumptions; you just need to know the free energy ##F## for the system, which classical thermodynamics had ways of deriving for various types of systems based on other thermodynamic variables.
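As a quick consistency check (my own sketch, working in units where ##kT = 1## and with an arbitrary value of ##Z##), the two forms of ##f## agree once ##F = -kT \ln Z##:

```python
# Check that f(eps) = (1/Z) exp(-eps/kT) and f(eps) = exp((F - eps)/kT)
# coincide when F = -kT ln Z. Units and the value of Z are arbitrary.
import math

kT = 1.0                  # work in units of kT (assumed)
Z  = 3.7                  # some partition function value (arbitrary)
F  = -kT * math.log(Z)    # free energy from Z = exp(-F/kT)

for eps in [0.0, 0.5, 2.0]:
    f1 = math.exp(-eps / kT) / Z
    f2 = math.exp((F - eps) / kT)
    print(f"eps = {eps}: f1 = {f1:.6f}, f2 = {f2:.6f}")
```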
 
  • #100
I will further read on the detailed second part of your post about the method, thanks for that. I wanted to clear the following out of the way first:

PeterDonis said:
And that formula does not appear at all in the reference you linked to after equation (13).

I never referenced anything after equation (13). My formula appears on the very first sheet in the link, and equation (13) was the equation I was asking about.

PeterDonis said:
That's because we've been using the term "Boltzmann" to refer to multiple things. To be fair, that is a common thing to do, but it doesn't help with clarity.

The first time you said that Boltzmann statistics are classical (post #86) is in response to my question about the formula for discrete energy levels shown in post #85, hence me thinking that formula is classical.

PeterDonis said:
This can't be right as you state it, because, as I've already said, classically there are no discrete energy levels.

Again, I only called it "classical" because you had called it classical, hence the misconception.

PeterDonis said:
There is no "Boltzmann factor" involving a sum over discrete energy levels anywhere in the distribution obtained from the integral.

The "Boltzmann factor" I'm referring to is the ##e^{\frac{-E}{kT}}## which is contained within the integral of equation (13). This factor is also present in the Boltzmann formula for discrete energy values, hence me wondering about how it can be used for a continuous approach. But perhaps you have already explained that in the second part of your post which I will read on now.​
 
