Calculating the number of energy states using momentum space

The discussion focuses on calculating the number of energy states in a two-dimensional momentum space for a particle in a box. The formula presented assumes a homogeneous density of energy states across the circular momentum space, leading to confusion about the dependence on the box's dimensions. Participants clarify that while the number of states in each direction is indeed dependent on the box's length, the overall density of states per unit volume remains consistent regardless of the container's shape. The relationship between momentum vectors and the box dimensions is explored, emphasizing that the number of states is determined by the projections of momentum rather than the overall dimensions of the box. Ultimately, the conversation highlights the complexities of relating momentum space to physical dimensions in quantum mechanics.
  • #91
PeterDonis said:
Can you give a specific quote? It's been a while
I was referring to your answer "No, it can't" in your previous post #86, when I asked: "Does this formula take into account the number of possible quantum states at that particular energy level ##E_i##?"

PeterDonis said:
Can you give a reference? (Preferably a written one, not a video; it takes a lot more time to extract the relevant information from a video than it does from a written article or paper.)

Ok, I couldn't find the exact method on paper as the lecturer did it, but I'll try to write a summary of what he did, since I'm curious whether his method is correct or not. His method does result in the correct Maxwell distribution formula.

Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is:
$$n_i = \frac{N}{\sum_{j=0}^\infty e^{\frac{-E_j}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
I was able to derive this one.
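As a quick sanity check of this discrete formula (with the temperature written explicitly in the exponent, as in the later posts), one can verify numerically that the occupation numbers it produces always sum back to ##N##. This is only a toy sketch; the level values below are illustrative, not from any source in the thread.

```python
import math

# Toy check of the discrete Boltzmann formula
#   n_i = N * exp(-E_i/kT) / sum_j exp(-E_j/kT)
# The levels and N below are arbitrary illustrative values.
N, kT = 1000.0, 1.0
levels = [0.0, 0.5, 1.3, 2.0]   # hypothetical discrete energy levels

Z = sum(math.exp(-E / kT) for E in levels)
n = [N * math.exp(-E / kT) / Z for E in levels]

assert math.isclose(sum(n), N)          # occupations sum back to N
assert n[0] > n[1] > n[2] > n[3]        # lower levels are more populated
```

The normalization by the partition sum is exactly what forces ##\sum_i n_i = N## regardless of the level spacing.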

Furthermore, I tried to derive by myself the number of particles when energy is considered continuous; let's call this number ##n## to distinguish it from Boltzmann's ##n_i##, which is used for discrete energy levels. I deduced that ##n## equals the density of quantum states ##D(E)## times ##dE##, multiplied by some function ##F'(E)## times ##dE##. Here ##F'(E)## is the number of particles per quantum state per unit ##E##; it's basically the particle number density at a particular ##E##, per quantum state at that ##E##. Both ##D(E)## and ##F'(E)## are derivatives of cumulative functions.
We already discussed that ##D(E) \cdot dE = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot dE##. So that ##n## would be:
$$n = D(E) \cdot dE \cdot F'(E) \cdot dE = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot dE \cdot F'(E) \cdot dE$$
Here comes the part that I don't get. The lecturer in the video states all of a sudden that:
$$F'(E) \cdot dE = \frac{N}{\sum_{j=0}^\infty e^{\frac{-E_j}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}$$
So according to him, the number of particles in a continuous energy spectrum is given by:
$$n = \frac{V \cdot 2^{2.5}\cdot \pi \cdot m^{1.5}}{h^3} \cdot \sqrt{E} \cdot \frac{N}{\sum_{j=0}^\infty e^{\frac{-E_j}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}} \cdot dE$$
Notice how he basically combined Boltzmann's classical formula (with discrete energy levels) with the density of quantum states ##D(E)##.
You can also see in http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf (on sheet number ##8##) that this is done in more or less the same way, combining the Boltzmann factor with the density of states.

I have continued working with that formula nonetheless. Integrating it over all energies gives me a complicated constant ##C## that should equal the total number of particles ##N##. The probability of finding a particle with energy between ##E## and ##E + dE## equals ##\frac{n}{N}##. Writing ##n## in terms of the previous formula and ##N## in terms of ##C##, and then simplifying, gives me a probability density as a function of ##E## that is exactly what Wikipedia states:
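This normalization step can be sketched numerically: all the constant prefactors (the ##V##, ##m##, ##h## factors) cancel when dividing ##n## by the integral ##C##, leaving the standard Maxwell-Boltzmann energy density ##f(E) = \frac{2}{\sqrt{\pi}} (kT)^{-3/2} \sqrt{E}\, e^{-E/kT}##. Units here are arbitrary (##kT = 1##), and the grid and cutoff are illustrative choices, not anything from the lecture.

```python
import math

kT = 1.0

def unnormalized(E):
    # the E-dependent part of the formula above: sqrt(E) times the Boltzmann factor
    return math.sqrt(E) * math.exp(-E / kT)

def mb_pdf(E):
    # textbook Maxwell-Boltzmann energy distribution (the "Wiki" formula)
    return 2.0 / math.sqrt(math.pi) * kT ** -1.5 * math.sqrt(E) * math.exp(-E / kT)

# midpoint-rule normalization constant, playing the role of C (i.e. of N)
dE = 1e-4
norm = sum(unnormalized((i + 0.5) * dE) for i in range(400000)) * dE

for E in (0.5, 1.0, 3.0):
    assert abs(unnormalized(E) / norm - mb_pdf(E)) < 1e-3
```

The check confirms that the shape of the distribution is fixed entirely by ##\sqrt{E}\, e^{-E/kT}##; the box volume and particle mass only ever enter through the normalization.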

[Attachment: Distribution.png — the Maxwell-Boltzmann energy distribution]


I'd really like to understand how it is allowed to substitute a continuous formula ##F'(E)## with the classical Boltzmann formula, in which energy levels are considered discrete, combine it with the quantum density of states formula, and then get a valid formula out of it. Is there a way to explain this?
 

  • #92
JohnnyGui said:
I was referring to your answer "No, it can't" in your previous post #86 when I asked "Does this formula take into account the number of possible quantum states at that particular energy level ##E_i##?"

Ok, but that's just because the Boltzmann formula is classical. Obviously a classical formula can't take into account a quantum phenomenon. But you also can't get a correct answer by just multiplying the classical formula by the number of quantum states; why would you expect that to work?
 
  • #93
PeterDonis said:
Ok, but that's just because the Boltzmann formula is classical. Obviously a classical formula can't take into account a quantum phenomenon. But you also can't get a correct answer by just multiplying the classical formula by the number of quantum states; why would you expect that to work?

Perhaps you are already reading and replying, but as for your last question: please see the second part of my previous post. Also, perhaps my question is better formulated as: is the number of particles at a particular energy level, as calculated by the Boltzmann formula, divided over the possible quantum states of that energy level?
 
  • #94
JohnnyGui said:
You can also see http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf (on sheet number 8) that this is done more or less the same way, combining the Boltzmann factor with the density of states.

That's not what is being done. The continuous state density is substituted for the discrete Boltzmann factor, not multiplied by it. That's what the right arrow in equation (13) means. Basically the assumption is that the energies of the states are close enough together that they can be approximated by a continuum. This is a common assumption for systems with very large numbers of particles (for example, a box of gas one meter on a side at room temperature has something like ##10^{25}## particles in it).
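The continuum approximation described here can be sketched numerically. For 1D particle-in-a-box levels ##E_n = n^2 \Delta## with spacing ##\Delta \ll kT##, the discrete partition sum is close to the corresponding Gaussian integral; the value ##\Delta/kT = 10^{-6}## below is an illustrative stand-in for a macroscopic box, not a number from the thread.

```python
import math

# Discrete vs continuum partition function for 1D box levels E_n = n^2 * Delta,
# with x = Delta / kT. When x is tiny, the sum is well approximated by the integral.

def z_discrete(x, n_max=200000):
    # sum_{n=1}^{n_max} exp(-x n^2); terms beyond n_max are negligible for x = 1e-6
    return sum(math.exp(-x * n * n) for n in range(1, n_max + 1))

def z_continuum(x):
    # integral_0^inf exp(-x n^2) dn = sqrt(pi/x) / 2
    return 0.5 * math.sqrt(math.pi / x)

x = 1e-6   # level spacing a millionth of kT (illustrative)
rel_err = abs(z_discrete(x) - z_continuum(x)) / z_continuum(x)
assert rel_err < 1e-3
```

The residual relative error is of order ##\frac{1}{2}\sqrt{x/\pi}##, i.e. it vanishes as the level spacing shrinks relative to ##kT##, which is exactly the regime invoked for a box of gas with ##10^{25}## particles.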
 
  • #95
JohnnyGui said:
Is the number of particles at a particular energy level, as calculated by the Boltzmann formula, divided over the possible quantum states of that energy level?

No. The two numbers have nothing to do with each other. One is a classical approximation. The other is a quantum result. You can't just mix them together. As I said before, if you want a correct quantum answer, you should not be using the classical Boltzmann formula at all. You should be using the correct quantum distribution (Bose-Einstein or Fermi-Dirac, depending on what kind of particles you are dealing with).
 
  • #96
JohnnyGui said:
Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is:

$$
n_i = \frac{N}{\sum_{j=0}^\infty e^{\frac{-E_j}{k_B T}}} \cdot e^{\frac{-E_i}{k_B T}}
$$

I was able to derive this one.

How did you derive it? And what makes you think the derivation is classical? Discrete energy levels indicate a quantum system (more precisely, a quantum system that is bound, i.e., confined to a finite region of space), not a classical one.
 
  • #97
PeterDonis said:
That's not what is being done. The continuous state density is substituted for the discrete Boltzmann factor, not multiplied by it. That's what the right arrow in equation (13) means. Basically the assumption is that the energies of the states are close enough together that they can be approximated by a continuum. This is a common assumption for systems with very large numbers of particles (for example, a box of gas one meter on a side at room temperature has something like ##10^{25}## particles in it).

A part of the continuous state density is substituted with the Boltzmann factor (see also my previous post, in which ##F'(E) \cdot dE## is substituted). The Boltzmann factor is then multiplied by the density of states within the integration. I can't see how part of a classical approach can be mixed with part of a quantum approach (the density of states), when you said that they cannot be mixed.

Edit: Typing a reply to your latest post, just a moment..
 
  • #98
PeterDonis said:
How did you derive it? And what makes you think the derivation is classical? Discrete energy levels indicate a quantum system (more precisely, a quantum system that is bound, i.e., confined to a finite region of space), not a classical one.

This is the Boltzmann formula that I was talking about the whole time. You made me think it was classical since you said that Boltzmann statistics are classical in your post #86. I'm not sure now which Boltzmann statistics you were referring to as classical.
 
  • #99
JohnnyGui said:
A part of the continuous state density is substituted by the Boltzmann factor (see also my previous post in which ##F(E) \cdot dE## is substituted). The Boltzmann factor is then multiplied by the Density of States within the integration.

That's not what's being done in the reference you linked to. You need to read it more carefully. See below.

JohnnyGui said:
This is the Boltzmann formula that I was talking about the whole time.

And that formula does not appear at all in the reference you linked to after equation (13). Equation (13) in that reference describes removing that formula, which involves a sum over discrete energy levels, and putting in its place a continuous integral; this amounts to ignoring quantum effects (which are what give rise to discrete energy levels) and assuming the energy per particle is continuous. There is no "Boltzmann factor" involving a sum over discrete energy levels anywhere in the distribution obtained from the integral.

JohnnyGui said:
You made me think it was classical since you said that Boltzmann statistics are classical in your post #86. I'm not sure now which Boltzmann statistics you were referring to as classical.

That's because we've been using the term "Boltzmann" to refer to multiple things. To be fair, that is a common thing to do, but it doesn't help with clarity.

Go back to this statement of yours:

JohnnyGui said:
Boltzmann derived classically that the number of particles ##n_i## with a particular discrete energy level ##E_i## is

This can't be right as you state it, because, as I've already said, classically there are no discrete energy levels. The only way to get discrete energy levels is to assume a bound system and apply quantum mechanics. So any derivation that results in the formula you give cannot be classical.

Here's what the reference you linked to is doing (I've already stated some of this before, but I'll restate it from scratch for clarity):

(1) Solve the time-independent Schrodinger Equation for a gas of non-interacting particles in a box of side ##L## to obtain an expression for a set of discrete energy levels (equations 10 and 11).

(2) Write down the standard partition function for the system with those discrete energy levels in terms of temperature (equation 12).

(3) Realize that that partition function involves a sum that is difficult to evaluate, and replace the sum with an integral over a continuous range of energies (equation 13 expresses this intent, but equation 22 is the actual partition function obtained, including the integral, after the density of states function ##g(\varepsilon)## is evaluated).
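Step (3) can be sketched numerically. In units with ##V = m = h = kT = 1##, the integral version of the partition function built from the density of states quoted earlier in the thread, ##Z = \int_0^\infty D(E)\, e^{-E/kT}\, dE##, comes out to ##V (2\pi m k T)^{3/2} / h^3##. This is only a toy numeric check, not the reference's own derivation.

```python
import math

# In units V = m = h = kT = 1, D(E) = 2^2.5 * pi * sqrt(E), and
#   Z = integral_0^inf D(E) exp(-E) dE  should equal (2*pi)^1.5.
prefactor = 2 ** 2.5 * math.pi      # V * 2^(5/2) * pi * m^(3/2) / h^3 with units = 1

# midpoint rule for integral_0^inf sqrt(E) exp(-E) dE = sqrt(pi)/2
dE = 1e-4
integral = sum(math.sqrt((i + 0.5) * dE) * math.exp(-(i + 0.5) * dE)
               for i in range(400000)) * dE

Z = prefactor * integral
assert abs(Z - (2 * math.pi) ** 1.5) < 1e-3
```

With units restored, this is the standard translational partition function ##Z = V(2\pi m k T)^{3/2}/h^3## (equation 24 in the linked handout).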

Step 3 amounts to partly ignoring quantum effects; but they're not being completely ignored, because the density of states ##g(\varepsilon)## is derived assuming that the states in momentum space (##k## space) are a discrete lattice of points, which is equivalent to assuming discrete energies. But the replacing of the sum by the integral does require that the energies are close enough together that they can be approximated by a continuum, which, again, amounts to at least partly ignoring quantum effects.
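The lattice-counting picture behind ##g(\varepsilon)## can be sketched by brute force, under the usual particle-in-a-box assumptions: states sit at positive-integer lattice points ##\mathbf{n} = (n_x, n_y, n_z)## with ##\varepsilon \propto |\mathbf{n}|^2##. In units where ##\varepsilon = n_x^2 + n_y^2 + n_z^2##, the count of lattice points in a thin shell approaches ##g(\varepsilon)\, d\varepsilon## with ##g(\varepsilon) = \frac{\pi}{4}\sqrt{\varepsilon}##. The shell width and 10% tolerance below are illustrative choices.

```python
import math

def shell_count(eps_lo, eps_hi):
    # brute-force count of states with eps_lo <= |n|^2 < eps_hi in the positive octant
    n_max = int(math.sqrt(eps_hi)) + 1
    count = 0
    for nx in range(1, n_max):
        for ny in range(1, n_max):
            for nz in range(1, n_max):
                eps = nx * nx + ny * ny + nz * nz
                if eps_lo <= eps < eps_hi:
                    count += 1
    return count

def g_times_de(eps, de):
    # continuum density of states (pi/4) * sqrt(eps), times the shell width
    return math.pi / 4 * math.sqrt(eps) * de

# at eps = 2000 (in these units) the discrete count tracks the sqrt(eps) law
counted = shell_count(2000, 2100)
predicted = g_times_de(2000, 100)
assert abs(counted / predicted - 1) < 0.1
```

This is the sense in which the density of states is "still quantum": the points being counted form a discrete lattice, even though the shell count is then treated as a smooth function of ##\varepsilon##.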

However, note equation 25 in the reference, which is an equation for the number of particles with a particular energy:

$$
n_j = \frac{N}{Z} e^{\frac{- \varepsilon_j}{kT}}
$$

This formula actually does not require the energies to be discrete; the subscript ##j## is just a way of picking out some particular value of ##\varepsilon## to plug into the formula. The formula can just as easily be viewed as defining a continuous function ##n(\varepsilon)## for the number of particles as a function of energy; or, as is often done, we can divide both sides by ##N##, the total number of particles, to obtain the fraction of particles with a particular energy, which can also be interpreted as the probability of a particle having a particular energy:

$$
f(\varepsilon) = \frac{1}{Z} e^{\frac{- \varepsilon}{kT}}
$$

Then you can just plug in whatever you obtain for ##Z## (for example, equation 24 in the reference). This kind of function is what Boltzmann worked with in his original derivation, and he did not know how to derive a specific formula for ##Z## from quantum considerations, as is done in the reference you give, because, of course, QM had not even been invented yet when he was doing his work. As far as I know, he and others working at that time used the classical formula for ##Z## in terms of the free energy ##F##:

$$
Z = e^{\frac{-F}{kT}}
$$

which of course looks quite similar to the above; in fact, you can use this to rewrite the function ##f## from above as:

$$
f(\varepsilon) = e^{\frac{F - \varepsilon}{kT}}
$$

which is, I believe, the form in which it often appears in the literature from Boltzmann's time period. Note that this form is purely classical, requiring no quantum assumptions; you just need to know the free energy ##F## for the system, which classical thermodynamics had ways of deriving for various types of systems based on other thermodynamic variables.
 
  • #100
I will further read on the detailed second part of your post about the method, thanks for that. I wanted to clear the following out of the way first:

PeterDonis said:
And that formula does not appear at all in the reference you linked to after equation (13).

I never referenced anything after equation (13). My formula appears on the very first sheet of the link, and equation (13) was the equation I was asking about.

PeterDonis said:
That's because we've been using the term "Boltzmann" to refer to multiple things. To be fair, that is a common thing to do, but it doesn't help with clarity.

The first time you said that Boltzmann statistics are classical (post #86) was in response to my question about the formula for discrete energy levels shown in post #85, hence my thinking that that formula is classical.

PeterDonis said:
This can't be right as you state it, because, as I've already said, classically there are no discrete energy levels.

Again, I called it "classical" as a consequence of the misconception caused by you calling it classical.

PeterDonis said:
There is no "Boltzmann factor" involving a sum over discrete energy levels anywhere in the distribution obtained from the integral.

The "Boltzmann factor" I'm referring to is the ##e^{\frac{-E}{kT}}## that is contained within the integral of equation (13). This factor is also present in the Boltzmann formula for discrete energy values, hence my wondering how it can be used for a continuous approach. But perhaps you have already explained that in the second part of your post, which I will read now.
 
  • #101
JohnnyGui said:
I never referenced to anything after equation (13).

Yes, I know; that's part of my point. The part after equation (13) can't be left out, because that's where the actual derivation of the partition function is done. The discrete formula given prior to that is not used at all.

JohnnyGui said:
The first time you said that Boltzmann statistics are classical (post #86) is in response to my question about the formula for discrete energy levels shown in post #85, hence me thinking that formula is classical.

Yes, sorry for the confusion. I didn't catch at that point that you were using a discrete formula.

JohnnyGui said:
hence me wondering about how it can be used for a continuous approach. But perhaps you have already explained that in the second part of your post

Yes, read on!
 
  • #102
I have read your explanation; it raised two more questions whose answers should make me understand this better.

Question 1

PeterDonis said:
This formula actually does not require the energies to be discrete; the subscript ##j## is just a way of picking out some particular value of ##\varepsilon## to plug into the formula.

If energy is considered continuous, doesn't this mean that the formula for ##n_j## must be replaced with the derivative of a cumulative function of the number of particles, just as the density of states ##g(\epsilon)## times ##d\epsilon## is used within the integral to give the number of states between ##\epsilon## and ##\epsilon + d\epsilon##? Why isn't it done like that for ##n_j##?

Question 2

I just noticed that sheet number 18 of my linked handout (http://hep.ph.liv.ac.uk/~hock/Teaching/StatisticalPhysics-Part3-Handout.pdf) shows a relevant part about the formula I mentioned, so it's not only shown on the first sheet; right above equation 35 it says that the formula... $$n(\epsilon) = \frac{N}{Z} \cdot e^{-\frac{\epsilon}{kT}}$$
...is actually the number of particles per quantum state, which kind of answers my question in post #93. However, since the formula on that sheet considers energy to be continuous (note the ##\epsilon##), is this exact interpretation of the formula also valid for a discrete energy level ##\epsilon_j##? If not, how does the interpretation of the very same formula change merely by considering energy continuous or discrete?
 
  • #103
JohnnyGui said:
If energy is considered continuous, doesn't this mean that the formula for ##n_j## must be replaced with a derivative of a cumulative function of the number of particles

What formula for ##n_j## are you talking about? Also, you do understand that evaluating the integral gives you a continuous function for the number of particles as a function of the energy?

JohnnyGui said:
is this exact interpretation of the formula also valid for a discrete energy level ##\epsilon_j##?

Why wouldn't it be?
 
  • #104
PeterDonis said:
What formula for ##n_j## are you talking about? Also, you do understand that evaluating the integral gives you a continuous function for the number of particles as a function of the energy?

I made a typo; I am referring to ##n(\epsilon) = \frac{N}{Z} \cdot e^{-\frac{\epsilon}{kT}}##, which is multiplied by ##g(\epsilon) \cdot d\epsilon## to give the number of particles between ##\epsilon## and ##\epsilon + d\epsilon##, as the link and the video show:
$$dn = \frac{N}{Z} \cdot e^{\frac{-\epsilon}{kT}} \cdot g(\epsilon) \cdot d\epsilon$$
From what I understand, an integral gives a continuous function of energy if the derivative of a cumulative function is integrated. This is indeed done for the number of states: the derivative of the volume of a sphere in energy space, namely ##g(\epsilon)##, is within the integral.
But since energy is continuous, why isn't ##g(\epsilon) \cdot d\epsilon## multiplied by the number density per unit ##\epsilon## instead of by ##n(\epsilon)## within the integral?

PeterDonis said:
Why wouldn't it be?

Because you denied that statement in post #95, and I wanted to make sure that denial was part of the earlier misconception as well.
Furthermore, I noticed that the link and the video do not give this interpretation when deriving Boltzmann's formula for discrete energy levels, hence my wanting to make sure.
 
Last edited:
  • #105
JohnnyGui said:
why isn't ##g(\epsilon) \cdot d\epsilon## multiplied by the number density per ##\epsilon## instead of ##n(\epsilon)## within the integral?

It depends on whether you want the number of particles or the fraction of particles. You could just as easily divide by the total number of particles ##N## and have the fraction of particles instead of the number. The math is the same either way (since ##N## is a constant so it doesn't affect how you do the integral). And none of this has anything to do with the continuous vs. discrete question.

JohnnyGui said:
Because you denied that statement in post 95

No, I didn't. I denied a different statement, which is not part of what we are currently talking about.

JohnnyGui said:
I wanted to make sure that deny was part of the earlier misconception as well.

I guess the answer to this would be "yes" given the above.

JohnnyGui said:
I noticed that the link and the video do not tell this interpretation when deriving Boltzmann's formula for discrete energy levels

The link you give doesn't derive Boltzmann's formula for discrete energy levels (equation 12) at all. It just assumes it.
 
  • #106
JohnnyGui said:
From what I understand, an integral gives a continuous function as a function of energy if the derivative of a cumulative function is integrated.

You're thinking of it backwards. You can integrate any function you like. Once you've done the integral, you can consider the thing you integrated as a "cumulative function" as it relates to the thing you get as a result of the integral. But the process of evaluating the integral doesn't care about any of that and does not depend on it.

JohnnyGui said:
This is indeed done for the number of states; the derivative of the volume of a sphere in energy-space is within the integral; ##g(\epsilon)##.

##g(\epsilon)## isn't the derivative of the volume of a sphere in energy space. It's the number of states per unit volume in energy space.

Also, there's only one integral being done, so if you want to consider the function of ##\epsilon## inside the integral as the derivative of the function you get by evaluating the integral, that's fine, but it's the entire integrand that's the derivative of the result of the integral; you can't split it up into pieces.
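This point about the whole integrand being the derivative of the cumulative function can be illustrated numerically. As a toy example (with ##kT = 1## and all constant prefactors dropped), define ##N(E) = \int_0^E \sqrt{\epsilon}\, e^{-\epsilon}\, d\epsilon##; numerically differentiating ##N(E)## recovers the entire integrand, not one factor of it.

```python
import math

def integrand(eps):
    # the whole integrand: density-of-states factor times Boltzmann factor
    return math.sqrt(eps) * math.exp(-eps)

def cumulative(E, h=1e-5):
    # midpoint rule on a fixed-step grid from 0 to E
    steps = round(E / h)
    return sum(integrand((i + 0.5) * h) for i in range(steps)) * h

E, dE = 2.0, 0.01
# central finite difference of the cumulative function
derivative = (cumulative(E + dE) - cumulative(E - dE)) / (2 * dE)
assert abs(derivative - integrand(E)) < 1e-3
```

Splitting the integrand into pieces and differentiating only one of them would not reproduce this; the derivative of ##N(E)## is the full product inside the integral.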
 
  • #107
PeterDonis said:
And none of this has anything to do with the continuous vs. discrete question.

I haven't said that it has anything to do with the continuous vs. discrete question. It's a side question about the formula, to understand the formulation better.

PeterDonis said:
No, I didn't. I denied a different statement, which is not part of what we are currently talking about.

Ok. It wasn't clear whether you also denied that statement, since it was quoted in post #95 after you had already said, about an earlier post of mine, that the Boltzmann formula is classical.

PeterDonis said:
The link you give doesn't derive Boltzmann's formula for discrete energy levels (equation 12) at all. It just assumes it.

I know; I was talking about a textual interpretation of the formula, like the interpretation I found in the text about equation 35. Another video of the lecture showed the derivation for discrete energy levels as well, but did not give that interpretation either.
 
  • #108
PeterDonis said:
##g(\epsilon)## isn't the derivative of the volume of a sphere in energy space. It's the number of states per unit volume in energy space.

Apologies, I typed it without paying attention. This is indeed what I meant.

PeterDonis said:
Also, there's only one integral being done, so if you want to consider the function of ##\epsilon## inside the integral as the derivative of the function you get by evaluating the integral, that's fine, but it's the entire integrand that's the derivative of the result of the integral; you can't split it up into pieces.

I think this is indeed what I was misunderstanding.
 
  • #109
I have found another source that takes the degeneracy (i.e. the number of quantum states of an energy level) into account in the derivation of the Boltzmann statistics formula. I have found an inconsistency with other sources at the step where the Lagrange multiplier terms (##\alpha## and ##\beta \cdot \epsilon_j##) are applied to set the equation to 0.

The time stamp in this video derives the following equation (when degeneracy is not taken into account):
$$ln(n_j) + \alpha + \beta \epsilon_j = 0 → n_j = e^{-\alpha} \cdot e^{-\beta \epsilon_j}$$
Notice that the Lagrange multiplier terms (##\alpha## and ##\beta\epsilon_j##) are added to ##\ln(n_j)## and the result is set to 0.
The link that takes degeneracy into account somehow shows on sheets ##14## and ##15## that the term ##\beta\epsilon_j## should be subtracted, which gives:
$$ln(g_j) - ln(n_j) + \alpha - \beta \cdot \epsilon_j = 0 → n_j = g_j \cdot e^\alpha \cdot e^{-\beta \epsilon_j}$$
Where ##g_j## is the degeneracy of energy level ##\epsilon_j##.

I'm not sure why the term ##\beta \cdot \epsilon_j## should be subtracted when degeneracy is taken into account. I would assume it could also be added, since the result should be zero either way. And yet, even if it is added instead of subtracted, the end equation is still different from the equation in the video, in which degeneracy is not taken into account.
I'd expect it to be the same, because when energy is afterwards considered continuous, the very same formula ##e^{-\alpha-\beta\epsilon_j}## from the video is multiplied by the number of states, which is the analogue of taking degeneracy into account.

Not sure what I'm missing here.
 
Last edited:
  • #110
JohnnyGui said:
I'm not sure why the term ##\beta \cdot \epsilon_j## should be subtracted when degeneracy is taken into account.

First, it's not the sign of ##\beta## that's being changed, it's the sign of ##\alpha##. Rewrite the first equation as

$$
- ln (n_j) - \alpha - \beta \cdot \varepsilon_j = 0
$$

Now it's the same as the second except that the ##ln (g_j)## term is absent (because the first source doesn't consider degeneracy, which is where that term comes from--if ##g_j = 1##, no degeneracy, then ##ln(g_j) = 0##) and the sign of ##\alpha## is changed. If you go back into how the first formula is derived, you will see that the sign gets flipped during the derivation (an equation with minus something = 0 is changed to just something = 0). The second source simply doesn't do that sign flip.

Second, the choice of the sign of ##\alpha## has nothing to do with degeneracy. It's just an arbitrary choice of signs. The two sources are just making different arbitrary choices. (The first source doesn't go on to discuss the link between ##\alpha## and the chemical potential/Fermi energy; if it did, the sign flip would just end up getting to the version of the Maxwell-Boltzmann distribution on slide 21 of the second source in one less step--the step on slide 21 where the sign of the argument of the exponential gets flipped would not be needed.)
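The equivalence of the two sign conventions can be sketched numerically. This is a toy check with illustrative values of ##kT## and ##E_F##; the point being tested is that relating ##\alpha## to ##E_F## with the opposite sign exactly compensates the flipped sign in the formula for ##n_j##.

```python
import math

# Two sign conventions for the same physical formula n = e^{(E_F - eps)/kT}:
#   slides:  n = e^{+alpha} * e^{-beta*eps},  with  alpha = +E_F/kT
#   video:   n = e^{-alpha} * e^{-beta*eps},  with  alpha = -E_F/kT
kT, E_F = 1.0, 0.7      # illustrative values
beta = 1.0 / kT

def n_slides(eps):
    alpha = E_F / kT
    return math.exp(alpha) * math.exp(-beta * eps)

def n_video(eps):
    alpha = -E_F / kT   # the compensating relation between alpha and E_F
    return math.exp(-alpha) * math.exp(-beta * eps)

for eps in (0.0, 0.5, 2.0):
    assert math.isclose(n_slides(eps), n_video(eps))
```

Both conventions give ##n_j = e^{(E_F - \epsilon_j)/kT}## once ##\alpha## is eliminated, which is why the arbitrary sign choice carries no physical content.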
 
  • #111
PeterDonis said:
It's just an arbitrary choice of signs. The two sources are just making different arbitrary choices.

I might have missed your point. But don't these arbitrary choices of the signs eventually lead to a different formulation for ##n_j##?

Here's why:

PeterDonis said:
If you go back into how the first formula is derived, you will see that the sign gets flipped during the derivation (an equation with minus something = 0 is changed to just something = 0)

If you're referring to exactly this timestamp of the video, then I noticed that if the minus sign before that summation is kept, it leads to the formula ##n_j = e^{\alpha} \cdot e^{\beta \epsilon_j}##, whereas when it's removed (as the lecturer did), it leads to ##n_j = e^{-\alpha} \cdot e^{-\beta \epsilon_j}##. Don't these differences lead to a different result?
Unless the addition or subtraction of the Lagrange multiplier terms depends on whether or not you kept the minus sign before, I'm still confused.

Furthermore, the fact that sheet 21 of the second source says the minus sign is added based on "intuition" makes me think that they did not apply the signs correctly before, because the formula in the video wouldn't need that "intuition" step if the lecturer went on to discuss the chemical potential/Fermi energy.
 
  • #112
JohnnyGui said:
But don't these arbitrary choices of the signs eventually lead to a different formulation for ##n_j##?

In terms of ##\alpha## and ##\beta##, yes. But those choices of sign will just change the relationship between ##\alpha## and ##\beta## and the chemical potential (or Fermi energy) and temperature (by flipping the signs there). The final formulas in terms of chemical potential/Fermi energy and temperature, which are what actually have physical meaning, will be the same either way.

JohnnyGui said:
the fact that Sheet 21 of the second source says that the minus sign is added based on "intuition"

No, that's not what it says. No minus sign is "added in". The two formulas at the top of that slide are equivalent to each other; they're just expressed in slightly different algebraic form.

The note about "It's intuitive" just means that the formula, now that it's written in that form, matches what you would intuitively expect to be the case. It doesn't mean intuition had to be used to obtain the formula.
 
  • #113
PeterDonis said:
The final formulas in terms of chemical potential/Fermi energy and temperature, which are what actually have physical meaning, will be the same either way.

I can't seem to reproduce the same final formula. Here's why:

PeterDonis said:
But those choices of sign will just change the relationship between ##\alpha## and ##\beta## and the chemical potential (or Fermi energy) and temperature (by flipping the signs there)

Do you mean that the different signs of ##\alpha## and ##\beta## are "compensated" by flipping the signs of the Fermi energy and temperature? If so, I can't see that being done in the source. Sheet 20 still says that ##\alpha = \frac{E_F}{kT}## and ##\beta = \frac{1}{kT}##, and according to their formula, the number of particles per quantum state ##n_j## (the formula divided by ##g_j##) is ##n_j = e^{\alpha - \beta \epsilon_j}##, which gives, as sheet ##20## says:
$$n_j= e^{\frac{E_F-E_j}{kT}}= e^{\frac{-(E_j-E_F)}{kT}}$$
Rewriting the formula from the video as you did in your post #110 gives ##n_j = e^{-(\alpha + \beta E_j)}##. Writing out the terms ##\alpha## and ##\beta## as above gives:
$$n_j = e^{\frac{-(E_F + E_j)}{kT}}=e^{\frac{-E_F - E_j}{kT}}$$
The sign of ##E_F## is still different, and therefore the two formulas do not give the same number of particles per quantum state at a particular ##E_F## and ##E_j##. What am I still missing here?
 
  • #114
JohnnyGui said:
Do you mean that the different signs of ##\alpha## and ##\beta## are "compensated" by flipping the signs of the Fermi energy and temperature?

In the slides, you don't flip any signs.

In the video, you would flip just one sign, not both; as noted in post #110, that would be the sign of ##\alpha##. Or, if you don't want to flip the sign of ##\alpha## in order to keep the video formulas as they are, you would flip the sign of ##E_F## in the formula for that in terms of ##\alpha##.
 
  • #115
PeterDonis said:
In the slides, you don't flip any signs.

In the video, you would flip just one sign, not both; as noted in post #110, that would be the sign of ##\alpha##. Or, if you don't want to flip the sign of ##\alpha## in order to keep the video formulas as they are, you would flip the sign of ##E_F## in the formula for that in terms of ##\alpha##.

That part is now clear to me. But flipping the sign of ##\alpha## or ##E_F## in the video's formula gives different outcomes for ##n_j## than when it is not flipped. What is the reason that the sign of either ##\alpha## or ##E_F## in the video should be flipped, when this is how the lecturer derived it?

I would expect the derivations in the slides and the video to end up being the same, without the need for any sign flipping for which I can't find a reason.
 
  • #116
JohnnyGui said:
flipping the signs of ##\alpha## or ##E_F## of the formula in the video would give different outcomes for ##n_j## compared to when it is not flipped.

What do you mean by "different outcomes"? If you flip the one sign in the formulas in the video, you get the same formulas as are in the other source you linked to. What's the problem?

JohnnyGui said:
What is the reason that the signs of either ##\alpha## or ##E_F## in the video should be flipped while this is how the lecturer derived it?

Um, because you want to get the right answer? Physically, the final formula as it's given in the slides you linked to is obviously correct (and the slides explain why). So any derivation is going to have to end up with that formula.

I have no idea why the lecturer in the video chose to start with the sign choices he did. You'd have to ask the lecturer. Expecting all presentations to be entirely consistent in every choice of sign (not to mention lots of other arbitrary choices) is expecting far too much. As long as you end up with the correct answer, it doesn't matter how you get there.
 
  • #117
PeterDonis said:
What do you mean by "different outcomes"? If you flip the one sign in the formulas in the video, you get the same formulas as are in the other source you linked to. What's the problem?

The sheet source is not the problem here. The problem is that when the sign of ##\alpha## or ##E_F## in the video's formula is flipped (to make it match the sheet's formula from the other source), the formula is no longer the same as the original formula the lecturer derived. It would give different outcomes for ##n_j## than before it was flipped.

If it DID give the same outcomes, then I'd have no problem and no questions.
 
  • #118
JohnnyGui said:
The problem is that when the sign of ##\alpha## or ##E_F## in the video formula is flipped (to make it the same as the sheet's formula from the other source), the formula is not the same anymore as the original video formula that the lecturer derived.

So what?
 
  • #119
PeterDonis said:
So what?

Not sure if you read the rest of my previous post beyond what you quoted. The lecturer derived it as:
$$n_j = e^{\frac{-E_F - E_j}{kT}}$$
And, just for the sake of making the formula the same as the other source, he'd have to change it to:
$$n_j= e^{\frac{-(E_j-E_F)}{kT}}$$
Clearly, one of these equations is incorrect here, because they give different outcomes. If the first equation is incorrect, what did the lecturer do wrong in his derivation?
 
  • #120
JohnnyGui said:
The lecturer

Which lecturer? The slides? Or the video? From what I saw in the video, he never got to any formula involving ##E_F## at all. He only gave formulas with ##\alpha## and ##\beta## in them, and never gave an equation for ##E_F## in terms of ##\alpha##.
 
