Applying a constraint in the calculus of variations

In summary, this thread discusses the derivation of Boltzmann statistics without the correction for indistinguishability. The original poster wants to maximize a function F under the constraints ##\sum_i n_i = N## and ##\sum_i n_i u_i = U##, with N and U constant. Two candidate stationarity conditions are compared; both lead to the same values for the ##n_i##, and the only difference is the value of the multiplier ##\alpha##, which turns out not to matter because it is eliminated once the normalization sum is introduced.
  • #1
Philip Koeck
I have an analytical function F of the discrete variables ##n_i##, which are natural numbers. I also know that the sum of all the ##n_i## is constant and equal to N.
N also appears explicitly in F, but F is not a function of N. F exists in a coordinate system given by the ##n_i## only.
Should I carry out the variation as if N varied when I vary any of the ##n_i##, and then impose the constant N as a constraint with a Lagrange multiplier? Or is it correct to leave out the variation of N with the ##n_i## from the beginning?
As an example you can look at ##F = N + \sum_i g_i \ln n_i##.
The ##g_i## are just weights, which can be different for every i.
 
  • #2
Lagrange multipliers are exactly for the sort of problem you're talking about. However, if the only constraint is ##\sum_j n_j = N##, then it leads to a boring result: ##\frac{\partial F}{\partial n_i} = \lambda## (where ##\lambda## is the Lagrange multiplier, some constant).

However, if the ##n_j## are all supposed to be natural numbers, then taking partial derivatives isn't going to extremize ##F##, because the values of ##n_j## that make ##F## stationary might not be natural numbers. I suppose you could use that answer as a starting place, and then search nearby for integer values that extremize ##F##?
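As a concrete illustration of that last suggestion, here is a minimal numerical sketch (not from the thread; the weights ##g_i## and the value of N are made up). It maximizes the example ##F = N + \sum_i g_i \ln n_i## from post #1 under ##\sum_i n_i = N## with a constrained solver and then checks nearby integer points:

```python
# Sketch: constrained maximization of F = N + sum_i g_i ln(n_i) subject to sum_i n_i = N,
# followed by a crude search over nearby integer points (values of g_i and N are made up).
import numpy as np
from scipy.optimize import minimize

g = np.array([1.0, 2.0, 3.0])   # example weights g_i
N = 30                          # fixed total

def F(n):
    # N is a constant here, so the N term only shifts F by a constant
    return N + np.sum(g * np.log(n))

# scipy minimizes, so pass -F to maximize; the equality constraint enforces sum(n) = N
res = minimize(lambda n: -F(n),
               x0=np.full(len(g), N / len(g)),
               constraints=[{"type": "eq", "fun": lambda n: np.sum(n) - N}],
               bounds=[(1e-9, None)] * len(g))
n_real = res.x                  # continuous optimum: n_i proportional to g_i
print("continuous optimum:", n_real)

# look at a few integer points near the continuous optimum
best = None
for trial in (np.floor(n_real), np.round(n_real), np.ceil(n_real)):
    trial = trial.astype(int)
    if trial.sum() == N and (trial > 0).all():
        if best is None or F(trial) > F(best):
            best = trial
print("nearby integer candidate:", best)
```

At the continuous optimum the stationarity condition ##g_i/n_i = \lambda## makes ##n_i## proportional to ##g_i##, which the rounding step then adjusts to whole numbers.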
 
  • #3
Which of the following two solutions is correct?
##1 + g_i/n_i - \lambda = 0##
or ##g_i/n_i - \lambda = 0##
 
  • #4
Philip Koeck said:
Which of the following two solutions is correct?
##1 + g_i/n_i - \lambda = 0##
or ##g_i/n_i - \lambda = 0##

Since ##\lambda## isn't a fixed number, it makes no difference which of those you use. They lead to the same answer for ##n_i## once you eliminate ##\lambda##.
 
  • #5
I'm still mystified.
Can we look at the actual problem instead? A bit more complicated, I'm afraid.
I want to maximize the F given below under the constraints ##\sum_i n_i = N## and ##\sum_i n_i u_i = U##, with constant N and U. I'll use ##\alpha## and ##\beta## for the multipliers. The ##g_i## and ##u_i## are given parameters.

##F = N \ln N - N + \sum_i ( n_i \ln g_i - n_i \ln n_i + n_i )##

Are both the following solutions correct, would you say?

##\ln N + \ln g_i - \ln n_i - \alpha - \beta u_i = 0##

##\ln g_i - \ln n_i - \alpha - \beta u_i = 0##
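For concreteness, here is a short sketch (my own summary of the two options, not a quote from the thread) of where the optional ##\ln N## term comes from. If ##N = \sum_j n_j## is differentiated along with the ##n_i##, then ##\frac{\partial}{\partial n_i}\left[N\ln N - N\right] = \ln N##, while each term of the sum contributes ##\frac{\partial}{\partial n_i}\left[n_i\ln g_i - n_i\ln n_i + n_i\right] = \ln g_i - \ln n_i##. The stationarity condition with the two multipliers is then ##\ln N + \ln g_i - \ln n_i - \alpha - \beta u_i = 0##; if instead N is held fixed during the variation, the ##\ln N## term is absent and one gets ##\ln g_i - \ln n_i - \alpha - \beta u_i = 0##.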
 
  • #6
Looks like you are trying to derive the Boltzmann distribution. Google is your friend.
 
  • #7
Philip Koeck said:
I'm still mystified.
Can we look at the actual problem instead? A bit more complicated, I'm afraid.
I want to maximize the F given below under the constraints ##\sum_i n_i = N## and ##\sum_i n_i u_i = U##, with constant N and U. I'll use ##\alpha## and ##\beta## for the multipliers. The ##g_i## and ##u_i## are given parameters.

##F = N \ln N - N + \sum_i ( n_i \ln g_i - n_i \ln n_i + n_i )##

Are both the following solutions correct, would you say?

##\ln N + \ln g_i - \ln n_i - \alpha - \beta u_i = 0##

##\ln g_i - \ln n_i - \alpha - \beta u_i = 0##

Those lead to the exact same answers for ##n_i##. They lead to different values for ##\alpha##, but you don't care about the value of ##\alpha##.
 
  • #8
stevendaryl said:
Those lead to the exact same answers for ##n_i##. They lead to different values for ##\alpha##, but you don't care about the value of ##\alpha##.
Isn't ##\alpha## given by ##\partial F/\partial N##? How can it be different for the two solutions?
Shouldn't it be completely defined by F?
Yes, I am looking at the derivation of Boltzmann statistics, but without the correction for indistinguishability.
That's why F contains two terms that depend only on N.
 
Last edited:
  • #9
Philip Koeck said:
Isn't ##\alpha## given by ##\partial F/\partial N##? How can it be different for the two solutions?
Shouldn't it be completely defined by F?

Do you agree that the two solutions lead to the same values for ##n_i##?

In both cases, the solution is: ##n_i = N g_i e^{-\beta u_i}/\sum_j (g_j e^{-\beta u_j})##

The two different values for ##\alpha## differ by ##\ln N##.

What's important is ##n_i##, not ##\alpha##.
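To spell out the step (my reading of the argument above, not a quote): from ##\ln g_i - \ln n_i - \alpha - \beta u_i = 0## one gets ##n_i = g_i e^{-\alpha - \beta u_i}##, and imposing ##\sum_i n_i = N## fixes ##e^{-\alpha} = N/\sum_j g_j e^{-\beta u_j}##. Starting instead from ##\ln N + \ln g_i - \ln n_i - \alpha' - \beta u_i = 0## gives ##n_i = N g_i e^{-\alpha' - \beta u_i}## with ##e^{-\alpha'} = 1/\sum_j g_j e^{-\beta u_j}##. Both reproduce ##n_i = N g_i e^{-\beta u_i}/\sum_j g_j e^{-\beta u_j}##, and ##\alpha' = \alpha + \ln N##.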
 
  • #10
stevendaryl said:
Do you agree that the two solutions lead to the same values for ##n_i##?

In both cases, the solution is: ##n_i = N g_i e^{-\beta u_i}/\sum_j (g_j e^{-\beta u_j})##

The two different values for ##\alpha## differ by ##\ln N##.

What's important is ##n_i##, not ##\alpha##.
I would write the solutions as follows:
##n_i = N g_i \exp(-\alpha - \beta u_i)##
and
##n_i = g_i \exp(-\alpha - \beta u_i)##
I see that you remove ##\alpha## from the solutions by introducing the normalization sum.

I agree that if the two values of ##\alpha## differ by ##\ln N## the two results are the same, but how do you argue that the ##\alpha## should be different?
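If it helps, here is a tiny numerical sanity check (my own sketch; the values of ##g_i##, ##u_i##, ##\beta## and N are made up) showing that the two forms give identical ##n_i## once ##\alpha## is fixed by the normalization sum:

```python
# Sketch: both stationarity conditions give the same n_i once alpha is fixed
# by the normalization sum(n_i) = N (all numbers below are made up).
import numpy as np

g = np.array([1.0, 2.0, 1.5])      # hypothetical degeneracies g_i
u = np.array([0.0, 1.0, 2.0])      # hypothetical energies u_i
beta, N = 0.7, 100.0

Z = np.sum(g * np.exp(-beta * u))  # normalization sum

# condition without the ln N term:  n_i = g_i exp(-alpha - beta u_i)
alpha_1 = np.log(Z / N)            # fixed by requiring sum(n_i) = N
n_1 = g * np.exp(-alpha_1 - beta * u)

# condition with the ln N term:  n_i = N g_i exp(-alpha - beta u_i)
alpha_2 = np.log(Z)                # differs from alpha_1 by ln N
n_2 = N * g * np.exp(-alpha_2 - beta * u)

print(np.allclose(n_1, n_2))         # True
print(alpha_2 - alpha_1, np.log(N))  # both equal ln N
```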
 

1. What is the purpose of applying a constraint in the calculus of variations?

The purpose of applying a constraint in the calculus of variations is to restrict the admissible solutions to those that satisfy a given condition, and then to find the optimum among them. In the thread above, for example, the constraints fix the total particle number N and the total energy U.

2. How is a constraint represented in the calculus of variations?

A constraint is typically represented as an additional equation or condition that the solution must satisfy, and it is usually incorporated by adding it to the quantity being extremized with a Lagrange multiplier. Depending on its form it may be called a constraint equation or, if it fixes values on the boundary of the domain, a boundary condition.

3. Can a constraint be applied to any problem in the calculus of variations?

Yes, a constraint can be applied to any problem in the calculus of variations as long as it is mathematically feasible and relevant to the problem at hand. However, some problems may not require the use of constraints to find a solution.

4. What are the different types of constraints that can be applied in the calculus of variations?

There are two main types of constraints that can be applied in the calculus of variations: equality constraints and inequality constraints. Equality constraints require the solution to satisfy a specific equation, while inequality constraints require the solution to satisfy a specific inequality.
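As a minimal finite-dimensional analogue (a sketch, not tied to the problem in the thread above; the objective and constraint are made up), numerical optimizers make this distinction explicit:

```python
# Sketch: equality vs. inequality constraints in scipy.optimize.minimize.
# Minimize x^2 + y^2 subject to (a) x + y = 1 and (b) x + y >= 1.
from scipy.optimize import minimize

objective = lambda v: v[0]**2 + v[1]**2

eq_con   = {"type": "eq",   "fun": lambda v: v[0] + v[1] - 1}   # x + y = 1
ineq_con = {"type": "ineq", "fun": lambda v: v[0] + v[1] - 1}   # x + y >= 1

res_eq   = minimize(objective, x0=[0.0, 0.0], constraints=[eq_con])
res_ineq = minimize(objective, x0=[0.0, 0.0], constraints=[ineq_con])

print(res_eq.x, res_ineq.x)  # both approximately [0.5, 0.5]: the inequality is active here
```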

5. How does applying a constraint affect the solution to a problem in the calculus of variations?

Applying a constraint can significantly change the solution of a variational problem: it limits the set of admissible solutions, and each constraint adds a Lagrange-multiplier term to the stationarity conditions, with the multipliers fixed afterwards by the constraints themselves (as with ##\alpha## and ##\beta## in the thread above). In some cases it also makes the problem harder to solve.
