Why must ∂S/∂α_n = -β_n, where S is the complete integral?

AI Thread Summary
The discussion centers on the relationship between the Jacobi complete integral and the constants obtained by differentiating that integral with respect to the constants α_k. It clarifies that the equation ∂S/∂α_1 = -β_1 does not imply that ∂²S/∂α_1∂t must be zero, since S generally depends on time both explicitly and implicitly through the variables q_k(t). The constants β_k are not arbitrarily chosen; they arise from Hamilton's equations for the transformed coordinates, whose Hamiltonian vanishes. The total time derivative of S is not constant, but the derivative of S with respect to each α_k remains constant during the evolution of the system. The conversation highlights the importance of keeping track of how S depends on time and on the other variables.
giraffe714
TL;DR Summary
I don't understand why the derivative of the Jacobi complete integral with respect to the constant α must be another constant, and furthermore why that constant is negative.
As stated in the TL;DR, I don't understand why the derivative of the Jacobi complete integral with respect to the constant α must be another constant, and furthermore why that constant is negative. The textbook I'm following, van Brunt's The Calculus of Variations, proves it by taking:
$$ \frac{\partial}{\partial \alpha_1} (H + \frac{\partial S}{\partial t}) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} = 0 $$
and then just stating that
$$ \frac{\partial S}{\partial \alpha_1} = -\beta_1 $$
is satisfied identically, but I can't figure out (a) how those two equations are even related, and (b) from what I can tell, if ## \frac{\partial S}{\partial \alpha_1} = -\beta_1 ## where ##\beta_1## is constant, doesn't that mean ##\frac{\partial^2 S}{\partial \alpha_1 \partial t} ## must be zero? But if it's zero, then in the original equation
$$ \frac{\partial}{\partial \alpha_1} (H + \frac{\partial S}{\partial t}) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} = 0 $$
the term ## \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} ## must also be zero, and I just don't see why that would be true.
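For reference, the setup being assumed (standard Hamilton-Jacobi notation; the Hamilton-Jacobi equation is (8.31) in van Brunt) is that ##S = S(t, q_1, \dots, q_n, \alpha_1, \dots, \alpha_n)## satisfies
$$ \frac{\partial S}{\partial t} + H\left(t, q_1, \dots, q_n, \frac{\partial S}{\partial q_1}, \dots, \frac{\partial S}{\partial q_n}\right) = 0 $$
identically in the ##\alpha_k##, with ##p_k = \frac{\partial S}{\partial q_k}##; the first equation above is what results from differentiating this with respect to ##\alpha_1##.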
 
giraffe714 said:
The textbook I'm following, van Brunt's The Calculus of Variations, proves it by taking:
$$ \frac{\partial}{\partial \alpha_1} (H + \frac{\partial S}{\partial t}) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} = 0 $$
and then just stating that
$$ \frac{\partial S}{\partial \alpha_1} = -\beta_1 $$
I happened to find online access to this text, so I will refer to specific pages and equations in the book.

The equation ##\dfrac{\partial S}{\partial \alpha_1} = -\beta_1## is just a specific example of the general equation ##P_k = -\dfrac{\partial S}{\partial Q_k}## found in (8.25) on page 173. Note that at the bottom of page 175 we have ##Q_k = \alpha_k## and ##P_k = \beta_k##.

giraffe714 said:
from what I can tell, if ## \frac{\partial S}{\partial \alpha_1} = -\beta_1 ## where ##\beta_1## is constant, that means ##\frac{\partial^2 S}{\partial \alpha_1 \partial t} ## must be zero?
No, ##\dfrac{\partial^2 S}{\partial \alpha_1 \partial t} ## does not have to be zero. This can be confusing. ##S(t, q, \alpha)## generally depends on ##t## both explicitly and also implicitly through the various ##q_k(t)##. So, ##\dfrac{\partial^2 S}{\partial \alpha_1 \partial t} ## will also generally depend on ##t## explicitly and implicitly through the ##q_k(t)##. As time progresses, ##t## and ##q(t)## change in such a way that ##\beta_1 = - \dfrac{\partial S}{\partial \alpha_1} ## remains constant during the time evolution of the system. Similarly for the other ##\beta_k##.

However, ##\dfrac{\partial^2 S}{\partial \alpha_1 \partial t} ## is generally not zero. The point is that the notation ##\dfrac{\partial^2 S}{\partial t \partial \alpha_1} ## is interpreted as $$\frac{\partial^2 S}{\partial t \partial \alpha_1} = \left. \frac{\partial}{\partial t} \left( \frac{\partial S}{\partial \alpha_1} \right)\right|_{q_k, \alpha_k}$$ where the ##q_k## are held constant while taking the partial derivative with respect to ##t##, so it does not capture the implicit time dependence through the ##q_k(t)##. See footnote 7 on page 178.
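Putting the two pieces together (a sketch of the standard argument, not quoted verbatim from the book): evaluate ##\dfrac{\partial S}{\partial \alpha_1}## along a solution ##q_k(t)## of Hamilton's equations and take the total time derivative,
$$ \frac{d}{dt}\frac{\partial S}{\partial \alpha_1} = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k}\,\dot q_k = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k}\,\frac{\partial H}{\partial p_k} = 0, $$
where the second step uses Hamilton's equation ##\dot q_k = \dfrac{\partial H}{\partial p_k}## and the last step is the differentiated Hamilton-Jacobi equation you quoted. So ##\dfrac{\partial S}{\partial \alpha_1}## is constant along the motion even though neither term is zero by itself; calling that constant ##-\beta_1## gives ##\dfrac{\partial S}{\partial \alpha_1} = -\beta_1##.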

As an example, look at the expression for ##\beta_2## on page 181 for the geometrical optics example: $$\beta_2 =\frac{\alpha_2 A}{\mu^2} - t$$ where ##A## is the function of ##q_1(t)## given on page 180. ##\beta_2## is a constant of the motion. But $$\frac{\partial^2 S}{\partial t \partial \alpha_2} = \frac{\partial}{\partial t}(-\beta_2) = - \frac{\partial }{\partial t} \left(\frac{\alpha_2 A}{\mu^2} - t \right ) = 1$$ since ##A## is fixed while taking the partial with respect to ##t##.
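To see explicitly how the implicit dependence keeps ##\beta_2## constant (using only the expression above, with ##A## a function of ##q_1(t)## as stated): along the motion
$$ 0 = \frac{d\beta_2}{dt} = \frac{d}{dt}\left(\frac{\alpha_2 A}{\mu^2}\right) - 1, $$
so the implicit time dependence through ##q_1(t)## contributes exactly ##+1##, cancelling the explicit ##-t##. The partial derivative at fixed ##q_k##, on the other hand, sees only the explicit ##-t##, which is why ##\dfrac{\partial^2 S}{\partial t \partial \alpha_2} = 1## rather than ##0##.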

---------

As a side note, I get slightly different results for ##\beta_1## and ##\beta_2## in the geometrical optics example. I get $$\beta_1 = \dfrac{2 \alpha_1 A}{\mu^2} - q_2$$ $$\beta_2 = \dfrac{2 \alpha_2 A}{\mu^2} - t.$$ These have a factor of 2 in the fractions that does not appear in the expressions in the book. But I could have made a mistake. My results for ##q_1(t)## and ##q_2(t)## are $$q_1 = \frac{\mu^2}{4\alpha_2^2}(\beta_2+t)^2 + \frac{\alpha_1^2+\alpha_2^2}{\mu^2}$$ $$q_2 = \frac{\alpha_1}{\alpha_2}t + \frac{\alpha_1}{\alpha_2}\beta_2 - \beta_1$$
 
TSny said:
##S(t, q, \alpha)## generally depends on ##t## both explicitly and also implicitly through the various ##q_k(t)##.
That actually makes sense, thank you. I think I was just missing that ##S## is a function of the ##q_k##, which are themselves functions of ##t##, so that clears it up. One question, though, just to make sure: the fact that ## P_k = \beta_k = \text{const.} ## doesn't *follow* from ## Q_k = \alpha_k = \text{const.} ##; instead they're just *chosen* to be this way? And further, there is no actual guarantee that an ##S## satisfying this can be found or even exists?
TSny said:
But $$\frac{\partial^2 S}{\partial t \partial \alpha_2} = \frac{\partial}{\partial t}(-\beta_2) = - \frac{\partial }{\partial t} \left(\frac{\alpha_2 A}{\mu^2} - t \right ) = 1$$ since ##A## is fixed while taking the partial with respect to ##t##.
And just to be perfectly clear: when we're taking this partial derivative, we disregard the ##q_k(t)## dependence, which is why it doesn't have to be zero. If this is true, then I assume the total time derivative of ##S## would be constant, correct?
 
giraffe714 said:
One question though, just to make sure: the fact that ## P_k = \beta_k = const. ## doesn't *follow* from ## Q_k = \alpha_k = const. ##, instead they're just *chosen* to be this way
I don't think the ##P_k##'s are "chosen" to be constants. See the first part of section 8.4.1, which explains why both the ##P_k##'s and the ##Q_k##'s are constants. They are constants because of Hamilton's equations in the transformed variables, $$\dot Q_k = \frac{\partial \hat H}{\partial P_k}$$ $$\dot P_k = -\frac{\partial \hat H}{\partial Q_k}$$ together with the requirement that ##\hat H = 0##. Thus ##\dot Q_k = 0## and ##\dot P_k = 0##.


giraffe714 said:
- and further there is also no actual guarantee that an S that satisfies this can be found or exists?
##S## exists as long as a solution to the Hamilton-Jacobi equation (8.31) exists.

giraffe714 said:
And just to be perfectly clear - when we're taking this partial derivative, we disregard the q_k(t) dependence, which is why it doesn't have to be zero.
Yes, that's right.

giraffe714 said:
If this is true then I assume the total time derivate of S would be constant, correct?
The total time derivative of ##S(q_k, \alpha_k, t)## is not a constant. The total time derivative of ##\dfrac{\partial S}{\partial \alpha_k}## is zero. See (8.38). And this corresponds to ##\dfrac{\partial S}{\partial \alpha_k} = -\beta_k## being constant.

The time derivative of ##S## is interesting: $$\frac{dS}{dt} = \sum_k \frac{\partial S}{\partial q_k} \dot q_k + \frac{\partial S}{\partial t} = \sum_k p_k \dot q_k - H(q_k, p_k, t) = L$$ where ##L## is the Lagrangian of the system in the original coordinates ##q_k##. Here, we used ##\frac{\partial S}{\partial q_k} = p_k## and ##\frac{\partial S}{\partial t} = -H##.
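A follow-up that makes the earlier point concrete (a standard consequence, not specific to the book's presentation): integrating ##\dfrac{dS}{dt} = L## along a solution gives
$$ S\big(t, q(t), \alpha\big) - S\big(t_0, q(t_0), \alpha\big) = \int_{t_0}^{t} L\,dt', $$
so ##S## evaluated along the trajectory is the action up to an additive constant. That is why ##S## itself is not constant in time, even though each ##\dfrac{\partial S}{\partial \alpha_k} = -\beta_k## is.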
 