Confusion in variation derivative

AI Thread Summary
The discussion centers on deriving Hamilton's generalized principle from D'Alembert's principle, specifically addressing the justification of the equation involving the variation of velocity, ## \frac{d}{dt} \delta r_i = \delta [\frac{d}{dt}r_i] ##. The user questions whether ## \delta \dot r_i ## remains constant when shifting the origin while keeping time constant, leading to confusion about whether ## \frac{d}{dt} \delta r_i ## should be zero. Clarifications are provided that the functional variation behaves like a standard differential, allowing for the total derivative to be treated similarly to an independent variable. The discussion emphasizes that the derivative of a sum equals the sum of the derivatives, reinforcing the additive nature of the variations involved. Overall, the conversation highlights the nuances of functional variations in the context of Hamiltonian mechanics.
weezy
This link shows us how to derive Hamilton's generalised principle starting from D'Alembert's principle. While I had no trouble understanding the derivation I am stuck on this particular step.
[Attachment: Screen Shot 2017-07-22 at 7.19.53 PM.png — the derivation step in question]

I can't justify why ## \frac{d}{dt} \delta r_i = \delta [\frac{d}{dt}r_i] ##. This is because if I consider ##\delta \dot r_i## to be spatial variation in velocity of a particle as I shift my origin keeping time constant, doesn't it stay the same i.e. doesn't ##\delta \dot r_i = 0##?

Also, if I assume that throughout a large section of the path ##\delta r_i = \text{constant}##, don't I get ##\frac{d}{dt} \delta r_i = 0##?

Is this supposed to have a non-zero value or are we simply playing with 0's here?

Edit: If I treat ##\delta## as an operator acting on ##r##, I don't see a problem arising — we interchange the order of commuting operators in quantum mechanics all the time. Can we say the same for this?
 
This is technically an assumed constraint and not a derived result. But it follows from the understanding of what the functional differential is doing.
You are introducing a variation in the time-dependent variable: ##r_i \to \tilde{r}_i = r_i + \delta r_i##. We understand that we are to extend the time derivative without introducing additional independent variations, so that ##\dot{r}_i \to \tilde{\dot{r}}_i = \dot{\tilde{r}}_i##. It is implied that the functional variation behaves just like a standard differential in that it is a local linear deviation from the previous value.
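As a quick symbolic sanity check of this commutation, here is a minimal sketch using SymPy, with an arbitrarily chosen sample path and variation shape (the names ##r##, `eta`, and `eps` are illustrative only — any smooth choices would do):

```python
import sympy as sp

t, eps = sp.symbols('t epsilon')
r = sp.sin(t)                  # an arbitrary sample path r(t)
eta = t**2 * (1 - t)           # variation shape, vanishing at t = 0, 1
r_tilde = r + eps * eta        # the varied path: r + delta r, with delta r = eps*eta

# "delta" of a quantity: derivative with respect to eps, evaluated at eps = 0
delta_r    = sp.diff(r_tilde, eps).subs(eps, 0)               # = eta
delta_rdot = sp.diff(sp.diff(r_tilde, t), eps).subs(eps, 0)   # delta of dr/dt

# The time derivative of the variation equals the variation of the derivative.
print(sp.simplify(sp.diff(delta_r, t) - delta_rdot))  # → 0
```

Because the variation enters additively and the two derivatives (with respect to ##t## and with respect to `eps`) act on independent slots, they commute for any choice of path and variation shape.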

The action here of ##\delta## on ##r## is the same as the action of ##d## on an independent vector variable. If you had a function of a vector ##f(\mathbf{x})##, then its variation is
$$df(\mathbf{x}) = \lim_{h\to 0}\frac{ f(\mathbf{x} + h\, \mathbf{dx})-f(\mathbf{x})}{h} = \frac{df}{d\mathbf{x}}[\mathbf{dx}]$$

Here the total derivative ##\frac{df}{d\mathbf{x}}## of ##f## is a linear mapping from the differential vector ##\mathbf{dx}## to the range of ##f##. The differential ##\mathbf{dx}=\langle dx_1, dx_2, dx_3, \cdots\rangle## is an independent auxiliary (vector) variable representing a local linear coordinate with origin at the point indicated by the original vector ##\mathbf{x}##. You can say that the differential operator ##d## maps the original independent vector variable to this auxiliary variable, but keep in mind that it is an independent variation. While we are at it, you can think of the vector as a function from its component index into the reals: ##x(1) = x_1##, ##x(2)=x_2##, etc. The non-trivial action of the differential is effected by its action on functions of ##\mathbf{x}##.
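A small numerical illustration of the total derivative as a linear map: for a sample ##f## (the function, the point `x`, and the variation `dx` below are arbitrary choices, not anything from the original problem), applying ##\frac{df}{d\mathbf{x}}## to ##\mathbf{dx}## should match a finite-difference quotient along ##\mathbf{dx}##:

```python
import numpy as np

def f(x):
    # a sample scalar function of a vector
    return x[0]**2 * x[1] + np.sin(x[2])

def grad_f(x):
    # the total derivative df/dx, here represented as a gradient row vector
    return np.array([2*x[0]*x[1], x[0]**2, np.cos(x[2])])

x  = np.array([1.0, 2.0, 0.5])    # the point, i.e. the original vector variable
dx = np.array([0.3, -0.1, 0.2])   # an independent auxiliary variation

# df = (df/dx)[dx], checked against the defining limit via a small h
h = 1e-6
fd = (f(x + h*dx) - f(x)) / h
assert abs(grad_f(x) @ dx - fd) < 1e-4
```

The linear map is applied to `dx` via the dot product `grad_f(x) @ dx`; the variation `dx` is an independent variable, exactly as described above.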

Now consider a continuously indexed "vector", i.e. a continuous function ##r(t)##, thinking of ##t## as the index. To avoid confusion with the usual differential of a function, i.e. ##dr(t) = \dot{r}(t)\,dt##, we use the different differential notation ##\delta r## to indicate (in the function space where ##r## lives) an independent variation of the choice of function: ##r(t) \to r(t) + \delta r(t)##.

The fact that the derivative of the variation is the variation of the derivative is no more mysterious than the fact that, for the ##\mathbf{x}## example, the variation in the difference of successive components is the difference in the variation of successive components:
$$\Delta: \mathbf{x} \mapsto x_2 - x_1$$
and thus
$$d\Delta \mathbf{x} = \Delta d\mathbf{x} = dx_2 - dx_1$$
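The discrete analogue is easy to check directly — here with arbitrarily chosen component values and variations (the names `x`, `dx`, and `Delta` just mirror the notation above):

```python
import numpy as np

x  = np.array([1.0, 4.0, 9.0])    # components x_1, x_2, x_3
dx = np.array([0.1, 0.2, 0.3])    # independent variations of each component

def Delta(v):
    # difference of successive components: v_2 - v_1
    return v[1] - v[0]

# d(Delta x) = Delta(x + dx) - Delta(x) equals Delta(dx),
# because Delta is linear and the variation enters additively.
assert np.isclose(Delta(x + dx) - Delta(x), Delta(dx))
```

Since both ##\Delta## and ##d## are linear operations, their order can be interchanged — which is the finite-dimensional shadow of ##\frac{d}{dt}\delta r = \delta \frac{dr}{dt}##.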

But the short answer is that the "justification" comes down to the fact that the derivative of a sum is the sum of the derivatives, and we are (independently) varying the function additively: ##r \mapsto r + \delta r##.
 