Why are y and y' treated as independent in calculus of variation?

HAMJOOP
In the calculus of variations, we use the Euler-Lagrange equation to minimize an integral of the form

$$\int f(y, y'; x) \, dx.$$

Why do we treat ##y## and ##y'## as independent?
 
Because there is no algebraic relation between a function and its derivative.

This is why you need boundary conditions to solve differential equations.
 
UltrafastPED said:
Because there is no algebraic relation between a function and its derivative.

This is why you need boundary conditions to solve differential equations.

Sorry, but this is a bogus answer. A function may depend on another function non-algebraically, and that is perfectly fine as far as functional dependence goes. Not to mention that the dependence may perfectly well be algebraic: for ##y = e^x##, for instance, ##y' = y##.

The real reason is that we use the partial derivatives to obtain an expression for the difference ##F(z + \Delta z, y + \Delta y, x) - F(z, y, x)##, which is approximately ##F_z \Delta z + F_y \Delta y## when ##\Delta z## and ##\Delta y## are sufficiently small. This expression holds generally, and in particular it holds when ##z## represents the derivative of ##y##; all it takes is that both variations be small enough.

If ##y = f(x)##, its variation is ##\delta y = \epsilon g(x)##, and consequently ##\delta y' = \epsilon g'(x)##. If ##\epsilon## is small enough, then by the result above, ##F((y + \delta y)', y + \delta y, x) - F(y', y, x) \approx \epsilon F_{y'} g'(x) + \epsilon F_y g(x)##, where ##F_{y'}## is just a fancy symbol equivalent to ##F_z##, meaning partial differentiation with respect to the first argument. Then, under the integral sign, we use integration by parts to convert that to ##\epsilon \left( -(F_{y'})' + F_y \right) g(x)##. Observe that we do use the relationship between ##y## and ##y'## in this final step.
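To spell out that last step (a sketch, assuming fixed endpoints so that ##g(x_1) = g(x_2) = 0##, which the post above leaves implicit):

$$\delta S = \epsilon \int_{x_1}^{x_2} \left( F_y \, g(x) + F_{y'} \, g'(x) \right) dx = \epsilon \Big[ F_{y'} \, g(x) \Big]_{x_1}^{x_2} + \epsilon \int_{x_1}^{x_2} \left( F_y - (F_{y'})' \right) g(x) \, dx.$$

The boundary term vanishes, and since ##g## is otherwise arbitrary, requiring ##\delta S = 0## for every such ##g## yields the Euler-Lagrange equation ##F_y - (F_{y'})' = 0##.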
 
Would the following also be correct reasoning?

We want to find the least action for:

##S = \int_{x_1}^{x_2} f(y,y',x) \, dx##

While this may look as though ##y##, ##y'##, and ##x## are simple independent variables, we are actually looking for the function ##y(x)## that provides this least action, so what this notation really means is this:

##S = \int_{x_1}^{x_2} f[y(x), \frac d {dx} y(x), x] \, dx##

So y and y' are not truly independent.
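As a concrete illustration of how this plays out (a standard arc-length example, not taken from the thread above): for ##f(y, y', x) = \sqrt{1 + y'^2}##, the length element of the curve ##y(x)##, treating ##y## and ##y'## as independent slots of ##f## gives

$$\frac{\partial f}{\partial y} = 0, \qquad \frac{\partial f}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}},$$

and the Euler-Lagrange equation ##\frac{d}{dx} \frac{\partial f}{\partial y'} = \frac{\partial f}{\partial y}## reduces to ##y'' = 0##, i.e. straight lines. The dependence of ##y'## on ##y## only re-enters through the total derivative ##\frac{d}{dx}##.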
 