Why are y and y' treated as independent in the calculus of variations?

SUMMARY

In the calculus of variations, Euler's equation is used to minimize the integral ∫f(y, y'; x) dx. The first reply attributes the treatment of y and y' as independent variables to the absence of an algebraic relationship between a function and its derivative, which is also offered as the reason boundary conditions are needed to solve differential equations. A later reply challenges this and gives the real reason: the partial derivatives merely provide a first-order expansion of F that is valid whenever the variations are small, and the functional dependence of y' on y enters explicitly in the integration-by-parts step. The thread closes by noting that y and y' only appear independent; they are linked through y(x) in the least-action integral.

PREREQUISITES
  • Understanding of Euler's equation in the calculus of variations
  • Familiarity with boundary conditions in differential equations
  • Knowledge of partial derivatives and their applications
  • Concept of functional dependency in mathematical analysis
NEXT STEPS
  • Study the derivation and applications of Euler's equation in the calculus of variations
  • Explore boundary value problems in differential equations
  • Learn about functional analysis and its implications in calculus
  • Investigate integration by parts in the context of variational calculus
USEFUL FOR

Mathematicians, physicists, and engineers interested in advanced calculus, particularly those working with variational principles and differential equations.

HAMJOOP
In the calculus of variations, we use Euler's equation to minimize an integral of the form

∫f(y, y'; x) dx

Why do we treat y and y' as independent?
 
Because there is no algebraic relation between a function and its derivative.

This is why you need boundary conditions to solve differential equations.
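To illustrate the point about boundary conditions, here is a small SymPy sketch (my own example, not from the thread): the ODE y' = y has a one-parameter family of solutions, and only a boundary condition pins down a unique one.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x), y(x))

# The general solution contains an arbitrary constant C1,
# so the ODE alone does not determine y uniquely.
general = sp.dsolve(ode, y(x))
print(general)

# A boundary condition, e.g. y(0) = 2, fixes the constant.
particular = sp.dsolve(ode, y(x), ics={y(0): 2})
print(particular)
```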
 
UltrafastPED said:
Because there is no algebraic relation between a function and its derivative.

This is why you need boundary conditions to solve differential equations.

Sorry, but this is a bogus answer. A function may depend on another function non-algebraically, and that is perfectly fine as far as functional dependency goes. Not to mention that the dependency may perfectly well be algebraic.

The real reason is that we use the partial derivatives to obtain an expression for the difference ## F(z + \Delta z, y + \Delta y, x) - F(z, y, x) ##, which is approximately ## F_z \Delta z + F_y \Delta y ## when ##\Delta z## and ##\Delta y## are sufficiently small. This expansion holds generally, and in particular it holds when ## z ## represents the derivative of ## y ##; all it takes is that the variations of both be small enough.

If ## y = f(x) ##, its variation is ## \delta y = \epsilon g(x) ##, and hence ## \delta y' = \epsilon g'(x) ##. If ## \epsilon ## is small enough, then by the result above, ## F((y + \delta y)', y + \delta y, x) - F(y', y, x) \approx \epsilon F_{y'} g'(x) + \epsilon F_y g(x) ##, where ## F_{y'} ## is just a fancy symbol equivalent to ## F_z ##, meaning partial differentiation with respect to the first argument.

Then we integrate over ## x ## and apply integration by parts to the first term; the boundary term ## [F_{y'} g] ## vanishes because ## g ## is zero at the fixed endpoints, which converts the integrand to ## \epsilon (-(F_{y'})' + F_y) g(x) ##. Observe that we do use the relationship between ## y ## and ## y' ## in this final step.
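The calculation above can be checked on a concrete case. The integrand F = y'² + y², the trial function y = sin x, and the variation g = x(1 − x) below are my own illustrative choices; the script verifies symbolically that the first variation dS/dε at ε = 0 equals ∫(F_y − (F_{y'})') g dx when g vanishes at the endpoints.

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

# Illustrative choices (not from the thread): integrand F, trial y, variation g
yp, yv = sp.symbols('yp yv')      # independent "slots" for y' and y
F = yp**2 + yv**2                 # F(y', y, x) = y'^2 + y^2
y = sp.sin(x)                     # trial function
g = x*(1 - x)                     # variation; vanishes at x = 0 and x = 1

def S(func):
    """Action S[func] = integral of F(func', func, x) over [0, 1]."""
    return sp.integrate(sp.expand(F.subs({yp: sp.diff(func, x), yv: func})),
                        (x, 0, 1))

# First variation computed directly: d/d(eps) of S[y + eps*g] at eps = 0
direct = sp.diff(S(y + eps*g), eps).subs(eps, 0)

# Same quantity from the Euler-Lagrange expression (F_y - (F_{y'})') g
Fy  = sp.diff(F, yv).subs({yp: sp.diff(y, x), yv: y})
Fyp = sp.diff(F, yp).subs({yp: sp.diff(y, x), yv: y})
euler = sp.integrate((Fy - sp.diff(Fyp, x))*g, (x, 0, 1))

assert sp.simplify(direct - euler) == 0   # the two agree
```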
 
Would the following also be correct reasoning?

We want to find the least action for:

##S = \int_{x_1}^{x_2} f(y,y',x) \, dx##

While this may look as though y, y' and x are simple independent variables, since we are actually looking for the function y(x) that yields this least action, what this notation really means is this:

##S = \int_{x_1}^{x_2} f[y(x), \frac d {dx} y(x), x] \, dx##

So y and y' are not truly independent.
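This is also how SymPy's euler_equations routine operates: the integrand is differentiated partially with respect to the y and y' slots as if they were independent, and the dependence y' = dy/dx enters only through the total x-derivative of f_{y'}. A sketch with the arc-length integrand (my choice of example):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Arc-length integrand (a standard example): f = sqrt(1 + y'^2)
f = sp.sqrt(1 + y(x).diff(x)**2)

# euler_equations differentiates f partially with respect to the y and y'
# slots as if independent, then takes the total x-derivative of f_{y'}.
eq = euler_equations(f, y(x), x)[0]

# The resulting equation is equivalent to y''(x) = 0: extremals are straight lines.
print(sp.simplify(eq))
```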
 
