What is the Functional Differentiation of F[y(x)] with Respect to y(x')?

  • Thread starter: leroyjenkens
  • Tags: Functional

Homework Statement



$$F[y(x)]=\int \left[y(x)\frac{dy(x)}{dx}+y(x)^{2}\right]\mathrm dx$$

Homework Equations


δ(x-x')

I think this is the Kronecker Delta. It might be the Dirac Delta.

The Attempt at a Solution



I have the whole thing written in my notes, I just don't know how to make sense of it.

The problem starts off like what I gave up top.
The first step is \frac{δF[y(x)]}{δy(x')}=\int [\frac{δy(x)}{δy(x')}\frac{dy(x)}{dx}+y(x)\frac{δ}{δy(x')}\frac{dy(x)}{dx}+\frac{δ(x)}{δy(x')}y(x)^2]\,dx

This is the first step and I'm already lost. We're going over this in classical mechanics and I've never done this in any math class before.

I really have no idea what's going on.
So he takes the partial derivative of both sides with respect to x prime? Does that first step look right? I've never been so confused in my life. I have no idea why the entire right side of that equation looks like that after step 1.
 
This δ does not mean a Dirac or Kronecker delta, if this is like the way I was taught calculus of variations in classical mechanics. It just means "change in".
 
MisterX said:
This δ does not mean a Dirac or Kronecker delta, if this is like the way I was taught calculus of variations in classical mechanics. It just means "change in".

Oh. Well that thing shows up a step later. ##\frac{\delta F[y(x)]}{\delta y(x')}## turns into ##\delta(x-x')## for some reason. I have no idea why.
 
This is definitely calculus of variations. Are you sure the notation was not introduced in an earlier lecture? Perhaps you are supposed to have some book as a reference?
 
voko said:
This is definitely calculus of variations. Are you sure the notation was not introduced in an earlier lecture? Perhaps you are supposed to have some book as a reference?

He switched from using ∂ to using δ. But I'm not sure why, it just says in my notes "change in notation" and then he starts using those.
We weren't assigned any book. Everything comes from lecture notes, which is probably why I'm so confused. I have to frantically copy down notes instead of focusing completely on listening to the lecture.

Have you guys taken a calculus of variations class?
 
Then I would suggest you ask around whether somebody else present at that lecture has a more complete version of the notes. I find it very hard to believe that such an important part was not explained. And if you all agree that it is not there, then perhaps you should ask the lecturer to clarify it.
 
voko said:
I find it very hard to believe that such an important part was not explained.
Unfortunately I find it easy to believe. I remember with horror a course in classical mechanics that touched on these subjects. The teacher never bothered with anything that resembled definitions that made sense.
 
leroyjenkens said:
$$F[y(x)]=\int \left[y(x)\frac{dy(x)}{dx}+y(x)^{2}\right]\mathrm dx$$
This notation already doesn't make sense to me. The right-hand side is OK, but not the left-hand side. I assume that y is a function. In that case, y(x) is a member of its codomain. The left-hand side should probably be F[y]. Note that the right-hand side doesn't depend on x.

I don't understand what you're supposed to do. Is the task just to find the functional derivative of this F? Do you know what the answer is supposed to be?

leroyjenkens said:
δ(x-x')

I think this is the Kronecker Delta. It might be the Dirac Delta.
This notation is only used for the Dirac delta. But in expressions like δF[y]/δy, the delta has nothing to do with any of those, and is more like a ∂.
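The rule the notes seem to be using (sketched here as the usual physicists' heuristic, not a rigorous definition, and not stated in this thread) is
$$\frac{\delta y(x)}{\delta y(x')}=\delta(x-x'),$$
which one can motivate by writing ##y(x)=\int y(x'')\,\delta(x-x'')\,\mathrm dx''## and formally differentiating with respect to the value ##y(x')##: changing y at the single point x' only affects the integrand at ##x''=x'##.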

leroyjenkens said:
I have the whole thing written in my notes, I just don't know how to make sense of it.

The problem starts off like what I gave up top.
The first step is
$$\frac{\delta F[y(x)]}{\delta y(x')}=\int \left[\frac{\delta y(x)}{\delta y(x')}\frac{dy(x)}{dx}+y(x)\frac{\delta}{\delta y(x')}\frac{dy(x)}{dx}+\frac{\delta(x)}{\delta y(x')}y(x)^2\right]\mathrm dx$$

This is the first step and I'm already lost. We're going over this in classical mechanics and I've never done this in any math class before.

I really have no idea what's going on.
So he takes the partial derivative of both sides with respect to x prime? Does that first step look right? I've never been so confused in my life. I have no idea why the entire right side of that equation looks like that after step 1.
As I said in my previous post, when I took a course like the one you seem to be taking now, the teacher was completely unable to make sense. I eventually came up with a way to deal with these things that seemed to make sense. I would interpret 0=δF[y]/δy as
$$0=\left.\frac{d}{d\varepsilon}\right|_0 F[y_\varepsilon],$$ where ##y_\varepsilon## is a function for each ε in some interval that contains 0. I would also write y instead of ##y_0##. This is not the proper way to define functional derivatives, but I was at least able to use this to make some sense of what the teacher was doing. (I never did it rigorously though).

Let's try it with your problem: (I'll use t instead of ε to save myself some time).
\begin{align}
\left.\frac{d}{dt}\right|_0 F[y_t] =\int \left.\frac{d}{dt}\right|_0\left(y_t(x)y_t'(x)+y_t(x)^2\right)\,\mathrm dx =\int\left( \left(\left.\frac{d}{dt}\right|_0 y_t(x)\right)y'(x)+y(x)\left.\frac{d}{dt}\right|_0 y_t'(x)+\left.\frac{d}{dt}\right|_0 y_t(x)^2\right)dx
\end{align} This is at least similar to what you're supposed to get in the first step. I suspect that the numerator you wrote as ##\delta(x)## should just be ##\delta##. If y vanishes at the endpoints of the interval we're integrating over, the second term will actually cancel the first term, so we get
$$\int\left.\frac{d}{dt}\right|_0 y_t(x)^2\,\mathrm d x.$$ The actual definition of functional derivative is included in Arnold's book, p. 55-56. http://books.google.com/books?id=Pd8-s6rOt_cC&lpg=PP1&hl=sv&pg=PA55#v=onepage&q&f=false
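The ##\left.\frac{d}{dt}\right|_0## recipe above can be checked numerically. Below is a small sketch (my own illustration, not part of the thread) that takes ##y_t = y + t\,p## with the hypothetical choices ##y=\sin x## and ##p=x(\pi-x)## on ##[0,\pi]## (both vanish at the endpoints), and compares a finite-difference derivative of F with ##\int 2yp\,\mathrm dx##:

```python
import numpy as np

# Numerical sanity check of the d/dt|_0 recipe.  With y_t(x) = y(x) + t*p(x)
# and p vanishing at the endpoints, d/dt F[y_t] at t=0 should equal
# ∫ 2 y p dx, because the y*y' part of the integrand only contributes a
# boundary term.

x = np.linspace(0.0, np.pi, 20001)

def trapz(f, x):
    """Trapezoid rule (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def F(y):
    """F[y] = ∫ (y y' + y^2) dx, with y' from finite differences."""
    return trapz(y * np.gradient(y, x) + y**2, x)

y = np.sin(x)          # hypothetical choice of y; vanishes at 0 and pi
p = x * (np.pi - x)    # direction of the variation; p(0) = p(pi) = 0

t = 1e-6
fd = (F(y + t * p) - F(y - t * p)) / (2 * t)   # d/dt F[y + t p] at t = 0
analytic = trapz(2 * y * p, x)                 # ∫ 2 y p dx  (exactly 8 here)

print(fd, analytic)    # both close to 8
```

The two printed numbers agreeing illustrates that the ##y\,y'## term really does drop out when the variation vanishes at the endpoints.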
 
Fredrik said:
Unfortunately I find it easy to believe. I remember with horror a course in classical mechanics that touched on these subjects. The teacher never bothered with anything that resembled definitions that made sense.

I was lucky, then. I first encountered variations in a course of mechanics, where they were called virtual displacements, introduced without much rigor, and used to derive Lagrange's equations without ever introducing functionals and minimizing anything. That was a long and tricky derivation, which I suspect was not ever understood by most of my pals, but I was one of the few lucky ones who did. That same course, next semester, then touched upon the variational formulation of mechanics, with functionals et al, which was plain sailing after the previous exposition. Then there was another course on optimization, where that same stuff was developed much more formally, but that was all downhill.
 
Fredrik said:
As I said in my previous post, when I took a course like the one you seem to be taking now, the teacher was completely unable to make sense. [...] The actual definition of functional derivative is included in Arnold's book, p. 55-56.

The standard way I have seen (in treatments written by mathematicians, rather than physicists or engineers) is to look at the "directional derivative" in function space. So, if
$$F(y) = \int_a^b L[y(x),y'(x)] \, dx,$$ then for ##\delta y = h p## (h is a small scalar, p is a function on [a,b]) we have
$$F(y + h p) = \int_a^b L[y(x) + hp(x),\,y'(x) + hp'(x)]\, dx = F(y) + h \int_a^b \left[ L_y [y(x),y'(x)]\, p(x) + L_{y'} [y(x),y'(x)]\, p'(x) \right] dx + O(h^2).$$
The coefficient of h above is called the directional derivative of F in the direction p. Note, in particular, if you restrict the "variations" p such that y+hp passes through the original endpoints y(a) and y(b), then we must have p(a) = p(b) = 0. For differentiable p of that type we can integrate by parts in the above to get rid of p'(x):
$$F(y + hp) - F(y) = h \left. L_{y'}\, p \right|_{a}^{b} + h \int_a^b p(x) \left[ L_y - \frac{d}{dx} L_{y'}\right](x) \, dx + O(h^2).$$ The "boundary" term is zero because p(a) = p(b) = 0, so we finally get
$$F'(y;p) \equiv \left. \frac{d}{dh} F(y + h p) \right|_{h=0} = \int_a^b p(x) \left[ L_y[y(x),y'(x)] - \frac{d}{dx} L_{y'}[y(x),y'(x)] \right] dx.$$

In your case, ##L(y,y') = y y' + y^2##, so ##L_y = y' + 2y## and ##L_{y'} = y.## Thus,
$$F'(y;p) = \int_a^b p(x) \left( y'(x) + 2y(x) - \frac{d}{dx} y(x)\right) dx = \int_a^b p(x)\, [2 y(x)] \, dx.$$ Note, however, that this would hold only for "variations" p that satisfy p(a) = p(b) = 0; if not, there would be an extra term
$$\left. L_{y'}\, p \right|_{x=b} - \left. L_{y'}\, p \right|_{x=a} = y(b)p(b)-y(a)p(a).$$
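The formula with the boundary term kept can also be sanity-checked numerically. Here is a sketch (my own illustration, not from the thread) with the hypothetical choices ##y=\sin x## and ##p\equiv 1## on ##[0,\pi/2]##, so that p does not vanish at the right endpoint:

```python
import numpy as np

# Checking the directional-derivative formula with the boundary term kept,
# for a variation p that does NOT vanish at the endpoints:
#   F'(y; p) = ∫ 2 y p dx + y(b) p(b) - y(a) p(a)

a, b = 0.0, np.pi / 2
x = np.linspace(a, b, 20001)

def trapz(f, x):
    """Trapezoid rule (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def F(y):
    """F[y] = ∫ (y y' + y^2) dx, with y' from finite differences."""
    return trapz(y * np.gradient(y, x) + y**2, x)

y = np.sin(x)          # y(a) = 0, y(b) = 1
p = np.ones_like(x)    # constant variation; p(a) = p(b) = 1

h = 1e-6
directional = (F(y + h * p) - F(y - h * p)) / (2 * h)

bulk = trapz(2 * y * p, x)               # ∫ 2 y p dx = 2
boundary = y[-1] * p[-1] - y[0] * p[0]   # y(b)p(b) - y(a)p(a) = 1

print(directional, bulk + boundary)      # both close to 3
```

Dropping the boundary term here would give 2 instead of 3, which is exactly the discrepancy the last equation warns about.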

Personally, I always found the ##\delta/\delta y## stuff to be both unsatisfactory and confusing. I don't know why people still bother to use it, when the alternatives are so much more straightforward. However, you are stuck with figuring out what your lecturer is doing; maybe being exposed to an alternative will help you fathom what is happening.
 
Thanks Ray. That was very clear. I didn't understand the ##O(h^2)## when I skimmed through that section of Arnold last night (his h is a function), but it's clear what it means in your presentation. I guess I finally understand functional derivatives.
 