I'll just add a derivation of the Euler-Lagrange equation for this simple Lagrangian, because I think this kind of stuff is fun, and because similar questions have come up before, and I expect them to come up again.
Let x:[t_1,t_2]\rightarrow\mathbb R^2 be the curve that minimizes the action. Let \big\{x_\epsilon:[t_1,t_2]\rightarrow\mathbb R^2\big\} be an arbitrary one-parameter family of curves such that x_0=x, and x_\epsilon(t_1)=x(t_1) and x_\epsilon(t_2)=x(t_2) for all \epsilon.

If it's not 100% clear already, I mean that each x_\epsilon is a curve, that the one with \epsilon=0 is the one that minimizes the action, and that all of these curves have the same endpoints as x.
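
For example (this specific family is just the standard choice, not the only one), we can take

x_\epsilon(t)=x(t)+\epsilon\eta(t)

where \eta:[t_1,t_2]\rightarrow\mathbb R^2 is any sufficiently nice curve with \eta(t_1)=\eta(t_2)=0. For this family,

\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)=\eta(t)

so the \epsilon derivatives that appear below are just the "variation" \eta.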
<br />
We have

0=\frac{d}{d\epsilon}\bigg|_0 S[x_\epsilon]=\int_{t_1}^{t_2}\frac{d}{d\epsilon}\bigg|_0 L(x_\epsilon(t),x'_\epsilon(t))dt=\int_{t_1}^{t_2}\bigg(L_{,1}(x(t),x'(t))\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)+L_{,2}(x(t),x'(t))\frac{d}{d\epsilon}\bigg|_0 x'_\epsilon(t)\bigg)dt

The notation L_{,i} means the ith partial derivative of L. There's a trick we can use to rewrite the last term above:

L_{,2}(x(t),x'(t))\frac{d}{d\epsilon}\bigg|_0 x'_\epsilon(t)=\frac{d}{dt}\bigg(L_{,2}(x(t),x'(t))\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)\bigg)-\frac{d}{dt}\bigg(L_{,2}(x(t),x'(t))\bigg)\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)
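
To spell the trick out: write \eta(t)=\frac{d}{d\epsilon}\big|_0 x_\epsilon(t), and note that for a sufficiently nice family the t and \epsilon derivatives commute, so \frac{d}{d\epsilon}\big|_0 x'_\epsilon(t)=\eta'(t). The identity above is then just the product rule

\frac{d}{dt}\Big(L_{,2}(x(t),x'(t))\,\eta(t)\Big)=\frac{d}{dt}\Big(L_{,2}(x(t),x'(t))\Big)\eta(t)+L_{,2}(x(t),x'(t))\,\eta'(t)

with the terms moved around.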
<br />
We have

L_{,1}(a,b)=\frac{d}{da}\Big(\frac 1 2 mb^2-V(a)\Big)=-V'(a)

L_{,2}(a,b)=\frac{d}{db}\Big(\frac 1 2 mb^2-V(a)\Big)=mb

so
<br />
L_{,1}(x(t),x'(t))=-V'(x(t))

L_{,2}(x(t),x'(t))=mx'(t)
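
As a quick numerical sanity check (my addition; the specific potential V(a)=\frac 1 2 ka^2 and the numbers are just assumed examples), these two partial derivatives can be verified with central finite differences:

```python
import math

m, k = 2.0, 3.0                        # sample mass and spring constant (arbitrary)
V = lambda a: 0.5 * k * a**2           # sample potential, so V'(a) = k*a
L = lambda a, b: 0.5 * m * b**2 - V(a) # the Lagrangian L(x, x')

a0, b0, h = 0.7, -1.3, 1e-6

# central finite differences for the two partial derivatives
L1 = (L(a0 + h, b0) - L(a0 - h, b0)) / (2 * h)  # should be -V'(a0) = -k*a0
L2 = (L(a0, b0 + h) - L(a0, b0 - h)) / (2 * h)  # should be m*b0

assert abs(L1 - (-k * a0)) < 1e-6
assert abs(L2 - m * b0) < 1e-6
```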
<br />
Substituting these turns the equation into

0=\int_{t_1}^{t_2}\bigg(-V'(x(t))\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)+mx'(t)\frac{d}{d\epsilon}\bigg|_0 x'_\epsilon(t) \bigg)dt

=\int_{t_1}^{t_2}\bigg(-V'(x(t))\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)+\frac{d}{dt}\bigg(mx'(t)\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)\bigg)-mx''(t)\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t)\bigg)dt

The last step might look weird, but all I'm doing there is using the product rule for derivatives. The integral of the middle term is actually zero, because the assumption that all the curves have the same endpoints implies that
<br />
\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t_1)=\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t_2)=0
<br />
All we have left after throwing away the zero term is

0=\int_{t_1}^{t_2}\bigg(-V'(x(t))-mx''(t)\bigg)\frac{d}{d\epsilon}\bigg|_0 x_\epsilon(t) dt

But this is supposed to hold for *arbitrary* one-parameter families of curves that satisfy the necessary requirements, so we can choose the derivative to the right of the parentheses to be any continuous function of t that vanishes at the endpoints, and the integral is still supposed to be zero. This is only possible if the expression in parentheses is identically zero (this fact is known as the fundamental lemma of the calculus of variations), so we end up with
<br />
mx''(t)=-V'(x(t))=F(x(t))

as promised.
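
As a numerical illustration (again my addition, with an assumed concrete example): take m=1 and V(x)=\frac 1 2 x^2, so mx''=-V'(x) becomes x''=-x, solved by x(t)=\sin t. On an interval shorter than half a period this solution is a true minimum of the action, so any perturbation that keeps the endpoints fixed should raise the discretized action, and the first-order change should vanish:

```python
import math

# Assumed example (not from the post above): m = 1, V(x) = x^2/2,
# so m x'' = -V'(x) reads x'' = -x, solved by x(t) = sin(t).
def action(eps, n=2000, t1=0.0, t2=1.0):
    """Trapezoid-rule action S = integral of (x'^2/2 - x^2/2) dt for the family
    x_eps(t) = sin(t) + eps*sin(pi*(t - t1)/(t2 - t1)),
    which keeps both endpoints fixed for every eps."""
    w = math.pi / (t2 - t1)
    dt = (t2 - t1) / n
    total = 0.0
    for i in range(n + 1):
        t = t1 + i * dt
        x = math.sin(t) + eps * math.sin(w * (t - t1))
        xp = math.cos(t) + eps * w * math.cos(w * (t - t1))
        integrand = 0.5 * xp**2 - 0.5 * x**2
        total += integrand * (dt if 0 < i < n else dt / 2)
    return total

s0 = action(0.0)
# Endpoint-fixing perturbations raise the action (true minimum here)...
assert action(0.1) > s0
assert action(-0.1) > s0
# ...and the first-order change vanishes: S(eps) - S(0) is O(eps^2).
assert abs(action(1e-4) - s0) < 1e-6
```

The second variation works out to \epsilon^2(\pi^2-1)/4>0 for this family, which is why the perturbed actions are strictly larger.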