# Calculus of variations textbook 'not under a single integral'

1. Feb 4, 2017

### dIndy

I have to find functions that maximise certain criteria. The problem, however, cannot be put "under a single integral". For example, I have to find $f(t)$, $g(t)$ that maximise:

$\int_0^{t_e}f(t)^2\,dt\int_0^{t_e}g(t)^2\,dt - \left(\int_0^{t_e}f(t)g(t)\,dt\right)^2$

with $-1 \leq f(t) \leq 1$ and $-1 \leq g(t) \leq 1$ for all $t$.

For this problem I can still guess a solution intuitively, such as $f(t)=1$ for all $t$, and $g(t)=1$ up to $t = 0.5\,t_e$ with $g(t)=-1$ afterwards.
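As a quick sanity check of that guess, here is a minimal numerical sketch (taking $t_e = 1$ for concreteness; that value is my assumption):

```python
import numpy as np

# Guessed maximiser on [0, t_e], with t_e = 1 assumed for this check:
# f(t) = 1 everywhere; g(t) = +1 on [0, t_e/2), -1 on [t_e/2, t_e].
t_e = 1.0
n = 100_000
t = (np.arange(n) + 0.5) * (t_e / n)   # midpoint grid for a simple Riemann sum
dt = t_e / n

f = np.ones_like(t)
g = np.where(t < 0.5 * t_e, 1.0, -1.0)

I_ff = (f * f).sum() * dt   # integral of f^2  -> t_e
I_gg = (g * g).sum() * dt   # integral of g^2  -> t_e
I_fg = (f * g).sum() * dt   # integral of f*g  -> 0

print(I_ff * I_gg - I_fg**2)   # ~ 1.0, i.e. t_e^2
```

Since $|f| \leq 1$ and $|g| \leq 1$ force $\int_0^{t_e} f^2\,dt \leq t_e$ and $\int_0^{t_e} g^2\,dt \leq t_e$, the objective can never exceed $t_e^2$, so this guess does attain the maximum.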

But for more complicated problems I'll no longer be able to guess the solution and will need a systematic way to find one. Most calculus of variations textbooks I've consulted (such as Gelfand's) focus on problems that can be solved with the Euler-Lagrange equation, which I do not think can be applied here.

Does anybody know a textbook that covers this type of problem, or even a keyword that describes this kind of problem? Searching for "calculus of variations" did not get me far. I come from a life sciences background, so sadly my ability to read anything much more complicated than the textbook by Gelfand is limited, but any recommendation is appreciated. Thanks!

2. Feb 4, 2017

### Stephen Tashi

For that particular problem: "Orthogonal basis functions".

Assume $f(x) = \sum_{i=1}^n a_i p_i(x)$, where the $a_i$ are unknown constants and the $p_i(x)$ are a set of orthogonal basis functions. Assume $g(x) = \sum_{i=1}^n b_i p_i(x)$, where the $b_i$ are unknown constants.
Look at the restrictions that the given equation places on $a_i$ and $b_i$.
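Concretely, if the $p_i$ are orthonormal on $[0, t_e]$, all three integrals collapse to sums, and the objective from post #1 becomes a purely algebraic function of the coefficient vectors $a$ and $b$:

$\int_0^{t_e} f^2\,dt\int_0^{t_e} g^2\,dt - \left(\int_0^{t_e} fg\,dt\right)^2 = \Big(\sum_i a_i^2\Big)\Big(\sum_i b_i^2\Big) - \Big(\sum_i a_i b_i\Big)^2 = \|a\|^2\|b\|^2 - (a \cdot b)^2$

By the Cauchy-Schwarz inequality this is non-negative and, for fixed $\|a\|$ and $\|b\|$, largest exactly when $a \cdot b = 0$, i.e. when $f$ and $g$ are orthogonal.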

It isn't clear what you mean by "more complicated problems". The general description "problems of finding the extrema of functions whose arguments are unknown functions" seems too vague to pick out a specific branch of mathematics.

3. Feb 4, 2017

### dIndy

Thanks, this concept might be quite useful. I did some searching, and the solution I guessed in the opening post seems to be a Walsh function. The problem in my opening post is also equivalent to maximising the determinant:

$\begin{vmatrix}\int f(t)^2\,dt&\int f(t)g(t)\,dt\\ \int f(t)g(t)\,dt&\int g(t)^2\,dt\end{vmatrix}$

A more difficult problem could then be maximising:

$\begin{vmatrix}\int f(t)^2\,dt&\int f(t)g(t)\,dt&\int f(t)g(t)^2\,dt \\ \int f(t)g(t)\,dt&\int g(t)^2\,dt&\int g(t)^3\,dt\\ \int f(t)g(t)^2\,dt&\int g(t)^3\,dt&\int g(t)^4\,dt\end{vmatrix}$

It would then be ideal if $f$ were orthogonal not only to $g$ but also to $g^2$, and likewise $g$ to $g^2$?
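One way to probe such problems numerically is to treat each of these determinants as the Gram determinant of a list of functions. A minimal sketch (the trial functions are my own arbitrary choices on $[0, 1]$, not a worked solution):

```python
import numpy as np

def gram_det(funcs, t_e=1.0, n=100_000):
    """Determinant of the Gram matrix G_ij = integral of funcs[i]*funcs[j] dt
    over [0, t_e], approximated by a midpoint Riemann sum."""
    t = (np.arange(n) + 0.5) * (t_e / n)
    vals = np.array([fn(t) for fn in funcs])
    G = (vals @ vals.T) * (t_e / n)
    return np.linalg.det(G)

f = lambda t: np.where((t % 0.5) < 0.25, 1.0, -1.0)  # Walsh-like square wave (trial choice)
g = lambda t: np.where(t < 0.5, 1.0, -1.0)           # the step function from post #1

# Gram determinant of (f, g, g^2), i.e. the 3x3 objective above
print(gram_det([f, g, lambda t: g(t)**2]))           # ~ 1.0
```

Since a Gram matrix is positive semidefinite, Hadamard's inequality gives $\det G \leq \prod_i G_{ii} \leq t_e^3$ when every function is bounded by $1$, so the printed value of $\approx 1$ means these trial functions already attain the bound. (For a $\pm1$-valued $g$ one has $g^2 \equiv 1$, so "orthogonal to $g^2$" just means having zero mean.)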

Another problem could be one where the integrands explicitly depend on $t$:

$\begin{vmatrix}\int f(t)^2\,dt&\int f(t)^2 t\,dt\\ \int f(t)^2 t\,dt&\int f(t)^2 t^2\,dt \end{vmatrix}$
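(This is the Gram determinant of the pair $(f, tf)$, so the same `gram_det` sketch above would handle it with `funcs = [f, lambda t: t * f(t)]`.)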

But I do not want to turn this thread into a homework thread about solving these specific problems.

What I was originally looking for is a book that describes methods for turning problems like mine into differential equations (I do not know if this is even possible), similar to how calculus of variations books turn maximising $\int_a^b L(y,y',t)\,dt$ into the Euler-Lagrange equation, but for problems that cannot be neatly written as a Lagrangian under a single integral.

4. Feb 4, 2017

### Stephen Tashi

The idea isn't to make $f$, $g$, $g^2$ orthogonal, but rather to parameterize the problem by expressing the functions as sums of orthogonal functions. (Of course, it may turn out that answers to particular problems do indeed require that $f$ and $g$ be orthogonal to each other.)

For example, if $f(x) = a_1 p_1(x) + a_2 p_2(x)$ and $g(x) = b_1 p_1(x) + b_2 p_2(x)$, where the $p_i$ are orthonormal functions on the set (or interval) $S$, then:

$\int_S f(x)g(x)\,dx = a_1 b_1 \int_S p_1(x)p_1(x)\,dx + a_1 b_2 \int_S p_1(x)p_2(x)\,dx + a_2 b_1 \int_S p_2(x)p_1(x)\,dx + a_2 b_2 \int_S p_2(x)p_2(x)\,dx$
$= a_1 b_1 + 0 + 0 + a_2 b_2$

and

$\int_S g^2(x)\,dx = b_1 b_1 \int_S p_1(x)p_1(x)\,dx + 2 b_1 b_2 \int_S p_1(x)p_2(x)\,dx + b_2 b_2 \int_S p_2(x)p_2(x)\,dx$
$= b_1^2 + 0 + b_2^2$

So all the calculus is gone, and what is left is an algebraic expression in the unknowns $a_i$, $b_i$.
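If you want to watch that reduction happen mechanically, here is a small sympy sketch. It uses the first two normalised Legendre polynomials as the orthonormal $p_i$ on $S = [-1, 1]$; that basis choice is just for illustration.

```python
import sympy as sp

x, a1, a2, b1, b2 = sp.symbols('x a1 a2 b1 b2')

# Orthonormal basis on S = [-1, 1]: normalised Legendre polynomials.
p1 = sp.sqrt(sp.Rational(1, 2)) * sp.legendre(0, x)   # 1/sqrt(2)
p2 = sp.sqrt(sp.Rational(3, 2)) * sp.legendre(1, x)   # sqrt(3/2) * x

f = a1 * p1 + a2 * p2
g = b1 * p1 + b2 * p2

print(sp.expand(sp.integrate(f * g, (x, -1, 1))))   # a1*b1 + a2*b2
print(sp.expand(sp.integrate(g**2, (x, -1, 1))))    # b1**2 + b2**2
```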

5. Feb 4, 2017

### Stephen Tashi

My recollection of the calculus of variations is that one defines a "variation" of the unknown solution $f(x)$ to be the function $\delta f = f(x) + \alpha v(x)$ where $\alpha$ is a "small" constant and $v(x)$ is an arbitrary differentiable function that is 0 at x = a and x = b. The manipulations used in the calculus of variations arrive at a differential equation for $f(x)$ that does not depend on the specific choice of $v(x)$. So we'd have to understand what conditions in a more general problem would lead to equations independent of the choice of $v(x)$.