Calculus of variations textbook: 'not under a single integral'

Discussion Overview

The discussion revolves around finding functions that maximize certain criteria without being expressible under a single integral. Participants explore the calculus of variations, particularly in contexts where traditional methods, such as the Euler-Lagrange equation, may not apply. The conversation includes the search for relevant textbooks and keywords, as well as the exploration of orthogonal basis functions and determinant maximization.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant seeks to maximize a specific expression involving integrals of functions f(t) and g(t) under certain constraints, noting the challenge of not being able to apply the Euler-Lagrange equation.
  • Another participant suggests the concept of "Orthogonal basis functions" as a potential approach, proposing that f(x) and g(x) could be expressed as sums of orthogonal functions.
  • A participant mentions that the original problem is equivalent to maximizing a determinant involving integrals of f(t) and g(t), and proposes more complex problems that could arise from this framework.
  • There is a discussion about the conditions under which functions f and g might need to be orthogonal, with some arguing that parameterization through orthogonal functions could simplify the problem.
  • One participant expresses a desire for resources that address transforming such problems into differential equations, similar to traditional calculus of variations approaches.

Areas of Agreement / Disagreement

Participants express differing views on the applicability of traditional calculus of variations methods to the problems at hand. There is no consensus on a specific approach or solution, and multiple competing ideas are presented regarding the use of orthogonal functions and the nature of the problems being discussed.

Contextual Notes

Participants note the limitations of existing calculus of variations textbooks in addressing problems that cannot be neatly expressed under a single integral. The discussion reflects a need for further exploration of methods that could apply to more complex scenarios.

dIndy
I have to find functions that maximise certain criteria. The problem, however, cannot be put "under a single integral". For example, I have to find ##f(t)##, ##g(t)## that maximise:

##
\int_0^{t_e}f(t)^2\,dt\int_0^{t_e}g(t)^2\,dt - \left(\int_0^{t_e}f(t)g(t)\,dt\right)^2
##

With ##-1 \leq f(t) \leq 1## and ##-1 \leq g(t) \leq 1## for all ##t##.

For this problem I can still intuitively guess a solution, such as ##f(t)=1## for all ##t##, and ##g(t)=1## up to ##0.5\,t_e## and ##g(t)=-1## afterwards.
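
For what it's worth, a quick check of that guess (my own back-of-the-envelope calculation): ##\int_0^{t_e}f(t)^2\,dt = t_e##, ##\int_0^{t_e}g(t)^2\,dt = t_e## and ##\int_0^{t_e}f(t)g(t)\,dt = \tfrac{t_e}{2} - \tfrac{t_e}{2} = 0##, so the criterion comes out to ##t_e^2##. Since the bounds force ##\int f^2\,dt \leq t_e## and ##\int g^2\,dt \leq t_e##, and the subtracted square is never negative, ##t_e^2## is also an upper bound, so this guess does attain the maximum.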

But for more complicated problems I'll no longer be able to guess the solution and will need a proper way to find one. Most calculus of variations textbooks I've consulted (such as Gelfand's) focus on problems that can be solved with the Euler-Lagrange equation, which I do not think can be applied here?

Does anybody know a textbook that covers these types of problems, or even a keyword that describes these kinds of problems? Searching for calculus of variations did not get me far. I come from a life-sciences background, so sadly my ability to read something much more complicated than the textbook by Gelfand is limited, but any recommendation is appreciated, thanks!
 
dIndy said:
or even a keyword that describes these kinds of problems?

For that particular problem: "Orthogonal basis functions".

Assume ##f(x) = \sum_{i=1}^n a_i p_i(x)## where the ##a_i## are unknown constants and the ##p_i(x)## are a set of orthogonal basis functions. Assume ##g(x) = \sum_{i=1}^n b_i p_i(x)## where the ##b_i## are unknown constants.
Look at the restrictions that the given equation places on ##a_i## and ##b_i##.
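
To make that concrete, here is a rough numerical sketch (my own toy Python, nothing authoritative): it uses the indicator functions of ##N## equal subintervals of ##[0, t_e]## as the orthogonal basis, so the coefficients are simply the constant values of ##f## and ##g## on each piece, the pointwise bounds become box constraints, and the expression from the opening post is maximised over those coefficients.

# Toy sketch: parameterize f and g by their values on N equal subintervals
# (an orthogonal "indicator function" basis), then maximise the criterion
# over those coefficients with box constraints -1 <= f, g <= 1.
import numpy as np
from scipy.optimize import minimize

t_e = 1.0
N = 16                      # number of subintervals (basis functions)
dt = t_e / N

def criterion(x):
    """int f^2 dt * int g^2 dt - (int f g dt)^2 for piecewise-constant f, g."""
    f, g = x[:N], x[N:]
    A = np.sum(f * f) * dt
    C = np.sum(g * g) * dt
    B = np.sum(f * g) * dt
    return A * C - B * B

x0 = np.random.uniform(-0.5, 0.5, 2 * N)            # random starting coefficients
res = minimize(lambda x: -criterion(x), x0,         # maximise by minimising the negative
               bounds=[(-1.0, 1.0)] * (2 * N))

print("criterion at the point found:", criterion(res.x))   # typically close to t_e**2

For the first problem this typically lands near the known value ##t_e^2##; the point is only that, once the functions are parameterized, the problem becomes ordinary finite-dimensional optimization.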

It isn't clear what you mean by "more complicated problems". The general description "problems of finding the extrema of functions whose arguments are unknown functions" seems too vague to pick out a specific branch of mathematics.
 
Stephen Tashi said:
For that particular problem: "Orthogonal basis functions".
Thanks, this concept might be quite useful. I did some searching, and the solution I guessed in the opening post seems to be a Walsh function. The problem in my opening post is also equivalent to maximising the determinant:

##\begin{vmatrix}\int f(t)^2\,dt&\int f(t)g(t)\,dt\\
\int f(t)g(t)\,dt&\int g(t)^2\,dt\end{vmatrix}##
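
(Expanding this determinant gives back exactly the expression from my opening post, ##\int f^2\,dt\int g^2\,dt - \left(\int fg\,dt\right)^2##; if I understand the terminology correctly, this is the Gram determinant of ##f## and ##g##.)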

A more difficult problem could then be maximising:

##\begin{vmatrix}\int f(t)^2\,dt&\int f(t)g(t)\,dt&\int f(t)g(t)^2\,dt \\
\int f(t)g(t)\,dt&\int g(t)^2\,dt&\int g(t)^3\,dt\\
\int f(t)g(t)^2\,dt&\int g(t)^3\,dt&\int g(t)^4\,dt\end{vmatrix}##

Where it would then be ideal if ##f## was not only orthogonal to ##g## but also to ##g^2##, and ##g## to ##g^2##?

Another problem could be when the determinant explicitly depends on ##t##:

##\begin{vmatrix}\int f(t)^2\,dt&\int f(t)^2 t\,dt\\
\int f(t)^2 t\,dt&\int f(t)^2 t^2\,dt \end{vmatrix}##
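
(If I am reading this right, it is again a Gram-type determinant, now built from ##f(t)## and ##t\,f(t)##.)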

But I do not want to turn this thread into a homework one for solving these specific problems.

What I was originally looking for was a book that describes methods for turning problems like mine into differential equations (I do not know if this is even possible), similar to how in calculus of variations books maximising ##\int_a^b L(y,y',t)\,dt## turns into the Euler-Lagrange equation. But then a text for problems that cannot be neatly written as a Lagrangian under a single integral.
 
dIndy said:
Where it would then be ideal if ##f## was not only orthogonal to ##g## but also to ##g^2##, and ##g## to ##g^2##?

The idea isn't to make ##f##, ##g##, ##g^2## orthogonal, but rather to parameterize the problem by expressing the functions as sums of orthogonal functions. (Of course, it may turn out that answers to particular problems do indeed require that ##f## and ##g## be orthogonal to each other.)

For example, if ##f(x) = a_1 p_1(x) + a_2 p_2(x)## and ##g(x) = b_1 p_1(x) + b_2 p_2(x)##, where the ##p_i## are orthonormal functions on the set (or interval) ##S##, then:

##\int_S f(x)g(x)\,dx = ##
##a_1 b_1 \int_S p_1(x)p_1(x)\,dx + a_1 b_2 \int_S p_1(x)p_2(x)\,dx + a_2 b_1 \int_S p_2(x) p_1(x)\,dx + a_2 b_2 \int_S p_2(x) p_2(x)\,dx##
## = a_1 b_1 + 0 + 0 + a_2 b_2 ##

and

##\int_S g(x)^2\,dx = ##
##b_1^2 \int_S p_1(x)^2\,dx + 2 b_1 b_2 \int_S p_1(x) p_2(x)\,dx + b_2^2 \int_S p_2(x)^2\,dx##
## = b_1^2 + 0 + b_2^2 ##

So all the calculus is gone, and what remains are algebraic expressions in the unknowns ##a_i, b_i##.
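
Putting those together (the ##\int_S f(x)^2\,dx## term works out the same way), the original objective becomes, in terms of the coefficient vectors,

##(a_1^2 + a_2^2)(b_1^2 + b_2^2) - (a_1 b_1 + a_2 b_2)^2##

i.e. ##\|a\|^2\|b\|^2 - (a\cdot b)^2##, an ordinary finite-dimensional maximization. The remaining work is to express the pointwise bounds on ##f## and ##g## as constraints on the ##a_i, b_i##.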
 
dIndy said:
What I was originally looking for was a book that describes methods for turning problems like mine into differential equations (I do not know if this is even possible), similar to how in calculus of variations books maximising ##\int_a^b L(y,y',t)\,dt## turns into the Euler-Lagrange equation

We could think about that.

My recollection of the calculus of variations is that one considers a "variation" of the unknown solution ##f(x)##, i.e. the perturbed function ##f(x) + \alpha v(x)##, where ##\alpha## is a "small" constant and ##v(x)## is an arbitrary differentiable function that is ##0## at ##x = a## and ##x = b##. The manipulations used in the calculus of variations arrive at a differential equation for ##f(x)## that does not depend on the specific choice of ##v(x)##. So we'd have to understand what conditions in a more general problem would lead to equations independent of the choice of ##v(x)##.
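
As a rough sketch of how that could look for the functional in the opening post (my own calculation, so check it): hold ##g## fixed, write ##B = \int_0^{t_e} fg\,dt## and ##C = \int_0^{t_e} g^2\,dt##, and replace ##f## by ##f + \alpha v##. Then

##\left.\frac{d}{d\alpha}\left[\int_0^{t_e}(f+\alpha v)^2\,dt \int_0^{t_e}g^2\,dt - \left(\int_0^{t_e}(f+\alpha v)g\,dt\right)^2\right]\right|_{\alpha=0} = 2\int_0^{t_e}\big(C\,f(t) - B\,g(t)\big)\,v(t)\,dt##

For this to vanish for every admissible ##v##, one would need ##C f(t) = B g(t)## for all ##t##, i.e. ##f## proportional to ##g##, which makes the objective zero. So a maximizer cannot be an unconstrained stationary point; the bounds ##|f| \leq 1## must be active, which is consistent with the bang-bang guess in the opening post.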
 
