# Variational expression: a demonstration

1. May 16, 2012

### EmilyRuck

1. The problem statement, all variables and given/known data

Hello!
My problem is with a variational expression. I have a quantity, say $K$, which can be determined by the ratio:

$K = \displaystyle \frac{\left[\int_a^b E(x)e_n(x)dx\right]^2}{\left[\int_a^b E(x)e_1(x)dx\right]^2}$

Where $e_1(x), e_n(x)$ are known functions and $n$ is an integer, with $n > 1$.
If I don't know $E(x)$ exactly and substitute it with $E_0(x) + \delta E(x)$, I have to demonstrate that the corresponding $\delta K$ is proportional to $[\delta E(x)]^2$.
So an error of 10 % made in estimating $E(x)$ becomes an error of only 1 % in the corresponding estimate of $K$.

2. Relevant equations

3. The attempt at a solution

I tried to expand the squares in the numerator and the denominator:

$K + \delta K = \displaystyle \frac{\left[\int_a^b [E(x) + \delta E(x)] e_n(x)dx\right]^2}{\left[\int_a^b [E(x) + \delta E(x)] e_1(x)dx\right]^2} =$
$\displaystyle = \frac{\left[\int_a^b E(x)e_n(x)dx\right]^2 + 2\int_a^b E(x)e_n(x)dx \int_a^b \delta E(x)e_n(x)dx + \left[\int_a^b \delta E(x)e_n(x)dx\right]^2}{\left[\int_a^b E(x)e_1(x)dx\right]^2 + 2\int_a^b E(x)e_1(x)dx \int_a^b \delta E(x)e_1(x)dx + \left[\int_a^b \delta E(x)e_1(x)dx\right]^2}$

We can substitute the integral expressions with letters and write:

$K + \delta K = \displaystyle \frac{a^2 + 2ab + b^2}{c^2 + 2cd + d^2}$

What can I do now?
Could I neglect the $b^2$ and $d^2$ terms because they are small? If so, I could divide both the numerator and the denominator by $c^2$ and take the Taylor series expansion of the denominator, truncated to first order, so

$K + \delta K \simeq \frac{\displaystyle \frac{a^2}{c^2} + \displaystyle \frac{2ab}{c^2}}{1 + \displaystyle \frac{2d}{c}} \simeq \left(\displaystyle \frac{a^2}{c^2} + \frac{2ab}{c^2}\right)\left(1 - \displaystyle \frac{2d}{c}\right)$

But this way I still can't separate $a$ and $c$ from $b$ and $d$, and I can't obtain a term with only $c^2$ or $d^2$. How can I proceed?
Thank you for reading this post,

Emily
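The first-order bookkeeping above can be sanity-checked numerically (an editorial sketch, not part of the original post; the values of $a, b, c, d$ are arbitrary placeholders for the four integrals). To first order, $(a+b)^2/(c+d)^2 \simeq (a/c)^2(1 + 2b/a - 2d/c)$, so the change in $K$ is generally *first* order in the perturbation:

```python
# Editorial sketch: first-order expansion of K + dK = (a + b)^2 / (c + d)^2,
# where a, c stand for the unperturbed integrals and b, d for the (small)
# perturbation integrals.  All four values are arbitrary placeholders.
a, c = 1.7, 0.9
eps = 1e-4
b, d = 0.3 * eps, 0.8 * eps

K0 = (a / c) ** 2                          # unperturbed ratio
K_exact = ((a + b) / (c + d)) ** 2         # perturbed ratio
K_lin = K0 * (1 + 2 * b / a - 2 * d / c)   # first-order expansion

# The first-order change 2*K0*(b/a - d/c) dominates unless b/a == d/c.
print(K_exact - K0, K_lin - K0)
```

The two printed changes agree to second order in the perturbation, and neither is second-order small.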

2. May 16, 2012

### Ray Vickson

The result you want is false, in general. Let us write $\delta E(x)$ as $r h(x),$ where $h(x)$ is a function (not necessarily small) and $r$ is the perturbation parameter (which we do think of as small). We have
$$K = K(r) = \left(\frac{a + br}{c + dr}\right)^2,$$ where
$$\begin{array}{l}a = \int_a^b e_0(x) E_0(x) \, dx \\ b = \int_a^b e_0(x) h(x) \, dx \\ c = \int_a^b e_n(x) E_0(x) \, dx \\ d = \int_a^b e_n(x) h(x) \, dx \end{array}$$
Your result requires that the Taylor expansion of K(r) have the form $K(0) + K_2 r^2,$ with no $r^1$ term. This is false: we have
$$K'(0) = dK(r)/dr|_{r=0} = \frac{2a}{c^3}(bc - ad),$$
which is nonzero in general.

RGV
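The derivative formula above is easy to verify with a central finite difference (an editorial sketch, not part of the original post; the constants $a, b, c, d$ are arbitrary placeholders):

```python
# Editorial sketch: verify K'(0) = (2a/c^3)(bc - ad) for
# K(r) = ((a + b r)/(c + d r))^2, with arbitrary placeholder constants.
a, b, c, d = 1.3, 0.7, 2.1, -0.4

def K(r):
    return ((a + b * r) / (c + d * r)) ** 2

eps = 1e-6
K_prime_fd = (K(eps) - K(-eps)) / (2 * eps)    # central finite difference
K_prime = (2 * a / c ** 3) * (b * c - a * d)   # closed-form derivative

print(K_prime_fd, K_prime)  # the two values agree
```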

3. May 17, 2012

### EmilyRuck

Maybe you meant to write:

$$\begin{array}{l}a = \int_a^b E_0(x) e_n(x) \, dx \\ b = \int_a^b h(x) e_n(x) \, dx \\ c = \int_a^b E_0(x) e_1(x) \, dx \\ d = \int_a^b h(x) e_1(x) \, dx \end{array}$$

Substituting:

$$bc - ad = 0 \Rightarrow \int_a^b h(x) e_n(x) \, dx \int_a^b E_0(x) e_1(x) \, dx - \int_a^b E_0(x) e_n(x) \, dx \int_a^b h(x) e_1(x) \, dx = 0$$

It seems to be true only if the product of the integrals equals the integral of the products, doesn't it?
According to my notes, the result about $[\delta E(x)]^2$ should be true in general: my professor and another one stated it in their lessons, without explaining why.

Emily

4. May 17, 2012

### Ray Vickson

Sorry, no. The product b*c involves the integral of e_n * h, while the product a*d involves the integral of e_1 * h. There is no reason why they should cancel. Look at it this way:
$$bc-ad = \int_a^b h(x) e_n(x) \, dx \int_a^b E_0(w) e_1(w) \, dw - \int_a^b E_0(w) e_n(w) \, dw \int_a^b h(x) e_1(x) \, dx .$$
Does that look like 0 to you?

RGV

Last edited: May 17, 2012
5. May 17, 2012

### Ray Vickson

To convince you the result is false, let's look at a numerical example. (Of course, the functions you are using in your study may not resemble the ones I use below; it may be that for the special functions you are using, the result could conceivably hold, just not for the reasons you claim.) Let's take $E_0(x) = x,\: h(x) = x^2,\: e_n(x) = \sin(\pi x/2), \: e_1(x) = \cos(\pi x/2), \: a = 0, \:b = 1.$ Then
$$\begin{array}{rcl}K &=& \frac{\left( \int_0^1 (x + r x^2) \sin(\pi x/2) \, dx \right)^2}{\left( \int_0^1 (x + r x^2) \cos(\pi x/2) \, dx \right)^2}\\ &\doteq& \frac{4(2.283185308\: r + 3.141592654)^2}{(3.586419096+1.869604404\: r)^2} \\ &=&3.069288133 + 1.261227604\: r - 0.527913928\: r^2 + O(r^3). \end{array}$$
The term linear in r is not zero.

RGV
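The decimal coefficients above can be reproduced with elementary quadrature (an editorial sketch, not part of the original post; composite Simpson's rule stands in for whatever CAS produced the numbers):

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# Ray's example: E0(x) = x, h(x) = x^2, e_n = sin(pi x/2), e_1 = cos(pi x/2)
A = simpson(lambda x: x * math.sin(math.pi * x / 2), 0.0, 1.0)      # int E0 e_n
B = simpson(lambda x: x * x * math.sin(math.pi * x / 2), 0.0, 1.0)  # int h  e_n
C = simpson(lambda x: x * math.cos(math.pi * x / 2), 0.0, 1.0)      # int E0 e_1
D = simpson(lambda x: x * x * math.cos(math.pi * x / 2), 0.0, 1.0)  # int h  e_1

K0 = (A / C) ** 2                          # constant term of K(r)
K1 = (2 * A / C ** 3) * (B * C - A * D)    # linear coefficient of K(r)
print(K0, K1)  # ~3.069288 and ~1.261228: the linear term is not zero
```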

6. May 21, 2012

### EmilyRuck

I tried some numerical examples too, with functions very similar to yours. In this particular problem, it should be

$e_n(x) = \sin(n \pi x/w)\\ e_1(x) = \sin(\pi x/w)\\ E(x) = \displaystyle \sqrt{(x - a)(b - x)}$

with $h(x)$ an arbitrary error function and $w$ a positive numerical constant; in my example I chose a sort of Gaussian, $h(x) = e^{-x^2}$. $E(x)$ is the estimated function, so $E(x) = E_0(x) + rh(x)$, and this is all we know about it. Don't worry if $E(x)$ becomes imaginary somewhere; that can happen.
But now I wonder why the professor said that. I agree with you: the first-order error is nonzero!
Anyway, thank you so much for your calculations and observations, which are all correct and useful!

Emily

7. May 21, 2012

### EmilyRuck

In a more accurate example, I tried these realistic values for the parameters:

$a = 1.9;\\ b=2.1;\\ w = 4$

and I had to use $E_0(x) = \sqrt{(x - a)(b - x)}$ in place of $E(x)$ to build this example (I need an explicit expression for $E_0(x)$!).

$n = 3, 5$ gave a difference $bc - ad$ that is nonzero only at a precision of $10^{-7}$ (here $a$ and $b$ are the integrals, not the integration limits!).
$n = 9$ gave a difference $bc - ad$ that is nonzero only at a precision of $10^{-6}$, and so on for increasing values of $n$.
But the contribution of the $n$th term decreases for increasing $n$, so maybe in this case we can consider $bc - ad \simeq 0$.
I can't say what the mathematical reasons for this result are, but, strictly numerically, it seems to work.

Emily
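This experiment can be reproduced with the same kind of quadrature (an editorial sketch, not part of the original posts; the exact magnitudes depend on the quadrature, but the pattern of a tiny yet nonzero difference is robust):

```python
import math

def simpson(f, lo, hi, n=20000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

lo, hi, w = 1.9, 2.1, 4.0       # integration limits and the constant w

def E0(x):                       # sqrt((x - a)(b - x)), real on [1.9, 2.1]
    return math.sqrt((x - lo) * (hi - x))

def h_err(x):                    # the Gaussian-like error function
    return math.exp(-x * x)

def e(n, x):                     # e_n(x) = sin(n pi x / w)
    return math.sin(n * math.pi * x / w)

def bc_minus_ad(n):
    a = simpson(lambda x: E0(x) * e(n, x), lo, hi)     # a = int E0 e_n
    b = simpson(lambda x: h_err(x) * e(n, x), lo, hi)  # b = int h  e_n
    c = simpson(lambda x: E0(x) * e(1, x), lo, hi)     # c = int E0 e_1
    d = simpson(lambda x: h_err(x) * e(1, x), lo, hi)  # d = int h  e_1
    return b * c - a * d

for n in (3, 5, 9):
    print(n, bc_minus_ad(n))     # tiny but nonzero, growing with n
```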

8. May 22, 2012

### EmilyRuck

Even though the calculations in this textbook are quite different, the result I should demonstrate is the same. The statement is:
K «is stationary for small arbitrary variations in the electric field distribution about its correct value. It, therefore, follows that a first-order approximation to the electric field distribution will yield a second-order approximation to» the value of K.
(Robert E. Collin, Field Theory of Guided Waves, Ch. 8)

9. May 22, 2012

### Ray Vickson

I don't know why you are arguing; we are talking about two different things. The expression you wrote down in your original post is not, in general, stationary, and that is why the first-order differential was nonzero. If you did have a stationary expression then, of course, the first-order differential would be zero BY DEFINITION, and that would mean your deviations would be of second or higher order, so no argument there.

RGV

10. May 22, 2012

### EmilyRuck

Oh, so it was a misunderstanding. My professor presented the first expression,

$$K = \displaystyle \frac{\left[\int_a^b E(x)e_n(x)dx\right]^2}{\left[\int_a^b E(x)e_1(x)dx\right]^2}$$

as a variational, stationary expression. This is why I wrote the post.

So, the $K$ expression is not stationary?
How can I recognize a stationary expression, and where can I see that it has a zero first-order differential?

Emily

11. May 22, 2012

### Ray Vickson

I don't know the background or the context, so I don't know what K is supposed to be, or where it came from. I just took your word for it that K was supposed to be stationary at E = E_0, and I disputed that. However, maybe what you wrote is not what was meant, etc. At this point I give up.

RGV