# Derivative of integral

1. Jan 20, 2016

### EngWiPy

Hello,

I have this problem

$$\frac{\partial}{\partial\,x}\int_0^{\infty}\log(1+x)\,f_X(x)\,dx$$,

where $X$ is a random variable and $f_X(x)$ is its probability density function.

It's been a long time since I encountered a similar problem, and I forgot how to do this. Do we use Leibniz integral rule here?

Thanks

2. Jan 20, 2016

### Krylov

Strictly speaking, this derivative is zero. You need to use different symbols for the integration variable and the differentiation variable. The way it is now makes it hard to answer your question.

3. Jan 20, 2016

### EngWiPy

What do you mean?

4. Jan 20, 2016

### Krylov

You integrate with respect to $x$. This yields a number, a constant. The derivative of a constant is zero.

5. Jan 20, 2016

### EngWiPy

OK, right. It should be like this:

$$\frac{\partial}{\partial\,s}\int_0^{\infty}\log(1+x\,s)\,f_X(x)\,dx$$

6. Jan 20, 2016

### Krylov

OK, yes, then you can use the Leibniz rule, but you should probably use a version suited to unbounded domains (differentiation under the integral sign justified by dominated convergence), because your integral is over an unbounded domain. Basically, you can interchange integration and differentiation pending some technical conditions, the most important of which is that you can bound the partial derivative of the integrand with respect to $s$ by an integrable function, uniformly in $s$.

7. Jan 20, 2016

### EngWiPy

So basically it becomes

$$\int_0^{\infty}\frac{x\,f_X(x)}{1+x\,s}\,dx$$

right?

8. Jan 20, 2016

### Krylov

Yes, but for this to be rigorous you have to prove that there exists a function $g : [0,\infty) \to \mathbb{R}$ such that $g$ is integrable, i.e.
$$\int_0^{\infty}{|g(x)|\,dx} < \infty$$
and furthermore it holds that
$$\left|\frac{x\,f_X(x)}{1+x\,s}\right| \le |g(x)|$$
for all $x \ge 0$ and for all $s$ that you are considering. If you are a physicist you probably don't care too much, but that will make me cry a little.
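The interchange in posts #6-#8 can be sanity-checked numerically. The sketch below assumes, purely for illustration, that $X$ is exponential with unit mean ($f_X(x)=e^{-x}$, a density not given in the thread); then $\mathbb{E}\{X\}=1<\infty$ and $g(x)=x\,e^{-x}$ is a valid dominating function. It compares a finite-difference derivative of the integral against the interchanged integral:

```python
import numpy as np

# Hypothetical example (assumption, not from the thread): X ~ Exp(1),
# so f_X(x) = e^{-x} on [0, inf). Since E[X] = 1 < inf, g(x) = x e^{-x}
# dominates x f_X(x)/(1+xs) for all s >= 0, so differentiation under the
# integral sign is justified. We check this numerically.

x = np.linspace(0.0, 50.0, 200_001)   # [0, 50] carries essentially all the mass of e^{-x}
f = np.exp(-x)                        # density f_X(x) = e^{-x}

def trapezoid(y):
    """Plain trapezoid rule on the fixed grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def capacity(s):
    """C(s) = integral of log(1 + x s) f_X(x) dx over [0, inf)."""
    return trapezoid(np.log1p(x * s) * f)

def capacity_derivative(s):
    """Integral of x f_X(x) / (1 + x s) dx, i.e. the derivative moved inside."""
    return trapezoid(x * f / (1.0 + x * s))

s0, h = 2.0, 1e-5
finite_diff = (capacity(s0 + h) - capacity(s0 - h)) / (2.0 * h)
print(finite_diff, capacity_derivative(s0))   # the two values should agree closely
```

The two printed numbers agree to several decimal places, as the dominated-convergence argument predicts.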

9. Jan 20, 2016

### Krylov

It may not be that hard, though. For example, if $s$ is positive and bounded away from zero, then $x \mapsto \frac{x}{1 + xs}$ is bounded on $[0,\infty)$, uniformly in $s$. Let $C > 0$ be such a bound. Since $f_X$ is a probability density, you can choose $g = C f_X$.

10. Jan 20, 2016

### EngWiPy

If we have $\mathbb{E}\{X\}=\bar{x}$, then we can take $g(x)=x\,f_X(x)\ge\frac{x\,f_X(x)}{1+x\,s}$ since $x,\,s\ge 0$, right?

11. Jan 20, 2016

### Krylov

Yes, if $X$ has finite expectation, that works fine. Very nice!

12. Jan 20, 2016

### EngWiPy

OK, perfect. Now comes the second question:

This problem comes from a larger problem, which is an optimization problem. The optimization problem states that

$$\underset{s\ge 0}{\max}\int_0^{\infty}\log(1+x\,s)\,f_X(x)\,dx\\\text{s.t.}\,\,\, s\le \bar{s}$$

and somehow the solution ends up like this:

$$s=\left(\frac{1}{x_0}-\frac{1}{x}\right)^+$$

where $x_0$ is a constant related to the Lagrange multiplier, and $(a)^+=\max(0,a)$. But how?

13. Jan 20, 2016

### Krylov

I don't think I understand it entirely. What is $\overline{s}$? Is it also an expectation? Also, why is the solution a function of $x$? So far, I was under the impression that $s$ is a numerical parameter.

14. Jan 20, 2016

### EngWiPy

$\bar{s}$ is a maximum value; I should have written $s_{\text{max}}$. Basically, $s$ in my problem is the power allocated to a communication system, where $\log(1+x\,s)$ is the instantaneous capacity of the channel given state $x$. The integral, of course, is the average capacity. So, I need to optimize $s$ such that the average capacity of the channel is maximized, given that there is a maximum power budget. Does that make sense now?

15. Jan 20, 2016

### Krylov

Partially. I still don't understand why the solution you presented in post #12 is a function of $x$. Also, I don't understand why you would use Lagrange multipliers when you have a simple inequality constraint $0 \le s \le s_{\text{max}}$. The way I read it now, you need to find $s_0$ such that the function
$$[0,s_{\text{max}}] \ni s \mapsto \int_0^{\infty}{\log(1 + sx)f_X(x)\,dx}$$
attains a local or global maximum at $s_0$. But since $f_X$ is non-negative, doesn't this just mean that $s_0 = s_{\text{max}}$? I suppose not; probably I'm misunderstanding something.

16. Jan 20, 2016

### EngWiPy

That is why I asked the question: I thought I had done the partial derivative wrong. As for the constraint, $s$ actually depends on $x$. We need the average power to stay below a certain maximum budget, that is,

$$\int_0^{\infty} s(x)\,f_X(x)\,dx\leq s_{\text{max}}$$

The transmit power also depends on $x$, i.e., it is $s(x)$. So, the optimization problem becomes:

$$\underset{s(x)\ge 0}{\max}\int_0^{\infty}\log(1+x\,s(x))\,f_X(x)\,dx\\\text{s.t.}\,\,\, \int_0^{\infty} s(x)\,f_X(x)\,dx\leq s_{\text{max}}$$

See the attached file, eqs. (4) and (5).

#### Attached Files:

• ###### 00634685.pdf

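For what it's worth, the threshold $x_0$ in the claimed water-filling solution can be computed numerically once a density is fixed. The sketch below is an assumption-laden illustration: it takes $f_X(x)=e^{-x}$ (the thread does not specify a density), imposes the power budget with equality, and finds $x_0$ by bisection, using that the average power $\int_0^{\infty}(1/x_0-1/x)^+ f_X(x)\,dx$ is decreasing in $x_0$:

```python
import numpy as np

# Sketch of solving the power-allocation problem numerically. The fading
# density is a hypothetical choice for illustration: X ~ Exp(1), f_X(x) = e^{-x}.
# The claimed optimizer is the water-filling law s(x) = (1/x0 - 1/x)^+,
# with threshold x0 fixed by the budget: integral of s(x) f_X(x) dx = s_max.

x = np.linspace(1e-6, 50.0, 200_001)   # start slightly above 0 to avoid 1/0
f = np.exp(-x)                         # hypothetical density f_X(x) = e^{-x}

def trapezoid(y):
    """Plain trapezoid rule on the fixed grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def alloc(x0):
    """Water-filling allocation s(x) = (1/x0 - 1/x)^+ on the grid."""
    return np.maximum(1.0 / x0 - 1.0 / x, 0.0)

def avg_power(x0):
    """Average transmit power for threshold x0."""
    return trapezoid(alloc(x0) * f)

def solve_threshold(s_max, lo=1e-3, hi=40.0, iters=80):
    """avg_power is strictly decreasing in x0, so bisect for avg_power(x0) = s_max."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if avg_power(mid) > s_max:
            lo = mid          # spending too much power: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_max = 1.0
x0 = solve_threshold(s_max)
avg_capacity = trapezoid(np.log1p(x * alloc(x0)) * f)
print(x0, avg_power(x0), avg_capacity)
```

The printed average power matches the budget $s_{\text{max}}$, confirming that the threshold makes the constraint hold with equality.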
17. Jan 20, 2016

### Krylov

Aha, so you are maximizing over a set of functions, rather than over a numerical parameter. This makes your problem more interesting, but also more difficult. It requires some knowledge of variational methods (i.e. infinite dimensional optimization). A few general remarks on strategy:
• You first need to carefully describe which functions are admissible by specifying the domain of your capacity functional as a subset of an appropriate function space, taking into account the budget constraint. In infinite dimensions it is typically not trivial to actually prove that this set contains a maximizer.

• The ordinary (numerical) derivative changes into a so-called Gâteaux derivative, which is a derivative of a functional (or, more generally, a nonlinear operator) with respect to a function. (Depending on context, sometimes you need the stronger notion of Fréchet derivative.) In particular, just pretending that $s$ is a numerical parameter and applying Leibniz' rule does not work.

• In order to deal with the inequality constraints some infinite dimensional form of the Kuhn-Tucker conditions may be required.
You might want to have a look at Zeidler's book on applied nonlinear functional analysis: https://www.amazon.com/Nonlinear-Functional-Analysis-its-Applications/dp/038790915X. I didn't study your PDF, but you may also find some pointers to relevant literature there.

18. Jan 21, 2016

### Krylov

Did you notice in your PDF that in (3) there is an inequality, while in (4) the constraint is an equality?

It seems to me that one argues a priori that any maximizer of the problem with the inequality constraint must actually satisfy this constraint with equality, thereby eliminating the need for Kuhn-Tucker type conditions and making the problem amenable to Lagrange multipliers.

From the application's point of view, is it natural to assume from the beginning that any $S$ is continuous? (The maximizer in (5) is continuous.) Or would it be more natural to start in some space containing discontinuous functions as well?

19. Jan 21, 2016

### EngWiPy

I think $s$ is continuous, which means it takes any value between $0$ and $s_{\text{max}}$. OK, now with equality in the constraint, how do we get the solution? Any hint?

20. Jan 21, 2016

### Krylov

Yes, see post #17. You may also find sections 3.1 and 3.5 of Cheney's Analysis for Applied Mathematics helpful and a bit more accessible. For equality constraints the theory presented there should be enough.

Start by choosing a suitable function space, so you can define your objective and constraint functionals on the same open subset of that space and apply Lagrange multipliers. A Banach space of integrable functions seems most natural, but its positive cone has empty interior, which may complicate matters somewhat. Since you already know that your maximizer is continuous, you could cheat a little and try to work directly on a space of continuous functions instead.
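For the record, here is a formal (non-rigorous) sketch of how the water-filling form in post #12 drops out once the constraint holds with equality; it pretends that pointwise optimization of the Lagrangian is justified, which is exactly what the variational machinery from posts #17 and #20 is needed to make rigorous. Introduce a multiplier $\lambda > 0$ and consider

$$L[s]=\int_0^{\infty}\big[\log(1+x\,s(x))-\lambda\,s(x)\big]\,f_X(x)\,dx.$$

The integrand depends on $s$ only through its value at each $x$, so $L$ is maximized by maximizing $\log(1+x\,s)-\lambda\,s$ over $s\ge 0$ separately for each fixed $x$. Stationarity gives

$$\frac{x}{1+x\,s}=\lambda\quad\Longrightarrow\quad s=\frac{1}{\lambda}-\frac{1}{x},$$

which is admissible only when non-negative, hence

$$s(x)=\left(\frac{1}{\lambda}-\frac{1}{x}\right)^{+}.$$

Writing $x_0=\lambda$ recovers the solution in post #12, with $x_0$ pinned down by the budget $\int_0^{\infty}s(x)\,f_X(x)\,dx=s_{\text{max}}$.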