Derivative of integral

  1. Jan 20, 2016 #1
    Hello,

    I have this problem

    [tex]\frac{\partial}{\partial\,x}\int_0^{\infty}\log(1+x)\,f_X(x)\,dx[/tex],

    where x is a random variable, and f_X(x) is its probability density function.

    It's been a long time since I encountered a similar problem, and I forgot how to do this. Do we use Leibniz integral rule here?

    Thanks
     
  3. Jan 20, 2016 #2

    Krylov

    Science Advisor
    Education Advisor

    Strictly speaking, this derivative is zero. You need to use different symbols for the integration variable and the differentiation variable. The way it is now makes it hard to answer your question.
     
  4. Jan 20, 2016 #3
    What do you mean?
     
  5. Jan 20, 2016 #4

    Krylov


    You integrate with respect to ##x##. This yields a number, a constant. The derivative of a constant is zero.
     
  6. Jan 20, 2016 #5
    OK, right. It is like this:

    [tex]\frac{\partial}{\partial\,s}\int_0^{\infty}\log(1+x\,s)\,f_X(x)\,dx[/tex]
     
  7. Jan 20, 2016 #6

    Krylov


    OK, yes, then you can use the Leibniz rule, but since your integral is over an unbounded domain you should probably use this version or something similar. Basically, you can interchange integration and differentiation pending some technical conditions, the most important of which is that you can bound the partial derivative of the integrand w.r.t. ##s## by some integrable function, uniformly in ##s##.
     
  8. Jan 20, 2016 #7
    So basically it becomes

    [tex]\int_0^{\infty}\frac{x\,f_X(x)}{1+x\,s}\,dx[/tex]

    right?
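As a quick numerical sanity check (not part of the original thread): for an *assumed* exponential density ##f_X(x)=e^{-x}##, the Leibniz-rule answer can be compared against a finite-difference derivative of the integral. The half-line is truncated where the density is negligible.

```python
import numpy as np

# Sanity check of differentiation under the integral sign, using an
# *assumed* exponential density f_X(x) = exp(-x). The domain [0, inf)
# is truncated at x = 50, where exp(-x) is negligible.
x = np.linspace(1e-6, 50.0, 200_000)
dx = x[1] - x[0]
f = np.exp(-x)                      # assumed density f_X

def capacity(s):
    # C(s) = integral of log(1 + x*s) * f_X(x) dx  (Riemann sum)
    return np.sum(np.log1p(x * s) * f) * dx

s, h = 2.0, 1e-5
finite_diff = (capacity(s + h) - capacity(s - h)) / (2 * h)
leibniz = np.sum(x * f / (1 + x * s)) * dx   # integral of x f_X(x)/(1+xs)

print(abs(finite_diff - leibniz))   # small: the two answers agree
```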
     
  9. Jan 20, 2016 #8

    Krylov


    Yes, but for this to be rigorous you have to prove that there exists a function ##g : [0,\infty) \to \mathbb{R}## such that ##g## is integrable, i.e.
    $$
    \int_0^{\infty}{|g(x)|\,dx} < \infty
    $$
    and furthermore it holds that
    $$
    \left|\frac{x\,f_X(x)}{1+x\,s}\right| \le |g(x)|
    $$
    for all ##x \ge 0## and for all ##s## that you are considering. If you are a physicist you probably don't care too much, but that will make me cry a little.
     
  10. Jan 20, 2016 #9

    Krylov


    It may not be that hard, though. For example, if ##s## is positive and bounded away from zero, then ##x \mapsto \frac{x}{1 + xs}## is bounded on ##[0,\infty)##, uniformly in ##s##. Let ##C > 0## be such a bound. Since ##f_X## is a probability density, you can choose ##g = C f_X##.
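To make the bound concrete (an illustrative sketch; ##s_{\min} = 0.5## is an arbitrary choice): since ##\frac{x}{1+xs} \le \frac{1}{s}##, any ##s \ge s_{\min} > 0## admits the uniform bound ##C = 1/s_{\min}##.

```python
import numpy as np

# Illustrative check of the uniform bound x/(1 + x*s) <= 1/s <= 1/s_min
# for all x >= 0 and all s >= s_min > 0 (s_min = 0.5 is arbitrary).
s_min = 0.5
C = 1.0 / s_min
x = np.linspace(0.0, 1e6, 1_000_001)
for s in (0.5, 1.0, 10.0):          # a few sample values of s >= s_min
    assert np.all(x / (1.0 + x * s) <= C + 1e-12)
print("bound holds")
```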
     
  11. Jan 20, 2016 #10
    If we have ##\mathbb{E}\{X\}=\bar{x}##, then we can take ##g(x)=xf_X(x)\ge\frac{xf_X(x)}{1+x\,s}## since ##x,\,s\ge 0##, right?
     
  12. Jan 20, 2016 #11

    Krylov


    Yes, if ##X## has finite expectation, that works fine. Very nice :smile:
     
  13. Jan 20, 2016 #12
    OK, perfect. Now comes the second question:

    This problem comes from a larger problem, which is an optimization problem. The optimization problem states that

    [tex]\underset{s\ge 0}{\max}\int_0^{\infty}\log(1+x\,s)\,f_X(x)\,dx\\\text{s.t.}\,\,\, s\le \bar{s}[/tex]

    and somehow the solution ends up like this

    [tex]s=\left(\frac{1}{x_0}-\frac{1}{x}\right)^+[/tex]

    where ##x_0## is a constant related to the Lagrangian multiplier, and ##(a)^+=\max(0,a)##. But how?
     
  14. Jan 20, 2016 #13

    Krylov


    I don't think I understand it entirely. What is ##\overline{s}##? Is it also an expectation? Also, why is the solution a function of ##x##? So far, I was under the impression that ##s## is a numerical parameter.
     
  15. Jan 20, 2016 #14
    ##\bar{s}## is a maximum value. I should've written ##s_{\text{max}}##. Basically, ##s## in my problem is the power allocated to a communication system, where ##\log(1+x\,s)## is the instantaneous capacity of the channel given state ##x##. The integral, of course, is the average capacity. So, I need to optimize ##s## such that the average capacity of the channel is maximized, given that there is a maximum power budget. Does that make sense now?
     
  16. Jan 20, 2016 #15

    Krylov


    Partially. I still don't understand why the solution you presented in post #12 is a function of ##x##. Also, I don't understand why you would use Lagrange multipliers when you have a simple inequality constraint ##0 \le s \le s_{\text{max}}##. The way I read it now, is that you need to find ##s_0## such that the function
    $$
    [0,s_{\text{max}}] \ni s \mapsto \int_0^{\infty}{\log(1 + sx)f_X(x)\,dx}
    $$
    assumes a local or global maximum in ##s_0##. But since ##f_X## is non-negative, doesn't this just mean that ##s_0 = s_{\text{max}}##? I suppose not, probably I'm misunderstanding something.
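Krylov's monotonicity observation is easy to check numerically (again with an assumed exponential density): the derivative ##\int_0^\infty \frac{x f_X(x)}{1+xs}\,dx## is positive, so for a scalar ##s## the objective is increasing and its maximum over ##[0, s_{\text{max}}]## sits at ##s_{\text{max}}##.

```python
import numpy as np

# For a scalar parameter s the objective is increasing in s, so the
# maximum over [0, s_max] is attained at s_max. Checked with an assumed
# exponential density f_X(x) = exp(-x), domain truncated at x = 50.
x = np.linspace(1e-6, 50.0, 100_000)
dx = x[1] - x[0]
f = np.exp(-x)

def capacity(s):
    return np.sum(np.log1p(x * s) * f) * dx

s_grid = np.linspace(0.0, 3.0, 61)          # s_max = 3.0 is arbitrary
values = np.array([capacity(s) for s in s_grid])
assert np.all(np.diff(values) > 0)          # strictly increasing in s
print("maximizer is s_max =", s_grid[np.argmax(values)])
```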
     
  17. Jan 20, 2016 #16
    That is why I asked the question; I thought I had done the partial derivative wrong. As for the constraint, it actually depends on ##x##: we need the average power to stay below a certain maximum budget. That is

    [tex]\int_0^{\infty} s(x)f_X(x)\,dx\leq s_{\text{max}}[/tex]

    The transmit power also depends on ##x##, i.e., it is ##s(x)##. So, the optimization problem becomes:

    [tex]
    \underset{s(x)\ge 0}{\max}\int_0^{\infty}\log(1+x\,s(x))\,f_X(x)\,dx\\\text{s.t.}\,\,\, \int_0^{\infty} s(x)f_X(x)\,dx\leq s_{\text{max}}
    [/tex]

    See the attached file eq. 4 and 5.
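For what it's worth, the water-filling form ##s(x)=\left(\frac{1}{x_0}-\frac{1}{x}\right)^+## can be reproduced numerically: choose the water level ##1/x_0## by bisection so that the power constraint holds with equality. The exponential density below is an assumption for illustration only.

```python
import numpy as np

# Sketch of the water-filling allocation s(x) = max(1/x0 - 1/x, 0) for
# an *assumed* exponential density f_X(x) = exp(-x). The level 1/x0 is
# found by bisection (in log scale) so the average-power constraint
# holds with equality: integral of s(x) f_X(x) dx = s_max.
x = np.linspace(1e-4, 50.0, 500_000)
dx = x[1] - x[0]
f = np.exp(-x)

def avg_power(x0):
    s = np.maximum(1.0 / x0 - 1.0 / x, 0.0)
    return np.sum(s * f) * dx

s_max = 1.0
lo, hi = 1e-3, 1e3            # bracket for x0; avg_power decreases in x0
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if avg_power(mid) > s_max:
        lo = mid              # too much power: raise x0 (lower the level)
    else:
        hi = mid
x0 = np.sqrt(lo * hi)
print(x0, avg_power(x0))      # avg_power(x0) ≈ s_max
```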
     
  18. Jan 20, 2016 #17

    Krylov


    Aha, so you are maximizing over a set of functions, rather than over a numerical parameter. This makes your problem more interesting, but also more difficult. It requires some knowledge of variational methods (i.e. infinite dimensional optimization). A few general remarks on strategy:
    • You first need to describe carefully which functions are admissible, by specifying the domain of your capacity functional as a subset of an appropriate function space, taking into account the budget constraint. In infinite dimensions it is typically not trivial to actually prove that this set contains a maximizer.

    • The ordinary (numerical) derivative changes into a so-called Gâteaux derivative, which is a derivative of a functional (or, more generally, a nonlinear operator) with respect to a function. (Depending on context, sometimes you need the stronger notion of Fréchet derivative.) In particular, just pretending that ##s## is a numerical parameter and applying Leibniz' rule does not work.

    • In order to deal with the inequality constraints some infinite dimensional form of the Kuhn-Tucker conditions may be required.
    You might want to have a look at Zeidler's book on applied nonlinear functional analysis (https://www.amazon.com/Nonlinear-Functional-Analysis-its-Applications/dp/038790915X). I didn't study your PDF, but it may also contain some pointers to relevant literature.
     
  19. Jan 21, 2016 #18

    Krylov


    Did you notice in your PDF link that in (3) there is inequality while in (4) the constraint is an equality?

    It seems to me that one argues a priori that any maximizer of the problem with the inequality constraint must actually satisfy this constraint with equality, thereby eliminating the need for Kuhn-Tucker type conditions and making the problem amenable to Lagrange multipliers.

    From the application's point of view, is it natural to assume from the beginning that any ##S## is continuous? (The maximizer in (5) is continuous.) Or would it be more natural to start in some space containing discontinuous functions as well?
     
  20. Jan 21, 2016 #19
    I think ##s## is continuous, which means it takes any value between ##0## and ##s_{\text{max}}##. OK, now with equality in the constraint, how do we get the solution? Any hint?
     
  21. Jan 21, 2016 #20

    Krylov


    Yes, see post #17. You may also find sections 3.1 and 3.5 of Cheney's Analysis for Applied Mathematics helpful and a bit more accessible. For equality constraints the theory presented there should be enough.

    Start by choosing a suitable function space, so you can define your objective and constraint functionals on the same open subset of that space and apply Lagrange multipliers. A Banach space of integrable functions seems most natural, but its positive cone has empty interior, which may complicate matters somewhat. Since you already know that your maximizer is continuous, you could cheat a little and try to work directly on a space of continuous functions instead.
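As a heuristic (ignoring the function-space technicalities discussed above), the expression in post #12 drops out of a pointwise Lagrangian argument. With a multiplier ##\lambda## for the equality constraint, consider

[tex]\int_0^{\infty}\left[\log(1+x\,s(x))-\lambda\,s(x)\right]f_X(x)\,dx.[/tex]

Since ##f_X \ge 0##, the integrand can be maximized pointwise over ##s(x)\ge 0##. Setting the derivative with respect to ##s(x)## to zero gives ##\frac{x}{1+x\,s(x)}=\lambda##, i.e. ##s(x)=\frac{1}{\lambda}-\frac{1}{x}##, and imposing ##s(x)\ge 0## yields

[tex]s(x)=\left(\frac{1}{x_0}-\frac{1}{x}\right)^+,\qquad x_0=\lambda,[/tex]

with ##\lambda## chosen so that ##\int_0^{\infty}s(x)f_X(x)\,dx=s_{\text{max}}##. Making this rigorous requires the variational framework described in post #17.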
     