
Characteristic Function of Joint Gaussian Distribution

  1. Feb 21, 2012 #1
    This is inspired by Kardar's Statistical Physics of Particles, page 45, and uses similar notation.

    1. The problem statement, all variables and given/known data
    Find the characteristic function [itex]\widetilde{p}(\overrightarrow{k})[/itex] for the joint Gaussian distribution:

    [tex]p(\overrightarrow{x})=\frac{1}{\sqrt{(2\pi)^{N}\det[C]}}\exp\Big[-\frac{1}{2}\sum_{m, n}^{N}C_{mn}^{-1}(x_{m}-\lambda_{m})(x_{n}-\lambda_{n})\Big][/tex]

    where [itex]C[/itex] is a real symmetric matrix and [itex]C^{-1}[/itex] is its inverse.


    2. Relevant equations
    [itex]\widetilde{p}(\overrightarrow{k})=\int p(\overrightarrow{x})\exp\Big[-i\sum_{j=1}^{N}k_jx_j\Big]d^{N}\overrightarrow{x}[/itex]

    That is, the characteristic function is the Fourier transform of the probability distribution.



    3. The attempt at a solution
    The first part of this problem was to find the normalization factor for the joint Gaussian, which is the square-root term in the expression for p(x). The way I did that was by noting that since C is real and symmetric, there must be an orthogonal matrix D such that [itex]D^{-1}CD[/itex] is diagonal with C's eigenvalues along the diagonal. Then, changing to the variables [itex]\overrightarrow{y}[/itex] defined by [itex]D\overrightarrow{y}=\overrightarrow{x}-\overrightarrow{\lambda}[/itex], the double sum over m, n reduces to a sum over a single index, and the integral breaks into the product of N Gaussian integrals with variances equal to the eigenvalues of C.
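    Explicitly, the step I mean is (my own schematic write-up, so the indices may want checking): with [itex]\overrightarrow{x}-\overrightarrow{\lambda}=D\overrightarrow{y}[/itex] and [itex]D^{-1}CD=\mathrm{diag}(\alpha_{1},...,\alpha_{N})[/itex], the quadratic form diagonalizes,

    [tex]\sum_{m, n}^{N}C_{mn}^{-1}(x_{m}-\lambda_{m})(x_{n}-\lambda_{n})=\overrightarrow{y}^{T}D^{T}C^{-1}D\overrightarrow{y}=\sum_{j}^{N}\frac{y_{j}^{2}}{\alpha_{j}}[/tex]

    and since the Jacobian of an orthogonal change of variables is 1, the normalization integral factorizes into N one-dimensional Gaussians:

    [tex]\int d^{N}\overrightarrow{y}\,\exp\Big[-\sum_{j}^{N}\frac{y_{j}^{2}}{2\alpha_{j}}\Big]=\prod_{j=1}^{N}\sqrt{2\pi\alpha_{j}}=\sqrt{(2\pi)^{N}\det[C]}[/tex]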

    That part was pretty confusing to me but fortunately Kardar tells you the recipe. He says, "The corresponding joint characteristic function is obtained by similar manipulations, and is given by

    [tex]\widetilde{p}(\overrightarrow{k})=\exp\Big[-i\sum_{m}^{N}k_{m}\lambda_{m}-\frac{1}{2}\sum_{m, n}^{N}C_{mn}k_{m}k_{n}\Big][/tex]
    ."

    Unfortunately I don't see how you can get rid of the extra x's when you try to perform the Fourier transform...

    [tex] \widetilde{p}(\overrightarrow{k})=\int p(\overrightarrow{x})\exp[-i\sum_{j=1}^{N}k_jx_j]d^{N}\overrightarrow{x}[/tex]
    [tex]=\frac{1}{\sqrt{(2\pi)^{N}\det[C]}}\int_{-\infty}^{\infty }\int_{-\infty}^{\infty }...\int_{-\infty}^{\infty }\exp\left (-\frac{1}{2}\sum_{m, n}^{N}C_{mn}^{-1}(x_{m}-\lambda_{m})(x_{n}-\lambda_{n})-i\sum_{j}^{N}k_{j}x_{j} \right )dx_1dx_2...dx_N [/tex]

    Pretty ugly. Now if I try to change coordinates to the y's,

    [tex]=\frac{1}{\sqrt{(2\pi)^{N}\alpha_{1}\alpha_{2}\cdots\alpha_{N}}}\int_{-\infty}^{\infty }...\int_{-\infty}^{\infty }\exp\left (-\sum_{j}^{N}\frac{y_{j}^{2}}{2\alpha_{j}}-i\sum_{j}^{N}k_{j}x_{j} \right )dy_1dy_2...dy_N [/tex]

    where the [itex]\alpha_{j}[/itex] are the eigenvalues of C and their product (of all N of them) is det[C].

    This just doesn't seem to help. I'm really not sure what to do. Should I look for some other coordinates and find a new D to diagonalize? Or am I missing something? Is there some sort of orthogonality trick that lets me throw away a bunch of terms in the sum?

    Any help would be greatly appreciated.

    P.S. For some reason I couldn't get two of those equations to work in the PF markup. ???
     
  2. Feb 21, 2012 #2

    kai_sikorski
    Gold Member

    I think writing things out in sums for this problem obscures what's really going on; keeping things as matrices and vectors makes everything a lot clearer.

    [tex] p({\bf x}) = (2\pi)^{-N/2} |{\bf C}|^{-1/2} \exp\left(-\frac{1}{2}({\bf x}-{\bf \mu})^T {\bf C}^{-1}({\bf x}-{\bf \mu})\right)[/tex]
    [tex] \widetilde{p}({\bf k}) = (2\pi)^{-N/2} |{\bf C}|^{-1/2} \int \exp\left(-\frac{1}{2}({\bf x}-{\bf \mu})^T {\bf C}^{-1}({\bf x}-{\bf \mu}) - i {\bf k}^T {\bf x}\right) d {\bf x}[/tex]

    Now, you correctly noticed that
    [tex] {\bf C} = {\bf D}^{-1}{\bf \sigma}{\bf D}[/tex]
    where [itex]{\bf \sigma}[/itex] is a diagonal matrix and [itex]{\bf D}[/itex] is orthogonal. However, did you know that [itex]{\bf D}[/itex] being orthogonal means that
    [tex]{\bf D}^{-1} = {\bf D}^{T} [/tex]

    Okay so define
    [tex] {\bf y} = {\bf D}({\bf x}-{\bf \mu})[/tex]
    Hence
    [tex] {\bf x} = {\bf D}^T{\bf y}+{\bf \mu}[/tex]
    And so after a change of variables in the integral we get
    [tex] \widetilde{p}({\bf k}) = (2\pi)^{-N/2} |{\bf C}|^{-1/2} \int \exp\left(-\frac{1}{2}{\bf y}^T {\bf \sigma}^{-1}{\bf y} - i {\bf k}^T ({\bf D}^T{\bf y}+{\bf \mu})\right) d {\bf y}[/tex]
    Pull out the mean
    [tex] \widetilde{p}({\bf k}) = (2\pi)^{-N/2} |{\bf C}|^{-1/2} \exp(- i {\bf k}^T{\bf \mu})\int \exp\left(-\frac{1}{2}{\bf y}^T {\bf \sigma}^{-1}{\bf y} - i {\bf k}^T{\bf D}^T{\bf y}\right) d {\bf y}[/tex]
    Okay, now a hint: the [itex]{\bf D}[/itex] in the integral is a rotation. Think of it as rotating in k-space as opposed to real space. So define
    [tex] {\bf \kappa} = {\bf D} {\bf k} [/tex]
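    (To spell out why that helps:
    [tex] - i\,{\bf k}^T{\bf D}^T{\bf y} = - i\,({\bf D}{\bf k})^T{\bf y} = - i\,{\bf \kappa}^T{\bf y} [/tex]
    so, with [itex]{\bf \sigma}^{-1}[/itex] diagonal, the exponent splits into N independent one-dimensional Gaussian integrals, one per component of [itex]{\bf y}[/itex].)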
     
  3. Feb 21, 2012 #3
    Thanks a lot Kajetan! The ideas of factoring out the mean and rotating k were exactly what was eluding me. I got tricked into following Kardar's notation of writing out the sums; I figured he wrote them like that as a hint!
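
    In case anyone finds this thread later, here's roughly how it finishes (my own write-up, so check the algebra). Each decoupled integral is a standard Gaussian,

    [tex]\int_{-\infty}^{\infty}\exp\Big[-\frac{y_{j}^{2}}{2\alpha_{j}}-i\kappa_{j}y_{j}\Big]dy_{j}=\sqrt{2\pi\alpha_{j}}\,\exp\Big[-\frac{\alpha_{j}\kappa_{j}^{2}}{2}\Big][/tex]

    and the product of the N prefactors cancels the normalization, while [itex]\sum_{j}\alpha_{j}\kappa_{j}^{2}={\bf \kappa}^{T}{\bf \sigma}{\bf \kappa}={\bf k}^{T}{\bf C}{\bf k}[/itex], so (writing kai_sikorski's [itex]{\bf \mu}[/itex] back as [itex]{\bf \lambda}[/itex])

    [tex]\widetilde{p}({\bf k})=\exp\Big[-i\,{\bf k}^{T}{\bf \lambda}-\frac{1}{2}\,{\bf k}^{T}{\bf C}{\bf k}\Big][/tex]

    which is exactly the expression Kardar quotes. As a sanity check, here's a quick numpy sketch of my own (the values of C, [itex]{\bf \lambda}[/itex], and k below are made-up test numbers, not from the book): it estimates the characteristic function by Monte Carlo and compares against the closed form.

    [code]
    import numpy as np

    # Estimate the characteristic function <exp(-i k.x)> of a 2D Gaussian
    # by Monte Carlo and compare with exp(-i k.lam - 0.5 k^T C k).
    rng = np.random.default_rng(0)
    C = np.array([[2.0, 0.5],
                  [0.5, 1.0]])    # real, symmetric, positive definite
    lam = np.array([1.0, -2.0])   # the means lambda_m
    k = np.array([0.3, -0.7])     # an arbitrary test wavevector

    x = rng.multivariate_normal(lam, C, size=200_000)  # samples of p(x)
    mc = np.mean(np.exp(-1j * x @ k))                  # Monte Carlo estimate
    exact = np.exp(-1j * k @ lam - 0.5 * k @ C @ k)    # closed form
    print(mc, exact)  # should agree to roughly 1/sqrt(#samples)
    [/code]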
     