Jolb
This is inspired by Kardar's Statistical Physics of Particles, page 45, and uses similar notation.
Homework Statement
Find the characteristic function, [itex]\widetilde{p}(\overrightarrow{k})[/itex] for the joint gaussian distribution:
[tex]p(\overrightarrow{x})=\frac{1}{\sqrt{(2\pi)^{N}\det C}}\exp\left[-\frac{1}{2}\sum_{m,n}^{N}C_{mn}^{-1}(x_{m}-\lambda_{m})(x_{n}-\lambda_{n})\right][/tex]
where C is a real symmetric matrix and [itex]C^{-1}[/itex] is its inverse.
(Note that the -1 is an exponent denoting the matrix inverse, not subtraction of the identity matrix: any time I write [itex]X^{-1}[/itex], I mean the inverse of the matrix X.)
Homework Equations
[itex]\widetilde{p}(\overrightarrow{k})=\int p(\overrightarrow{x})\exp\left[-i\sum_{j=1}^{N}k_jx_j\right]d^{N}\overrightarrow{x}[/itex]
That is, the characteristic function is the Fourier transform of the probability distribution.
The Attempt at a Solution
The first part of this problem was to find the normalization factor for the joint Gaussian, i.e. the square-root term in the expression for p(x). I did that by noting that since C is real and symmetric, there must be an orthogonal matrix D such that [itex]D^{-1}CD[/itex] is diagonal with C's eigenvalues along the diagonal. Then, changing variables via [itex]\overrightarrow{x}-\overrightarrow{\lambda}=D\overrightarrow{y}[/itex], the double sum over m, n collapses to a sum over a single index, and the integral factors into a product of N one-dimensional Gaussian integrals whose variances are the eigenvalues of C.
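Written out explicitly (in my notation): with [itex]D^{-1}CD=\mathrm{diag}(\alpha_1,\dots,\alpha_N)[/itex] and the substitution [itex]\overrightarrow{x}-\overrightarrow{\lambda}=D\overrightarrow{y}[/itex] (unit Jacobian, since D is orthogonal), the quadratic form diagonalizes and the normalization comes out as
[tex]\sum_{m,n}^{N}C_{mn}^{-1}(x_{m}-\lambda_{m})(x_{n}-\lambda_{n})=\sum_{i=1}^{N}\frac{y_{i}^{2}}{\alpha_{i}},\qquad \int \exp\left[-\frac{1}{2}\sum_{i=1}^{N}\frac{y_{i}^{2}}{\alpha_{i}}\right]d^{N}y=\prod_{i=1}^{N}\sqrt{2\pi\alpha_{i}}=\sqrt{(2\pi)^{N}\det C}[/tex]
using the fact that the product of the eigenvalues is det C.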
That part was pretty confusing to me, but fortunately Kardar gives the recipe. He says, "The corresponding joint characteristic function is obtained by similar manipulations, and is given by
http://img194.imageshack.us/img194/9551/newfirst.png
."
Unfortunately, I don't see how to get rid of the extra x's when I try to perform the Fourier transform:
[tex] \widetilde{p}(\overrightarrow{k})=\int p(\overrightarrow{x})\exp[-i\sum_{j=1}^{N}k_jx_j]d^{N}\overrightarrow{x}[/tex]
[tex]=\frac{1}{\sqrt{(2\pi)^{N}\det C}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\exp\left(-\frac{1}{2}\sum_{m,n}^{N}C_{mn}^{-1}(x_{m}-\lambda_{m})(x_{n}-\lambda_{n})-i\sum_{j=1}^{N}k_{j}x_{j}\right)dx_{1}\,dx_{2}\cdots dx_{N}[/tex]
Pretty ugly. Now if I try to change coordinates to the y's,
http://img715.imageshack.us/img715/4754/secondpv.png
where the [itex]\alpha_i[/itex] are the eigenvalues of C and their product (all N of them) is [itex]\det C[/itex].
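For what it's worth, here is my attempt at writing out the step, which may be what Kardar's "similar manipulations" means. After the substitution [itex]\overrightarrow{x}=D\overrightarrow{y}+\overrightarrow{\lambda}[/itex], define [itex]\overrightarrow{q}=D^{T}\overrightarrow{k}[/itex], so the linear term becomes [itex]\sum_{j}k_{j}x_{j}=\overrightarrow{k}\cdot\overrightarrow{\lambda}+\sum_{i}q_{i}y_{i}[/itex]. Each [itex]y_i[/itex] can then be handled by completing the square:
[tex]-\frac{y_{i}^{2}}{2\alpha_{i}}-iq_{i}y_{i}=-\frac{(y_{i}+i\alpha_{i}q_{i})^{2}}{2\alpha_{i}}-\frac{\alpha_{i}q_{i}^{2}}{2}[/tex]
Each shifted Gaussian integral reproduces [itex]\sqrt{2\pi\alpha_{i}}[/itex], which cancels the normalization, and using [itex]\sum_{i}\alpha_{i}q_{i}^{2}=\overrightarrow{k}^{T}C\overrightarrow{k}[/itex] this would leave [itex]\widetilde{p}(\overrightarrow{k})=\exp\left[-i\sum_{j}k_{j}\lambda_{j}-\frac{1}{2}\sum_{m,n}C_{mn}k_{m}k_{n}\right][/itex].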
This just doesn't seem to help, and I'm really not sure what to do. Should I look for some other coordinates and find a new D to diagonalize? Or am I missing something? Is there some sort of orthogonality trick that lets me throw away a bunch of terms in the sum?
Any help would be greatly appreciated.
P.S. For some reason I couldn't get two of those equations to render in the PF markup, so I've linked images instead.
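As a numerical sanity check (my own addition, not from Kardar): the standard closed form for the Gaussian characteristic function with this sign convention is [itex]\widetilde{p}(\overrightarrow{k})=\exp\left[-i\overrightarrow{k}\cdot\overrightarrow{\lambda}-\frac{1}{2}\sum_{m,n}C_{mn}k_{m}k_{n}\right][/itex], and a quick Monte Carlo estimate of [itex]\langle e^{-i\overrightarrow{k}\cdot\overrightarrow{x}}\rangle[/itex] agrees with it (the specific lam, C, and k values below are arbitrary test inputs):

```python
import numpy as np

# Monte Carlo check of the multivariate Gaussian characteristic function
# with the exp(-i k.x) convention: p~(k) = exp(-i k.lambda - (1/2) k^T C k).
rng = np.random.default_rng(0)

lam = np.array([1.0, -0.5])        # mean vector (lambda), arbitrary choice
C = np.array([[2.0, 0.6],          # real symmetric covariance matrix
              [0.6, 1.0]])
k = np.array([0.3, -0.2])          # test wavevector, arbitrary choice

# Estimate <exp(-i k.x)> by sampling x from the joint Gaussian
samples = rng.multivariate_normal(lam, C, size=200_000)
mc_estimate = np.mean(np.exp(-1j * samples @ k))

# Closed-form characteristic function
closed_form = np.exp(-1j * k @ lam - 0.5 * k @ C @ k)

print(abs(mc_estimate - closed_form))   # difference should be small
```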