mjordan2nd said:
I finally got around to trying to look at what that G matrix really represents. According to my book,
$$ G_{jk} = \langle (X_j - \langle X_j \rangle)(X_k - \langle X_k \rangle) \rangle. $$
I wanted to prove this, but my derivation is off by a factor of two, and I'm not sure why. It may be something as simple as a misunderstanding of the multivariable Taylor expansion, but if so, I can't figure out where. I can't find a good reference for the multivariable Taylor expansion, so I suspect either that is the issue or the book is incorrect. Fzero, I had a bit of trouble with your method, so instead I tried to Taylor expand both sides in the q's and equate the two sides term by term. First I rewrote the second equation in my original post as
$$ \frac{1}{Z} \int_{\Omega} e^{-\beta H} e^{i \sum_{j=1}^N q_j (x_j - \langle x_j \rangle)} = \exp\left( -\sum_{j,k=1}^N q_j G_{jk} \frac{q_k}{2} \right). $$
The Taylor expansion method should work, but you made a mistake in the expansion of the right-hand side, which I'll point out in a bit. In the meantime, I can't help pointing out something closely related that turns out to be extremely useful for computing correlation functions in practice. The expression you have above is an example of something called a "generating function": we can compute the expected value of any function of the ##x_i## by taking appropriate derivatives of it. Calling the generating function ##W[q_j]## and copying your formula, we have
$$ W[q_j] = \frac{1}{Z} \int_{\Omega} e^{-\beta H} e^{i \sum_{j=1}^N q_j (x_j - \langle x_j \rangle)} = \exp\left( -\sum_{j,k=1}^N q_j G_{jk} \frac{q_k}{2} \right). $$
Note that taking ##-i## times the derivative with respect to ##q_i## brings down a factor of ##x_i - \langle x_i \rangle## inside the integral:
$$ -i \frac{\partial}{\partial q_i} W[q_i] = \left\langle (x_i - \langle x_i \rangle)\, e^{i \sum_{j=1}^N q_j (x_j - \langle x_j \rangle)} \right\rangle = i \sum_{k} G_{ik} q_k \exp\left( -\sum_{j,k=1}^N q_j G_{jk} \frac{q_k}{2} \right). $$
I used the fact that ##G_{ij}## can be taken to be symmetric. Since you had a question about that below, the reason is that it appears in the sum
$$ \sum_{ij} G_{ij} q_i q_j .$$
Since the product ##q_i q_j## is symmetric under ##i\leftrightarrow j##, if G had an antisymmetric part, it would simply drop out in the sum. So we may restrict to symmetric G without loss of generality.
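Here's a quick numerical illustration of that claim, if it helps (a NumPy sketch with a random 4×4 G; nothing about it is specific to your problem):
```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=4)
G = rng.normal(size=(4, 4))              # generic, not symmetric
A = 0.5 * (G - G.T)                      # antisymmetric part
S = 0.5 * (G + G.T)                      # symmetric part

print(q @ A @ q)                         # ~1e-16: the antisymmetric part drops out
print(np.isclose(q @ G @ q, q @ S @ q))  # True: only the symmetric part contributes
```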
That aside, the expression I got by taking one derivative isn't so interesting, but if we differentiate again, we get an expression for the correlator that you want:
$$ -\frac{\partial^2}{\partial q_i \partial q_j} W[q_i] = \left\langle (x_i - \langle x_i \rangle)(x_j - \langle x_j \rangle)\, e^{i \sum_{k=1}^N q_k (x_k - \langle x_k \rangle)} \right\rangle $$
$$ = \left[ G_{ij} - \left( \sum_{k} G_{ik} q_k \right) \left( \sum_{m} G_{jm} q_m \right) \right] \exp\left( -\sum_{m,n=1}^N q_m G_{mn} \frac{q_n}{2} \right). $$
We can then set all the ##q_i = 0 ## to get an expression for the correlation function. Written in terms of your ## y_j = x_j-\langle x_j\rangle##, we indeed find that
$$ \langle y_i y_j\rangle = G_{ij}. $$
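If you'd like to see that identity numerically: for a quadratic H the ##y_i## are just Gaussian with covariance G, so sampling should reproduce it. A minimal NumPy sketch, assuming an arbitrary positive-definite G that I made up for the purpose:
```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary symmetric positive-definite matrix standing in for G
B = rng.normal(size=(3, 3))
G = B @ B.T + 3 * np.eye(3)

# Draw y ~ N(0, G); the sample second moments should converge to G_{ij}
y = rng.multivariate_normal(np.zeros(3), G, size=200_000)
G_est = y.T @ y / len(y)

print(np.abs(G_est - G).max())  # small, shrinking like 1/sqrt(number of samples)
```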
More generally, for any function of the ##y_i##, we have an expression that is symbolically
$$ \langle f(y_i)\rangle = \left[ f(-i \partial/\partial q_i ) W[q_i] \right]_{q_i=0}. $$
That is, we replace every appearance of a ##y_i## in the expression with the appropriate derivative acting on the generating function, then set all the ##q_i = 0 ##.
I should note that my notation is a bit condensed. By ##f(y_i)## I also mean functions of several different ##y_i##'s, like ##y_1 y_2^2 y_9##, or more complicated expressions.
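To make the recipe concrete, here's a small SymPy sketch for ##N = 2## (the helper `corr` is just a name I made up); it implements exactly the rule above, replacing each ##y_a## with ##-i\,\partial/\partial q_a## and then setting the q's to zero:
```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
G11, G12, G22 = sp.symbols('G11 G12 G22')

# Generating function for N = 2, with G taken symmetric (G21 = G12)
W = sp.exp(-(G11*q1**2 + 2*G12*q1*q2 + G22*q2**2) / 2)

def corr(*qs):
    # <y_a y_b ...> = [(-i)^n d^n W / (dq_a dq_b ...)]_{q=0}
    expr = W
    for q in qs:
        expr = sp.diff(expr, q)
    return sp.simplify(((-sp.I) ** len(qs) * expr).subs({q1: 0, q2: 0}))

print(corr(q1, q2))          # -> G12
print(corr(q1, q1, q2, q2))  # -> G11*G22 + 2*G12**2
```
The second print reproduces the four-point function as a sum over pairings of two-point functions, which is the usual statement of Wick's theorem for Gaussian integrals.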
In terms of
$$ y_j = x_j - \langle x_j \rangle, $$
this reads
$$ \frac{1}{Z} \int_{\Omega} e^{-\beta H} e^{i \sum_{j=1}^N q_j y_j} = \left\langle e^{i \sum_{j=1}^N q_j y_j} \right\rangle = \exp\left( -\sum_{j,k=1}^N q_j G_{jk} \frac{q_k}{2} \right). $$
Taylor expanding the left-hand side in terms of the q's around q = 0, I got (to second order)
$$ \left\langle e^{i \sum_{j=1}^N q_j y_j} \right\rangle = \left\langle 1 + i \sum_j y_j q_j - \frac{1}{2} \sum_{ij} y_i y_j q_i q_j + \cdots \right\rangle, $$
and expanding the right-hand side
$$ 1 + \sum_i \frac{\partial}{\partial q_i} \left[ \exp\left( -\sum_{j,k=1}^N q_j G_{jk} \frac{q_k}{2} \right) \right] = 1 + \sum_{ij} \frac{1}{2} G_{ji} q_j q_i + \sum_{ik} \frac{1}{2} G_{ik} q_i q_k = 1 + \sum_{jk} G_{jk} q_j q_k $$
You did the expansion a bit too quickly. Remember that for an ordinary function, the Taylor expansion is
$$f(x) = f(0) + f'(0) x + \frac{1}{2} f''(0) x^2 + \cdots .$$
In the expression above, you should really have
$$ W[q_i] = \left. W[q_i] \right|_{q_i=0} + \sum_i q_i \left[ \frac{\partial}{\partial q_i} W[q_i] \right]_{q_i=0} + \frac{1}{2} \sum_{ij} q_i q_j \left[ \frac{\partial^2}{\partial q_i \partial q_j} W[q_i] \right]_{q_i=0} + \cdots . $$
When you compare the resulting terms, the 1/2 coefficient in this expansion makes the factors of 2 work out, along with your argument about the symmetry of G.
We can also see why the method I described above works out, since the derivatives I was taking match up with what you do here in the Taylor expansion.
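In case it's useful, here's a SymPy check of that bookkeeping for ##N = 2## (symbolic G entries, with ##t## as a bookkeeping parameter for the expansion order):
```python
import sympy as sp

q1, q2, t = sp.symbols('q1 q2 t')
G11, G12, G22 = sp.symbols('G11 G12 G22')

W = sp.exp(-(G11*q1**2 + 2*G12*q1*q2 + G22*q2**2) / 2)

# Expand W to second order in the q's
second_order = sp.series(W.subs({q1: t*q1, q2: t*q2}), t, 0, 3).removeO().subs(t, 1)
print(sp.expand(second_order))  # -> 1 - G11*q1**2/2 - G12*q1*q2 - G22*q2**2/2
```
The coefficient of ##q_1 q_2## is ##-G_{12}##, not ##-2G_{12}##: the ##1/2## from the Taylor formula cancels against the two identical cross terms. Matching this against the ##-\frac{1}{2} \langle y_i y_j \rangle q_i q_j## term on the left-hand side (which likewise contributes twice to ##q_1 q_2##) gives ##\langle y_1 y_2 \rangle = G_{12}## with no stray factor of 2.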
Where in the last step I combined the two terms using the symmetry of the matrix G (which the book states is required, though this is not clear to me). Based on this it appears that ##\langle y_i y_j \rangle = 2 G_{ij}##. However, while typing this it occurred to me that the discrepancy between my calculation and the book's could come from the fact that you have to count ##\frac{1}{2} \sum_{ij} y_i y_j q_i q_j## basically twice, since interchanging the indices gives you the same thing. Is this the correct explanation for the difference between my calculation and the text's? I'm sorry, my (the book's...?) notation is awfully difficult to keep up with, but could anyone confirm my suspicion? Thanks!
Edit: In fact, looking at it in terms of my explanation for the discrepancy, the reason G must be symmetric is becoming clear. It's finally all coming together, I think.