Quantum Field Theory: Understanding Path Integrals and Limit Trick

latentcorpse
I'm trying to understand path integrals as described in my lecture notes (which are reinforced by Peskin & Schroeder).

Anyway, on p. 284 of P&S, there is a formula between Eqs. (9.17) and (9.18) that reads:

e^{-iHT} | \phi_a \rangle = \sum_n e^{-i E_n T} | n \rangle \langle n | \phi_a \rangle \rightarrow \langle \Omega | \phi_a \rangle e^{-i E_0 \cdot \infty (1-i \epsilon)} | \Omega \rangle as T \rightarrow \infty ( 1 - i \epsilon)

I can follow the equality on the left fair enough, but I don't understand what happens when we take the limit. Apparently this is quite a common trick in QFT, so can anybody explain to me what is going on here?

Thanks!
 
The presence of the \epsilon means that the exponential is a strongly decaying function. The ground-state term is the slowest-decaying one, so it dominates. You should refer to the discussion around Eq. (4.27) for the relationship between the free and interacting ground states.
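
To spell out the limit trick (a short sketch, using only the fact that E_n > E_0 for all excited states): writing the limit as T \rightarrow T(1-i\epsilon), each term in the sum picks up a genuinely decaying factor,

e^{-i E_n T (1-i\epsilon)} = e^{-i E_n T} e^{-E_n \epsilon T},

so relative to the ground-state term every excited-state contribution is suppressed by e^{-(E_n - E_0)\epsilon T} \rightarrow 0 as T \rightarrow \infty. Only the lowest-energy term survives, which is exactly the replacement made in the formula from the original post.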
 
fzero said:
The presence of the \epsilon means that the exponential is a strongly decaying function. The ground-state term is the slowest-decaying one, so it dominates. You should refer to the discussion around Eq. (4.27) for the relationship between the free and interacting ground states.

Ok. Thanks. Two things:

(i) In the expression I wrote down in my original post (out of my notes), why don't we set the e^{-i E_0 \cdot \infty (1-i \epsilon)} factor to e^0=1? Why do we leave it written out explicitly, even though we have taken the limit?

(ii) Just reading on in P&S, in (4.28), how do we go from the 2nd to the 3rd line?

Cheers!
 
latentcorpse said:
Ok. Thanks. Two things:

(i) In the expression I wrote down in my original post (out of my notes), why don't we set the e^{-i E_0 \cdot \infty (1-i \epsilon)} factor to e^0=1? Why do we leave it written out explicitly, even though we have taken the limit?

In the scattering formalism, we treat the interaction as occurring in the time region around t=t_0. At times far in the past and future, t\rightarrow\pm \infty, the particles are far enough from each other that they can be treated as free. The state |0\rangle is the free vacuum, while |\Omega\rangle is the vacuum for the interacting theory. On general principles, we know that

|\Omega\rangle = c_0 |0\rangle + O(T) + \cdots, (*)

where c_0 is a c-number.

If we drop all T dependence, we lose all information about anything past the first term in (*).
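
A sketch of the relation being alluded to (the same limit trick as in the original post, now applied to the free vacuum |0\rangle, and assuming \langle \Omega | 0 \rangle \neq 0):

e^{-iHT} |0\rangle = \sum_n e^{-iE_n T} |n\rangle \langle n | 0 \rangle \rightarrow e^{-iE_0 T} |\Omega\rangle \langle \Omega | 0 \rangle as T \rightarrow \infty(1-i\epsilon),

so that

|\Omega\rangle = \lim_{T \rightarrow \infty(1-i\epsilon)} \left( e^{-iE_0 T} \langle \Omega | 0 \rangle \right)^{-1} e^{-iHT} |0\rangle,

which is essentially the content of the discussion around (4.27).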

(ii) Just reading on in P&S, in (4.28), how do we go from the 2nd to the 3rd line?

Cheers!

Use the definition of U from (4.17).
 
fzero said:
In the scattering formalism, we treat the interaction as occurring in the time region around t=t_0. At times far in the past and future, t\rightarrow\pm \infty, the particles are far enough from each other that they can be treated as free. The state |0\rangle is the free vacuum, while |\Omega\rangle is the vacuum for the interacting theory. On general principles, we know that

|\Omega\rangle = c_0 |0\rangle + O(T) + \cdots, (*)

where c_0 is a c-number.

If we drop all T dependence, we lose all information about anything past the first term in (*).



Use the definition of U from (4.17).

I now get the first bit.

However, I have in my notes that these time evolution operators are given by Dyson's formula:

U(t,t_0)= Te^{-i \int_{t_0}^t \hat{H}(t') dt'}

Are these equivalent?
 
Dyson's formula is shown to be equivalent to (4.17) in the discussion following that formula.
 
fzero said:
Dyson's formula is shown to be equivalent to (4.17) in the discussion following that formula.

Ok. I'm not sure how easy this next bit will be for you to follow as I haven't been able to find the corresponding stuff in P&S, so this is all just from my notes.

I have the formula:

Z ( \vec{b})=\frac{1}{Z_A} \int d^Nx e^{-\frac{1}{2} \vec{x}^T \cdot A \vec{x} + \vec{b}^T \cdot \vec{x} - V( \vec{x} ) } (*)

where Z_A = \int d^N x e^{-\frac{1}{2} \vec{x}^T \cdot A \vec{x}}

and Z_{A, \vec{b}} = \int d^N x e^{-\frac{1}{2} \vec{x}^T \cdot A \vec{x} + \vec{b}^T \cdot \vec{x}}

Now we are told that (*) converges if V is bounded below and that, for simplicity, V(0)=0, \frac{\partial V}{\partial \vec{x}} |_{\vec{x}=0}=0

Now I need to figure out how to write this as

Z( \vec{b})=e^{-V( \frac{\partial}{\partial \vec{b}} )} \frac{1}{Z_A} Z_{A, \vec{b}}

where V( \frac{\partial}{\partial \vec{b}} ) = \sum_{n=0}^\infty \frac{1}{n!} V^{(n)}_{i_1, \dots , i_n}(0) \frac{\partial}{\partial b_{i_1}} \dots \frac{\partial}{\partial b_{i_n}}

Hopefully this makes sense or you've seen something like this before!

Thanks.
 
What have you tried so far? That doesn't seem like it's difficult at all to verify.
 
fzero said:
What have you tried so far? That doesn't seem like it's difficult at all to verify.

Yeah I managed it today by starting with the answer and working backwards...which is probably what I should have been doing in the first place!
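
For anyone reading along, a sketch of the forward direction (assuming V is given by its Taylor series about \vec{x}=0): since

\vec{x} e^{\vec{b}^T \cdot \vec{x}} = \frac{\partial}{\partial \vec{b}} e^{\vec{b}^T \cdot \vec{x}},

every power of \vec{x} in the expansion of e^{-V(\vec{x})} can be traded for derivatives with respect to \vec{b} acting on e^{\vec{b}^T \cdot \vec{x}}, i.e.

e^{-V(\vec{x})} e^{\vec{b}^T \cdot \vec{x}} = e^{-V(\partial/\partial \vec{b})} e^{\vec{b}^T \cdot \vec{x}}.

The derivatives \partial/\partial \vec{b} do not touch the \vec{x} integration, so they can be pulled outside the integral in (*), leaving

Z(\vec{b}) = e^{-V(\partial/\partial \vec{b})} \frac{1}{Z_A} Z_{A, \vec{b}}.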

I do have another question though:

if K(q,q_0;t-t_0)= \left( \frac{m}{2 \pi i (t-t_0)} \right)^{ \frac{1}{2}} e^{i m \frac{(q-q_0)^2}{2(t-t_0)}} with t>t_0

I was asked to show that -\frac{1}{2m} \frac{\partial^2}{\partial x^2} K(x,0;t) = i \frac{\partial}{\partial x} K(x,0;t)
I managed that fine - it was just a simple exercise in differentiation.
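
For what it's worth, a minimal sympy check of that differentiation (taking \hbar = 1 and writing the right-hand side as a time derivative, since, as comes up later in the thread, the \partial/\partial x there should really be \partial/\partial t):

import sympy as sp

x, t, m = sp.symbols('x t m', positive=True)
K = sp.sqrt(m / (2 * sp.pi * sp.I * t)) * sp.exp(sp.I * m * x**2 / (2 * t))

lhs = -sp.diff(K, x, 2) / (2 * m)   # -(1/2m) d^2 K / dx^2
rhs = sp.I * sp.diff(K, t)          # i dK/dt
print(sp.simplify(lhs - rhs))       # prints 0, i.e. K solves the free Schrodinger equation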

Now I have been asked to express the solutions of the Schrodinger equation for a free particle in terms of initial data \psi(x,0) and K(x,0;t).

So can I write \psi(x,t)=K(x,0;t) \psi(x,0), since K describes the amplitude for a free particle to propagate from (x_0,t_0) to (x,t)? I would have thought that using the definition of K that I just mentioned, we should be writing \psi(x,t)=K(x,0;t) \psi(0,0) since our x_0=0, no?

Anyway, regardless of that, I'm not sure what it wants me to do with this? Does it just want me to write

-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} (K(x,0;t) \psi(x,0))=i \hbar \frac{\partial}{\partial t} (K(x,0;t) \psi(x,0))

I assume I'm meant to do this differentiation now? Essentially this gives

-\frac{\hbar^2}{2m} (K' \psi+K \psi')' = i \hbar (\dot{K} \psi + K \dot{\psi}) \Rightarrow -\frac{\hbar^2}{2m} (K'' \psi + 2K' \psi' + K \psi'') = i \hbar (\dot{K} \psi + K \dot{\psi})
But then the K'' term will cancel the \dot{K} term, right? Leaving

-\frac{\hbar^2}{2m} (2K' \psi' + K \psi'') = i \hbar ( K \dot{\psi})

Now should I also get rid of the \dot{\psi}? I guess we are talking about \dot{\psi}(x,0) here so it will not have any time dependence, right? So I think it should go as well leaving

-\frac{\hbar^2}{2m} (2K' \psi' + K \psi'') = 0
\Rightarrow 2K' \psi' + K \psi''=0

How does that look? I'm fairly sure there's an error because we are then asked to check for \psi(x,0)=e^{ikx} but the \psi' term will only have k in it and the \psi'' term will have a k^2 so they won't be able to cancel, will they?

Thanks.
 
Last edited:
  • #10
What's true is that

\psi(x,t)=\int d^3x' K(x,x';t) \psi(x',0),

so I would actually start from that. This satisfies Schrodinger's equation, but it doesn't appear to reduce to the formula you wrote down in any way.
 
  • #11
Yeah, it's actually

\psi(x,t) = \int K(x-y,t)\psi(y,0) dy, which satisfies all the requirements such as the Schroedinger equation and \lim_{t \to 0}\psi(x,t) =\psi(x,0). The question is a bit misleading in that sense.
 
  • #12
fzero said:
What's true is that

\psi(x,t)=\int d^3x' K(x,x';t) \psi(x',0),

doesn't this make more sense

\psi(x,t)=\int d^3x' K(x,x';t) \psi(x',t),

since

K(x,x';t) = \left\langle \psi(x,t)\right| e^{-iHt} \left|\psi(x',0)\right\rangle i.e. the coefficients necessary to express \psi(x,t) in terms of \psi(x',t)

however they do both satisfy

\lim_{t \to 0} \psi(x,t) = \psi(x,0)

and I'm guessing that your one is more attractive since it easily obeys the Schroedinger equation
 
Last edited:
  • #13
sgd37 said:
doesn't this make more sense

\psi(x,t)=\int d^3x' K(x,x';t) \psi(x',t),

Not really. Using the notation here,

\psi(x,t)=\int d^3x' K(x,x';t-t_0) \psi(x',t_0).

You certainly don't propagate a state at time t to another at time t.
 
  • #14
K is not an evolution operator, it's an amplitude. Anyway, \psi(x',0) is a solution to the Schroedinger equation so it doesn't matter.
 
  • #15
fzero said:
Not really. Using the notation here,

\psi(x,t)=\int d^3x' K(x,x';t-t_0) \psi(x',t_0).

You certainly don't propagate a state at time t to another at time t.

Ok. So,

\psi(x,t) = \int d^3x' K(x,x';t) \psi(x',0)

So the Schrodinger equation implies
-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \int d^3 x' K(x,x';t) \psi(x',0) = i \hbar \frac{\partial}{\partial t} \int d^3x' K(x,x';t) \psi(x',0)
- \frac{\hbar^2}{2m} \int d^3x' \left( \frac{\partial^2 K}{\partial x^2} \psi + 2 \frac{\partial K}{\partial x} \frac{\partial \psi}{\partial x} + K \frac{\partial^2 \psi}{\partial x^2} \right) = i \hbar \int d^3 x' \left( \dot{K} \psi + K \dot{\psi} \right)
- \frac{\hbar^2}{2m} \int d^3x' \left( \frac{\partial^2 K}{\partial x^2} \psi + 2 \frac{\partial K}{\partial x} \frac{\partial \psi}{\partial x} + K \frac{\partial^2 \psi}{\partial x^2} \right) = i \hbar \int d^3 x' \dot{K} \psi as \dot{\psi}=0

But then I don't know how to simplify this from here? Is this as simple as it gets? The final part is to check this result for \psi(x,0)=e^{ikx}, so should I just substitute that in and see if the LHS=RHS?

Also, how would we show that \lim_{t \rightarrow 0} K(x,0;t) = \delta(x)

Thanks very much!
 
  • #16
latentcorpse said:
Ok. So,

\psi(x,t) = \int d^3x' K(x,x';t) \psi(x',0)

So the Schrodinger equation implies
-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \int d^3 x' K(x,x';t) \psi(x',0) = i \hbar \frac{\partial}{\partial t} \int d^3x' K(x,x';t) \psi(x',0)
- \frac{\hbar^2}{2m} \int d^3x' \left( \frac{\partial^2 K}{\partial x^2} \psi + 2 \frac{\partial K}{\partial x} \frac{\partial \psi}{\partial x} + K \frac{\partial^2 \psi}{\partial x^2} \right) = i \hbar \int d^3 x' \left( \dot{K} \psi + K \dot{\psi} \right)
- \frac{\hbar^2}{2m} \int d^3x' \left( \frac{\partial^2 K}{\partial x^2} \psi + 2 \frac{\partial K}{\partial x} \frac{\partial \psi}{\partial x} + K \frac{\partial^2 \psi}{\partial x^2} \right) = i \hbar \int d^3 x' \dot{K} \psi as \dot{\psi}=0

Be more careful, \partial \psi(x',0) /\partial x = 0.


But then I don't know how to simplify this from here? Is this as simple as it gets? The final part is to check this result for \psi(x,0)=e^{ikx}, so should I just substitute that in and see if the LHS=RHS?

Schrodinger's equation is satisfied from the result from post 9:

-\frac{1}{2m} \frac{\partial^2}{\partial x^2} K(x,0;t) = i \frac{\partial}{\partial x} K(x,0;t)

The same equation holds for x'\neq 0.

Also, how would we show that \lim_{t \rightarrow 0} K(x,0;t) = \delta(x)

Thanks very much!

You should probably look at a sketch of the exponential expression to see what's happening around x=0.
 
  • #17
fzero said:
Be more careful, \partial \psi(x',0) /\partial x = 0.

So getting rid of those terms leaves
\frac{\hbar^2}{2m} \frac{\partial}{\partial x} \int d^3x' \frac{\partial K(x,x';t)}{\partial x} \psi(x',0) = i \hbar \int d^3 x' \dot{K}(x,x';t) \psi(x',0)
-\frac{\hbar^2}{2m} \int d^3 x' \frac{\partial^2 K(x,x';t)}{\partial x^2} \psi(x',0) = i \hbar \int d^3x' \dot{K}(x,x';t) \psi(x',0)

fzero said:
Schrodinger's equation is satisfied from the result from post 9:

-\frac{1}{2m} \frac{\partial^2}{\partial x^2} K(x,0;t) = i \frac{\partial}{\partial x} K(x,0;t)

The same equation holds for x'\neq 0.

Ok. Well I don't see how this can be applied to what I wrote above so I must have got the above wrong I think.


fzero said:
You should probably look at a sketch of the exponential expression to see what's happening around x=0.

Well K(x,0;t)=\left( \frac{m}{2 \pi i t} \right)^{\frac{1}{2}} e^{\frac{imx^2}{2t}}

So as t \rightarrow 0, the coefficient will blow up to infinity but, more importantly (as it will blow up much faster), the exponential tends to e^\infty at the origin thus giving a delta function. Is this question answered just by arguing about behaviour rather than actually doing any computations?
 
  • #18
latentcorpse said:
So getting rid of those terms leaves
\frac{\hbar^2}{2m} \frac{\partial}{\partial x} \int d^3x' \frac{\partial K(x,x';t)}{\partial x} \psi(x',0) = i \hbar \int d^3 x' \dot{K}(x,x';t) \psi(x',0)
-\frac{\hbar^2}{2m} \int d^3 x' \frac{\partial^2 K(x,x';t)}{\partial x^2} \psi(x',0) = i \hbar \int d^3x' \dot{K}(x,x';t) \psi(x',0)



Ok. Well I don't see how this can be applied to what I wrote above so I must have got the above wrong I think.

Well you had a typo in that formula, the RHS should be a time derivative.

As far as the above formulas, you shouldn't be writing down a RHS and LHS and trying to manipulate both. Start from one side and attempt to show the other, so you'd have

\frac{\hbar^2}{2m} \frac{\partial}{\partial x} \int d^3x' \frac{\partial K(x,x';t)}{\partial x} \psi(x',0) =-\frac{\hbar^2}{2m} \int d^3 x' \frac{\partial^2 K(x,x';t)}{\partial x^2} \psi(x',0).

Now use the earlier result that you derived to relate this to the time derivative and hence show that Schrodinger's equation is satisfied.

Well K(x,0;t)=\left( \frac{m}{2 \pi i t} \right)^{\frac{1}{2}} e^{\frac{imx^2}{2t}}

So as t \rightarrow 0, the coefficient will blow up to infinity but, more importantly (as it will blow up much faster), the exponential tends to e^\infty at the origin thus giving a delta function. Is this question answered just by arguing about behaviour rather than actually doing any computations?

To prove it more formally, you'd have to use a suitable integral expression to verify the delta function behavior. However, this is a reasonably well-known representation of the delta function. There are also other expressions for the propagator that are better suited to verify delta functions in limits (Sakurai is one place to find the discussion).
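
One concrete piece of that argument (a sketch, using the Fresnel integral \int_{-\infty}^{\infty} dx e^{iax^2} = \sqrt{\pi i / a} for a > 0): the kernel has unit area for every t > 0,

\int_{-\infty}^{\infty} dx K(x,0;t) = \left( \frac{m}{2 \pi i t} \right)^{\frac{1}{2}} \left( \frac{2 \pi i t}{m} \right)^{\frac{1}{2}} = 1,

while for x \neq 0 the phase e^{imx^2/2t} oscillates ever more rapidly as t \rightarrow 0. So when K(x,0;t) is integrated against a smooth test function f(x), only the neighbourhood of x = 0 contributes in the limit and the result tends to f(0), which is the delta-function statement. Making this rigorous is essentially a stationary-phase argument, or one can regulate with t \rightarrow t - i\epsilon.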
 
  • #19
fzero said:
Well you had a typo in that formula, the RHS should be a time derivative.

As far as the above formulas, you shouldn't be writing down a RHS and LHS and trying to manipulate both. Start from one side and attempt to show the other, so you'd have

\frac{\hbar^2}{2m} \frac{\partial}{\partial x} \int d^3x' \frac{\partial K(x,x';t)}{\partial x} \psi(x',0) =-\frac{\hbar^2}{2m} \int d^3 x' \frac{\partial^2 K(x,x';t)}{\partial x^2} \psi(x',0).

Now use the earlier result that you derived to relate this to the time derivative and hence show that Schrodinger's equation is satisfied.



To prove it more formally, you'd have to use a suitable integral expression to verify the delta function behavior. However, this is a reasonably well-known representation of the delta function. There are also other expressions for the propagator that are better suited to verify delta functions in limits (Sakurai is one place to find the discussion).

Ok. Think I get it now.

One little thing though.

You said that \psi(x,t) = \int d^3 x' K(x,x';t) \psi(x',0)

That's fair enough, we're integrating over all possible positions x' where the particle could have started from, right?

But, as in this case, if I am asked to express \psi(x,t) in terms of initial data K(x,0;t) and \psi(x,0)

Then surely there is a misprint and the initial data should be \psi(0,0)? And in that case, we would have fixed the initial position so there would be no need to integrate it and so we would have \psi(x,t)=K(x,0;t) \psi(0,0)

And for proving the Schrodinger equation holds, I started with the LHS (with the \hbar^2) but was unable to get rid of one of the factors of \hbar - we expect the RHS to be i \hbar \frac{\partial}{\partial t} \psi(x,t) but I'm out by this factor - any advice?

Thanks again,
 
Last edited:
  • #20
latentcorpse said:
Ok. Think I get it now.

One little thing though.

You said that \psi(x,t) = \int d^3 x' K(x,x';t) \psi(x',0)

That's fair enough, we're integrating over all possible positions x' where the particle could have started from, right?

But, as in this case, if I am asked to express \psi(x,t) in terms of initial data K(x,0;t) and \psi(x,0)

Then surely there is a misprint and the initial data should be \psi(0,0)? And in that case, we would have fixed the initial position so there would be no need to integrate it and so we would have \psi(x,t)=K(x,0;t) \psi(0,0)

I'd assume that it's a misprint, because \psi(x,t)=K(x,0;t) \psi(0,0) makes no sense. You should go back and review the definition of the propagator in terms of quantum states to see this.

And for proving the Schrodinger equation holds, I started with the LHS (with the \hbar^2) but was unable to get rid of one of the factors of \hbar - we expect the RHS to be i \hbar \frac{\partial}{\partial t} \psi(x,t) but I'm out by this factor - any advice?

Thanks again,

You're missing some factors of \hbar in your expression for the propagator. The proper expression should be

K(q,q_0;t-t_0)= \left( \frac{m}{2 \pi i \hbar (t-t_0)} \right)^{ \frac{1}{2}} e^{i m \frac{(q-q_0)^2}{2\hbar(t-t_0)}}
 
  • #21
You should really read the question sheet because it says at the top that \hbar = 1 throughout

and I've already given you the solution

\psi(x,t) = \int K(x-x',0;t)\psi(x',0) dx'
 
Last edited:
  • #22
sgd37 said:
You should really read the question sheet because it says at the top that \hbar = 1 throughout

and I've already given you the solution

\psi(x,t) = \int K(x-x',0;t)\psi(x',0) dx'

For the last bit where we are asked to check this for \psi(x,0)=e^{ikx}

So we are asked here to find \psi(x,t) I guess.

So \psi(x,t) = \int d^3 x' \left( \frac{m}{2 \pi i t} \right)^{\frac{1}{2}} e^{\frac{im(x-x')^2}{2t}} e^{ikx'}
\psi(x,t) = \left( \frac{m}{2 \pi i t} \right)^{\frac{1}{2}} e^{\frac{imx^2}{2t}} \int d^3 x' e^{\frac{im}{2t} ( x'^2 - 2xx' + \frac{2kt}{m}x')}

But I don't know how to do that integral?
 
  • #23
You complete the square.

If you want, the solutions are up on the web:

http://www.damtp.cam.ac.uk/user/me288/teaching/aqft/aqft_solutions_1.pdf
 
Last edited by a moderator:
  • #24
sgd37 said:
You complete the square.

If you want, the solutions are up on the web:

http://www.damtp.cam.ac.uk/user/me288/teaching/aqft/aqft_solutions_1.pdf

In the second last line, how does he get

\int dy e^{\frac{im}{2t} (x'+\frac{kt}{m}-x)^2} = \int dy e^{-\frac{m}{2it}y^2}

what happened to the 2nd and 3rd terms?
 
Last edited by a moderator:
  • #25
he used a change of variable, but he never bothered to change the notation. A clearer way to write what he did would be

\int dy e^{\frac{im}{2t} (y+\frac{kt}{m}-x)^2} = \int dy' e^{-\frac{m}{2it}y'^2}

where y' = y+\frac{kt}{m}-x, from which it is clear that \frac{dy'}{dy}=1
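
A small sympy check of the completing-the-square identity behind that step (with \hbar = 1; the symbol xp below stands for x'):

import sympy as sp

xp, x, t, m, k = sp.symbols('xp x t m k', real=True)

# exponent before completing the square: (im/2t)(x' - x)^2 + i k x'
before = sp.I*m/(2*t) * (xp - x)**2 + sp.I*k*xp
# exponent after: (im/2t)(x' + kt/m - x)^2 + i k x - i k^2 t/(2m)
after = sp.I*m/(2*t) * (xp + k*t/m - x)**2 + sp.I*k*x - sp.I*k**2*t/(2*m)

print(sp.simplify(sp.expand(before - after)))   # prints 0

i.e. completing the square trades the original exponent for a perfect square plus the x'-independent piece e^{ikx - ik^2t/2m}, which comes outside the integral; the remaining square is then handled by the change of variable above, so nothing is dropped.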
 
  • #26
fzero said:
I'd assume that it's a misprint, because \psi(x,t)=K(x,0;t) \psi(0,0) makes no sense. You should go back and review the definition of the propagator in terms of quantum states to see this.



You're missing some factors of \hbar in your expression for the propagator. The proper expression should be

K(q,q_0;t-t_0)= \left( \frac{m}{2 \pi i \hbar (t-t_0)} \right)^{ \frac{1}{2}} e^{i m \frac{(q-q_0)^2}{2\hbar(t-t_0)}}

Hey. So thanks for your replies. I had a lecture today that went pretty fast and I've been going over some of the stuff.

At one point, he defines
Z[J] = \int [d \phi(x)] e^{i S[ \phi ] + i \int d^dx J(x) \phi(x)}
= \int [ d \phi(x) ] e^{i S [ \phi ]} \left( 1 + i \int d^dx J(x) \phi(x) + \frac{i^2}{2!} \int d^dx_1 d^dx_2 J(x_1) J(x_2) \phi(x_1) \phi(x_2) + \dots \right)
This is fair enough.

Then he somehow jumps to

\frac{Z[J]}{Z[0]} = \sum_{n=0}^\infty \frac{(-i)^n}{n!} \int d^dx_1 \dots \int d^d x_n \left( J(x_1) \dots J(x_n) \langle \phi(x_1) \dots \phi(x_n) \rangle \right)

I cannot for the life of me see how to show this!
 
  • #27
It should be pretty simple to show that if you have the definition of \langle \phi(x_1) \dots \phi(x_n) \rangle. There may be a question of whether \pm i appears in the source term, but that's a small issue.
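
To fill in the step (a sketch, modulo the \pm i convention just mentioned): the n-th term of the expansion in post #26 is

\frac{i^n}{n!} \int d^dx_1 \dots d^dx_n J(x_1) \dots J(x_n) \int [ d \phi(x) ] \phi(x_1) \dots \phi(x_n) e^{i S[\phi]},

and dividing by Z[0] = \int [ d \phi(x) ] e^{i S[\phi]} turns the inner path integral into the correlator \langle \phi(x_1) \dots \phi(x_n) \rangle, so

\frac{Z[J]}{Z[0]} = \sum_{n=0}^\infty \frac{i^n}{n!} \int d^dx_1 \dots d^dx_n J(x_1) \dots J(x_n) \langle \phi(x_1) \dots \phi(x_n) \rangle,

which matches the quoted formula up to the sign of i in the source term.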
 
  • #28
I'm sure you can figure it out.
 
  • #29
fzero said:
It should be pretty simple to show that if you have the definition of \langle \phi(x_1) \dots \phi(x_n) \rangle. There may be a question of whether \pm i appears in the source term, but that's a small issue.

Well I have \langle \phi(x_1) \dots \phi(x_n) \rangle = \frac{ \int [ d \phi(x) ] \phi(x_1) \dots \phi(x_n) e^{i S[\phi]}}{ \int [ d \phi(x) ] e^{i S [ \phi]}}

Does the denominator of that correspond to Z[0]? If so, then I think I can see it...
 
  • #30
You have a formula for Z[J] so you can work out Z[0]...
 
  • #31
fzero said:
You have a formula for Z[J] so you can work out Z[0]...
Thanks. I've got it now. However, I also have to show that

Z[J] = \displaystyle\sum_{n=0}^\infty \frac{1}{n!} \int d^dx_1 \dots d^dx_n \left( J(x_1) \dots J(x_n) \frac{ \delta^n Z[J]}{\delta J(x_1) \dots \delta J(x_n)} \right)_{J=0}

This seems to be a Taylor expansion of some sort but I can't seem to derive it from the previous expression for Z[J] - any advice?
 
  • #32
What do you find for \delta Z[J]/\delta J(x_1)?
 
  • #33
fzero said:
What do you find for \delta Z[J]/\delta J(x_1)?

Not completely convinced by what I have here, but \frac{\delta}{\delta J(x_1)} is a differential operator so it will act by the chain rule and we get

\frac{ \delta Z[J]}{\delta J(x_1)} = \int [ d \phi(x) ] e^{i S[\phi] + i \int d^d x J(x) \phi(x)} \frac{\delta J(x)}{\delta J(x_1)}
\frac{ \delta Z[J]}{\delta J(x_1)} = \int [ d \phi(x) ] e^{i S[\phi] + i \int d^d x J(x) \phi(x)} \delta^{(d)}(x-x_1)

Is that right?
 
  • #34
latentcorpse said:
Not completely convinced by what I have here, but \frac{\delta}{\delta J(x_1)} is a differential operator so it will act by the chain rule and we get

\frac{ \delta Z[J]}{\delta J(x_1)} = \int [ d \phi(x) ] e^{i S[\phi] + i \int d^d x J(x) \phi(x)} \frac{\delta J(x)}{\delta J(x_1)}
\frac{ \delta Z[J]}{\delta J(x_1)} = \int [ d \phi(x) ] e^{i S[\phi] + i \int d^d x J(x) \phi(x)} \delta^{(d)}(x-x_1)

Is that right?

You haven't quite used the chain rule, since you didn't include the term coming from the derivative of

e^{i S[\phi] + i \int d^d x J(x) \phi(x)},

That's actually the most important part.
 
  • #35
fzero said:
You haven't quite used the chain rule, since you didn't include the term coming from the derivative of

e^{i S[\phi] + i \int d^d x J(x) \phi(x)},

That's actually the most important part.

Ah! Missed the integral out. So it should have read

\frac{ \delta Z[J]}{\delta J(x_1)} = \int [ d \phi(x) ] e^{i S[\phi] + i \int d^d x J(x) \phi(x)} \times i \int d^dx \phi(x) \delta^{(d)}(x-x_1)=i \int [ d \phi(x) ] \phi(x_1) e^{i S[\phi] + i \int d^d x J(x) \phi(x)}

So extrapolating,

\frac{\delta^n Z[J]}{\delta J(x_1) \dots \delta J(x_n)} = i^n \int [d \phi(x) ] \phi(x_1) \dots \phi(x_n) e^{i S[\phi] + i \int d^d x J(x) \phi(x)}
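
Setting J = 0 in that last expression kills the source term in the exponent, so (as a sketch)

\frac{\delta^n Z[J]}{\delta J(x_1) \dots \delta J(x_n)} \bigg|_{J=0} = i^n \int [ d \phi(x) ] \phi(x_1) \dots \phi(x_n) e^{i S[\phi]} = i^n Z[0] \langle \phi(x_1) \dots \phi(x_n) \rangle,

and substituting this into the Taylor-type formula from post #31 reproduces the earlier expansion of Z[J]/Z[0] in terms of correlators, again up to the \pm i convention in the source term.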
 
  • #36
fzero said:
You haven't quite used the chain rule, since you didn't include the term coming from the derivative of

e^{i S[\phi] + i \int d^d x J(x) \phi(x)},

That's actually the most important part.


So, despite having worked all this out for the more complicated cases where we have path integrals, I am stumped for the finite dimensional question below

if Z[V] = \int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} - V( \vec{x} )}
where V(0)=0 and if V_{i_1 \dots i_n} = \partial_{i_1} \dots \partial_{i_n} V( \vec{x} )|_{ \vec{x}=0} with V_i=V_{ij}=0, use the result
G( \frac{\partial}{\partial b}) F(b)= F( \frac{\partial}{\partial u}) G(u) e^{ub}|_{u=0}
to show that
\frac{Z[V]}{Z[0]}= e^{\frac{1}{2} \frac{\partial}{\partial \vec{x}} \cdot A^{-1} \frac{\partial}{\partial \vec{x}}} e^{-V( \vec{x})}|_{\vec{x}=0}

I can't figure out what to take as F and what to take as G or why?
 
  • #37
latentcorpse said:
So, despite having worked all this out for the more complicated cases where we have path integrals, I am stumped for the finite dimensional question below

if Z[V] = \int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} - V( \vec{x} )}
where V(0)=0 and if V_{i_1 \dots i_n} = \partial_{i_1} \dots \partial_{i_n} V( \vec{x} )|_{ \vec{x}=0} with V_i=V_{ij}=0, use the result
G( \frac{\partial}{\partial b}) F(b)= F( \frac{\partial}{\partial u}) G(u) e^{ub}|_{u=0}
to show that
\frac{Z[V]}{Z[0]}= e^{\frac{1}{2} \frac{\partial}{\partial \vec{x}} \cdot A^{-1} \frac{\partial}{\partial \vec{x}}} e^{-V( \vec{x})}|_{\vec{x}=0}

I can't figure out what to take as F and what to take as G or why?

As a preliminary result, you'll want to show that

\int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} } \left(x_1^{n_1}\cdots x_N^{n_N} \right) \propto \left. \left( \frac{\partial^{n_1}}{\partial j_1^{n_1}} \cdots \frac{\partial^{n_N}}{\partial j_N^{n_N}} \right) e^{\frac{1}{2} \vec{j} \cdot A^{-1} \vec{j} } \right|_{\vec{j}=0}.

You do this by coupling sources \vec{j} to \vec{x} as in the scalar field theory a few posts back. As a result, we can write

\frac{Z[V]}{Z[0]} =\left. e^{V(\partial/\partial \vec{j})} e^{\frac{1}{2} \vec{j} \cdot A^{-1} \vec{j} }\right|_{\vec{j}=0}.

You're meant to use that change of variables result to simplify this expression, but I haven't worked out the details, so I may have missed something subtle needed to make things work cleanly.
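
A one-dimensional sympy sanity check of the preliminary result (N = 1, with A replaced by a single positive number a, so A^{-1} \rightarrow 1/a; this is only an illustration of the pattern, not the full N-dimensional statement):

import sympy as sp

x, j = sp.symbols('x j', real=True)
a = sp.Symbol('a', positive=True)

# source-free normalisation Z[0]
Z0 = sp.integrate(sp.exp(-a*x**2/2), (x, -sp.oo, sp.oo))

# compare the normalised Gaussian moments with derivatives of exp(j^2/(2a)) at j = 0
for n in range(5):
    moment = sp.integrate(x**n * sp.exp(-a*x**2/2), (x, -sp.oo, sp.oo)) / Z0
    generated = sp.diff(sp.exp(j**2/(2*a)), j, n).subs(j, 0)
    print(n, sp.simplify(moment - generated))   # prints 0 for each n

i.e. the normalised moments \langle x^n \rangle are generated by derivatives of e^{j^2/(2a)} evaluated at j = 0, which is the N = 1 version of the displayed proportionality.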
 
Last edited:
  • #38
fzero said:
As a preliminary result, you'll want to show that

\int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} } \left(x_1^{n_1}\cdots x_N^{n_N} \right) \propto \left( \frac{\partial^{n_1}}{\partial j_1^{n_1}} \cdots \frac{\partial^{n_N}}{\partial j_N^{n_N}} \right) e^{\frac{1}{2} \vec{j} \cdot A^{-1} \vec{j} }.

You do this by coupling sources \vec{j} to \vec{x} as in the scalar field theory a few posts back. As a result, we can write

\frac{Z[V]}{Z[0]} = e^{V(\partial/\partial \vec{j})} e^{\frac{1}{2} \vec{j} \cdot A^{-1} \vec{j} }.

You're meant to use that change of variables result to simplify this expression, but I haven't worked out the details, so I may have missed something subtle needed to make things work cleanly.

This is annoying me because I can see what you want me to do, I just don't know how to do it!

So when I couple a source won't I get

Z[J]= \int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} - V(x) + \vec{j} \cdot \vec{x}}
but now I have this Z[J] floating about and I only want to be working with Z[V] and Z[0]
 
  • #39
There's no need to couple a source inside Z[V]. The connection between the formulas I wrote is entirely made by expanding e^{-V(\vec{x})} as a power series and recognizing the terms inside that power series as things that can be computed from Z[V=0,j].
 
  • #40
fzero said:
There's no need to couple a source inside Z[V]. The connection between the formulas I wrote is entirely made by expanding e^{-V(\vec{x})} as a power series and recognizing the terms inside that power series as things that can be computed from Z[V=0,j].

So you mean

Z[V, \vec{j}] = e^{\vec{j} \cdot \vec{x}} \int d^N x e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} - V( \vec{x} )}
That's coupled the source outside of Z[V], right?
 
  • #41
latentcorpse said:
So you mean

Z[V, \vec{j}] = e^{\vec{j} \cdot \vec{x}} \int d^N x e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} - V( \vec{x} )}
That's coupled the source outside of Z[V], right?

No, that doesn't make any sense because the LHS is independent of \vec{x}. What I mean is that

\frac{1}{Z[0]} \int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} } \left(x_1^{n_1}\cdots x_N^{n_N} \right) = \langle x_1^{n_1}\cdots x_N^{n_N} \rangle

and that Z[V] can be expressed as a linear combination of these correlators.
 
  • #42
fzero said:
No, that doesn't make any sense because the LHS is independent of \vec{x}. What I mean is that

\frac{1}{Z[0]} \int d^Nx e^{-\frac{1}{2} \vec{x} \cdot A \vec{x} } \left(x_1^{n_1}\cdots x_N^{n_N} \right) = \langle x_1^{n_1}\cdots x_N^{n_N} \rangle

and that Z[V] can be expressed as a linear combination of these correlators.

but that doesn't have any j's in it?

I might leave this for a few hours and hopefully when I come back to it I won't be going round in circles!
 
  • #43
latentcorpse said:
but that doesn't have any j's in it?

I might leave this for a few hours and hopefully when I come back to it I won't be going round in circles!

You should spend some time trying to make sense of the formulas in post #37. I corrected them to show that we're meant to set \vec{j}=0 after taking derivatives. Hopefully that clears up some confusion.

I've left out a few steps on purpose for you to fill in, since you've seen the necessary manipulations already. It's not really going to improve your understanding if I tell you how to do every single calculation.
 