How to solve the Klein-Gordon complex field?

  • #1
maverick280857
Hi

I am teaching myself QFT, and this is not a homework problem. Right now, I am learning Classical Field Theory, and I am not sure how to proceed with the following problem. I would be grateful to receive inputs and suggestions.

Problem Given the Klein Gordon Lagrangian with a complex field,

[tex]\mathcal{L} = \partial_{\mu}\Phi^{\dagger}\partial^{\mu}\Phi - m^2 \Phi^{\dagger}\Phi[/tex]

find the conserved current and charge.

My problem is twofold:

1. How do I differentiate the first term with respect to [itex]\partial_{\mu}\Phi[/itex] (the term with the conjugate)? I know this is probably a trivial question, but I am new to this notation.

2. What is the significance of a complex field, as opposed to a real field?

Thanks.
 
  • #2
maverick280857 said:
1. How do I differentiate the first term with respect to [itex]\partial_{\mu}\Phi[/itex] (the term with the conjugate)? I know this is probably a trivial question, but I am new to this notation.

treat the other factor (the one with the dagger) as a constant when you differentiate

2. What is the significance of a complex field, as opposed to a real field?

a real field describes particles with no charge (neutral), and a complex field describes particles with two opposite charges.
 
  • #3
1. The function you're taking a partial derivative of is just a polynomial in 10 variables (quadratic overall, but first order in each variable), so you differentiate it just like any other polynomial. There's nothing fancy going on here. It's no different from finding the partial derivatives of the function f defined by [itex]f(u,v,w,x,y,z)=uv-wx-m^2 yz[/itex], where m is a constant. (This is the 1+1-dimensional version of your [itex]\mathcal L[/itex].)

2. The particles that correspond to a real field are their own antiparticles. Not so for a complex field. You can think of [itex]\phi[/itex] and [itex]\phi^\dagger[/itex] as two independent fields. This can be justified by checking that you get the same results as if you write [itex]\phi=\phi_1+i\phi_2[/itex] with [itex]\phi_1[/itex] and [itex]\phi_2[/itex] both real, and treat them as two independent fields.
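If it helps to see point 1 mechanically, here is a minimal sympy sketch of that same toy function (the symbol names and the identification with the field quantities are mine, nothing standard):

[code]
import sympy as sp

# one way to read the variables of the 1+1-dimensional toy Lagrangian:
# u = d_0 phi^dagger, v = d_0 phi, w = d_1 phi^dagger, x = d_1 phi, y = phi^dagger, z = phi
u, v, w, x, y, z, m = sp.symbols('u v w x y z m')
f = u*v - w*x - m**2*y*z

print(sp.diff(f, v))   # u        (analogue of dL/d(d_0 phi) = d_0 phi^dagger)
print(sp.diff(f, x))   # -w       (analogue of dL/d(d_1 phi) = -d_1 phi^dagger)
print(sp.diff(f, z))   # -m**2*y  (analogue of dL/d(phi)     = -m^2 phi^dagger)
[/code]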
 
Last edited:
  • #4
There is a nice section on the complex Klein-Gordon field in the book by Mandl, chapter 3, section 5, if I remember correctly.
 
  • #5
Thanks. What I originally meant to ask was how to compute

[tex]\frac{\partial}{\partial[\partial_{\mu}\Phi]}\partial^{\nu}\Phi^{\dagger}[/tex]
 
  • #6
That's just [tex]\frac{\partial}{\partial A}B[/tex] with A and B expressed in a way that makes the expression look much more complicated than it is.
 
  • #7
Fredrik said:
That's just [tex]\frac{\partial}{\partial A}B[/tex] with A and B expressed in a way that makes the expression look much more complicated than it is.

With B = conjugate of A, right? What does it evaluate to? I'm a bit confused here.
 
  • #8
But A and B are INDEPENDENT here (see Fredrik's post #3), thus

[tex]
\frac{\partial}{\partial A}B = 0
[/tex]
 
  • #9
malawi_glenn said:
But A and B are INDEPENDENT here (see Fredrik's post #3), thus

[tex]
\frac{\partial}{\partial A}B = 0
[/tex]

Thank you malawi_glenn and everyone else. I get this now.
 
  • #10
maverick280857 said:
Thank you malawi_glenn and everyone else. I get this now.

Have you done your exercises concerning the REAL KG field yet? :-)
 
  • #11
If you mean questions about determining the field equations, conserved currents and charge from the real-field Lagrangian, then yes I have. But I did not encounter the notion of the independence of the field and its conjugate anywhere when I was doing them. I just came across the complex field question and it got me thinking -- evidently in the wrong way. I had no idea that the field and its conjugate are independent fields, hence my question.

Once again thank you for your help :-)
 
  • #12
Well, I personally think the REAL KG field is harder to evaluate things with, hehe. Good luck, and just post the questions you have. Also tell us if you need more practice material.
 
  • #13
malawi_glenn said:
Well, I personally think the REAL KG field is harder to evaluate things with, hehe. Good luck, and just post the questions you have. Also tell us if you need more practice material.

Oh okay, I'm just a beginner. I've taken two courses on Quantum Mechanics with no prior exposure to relativistic quantum mechanics, so I'm having to learn things along the way. I am working through Landau CTF and Peskin/Schroeder mainly, and keep reading McMahon. Will definitely keep posting questions here :-)
 
  • #14
Well, you can take a quick course on RQM here, since most QFT texts assume that one is quite familiar with the KG equation, the Dirac equation, gamma matrices, etc.

http://www.phys.uAlberta.ca/~gingrich/phys512/latex2html/node1.html
 
Last edited by a moderator:
  • #15
Even in classical physics, if you have a function f(z), then the derivative with respect to z-conjugate is zero, and if you have a function f(z-conjugate), then the derivative with respect to z is zero (at least this is what I recall).
 
  • #16
RedX said:
Even in classical physics, if you have a function f(z), then the derivative with respect to z-conjugate is zero, and if you have a function f(z-conjugate), then the derivative with respect to z is zero (at least this is what I recall).

Nevertheless, that *is* puzzling and in fact totally incomprehensible when you think of the derivative as the thing given by its definition. I remember having had a hard time with that question too, as somehow you want to apply a chain rule of some kind:

if you have A and B = F(A), and then you have a function G(A,F(A)), and you're supposed to find the derivative wrt A, then it's difficult to understand how you should consider B = F(A) as an independent variable.

The trick is that in fact, your "true" fields are the real and the imaginary part of phi, and phi is just a construct to lump them nicely together: phi = R + i J say, with R and J real "true" fields. You want the field equations in fact for R and J.
Now, in that view, phi-dagger is nothing else but R - i J. And now you can consider phi as ONE linear combination of R and J, and phi-dagger as another, linearly independent one. Transformation of variables gives you then phi as one independent variable, and phi-dagger as another one, and not to be seen (for the moment) as the conjugate of phi, but rather as a different linear combination of R and J.

And then you can show, and it is fun to work it out for yourself, that if you were to do the algebra for the R and J fields, you'd find the same field equations for them as if you had considered phi and phi-dagger as independent.

But indeed, at first sight, this doesn't make any sense.
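For reference, here is that check written out, a sketch in the same R, J notation. With phi = R + i J,

[tex]\mathcal{L} = \partial_{\mu}\phi^{\dagger}\partial^{\mu}\phi - m^2\phi^{\dagger}\phi = \partial_{\mu}R\,\partial^{\mu}R + \partial_{\mu}J\,\partial^{\mu}J - m^2(R^2+J^2)[/tex]

so the Euler-Lagrange equations for R and J separately are

[tex](\partial_{\mu}\partial^{\mu} + m^2)R = 0, \qquad (\partial_{\mu}\partial^{\mu} + m^2)J = 0,[/tex]

and adding i times the second to the first gives

[tex](\partial_{\mu}\partial^{\mu} + m^2)(R + iJ) = (\partial_{\mu}\partial^{\mu} + m^2)\phi = 0,[/tex]

which is exactly the equation you get by varying phi-dagger as if it were a field independent of phi.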
 
  • #17
(I had a similar question once: click)
 
  • #18
vanesch said:
Nevertheless, that *is* puzzling and in fact totally incomprehensible when you think of the derivative as the thing given by its definition. I remember having had a hard time with that question too, as somehow you want to apply a chain rule of some kind:

if you have A and B = F(A), and then you have a function G(A,F(A)), and you're supposed to find the derivative wrt A, then it's difficult to understand how you should consider B = F(A) as an independent variable.

The trick is that in fact, your "true" fields are the real and the imaginary part of phi, and phi is just a construct to lump them nicely together: phi = R + i J say, with R and J real "true" fields. You want the field equations in fact for R and J.
Now, in that view, phi-dagger is nothing else but R - i J. And now you can consider phi as ONE linear combination of R and J, and phi-dagger as another, linearly independent one. Transformation of variables gives you then phi as one independent variable, and phi-dagger as another one, and not to be seen (for the moment) as the conjugate of phi, but rather as a different linear combination of R and J.

And then you can show, and it is fun to work it out for yourself, that if you were to do the algebra for the R and J fields, you'd find the same field equations for them as if you had considered phi and phi-dagger as independent.

But indeed, at first sight, this doesn't make any sense.

Well, the proof is simple I think:

[tex]
f(a,b)=f(\frac{z+z^*}{2},\frac{z-z^*}{2i}) [/tex]

[tex]
\frac{\partial f}{\partial z}=\frac{\partial f}{\partial a}\frac{\partial (z+z^*)}{\partial z}\frac{1}{2}+\frac{\partial f}{\partial b}\frac{\partial (z-z^*)}{\partial z}\frac{1}{2i}
[/tex]

and then setting f=z=a+ib (and noting that [tex]\frac{\partial a}{\partial b}=0[/tex] and vice versa) should give you that:

[tex]
\frac{\partial z^*}{\partial z}=0
[/tex]

So it's not intuitive, but if you've seen the proof before, then it's easy to understand why the conjugate is independent.

But I agree that it is worthwhile breaking it up into two fields. Is the O(2)~U(1) correspondence related to this complex-variables business?
 
  • #19
RedX said:
Well, the proof is simple I think:

[tex]
f(a,b)=f(\frac{z+z^*}{2},\frac{z-z^*}{2i}) [/tex]

[tex]
\frac{\partial f}{\partial z}=\frac{\partial f}{\partial a}\frac{\partial (z+z^*)}{\partial z}\frac{1}{2}+\frac{\partial f}{\partial b}\frac{\partial (z-z^*)}{\partial z}\frac{1}{2i}
[/tex]

and then setting f=z=a+ib (and noting that [tex]\frac{\partial a}{\partial b}=0[/tex] and vice versa) should give you that:

[tex]
\frac{\partial z^*}{\partial z}=0
[/tex]

Let's see:
If we take f = z = a + i b, then [itex]\frac {\partial f}{\partial a} = 1[/itex] and
[tex]\frac{\partial f}{\partial b} = i[/tex]

From this we have:
[tex]
\frac{\partial f}{\partial z}=1\cdot\frac{\partial (z+z^*)}{\partial z}\frac{1}{2}+i\,\frac{\partial (z-z^*)}{\partial z}\frac{1}{2i} = \frac{1}{2}\frac{\partial z}{\partial z} + \frac{1}{2}\frac{\partial z^*}{\partial z} + \frac{1}{2}\frac{\partial z}{\partial z} - \frac{1}{2}\frac{\partial z^*}{\partial z} = \frac{\partial z}{\partial z}
[/tex]

and [itex]\frac{\partial z^*}{\partial z}[/itex] disappears from the equation, no?
 
  • #20
Okay, so just to confirm

for

[tex]\mathcal{L} = (\partial_{\mu}\Phi)^{\dagger}(\partial^{\mu}\Phi)-m^2\Phi^{\dagger}\Phi-V(\Phi^{\dagger}\Phi)[/tex]

[tex]\frac{\partial\mathcal{L}}{\partial[\partial_{\mu}\Phi]} = \frac{\partial}{\partial[\partial_{\mu}\Phi]}\left((\partial_{\nu}\Phi)^{\dagger}(\partial^{\nu}\Phi)\right)[/tex]
[tex]= \partial_{\nu}\Phi^{\dagger}\frac{\partial}{\partial[\partial_{\mu}\Phi]}g^{\nu\rho}\partial_{\rho}\Phi = \partial_{\nu}\Phi^{\dagger}\delta_{\rho}^{\mu}g^{\nu\rho} = \partial_{\nu}\Phi^{\dagger}g^{\nu\mu} = \partial^{\mu}\Phi^{\dagger}[/tex]

Is this correct?
 
  • #21
Okay, a revised proof. Let f(x,y)=u(x,y)+iv(x,y) and z=x+iy.

[tex]
df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy
[/tex]

Using dx=(1/2)(dz+dz*) and dy=(1/2i)(dz-dz*) one gets:


[tex]
df=\frac{1}{2}(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y})dz+
\frac{1}{2}(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y})dz^*=
\frac{\partial f}{\partial z}dz+\frac{\partial f}{\partial z^*}dz^*
[/tex]

so that:

[tex]\frac{\partial f}{\partial z}=\frac{1}{2}(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y})[/tex]

Now set f=z* to get the result.
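Explicitly, with [itex]f = z^* = x - iy[/itex]:

[tex]\frac{\partial z^*}{\partial z}=\frac{1}{2}\left(\frac{\partial (x-iy)}{\partial x}-i\frac{\partial (x-iy)}{\partial y}\right)=\frac{1}{2}\bigl(1-i(-i)\bigr)=\frac{1}{2}(1-1)=0[/tex]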
 
  • #22
maverick280857 said:
Is this correct?
Yes.
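If you ever want to double-check that kind of index bookkeeping, you can treat each derivative component as an independent symbol and let a computer algebra system do the work. A minimal sympy sketch (the variable names are my own):

[code]
import sympy as sp

g = sp.diag(1, -1, -1, -1)                 # Minkowski metric, signature (+,-,-,-)
phi, phid, m = sp.symbols('phi phid m')    # phi and phi^dagger, treated as independent
dphi  = sp.symbols('dphi0:4')              # d_mu phi,        mu = 0..3
dphid = sp.symbols('dphid0:4')             # d_mu phi^dagger, mu = 0..3

# L = g^{mu nu} (d_mu phi^dagger)(d_nu phi) - m^2 phi^dagger phi
L = sum(g[mu, nu]*dphid[mu]*dphi[nu]
        for mu in range(4) for nu in range(4)) - m**2*phid*phi

for mu in range(4):
    lhs = sp.diff(L, dphi[mu])                          # dL/d(d_mu phi)
    rhs = sum(g[mu, nu]*dphid[nu] for nu in range(4))   # d^mu phi^dagger
    assert sp.simplify(lhs - rhs) == 0                  # agrees for every mu
[/code]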
 
  • #23
Thanks everyone.
 
  • #24
RedX said:
Okay, a revised proof. Let f(x,y)=u(x,y)+iv(x,y) and z=x+iy.

[tex]
df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy
[/tex]

Using dx=(1/2)(dz+dz*) and dy=(1/2i)(dz-dz*) one gets:


[tex]
df=\frac{1}{2}(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y})dz+
\frac{1}{2}(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y})dz^*=
\frac{\partial f}{\partial z}dz+\frac{\partial f}{\partial z^*}dz^*
[/tex]

so that:

[tex]\frac{\partial f}{\partial z}=\frac{1}{2}(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y})[/tex]

Now set f=z* to get the result.


Yes, that's about it, but that's because you've written something implicit:

[tex]
df=\frac{1}{2}\left(\left(\frac{\partial f}{\partial x}\right)_{y=\text{const}}-i\left(\frac{\partial f}{\partial y}\right)_{x=\text{const}}\right)dz+
\frac{1}{2}\left(\left(\frac{\partial f}{\partial x}\right)_{y=\text{const}}+i\left(\frac{\partial f}{\partial y}\right)_{x=\text{const}}\right)dz^*=
\left(\frac{\partial f}{\partial z}\right)_{z^*=\text{const}}dz+\left(\frac{\partial f}{\partial z^*}\right)_{z=\text{const}}dz^*
[/tex]

So if you then come to the conclusion that:

[tex]\left(\frac{\partial z^*}{\partial z}\right)_{z^*=\text{const}} = 0[/tex], then this is a pretty trivial conclusion. But it is indeed the basic idea: we call z and z* the two "independent" variables, and when varying them independently, their link as conjugates disappears.
 
  • #25
If you just look at differential forms, you eliminate the need for making "choices". One fact is that the space of complex-valued differential 1-forms on C is two-dimensional, with basis dx and dy. From that, it's easy to see that dz and dz* also form a basis.

The meaning of the partial derivative does depend on a choice of basis, as vanesch rightly points out.

However, when I write f(x,y), I have quite unambiguously declared x and y to be my coordinate chart on K^2. (Where K is whatever field we're supposed to be using)


The problem is that the notation for partial derivatives is awkward. The expression

[tex]\frac{\partial}{\partial A} f(A, g(A))[/tex]

quite unambiguously means

[tex]f_1(A, g(A)) + f_2(A, g(A)) g'(A)[/tex]

(where I have used the subscript notation to say "derivative with respect to the n-th argument")

while

[tex]\frac{\partial}{\partial A} f(A, B)[/tex]

quite unambiguously means

[tex]f_1(A, B)[/tex]

However, in this notation, there isn't a convenient way to say

"Compute the partial derivative of f(A, B) with respect to A, and then evaluate at B = f(A)"

which leads to problems and abuses[1] of notation and whatnot. :frown:


[1]: Or possibly just alternative notation -- I'm not well-versed enough in this field to judge whether people are abusing notation for convenience, or have actually developed an alternative syntax for this sort of thing
 
Last edited:
  • #26
Yeah you're probably right. I just remember proving that the partial of z* with respect to z was zero several years ago, and I've tried to remember how I did it, and my best guess is something like my first 2 attempts here, but maybe I got it wrong back then too. Maybe you have to prove it with the arguments you suggested, splitting it into two and stuff, and there's no other way.
 
  • #27
Hurkyl said:
The expression

[tex]\frac{\partial}{\partial A} f(A, g(A))[/tex]

quite unambiguously means

[tex]f_1(A, g(A)) + f_2(A, g(A)) g'(A)[/tex]

(where I have used the subscript notation to say "derivative with respect to the n-th argument")
That's not the notational convention I'm using. To make sure that I don't confuse myself with the notation, I never write

[tex]\frac{\partial}{\partial(something)}f(whatever)[/tex]

unless I mean a partial derivative of f. Note that f is always the map [itex](A,B)\mapsto f(A,B)[/itex] and never e.g. [itex]A\mapsto f(A,B)[/itex] or [itex]A\mapsto f(A,g(A))[/itex].

What you wrote there, I would write as

[tex]\frac{d}{dA} f(A, g(A))=f_{,1}(A, g(A)) + f_{,2}(A, g(A)) g'(A)[/tex]

When I write d/dA, I always mean the derivative of the function that takes A to whatever's on the right. The ",n" notation for the nth partial derivative is common in the physics literature, and it's convenient because we often deal with expressions like [itex]f_{ij,k}[/itex] (the kth partial derivative of component ij of f).

By the way, if I need the map [itex]A\mapsto f(A,B)[/itex], I would write it as [itex]f(\bullet,B)[/itex]. Unfortunately that notation isn't as standard as I think it should be.
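To make the distinction concrete, a toy example of my own: take [itex]f(A,B)=AB[/itex] and [itex]g(A)=A^2[/itex]. Then

[tex]\frac{d}{dA} f(A, g(A)) = f_{,1}(A,g(A)) + f_{,2}(A,g(A))\,g'(A) = A^2 + A\cdot 2A = 3A^2,[/tex]

while the partial derivative is just [itex]f_{,1}(A,B)=B[/itex], which only becomes [itex]A^2[/itex] after you substitute [itex]B=g(A)[/itex] at the end.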
 
  • #28
vanesch said:
Nevertheless, that *is* puzzling and in fact totally incomprehensible when you think of the derivative as the thing given by its definition. I remember having had a hard time with that question too,

But indeed, at first sight, this doesn't make any sense.

At second sight it doesn't make sense either. Worse: it's mathematically illegal.
It doesn't obey the Cauchy-Riemann requirement for differentiability: the derivative
should be independent of the direction along which you differentiate in the complex plane.

But in this case we have:

[tex]\frac{\partial z^*}{\partial z} ~=~ +1 ~~~~\mbox{Along the real axis.}[/tex]

[tex]\frac{\partial z^*}{\partial z} ~=~ -1 ~~~~\mbox{Along the imaginary axis.}[/tex]
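
(Spelling that out with difference quotients, [itex]\varepsilon[/itex] real:)

[tex]\lim_{\varepsilon\to 0}\frac{(z+\varepsilon)^* - z^*}{\varepsilon} = \frac{\varepsilon}{\varepsilon} = +1,
\qquad
\lim_{\varepsilon\to 0}\frac{(z+i\varepsilon)^* - z^*}{i\varepsilon} = \frac{-i\varepsilon}{i\varepsilon} = -1[/tex]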


Regards, Hans
 
Last edited:
  • #29
Hans de Vries said:
It doesn't obey the Cauchy-Riemann requirement for differentiability.
Sure, z* is not complex differentiable. Therefore, it's a good thing we're not talking about the complex derivative. :tongue:
 
  • #30
RedX said:
Okay, a revised proof. Let f(x,y)=u(x,y)+iv(x,y) and z=x+iy.

[tex]
df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy
[/tex]

Using dx=(1/2)(dz+dz*) and dy=(1/2i)(dz-dz*) one gets:

[tex]
df=\frac{1}{2}(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y})dz+
\frac{1}{2}(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y})dz^*=
\frac{\partial f}{\partial z}dz+\frac{\partial f}{\partial z^*}dz^*
[/tex]

so that:

[tex]\frac{\partial f}{\partial z}=\frac{1}{2}(\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y})[/tex]

Now set f=z* to get the result.

If phi and phi-star really are independent variables, we should get the Hamiltonian density as:

[tex]\mathcal{H}=\pi\partial_0\phi+\pi^*\partial_0\phi^*-\mathcal{L}[/tex]

[STRIKE]while the true answer is:[/STRIKE]

[STRIKE][tex]\mathcal{H}=\pi\partial_0\phi-\mathcal{L}[/tex][/STRIKE] (I made a mistake here; the first expression is right. Thanks to Ben.)

So I think the method of treating phi and phi-star as independent variables is just a hand-waving technique we use to simplify the calculation. It is justified by the fact that we get the same result by expressing phi in terms of two real fields and treating those as the independent variables. There may be nothing deeper here.
 
Last edited:
  • #31
The point to remember is that a complex field actually has two degrees of freedom: a real part and an imaginary part. Alternatively, we can use a more convenient basis by using linear combinations:

[tex]z = x + iy[/tex]
[tex]\bar z = x - iy[/tex]

and the inverse transformation

[tex]x = \frac12 z + \frac12 \bar z[/tex]
[tex]y = -i \frac12 z + i \frac12 \bar z[/tex]

and so, instead of using x and y, we can just as easily use z and z* as independent variables. This is why we treat phi and its conjugate as independent fields.
 
  • #32
Ben Niehoff said:
The point to remember is that a complex field actually has two degrees of freedom: a real part and an imaginary part. Alternatively, we can use a more convenient basis by using linear combinations:

[tex]z = x + iy[/tex]
[tex]\bar z = x - iy[/tex]

and the inverse transformation

[tex]x = \frac12 z + \frac12 \bar z[/tex]
[tex]y = -i \frac12 z + i \frac12 \bar z[/tex]

and so, instead of using x and y, we can just as easily use z and z* as independent variables. This is why we treat phi and its conjugate as independent fields.

but how do you define the derivative with respect to z*?
 
  • #33
Like I said, you just treat z and z* as independent variables. You can call them z and w if it makes you feel better. Then just apply the formula

w = z*

as a constraint equation when you're done.

Or, here's yet another way you can look at it, using the definitions above:

[tex]\frac{\partial}{\partial z} = \frac{\partial x}{\partial z} \frac{\partial}{\partial x} + \frac{\partial y}{\partial z} \frac{\partial}{\partial y} = (\frac12) \frac{\partial}{\partial x} + (-i \frac12) \frac{\partial}{\partial y}[/tex]

and therefore

[tex]\frac{\partial \bar z}{\partial z} = \frac{\partial}{\partial z} (x - iy) = \frac12 \frac{\partial}{\partial x} (x - iy) - i \frac12 \frac{\partial}{\partial y} (x - iy) = \frac12 - i \frac12 (-i) = \frac12 (1 + i^2) = 0[/tex]
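
If you like, here is the same computation done symbolically, a small sympy sketch (the helper name ddz is my own, not a library function):

[code]
import sympy as sp

x, y = sp.symbols('x y', real=True)
z    = x + sp.I*y
zbar = x - sp.I*y

# Wirtinger operator d/dz = (1/2)(d/dx - i d/dy), as above
ddz = lambda f: sp.Rational(1, 2)*(sp.diff(f, x) - sp.I*sp.diff(f, y))

print(sp.simplify(ddz(zbar)))   # 0  -> d(zbar)/dz = 0
print(sp.simplify(ddz(z)))      # 1  -> dz/dz = 1
[/code]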
 
Last edited:
  • #34
Ben Niehoff said:
Like I said, you just treat z and z* as independent variables. You can call them z and w if it makes you feel better. Then just apply the formula

w = z*

as a constraint equation when you're done.

Or, here's yet another way you can look at it, using the definition z = x + iy:

[tex]\frac{\partial}{\partial z} = \frac{\partial x}{\partial z} \frac{\partial}{\partial x} + \frac{\partial y}{\partial z} \frac{\partial}{\partial y} = (1) \frac{\partial}{\partial x} + (-i) \frac{\partial}{\partial y}[/tex]

and therefore

[tex]\frac{\partial \bar z}{\partial z} = \frac{\partial}{\partial z} (x - iy) = \frac{\partial}{\partial x} (x - iy) - i \frac{\partial}{\partial y} (x - iy) = 1 - i (-i) = 1 + i^2 = 0[/tex]
Okay. One can define it formally, but it's not a very decent derivative, I think. May I ask what your opinion is on the point I raised in post #30?
 
Last edited:
  • #35
sadness said:
Okay. One can define it formally, but it's not a very decent derivative, I think. May I ask what your opinion is on the point I raised in post #30?

In what sense is it not "decent"? Note: I forgot some factors of 1/2 and have gone back and fixed them.

The first expression you have for the Hamiltonian is correct (with both [itex]\pi \partial_t \phi[/itex] and [itex]\pi^* \partial_t \phi^*[/itex]). Try working the problem in detail to see why.
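
For the record, a sketch of that calculation with the free Lagrangian from post #1 (signature (+,-,-,-)):

[tex]\pi = \frac{\partial\mathcal{L}}{\partial(\partial_0\phi)} = \partial_0\phi^*,\qquad \pi^* = \frac{\partial\mathcal{L}}{\partial(\partial_0\phi^*)} = \partial_0\phi[/tex]

[tex]\mathcal{H} = \pi\,\partial_0\phi + \pi^*\,\partial_0\phi^* - \mathcal{L} = \partial_0\phi^*\,\partial_0\phi + \nabla\phi^*\cdot\nabla\phi + m^2\phi^*\phi[/tex]

which is the usual positive energy density. If you dropped the [itex]\pi^*\partial_0\phi^*[/itex] term, the time derivatives would cancel out of [itex]\mathcal{H}[/itex] entirely, which can't be right.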
 
