Differentiating the complex scalar field

  • #1

Main Question or Discussion Point

Basic question on scalar field theory that is getting on my nerves. Say we have the Lagrangian density of the free scalar (non-Hermitian, i.e. "complex") field

[tex]L=-\frac{1}{2}\left(\partial_{\mu} \phi\, \partial^{\mu} \phi^* + m^2 \phi \phi^*\right)[/tex]

Thus the equations of motion are

[tex](\partial_{\mu} \partial^{\mu} - m^2 ) \phi=0 [/tex] the Klein-Gordon equation, plus the complex conjugate equation. Fine. Now I have been taught to do this calculation by thinking of the scalar field as really a complex function, i.e.
[tex] \phi=\phi_1 + i \phi_2[/tex] with [tex]\phi_1, \phi_2[/tex] real.

This gives the right results, e.g. [tex]\frac{\partial}{\partial \phi^*} \phi \phi^*=2 \phi[/tex], but in the same way I also get

[tex]\frac{\partial}{\partial \phi} {\phi} =2 [/tex]

which is quite crazy. So how should one actually do the differentiation in Lagrange's equations? The functional derivative doesn't really help me here; it is the product of phi and its complex conjugate that is giving the problem.
 
Last edited:

Answers and Replies

  • #2
The derivatives aren't with respect to [tex]\phi[/tex], they're with respect to the components of [tex]x[/tex]. The notation [tex]\partial_\mu\phi(x)[/tex] is short for [tex]({\frac{\partial}{\partial x^0}\phi(x), \frac{\partial}{\partial x^1}\phi(x), \frac{\partial}{\partial x^2}\phi(x), \frac{\partial}{\partial x^3}\phi(x)})[/tex]. Each of those partial derivatives should work out correctly with respect to either [tex]\phi[/tex] or [tex]\phi^*[/tex]. Is that what you meant, or did I misunderstand the question?
 
Last edited:
  • #3
Avodyne
  • #4
I will explain my question a bit better. The Euler-Lagrange equations are

[tex]\frac{\delta L}{\delta \phi} - \partial_{\mu} \frac{\delta L}{\delta (\partial_{\mu} \phi)}=0[/tex]

where L is the Lagrangian density from the first post, so that the action is the functional

[tex]S=\int d^D x\, L[/tex]

so the [tex]m^2\phi[/tex] term in the equations of motion comes from

[tex]\frac{\delta( m^2 \phi \phi^{*})}{\delta \phi^{*}}=2 m^2 \phi[/tex]

Now I do not know why this last result holds, I only know it does. (There are no components in this part of the calculation.) The way I know to calculate this is to think of phi as
[tex] \phi = \phi_{1} + i \phi_{2}[/tex] and then, in a sloppy way, apply the chain rule, invert derivatives (i.e. assume [tex]\Big(\frac {\partial \phi_1 }{\partial \phi}\Big)^{-1}= \frac {\partial \phi }{\partial \phi_1} [/tex]), and come up with (if you write it down it is simple and sloppy)

[tex] \frac{\partial}{\partial \phi^{*}}=\frac{\partial}{\partial \phi_1}+i\frac{\partial}{\partial \phi_2}[/tex]

thus for [tex]\phi \phi^* =\phi_1 ^2 +\phi_2 ^2 [/tex] it works fine, and also for the part with the partial derivatives. But if we assume that this recipe works, we also get in the same sense

[tex]
\frac{\partial}{\partial \phi} {\phi} =2
[/tex]

which kind of sucks.
 
  • #5
While I was writing the last reply Avodyne replied. I am checking the link now.
 
  • #6
Physicists enjoy getting correct results by nonsensical calculations. I would recommend not wasting time trying to understand the physicists' explanations. Compute the partial derivatives

[tex]
\frac{\partial}{\partial(\textrm{Re}(\phi))},\quad \frac{\partial}{\partial(\textrm{Im}(\phi))}
[/tex]

and everything will remain clear.

Physicists often assume that [itex]z=(x,y)[/itex] and [itex]z^*=(x,-y)[/itex] are independent. Now, what does it mean that you "assume them to be independent", when they clearly are not independent? It's like genuine, notorious Orwellian doublethink: CONSTRAINED VARIABLES ARE INDEPENDENT VARIABLES.

----

btw, I have a little story about this topic. A couple of years ago I wrote an answer to an exercise, and my answer was like this:

[tex]
(x,y) = (0,0)
[/tex]

However, I didn't get full points, because I should have written

[tex]
(x,y) = (0,0),\quad\textrm{and}\quad (x,-y) = (0,0)
[/tex]

I tried to explain to the course assistant that the second equation was redundant, but he explained yes it is redundant but it is not redundant (or something like that). The catch was that these were "physical dynamical variables".
 
  • #7
...

Ok, that's clearer now. I think Avodyne probably has the answer to your immediate question. You might also be interested in looking at http://www.physics.upenn.edu/~chb/phys253a/coleman/06-1009.pdf [Broken], specifically the section titled "Internal Symmetries". These are from Sidney Coleman's lectures at Harvard. He goes in the opposite direction from what you're trying to do--he begins by treating [tex]\phi_1[/tex] and [tex]\phi_2[/tex] as two independent real fields, and uses symmetry arguments to show how they are isomorphic to one complex field. He then uses that to show that even if you start with a complex field, you can treat [tex]\phi[/tex] and [tex]\phi^*[/tex] as if they were separate fields when minimizing the Lagrangian, and things still come out ok.
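For reference, with the Lagrangian from post #1 the split into real fields is completely explicit (a short worked rewrite, writing [tex]\phi=\phi_1+i\phi_2[/tex] with [tex]\phi_1,\phi_2[/tex] real):

[tex]\partial_\mu\phi\,\partial^\mu\phi^* = \partial_\mu\phi_1\,\partial^\mu\phi_1+\partial_\mu\phi_2\,\partial^\mu\phi_2,\qquad \phi\phi^*=\phi_1^2+\phi_2^2,[/tex]

so

[tex]L=-\frac{1}{2}\sum_{a=1,2}\left(\partial_\mu\phi_a\,\partial^\mu\phi_a+m^2\phi_a^2\right),[/tex]

i.e. two decoupled real free fields of the same mass, which is what the symmetry argument exploits.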
 
Last edited by a moderator:
  • #8
Fredrik
[tex]\frac{\partial}{\partial\phi} f(\phi_1,\phi_2) =\frac{\partial}{\partial\phi} f\Big(\frac{\phi+\phi^*}{2},\frac{\phi-\phi^*}{2i}\Big)=\frac{\partial f}{\partial\phi_1}\frac{1}{2}+\frac{\partial f}{\partial\phi_2}\frac{1}{2i}[/tex]

So

[tex]\frac{\partial}{\partial\phi}\phi=\frac{1}{2}\Big(\frac{\partial}{\partial\phi_1}-i\frac{\partial}{\partial\phi_2}\Big)(\phi_1+i\phi_2)=\frac{1+0+0+1}{2}=1[/tex]
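A quick symbolic cross-check of this bookkeeping in terms of the real components (a sketch in Python with sympy; the helper names are mine, d_dphi is the derivative used just above, and d_dphistar is its analogous companion):

[code]
# Wirtinger-style derivatives written out via the real components phi1, phi2.
import sympy as sp

phi1, phi2 = sp.symbols('phi1 phi2', real=True)
phi = phi1 + sp.I*phi2       # phi   = phi1 + i*phi2
phis = phi1 - sp.I*phi2      # phi^* = phi1 - i*phi2

def d_dphi(f):
    """d/dphi = (d/dphi1 - i d/dphi2)/2"""
    return sp.simplify((sp.diff(f, phi1) - sp.I*sp.diff(f, phi2))/2)

def d_dphistar(f):
    """d/dphi^* = (d/dphi1 + i d/dphi2)/2"""
    return sp.simplify((sp.diff(f, phi1) + sp.I*sp.diff(f, phi2))/2)

print(d_dphi(phi))           # 1  (not 2)
print(d_dphistar(phi*phis))  # phi1 + I*phi2, i.e. phi
[/code]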
 
  • #9
So, cheers to Avodyne ([edit] and all the rest that answered while I was writing this)! You saved my day; that was exactly what I was doing wrong, and it is quite simple too. So in writing [tex]\phi=\phi_1+i \phi_2 [/tex] and applying the chain rule I was taking, say,
[tex] \frac{\partial \phi }{\partial \phi_2}=i= \Big(\frac{\partial \phi_2 }{\partial \phi}\Big)^{-1}[/tex]
(the last, sloppy, equality is what I was doing wrong) and coming up with
[tex]\frac{\partial \phi_2 }{\partial \phi}=-i[/tex]

while noting that [tex]\phi_2=(\phi - \phi^*)/2i[/tex] obviously

[tex]\frac{\partial \phi_2 }{\partial \phi}=-i/2[/tex]. (In case anyone has the same question, I think it is explained.)

So cool, I know where I was wrong. How about the factor of 2, i.e. [tex]\frac{\partial \phi_2 }{\partial \phi}=\frac{1}{2}\Big(\frac{\partial \phi }{\partial \phi_2}\Big)^{-1}[/tex]? I understand now that this is the case for any complex variable [tex]\phi=\phi_1 + i \phi_2[/tex]. Just asking: is there any intuitive reason for this to happen?

[edit] Fredrik had it right too, and I am checking the pdf from jostpuur. Cheers everyone, I can't believe how many people replied while I was writing my reply; actually now I have 2 or 3 more ways to think about this.
 
Last edited:
  • #10
Fredrik
How about the factor of 2, i.e. [tex]\frac{\partial \phi_2 }{\partial \phi}=\frac{1}{2}\Big(\frac{\partial \phi }{\partial \phi_2}\Big)^{-1}[/tex]? I understand now that this is the case for any complex variable [tex]\phi=\phi_1 + i \phi_2[/tex]. Just asking: is there any intuitive reason for this to happen?
The 2 comes from the fact that there's a factor of 1/2 in

[tex]\phi_2=\frac{\phi-\phi^*}{2i}[/tex]

but not in

[tex]\phi=\phi_1+i\phi_2[/tex].

What your intuition needs is probably just a reminder that you're working with partial derivatives. You expected a factor of 1 instead of 2 because of your experience with ordinary derivatives.
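Spelled out with the inverse relations (a short worked check, nothing beyond the formulas already quoted above):

[tex]\frac{\partial \phi_2}{\partial \phi}=\frac{\partial}{\partial \phi}\,\frac{\phi-\phi^*}{2i}=\frac{1}{2i}=-\frac{i}{2},\qquad \frac{\partial \phi}{\partial \phi_2}=\frac{\partial}{\partial \phi_2}(\phi_1+i\phi_2)=i,[/tex]

so [tex]\frac{\partial \phi_2}{\partial \phi}=\frac{1}{2}\Big(\frac{\partial \phi}{\partial \phi_2}\Big)^{-1}[/tex]: the extra factor is exactly the 1/2 in the inverse relation.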
 
  • #11
strangerep
Physicists often assume that [itex]z=(x,y)[/itex] and [itex]z^*=(x,-y)[/itex] are independent. Now, what does it mean that you "assume them to be independent", when they clearly are not independent? [...]
If x and y are independent variables, then so are [itex]z[/itex] and [itex]z^*[/itex].
This is straight calculus, not sloppy assumptions.

Proof:

"x and y are independent variables" means (by definition) that they're not functions
of each other, i.e.,

[tex]
\frac{ \partial x }{\partial y} ~=~ 0 ~=~ \frac{ \partial y }{\partial x} ~~.
[/tex]

So if

[tex]
z ~=~ x + iy ~~;~~~~~~ z^* ~=~ x - iy ~,
[/tex]

then
[tex]
x ~=~ \frac{z + z^*}{2} ~~;~~~~~~ y ~=~ \frac{z - z^*}{2i} ~.
[/tex]

Hence (using the chain rule),

[tex]
\frac{ \partial z }{\partial z^*}
~=~ \frac{ \partial z }{\partial x} \frac{ \partial x }{\partial z^*}
~+~ \frac{ \partial z }{\partial y} \frac{ \partial y }{\partial z^*}
~=~ \frac{1}{2} ~+~ i\,\frac{-1}{2i} ~=~ 0 ~,
[/tex]

which means [itex]z[/itex] and [itex]z^*[/itex] are indeed independent variables.
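A literal symbolic version of this computation (a sketch in Python with sympy; treat z and zbar as independent symbols, express x and y through them, and differentiate):

[code]
# z written back through x = (z+zbar)/2 and y = (z-zbar)/(2i),
# then differentiated with respect to zbar.
import sympy as sp

z, zbar = sp.symbols('z zbar')
x = (z + zbar)/2
y = (z - zbar)/(2*sp.I)

print(sp.simplify(sp.diff(x + sp.I*y, zbar)))  # 0
[/code]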
 
  • #12
Fredrik
If x and y are independent variables, then so are [itex]z[/itex] and [itex]z^*[/itex].
This is straight calculus, not sloppy assumptions.
I'm not convinced by that argument. I don't see anything wrong with your calculations, but I also don't see how they answer jostpuur's concern. Clearly there's something very strange about saying that [itex]\phi^*(x)[/itex] is the complex conjugate of [itex]\phi(x)[/itex] for all x and then saying that [itex]\phi[/itex] and [itex]\phi^*[/itex] are two independent functions to be determined by a condition we impose on the action. Hmm...I suppose that if we can show that the map [itex](f,g)\mapsto S[f,g][/itex] is minimized only by pairs (f,g) such that g(x)=f(x)* for all x, then there's no problem, because then we have derived the relationship between [itex]\phi[/itex] and [itex]\phi^*[/itex] rather than assumed it. (I wrote that sentence after all the stuff below, so I haven't had time to think about whether this is the case).

I decided to take a closer look at your calculations, to see if I can find out how, or if, they're relevant. It seems that all I accomplished was to show that if you make sure that you know what functions are involved in your calculation, and at what points they're evaluated, there's no need to do the calculation at all. I had already typed most of this when I understood that it doesn't add all that much to the discussion, so I thought about throwing it away, but since I think it adds something, I figured I might as well post it:

We're interested in the derivative [itex]\partial z/\partial z^*[/itex]. The first thing we need to do is to think about what this expression means. To find a derivative, we need to know what function we're taking the derivative of. We also need to know at what point in the derivative's domain the derivative is to be evaluated.

We seem to be talking about the projection function [itex](\alpha,\beta)\mapsto\alpha[/itex], and points in the domain of the form (z,z*) (i.e. points such that the value of the second variable is the complex conjugate of the value of the first). So I interpret [itex]\partial z/\partial z^*[/itex] to mean this:

[tex]\frac{\partial z}{\partial z^*}=D_2\big|_{(z,z^*)}\Big((\alpha,\beta)\mapsto \alpha\Big)[/tex]

This is of course trivially =0. So there's nothing to prove, and yet it looks like you have proved something. Now we just have to figure out what you proved. :smile:

We make the following definitions:

[tex]u(x,y)=x+iy[/tex]
[tex]v(x,y)=x-iy[/tex]

[tex]h(\alpha,\beta)=\frac{\alpha+\beta}{2}[/tex]
[tex]k(\alpha,\beta)=\frac{\alpha-\beta}{2i}[/tex]

These can all be thought of as functions from ℂ2 into ℂ, but we will of course be especially interested in the restrictions of u and v to ℝ2, and h and k evaluated at points of the form (z,z*).

Any complex number [itex]\alpha[/itex] can be expressed as [itex]\alpha=u(x,y)[/itex] for some x,y in ℝ. This equality and the definitions above imply that

[tex]\alpha=u(h(\alpha,\alpha^*),k(\alpha,\alpha^*))[/tex]

So

[tex]\frac{\partial z}{\partial z^*}=D_2\big|_{(z,z^*)}\Big((\alpha,\beta)\mapsto \alpha\Big) =D_2\big|_{(z,z^*)}\Big((\alpha,\beta)\mapsto u(h(\alpha,\beta),k(\alpha,\beta))\Big)[/tex]

We're going to use the chain rule now, and the notation becomes less of a mess if we define

[tex]f(\alpha,\beta)=u(h(\alpha,\beta),k(\alpha,\beta))[/tex]


[tex]\frac{\partial z}{\partial z^*}=D_2f(z,z^*)=D_1u\big(h(z,z^*),k(z,z^*)\big)\, D_2h(z,z^*)+D_2u\big(h(z,z^*),k(z,z^*)\big)\,D_2k(z,z^*)[/tex]

[tex]=1\frac{1}{2}+i\frac{-1}{2i}=0[/tex]

This seems to be the same calculation you did, except that I kept track of what functions are involved, and at what points they're evaluated. To be able to do that, I had to start with an expression that's obviously equal to 0, so the chain rule doesn't tell us anything new.
 
Last edited:
  • #13
strangerep
Fredrik,

I'm not sure what more I can usefully say, except that I think you're making
it much more complicated than it needs to be. IMHO, it really is as simple as
saying that if we have functions of two independent variables then it's
possible to make a change of those two variables into two other (independent)
variables.

It's surprising how independent variables x and y generate no confusion,
but z and z* as independent variables does. :-)

BTW, it sometimes helps to remember that the Cauchy-Riemann equations
that define what a complex-analytic function is can also be expressed as

[tex]
\frac{\partial f(z)}{\partial z^*} ~=~ 0 ~~.
[/tex]

Or maybe not. :-)

Cheers.
 
  • #14
Physicists often assume that [itex]z=(x,y)[/itex] and [itex]z^*=(x,-y)[/itex] are independent. Now, what does it mean that you "assume them to be independent", when they clearly are not independent?
If x and y are independent variables, then so are [itex]z[/itex] and [itex]z^*[/itex].
You are wrong.
 
  • #15
strangerep
You are wrong.
Really? Then so are heaps of calculus textbooks.

Your bald assertion is of no value as it stands.
I gave a proof, but you did not.
 
  • #16
Fredrik
It's surprising how independent variables x and y generate no confusion,
but z and z* as independent variables does. :-)
"x and y are independent variables" means (by definition) that they're not functions
of each other, i.e.,

[tex]
\frac{ \partial x }{\partial y} ~=~ 0 ~=~ \frac{ \partial y }{\partial x} ~~.
[/tex]
What that phrase means to me is that we're dealing with a function f:X×Y→Z (often with X=Y=ℝ) and use the notation (x,y) for points in its domain. This is why "independent variables x and y" cause no confusion and is entirely trivial.

"z and z* are independent variables" should mean that we use the notation (z,z*) for points in the domain of the function we're dealing with. This is of course just as trivial, but if * denotes complex conjugation, the domain of the function would have to be (a subset of) the subset of ℂ2 that consists of pairs of the form (z,z*), and now we have a problem. Suppose that g:D→ℂ is such a function (where D is the set of pairs (z,z*)). What does the expression

[tex]\frac{\partial g(z,z^*)}{\partial z}[/tex]

mean? This should be the partial derivative of g with respect to the first variable, evaluated at (z,z*), but

[tex]\lim_{h\rightarrow 0}\frac{g(z+h,z^*)-g(z,z^*)}{h}[/tex]

is undefined! So if z and z* are "independent variables" (in the sense I described) and complex conjugates of each other, the partial derivatives of the function are undefined.


I gave a proof
I really don't think you have proved anything. I mean, what function are you taking a partial derivative of when you write [itex]\partial z/\partial z^*[/itex]? "z"? z isn't a function, it's a point in the range of a function. I don't see what function you could have meant other than "the function that takes (z,z*) to z", and in that case, you must have assumed that z and z* are independent in the sense I described above. So my argument above applies here, and that means that if you don't postpone setting z* equal to the complex conjugate of z until after you've taken the partial derivative, the partial derivative is ill-defined. And if you do postpone it, the function you're taking a partial derivative of is just Proj1:ℂ2→ℂ defined by Proj1(z,w)=z, and your proof just verifies the trivial fact that D2Proj1(z,z*)=0.


In the case of the complex scalar field, we have essentially the same problem that I described above. If S(f,g) (where S is the action) is only defined on pairs (f,g) such that g(x) is the complex conjugate of f(x) for all x, then the derivative that we're setting to 0 at the start of the derivation of the Euler-Lagrange equation is ill-defined. I see no way out of this other than to wait until after we've minimized the action to set one of the fields equal to the complex conjugate of the other. The justification for this has to be that S is minimized by any pair of scalar fields that both satisfy the Klein-Gordon equation, and that if a field satisfies the Klein-Gordon equation, then so does its complex conjugate.
 
  • #17
A. Neumaier
What does the expression

[tex]\frac{\partial g(z,z^*)}{\partial z}[/tex]

mean? This should be the partial derivative of g with respect to the first variable, evaluated at (z,z*), but

[tex]\lim_{h\rightarrow 0}\frac{g(z+h,z^*)-g(z,z^*)}{h}[/tex]

is undefined!

It is quite well-defined, and called the Wirtinger derivative; see, e.g., http://en.wikipedia.org/wiki/Wirtinger_derivatives. The basics are as follows:

Consider a continuous function g from D ⊂ ℂ to ℂ, mapping z = x + iy in ℂ (with x, y in ℝ) to g(z) in ℂ. If g is differentiable with respect to x and y, then for complex w = u + iv,
[tex]\frac{d}{dh} g(z+hw)|_{h=0} = \frac{dg}{dx} u + \frac{dg}{dy}v
=\frac{dg}{dx} \frac{w+w^*}{2}+ \frac{dg}{dy}\frac{w-w^*}{2i}.[/tex]
With the definitions
[tex]\frac{dg}{dz}:=\frac{1}{2}(\frac{dg}{dx}-i\frac{dg}{dy}),[/tex]
[tex]\frac{dg}{dz^*}:=\frac{1}{2}(\frac{dg}{dx}+i\frac{dg}{dy}),[/tex]
and noting that 1/i=-i, we therefore find
[tex]\frac{d}{dh} g(z+hw)|_{h=0} =\frac{dg}{dz} w +\frac{dg}{dz^*} w^*.[/tex]
This relation (with h understood to be real) can also be taken as the definition of
dg/dz and dg/dz^*, since the latter are uniquely determined by it.

Since dz/dz^* = dz^*/dz = 0, the rules of the Wirtinger calculus are the same as those of bivariate calculus, with z and z^* replacing the real variables x and y.

Note that g is analytic iff dg/dz^*=0, and then dg/dz has the standard meaning from complex calculus.
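As a concrete check of these definitions (a worked example under the same conventions, not part of the original post): for g(z) = z z^* = x^2 + y^2,

[tex]\frac{dg}{dz}=\frac{1}{2}\Big(\frac{dg}{dx}-i\frac{dg}{dy}\Big)=\frac{1}{2}(2x-2iy)=z^*,\qquad \frac{dg}{dz^*}=\frac{1}{2}(2x+2iy)=z,[/tex]

so g is nowhere analytic (dg/dz^* is not 0), while for g(z) = z^2 one finds dg/dz^* = 0 and dg/dz = 2z, as expected from ordinary complex calculus.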

Edit: I corrected some inaccuracies in the derivation, and added the link to Wikipedia and another comment.
 
Last edited:
  • #18
Fredrik
The expression I said is undefined contains a function that's being evaluated at a point that's not in its domain, so it's clearly undefined.

The Wikipedia page you linked to takes

[tex]\frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right),\qquad \frac{\partial}{\partial z^*}= \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right)[/tex]

as definitions. These operators are clearly meant to act on functions defined on subsets of ℝ2. I understand that we can use them to assign a meaning to the expression

[tex]\frac{\partial g(z,z^*)}{\partial z^*}[/tex]

by defining it to mean

[tex]\frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right)\Big((x,y)\mapsto g(x+iy,x-iy)\Big)[/tex]

instead of the usual limit, which is ill-defined here, but I don't really see why we would want to. More importantly, I don't see how it sheds any light on the main issue here, which is the question of whether it makes sense to say that a scalar field and its complex conjugate are independent variables in the action.
 
  • #19
A. Neumaier
The expression I said is undefined contains a function that's being evaluated at a point that's not in its domain, so it's clearly undefined.
I wasn't specifically talking of your limit but of making sense of dg/dz and dg/dz^* when g is a function of a complex variable z. One just needs to take a slightly different limit than the one you chose and found undefined.

The Wikipedia page you linked to takes

[tex]\frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right),\qquad \frac{\partial}{\partial z^*}= \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right)[/tex]

as definitions. These operators are clearly meant to act on functions defined on subsets of ℝ2. I understand that we can use them to assign a meaning to the expression

[tex]\frac{\partial g(z,z^*)}{\partial z^*}[/tex]

by defining it to mean

[tex]\frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right)\Big((x,y)\mapsto g(x+iy,x-iy)\Big)[/tex]

instead of the usual limit, which is ill-defined here, but I don't really see why we would want to.
We would want to because it is frequently used, both in the quantum mechanics of oscillating systems written in terms of complex variables, and in the field theory of a complex scalar field with general interaction term V(Phi,Phi^*). Wirtinger's calculus gives the rigorous justification for proceeding in the customary way.

More importantly, I don't see how it sheds any light on the main issue here, which is the question of whether it makes sense to say that a scalar field and its complex conjugate are independent variables in the action.
It is the common way to express the fact that in the Wirtinger calculus, one can apply the standard rules of calculus if one pretends that z and z^* are independent real variables. Since there is an underlying rigorous interpretation, it makes sense to use this way of speaking about it.

It certainly makes more sense than the Feynman path integral for interacting fields in Minkowski space.
 
  • #20
...one can apply the standard rules of calculus if one pretends that z and z^* are independent real variables. Since there is an underlying rigorous interpretation, it makes sense to use this way of speaking about it.
It has always bothered me to think of them as independent. Could someone direct me to a text where this is explained rigorously? Thanks!
 
  • #21
A. Neumaier
It has always bothered me to think of them as independent. Could someone direct me to a text where this is explained rigorously? Thanks!
Look at the cited wikipedia article.

The fact that dz/dz^* = 0 and dz^*/dz = 0 implies that in the chain rule

[tex]\frac{d}{du} f(g(u),h(u)) = \frac{\partial f}{\partial g}\frac{dg}{du} + \frac{\partial f}{\partial h}\frac{dh}{du}[/tex]

the mixed derivatives are not present when you specialize u to z or z^*, g to z, and h to z^*.
 
  • #22
Isn't that repeating what has already been claimed several times? I understand how the formulas are generated - it is just that the reasoning looks flawed to me. I may be thick, but could you please give me an idea how to vary [tex]\bar{z}[/tex] while keeping z constant? The last time I checked, the conjugate was a function of z (or vice versa), unless there is some definition of the complex conjugate that I don't know yet (which is certainly possible). I am guessing that this is the same thing that jostpuur was complaining about earlier in the thread. In everything I have read, it seems like this is a convenient definition, not a derivation. I would be happy to be proven wrong as my complex-fu is pretty basic.

I will quote some lines from Ahlfors (3rd edition, page 27):

We present this procedure with an explicit warning to the reader that it is purely formal and does not possess the power of proof.

...snip...

With this change of variable, we can consider f(x,y) as a function of z and [tex]\bar{z}[/tex] which we will treat as independent variables (forgetting that they are in fact conjugate to each other). If the rules of calculus were applicable....

...snip...

These expressions have no convenient definition as limits, but we can introduce them as symbolic derivatives with respect to z and [tex]\bar{z}[/tex].
I certainly see that the formalism has some practical use, but even Ahlfors seems to be saying pretty clearly that it is just a handy trick and not to be taken literally. As I said, though, my ability in complex analysis is still basic so if there is a rigorous derivation of this, I would really love to see it in print (not Wikipedia).

Thanks!
 
  • #23
strangerep
Isn't that repeating what has already been claimed several times? I understand how the formulas are generated - it is just that the reasoning looks flawed to me. I may be thick, but could you please give me an idea how to vary [tex]\bar{z}[/tex] while keeping z constant? The last time I checked, the conjugate was a function of z (or vice versa), [...]
Here lies the root of the confusion. "Existence of a mapping between A and B"
does not necessarily imply "A and B are dependent on each other".

Let's take a step back and consider a simpler example. Let x and y be independent
real variables, and let f(x,y) and g(x,y) be functions on [itex]R^2[/itex], at least
once-differentiable thereon. Then ask the question: "Is the function f dependent
on the function g, or are they independent of each other?". If f is dependent on g,
it means (by definition) that f is a function of g, so we can evaluate the derivative
via the standard multivariate chain rule, i.e.,

[tex]
\frac{ \partial f }{\partial g}
~=~ \frac{ \partial f }{\partial x} \frac{ \partial x }{\partial g}
~+~ \frac{ \partial f }{\partial y} \frac{ \partial y }{\partial g} ~.
[/tex]

At this point we can't say any more about whether f and g are/aren't
independent functions unless we know more about them.

Now take the specific case:

[tex]
f(x,y) ~:=~ x + iy ~~~~~;~~~ g(x,y) ~:=~ x - iy ~,
[/tex]

and we get 0 in the above, showing that these particular two functions
are independent of each other.

(And if I still "haven't proved anything", I'd sure like to know why not.)
 
  • #24
Fredrik
Here lies the root of the confusion.
I think the root of the confusion is that we're talking about variables when we should be talking about functions.

Let's take a step back and consider a simpler example. Let x and y be independent
real variables, and let f(x,y) and g(x,y) be functions on [itex]R^2[/itex], at least
once-differentiable thereon. Then ask the question: "Is the function f dependent
on the function g, or are they independent of each other?". If f is dependent on g,
it means (by definition) that f is a function of g, so we can evaluate the derivative
via the standard multivariate chain rule, i.e.,

[tex]
\frac{ \partial f }{\partial g}
~=~ \frac{ \partial f }{\partial x} \frac{ \partial x }{\partial g}
~+~ \frac{ \partial f }{\partial y} \frac{ \partial y }{\partial g} ~.
[/tex]

At this point we can't say any more about whether f and g are/aren't
independent functions unless we know more about them.
I'm going to nitpick every little detail, because I think this reply would get kind of incoherent if I try to be more selective. I would never call f(x,y) a function. f is the function. f(x,y) is a member of its range. If [itex]f:\mathbb R^2\rightarrow\mathbb R[/itex], then the claim that "x and y are independent" doesn't add any information. It just suggests that we intend to use the symbols x and y to represent real numbers and intend to put x to the left of y in ordered pairs.

I agree that phrases like "f and g are independent" must be defined, if we are going to use them at all. But I don't think we should. A function from ℝ2 into ℝ is by definition a subset of ℝ2×ℝ (that satisfies a couple of conditions). It seems quite odd to describe two members of the same set (the power set of ℝ2×ℝ) as "dependent" or "independent", based on things other than the sets f,g and ℝ2×ℝ.

But OK, let's move on. The partial derivative of f with respect to the ith variable is another function, which I like to denote by Dif or f,i. The notations [itex]\partial f/\partial x[/itex] and [itex]\partial f/\partial y[/itex] are much more common in the physics literature. This is unfortunate, because I think a student is much less likely to misinterpret an expression like

[tex]D_1f(x,g(x,y))[/tex]

than

[tex]\frac{\partial f(x,g(x,y))}{\partial x}[/tex]

which of course means the same thing: The value of D1f at (x,g(x,y)).

OK, back to the f and g that you're talking about. What does [itex]\partial f/\partial g[/itex] mean? What function are you taking a partial derivative of, and which one of its partial derivatives does this expression refer to?

You're using the chain rule in a way that strongly suggests that what you call [itex]\partial f/\partial g[/itex] is the partial derivative with respect to the second variable of

[tex](s,t)\mapsto f(x(s,t),y(s,t))[/tex]

from ℝ2 into ℝ, where x and y have been redefined to refer to two unspecified functions from ℝ2 into ℝ. But why write [itex]\partial x/\partial g[/itex] for [itex]D_2x[/itex] unless you intend to "denote the second variable by g"? But that means either

a) that g denotes a function of the type you mentioned, and the partial derivative is to be evaluated at a point of the form (x(s,g(a,b)),y(s,g(c,d))), or

b) that g denotes a number, and the partial derivative is to be evaluated at a point of the form (x(s,g),y(s,g)).

If we choose option b), we get

[tex]D_1f(x(s,g),y(s,g))D_2x(s,g)+D_2f(x(s,g),y(s,g))D_2y(s,g)[/tex]

which can be written as

[tex]\frac{ \partial f }{\partial g}
~=~ \frac{ \partial f }{\partial x} \frac{ \partial x }{\partial g}
~+~ \frac{ \partial f }{\partial y} \frac{ \partial y }{\partial g} ~[/tex]

if we suppress the points at which the functions are being evaluated, and accept the rather odd notation [itex]\partial x/\partial g[/itex] for [itex]D_2x[/itex], and similarly for y.

Now take the specific case:

[tex]
f(x,y) ~:=~ x + iy ~~~~~;~~~ g(x,y) ~:=~ x - iy ~,
[/tex]

and we get 0 in the above, showing that these particular two functions
are independent of each other.
Ugh...how do you intend to insert this into the chain rule calculation above? Things are already messy, and it gets a lot worse if we try to insert this into the above. I think your previous attempt was much clearer, and I believe I showed why that doesn't work in my previous posts.
 
Last edited:
  • #25
A. Neumaier
Isn't that repeating what has already been claimed several times? I understand how the formulas are generated - it is just that the reasoning looks flawed to me. I may be thick, but could you please give me an idea how to vary [tex]\bar{z}[/tex] while keeping z constant?
If H is an analytic function of z^* and z then
[tex] \frac{dH(z^*,z)}{dz^*}=\lim_{h\to 0}\frac{H(z^*+h,z)-H(z^*,z)}{h}[/tex]
makes perfect sense and gives the right result.

my ability in complex analysis is still basic so if there is a rigorous derivation of this, I would really love to see it in print (not Wikipedia).
The wikipedia article lists a number of references where you can see things in print.
Many of the references are in pure math, so there should be no question that this is rigorous stuff. It is very useful to make things short and comprehensible that would otherwise be somewhat messy.

For example, if you have a classical anharmonic oscillator with Hamiltonian H(a^*,a),
the dynamics defined by it is
[tex]\frac{da}{dt} = i\,\frac{dH}{da^*}(a^*,a).[/tex]
This would become an impractical, messy, and much less comprehensible formula if one would have to interpret it in terms of the real and imaginary parts. Mathematical notation is there to make life easy!

And it is very easy to apply unambiguously. Typically, H(a^*,a) is a formula rather than an abstract function. Thus you can replace every a^* by a temporary variable u and every remaining a by another temporary variable v, This gives you an expression H(u,v) that defines a function of two variables. You take the partial derivatives, and then substitute back the a^* for u and the a for v - this and nothing else is meant by treating a^* and a as independent variables. And you get provably correct results that way.

But it is a waste of effort to actually do the substitutions, since it is very clear what to do without them. E.g., if

[tex]H = \omega\, a^*a + \lambda\, (a^*a)^2[/tex]

then one sees directly

[tex]\frac{dH}{da^*} = \omega\, a + 2\lambda\, a^* a^2,[/tex]

without having first to write

[tex]H(u,v)= \omega\, uv+\lambda\,(uv)^2,\qquad \frac{dH}{du}=\omega\, v + 2\lambda\, uv^2,\qquad \frac{dH}{da^*} = \frac{dH}{du}\Big|_{u=a^*,\,v=a} = \omega\, a +2 \lambda\, a^* a^2.[/tex]
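The substitution recipe described above, done explicitly (a sketch in Python with sympy; the symbol names a, astar, u, v, omega, lam are mine):

[code]
# Treat a^* and a as the independent symbols u and v, differentiate,
# then substitute back u -> a^*, v -> a.
import sympy as sp

a, astar, u, v, omega, lam = sp.symbols('a astar u v omega lam')

H = omega*u*v + lam*(u*v)**2      # H(u, v), with u standing for a^* and v for a

dH_du = sp.diff(H, u)             # omega*v + 2*lam*u*v**2
dH_dastar = dH_du.subs({u: astar, v: a})
print(dH_dastar)                  # omega*a + 2*lam*astar*a**2 (up to ordering)
[/code]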
 
