Paradox regarding energy of dipole orientation

In summary, the energy of a dipole's orientation in an external field comes out wrong when calculated from the energy stored in the fields, seemingly because of the delta-function term in the dipole field.
  • #1
JustinLevy
"paradox" regarding energy of dipole orientation

I've run into a "paradox" when deriving the energy of a dipole's orientation in an external field. For example, the energy of a magnetic dipole m in an external field B is known to be:

[tex]U= - \mathbf{m} \cdot \mathbf{B}[/tex]

In Griffiths' Introduction to Electrodynamics, this is argued by looking at the torque on a small current loop in an external magnetic field.
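(For reference, a quick sketch of that argument: the torque on the loop is [itex]\boldsymbol{\tau} = \mathbf{m}\times\mathbf{B}[/itex], so the work needed to rotate the dipole from perpendicular ([itex]\theta=\pi/2[/itex]) to an angle [itex]\theta[/itex] with the field is

[tex]U(\theta) = \int_{\pi/2}^{\theta} mB\sin\theta'\, d\theta' = -mB\cos\theta = -\mathbf{m}\cdot\mathbf{B}.[/tex])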

The "paradox" arises instead when we try to derive it by looking at the energy in the magnetic field.

[tex] U_{em} = \frac{1}{2} \int (\epsilon_0 E^2 + \frac{1}{\mu_0} B^2) d^3r [/tex]

If we consider an external field B and the field due to a magnetic dipole B_dip, we have:

[tex] U = \frac{1}{2\mu_0} \int (\mathbf{B} + \mathbf{B}_{dip})^2 d^3r [/tex]
[tex] U = \frac{1}{2\mu_0} \int (B^2 + B^2_{dip} + 2 \mathbf{B} \cdot \mathbf{B}_{dip})d^3r[/tex]

The [tex]B^2[/tex] and [tex]B^2_{dip}[/tex] terms are independent of the orientation and so are just constants we will ignore, which leaves us with:

[tex] U = \frac{1}{\mu_0} \int \mathbf{B} \cdot \mathbf{B}_{dip} d^3r [/tex]

Now we have:
[tex]\mathbf{B}_{dip} = \frac{\mu_0}{4 \pi r^3} [ 3(\mathbf{m} \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m} ] + \frac{2 \mu_0}{3} \mathbf{m} \delta^3(\mathbf{r})[/tex]

If you work through the math, only the delta function term will contribute, which gives us:

[tex]U=\frac{2}{3} \mathbf{m} \cdot \mathbf{B}[/tex]
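(A sketch of that computation, treating the external [tex]\mathbf{B}[/tex] as uniform over the dipole: the angular integral of the non-delta term vanishes,

[tex]\int \left[3(\mathbf{m}\cdot\hat{\mathbf{r}})(\mathbf{B}\cdot\hat{\mathbf{r}}) - \mathbf{m}\cdot\mathbf{B}\right] d\Omega = 3\cdot\frac{4\pi}{3}\,\mathbf{m}\cdot\mathbf{B} - 4\pi\,\mathbf{m}\cdot\mathbf{B} = 0,[/tex]

while the delta-function term gives

[tex]\frac{1}{\mu_0}\int \mathbf{B}\cdot\frac{2\mu_0}{3}\,\mathbf{m}\,\delta^3(\mathbf{r})\, d^3r = \frac{2}{3}\,\mathbf{m}\cdot\mathbf{B}.[/tex])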

This result has not only the wrong magnitude but also the wrong sign, and thus the "paradox". Obviously there is no real paradox and I am just calculating something wrong, but after talking to several students and professors I have yet to figure out what is wrong here.

One complaint has been that, while the [tex]d\theta[/tex] and [tex]d\phi[/tex] integrals show that the non-delta-function term doesn't contribute, it is unclear whether this argument holds right at the point r=0. I believe it still cancels, but to alleviate that concern, let's look at a "real" dipole instead of an ideal one. A spinning spherical shell of uniform charge with magnetic dipole moment m has, outside the sphere, the magnetic field:

for r>=R [tex]\mathbf{B}_{dip} = \frac{\mu_0}{4 \pi r^3} [ 3(\mathbf{m} \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m} ] [/tex]

So this is a good stand-in for the idealized dipole, as it has a pure dipole field outside of the sphere. Inside the sphere the magnetic field is:

for r<R [tex]\mathbf{B}_{dip} = \frac{2 \mu_0}{3} \frac{\mathbf{m}}{\frac{4}{3} \pi R^3}[/tex]

which as you can see, reduces to the "ideal" case in the limit R -> 0.
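(To make the connection explicit: the inside field integrates to [tex]\int_{r<R} \mathbf{B}_{dip}\, d^3r = \frac{2\mu_0}{3}\,\mathbf{m}[/tex] for any R, which is exactly the weight of the delta-function term quoted above.)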

Here there is no funny business at r=0. The math again gives:

[tex]U=\frac{2}{3} \mathbf{m} \cdot \mathbf{B}[/tex]
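For anyone who wants to check this without grinding through the algebra by hand, here is a small SymPy sketch of the two pieces. It is my own illustrative script (the variable names are mine), and for simplicity it takes m and B both along z, which is all that matters here since the cross term is linear in each field:

[code]
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
m, B, mu0, R = sp.symbols('m B mu_0 R', positive=True)

# Take m and B both along z, so m.rhat = m*cos(theta) and B.rhat = B*cos(theta).
# Outside the shell, B . B_dip = mu0/(4 pi r^3) * [3 (m.rhat)(B.rhat) - m.B];
# the radial factor separates, so it suffices to show the angular integral vanishes.
angular = sp.integrate(
    sp.integrate((3*m*sp.cos(theta)*B*sp.cos(theta) - m*B)*sp.sin(theta),
                 (theta, 0, sp.pi)),
    (phi, 0, 2*sp.pi))
print(angular)        # -> 0: the r > R region contributes nothing

# Inside the shell the field is uniform, B_in = (2 mu0/3) m / (4/3 pi R^3),
# so the cross term is (1/mu0) * B * B_in * (volume of the sphere).
B_in = sp.Rational(2, 3)*mu0*m/(sp.Rational(4, 3)*sp.pi*R**3)
U_cross = sp.simplify(B*B_in*(sp.Rational(4, 3)*sp.pi*R**3)/mu0)
print(U_cross)        # -> 2*B*m/3, i.e. U = +(2/3) m.B for any R
[/code]

The r > R region drops out purely through the angular integral, and the uniform interior field supplies the entire +(2/3) m.B, mirroring the delta-function term of the ideal dipole.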


What gives!?
Who can help solve this "paradox"?
 
  • #2
This reminds me of what is called the "4/3 problem" of classical electrodynamic energy/mass, which has been studied by many physicists, including Poincaré and Feynman. See chapter 28 in Volume II of the Feynman Lectures on Physics.

If the electrostatic energy of a point charge with a cutoff were equivalent to a mass M, then the momentum of the EM field of this particle moving at speed v corresponds to the momentum of a mass (4/3)M moving at v.

http://www.google.com/search?num=100&hl=en&safe=off&q=“4/3+problem”+energy&btnG=Search

Regards, Hans
 
  • #3
I've talked to some more grad students and still no one can figure this one out. I'm sure it is something really simple and we'll all feel like idiots afterward, but we just can't see it.

Please, if anyone can help here it would be much appreciated. This problem has been nagging at the back of my head for a week now.
 
  • #4
Interesting paradox. I'll work it out and see what I can come up with.
 
  • #5
I don't know if this resolves the paradox, but the following can happen in the classical dipole model when applied to molecular systems:

The charge distributions of two molecules approaching each other create an E field; this E field induces molecular dipoles. These molecular dipoles in turn contribute to the field, which enhances the dipoles. Dipole forces attract the molecules. At a certain distance, you reach a region of discontinuity where the dipole will grow unboundedly and the field will blow up, which is clearly a non-physical result. The reason this doesn't physically happen is the Pauli exclusion principle and electron-electron electrostatics, neither of which is captured in the classical dipole model. For this reason, computer simulations of molecules using explicit many-body polarization are done with "damping" regions that serve to remove the discontinuity.

It may also turn out that in your case the higher-order multipole moments are important. What happens if you include quadrupolar terms?

I'm just tossing these ideas out here; I am not sure what the source of your seemingly paradoxical result actually is.
 
  • #6
arunma said:
Interesting paradox. I'll work it out and see what I can come up with.
Have you had any luck in figuring it out? Even if not, I'd be interested in hearing what results you got.

I've been working on the electric dipole case now, which has a similar problem. Equivalently, the answer should still be U = - (p.E). However, the electric dipole field has a different factor in front of the delta-function term ... which confuses this even more.
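(For reference, the standard textbook expression for the ideal electric dipole field, as given in Griffiths, is

[tex]\mathbf{E}_{dip} = \frac{1}{4\pi\epsilon_0 r^3}\left[3(\mathbf{p}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{p}\right] - \frac{1}{3\epsilon_0}\,\mathbf{p}\,\delta^3(\mathbf{r}),[/tex]

i.e. a delta-function coefficient of [itex]-\tfrac{1}{3\epsilon_0}[/itex] here versus [itex]+\tfrac{2\mu_0}{3}[/itex] in the magnetic case.)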

Please, if anyone has time to sit down and do a couple of quick integrals and think through this, do so. The math looks correct, so there must be some mistake in the logic and I can't find it.
 
  • #7


[I saw the mention of this thread as an "unsolved paradox" in some other thread]

The standard expressions for the Maxwell energy density and energy flow (Poynting vector) have been shown to be inconsistent under Lorentz transformations except when dealing with situations involving propagation of waves at c, with no local sources (charges or dipoles).

The problems and some specific solutions were described quite clearly in an old paper by J W Butler which I don't have to hand right now, but from a Google search I'd guess it's probably called "A Proposed Electromagnetic Momentum-Energy 4-Vector for Charged Bodies".

Butler's more general expression for electromagnetic energy density is described in Jackson "Classical Electrodynamics" (2nd edition) section 17.5 "Covariant Definitions of Electromagnetic Energy and Momentum". It should have a better chance of giving consistent results.
 
  • #8


http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.3421v3.pdf [Broken]
 
  • #9


Oh wow! I didn't really expect this thread to be revived. Thanks guys.

The only real update since I last posted here is that a friend and I were confused about why the [itex]\rho V + A \cdot j[/itex] expression worked for the electric dipole but the energy-in-the-fields term did not. Looking at the derivation, it quickly became obvious that we had neglected the surface term.

We also studied this with finite source fields and finite dipoles to make us more sure of the math ... it gave the same answer.

This also fixes the magnitude of the constant in front of [itex] m \cdot B[/itex] for the magnetic dipole case. However, the sign is still wrong. Using A.j of course gives the same answer (since they are mathematically equivalent here).
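(For concreteness, a quick sketch of how the A.j route gives the same thing, treating the external field as slowly varying over the dipole: the cross terms in [itex]\frac{1}{2}\int \mathbf{j}\cdot\mathbf{A}\,d^3r[/itex] combine, by the usual reciprocity, into [itex]\int \mathbf{j}_{dip}\cdot\mathbf{A}_{ext}\,d^3r[/itex], and with [itex]\mathbf{j}_{dip} = \nabla\times\mathbf{M}[/itex], [itex]\mathbf{M}=\mathbf{m}\,\delta^3(\mathbf{r})[/itex],

[tex]\int (\nabla\times\mathbf{M})\cdot\mathbf{A}_{ext}\,d^3r = \int \mathbf{M}\cdot(\nabla\times\mathbf{A}_{ext})\,d^3r = \mathbf{m}\cdot\mathbf{B},[/tex]

so the orientation-dependent part is again +m.B rather than -m.B.)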

I've asked professors and grad students, but unfortunately, no one has an idea yet (although someone found a letter to a journal where Prof. David Griffiths poses the same question; I could not find a response to his question, though).

clem said:
http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.3421v3.pdf [Broken]
I feel uncomfortable with some of their claims, and will have to read this more closely later.
Regardless, saying that B.B is not an energy but A.j is does not solve this problem. It gives you the wrong sign just as the energy-in-the-fields equation does (and as it must, since they are related by a vector calculus identity).

Jonathan Scott said:
[I saw the mention of this thread as an "unsolved paradox" in some other thread]

The standard expressions for the Maxwell energy density and energy flow (Poynting vector) have been shown to be inconsistent under Lorentz transformations except when dealing with situations involving propagation of waves at c, with no local sources (charges or dipoles).

The problems and some specific solutions were described quite clearly in an old paper by J W Butler which I don't have to hand right now, but from a Google search I'd guess it's probably called "A Proposed Electromagnetic Momentum-Energy 4-Vector for Charged Bodies".

Butler's more general expression for electromagnetic energy density is described in Jackson "Classical Electrodynamics" (2nd edition) section 17.5 "Covariant Definitions of Electromagnetic Energy and Momentum". It should have a better chance of giving consistent results.

Thanks for the heads up.
I'll give it a read through. This stupid sign error in my math/reasoning has always bugged me.
 
  • #10


Jonathan Scott said:
[I saw the mention of this thread as an "unsolved paradox" in some other thread]

The standard expressions for the Maxwell energy density and energy flow (Poynting vector) have been shown to be inconsistent under Lorentz transformations except when dealing with situations involving propagation of waves at c, with no local sources (charges or dipoles).

The problems and some specific solutions were described quite clearly in an old paper by J W Butler which I don't have to hand right now, but from a Google search I'd guess it's probably called "A Proposed Electromagnetic Momentum-Energy 4-Vector for Charged Bodies".

Butler's more general expression for electromagnetic energy density is described in Jackson "Classical Electrodynamics" (2nd edition) section 17.5 "Covariant Definitions of Electromagnetic Energy and Momentum". It should have a better chance of giving consistent results.
A very elementary example which requires Butler's expression to get the correct value for the energy flux is that of an (infinite) parallel-plate capacitor moving perpendicular to the plane of its plates.

Whatever the perpendicular velocity, the magnetic field B always stays 0, and so the energy-flux contribution from the Poynting vector is also 0.

In Butler's expression the whole stress-energy tensor is used and the
energy flux becomes:

(energy density [itex]E^2[/itex] at rest) × (velocity) × (gamma),

where the gamma factor stems from the Lorentz contraction, which results in a higher density of the energy flux. This is the result you would expect.

One also needs Bloch's expression to calculate the electromagnetic momentum of virtual photons from Klein-Gordon transition currents.

Regards, Hans
 
  • #11


JustinLevy said:
However, the sign is still wrong.
This stupid sign error in my math/reasoning has always bugged me.
The + sign for mu.B is explained on page 10 of
http://arxiv.org/PS_cache/arxiv/pdf/...707.3421v3.pdf [Broken]
 
  • #12


I feel that fundamentally, you are somehow missing the energy of the current itself inside the loop. If you think about it, what you are calculating is just the energy stored in the fields outside of the loop. A dipole tends to point along the field lines, and when it does, the field it produces points along the magnetic field line. This of course strengthens the field and thus increases the energy contribution. However, the more important term would be the energy inside that loop of current, and it should be calculated in terms of ∫A·j dx (with an appropriate choice of j, then take the limit r->0). In your calculation, this infinite j term is completely neglected. The delta function in the B term does not take this into account, as it merely captures the B field at the center of the loop going to infinity. Indeed, the A·j term should capture the essential interaction of a twisting dipole.
 
  • #13


Jonathan Scott said:
The problems and some specific solutions were described quite clearly in an old paper by J W Butler which I don't have to hand right now, but from a Google search I'd guess it's probably called "A Proposed Electromagnetic Momentum-Energy 4-Vector for Charged Bodies".

Butler's more general expression for electromagnetic energy density is described in Jackson "Classical Electrodynamics" (2nd edition) section 17.5 "Covariant Definitions of Electromagnetic Energy and Momentum". It should have a better chance of giving consistent results.

Butler's paper has nothing to do with a magnetic moment at rest in a magnetic field.
 
  • #14


clem said:
Butler's paper has nothing to do with a magnetic moment at rest in a magnetic field.

That appears true to me. The main point which was relevant here is that it shows that the conventional energy density (both electrostatic and magnetic) only holds in the absence of sources, so it explains why it didn't work here. It also gives an expression which works for the energy of a single moving charge, but I don't know whether it can be integrated to cover those cases.

The paper by J Franklin looks very interesting and useful, thanks, and as you point out it's very relevant to this particular case.
 
  • #15


tim_lou said:
I feel that fundamentally, you are somehow missing the energy of the current itself inside the loop. If you think about it, what you are calculating is just the energy stored in the fields outside of the loop. A dipole tends to point along the field lines, and when it does, the field it produces points along the magnetic field line. This of course strengthens the field and thus increases the energy contribution. However, the more important term would be the energy inside that loop of current, and it should be calculated in terms of ∫A·j dx (with an appropriate choice of j, then take the limit r->0). In your calculation, this infinite j term is completely neglected. The delta function in the B term does not take this into account, as it merely captures the B field at the center of the loop going to infinity. Indeed, the A·j term should capture the essential interaction of a twisting dipole.
No, I am including the energy inside the dipole as well. As mentioned above, I worked this out with a finite-sized source and got the same answer as for the point dipole (with the delta-function term ... which is indeed the 'inside' contribution). Also as mentioned, the A.j method is mathematically equivalent and, sure enough, gives the same answer.

Jonathan Scott said:
That appears true to me. The main point which was relevant here is that it shows that the conventional energy density (both electrostatic and magnetic) only holds in the absence of sources, so it explains why it didn't work here. It also gives an expression which works for the energy of a single moving charge, but I don't know whether it can be integrated to cover those cases.
Okay, I read up on the portion you mentioned in Jackson.
That is not the problem here. What Jackson mentions is that the usual stress energy tensor doesn't transform correctly if there are singularities in the stress-energy tensor. So if there are point (or line or plane, etc.) sources, some steps can be taken to give a better definition.

This doesn't apply to this problem for three reasons:
1] The usual energy and momentum are still conserved. They can still be treated as an energy and momentum as long as you are not transforming and mixing results from other coordinate systems.
2] In a static situation (the sources do not depend on time), it appears that the covariant definition reduces to the usual one.
3] I can easily choose a magnetic dipole source that is not a singularity (for example, a sphere of uniform charge density spinning at a constant rate ... and superimpose another of opposite charge spinning the opposite way if you want to remove the electric field).

clem said:
The + sign for mu.B is explained on page 10 of
http://arxiv.org/PS_cache/arxiv/pdf/...707.3421v3.pdf [Broken]
As I already stated, I am very uncomfortable with this paper.
In SR it does not make a difference if you use the usual energy density and surface term, or the A.j term. They keep complaining about the surface term as if that makes that form of the electromagnetic energy wrong. But it is not wrong. They are mathematically equivalent. And if the surface term bothers you that much, just note that the surface term only gives a contribution when you consider infinite sources (like they do, with an external B field everywhere or similarly with an external E field) or infinite time.

Secondly, they do not explain the plus sign. They wave it away, with what appears to be false logic.

Third, while it doesn't matter in SR, it DOES matter in GR where the energy/momentum is located (in the fields or purely in the charges). The canonical method has always been to use the stress-energy tensor of the fields. This paper blatantly flies in the face of that.

For all those reasons, let's please move on from that paper for this discussion. If you want to start another thread discussing their opinions in that paper, so be it.
 
  • #16


So what if the math here is correct and there really is a paradox? What would this mean if the paradox is true? Can an experiment be conducted to verify the existence of such a paradox?
 
  • #17


Charlie_V said:
So what if the math here is correct and there really is a paradox? What would this mean if the paradox is true? Can an experiment be conducted to verify the existence of such a paradox?
With the energy having the opposite sign, it would mean that magnetic dipoles would try to anti-align with a magnetic field. We know experimentally that they align with the field.

There is no real paradox here (hence the use of "scare quotes").
However, learning the resolution to this "paradox" will probably give me greater insight into the problem.
The fact that the author of a popular electrodynamics textbook asked similar questions makes me worry that we might not be able to figure this out ourselves. I'd like to keep trying, though. Any ideas, people?
 
  • #18


JustinLevy said:
Okay, I read up on the portion you mentioned in Jackson.
That is not the problem here. What Jackson mentions is that the usual stress energy tensor doesn't transform correctly if there are singularities in the stress-energy tensor. So if there are point (or line or plane, etc.) sources, some steps can be taken to give a better definition.

This doesn't apply to this problem for three reasons:
1] The usual energy and momentum are still conserved. They can still be treated as an energy and momentum as long as you are not transforming and mixing results from other coordinate systems.
2] In a static situation (the sources do not depend on time), it appears that the covariant definition reduces to the usual one.
3] I can easily choose a magnetic dipole source that is not a singularity (for example, a sphere of uniform charge density spinning at a constant rate ... and superimpose another of opposite charge spinning the opposite way if you want to remove the electric field).

I don't remember the details that Jackson mentions right now, but I'm fairly sure Butler's paper shows that the standard expressions for the integrals of the assumed energy density (and Poynting vector flow) over space are only equal to the energies calculated by the other method (from the charges in the potentials) if there are no sources of any sort within the volume being considered; the equivalence is usually demonstrated by letting the volume extend to infinity and assuming the surface term tends to zero, but it doesn't if there are sources.

I think he goes on to show that you do get a consistent result if you assume an energy density related to [itex]E^2/2[/itex] in the rest frame of a charge, in which case I think the energy-momentum density part of the tensor becomes [itex](E^2-B^2)/2[/itex] times the four-velocity in some other frame, where the sign of the [itex]B^2[/itex] term is the opposite of that obtained for waves traveling at c in the absence of sources.

I first came across Butler's paper when I tentatively worked out the same result myself using four-vector algebra and thought that it contradicted the standard expressions but someone referred me to his paper. I still wonder about where the energy is "really" located, from the gravitational point of view, but until this new paper came along (which I haven't yet fully digested) I thought it could be consistently assumed to be in the field.
 
  • #19


Please reread the discussion in Jackson, for one of us is misunderstanding something; I still disagree with you here. However, on a closer second reading I see that by "divergent" he didn't mean a singularity, but merely divergence in the [itex]\nabla\cdot[/itex] sense. So you are correct about that (any sources, singularities or not, ruin the general covariance).

Yet, again, this doesn't mean the usual energy and field momentums cannot be used consistently within one inertial coordinate system.

pg 756 in Jackson "Classical Electrodynamics"
"the usual spatial integrals at a fixed time of the energy and momentum densities,
[tex]u = \frac{1}{8\pi}(E^2 +B^2), \ \ \ \ g = \frac{1}{4\pi c} (E \times B)[/tex]
may be used to discuss conservation of electromagnetic energy or momentum in a given inertial frame, but they do not transform as components of a 4-vector unless the fields are source-free."
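(Jackson writes those in Gaussian units; in the SI units used elsewhere in this thread they read [itex]u = \tfrac{1}{2}\left(\epsilon_0 E^2 + B^2/\mu_0\right)[/itex] and [itex]\mathbf{g} = \epsilon_0\,\mathbf{E}\times\mathbf{B}[/itex].)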


Also, later when working out the covariant definitions for the densities, he states that if the total momentum of the fields is zero, then the covariant definition just reduces to the usual definition. So in the cases considered here, (the total momentum in the fields is zero), we are already using the covariant definitions.
 
  • #20


JustinLevy said:
Please reread the discussion in Jackson, for one of us is misunderstanding something; I still disagree with you here. However, on a closer second reading I see that by "divergent" he didn't mean a singularity, but merely divergence in the [itex]\nabla\cdot[/itex] sense. So you are correct about that (any sources, singularities or not, ruin the general covariance).

Yet, again, this doesn't mean the usual energy and field momentums cannot be used consistently within one inertial coordinate system.

pg 756 in Jackson "Classical Electrodynamics"
"the usual spatial integrals at a fixed time of the energy and momentum densities,
[tex]u = \frac{1}{8\pi}(E^2 +B^2), \ \ \ \ g = \frac{1}{4\pi c} (E \times B)[/tex]
may be used to discuss conservation of electromagnetic energy or momentum in a given inertial frame, but they do not transform as components of a 4-vector unless the fields are source-free."


Also, later when working out the covariant definitions for the densities, he states that if the total momentum of the fields is zero, then the covariant definition just reduces to the usual definition. So in the cases considered here, (the total momentum in the fields is zero), we are already using the covariant definitions.

I can't identify which bit you mean by the last reference.

As I see it, the first part of section 17.5 says that although the conventional expressions do not transform correctly, you can get an arbitrary but correctly transforming quantity by assuming that the conventional expression is correct in some frame and then transforming that expression.

Jackson then goes on to say (paragraph containing equation 17.44) that in the special case that there is a frame in which all the charges are at rest (so B is zero) then transforming to any other frame (17.44 and 17.45) gives an expression for the energy and momentum density which gives a physically plausible four-vector (and is the same as Butler's expression).
 
  • #21


This might not be a very satisfactory answer for you; but I think you are making this way too complicated.

When you calculate the energy by finding the amount of work required to move the dipole in from infinity and rotating it into position (as per Griffiths) you get -m.B. That is the amount of work the field does; the amount of energy that is stored in the field as a result is -(-m.B)=+m.B.
 
  • #22


Jonathan Scott,
I feel we are going in circles.

Do you agree with these two points:
1] The usual energy and field momentums can be used consistently within one inertial coordinate system.
2] In this case with static fields and E=0, the covariant definition just reduces to the usual definition.

If you disagree with either of these, please show the math explaining specifically what you mean ... for I feel I am misunderstanding the specifics of your complaint.

gabbagabbahey said:
This might not be a very satisfactory answer for you; but I think you are making this way too complicated.

When you calculate the energy by finding the amount of work required to move the dipole in from infinity and rotating it into position (as per Griffiths) you get -m.B. That is the amount of work the field does; the amount of energy that is stored in the field as a result is -(-m.B)=+m.B.
Please work the math out for yourself, for you appear to be post-dicting an argument to fix the sign to match what you feel is correct, instead of actually doing the calculation. In creating such a "post-dicting correction" you are actually creating new problems instead of solving the main problem.

For example, consider an electric dipole. The energy is -p.E, analogous to the magnetic dipole case. If you calculate the energy in the fields, you get the correct answer (-p.E). If your argument were correct, then the energy in the fields for an electric dipole would now be wrong!
 
  • #23


JustinLevy said:
Jonathan Scott,
I feel we are going in circles.

Do you agree with these two points:
1] The usual energy and field momentums can be used consistently within one inertial coordinate system.
2] In this case with static fields and E=0, the covariant definition just reduces to the usual definition.

If you disagree with either of these, please show the math explaining specifically what you mean ... for I feel I am misunderstanding the specifics of your complaint.

"1] The usual energy and field momentums can be used consistently within one inertial coordinate system."

I'd agree that this is what Jackson says (and so do others). However, it really depends what you use it for. Basically, the "stuff" described by the conventional expressions obviously obeys a conservation law by Poynting's theorem, and has the right dimensions, but it doesn't really physically match the expected distribution of energy and momentum in many cases.

"2] In this case with static fields and E=0, the covariant definition just reduces to the usual definition."

The case in Jackson where the covariant definition reduces to the usual definition (around equation 17.44 in my 2nd edition copy) is quite specifically the one where there is a frame in which all charges are at rest and B is zero, not the other way round. There may well be an analogous result for a static B field, but I don't know.
 
  • #24


Jonathan Scott said:
"2] In this case with static fields and E=0, the covariant definition just reduces to the usual definition."

The case in Jackson where the covariant definition reduces to the usual definition (around equation 17.44 in my 2nd edition copy) is quite specifically the one where there is a frame in which all charges are at rest and B is zero, not the other way round. There may well be an analogous result for a static B field, but I don't know.
Maybe I'm missing something, but I don't understand why we can't agree on that point.

Following Jackson, the covariant definition reduces to the usual definition in a frame he denotes as K'. Later he defines K' as the frame in which the total momentum in the fields (using the usual definition) is zero. Since I have chosen a frame where E=0 everywhere, this is indeed that frame. Yes, if there were no moving charges, that would also be a specific example of such a frame ... but in this case there is no such inertial frame where all the charges are at rest.

So again, I feel we are completely justified in using the usual energy and momentum densities for the electromagnetic fields. This caveat you brought up is interesting, but is ultimately unrelated to this problem as it does not require changing any of our calculations.
 
  • #25


JustinLevy said:
Maybe I'm missing something, but I don't understand why we can't agree on that point.

Following Jackson, the covariant definition reduces to the usual definition in a frame he denotes as K'. Later he defines K' as the frame in which the total momentum in the fields (using the usual definition) is zero. Since I have chosen a frame where E=0 everywhere, this is indeed that frame. Yes, if there were no moving charges, that would also be a specific example of such a frame ... but in this case there is no such inertial frame where all the charges are at rest.

So again, I feel we are completely justified in using the usual energy and momentum densities for the electromagnetic fields. This caveat you brought up is interesting, but is ultimately unrelated to this problem as it does not require changing any of our calculations.

I have very little more I can say, as I've already stated what I believe he is saying, but I'll have another go at expressing it in a different way.

In the earlier part (starting at the paragraph containing equation 17.37 and ending at the paragraph containing equation 17.43) he says that the "correct four-vector character" can be obtained by taking the conventional expression in an arbitrary frame, of which the "natural choice" is a "rest" frame where E×B is zero, and transforming that to the actual frame. This doesn't mean it's the "correct" energy and momentum, but rather that whatever it is, its integral transforms correctly as a four-vector. (I'm not even entirely sure that he has considered the case where E×B is zero due to E being zero but B being non-zero, as the rest of the discussion is about the motion of charges.)

In the next section, starting at the paragraph containing equation 17.44, he restricts discussion to the specific case where there is a frame in which all charges are at rest and B is therefore zero (which does NOT include your case). In that case, we get Butler's four-vector expression for the energy and momentum density, which has the opposite sign for the [itex]B^2[/itex] term compared with the conventional expression. However, this obviously reduces to the conventional expression in the rest frame because B is zero.

I do not know whether there is some closely related result which can be obtained for the case where there are static magnetic fields and no unbalanced charges, but I do know that no such result is described in that section of my copy of Jackson and I don't think that one can extrapolate to one without providing specific reasoning.
 
  • #26


JustinLevy said:
I've been working on the electric dipole case now, which has a similar problem. Equivalently, the answer should still be U = - (p.E) ...

Do you encounter the same "paradox" in the more general problem of determining the energy of an arbitrary assembly of charges? It's straightforward to determine the energy both in terms of the work required to bring the charges into the final configuration, and in terms of the integral of the field energy. Naturally they both give the same result. I would think the question about a dipole is just a special case of this, so it might help if you started with the more general case (where you know you can get the same answer both ways) and then specialized to the case of a simple dipole.
 
  • #27


Sam Park said:
Do you encounter the same "paradox" in the more general problem of determining the energy of an arbitrary assembly of charges?
That post you referred to is quite a ways back in the discussion. At that point I was still using infinite sources. Using a finite source (or including a surface term at infinity) solves this problem. So the electric dipole case works fine.

However, even with finite sources, the energy of a magnetic dipole in a magnetic field has the wrong sign. This happens when calculating using A.j as the energy density as well.


Jonathan Scott said:
I have very little more I can say, as I've already stated what I believe he is saying, but I'll have another go at expressing it in a different way.

In the earlier part (starting at the paragraph containing equation 17.37 and ending at the paragraph containing equation 17.43) he says that the "correct four-vector character" can be obtained by taking the conventional expression in an arbitrary frame, of which the "natural choice" is a "rest" frame where E×B is zero, and transforming that to the actual frame.
Good, so we agree on that. So using his choice laid out there, the covariant expression he presents reduces to the usual equations we've been using so far.

Jonathan Scott said:
This doesn't mean it's the "correct" energy and momentum, but rather that whatever it is, its integral transforms correctly as a four-vector.
Oh come on!
Previously you agreed that the usual energy and momentum can be used as the conserved quantities associated with time and spatial translations, respectively, in a given inertial frame.
NOW, you even agree that the covariant expression (which is equal to the equations we're using here) transforms correctly as a four-vector.

So, it fits the definition of the energy, and has the properties you'd expect from an energy. What more do you need!?


[EDIT: You seem to be complaining that I am generalizing a specific example, and want to know if that is justified. Jackson actually says the opposite. When he gets to the part where he discusses the static charges he says "For electromagnetic configurations in which all the charges are at rest in some frame (the Abraham-Lorentz model of a charged particle is one example), the general formulas can be reduced to more attractive and transparent forms." (emphasis added is mine)

If you want to maintain your objection, explain why that one specific case of the general formula makes my calculations wrong, despite the fact that my calculations agree with the general formula and are not in the subset covered by that "specific case" for which the reduced equations were derived.]
 
  • #28


JustinLevy said:
That post you referred to is quite a ways back in the discussion. At that point I was still using infinite sources. Using a finite source (or including a surface term at infinity) solves this problem. So the electric dipole case works fine.

However, even with finite sources, the energy of a magnetic dipole in a magnetic field has the wrong sign. This happens when calculating using A.j as the energy density as well.

Hmmm... So you're saying that, with your original approach, the sign came out wrong for both the electric dipole and magnetic dipole, but now with your new approach the sign problem has disappeared from the electric dipole case, but remains unchanged for the magnetic dipole case? So there must have been two completely independent errors in your original approaches, one reversing the sign of the electric dipole energy, and another reversing the sign of the magnetic dipole energy, and you've corrected one of them but not the other. This reminds me of a story about a student of Dirac who asked for help because he got the wrong sign for some quantity, so he knew he must have made a mistake with the sign somewhere, and Dirac said "Yes, or an odd number of them".

The only post in this thread that presented actual equations to explain the sign problem you're talking about was the very first post, but I gather your thinking has changed since then, so it might be helpful for you to present an update to that original post, showing whatever calculation(s) you currently believe give the wrong sign.
 
  • #29


Originally I was considering infinite sources (as you can see in the original posting). Working out the magnetic dipole in an external magnetic field and using just a volume integral of the (E^2+B^2) energy density term resulted in:
U = + (2/3) m.B
Doing likewise for the electric dipole gave:
U = -(1/3) p.E

So there were two problems originally, the magnitude was wrong for both, and the sign was wrong for the magnetic dipole case (not the electric dipole case).
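(For the record, the -1/3 comes straight from the delta-function term in the ideal electric-dipole field, [itex]-\tfrac{1}{3\epsilon_0}\mathbf{p}\,\delta^3(\mathbf{r})[/itex]:

[tex]\epsilon_0 \int \mathbf{E}\cdot\left(-\frac{1}{3\epsilon_0}\,\mathbf{p}\,\delta^3(\mathbf{r})\right) d^3r = -\frac{1}{3}\,\mathbf{p}\cdot\mathbf{E},[/tex]

just as the +2/3 in the magnetic case comes from the [itex]+\tfrac{2\mu_0}{3}\mathbf{m}\,\delta^3(\mathbf{r})[/itex] term worked out in the first post.)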


What has changed since then:
It turns out that the energy density equations are not correct if there are infinite sources, unless one includes a surface term at infinity. Including the surface term, or using finite sources fixed the magnitude problem. Now we have:
U = + m.B
U = - p.E


Once the equations are written down, I am confident in my ability to solve them: I have checked many times, multiple grad students have worked it out themselves, and no one here objects to the math. As mentioned, Prof. David Griffiths (who wrote a popular E&M textbook) even wrote to a journal asking a similar question (unfortunately I didn't see a follow-up answer anywhere). So I don't expect to find a sign error in my math.

The problem must lie in the application of the equations ... maybe I'm mis-applying the equations to this situation. Jonathan Scott brought up one such idea; it raised some interesting points I hadn't heard before and was an interesting read, but it appears to me to be a dead end. If you have any other ideas, please do let us know.
 
  • #30


I didn't read through this thread very carefully, but are you sure that you are allowed to do the manipulations

[tex]
U = \frac{1}{2\mu_0} \int (\mathbf{B} + \mathbf{B}_{dip})^2 d^3r
[/tex]

[tex]
U = \frac{1}{2\mu_0} \int (B^2 + B^2_{dip} + 2 \mathbf{B} \cdot \mathbf{B}_{dip})d^3r
[/tex]

on [tex]\mathbf{B}_{dip}[/tex], which contains the Dirac delta function?

I think it might work out if you consider [tex]\mathbf{B}_{dip}[/tex] as a weird function defined in terms of the Dirac delta function from the beginning, instead of plugging it in at the end of the calculation, having assumed it to be ordinary.
 
  • #31


To clarify: you should look around to find out if there is any formalism for squaring the Dirac delta function.

If there isn't, then the starting point of the derivation is unfounded.
 
  • #32


cup said:
I didn't read through this thread very carefully, but are you sure that you are allowed to do the manipulations

[tex]
U = \frac{1}{2\mu_0} \int (\mathbf{B} + \mathbf{B}_{dip})^2 d^3r
[/tex]

[tex]
U = \frac{1}{2\mu_0} \int (B^2 + B^2_{dip} + 2 \mathbf{B} \cdot \mathbf{B}_{dip})d^3r
[/tex]

on [tex]\mathbf{B}_{dip}[/tex], which contains the Dirac delta function?

I think it might work out if you consider [tex]\mathbf{B}_{dip}[/tex] as a weird function defined in terms of the Dirac delta function from the beginning, instead of plugging it in at the end of the calculation, having assumed it to be ordinary.

From just a simplistic point of view, the quantity U is being defined as the integral of a squared magnitude, (Bf + Bd)^2, which is obviously positive-definite, but of course the self-terms Bf^2 and Bd^2 are then subtracted from this, leaving just the cross term 2Bf*Bd. The sign of this is positive if Bf and Bd both have the same sign, so the only way for the argument of the integral to ever be negative is for Bd to have the opposite sign of Bf somewhere. Depending on how you define those separate quantities, you could write the original argument as (Bf - Bd)^2, in which case, after subtracting the self-terms, the argument is -2Bf*Bd, which is to say, the sign of the overall answer is reversed. So it's crucial for Bf and Bd to have their signs defined on a consistent basis. Neither of their signs matters individually, but the product of their signs matters, so it's important to make an even number of errors when assigning their signs!

If Bd and Bf really always have the same sign, then clearly the integral of Bf*Bd must be positive, so I'd suggest focusing on the question of whether they really always have the same sign. Remember it's a dot product of two vectors. Which way do those two vectors point?
 
  • #33


cup said:
To clarify: you should look around to find out if there is any formalism for squaring the Dirac delta function.

If there isn't, then the starting point of the derivation is unfounded.
To make sure that is not a problem, this was worked out with finite sources (a spinning spherical shell of charge has a pure dipole field outside, a constant field inside, and all higher multipole moments are zero). The answer is exactly the same (you don't even need to take the limit as the size goes to zero, because it is a perfect dipole ... but you could if you want, and of course it reproduces the "point" dipole).

So that is not the problem here.


Sam Park said:
If Bd and Bf really always have the same sign, then clearly the integral of Bf*Bd must be positive, so I'd suggest focusing on the question of whether they really always have the same sign. Remember it's a dot product of two vectors. Which way do those two vectors point?
Sam, I don't mean to be disrespectful, but can you please work out the problem yourself to check? Your comments are not making any sense, and I think this is because you haven't worked out the problem yourself.

Bd and Bf are vector fields. Their direction and magnitude are defined at every point. Regardless of how I orient the dipole, there are places where the fields are parallel and places where the fields are anti-parallel. Only after doing the whole integral can I find out whether the result is positive or negative. Furthermore, the fields are completely defined by the sources. Are you saying I calculated the fields from the sources wrong? Because not only have I checked those equations myself, but at least two textbooks agree with those equations for the fields of a dipole. I didn't even bother deriving that in the opening post; I just gave them verbatim from a textbook.

Please, if you insist the math in my calculation is faulty, please try working it out yourself so you can be convinced and we can move beyond that. For I am convinced the error is not in the calculation itself, but in the assumptions we've made in writing down / applying the equations to the physics.
 
  • #34


A formula such as

[tex]
U_{em} = \frac{1}{2} \int (\epsilon_0 E^2 + \frac{1}{\mu_0} B^2) d^3r
[/tex]

comes with certain requirements on the objects represented by the symbols in the formula.
The integrand has to be a function.

If you go ahead and (implicitly) assume that it is a function, this allows you to use the familiar algebraic rules that you've learned and make some derivations, like:

[tex]
U = \frac{1}{2\mu_0} \int (\mathbf{B} + \mathbf{B}_{dip})^2 d^3r
[/tex]
[tex]
U = \frac{1}{2\mu_0} \int (B^2 + B^2_{dip} + 2 \mathbf{B} \cdot \mathbf{B}_{dip})d^3r
[/tex]
[tex]
U = \frac{1}{\mu_0} \int \mathbf{B} \cdot \mathbf{B}_{dip} d^3r
[/tex]

...then that's totally fine and dandy.

But when you're done with your derivations, you can't just change your mind about the symbols and say:

"
well, the thing that I (implicitly) said was a function above wasn't really a function, it was this thing:

[tex]
\mathbf{B}_{dip} = \frac{\mu_0}{4 \pi r^3} [ 3(\mathbf{m} \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m} ] + \frac{2 \mu_0}{3} \mathbf{m} \delta^3(\mathbf{r})
[/tex]

"

which is not a function, i.e. does not abide by all of the algebraic rules that you used in the initial derivation.

As soon as you changed the meaning of [tex]\mathbf{B}_{dip}[/tex], the algebraic manipulations you performed initially are no longer valid: the string of equations no longer follows.

To sum up:

The final integral incidentally DOES make sense mathematically... but you lost the physics (coming from the very first formula) when you changed the meaning of [tex]\mathbf{B}_{dip}[/tex], because the string of equations in the initial derivation does not hold anymore, because the algebraic manipulations are unjustified, because the thing is not a function.
 
  • #35


JustinLevy said:
Sam, I don't mean to be disrespectful, but can you please work out the problem yourself to check? Your comments are not making any sense, and I think this is because you haven't worked out the problem yourself. Bd and Bf are vector fields. Their direction and magnitude are defined at every point.

Yes, I know B is a vector field that varies from place to place. That's why I said that, in order for the integral to give a negative value, the vectors must be pointing in opposite directions somewhere. My intent was to suggest that you check the directions of your vectors at some key points near the dipole, where the biggest contributions to the integral will be, and convince yourself that the two vectors are indeed pointing in opposite directions (i.e., the dot product is negative) at those points, as a sanity check on your assignment of signs to the two components.

JustinLevy said:
Please, if you insist the math in my calculation is faulty, please try working it out yourself so you can be convinced and we can move beyond that. For I am convinced the error is not in the calculation itself, but in the assumptions we've made in writing down / applying the equations to the physics.

I entirely agree that the problem is not in the arithmetic, it’s in the assignment of physical meanings to the symbols. That has been the point of my messages. Sorry if I didn’t make that clear.

Representing magnetic dipoles as two equal and opposite "magnetic charges", we can carry through the derivation of the potential energy of a given dipole in a given magnetic field. We regard the given field as produced by a suitable distribution of magnetic charges, and then we compute the work required to bring another pair of magnetic charges into a certain configuration in that field. Now, we can also compute the corresponding change in the magnetic field energy, and these come out to be the same, both equal to -u·B (u being the dipole moment). The derivation is essentially identical to the case of electric dipoles. Are you saying this derivation is inapplicable in the magnetic case?
 
