In magnetism, what is the difference between the B and H fields?

AI Thread Summary
The discussion clarifies the distinction between the B and H fields in magnetism: the B field is the total magnetic flux density, while the H field is an auxiliary field that folds the material response into the bookkeeping. In vacuum the two are related by B = μ₀H, and in matter by B = μ₀(H + M), where M is the magnetization of the material. The H field is considered a mathematical construct that simplifies calculations by excluding the effects of induced magnetization, while the B field is viewed as the more fundamental physical quantity. The conversation also touches on the implications of using these fields in practical applications, emphasizing that B and H have different units in SI, which can lead to confusion. Ultimately, understanding the interplay between these fields is crucial for accurate modeling in electromagnetism.
  • #51
I just can cite my former solid state theory teacher in that context: "Never average!
Jackson is not a solid state theorist but from particle physics, he does not know that."
In solid state theory you don't calculate the polarization from the moments of the atoms or molecules but you determine epsilon from e.g. linear response.

If you do not average dipole moments, you are calculating a different thing than macroscopic polarization.

But as I understood it, the procedure Adler is describing is just a particular way to evaluate the macroscopic polarization, using the formalism of epsilon in Fourier domain. This epsilon then gives macroscopic polarization, whose meaning is exactly average bulk dipole moment. It may be hidden in the flood of formulae, but he uses density matrix to calculate averages.
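
As a toy illustration of that averaging (a minimal sketch with made-up numbers, not anything from Adler's formalism), the macroscopic polarization is just the volume average of the microscopic dipole moments in a cell:

```python
# Toy sketch: macroscopic polarization as the average dipole moment per
# unit volume, P = (1/V) * sum(p_i).  All numbers below are invented.

def macroscopic_polarization(dipoles, volume):
    """Volume-averaged dipole moment; in SI, (C*m)/m^3 = C/m^2."""
    return tuple(sum(p[i] for p in dipoles) / volume for i in range(3))

# 1000 identical dipoles of 1e-30 C*m along z inside a 1e-24 m^3 cell:
dipoles = [(0.0, 0.0, 1e-30)] * 1000
P = macroscopic_polarization(dipoles, 1e-24)  # ≈ (0, 0, 1e-3) C/m^2
```

The density-matrix average in Adler's treatment plays exactly the role of the `sum(...)/volume` here, only done quantum mechanically.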
 
  • #52
Jano L. said:
If you do not average dipole moments, you are calculating a different thing than macroscopic polarization.
True, but the point I want to make is that averaging is something you can do if you like, and quite trivially so, but it is not some essential ingredient needed to define magnetisation or polarisation, as many introductory e-dynamics books try to make you believe. Also I don't think that the macroscopic H and D fields are any better defined than their microscopic counterparts. The freedom of the choice of gauge, as you call it, is also present here. E.g. in optics one sets B=H by definition and only considers the polarisation P. The price one has to pay is that P becomes non-local, which is unavoidable at higher frequencies anyhow. See Landau & Lifshitz, Electrodynamics of Continuous Media.
Only at zero frequency is this inconvenient, as P then diverges in this scheme.
Another example are metals whose response may either be described in terms of a complex dielectric constant (leading to polarisation) or of a real conductivity (giving rise to a current density).
 
  • #53
it is not some essential ingredient needed to define magnetisation or polarisation

So then, how would you define polarization without averaging? As the polarization potential ##\mathbf p## of microscopic charge and current densities? This is highly oscillating in space and is not related to the equally highly oscillating microscopic field ##\mathbf e## in any simple way. Such an epsilon would be a very irregular function determined by the exact positions of the atoms, and hence would describe a particular body instead of the general character of the medium.

The linear response relation, if forced upon these microscopic quantities, would make the corresponding ##\epsilon(\mathbf q, \omega)## depend very violently on both the spatial coordinates and ##\mathbf q##.

In short, all this would mean a complicated microscopic theory. But in the microscopic domain, polarization and the linear-response description are not very useful - one works directly with the charged particles and their equations of motion.

E.g. in optics one sets B=H by definition and only considers polarization P.

I do not think B=H is an arbitrary definition. It is an approximation, valid because the magnetic moments are weak and negligible - Landau comments on this. We could be exact and keep the weak magnetization in H.

Another example are metals whose response may either be described in terms of a complex dielectric constant (leading to polarisation) or of a real conductivity (giving rise to a current density).

Hmm, conduction can be described by the Hertz polarization potential, the current being its time derivative. It has mathematical advantages, that's right, but I do not think this is enough to call it polarization. If the current is direct, the polarization potential grows linearly in time. Calling it polarization leads to a misleading picture. There is nothing in the metal that increases in time, the metal does not undergo any change, unlike a dielectric, where growing polarization means growing displacement of bound charge.
 
  • #54
Jano L. said:
I do not think B=H is an arbitrary definition. It is an approximation, valid because the magnetic moments are weak and negligible - Landau comments on this. We could be exact and keep the weak magnetization in H.



Hmm, conduction can be described by the Hertz polarization potential, the current being its time derivative. It has mathematical advantages, that's right, but I do not think this is enough to call it polarization. If the current is direct, the polarization potential grows linearly in time. Calling it polarization leads to a misleading picture. There is nothing in the metal that increases in time, the metal does not undergo any change, unlike a dielectric, where growing polarization means growing displacement of bound charge.
For the understanding of the approach with B=H, I find the following article by Agranovich and Gartstein very useful
http://siba.unipv.it/fisica/articoli/P/PhysicsUspekhi2006_49_1029.pdf
The prototype of a dielectric function of a metal is the Lindhard dielectric function (or refinements like Singwi-Sjolander), which is treated in every textbook on many-body physics; see e.g. here:
www.itp.phys.ethz.ch/education/lectures_fs10/Solid/Notes.pdf
Evidently, it includes the effect of conductivity.
 
  • #55
tiny-tim said:
So
E = D + P (except that for historical reasons E is defined differently, so we need to multiply it by the permittivity, and for some reason P is multiplied by minus-one :rolleyes:).

Maybe P is multiplied by -1 because the dipole moment points from the negative to the positive charge, so a positive polarization actually reduces the field due to the displacement field.
 
  • #56
DrDu,
thank you for the links. I think the ambiguity of M arises only if one stays within macroscopic theory. There one cannot define P and M solely by

$$
\rho = -\nabla\cdot\mathbf P, \qquad \mathbf j = \frac{\partial \mathbf P}{\partial t} + \nabla\times\mathbf M,
$$

as there are infinitely many solutions of these equations.

So if one requires only the above equations, it is possible to define P and M in many ways - P and M are not unique. They do not even have to be zero in vacuum.
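
To make that freedom explicit (my notation, not from the thread): for an arbitrary smooth vector field ##\mathbf Z##, the substitution

$$
\mathbf P' = \mathbf P + \nabla\times\mathbf Z, \qquad
\mathbf M' = \mathbf M - \frac{\partial \mathbf Z}{\partial t}
$$

leaves both of the above equations unchanged, because ##\nabla\cdot(\nabla\times\mathbf Z) = 0## and the two ##\nabla\times\,\partial_t\mathbf Z## terms cancel in the current equation.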

However, P and M are usually introduced with the help of a microscopic model, as the average electric dipole moment of neutral molecules and the average magnetic moment of those molecules, multiplied by the density of those molecules. As long as the medium is thought to consist of small neutral aggregates of bound charged particles, P and M are unique and so are D and H, so there is no additional freedom in their choice.

On the other hand, metals and other conducting media probably cannot be modeled by localized aggregates of bound charged particles, and the quantities P and M cannot have the same meaning as in the above theory. I think, when one uses p and m for conducting media, one really uses polarization potentials with the only defining requirement being the above equations for j and rho.

The confusion arises because for oscillating fields, even the free charges behave as if they were bound - they oscillate around temporary equilibrium positions. The polarization potentials p, m then look almost as if they corresponded to the polarization P and magnetization M from the theory of dielectrics/magnets.
 
  • #57
Jano L. said:
However, P and M are usually introduced with help of a microscopic model, as average electric dipole moment of neutral molecules and average magnetic moment of those molecules, multiplied by density of those molecules. As long as the medium is thought to consist of small neutral aggregates of bound charged particles , P and M are unique and so are D and H, so there is no additional freedom in their choice.
That's an important point. I don't think it is rigorously feasible any more to interpret P and M as dipole moment densities once you are using quantum mechanics.
 
  • #58
I am not sure of that. In molecular spectroscopy, Schroedinger calculated the expected value of the dipole moment of the hydrogen atom using his time-dependent equation. This was one of the triumphs of his wave mechanics. The same equation is used today in orthodox quantum mechanics and the calculation itself remains the same. Similar calculations are made for molecules today - one calculates the induced oscillating dipole moment as a linear combination of matrix elements of the dipole operator. I think as long as the molecule is neutral (most of spectroscopy), the definition of P as

density * <dipole moment>

makes good sense. I agree though that in conducting media, things are different and polarization in this sense most probably cannot be introduced.
 
  • #59
Yes, but in extended media, as opposed to isolated molecules, you cannot say exactly where one atom or molecule ends and the other one begins. Computationally, it is easier to calculate the polarisation directly.
The difference in polarisation of metals and insulators reflects itself in the spatial dispersion (k dependence) of the dielectric function in metals while this dispersion is often negligible in insulators.
 
  • #60
Jano L. said:
I think as long as the molecule is neutral (most of spectroscopy), the definition of P as

density * <dipole moment>

makes good sense.
This definition does not work even for materials which show spontaneous static polarisation (ferroelectrics or piezoelectrics). In fact one needs quite advanced topological methods to calculate this polarisation:
http://www.physics.rutgers.edu/~dhv/pubs/local_preprint/dv_piezo.html
 
  • #61
You can express the H-field as the gradient of a scalar potential when all currents are steady (have no time dependence). The more general case of time-varying currents means you can no longer do that.
 
  • #62
DrDu,
thanks, the connection with the Berry phase is surprising.

Muphrid,
That cannot be true globally, since in vacuum H is proportional to B, which can have closed lines of force when the currents are stationary (circles around a straight wire).
 
  • #63
Yep, you're right, I was trying to rederive it a bit too fast. Per wiki, H can be the gradient of a scalar potential only when there is no free current.
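
A quick numerical check of why (a sketch in SI units, with a made-up current value): around an infinite straight wire, ##H_\phi = I/(2\pi r)##, and the line integral of H around any loop enclosing the wire returns the enclosed free current rather than zero, which a single-valued scalar potential could never produce.

```python
import math

# H around an infinite straight wire (SI): H_phi = I / (2*pi*r).
# The circulation of H around a loop enclosing the wire equals the
# enclosed free current I, so H = -grad(psi) cannot hold globally.
def circulation_of_H(current, radius, segments=100_000):
    dphi = 2 * math.pi / segments
    h_phi = current / (2 * math.pi * radius)
    # sum of H . dl over a circular loop of the given radius
    return sum(h_phi * radius * dphi for _ in range(segments))

I = 2.0  # amperes, invented value
print(circulation_of_H(I, 0.05))  # ≈ 2.0, independent of the radius
```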
 
  • #64
Can I ask one further question?

With all these fields I find it useful to think of a logical cause-effect sequence to understand what happens

First we have a B field, say

then we put in this B-field a paramagnet

The interaction between the dipole moments in the paramagnet and the B-field leads to some degree of alignment, and a nonzero magnetisation vector M arises: M = f(B).

This then adds to the B-field inside the paramagnet, giving it a boost.

But we don't say B = B + M(B). We instead use H. Why is a THIRD field necessary?

so H = B/mu0 - M


This third quantity then has entirely different units, so if I want to know how many Webers are passing through some current loop for a practical calculation, I can't use H, because the units are no longer Wb/m^2.

Can someone frame this apparent overkill of vector fields in terms of a useful, logical calculation? Why can't we just have a single B-field with a corrective term due to bound magnetic dipoles in the same unit system?
 
  • #65
The resulting field ##B## is not a sum of the initial field ##B_{ext}## and magnetization ##M##, but a sum ##B_{ext} + \delta B##, where ##\delta B## is the field of the paramagnet. This contribution due to the paramagnet depends also on the shape of the paramagnet and there is no general relation between it and the magnetization. The field ##H## is just another convenient quantity to describe the field in magnetostatics; again, there is no general relation between it and the magnetic field ##B##.
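
A small numeric illustration of that shape dependence (my sketch; the demagnetizing-factor treatment is standard, but the numbers are invented): for a uniformly magnetized body, ##\mathbf H_{in} = -N\mathbf M## with a shape-dependent factor ##N##, so the relation between ##\mathbf B## and ##\mathbf M## inside changes with geometry.

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability mu0, in T*m/A

def internal_fields(M, N):
    """Uniform magnetization M (A/m), demagnetizing factor N along M:
    H_in = -N*M  and  B_in = mu0*(H_in + M) = mu0*(1 - N)*M."""
    H = -N * M
    B = MU0 * (H + M)
    return H, B

M = 1.0e5  # A/m, invented magnetization
for shape, N in [("long needle, axial", 0.0),
                 ("sphere", 1.0 / 3.0),
                 ("thin slab, normal", 1.0)]:
    H, B = internal_fields(M, N)
    print(f"{shape:>18}: H = {H:+.3e} A/m, B = {B:.3e} T")
```

Same ##\mathbf M##, three different internal ##\mathbf B## fields - which is the sense in which there is no general, geometry-free relation between ##B## and ##M##.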
 
  • #66
Isn't a B-H curve a general relation?
 
  • #67
I do think it ugly and confusing that, in the SI, H and M are defined so as to have different units from B, owing to the factor of ##\mu_0##. Similarly for D, P and E. Can someone persuade me that I'm wrong to think this?
 
  • #68
There is the relation

$$
\mathbf B = \mu\mathbf H
$$

but this is only an approximation, and ##\mu## depends on the kind of material, so I would say there is no general relation between the two. B and H are independent variables with different meanings in general.
 
  • #69
I do think it ugly and confusing that, in the SI, H and M are defined so as to have different units from B, owing to the factor of μ0. Similarly for D, P and E. Can someone persuade me that I'm wrong to think this?
The reason they have different units is a good one.

##\mathbf M## is so defined so that ##\nabla \times\mathbf M## gives magnetization current density ##\mathbf j##, so the unit of ##\mathbf M## is ##\text{A.m}^{-1}##. ##\mathbf B## is defined so that ##q\mathbf v \times \mathbf B## gives force, so the unit of B is ##\text{N.A}^{-1}\text{.m}^{-1}##.
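
The bookkeeping can be spelled out mechanically (a sketch of mine: SI dimensions as exponent tuples over the base units kg, m, s, A): multiplying ##\mathbf M## by ##\mu_0## lands exactly on the units of ##\mathbf B##, which is why SI writes ##\mathbf B = \mu_0(\mathbf H + \mathbf M)##.

```python
# SI dimensions as exponent tuples over the base units (kg, m, s, A).
TESLA = (1, 0, -2, -1)      # [B] = N/(A*m) = kg*s^-2*A^-1
A_PER_M = (0, -1, 0, 1)     # [M] = [H] = A*m^-1
MU0_UNITS = (1, 1, -2, -2)  # [mu0] = kg*m*s^-2*A^-2

def times(u, v):
    """Multiply two quantities' dimensions by adding exponents."""
    return tuple(a + b for a, b in zip(u, v))

# mu0 * M (and likewise mu0 * H) carries the units of B:
print(times(MU0_UNITS, A_PER_M) == TESLA)  # True
```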
 
  • #70
Jano L. Agreed, but I want to go back a stage... In a vacuum, for steady current, we have
$$
\nabla\times\vec{B}=\mu_0 \vec{J}
$$
in which ##\vec J## is the free current density.
I just wish that ##\vec M## had been defined similarly, by ##\nabla\times\vec{M}=\mu_0 \vec{j}##.
 
  • #71
What's the advantage?
 
  • #72
Why have the ##\mu_0## in one equation, but not in the other (post 70)? Why give the field vector due to bound currents a different unit from the field vector due to free currents?
 
  • #73
I think one good answer is the one I've given in #69. In other words, magnetic field and magnetization are different quantities with different meanings and uses, and therefore different units.

Suppose we defined magnetization the way you suggest. What benefit is there in calling magnetization ##\mathbf M## a quantity which does not give the average magnetic moment of a volume element, but ##\mu_0## times that magnetic moment?
 
  • #74
Yes, I agree that you'd lose the interpretation of M as mean magnetic moment per unit volume - but not if the magnitude of the magnetic moment for a current loop had been defined as ##\mu_0 I \Delta S## rather than ##I \Delta S## in the first place (Georgi?)! At this stage, I hear howls of protest!
 
  • #75
So you see, such a definition puts the awkward constant ##\mu_0## into another equation. It is just as bad as the SI convention.

I think the best thing to do is to stick to SI when talking to general audience, and to use whatever system suits you in your own research. I like the convention where only ##c## appears in the Maxwell equations and no crazy ##\epsilon_0,\mu_0## appears at all. But when communicating with others, the ugly SI convention is beneficial because it is widely known and accepted.
 
  • #76
Contrary to the previous opinions, I consider the use of the SI in theoretical electrodynamics a disease ;-)). Even Jackson in the newest 4th German edition of his textbook commits this sin, only to change back to Gaussian units in the part on the relativistic formulation. My reasons are the following:

There is only one electromagnetic field. Introducing a (local) inertial reference frame with corresponding space-time basis vectors, you can split it into electric and magnetic components. Why should those have different units? Of course there is no contradiction in introducing different units for all kinds of things, as the SI shows, but IMHO it obscures the physics of the electromagnetic field.

The macroscopic fields ##\vec{D}## and ##\vec{H}## are emergent quantities that can be derived by averaging over the microscopic fundamental quantities, as are the charge-current densities and magnetization of the fundamental constituents ("elementary particles") of matter. Thus there is no need either to introduce new units for those fields. The physics of the matter is hidden in the constitutive relations, which usually are given in linear-response approximation (see Jackson's or Schwinger's books on this; I'm not aware of a textbook writing the relations in relativistically covariant form, which is another great sin in the didactics of theoretical electromagnetism, but that's another story).

So I prefer to use rationalized Heaviside-Lorentz units as is the standard in theoretical high-energy physics.
 
  • #77
I don't think either of us (Jano L. or I) were trying to defend SI for use in theoretical electrodynamics.
 
  • #78
The SI unit system is a very good one. I used the cgs system when I worked in transformer/inductor/magnetics design in the first half of the '80s. The cgs system has its drawbacks as well. The permeability and permittivity of free space are "1", without any units. Also, the speed of light in relation to mu and epsilon works out great in the SI system. I don't understand why anyone would say that SI is sub-optimal. It is rationalized, it accounts for important phenomena, and to me cgs is inferior. But maybe there are compelling reasons to favor cgs that I do not know. Anyway, my 2 cents.

Claude
 
  • #79
It is interesting that you find bad what I'd call a good thing about the traditional Gaussian units ;-). I think it's a good thing not to introduce superfluous "constants". Why should the (classical) vacuum have a permeability and permittivity other than 1? The classical vacuum is really empty after all, and why should there be anything in it to be polarized, i.e., to react to the electromagnetic field?

I agree that the unrationalized nature of the traditional Gaussian units, i.e., the appearance of (superfluous ;-)) factors of ##4\pi## in the fundamental equations, is not nice either, but there is no problem with using rationalized Heaviside-Lorentz units, as is common in theoretical HEP.

Of course, the SI is the right choice for experimental physics and engineering, because it gives nicely manageable numbers for everyday situations, i.e., 1 V and 1 A are common everyday voltages and currents, respectively. You'd also not give your height in fm or your weight in GeV ;-).
 
  • #80
I don't understand why anyone would say that SI is sub-optimal.
It depends on what it is you are trying to do with the unit system. For basic lab measurements of macroscopic properties, SI is great. In theoretical physics, one sometimes deals with such involved calculations that obscuring the relativistic nature of the theory with the asymmetric SI convention for the units of E and B is not reasonable.
 
  • #81
vanhees71 said:
It is interesting that you find bad what I'd call a good thing about the traditional Gaussian units ;-). I think it's a good thing not to introduce superfluous "constants". Why should the (classical) vacuum have a permeability and permittivity other than 1? The classical vacuum is really empty after all, and why should there be anything in it to be polarized, i.e., to react to the electromagnetic field?

I agree that the unrationalized nature of the traditional Gaussian units, i.e., the appearance of (superfluous ;-)) factors of ##4\pi## in the fundamental equations, is not nice either, but there is no problem with using rationalized Heaviside-Lorentz units, as is common in theoretical HEP.

Of course, the SI is the right choice for experimental physics and engineering, because it gives nicely manageable numbers for everyday situations, i.e., 1 V and 1 A are common everyday voltages and currents, respectively. You'd also not give your height in fm or your weight in GeV ;-).

Bold quote - "Why should the vacuum have permeability/permittivity other than 1 since it is empty?" Good question. It deserves a good answer. Remember that I approach this world from an engineering viewpoint. I make widgets and things that function, so I value μ and ε being physical constants with units.

If I place 2 conducting plates, flat rectangles with area "A", apart with a gap "g", I know that to compute capacitance I use:

C = ε0A/g.

Though the medium between the plates is air (or vacuum), it still possesses energy storage ability, with units. The empty space between the plates stores energy in the form of an E field. A similar scenario holds for inductors with an air (vacuum) core. The energy storage ability, electric and/or magnetic, using vacuum as a core/dielectric, has units and constants.
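
That calculation, as a minimal sketch (the plate geometry below is invented for illustration):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area, gap, eps_r=1.0):
    """C = eps_r * eps0 * A / g, neglecting fringing fields."""
    return eps_r * EPS0 * area / gap

# invented geometry: 10 cm x 10 cm plates, 1 mm vacuum gap
C = parallel_plate_capacitance(0.10 * 0.10, 1.0e-3)
print(f"C = {C * 1e12:.1f} pF")  # ≈ 88.5 pF
```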

To a theoretical scientist this property may not be as relevant as it is to me, but as I've stated, I cannot fathom how anyone can say that SI is not pure enough. Units are man-made and arbitrary; I don't know that there is one "right" way to define them. Comments welcome.

Claude
 
  • #82
Sure, but if there is an electromagnetic field, it's not vacuum; there is a field, and it has energy. Why do you need some ##\epsilon_0## there? The capacitance is (neglecting finite-size effects) ##C=\epsilon_0 A/d##, where ##A## is the area of the plates and ##d## the distance between them. In Heaviside-Lorentz units ##\epsilon_0=1## and the capacitance is measured in centimetres instead of farads. So what?
 
  • #83
Wow, that was a point I was going to make: capacitance in cgs is measured in "cm", and I feel that is horrendous. You say "so what" as in "no big deal". Frankly, I have an issue with a centimeter being both a unit of length and a unit of capacitance. This makes little sense. But like I said, I'm not a theoretician, I invent widgets, so what is convenient for me may not be for others.

Either system works, and a highly skilled practitioner should be able to use either and get the right answer. Just thought I'd mention it.

Claude
 
  • #84
vanhees71 said:
The physics of the matter is hidden in the constitutive relations, which usually are given in linear-response approximation (see Jackson's or Schwinger's books on this; I'm not aware that there is a textbook writing the relations in relativistically covariant form, which is another great sin in didactics of theoretical electromagnetism, but that's another story).

The book 'Electromagnetic Theory' by Attay Kovetz has a section (23) on relativistic response functions. Incidentally, although Kovetz's book is mostly on macroscopic electromagnetism, its definitions of P and M as 'charge-current potentials' agree with the microscopic definitions given in post 46 of this thread, and no averaging over microscopic scales is used.

Francis Chen in the preface of his book on plasma physics says:

"It is, of course, senseless to argue about units; any experienced physicist can defend his favorite system eloquently and with faultless logic."

An interesting argument for E and B having different dimensions is given here http://arxiv.org/abs/physics/0407022.
 
  • #85
It's of course true. Physics is independent of the system of units used, but there are units that are more natural than others. Of course there is no objective reason to prefer one system over another, but I think the SI makes electromagnetism more difficult to learn than Gaussian or Heaviside-Lorentz units.

I further don't understand in which sense you do not need averaging (coarse graining) to go from the microscopic description (QED) to a macroscopic approximation (classical macroscopic electrodynamics). Also the quantities in post #46 only make sense to me in a macroscopic picture. In the microscopic formalism all these quantities are represented by operators! The book by Craig mentions the usual coarse-graining argument already on page 5!
 
  • #86
My EM professor explained that he preferred Gaussian units in EM for much the same reasons he (and a score of others) preferred natural units in SR/GR; we used Gaussian units throughout the class and to be honest it made everything so much more elegant. Practicality of course is an entirely different story.
 
  • #87
vanhees71 said:
I further don't understand in which sense you do not need averaging (coarse graining) to go from the microscopic description (QED) to a macroscopic approximation (classical macroscopic electrodynamics). Also the quantities in post #46 only make sense to me in a macroscopic picture. In the microscopic formalism all these quantities are represented by operators! The book by Craig mentions the usual coarse-graining argument already on page 5!

I believe the microscopic magnetization in iron for example can be measured by spin-polarized neutron scattering. My reference is the article 'The microscopic magnetization: concept and application' by L.L. Hirst which begins by discussing the view that magnetization is a macroscopic quantity defined by averaging.
 
  • #88
Also worth noting is that in the cgs system, which I consider a good system (not the best), not only is capacitance expressed in cm, but so is inductance. So we have length, inductance, and capacitance all expressed in the same units.

They are not the same entity at all. While L & C are closely related, I think that they are distinct from length. L & C behave differently; they exhibit markedly different circuit characteristics. To fully distinguish them, differing units are needed.

Like I said, I've used cgs in magnetics design and consistently arrived at the correct answer. But SI makes more sense to me, a widget designer who works in industry, not a theoretical research lab. To me, I like the distinction between H & B. They are both important and neither is "derived" from the other. In Halliday-Resnick's elementary physics text, the authors stated that the decision to treat B as the basis, and regard H as derived from B, is purely arbitrary. I concur.

Neither is more "fundamental" at all. Our universe consists of free space as well as molecular structures and quantized atomic energy levels. Polarization, electric or magnetic, is just as "real" as a vacuum. Besides, more than one poster has attempted to propagate the crackpot heresy that B is the counterpart of E, while H is that of D. This is nonsense.

The problem is that E behaves like B in one respect: relativity frame transformations show that E corresponds with v×B. Likewise D with v×H. Same units. So in one respect, B appears to correlate with E, and H with D.

But there are 2 cases where it is the opposite. If we energize a dielectric capacitor, we get the D-E hysteresis curve. If E is taken down to zero, D remains, as well as remnant energy. The D represents the polarization or remanence.

But in the magnetic domain, if we energize a ferrous core with current, we get the B-H hysteresis curve. When the current is taken down to zero, it is H that vanishes. In the dielectric cap case, it was E that vanished. The remanence and stored energy is B, not H. So in the 2 cases, when external power is cut, the vanished quantities are E & H, while the remnant quantities are D & B. This correlation is opposite to that of the relativity frame transformations.

Another example which demolishes the "E relates to B, D to H" nonsense is as follows. A capacitor is formed with 2 dielectrics in series between the plates. The E fields for the 2 media differ, but the D fields are the same, except in the case of surface charge at the boundaries, where the 2 D values differ by the surface charge density ρs. So for the 2 series-configured dielectric regions D1 = D2, except at the boundaries, where D1 - D2 = ρs.

If the 2 dielectrics are in parallel, we get equal E in both media, but D varies in accordance with ε. I.e. E1 = E2, and D1/D2 = ε1/ε2.

In the magnetic domain, take a ferrous core with an air gap, a series magnetic circuit. The flux densities B are the same for series media of differing μ values. H, however, differs. So B1 = B2, while H1/H2 = μ2/μ1.

If the gap is ferrous media in *parallel* with air, we get the same H value for both, with differing B values. H1 = H2, and B1/B2 = μ1/μ2.

In parallel the E values are the same, as are the H values. In series D values are the same (differing by only a constant at a boundary with surface polarization), and B values are the same.
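
Claude's series-dielectric bookkeeping can be sketched numerically (the voltage and layer values below are invented): with no free surface charge at the interface, D is common to the layers, and the E fields split in inverse proportion to the permittivities.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def series_dielectric_fields(voltage, layers):
    """Dielectric layers in series between capacitor plates.
    layers: list of (thickness_m, eps_r).  With no free surface charge
    at the interfaces, D is the same in every layer:
        D = V / sum(g_i / (eps0 * eps_ri)),   E_i = D / (eps0 * eps_ri)."""
    D = voltage / sum(g / (EPS0 * er) for g, er in layers)
    E = [D / (EPS0 * er) for g, er in layers]
    return D, E

# invented stack: 100 V across 1 mm of eps_r = 2 plus 1 mm of eps_r = 4
D, (E1, E2) = series_dielectric_fields(100.0, [(1e-3, 2.0), (1e-3, 4.0)])
print(E1 / E2)  # = eps2/eps1 = 2.0
```

The series magnetic circuit works the same way with B in place of D and 1/μ in place of 1/ε.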

This case clearly relates E with H, and D with B.

Now the final test. E and D can exist as conservative vector fields, as well as non-conservative ones. They can be closed loops, solenoidal, with a curl and no divergence, or curvilinear segments with a beginning and an end, having no curl but non-zero divergence.

But B and H do not exist in the segmented form, only closed loop form. The divergence of these vector fields is always zero. There is no correlation between E & H, nor E & B, when considering this property.

Is it B or H that is the counterpart of E? One test shows it to be B, two tests show it to be H, and one test shows it to be neither. So I can only conclude that it is impossible to say which is the counterpart of E, neither B nor H can be said to be the case.

All 4 quantities are real, significant, helpful, and relevant. Setting 2 as basis quantities and treating the other 2 as derived is not a problem, but the choice of which are the basis quantities is arbitrary. Personally, I recommend the following: don't worry about it.

I use B & H all the time, and the laws of physics so far have not demonstrated that one is more basic than the other. I will likely get flamed for this, but I felt compelled to say it. Have a great Labor Day weekend.

Claude
 
  • #89
Claude, in macroscopic theory I agree with you: all of E, B, D, H are equally important. However, nowadays we know about atoms and molecules and have the possibility of understanding the macroscopic theory on the basis of microscopic theory. In microscopic theory, 4 independent field quantities make little sense. Usually we think that there is only one EM field with two components, electric and magnetic, as vanhees71 said. These may be denoted ##\mathbf e, \mathbf b## (microscopic), and their meaning (definition) is that they give us the force on a point-like test particle

$$
\mathbf F = q\mathbf e + q\frac{\mathbf v}{c} \times \mathbf b.
$$

So in this picture, the fields e, b are more basic, since they directly give the force. The macroscopic fields E, B can then be thought of as a kind of average of these microscopic fields. The fields D, H are then necessarily only auxiliary quantities that play little role in the logic of microscopic theory; there is little reason to consider microscopic fields ##\mathbf d,\mathbf h##.
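
The defining relation above, as a literal sketch (plain tuples, keeping the explicit Gaussian-style factor of c from the formula; the charge, field, and velocity values are invented):

```python
def lorentz_force(q, e, b, v, c=1.0):
    """F = q*e + (q/c) * v x b -- the defining relation for the
    microscopic fields e, b acting on a point-like test charge."""
    cross = (v[1] * b[2] - v[2] * b[1],
             v[2] * b[0] - v[0] * b[2],
             v[0] * b[1] - v[1] * b[0])
    return tuple(q * e[i] + (q / c) * cross[i] for i in range(3))

# a unit charge moving along x through b along z feels a force along -y
F = lorentz_force(1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
print(F)  # (0.0, -1.0, 0.0)
```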
 
  • #90
marmoset said:
I believe the microscopic magnetization in iron for example can be measured by spin-polarized neutron scattering. My reference is the article 'The microscopic magnetization: concept and application' by L.L. Hirst which begins by discussing the view that magnetization is a macroscopic quantity defined by averaging.

I haven't read the paper in detail, but as far as I understand it, this paper implicitly also uses averaging procedures, because it describes magnetization as a "density of dipole moments". This is already a semiclassical description.

The most fundamental treatment is in-medium QED. There you define in-medium (real-time Schwinger-Keldysh) Green's functions and then perform some coarse-graining procedure on the Kadanoff-Baym equations, which are derived from the self-consistent 2PI formalism. Coarse graining is formally done by assuming a separation of microscopic and macroscopic scales and using the gradient expansion of the KB equations to obtain semi-classical transport equations. There are various levels of description. Which one to use is a question of the application you have in mind, and most of the approximation procedures are not derivable in a strict way from the fundamental equations of QED but are very much driven by phenomenology. The cited paper, too, starts at a level that is already pretty far from QED, in the sense that implicitly some averaging procedure has been assumed. The very fact that you describe, e.g., magnetization as a density of dipole moments is already such an effective description from an averaging procedure! There is nothing wrong with this. To the contrary, it explains to a certain extent how the macroscopic phenomenology is related to the very fundamental level, as far as we know it today, and how the classical behavior can be understood as an emergent phenomenon from the most fundamental level of QED, incomplete as this understanding still is.
 
  • #91
cabraham said:
[...]

Like I said, I've used cgs in magnetics design and consistently arrived at the correct answer. But SI makes more sense to me as a widget designer who works in industry, not a theoretical research lab. I like the distinction between H & B: they are both important and neither is "derived" from the other. In Halliday-Resnick's elementary physics text, the authors state that the decision to treat B as the basis, and regard H as derived from B, is purely arbitrary. I concur.

[...]

Neither is the more "fundamental" at all. Our universe consists of free space as well as molecular structures and quantized atomic energy levels. Polarization, electric or magnetic, is just as "real" as a vacuum. Besides, more than one poster has attempted to propagate the crackpot heresy that B is the counterpart of E, while H is that of D. This is nonsense.

Claude

From the point of view of classical electrodynamics (which itself is an approximation of QED), the fundamental field is the one and only electromagnetic field, whose components with respect to an arbitrary inertial reference frame (for simplicity let's consider special-relativistic spacetime, neglecting gravity) we usually call \vec{E} and \vec{B}. These are not "counterparts" but just components of the electromagnetic field, which in the manifestly covariant description is given as the antisymmetric 2nd-rank Faraday tensor field in Minkowski space, F_{\mu \nu}. This is not crackpottery but has been well established since Minkowski's great work on these issues in 1908.

When it comes to electromagnetism in the presence of matter, you can describe this with pretty good accuracy with linear-response theory and the usual (relativistic!) constitutive relations. It is very clear, and was already clarified by Minkowski in 1908, that \vec{D} and \vec{H} are the components of the corresponding antisymmetric tensor D_{\mu \nu}. Of course you have to distinguish between these two tensor fields, and both are important in macroscopic electromagnetics, but it's not very clear to me why these quantities should have different units. In the SI, even components belonging to the same tensor field have different units. This is confusing rather than illuminating. It's of course not wrong, because you simply include the appropriate \mu_0 and \epsilon_0 conversion factors of the SI, but it's unnecessarily complicated for the theoretical treatment, which is best done in the manifestly covariant way; you can easily "translate" this into the 1+3-dimensional formalism by splitting into temporal and spatial components whenever necessary for applications.
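For concreteness (my own addition; sign conventions vary between textbooks), in Gaussian units the two tensors can be displayed as

$$
F_{\mu\nu} =
\begin{pmatrix}
0 & E_x & E_y & E_z \\
-E_x & 0 & -B_z & B_y \\
-E_y & B_z & 0 & -B_x \\
-E_z & -B_y & B_x & 0
\end{pmatrix},
\qquad
D_{\mu\nu} =
\begin{pmatrix}
0 & D_x & D_y & D_z \\
-D_x & 0 & -H_z & H_y \\
-D_y & H_z & 0 & -H_x \\
-D_z & -H_y & H_x & 0
\end{pmatrix},
$$

so that ##(\vec E, \vec B)## pair up within one tensor and ##(\vec D, \vec H)## within the other.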

You find a very good description of all this already in pretty classical textbooks like vol. III of the Sommerfeld Lectures (which, by the way, also use the SI!) or Abraham/Becker/Sauter. A very nice, more up-to-date treatment is also found in vol. VIII of Landau and Lifshitz. A more formal transport-theoretical treatment is given in the book by de Groot and Suttorp, Foundations of Electrodynamics.

Of course, sometimes you have memory effects and spatial correlations in some materials like ferromagnets (hysteresis) etc. That's no contradiction to what I've said above.
 
  • #92
Jano L. said:
Claude, in the macroscopic theory I agree with you: all four fields E, B, D, H are equally important. However, nowadays we know about atoms and molecules and can understand the macroscopic theory on the basis of the microscopic theory. In the microscopic theory, 4 independent field quantities make little sense. Usually we think that there is only one EM field with two components, electric and magnetic, as vanhees71 said. These may be denoted as ##\mathbf e, \mathbf b## (microscopic), and their meaning (definition) is that they give us the force on a point-like test particle

$$
\mathbf F = q\mathbf e + q\frac{\mathbf v}{c} \times \mathbf b.
$$

So in this picture, the fields e,b are more basic, since they directly give the force. The macroscopic fields E,B can then be obtained as a kind of average of these microscopic fields. The fields D,H are then necessarily only auxiliary quantities that play little role in the logic of the microscopic theory; there is little reason to consider microscopic fields ##\mathbf d,\mathbf h##.

I already acknowledged that relativistic transformations per Lorentz, the Lorentz force, etc., are expressed in canonical form via E & B, since B is independent of the medium under this narrow condition. Computing the force a mag field exerts on a charge, without considering what generates said mag field, is best done using B as the basis, as it is medium-independent.

But take another example where we generate a mag field by setting up a current in a wire loop. The loop is circular w/ radius R, the current is I, what is B/H at the center?

Per Biot-Savart: B = μI/2R, or H = I/2R.
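A minimal numeric sketch of these two formulas (my own, with illustrative values, assuming SI units and a vacuum/non-magnetic medium so μ = μ0):

```python
import math

mu0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def H_center(I, R):
    """Magnetic field intensity H at the center of a circular loop (A/m)."""
    return I / (2 * R)

def B_center(I, R, mu_r=1.0):
    """Magnetic flux density B at the center (tesla); mu_r = 1 in vacuum."""
    return mu_r * mu0 * H_center(I, R)

I, R = 2.0, 0.05          # example numbers: 2 A through a 5 cm radius loop
print(H_center(I, R))     # 20.0 A/m, medium-independent
print(B_center(I, R))     # ~2.513e-05 T, picks up mu0
```

Note that H here depends only on the current and geometry, while B additionally carries the permeability, which is exactly the point being argued.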

If we want the mag field generated by a current loop, the canonical form would be the equation with H, not B, as it is medium-independent. Do you see what I mean about this question being arbitrary? A particle physicist bangs particles together in a super-conducting super-collider, cyclotron, etc. The force on a free charge in the presence of a mag field is best expressed using B.

But a widget inventing nerd like myself, deals with motors, generators, transformers, relays, solenoids, etc. If I'm generating a mag field with a current, H is medium-independent and canonical.

Bottom line, if we wish to attract a charge to a wire loop, we cannot eliminate μ either way. The H field generated by a current loop is independent of μ. But the attractive force depends on μH since it depends on B. So a charge in the center of the loop will incur a force of e(vXB) = e(vXμH) = eμ(vXI/2R).

The permeability constant μ shows up no matter if you use H or B.
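A tiny sketch of this point (my own illustrative numbers, vacuum assumed): the force on the charge comes out identical whether one starts from B or from H, and μ0 appears in either route.

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability (SI)
q, v = 1.602e-19, 1.0e5     # example: electron-scale charge, 1e5 m/s, v perpendicular to B
I, R = 2.0, 0.05            # example: 2 A loop of radius 5 cm

H = I / (2 * R)             # medium-independent field from the loop
B = mu0 * H                 # the medium (here vacuum) enters through mu0

F_via_B = q * v * B         # start from B ...
F_via_H = q * v * (mu0 * H) # ... or start from H: mu0 shows up either way
print(F_via_B == F_via_H)   # True
```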

Likewise 2 parallel wires each carrying current incur attractive/repulsive force based on the product of the 2 currents, distance between them, and μ the medium.

If Lorentz force acting on a charge is more important to you than mag field surrounding a current loop, then it makes sense to use B as the basis quantity, then derive H as B/μ. Either method produces the correct answer. If the physics community prefers to regard B as the basis, no problems should be created by doing so.

I just want to emphasize that such a convention is arbitrary; one could just as well treat H as the basis. Depending on boundary conditions, like the ones I mentioned with ferrous cores having series and parallel boundaries, the quantity that is independent of the media could be either B or H.

It's no big deal, you can start at B basis, then derive H as B/μ. But you can do the opposite. If we carefully keep track of our variables, the answer should be the same either way.

One exception is when the medium is ferrous and operating at or near saturation. Then the relation B = μH is no longer linear. In such a case, the B-H curve must be examined, and graphical analysis can be used. B & H in this case cannot be interchanged because μ is not constant.
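For the saturation case, one common engineering approach (a sketch of mine; the B-H table below is made-up illustrative data, not from any real material datasheet) is to interpolate a measured B-H curve instead of assuming a constant μ:

```python
import bisect

# (H in A/m, B in tesla) -- hypothetical soft-iron-like curve, flattening near saturation
bh_curve = [(0, 0.0), (100, 0.6), (200, 1.0), (500, 1.4), (1000, 1.6), (5000, 1.75)]

def B_of_H(H):
    """Piecewise-linear interpolation of B for a given H, clamped at the table ends."""
    Hs = [p[0] for p in bh_curve]
    if H <= Hs[0]:
        return bh_curve[0][1]
    if H >= Hs[-1]:
        return bh_curve[-1][1]
    i = bisect.bisect_right(Hs, H)
    (H0, B0), (H1, B1) = bh_curve[i - 1], bh_curve[i]
    return B0 + (B1 - B0) * (H - H0) / (H1 - H0)

print(B_of_H(150))   # ~0.8 T on this made-up curve
```

Here the effective μ = B/H changes along the curve, which is the graphical analysis described above in numerical form.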

I will elaborate. Comments/questions welcome.

Claude
 
Last edited:
  • #93
If I'm generating a mag field with a current, H is medium-independent and canonical.

I do not know what you mean by the word "canonical". The field H depends on the conduction currents, which are well controlled, so it can be easily calculated as their function. The field B depends also on the properties of the medium and may be harder to calculate, especially in ferromagnetic materials.

This does not make either of them more basic.

But the attractive force depends on μH since it depends on B. So a charge in the center of the loop will incur a force of e(vXB) = e(vXμH) = eμ(vXI/2R).

The force on a charge is established to be e(vXB) in vacuum, where μ = μ0. In a medium, I do not think one can simply take the same formula. The microscopic field in the medium varies on the atomic length scale, and most probably neither H nor B alone is sufficient to find the force.

I would say that in the macroscopic theory, both fields are equally important and neither seems more basic. But in the microscopic theory, which is more detailed than the macroscopic theory and explains it in more elementary notions, there is room for only one magnetic field. It used to be denoted by ##\mathbf h##, but I think it should be written as ##\mathbf b##.
 
  • #94
Jano L. said:
I do not know what you mean by the word "canonical". The field H depends on the conduction currents, which are well controlled, so it can be easily calculated as their function. The field B depends also on the properties of the medium and may be harder to calculate, especially in ferromagnetic materials.

This does not make either of them more basic.


My point exactly. I've been saying the same since day 1. I concede to your point with no argument whatsoever. The current loop was brought up by me not to establish H as a basis, but rather to illustrate the futility of trying to establish either as more basic. We agree perfectly.

The force on a charge is established to be e(vXB) in vacuum, where μ = μ0. In a medium, I do not think one can simply take the same formula. The microscopic field in the medium varies on the atomic length scale, and most probably neither H nor B alone is sufficient to find the force.

I would say that in the macroscopic theory, both fields are equally important and neither seems more basic. But in the microscopic theory, which is more detailed than the macroscopic theory and explains it in more elementary notions, there is room for only one magnetic field. It used to be denoted by ##\mathbf h##, but I think it should be written as ##\mathbf b##.

Well then, we now have to form a consensus on just what is the most "elementary".
 
  • #95
The main reason I consider ##\mathbf B## as basic is that microscopic theories give it as the result of (one of many kinds of) averaging of the microscopic field ##\mathbf b##, while ##\mathbf H## does not seem to be the result of such direct averaging - there is no microscopic ##\mathbf h##. Instead, ##\mathbf H## is defined only in the macroscopic theory as the quantity whose curl gives only part of the total current density - the conduction current density due to mobile charges - via

$$
\nabla \times \mathbf H = \mathbf j_{\text{cond.}}
$$
 
  • #96
Jano L. said:
The main reason I consider ##\mathbf B## as basic is that microscopic theories give it as the result of (one of many kinds of) averaging of the microscopic field ##\mathbf b##, while ##\mathbf H## does not seem to be the result of such direct averaging - there is no microscopic ##\mathbf h##. Instead, ##\mathbf H## is defined only in the macroscopic theory as the quantity whose curl gives only part of the total current density - the conduction current density due to mobile charges - via

$$
\nabla \times \mathbf H = \mathbf j_{\text{cond.}}
$$

I understand, but I still need an answer for the following. Take a parallel-plate cap w/ dielectric, ε>1, and excite it with an ac generator plus resistor. There is a sinusoidal I & V in the cap. The conduction current in the cap leads and plates, and the associated mag field intensity, will obey the relation you posted above:

$$
\nabla \times \mathbf H = \mathbf j_{\text{cond.}}
$$

What about the mag field inside the dielectric per Ampere's Law, aka "displacement current"? It is:

$$
\nabla \times \mathbf H = \frac{\partial \mathbf D}{\partial t}.
$$

Is there a B inside the dielectric but no H present? I find that irreconcilable, since the dielectric contains no ferrous material, hence B = μH. So even inside the dielectric, μ = μ0, so B & H co-exist. Why one would be the basis vs. the other seems pretty arbitrary, unless I am missing something else not covered yet. The atomic charges are being displaced in the dielectric: electrons move in one direction, with the nuclear protons moving in the opposite direction, both in a sinusoidal fashion. By definition these displaced charges constitute an ac current, and thus are surrounded by an ac mag field intensity, as well as density.

Inside the dielectric, both B & H must exist in unison, inter-related per B = μ0H, as long as no ferrous material is involved.
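To put rough numbers on the capacitor example (my own sketch; the drive amplitude, frequency, and εr are illustrative assumptions): applying the integral form of Ampere's law to a circle of radius r coaxial with the plates gives 2πrH = πr²·dD/dt, so H(r) = (r/2)·ε·dE/dt, and B = μ0H alongside it.

```python
import math

eps0 = 8.854e-12           # vacuum permittivity (F/m)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability (H/m)
eps_r = 4.0                # hypothetical dielectric constant
E0, f = 1.0e4, 50.0        # assumed drive: 10 kV/m peak field at 50 Hz

def H_peak(r):
    """Peak H (A/m) at radius r from the axis, due to displacement current."""
    dE_dt_peak = 2 * math.pi * f * E0        # peak of dE/dt for a sinusoid
    return (r / 2) * eps_r * eps0 * dE_dt_peak

def B_peak(r):
    """Non-ferrous dielectric, so B = mu0 * H: both fields coexist."""
    return mu0 * H_peak(r)

print(H_peak(0.01))   # ~5.6e-7 A/m at r = 1 cm: tiny, but nonzero
```

So at these assumed values both H and B are present inside the dielectric, just very small, consistent with the post's conclusion.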

Again, I am having trouble understanding the line of demarcation between "mAcroscopic vs. mIcroscopic". Thanks.

Any comments would be appreciated.

Claude
 
  • #97
Of course, both fields are present and non-zero in the dielectric. The distinction between macro and micro is that the macroscopic field is the one used in the basic theory, where we ignore the atomic structure of the material. The microscopic field varies randomly on the scale of molecules, since it reflects their presence.
 
  • #98
I'm not sure if this has been answered yet, but why do H and D have different units?

Can't we define: epsilon*div(D) = free charge density?
Can't we define: curl(H)/mu - epsilon*dD/dt = free current density?

Can't we do the same for P and M with the bound charges and currents?

Then E = P + D and B = M + H

All have the same units.

What does this neglect?
 
  • #99
Jano L. said:
Of course, both fields are present and non-zero in the dielectric. The distinction between macro and micro is that the macroscopic field is the one used in the basic theory, where we ignore the atomic structure of the material. The microscopic field varies randomly on the scale of molecules, since it reflects their presence.

Ok, we are in agreement, but no closer to solving the question "mAcro vs. mIcro". You gave a definition, but I would like an example akin to my cap given above, showing that in a vacuum, or in a non-ferrous material, just 1 quantity is needed. Thanks in advance.

Claude

P.S. In my cap example, the dielectric being non-ferrous would mean that 2 quantities are unneeded, only 1 being necessary. But it looks like we get 2 of them. How do we decide which is the basis, and which is derived?
 
Last edited:
  • #100
Formally it's pretty simple: the fundamental microscopic description is quantum electrodynamics for many-body systems; the macroscopic description is derived from that via one or another type of coarse graining, i.e., the derivation of transport equations for the matter coupled to mean fields (Vlasov-Boltzmann-Uehling-Uhlenbeck -> Vlasov-Boltzmann) and some simplifications. The usual "macroscopic classical electrodynamics" we learn in introductory E+M is then linear-response theory, where "matter" is reduced to the electromagnetic four-current and constitutive relations (response functions like the dielectric function, conductivity, ...).

To really establish these connections is, of course, rather complicated.
 
