In magnetism, what is the difference between the B and H fields?

Thread summary: The discussion clarifies the distinction between the B and H fields in magnetism: the B field is the total magnetic flux density, while the H field is the applied-field quantity obtained by subtracting the material's magnetization. The two are related by ##\mathbf B = \mu_0 \mathbf H## in vacuum and ##\mathbf B = \mu_0(\mathbf H + \mathbf M)## in matter, where ##\mathbf M## is the magnetization of the material. The H field is often regarded as a mathematical construct that simplifies calculations by excluding the effects of induced magnetizations, while the B field is viewed as the more fundamental physical quantity. The conversation also touches on the practical implications of using these fields, emphasizing that B and H have different units in SI, which can lead to confusion. Ultimately, understanding the interplay between these fields is crucial for accurate modeling in electromagnetism.
  • #61
You can express the H-field as the gradient of a scalar potential when all currents are steady (have no time dependence). The more general case of time-varying currents means you can no longer do that.
 
  • #62
DrDu,
thanks, the connection with the Berry phase is surprising.

Muphrid,
That cannot be true globally, since in vacuum H is proportional to B, which can have closed lines of force when currents are stationary (circles around a straight wire).
 
  • #63
Yep, you're right, I was trying to rederive it a bit too fast. Per wiki, H can be a gradient of a scalar potential only when there is no free current.
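For reference, the standard magnetostatics statement behind this (not spelled out explicitly in the thread): where the free current density vanishes, ##\nabla\times\mathbf H = 0##, so H is locally a gradient, and ##\nabla\cdot\mathbf B = 0## with ##\mathbf B = \mu_0(\mathbf H + \mathbf M)## fixes the potential's equation:

$$
\nabla\times\mathbf H = \mathbf J_{\text{free}} = 0 \;\Rightarrow\; \mathbf H = -\nabla\varphi_M,
\qquad
\nabla\cdot\mathbf B = \mu_0\,\nabla\cdot(\mathbf H + \mathbf M) = 0 \;\Rightarrow\; \nabla^2\varphi_M = \nabla\cdot\mathbf M.
$$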
 
  • #64
Can I ask one further question?

With all these fields I find it useful to think of a logical cause-effect sequence to understand what happens.

First we have a B field, say.

Then we put a paramagnet into this B-field.

The interaction between the dipole moments in the paramagnet and the B-field leads to some degree of alignment, and a nonzero magnetisation vector ##\mathbf M## arises: ##\mathbf M = f(\mathbf B)##.

This then adds to the B-field inside the paramagnet, giving it a boost.

But we don't just write the total field as the external B plus a magnetization correction. We instead introduce H. Why is a THIRD field necessary?

So ##\mathbf H = \mathbf B/\mu_0 - \mathbf M##.


This third quantity then has entirely different units, so if I want to know how many Webers are passing through some current loop for a practical calculation, I can't use H, because the units are no longer Wb/m^2.

Can someone frame this apparent overkill of vector fields in terms of a useful, logical calculation? Why can't we just have a single B-field with a corrective term due to bound magnetic dipoles in the same unit system?
 
  • #65
The resulting field ##B## is not a sum of the initial field ##B_{ext}## and magnetization ##M##, but a sum ##B_{ext} + \delta B##, where ##\delta B## is the field of the paramagnet. This contribution due to the paramagnet depends also on the shape of the paramagnet and there is no general relation between it and the magnetization. The field ##H## is just another convenient quantity to describe the field in magnetostatics; again, there is no general relation between it and the magnetic field ##B##.
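To make the shape dependence concrete, here is a minimal numeric sketch of a standard textbook result (added for illustration, not taken from the thread): a uniformly magnetized sphere has demagnetizing factor 1/3, so inside it ##\mathbf H = -\mathbf M/3## and ##\mathbf B = \tfrac{2}{3}\mu_0\mathbf M##, while a long rod magnetized along its axis has demagnetizing factor ≈ 0 and a different B for the same M.

```python
# Demagnetizing-factor sketch: B inside a uniformly magnetized body
# depends on its shape, even for identical magnetization M.
# Standard textbook values: N = 1/3 (sphere), N -> 0 (long axial rod).

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def fields_inside(M, N):
    """Return (H, B) inside a uniformly magnetized body with
    demagnetizing factor N (no external field applied)."""
    H = -N * M            # demagnetizing field
    B = MU0 * (H + M)     # B = mu0 * (H + M)
    return H, B

M = 1.0e5  # magnetization, A/m (illustrative value)
H_sphere, B_sphere = fields_inside(M, 1.0 / 3.0)
H_rod, B_rod = fields_inside(M, 0.0)

print(f"sphere: H = {H_sphere:.3e} A/m, B = {B_sphere:.3e} T")
print(f"rod:    H = {H_rod:.3e} A/m, B = {B_rod:.3e} T")
```

Same M, different B: this is the shape-dependent contribution ##\delta B## mentioned above.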
 
  • #66
Isn't a B-H curve a general relation?
 
  • #67
I do think it ugly and confusing that, in the SI, H and M are defined so as to have different units from B, owing to the factor of ##\mu_0##. Similarly for D, P and E. Can someone persuade me that I'm wrong to think this?
 
  • #68
There is the relation

$$
\mathbf B = \mu\mathbf H
$$

but this is only an approximation, and ##\mu## depends on the kind of material, so I would say there is no general relation between the two. B and H are independent variables with different meanings in general.
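A quick numeric illustration of how strongly ##\mu## depends on the material (the relative permeabilities below are rough, order-of-magnitude values chosen for illustration):

```python
# B = mu_r * mu0 * H is a linear approximation; mu_r varies enormously
# between materials, so there is no single universal B(H) relation.

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

# rough, illustrative relative permeabilities
materials = {
    "vacuum": 1.0,
    "aluminium (paramagnet)": 1.000022,
    "soft iron (ferromagnet, small-signal)": 5000.0,
}

H = 100.0  # applied field, A/m
for name, mu_r in materials.items():
    B = mu_r * MU0 * H
    print(f"{name:40s} B = {B:.4e} T")
```

The same H gives B values spanning nearly four orders of magnitude, and for a ferromagnet even this linear form breaks down once hysteresis matters.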
 
  • #69
I do think it ugly and confusing that, in the SI, H and M are defined so as to have different units from B, owing to the factor of μ0. Similarly for D, P and E. Can someone persuade me that I'm wrong to think this?
The reason they have different units is a good one.

##\mathbf M## is so defined so that ##\nabla \times\mathbf M## gives magnetization current density ##\mathbf j##, so the unit of ##\mathbf M## is ##\text{A.m}^{-1}##. ##\mathbf B## is defined so that ##q\mathbf v \times \mathbf B## gives force, so the unit of B is ##\text{N.A}^{-1}\text{.m}^{-1}##.
 
  • #70
Jano L. Agreed, but I want to go back a stage... In a vacuum, for steady current, we have
$$
\nabla\times\mathbf B=\mu_0 \mathbf J
$$
in which ##\mathbf J## is the free current density.
I just wish that ##\mathbf M## had been defined similarly, by ##\nabla\times\mathbf M=\mu_0 \mathbf j##.
 
  • #71
What's the advantage?
 
  • #72
Why have the ##\mu_0## in one equation, but not in the other (post 70)? Why give the field vector due to bound currents a different unit from the field vector due to free currents?
 
  • #73
I think one good answer is the one I've given in #69. In other words, magnetic field and magnetization are different quantities with different meanings and uses, and therefore different units.

Suppose we defined magnetization in the way you suggest. What benefit is there in calling a quantity "magnetization" ##\mathbf M## when it no longer gives the average magnetic moment of a volume element, but ##\mu_0## times that magnetic moment?
 
  • #74
Yes, I agree that you'd lose the interpretation of M as mean magnetic moment per unit volume - but not if the magnitude of the magnetic moment of a current loop had been defined as ##\mu_0 I \Delta S## rather than ##I \Delta S## in the first place (Georgi?)! At this stage, I hear howls of protest!
 
  • #75
So you see, such a definition just puts the awkward constant ##\mu_0## into another equation. It is just as bad as the SI convention.

I think the best thing to do is to stick to SI when talking to general audience, and to use whatever system suits you in your own research. I like the convention where only ##c## appears in the Maxwell equations and no crazy ##\epsilon_0,\mu_0## appears at all. But when communicating with others, the ugly SI convention is beneficial because it is widely known and accepted.
 
  • #76
Contrary to the previous opinions, I consider the use of the SI in theoretical electrodynamics a disease ;-)). Even Jackson in the newest 4th German edition of his textbook commits this sin, only to change back to Gaussian units in the part on the relativistic formulation. My reasons are the following:

There is only one electromagnetic field. Introducing a (local) inertial reference frame with corresponding space-time basis vectors you can split it in electric and magnetic components. Why should those have different units? Of course there is no contradiction in introducing different units for all kinds of things as the SI shows, but IMHO it obscures the physics of the electromagnetic field.

The macroscopic fields ##\mathbf D## and ##\mathbf H## are emergent quantities that can be derived by averaging over the microscopic fundamental quantities, such as the charge-current densities and magnetization of the fundamental constituents ("elementary particles") of matter. Thus there is no need either to introduce new units for those fields. The physics of the matter is hidden in the constitutive relations, which are usually given in the linear-response approximation (see Jackson's or Schwinger's books on this; I'm not aware of a textbook writing the relations in relativistically covariant form, which is another great sin in the didactics of theoretical electromagnetism, but that's another story).

So I prefer to use rationalized Heaviside-Lorentz units as is the standard in theoretical high-energy physics.
 
  • #77
I don't think either of us (Jano L. or I) were trying to defend SI for use in theoretical electrodynamics.
 
  • #78
The SI unit system is a very good one. I used the cgs system when I worked in transformer/inductor/magnetics design in the first half of the 80's. The cgs system has its drawbacks as well. The permeability and permittivity of free space are "1", without any units. Also, the relation between the speed of light and ##\mu_0##, ##\epsilon_0## works out neatly in the SI system. I don't understand why anyone would say that SI is sub-optimal. It is rationalized, it accounts for important phenomena, and to me cgs is inferior. But maybe there are compelling reasons to favor cgs that I do not know. Anyway, my 2 cents.

Claude
 
  • #79
It is interesting that you find bad what I'd call a good thing about the traditional Gaussian units ;-). I think it's a good thing not to introduce superfluous "constants". Why should the (classical) vacuum have a permeability and permittivity other than 1? The classical vacuum is really empty after all, so why should there be anything to be polarized, i.e., to react to the electromagnetic field?

I agree that the unrationalized nature of the traditional Gaussian units, i.e., the appearance of (superfluous ;-)) factors of ##4\pi## in the fundamental equations, is not nice either, but there is no problem in using rationalized Heaviside-Lorentz units, as is common in theoretical HEP.

Of course, the SI is the right choice for experimental physics and engineering, because it gives nicely manageable numbers for everyday situations, i.e., 1 V and 1 A are common everyday voltages and currents, respectively. You'd also not give your height in fm or your weight in GeV ;-).
 
  • #80
I don't understand why anyone would say that SI is sub-optimal.
It depends on what it is you are trying to do with the unit system. For basic lab measurements of macroscopic properties SI is great. In theoretical physics, one sometimes deals with calculations so involved that obscuring the relativistic nature of the theory with the asymmetric SI convention for the units of E and B is not reasonable.
 
  • #81
vanhees71 said:
It is interesting that you find bad what I'd call a good thing about the traditional Gaussian units ;-). I think it's a good thing not to introduce superfluous "constants". Why should the (classical) vacuum have a permeability and permittivity other than 1? The classical vacuum is really empty after all, so why should there be anything to be polarized, i.e., to react to the electromagnetic field?

I agree that the unrationalized nature of the traditional Gaussian units, i.e., the appearance of (superfluous ;-)) factors of ##4\pi## in the fundamental equations, is not nice either, but there is no problem in using rationalized Heaviside-Lorentz units, as is common in theoretical HEP.

Of course, the SI is the right choice for experimental physics and engineering, because it gives nicely manageable numbers for everyday situations, i.e., 1 V and 1 A are common everyday voltages and currents, respectively. You'd also not give your height in fm or your weight in GeV ;-).

Bold quote - "Why should vacuum have permeability/permittivity other than 1, since it is empty?" Good question. It deserves a good answer. Remember that I approach this world from an engineering viewpoint. I make widgets and things that function, so I value ##\mu## and ##\epsilon## being physical constants with units.

If I place 2 conducting plates, flat rectangles with area "A", a gap "g" apart, I know that to compute capacitance I use:

##C = \epsilon_0 A/g##.

Though the medium between the plates is air (or vacuum), it still possesses an energy-storage ability with units. The empty space between the plates stores energy in the form of an E field. A similar scenario holds for inductors with an air (vacuum) core. The energy-storage ability, electric and/or magnetic, using vacuum as a core/dielectric, has units and constants.
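The calculation above in numbers (the plate geometry is chosen purely for illustration):

```python
# Parallel-plate capacitance C = eps0 * A / g (fringing fields neglected),
# showing that the empty space between the plates carries a dimensionful eps0.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance in farads."""
    return EPS0 * area_m2 / gap_m

# 10 cm x 10 cm plates, 1 mm gap
C = capacitance(area_m2=0.01, gap_m=1e-3)
print(f"C = {C:.3e} F  (~{C * 1e12:.1f} pF)")
```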

To a theoretical scientist this property may not be as relevant as it is to me, but as I've stated, I cannot fathom how anyone can say that SI is not pure enough. Units are man made and arbitrary, I don't know that there is one "right" way to define them. Comments welcome.

Claude
 
  • #82
Sure, but if there is an electromagnetic field, it's not vacuum: there is a field, and it has energy. Why do you need some ##\epsilon_0## there? The capacitance is (neglecting finite-size effects) ##C=\epsilon_0 A/d##, where A is the area of the plates and d their separation. In Heaviside-Lorentz units ##\epsilon_0=1## and the capacitance is measured in centimetres instead of farads. So what?
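For the record, the two conventions are connected by a standard unit-conversion fact (added here for illustration): a Gaussian capacitance of ##C_{\text{cm}}## centimetres corresponds to ##C_{\text{SI}} = 4\pi\epsilon_0 \times (C_{\text{cm}}\ \text{expressed in metres})##, so 1 cm of capacitance is about 1.11 pF.

```python
# Converting a Gaussian-unit capacitance (given in cm) to SI farads:
# C_SI = 4 * pi * eps0 * (C_gaussian expressed in metres).
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def gaussian_cm_to_farad(c_cm):
    """Convert a Gaussian capacitance in centimetres to farads."""
    return 4 * math.pi * EPS0 * (c_cm * 1e-2)

print(f"1 cm of capacitance = {gaussian_cm_to_farad(1.0):.4e} F")
```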
 
  • #83
Wow, that was a point I was going to make: capacitance in cgs is measured in "cm", and I feel that is horrendous. You say "so what" as in "no big deal". Frankly, I have an issue with a centimetre being both a unit of length and a unit of capacitance. This makes little sense. But like I said, I'm not a theoretician; I invent widgets, so what is convenient for me may not be for others.

Either system works, and a highly skilled practitioner should be able to use either and get the right answer. Just thought I'd mention it.

Claude
 
  • #84
vanhees71 said:
The physics of the matter is hidden in the constitutive relations, which usually are given in linear-response approximation (see Jackson's or Schwinger's books on this; I'm not aware that there is a textbook writing the relations in relativistically covariant form, which is another great sin in didactics of theoretical electromagnetism, but that's another story).

The book 'Electromagnetic theory' by Attay Kovetz has a section (23) on relativistic response functions. Incidentally although Kovetz's book is mostly on macroscopic electromagnetism, its definitions of P and M as 'charge-current potentials' agree with the microscopic definitions given in post 46 of this thread and no averaging over microscopic scales is used.

Francis Chen in the preface of his book on plasma physics says:

"It is, of course, senseless to argue about units; any experienced physicist can defend his favorite system eloquently and with faultless logic."

An interesting argument for E and B having different dimensions is given here http://arxiv.org/abs/physics/0407022.
 
  • #85
It's of course true. Physics is independent of the system of units used, but some units are more natural than others. Of course there is no objective reasoning for preferring one system over another, but I think the SI makes electromagnetism more difficult to learn than Gaussian or Heaviside-Lorentz units.

I further don't understand in which sense you do not need averaging (coarse graining) to go from the microscopic description (QED) to a macroscopic approximation (classical macroscopic electrodynamics). Also the quantities in post #46 only make sense to me in a macroscopic picture. In the microscopic formalism all these quantities are represented by operators! The book by Craig mentions the usual coarse-graining argument already on page 5!
 
  • #86
My EM professor explained that he preferred Gaussian units in EM for much the same reasons he (and a score of others) preferred natural units in SR/GR; we used Gaussian units throughout the class and to be honest it made everything so much more elegant. Practicality of course is an entirely different story.
 
  • #87
vanhees71 said:
I further don't understand in which sense you do not need averaging (coarse graining) to go from the microscopic description (QED) to a macroscopic approximation (classical macroscopic electrodynamics). Also the quantities in post #46 only make sense to me in a macroscopic picture. In the microscopic formalism all these quantities are represented by operators! The book by Craig mentions the usual coarse-graining argument already on page 5!

I believe the microscopic magnetization in iron for example can be measured by spin-polarized neutron scattering. My reference is the article 'The microscopic magnetization: concept and application' by L.L. Hirst which begins by discussing the view that magnetization is a macroscopic quantity defined by averaging.
 
  • #88
Also worth noting is that in the cgs system, which I consider a good system (not the best), not only is capacitance expressed in cm, but so is inductance. So we have length, inductance, and capacitance all expressed in the same units.

They are not the same entity at all. While L & C are closely related, I think they are distinct from length. L & C behave differently; they exhibit markedly different circuit characteristics. To fully distinguish them, differing units are needed.

Like I said, I've used cgs in magnetics design and consistently arrived at the correct answer. But SI makes more sense to me, a widget designer who works in industry, not in a theoretical research lab. I like the distinction between H & B. They are both important and neither is "derived" from the other. In Halliday-Resnick's elementary physics text, the authors stated that the decision to treat B as the basis, and regard H as derived from B, is purely arbitrary. I concur.

Neither is the more "fundamental" at all. Our universe consists of free space as well as molecular structures and quantized atomic energy levels. Polarization, electric or magnetic, is just as "real" as a vacuum. Besides, more than one poster has attempted to propagate the crackpot heresy that B is the counterpart of E, while H is that of D. This is nonsense.

The problem is that E behaves like B in one respect: relativistic frame transformations show that E corresponds with ##\mathbf v \times \mathbf B##. Likewise D with ##\mathbf v \times \mathbf H##. Same units. So in this one respect, B appears to correlate with E, and H with D.

But there are 2 cases where it is the opposite. If we energize a dielectric capacitor, we get the D-E hysteresis curve. If E is taken down to zero, D remains, as does remanent energy. The D represents the polarization or remanence.

But in the magnetic domain, if we energize a ferrous core with current, we get the B-H hysteresis curve. When the current is taken down to zero, it is H that vanishes. In the dielectric capacitor case, it was E that vanished. The remanence and stored energy are in B, not H. So in the 2 cases, when external power is cut, the vanished quantities are E & H, while the remanent quantities are D & B. This correlation is opposite to that of the relativistic frame transformations.

Another example which demolishes the "E relates to B, D to H" nonsense is as follows. A capacitor is formed with 2 dielectrics in series between the plates. The E fields in the 2 media differ, but the D fields are the same, except in the case of surface charge at the boundary, where the 2 D values differ by the surface charge density ##\rho_s##. So for the 2 series-configured dielectric regions, D1 = D2, except at the boundary, where D1 - D2 = ##\rho_s##.

If the 2 dielectrics are in parallel, we get equal E in both media, but D varies in accordance with ε. I.e. E1 = E2, and D1/D2 = ε1/ε2.

In the magnetic domain, take a ferrous core with an air gap, a series magnetic circuit. The flux density B is the same in series media of differing μ values. H, however, differs. So B1 = B2, while H1/H2 = μ2/μ1.
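A numeric sketch of the series core-plus-gap case just described (the core's relative permeability below is an illustrative value):

```python
# Series magnetic circuit (core + air gap): the flux density B is continuous
# across the interface, so H jumps: H = B / (mu0 * mu_r) in each medium.

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

B = 1.0             # flux density, T (same in core and gap)
mu_r_core = 1000.0  # illustrative relative permeability of the core

H_core = B / (MU0 * mu_r_core)
H_gap = B / MU0     # air: mu_r ~= 1

print(f"H_core = {H_core:.1f} A/m, H_gap = {H_gap:.1f} A/m")
print(f"H_gap / H_core = {H_gap / H_core:.0f}  (= mu_r of the core)")
```

Almost all of the H (the magnetomotive force per unit length) is "dropped" across the gap, which is why even a thin air gap dominates the design of such a circuit.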

If instead the ferrous media are in *parallel* with the air, we get the same H value in both, with differing B values. H1 = H2, and B1/B2 = μ1/μ2.

In parallel, the E values are the same, as are the H values. In series, the D values are the same (differing only by a constant at a boundary with surface charge), and so are the B values.

This case clearly relates E with H, and D with B.

Now the final test. E and D can exist as conservative vector fields, as well as non-conservative. They can be closed loops, solenoidal, with a curl and no divergence, or as curvilinear segments with a beginning and end, having no curl but non-zero divergence.

But B and H do not exist in the segmented form, only closed loop form. The divergence of these vector fields is always zero. There is no correlation between E & H, nor E & B, when considering this property.

Is it B or H that is the counterpart of E? One test shows it to be B, two tests show it to be H, and one test shows it to be neither. So I can only conclude that it is impossible to say which is the counterpart of E, neither B nor H can be said to be the case.

All 4 quantities are real, significant, helpful, and relevant. Setting 2 as basis quantities and treating the other 2 as derived is not a problem, but the choice of which are the basis quantities is arbitrary. Personally, I recommend the following: don't worry about it.

I use B & H all the time, and the laws of physics have so far not demonstrated that one is more basic than the other. I will likely get flamed for this, but I felt compelled to say it. Have a great Labor Day weekend.

Claude
 
  • #89
Claude, in the macroscopic theory I agree with you: all of E, B, D, H are equally important. However, nowadays we know about atoms and molecules and have the possibility of understanding the macroscopic theory on the basis of the microscopic theory. In the microscopic theory, 4 independent field quantities make little sense. Usually we think that there is only one EM field with two components, electric and magnetic, as vanhees71 said. These may be denoted ##\mathbf e, \mathbf b## (microscopic), and their meaning (definition) is that they give us the force on a point-like test particle

$$
\mathbf F = q\mathbf e + q\frac{\mathbf v}{c} \times \mathbf b.
$$

So in this picture, the fields e, b are more basic, since they directly give the force. The macroscopic fields E, B can then be thought of as a kind of average of these microscopic fields. The fields D, H are then necessarily only auxiliary quantities that play little role in the logic of the microscopic theory; there is little reason to consider microscopic fields ##\mathbf d,\mathbf h##.
 
  • #90
marmoset said:
I believe the microscopic magnetization in iron for example can be measured by spin-polarized neutron scattering. My reference is the article 'The microscopic magnetization: concept and application' by L.L. Hirst which begins by discussing the view that magnetization is a macroscopic quantity defined by averaging.

I haven't read the paper in detail, but as far as I understand it, this paper also implicitly uses averaging procedures, because it describes magnetization as a "density of dipole moments". This is already a semiclassical description.

The most fundamental treatment is in-medium QED. There you define in-medium (real-time Schwinger-Keldysh) Green's functions and then apply some coarse-graining procedure to the Kadanoff-Baym equations, which are derived from the self-consistent 2PI formalism. Coarse graining is then formally done by assuming a separation of microscopic and macroscopic scales and using the gradient expansion of the KB equations to obtain semi-classical transport equations. There are various levels of description. Which one to use is a question of the application you have in mind, and most of the approximation procedures are not derivable in a strict way from the fundamental equations of QED but are very much driven by phenomenology. The cited paper also starts at a level that is already pretty far from QED, in the sense that some averaging procedure has implicitly been assumed. The very fact that you describe, e.g., magnetization as a density of dipole moments is already such an effective description resulting from an averaging procedure! There is nothing wrong with this. On the contrary, it explains to a certain extent how the macroscopic phenomenology is related to the very fundamental level, as far as we know it today, and how classical behavior can be understood as an emergent phenomenon from the most fundamental level of QED, however incomplete this understanding still is.
 
