Ampere's Law & Biot-Savart: Dealing with Infinity

AI Thread Summary
The discussion centers on the implications of infinite current in the context of Ampere's Law and the Biot-Savart Law. It highlights that infinite currents violate the assumptions of Helmholtz's theorem, leading to complications in deriving Ampere's Law from the Biot-Savart Law. The Biot-Savart Law does not involve surface integrals, as it directly relates current to magnetic fields without needing to consider surface current density. The derivation of Ampere's Law from Biot-Savart becomes complex when current density does not converge at infinity. Understanding these relationships requires a solid grasp of vector potentials and the underlying Maxwell equations.
zb23
How does the surface integral vanish in the derivation of Ampère's law from the Biot-Savart law if the current extends to infinity?
How does a current that extends to infinity obey the Helmholtz theorem for vector fields?
 
zb23 said:
Summary:: biot-savart, ampere law

How does a current that extends to infinity obey the Helmholtz theorem for vector fields?
It doesn’t satisfy the assumptions of the theorem. Currents going to infinity cause problems for that reason among others.
 
zb23 said:
Summary:: biot-savart, ampere law

How does the surface integral vanish in the derivation of Ampère's law from the Biot-Savart law if the current extends to infinity?
Biot-Savart has no surface integral. It involves the current ## i ## via ## d\bf B = (\mu_0 i/4\pi) \, d\bf l \times \hat{\bf r}/r^2 ##.

Perhaps you were thinking of the magnetic vector potential ## \bf A ##, which does involve the current density ##\bf j ##. But you can just assume a wire of cross-sectional area ##a## and substitute ## j = i/a ## and ## dv = a~ dl ## (##v## = volume) to get ## d\bf A = (\mu_0/4\pi) \bf j \, dv/r ##, then integrate to get ## \bf A ## and take ## \bf B = \nabla \times \bf A ##.

The integral is from ## l = -\infty ## to ## +\infty ##.
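As a numerical sanity check (a sketch only; the current, field-point distance, and wire half-length below are assumed values), summing the Biot-Savart contributions of a long but truncated straight wire reproduces the Ampère-law result ## B = \mu_0 i/(2\pi R) ##:

```python
import math

# Sketch: sum dB = (mu0*I/4pi) |dl x r_hat|/r^2 for a straight wire on the
# z-axis, then compare with the Ampere's-law result B = mu0*I/(2*pi*R).
mu0 = 4e-7 * math.pi   # vacuum permeability (SI)
I = 1.0                # current in amperes (assumed)
R = 0.1                # distance of field point from the wire, meters (assumed)
L = 200.0              # half-length of the truncated wire, meters (assumed)
N = 40000              # number of wire segments

dz = 2 * L / N
B = 0.0
for k in range(N):
    z = -L + (k + 0.5) * dz          # midpoint of segment k
    r2 = z * z + R * R               # squared distance from segment to field point
    # |dl x r_hat| = dz * R / |r|, so dB = mu0*I/(4 pi) * dz * R / |r|^3
    B += mu0 * I / (4 * math.pi) * dz * R / r2 ** 1.5

B_ampere = mu0 * I / (2 * math.pi * R)
print(B, B_ampere)
```

The truncation error scales like ## R^2/(2L^2) ##, so the two values agree to several significant figures.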
 
In Griffiths there is a derivation of Ampère's law from the Biot-Savart law, and it works when ##J## goes to zero at infinity (the surface integral vanishes) but not when ##J## does not converge. Griffiths says that in the latter case the derivation is more complicated.
[Attachment: forum.png — the equation from Griffiths]
 
Correction: in the case when ##J## does converge at infinity but is not zero, how does the integral on the right go to zero if there is a finite current at infinity?
 
I don't have Griffiths and I don't recognize his equation as referring to the Biot-Savart law.

The Biot-Savart law I'm familiar with invokes current only. This current can extend to +/- infinity. Ampere's law is easily derived from it.

Biot-Savart is in turn derivable from the magnetic vector potential.

P.S. That Griffiths equation looks like just a general statement of the divergence theorem:
## \iint_S \bf E \cdot \hat {\bf n} ~dS = \iiint_V \nabla \cdot \bf E~ dV ## for a closed surface ##S## enclosing a volume ##V##, for any vector field ## \bf E ##. As I said, I don't see the relevance to Biot-Savart. Sorry.
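For what it's worth, that identity is easy to verify numerically for a made-up field, say ## \bf E = (x^3, y^3, z^3) ## on the unit sphere, where both sides work out to ## 12\pi/5 ##:

```python
import math

# Sketch: check the divergence theorem for the (assumed) field
# E = (x^3, y^3, z^3) on the unit sphere.
# Volume side, analytically: div E = 3(x^2+y^2+z^2) = 3 r^2, so
#   \int_ball div E dV = 3 * 4*pi * \int_0^1 r^4 dr = 12*pi/5.
# Surface side, numerically, with a midpoint rule in (theta, phi):
n_theta, n_phi = 400, 400
dtheta = math.pi / n_theta
dphi = 2 * math.pi / n_phi

flux = 0.0
for i in range(n_theta):
    theta = (i + 0.5) * dtheta
    st, ct = math.sin(theta), math.cos(theta)
    for j in range(n_phi):
        phi = (j + 0.5) * dphi
        x, y, z = st * math.cos(phi), st * math.sin(phi), ct
        # on the unit sphere n_hat = (x, y, z), so E . n = x^4 + y^4 + z^4
        flux += (x**4 + y**4 + z**4) * st * dtheta * dphi

print(flux, 12 * math.pi / 5)
```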
 
Well, with currents at infinity it's not that easy. Let's first see where the Biot-Savart law comes from. One should be aware that the fundamental equations are the Maxwell equations in local form, i.e., in terms of partial differential equations.

For magnetostatics the electric and magnetic field separate, and you can treat the magnetic field only. In vacuo the equations read (in SI units)
$$\vec{\nabla} \cdot \vec{B}=0, \quad \vec{\nabla} \times \vec{B}=\mu_0 \vec{j},$$
where ##\vec{j}## is the current density.

To derive the Biot-Savart law (with which evaluating the field is almost always more complicated than using the partial differential equations directly) one usually introduces the vector potential. Because of the first equation, there's a vector field ##\vec{A}## such that
$$\vec{B}=\vec{\nabla} \times \vec{A}.$$
Plugging this into the other equation leads to
$$\vec{\nabla} \times \vec{B} = \vec{\nabla} \times (\vec{\nabla} \times \vec{A}) = \vec{\nabla}(\vec{\nabla} \cdot \vec{A})-\Delta \vec{A}=\mu_0 \vec{j}.$$
Now ##\vec{A}## is only determined up to a gradient field (gauge invariance), which implies that we can impose one condition on ##\vec{A}##. Here the most convenient "choice of gauge" is the Coulomb gauge,
$$\vec{\nabla} \cdot \vec{A}=0.$$
Then the equation reads
$$-\Delta \vec{A}=\mu_0 \vec{j}.$$
We know from electrostatics how to solve this for each Cartesian component of ##\vec{A}##:
$$\vec{A}(\vec{x}) = \frac{\mu_0}{4 \pi} \int_{\mathbb{R}^3} \mathrm{d}^3 x' \frac{\vec{j}(\vec{x}')}{|\vec{x}-\vec{x}'|},$$
which holds if ##\vec{j}## vanishes quickly enough at infinity.

For the simplest example, an infinitely long wire carrying a constant ##\vec{j}##, the integral for ##\vec{A}## does not converge, but the one for ##\vec{B}## does:
$$\vec{B}(\vec{x})=\vec{\nabla} \times \vec{A}(\vec{x})=\frac{\mu_0}{4 \pi} \int_{\mathbb{R}^3} \mathrm{d}^3 x' \vec{j}(\vec{x}') \times \frac{\vec{x}-\vec{x}'}{|\vec{x}-\vec{x}'|^3}.$$
which is Biot-Savart's Law.

Of course, the example of the constant current along an infinite wire is solved much more simply by using the Maxwell equations directly. In this case we have
$$\vec{j}(\vec{x})=\vec{e}_z \frac{I}{\pi a^2} \Theta(a-R),$$
where ##(R,\varphi,z)## are the usual cylinder coordinates and ##a## the radius of the cylindrical wire along the ##z##-axis. By symmetry the ansatz for ##\vec{A}## is
$$\vec{A}=A_z(R) \vec{e}_z.$$
Since ##\vec{e}_z=\text{const}## one can use the equation for the Laplacian as applied to a scalar on the component ##A_z##, leading to
$$\Delta A_z(R)=\frac{1}{R} \partial_R (R \partial_R A_z)=-\mu_0 j_z.$$
For ##R<a## we have
$$R \partial_R A_z=C_1 -\mu_0 \frac{I}{2 \pi a^2} R^2$$
and
$$A_z=C_1 \ln(R/a) + C_2 -\mu_0 \frac{I}{4 \pi a^2} R^2.$$
Since ##A_z## should be a smooth function, we must have ##C_1=0##:
$$A_z(R)=C_2-\mu_0 \frac{I}{4 \pi a^2} R^2, \quad 0\leq R<a.$$
The solution for ##R>a## follows by setting the current density to zero there, leading to
$$A_z=C_1 \ln(R/a).$$
To make ##A_z## continuous at ##R=a## we must choose ##C_2=\mu_0 I/(4 \pi)##:
$$A_z(R)=\begin{cases} \frac{\mu_0 I (a^2-R^2)}{4 \pi a^2} &\text{for} \quad 0 \leq R<a,\\
C_1 \ln(R/a) & \text{for} \quad R \geq a. \end{cases}$$
To determine ##C_1## we first compute
$$\vec{B}=\begin{cases} \frac{\mu_0 I R}{2 \pi a^2} \vec{e}_{\varphi} &\text{for} \quad 0 \leq R <a,\\
-\frac{C_1}{R} \vec{e}_{\varphi} & \text{for} \quad R \geq a. \end{cases}$$
Since ##\vec{B}## should also be continuous at ##R=a##, we get ##C_1=-\mu_0 I/(2 \pi)## and thus finally
$$\vec{B}=\begin{cases} \frac{\mu_0 I R}{2 \pi a^2} \vec{e}_{\varphi} &\text{for} \quad 0 \leq R <a,\\
\frac{\mu_0 I}{2 \pi R} \vec{e}_{\varphi} & \text{for} \quad R \geq a. \end{cases}$$
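A quick numerical sketch of this final result (the values of ##I## and ##a## below are assumed): the piecewise field is continuous at ##R=a## and satisfies Ampère's law, ##B \cdot 2\pi R = \mu_0 I_{\text{enc}}##, at every radius:

```python
import math

# Sketch: evaluate the piecewise result for B and check continuity at R = a
# plus consistency with Ampere's law.
mu0 = 4e-7 * math.pi
I = 2.0      # total current in amperes (assumed)
a = 0.01     # wire radius in meters (assumed)

def B_phi(R):
    """Azimuthal field of a cylindrical wire with uniform current density."""
    if R < a:
        return mu0 * I * R / (2 * math.pi * a**2)
    return mu0 * I / (2 * math.pi * R)

def I_enclosed(R):
    """Current through a concentric circle of radius R."""
    return I * (R / a)**2 if R < a else I

# continuity at the surface of the wire
print(B_phi(a * (1 - 1e-9)), B_phi(a * (1 + 1e-9)))

# Ampere's law inside, at, and outside the wire
for R in (0.2 * a, a, 5 * a):
    print(B_phi(R) * 2 * math.pi * R, mu0 * I_enclosed(R))
```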
 
@vanhees71 what is the function ##\Theta(x)##? My guess was that it is defined like$$\Theta(x) =
\begin{cases}
1 & x \geq 0 \\
0 & x < 0
\end{cases}$$because you want non-zero current density where ##R \leq a## and zero everywhere else. But I haven't seen the notation before, so wondered if it is correct?
 
But how does all this you're telling me obey the Helmholtz theorem?
 
  • #10
etotheipi said:
@vanhees71 what is the function ##\Theta(x)##? My guess was that it is defined like$$\Theta(x) =
\begin{cases}
1 & x \geq 0 \\
0 & x < 0
\end{cases}$$because you want non-zero current density where ##R \leq a## and zero everywhere else. But I haven't seen the notation before, so wondered if it is correct?
Yes, this is the Heaviside step function, sometimes called the Heaviside theta function: https://en.wikipedia.org/wiki/Heaviside_step_function
 
  • #11
etotheipi said:
@vanhees71 what is the function ##\Theta(x)##? My guess was that it is defined like$$\Theta(x) =
\begin{cases}
1 & x \geq 0 \\
0 & x < 0
\end{cases}$$because you want non-zero current density where ##R \leq a## and zero everywhere else. But I haven't seen the notation before, so wondered if it is correct?
Yes, that's the Heaviside unit step function. It's rather to be considered a generalized function (distribution). You get nice formulae when taking this point of view, e.g., ##\Theta'(x)=\delta(x)##, where ##\delta## is the Dirac ##\delta## distribution. To prove it, take an arbitrary test function. Then you get
$$\int_{\mathbb{R}} \mathrm{d} x \Theta'(x) \phi(x)=-\int_{\mathbb{R}} \mathrm{d} x \Theta(x) \phi'(x) =-\int_0^{\infty} \mathrm{d} x \phi'(x)=-\phi(x)|_{x=0}^{x \rightarrow \infty}=+\phi(0).$$
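The same computation can be checked numerically (a sketch; the test function ##\phi(x)=e^{-x^2}## is my own choice, not from the discussion): evaluating ##-\int_0^\infty \mathrm{d}x\, \phi'(x)## by quadrature indeed returns ##\phi(0)##:

```python
import math

# Sketch: verify -\int_0^inf phi'(x) dx = phi(0) for the assumed
# test function phi(x) = exp(-x^2).
def phi(x):
    return math.exp(-x * x)

def dphi(x):
    """Derivative phi'(x) = -2x exp(-x^2)."""
    return -2.0 * x * math.exp(-x * x)

# midpoint rule on [0, X], with X large enough that phi(X) is negligible
X, N = 10.0, 100000
dx = X / N
integral = -sum(dphi((k + 0.5) * dx) for k in range(N)) * dx
print(integral, phi(0.0))
```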
 
  • #12
zb23 said:
But how does all this you're telling me obey the Helmholtz theorem?
The Helmholtz theorem is quite generally valid, though the fields involved need to obey certain differentiability conditions and must go to 0 quickly enough at infinity. A famous paper by Blumenthal shows that the Helmholtz decomposition theorem holds for vector fields that diverge for ##|\vec{x}|\rightarrow \infty## more slowly than ##\ln(|\vec{x}|)##. Unfortunately I don't know of any translation of this paper, but you will surely find the result in good textbooks on vector analysis.

O. Blumenthal, Über die Zerlegung unendlicher Vektorfelder, Math. Ann. 61, 235 (1905),
https://doi.org/10.1007/BF01457564.
 
  • #13
vanhees71 said:
To prove it, take an arbitrary test function. Then you get
$$\int_{\mathbb{R}} \mathrm{d} x \Theta'(x) \phi(x)=-\int_{\mathbb{R}} \mathrm{d} x \Theta(x) \phi'(x) =-\int_0^{\infty} \mathrm{d} x \phi'(x)=-\phi(x)|_{x=0}^{x \rightarrow \infty}=+\phi(0).$$

I suppose this relies on the test function going to zero at ##\infty##, so that we can ignore the ##\left[ \phi(x) \Theta(x) \right]_{-\infty}^{\infty}## boundary term in the integration by parts. But it's a nice property that ##\Theta'(x) = \delta(x)##, thanks for explaining!
 
  • #14
The test functions are usually as nice as can be. Of course, that's all hand-waving physicists' slang. For a more thorough treatment you have to read a textbook on functional analysis making the statements mathematically precise. A nice book for physicists, which doesn't treat the most general case of all kinds of distributions but is more rigorous than theoretical-physics textbooks, is

M. Lighthill, Introduction to Fourier analysis and generalised
functions, Cambridge University Press (1958).
 