# How to Use Duality in Computational Electromagnetic Problems

Some weeks ago I happened across a post that caught my eye. Dale asked a question about the number of photons in an electromagnetic field. His question was answered in full, but what caught my attention in the discussion was seeing a familiar friend: the rather odd field combination ##E+iB## [1]. The impetus for Dale’s question centered on ##E+iB## obeying a Schrödinger equation, $$i\frac{\partial}{\partial t}(E+iB) = \nabla\times(E+iB).$$

My interest in this field combination is its application to classical radiation and scattering phenomena. First, we need to point out that ##E+iB## carries only half of Maxwell’s equations. The other half is taken up by its friend, ##E-iB##, which obeys a completely separate Schrödinger equation of its own, $$i\frac{\partial}{\partial t}(E-iB) = -\nabla\times(E-iB).$$

What’s shown here is rather interesting. These field combinations obey separate field equations in free space; they do their wave thing quite independently of each other. Hmm, okay, so what? Well, I think they have advantages in formulating boundary integral equations that are not widely appreciated.

One real-life problem these field combinations address is known as the internal resonance problem. Calculating EM scattering from a conducting body of general shape can take considerable numerical work. Knowing whether you’ll even get a unique answer is not a minor question.

Scattering from conductors of general shape provides an excellent example. It turns out that specifying just ##E_\|=0## on the surface of a conductor isn’t always enough. It is enough in many cases, which is why many computer programs do just this. Problems crop up at particular frequencies. At these, the big matrix you filled is useless.

How do you know a given formulation of a problem has one and only one answer? Well, you need a uniqueness theorem. Our field combinations satisfy some really interesting ones. These are simple but take some setup to prove. I hope you’ll stick around.

[1] Thanks to a kind reviewer for pointing out ##E+iB## is known as the Riemann-Silberstein vector.

### What You Get

One of the things we’ll prove is that if ##E+iB## is not identically zero in a volume, then its boundary value is not identically zero either. There are several other things we’ll show along the way.

### Regions and Their Boundaries

Much of the required setup doesn’t involve Maxwell’s equations at all. We will need some mathematical machinery in place before returning to them. Everything here is pretty standard stuff, but it needs to be said for clarity’s sake.

In all that follows we restrict to a bounded region, ##V\subset\mathbb{R}^3##. By region we mean ##V## is a differentiable 3-manifold with a boundary, ##S##. We require the boundary ##S## to be its own 2-dimensional differentiable manifold. As such, at every point ##p\in S## we require there be an outward-directed normal, ##\hat{n}(p)##. This normal will play a big role in what follows.

### Fields and Their Boundary Values

The word “field” seems rather overused. Following in this time-honored tradition, we take a field to be any differentiable complex 3-vector valued function defined on ##V##. Collect all such functions together and we end up with a set we’ll call ##\mathcal{C}^3(V)##. The set ##\mathcal{C}^3(V)## can be made into a complex linear vector space. For ##A,B\in\mathcal{C}^3(V)## we define addition and scalar multiplication pointwise. Written out this is, $$(A+B)(x) = A(x)+B(x),$$ and, $$(\alpha A)(x) = \alpha A(x),$$ where ##x\in V## and ##\alpha\in\mathbb{C}##.

Every field ##A## has a boundary value, which we write as ##a##. A boundary value is defined by first restricting ##A## to ##S## and then removing any normal component. Written out, $$a(p) = A(p) - \left[\hat{n}(p)\cdot A(p)\right]\hat{n}(p).$$ Clearly this defines the boundary value of ##A## at every point on the surface.
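For readers who like definitions in code, here is a minimal NumPy sketch of the tangential projection. The specific point, normal, and field values are made-up illustrations, not anything fixed by the text:

```python
import numpy as np

def boundary_value(A_p, n_p):
    """Tangential part a(p) = A(p) - [n(p).A(p)] n(p) of a complex field vector."""
    n_p = n_p / np.linalg.norm(n_p)        # make sure the normal is a unit vector
    return A_p - np.dot(n_p, A_p) * n_p

# illustrative point: outward normal along z, arbitrary complex field value
n = np.array([0.0, 0.0, 1.0])
A = np.array([1.0 + 2.0j, 0.5j, 3.0 - 1.0j])
a = boundary_value(A, n)

assert abs(np.dot(n, a)) < 1e-12           # no normal component remains
assert np.allclose(a, [1.0 + 2.0j, 0.5j, 0.0])
```

Note that the normal is real, so no conjugation is needed in the dot product.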

We collect all boundary values into a set ##\mathcal{B}\subset\mathcal{C}^2(S)##, where ##\mathcal{C}^2(S)## is the set of differentiable complex tangent-vector valued functions on ##S##. While every field has a boundary value, it is less clear that any old element of ##\mathcal{C}^2(S)## has a matching field. Still, we’ll need this larger set.

The same definitions we used to make ##\mathcal{C}^3(V)## into a vector space also work for ##\mathcal{C}^2(S)##. It is easy to check that ##\mathcal{B}## lives inside ##\mathcal{C}^2(S)## as a vector subspace.

### Inner Product Spaces

We continue to add structure to our fields and boundary values. First, we make each vector space, ##\mathcal{C}^3(V)## and ##\mathcal{C}^2(S)## into inner product spaces. An inner product is a function taking two vectors into a complex number. We’ll use ##(A|B)## on ##\mathcal{C}^3(V)## and ##(a|b)## on ##\mathcal{C}^2(S)##. To count as an inner product on a complex vector space these functions must satisfy three conditions.

1. ##(A|A)\ge 0## and ##(A|A) = 0 \iff A = 0##
2. ##(A|B) = (B|A)^\ast##
3. ##(A|\alpha B + \beta C) = \alpha (A|B) + \beta (A|C)##

These are definitions. One must show any given function meets these requirements. Of the three, the first requirement is the most important for our purposes.

The inner products I use are very standard: $$(A|B) = \int_V A^\ast(x)\cdot B(x)\, d^3x$$ and $$(a|b) = \int_{S} a^\ast(p)\cdot b(p)\, d^2p.$$ Each of the required properties is easy to show except perhaps the one I really need, which is ##(A|A)=0 \iff A=0##.
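A quick way to build intuition for these integrals is to discretize them. Here is a hedged sketch: a crude Riemann sum over the unit cube standing in for ##(A|B)##, with the grid, fields, and scalars all arbitrary choices for illustration. The assertions check the inner-product properties numerically:

```python
import numpy as np

# crude Riemann-sum stand-in for (A|B) = integral over V of A*(x).B(x) d^3x
xs = np.linspace(0.0, 1.0, 21)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
dV = (xs[1] - xs[0])**3

def inner(A, B):
    """Discretized (A|B); A and B have shape (3, nx, ny, nz)."""
    return np.sum(np.conj(A) * B) * dV

# two arbitrary smooth complex fields, chosen purely for illustration
A = np.stack([np.exp(1j*X), Y*Z + 0j, 1j*np.sin(Z)])
B = np.stack([X*Y + 0j, np.cos(Z) + 1j*X, Z**2 + 0j])

alpha, beta = 2.0 - 1.0j, 0.5j
assert np.isclose(inner(A, B), np.conj(inner(B, A)))           # (A|B) = (B|A)*
assert np.isclose(inner(A, alpha*B + beta*A),
                  alpha*inner(A, B) + beta*inner(A, A))        # linear in 2nd slot
assert inner(A, A).real > 0 and abs(inner(A, A).imag) < 1e-12  # (A|A) > 0 for A != 0
```

The properties hold for any quadrature weights, which is why such a coarse discretization still checks them exactly up to rounding.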

Given ##A = 0## on ##V## it is pretty clear that ##(A|A)=0##. Going the other way, ##A\ne 0## means there is some point ##x \in V## where ##A^\ast(x)\cdot A(x) > 0##. Since ##A## is continuous, there is a sphere about ##x## inside which ##A^\ast\cdot A## stays above a positive minimum. The integral over this sphere is greater than zero, which makes ##(A|A) > 0##. Hence, the only way for ##(A|A) = 0## is for ##A=0## everywhere in ##V##.

The same argument replacing spheres with “disks” works for ##(a|a)##.

### An Important Relation

The divergence theorem is, $$\int_S F\cdot\hat{n}\, d^2p = \int_V \nabla\cdot F\, d^3x,$$ where ##F\in\mathcal{C}^3(V)##.

The relation I need is similar in a way to Green’s second identity. It follows from applying the divergence theorem to vector functions of the form ##A^\ast\times B##. The result is best written in terms of our inner products, $$\boxed{( a|N b) = i(\nabla\times A|B) - i(A|\nabla\times B).}$$ This expression is so useful we’ve given it its own box. Two relations are used to obtain this form. One is $$\nabla\cdot(A\times B) = (\nabla\times A)\cdot B - A\cdot (\nabla\times B).$$ The other is the cyclic symmetry of the triple product, $$(a^\ast\times b)\cdot\hat{n} = -a^\ast\cdot(\hat{n}\times b).$$ The extra factor of ##i## is inserted to make the operator, ##N##, more convenient.
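The first of these relations is a standard vector-calculus identity, and it can be checked symbolically. Here is a small SymPy sketch; since the identity holds for any pair of vector fields (the conjugate on ##A## plays no role in the identity itself), generic unspecified component functions suffice:

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z

# two generic symbolic vector fields with unspecified component functions
A = Function('A1')(x, y, z)*C.i + Function('A2')(x, y, z)*C.j + Function('A3')(x, y, z)*C.k
B = Function('B1')(x, y, z)*C.i + Function('B2')(x, y, z)*C.j + Function('B3')(x, y, z)*C.k

lhs = divergence(A.cross(B))             # div(A x B)
rhs = curl(A).dot(B) - A.dot(curl(B))    # (curl A).B - A.(curl B)
assert simplify(lhs - rhs) == 0
```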

### Nice Relation. What’s N?

The operator ##N## deserves its very own section. This operator is defined using the outward-directed surface normal so it acts on the space of boundary values. The definition is simple, $$(N a)(p) = -i\hat{n}(p)\times a(p).$$ We’ve added the factor ##-i## to make ##N## hermitian.  Showing ##N## is hermitian follows from the symmetry of the triple product.

The eigenvalues of ##N## are easy to compute once you notice ##N^2 = 1## on tangent vectors. Of course, ##N## annihilates vectors along the surface normal, but we are restricting to vectors tangent to ##S##. The eigenvalues are clearly ##\pm 1##. As we know from years of quantum mechanics, this defines two orthogonal projectors, $$P_\uparrow = \frac{1}{2}(1+N),$$ and its complement, $$P_\downarrow=\frac{1}{2}(1-N),$$ acting on ##\mathcal{C}^2(S)##. The ##\uparrow## and ##\downarrow## I read as “in” and “out”. The names come from the role these spaces play in the far-field of a radiator. This will be a topic for other Insights if the audience permits.
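At a single surface point the whole story is finite-dimensional, so it can be checked with a few lines of NumPy. Taking the illustrative choice ##\hat{n} = \hat{z}##, the map ##(Na) = -i\hat{n}\times a## acts on the tangential components ##(a_x, a_y)## as the 2×2 matrix below; the assertions verify hermiticity, ##N^2=1##, the ##\pm 1## eigenvalues, and the projector properties:

```python
import numpy as np

# N at a surface point where the normal is along z (illustrative choice):
# on tangential components (a_x, a_y), (Na) = -i n x a becomes this matrix
N = np.array([[0.0, 1.0j],
              [-1.0j, 0.0]])

assert np.allclose(N, N.conj().T)                        # N is hermitian
assert np.allclose(N @ N, np.eye(2))                     # N^2 = 1
assert np.allclose(np.linalg.eigvalsh(N), [-1.0, 1.0])   # eigenvalues +/- 1

P_up = (np.eye(2) + N) / 2
P_dn = (np.eye(2) - N) / 2
assert np.allclose(P_up @ P_up, P_up) and np.allclose(P_dn @ P_dn, P_dn)
assert np.allclose(P_up @ P_dn, np.zeros((2, 2)))        # orthogonal projectors

a = np.array([1.0 + 0.5j, -2.0j])    # an arbitrary tangential boundary value
assert np.allclose(P_up @ a + P_dn @ a, a)               # a = a_up + a_down
```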

These projectors split any boundary value into two orthogonal components, $$a = a_\uparrow + a_\downarrow.$$ Specializing our boxed relation to ##B=A##, $$\|a_\uparrow\|^2 - \|a_\downarrow\|^2 = 2\,\mathcal{I}m\, (A|\nabla\times A).$$

### Back to Maxwell

With the boxed expression in hand, we return to Maxwell’s equations. First off, we look only at time-harmonic fields: all fields are assumed to have an ##e^{i\omega t}## dependence, which we suppress. Basically, we are looking at the energy eigenfunctions of Dale’s Schrödinger equations.

Instead of free space, we’ll consider ##V## as filled with an isotropic homogeneous material with constitutive relations ##D=\epsilon E## and ##B=\mu H##. Both ##\epsilon## and ##\mu## may be complex. This is a critical thing to include because lossy materials may be modeled by complex constitutive parameters.

We choose to write our field combination as ##F=E+i\eta H##, where the constant ##\eta=\sqrt{\mu/\epsilon}## is the intrinsic impedance. The reason for doing this is that the tangential boundary values of ##E## and ##H## are continuous across any boundary which is free of surface currents. This has real benefits when writing EM analysis programs but doesn’t play much of a role in what follows. The field equation for ##F## is then, $$\nabla\times F = -kF.$$ The equation for ##G=E-i\eta H## is the same but with ##+k## in place of ##-k##. Here, ##k=\omega\sqrt{\epsilon\mu}## is the complex wavenumber.
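As a sanity check, a circularly polarized plane wave should satisfy ##\nabla\times F = -kF## on the nose. Here is a SymPy sketch; the particular wave ##F=(\hat{x}+i\hat{y})e^{-ikz}## is my illustrative choice, not something fixed by the text:

```python
from sympy import I, exp, symbols, simplify
from sympy.vector import CoordSys3D, curl

C = CoordSys3D('C')
k = symbols('k', real=True)

# illustrative circularly polarized plane wave travelling in +z
F = (C.i + I*C.j) * exp(-I*k*C.z)

# residual of the field equation; should be the zero vector
residual = curl(F) + k*F
assert all(simplify(residual.dot(e)) == 0 for e in (C.i, C.j, C.k))
```

Its dual partner ##(\hat{x}-i\hat{y})e^{ikz}## satisfies the ##+k## equation instead, which is one concrete way to see the two field combinations carrying opposite circular polarizations.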

Now, we’ve talked about ##\mathcal{C}^3(V)##, the complex linear vector space of differentiable functions on ##V##. There is a vector subspace, ##\Lambda_k(V)##, of this space consisting of all the solutions of ##\nabla\times F= -kF## on ##V##. What’s interesting is the connection, supplied by our boxed relation, between elements of ##\Lambda_k(V)## and their boundary values, ##\lambda_k(S)##.

### Main Result

Let’s prove our main result.

Given ##F\in\Lambda_k(V)##, ##F=0 \iff f = 0.##

#### Partial Proof

For ##k\notin \mathbb{R}## the boxed relation gives the result directly. Substituting ##\nabla\times F = -kF## into ##\|f_\uparrow\|^2-\|f_\downarrow\|^2 = 2\,\mathcal{I}m(F|\nabla\times F)## gives $$\|f_\uparrow\|^2-\|f_\downarrow\|^2 = 2\,\mathcal{I}m\left(-k\|F\|^2\right) = -2\,\mathcal{I}m(k)\,\|F\|^2,$$ which is nonzero whenever ##F\ne 0##. Clearly, $$\|f\|^2 = \|f_\uparrow\|^2+\|f_\downarrow\|^2 \ge \left|\|f_\uparrow\|^2 - \|f_\downarrow\|^2\right| > 0.$$

This completes the proof for complex k. We’ve given this partial answer because it brings out a noteworthy relationship between the up and down components.

##\|f_\uparrow\| > \|f_\downarrow\|## if ##\mathcal{I}m(k)<0## and ##\|f_\uparrow\| < \|f_\downarrow\|## if ##\mathcal{I}m(k)> 0##.

### Proof for all ##k##

Okay, what about the case ##k\in\mathbb{R}##? This case is somewhat harder, but not by much with a little trickery. To show ##f\ne 0##, all we need is one field ##T\in\mathcal{C}^3(V)## whose boundary value ##t## gives ##(t|f)\ne 0##, and we will have made our case.

The field ##F## is a solution of a linear partial differential equation in which ##k## appears as a linear parameter. A well-known fact (okay, relatively well known) is that one can count on ##F## being an analytic function of ##k##. That means we may differentiate ##F## with respect to ##k##. Do this and observe that the new function obeys the equation, $$\nabla\times\frac{dF}{dk} = -k\frac{dF}{dk} - F.$$ Cool. Plugging this into our boxed relation with ##A = dF/dk## and ##B = F## gives, for real ##k##, $$\left( -i\frac{df}{dk}\,\Big|\,Nf\right) = \|F\|^2.$$ Since ##N## is hermitian, the left side is ##(t|f)## with ##t = -iN\,df/dk##, so ##F\ne 0## forces ##f\ne 0##. Combined with the trivial direction, we’ve shown that ##f = 0## if and only if ##F = 0##.
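The differentiated field equation can be sanity-checked symbolically as well. Sticking with the illustrative plane-wave family ##F_k=(\hat{x}+i\hat{y})e^{-ikz}## (my choice, not anything fixed by the text), SymPy confirms that ##dF/dk## obeys the displayed equation:

```python
from sympy import I, exp, symbols, simplify
from sympy.vector import CoordSys3D, curl

C = CoordSys3D('C')
k = symbols('k', real=True)

# one-parameter family of solutions of curl F = -kF (illustrative choice)
F = (C.i + I*C.j) * exp(-I*k*C.z)
dF = F.diff(k)    # differentiate the family with respect to the parameter k

# residual of curl(dF/dk) = -k dF/dk - F; should vanish identically
residual = curl(dF) + k*dF + F
assert all(simplify(residual.dot(e)) == 0 for e in (C.i, C.j, C.k))
```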

I’m going to end by posing some exercises for the reader. The only one I haven’t solved completely is problem 4. I believe parity makes 4 true, but there are complications with ##V## needing to be parity symmetric that I’ve yet to beat down.

1. Show that for ##\mathcal{I}m(k)<0##, knowing ##f_\uparrow## determines ##F## uniquely. What about ##\mathcal{I}m(k) > 0##?
2. Show ##\lambda^\uparrow_k(S) = \{f_\uparrow\,|\,f\in\lambda_k(S)\}## is a vector space.
3. Assume ##k## is real. Use ##(a|Nb)=0## for ##a,b\in\lambda_k(S)## to show the spaces ##\lambda^\uparrow_k(S)## and ##\lambda^\downarrow_k(S)## are connected by a unique vector space isometry.
4. Show that if ##f\in \lambda_k(S)## then ##Nf\in\lambda_{-k}(S)##.