Solenoidal and conservative fields

  • Thread starter: TrickyDicky
  • Tags: Fields
  • #51
Matterwave said:
You should realize from that formula, however, that the exterior derivative itself is fully compatible with the connection since you can express it as either partial derivatives or covariant derivatives.

Yes, I realize that the exterior derivative itself is, but I'm concerned about d^2=0 holding in the presence of a non-flat connection. If the space is curved, one must take the curvature form into account, and the exterior derivative turns into the exterior covariant derivative D, with D^2≠0, or at least that is my understanding. So we have a curved space with a connection (the Levi-Civita connection) that, being metric compatible, measures the curvature of the manifold. According to the Wikipedia pages mentioned by me and quasar987 (or at least what I infer from them), the dd=0 property would be equivalent to the condition of a flat vector bundle connection, which wouldn't be the case if the manifold is curved and we are using the Levi-Civita connection, right?
Please, if you are versed in exterior calculus and curved connections, could you clarify this?
 
  • #52
Matterwave said:
If you want to use the covariant exterior derivative to define a curl, you can (you are free to make whatever definitions you want). But YOU have to come up with the definition.

You seem to just assume that there is a standard definition for curl generalized using the covariant exterior derivative and somehow the definition I gave is not the correct definition. If there is such a standard definition, please find it and share. If not, you are free to come up with it yourself. But either way, we have to agree to a definition first before we can do anything.
I understand your point here, I'll try and find it.
 
  • #53
TrickyDicky said:
Yes, I realize that the exterior derivative itself is, but I'm concerned about d^2=0 holding in the presence of a non-flat connection. If the space is curved, one must take the curvature form into account, and the exterior derivative turns into the exterior covariant derivative D, with D^2≠0, or at least that is my understanding. So we have a curved space with a connection (the Levi-Civita connection) that, being metric compatible, measures the curvature of the manifold. According to the Wikipedia pages mentioned by me and quasar987 (or at least what I infer from them), the dd=0 property would be equivalent to the condition of a flat vector bundle connection, which wouldn't be the case if the manifold is curved and we are using the Levi-Civita connection, right?
Please, if you are versed in exterior calculus and curved connections, could you clarify this?

dd=0 always. DD=0 only for a flat connection. But D and d are not the same operator; you can have both on a manifold. They act on different objects and give you different objects. d acts on forms (0-forms, 1-forms, ...), or if you want, scalar-valued forms. D acts on vector-valued forms. These are vectors whose components are forms (think of a column vector, each entry of which is a form). Or, equivalently, forms which take on vector values, i.e. when you act the form on a vector, it gives you back a vector, not a scalar.
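The dd=0 identity is easy to check symbolically in coordinates: for a 0-form f, (ddf)_{ij} = ∂_i ∂_j f − ∂_j ∂_i f, which vanishes by equality of mixed partials. A minimal sympy sketch (my own illustration; the choice of f is arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
f = sp.exp(x) * sp.sin(y * z)  # an arbitrary smooth 0-form

# df is the 1-form with components (df)_i = d_i f
df = [sp.diff(f, c) for c in coords]

# (d(df))_{ij} = d_i (df)_j - d_j (df)_i, which vanishes by symmetry of mixed partials
ddf = [[sp.simplify(sp.diff(df[j], coords[i]) - sp.diff(df[i], coords[j]))
        for j in range(3)] for i in range(3)]

assert all(c == 0 for row in ddf for c in row)  # dd = 0, componentwise
```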
 
  • #54
Maybe a basic question here would be: can \nabla f be considered as a covector-valued 0-form? If so, then I think the use of the exterior covariant derivative (D) is justified.
 
  • #56
TrickyDicky said:
Maybe a basic question here would be: can \nabla f be considered as a covector-valued 0-form? If so, then I think the use of the exterior covariant derivative (D) is justified.

Yes, \nabla f is an element of \Omega^0(M, T^*M)=\Gamma(T^*M)=\Omega^1(M) and you can apply D to it. But on 0-(vector-valued-)forms, D is just \nabla, so you get \nabla^2f\in \Omega^1(M,T^*M)=T_0^2M, the so-called covariant hessian of f.

Its action on vectors is as follows: (\nabla^2f)(X,Y)=Y(Xf)-(\nabla_YX)f.
 
  • #57
quasar987 said:
Yes, \nabla f is an element of \Omega^0(M, T^*M)=\Gamma(T^*M)=\Omega^1(M) and you can apply D to it. But on 0-(vector-valued-)forms, D is just \nabla, so you get \nabla^2f\in \Omega^1(M,T^*M)=T_0^2M, the so-called covariant hessian of f.

Its action on vectors is as follows: (\nabla^2f)(X,Y)=Y(Xf)-(\nabla_YX)f.

Thanks. Can we then apply the Hodge dual to D(\nabla f) and get the curl (always assuming we are using a Levi-Civita connection on a curved 3-manifold to perform the exterior covariant derivative)?
 
  • #58
The Hodge dual maps 2-forms to 1-forms (in three dimensions). Unfortunately, \nabla^2f is not a 2-form; it is a general covariant 2-tensor. In fact, when the connection is symmetric, as is the case for the Levi-Civita connection, \nabla^2f is symmetric (as is the Hessian in R³, by symmetry of the mixed partials!).

So in the Riemannian case, we can't even hope to antisymmetrize \nabla^2f and then apply Hodge: we'll get 0 all the time... :(
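The symmetry claim can be checked symbolically in components: (\nabla^2f)_{ij} = \partial_i\partial_j f - \Gamma^k_{ij}\,\partial_k f, which is symmetric whenever the Christoffel symbols are. A sympy sketch in polar coordinates on the flat plane (my own example; the metric and f are illustrative choices):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.diag(1, r**2)  # flat metric in polar coordinates
ginv = g.inv()
n = 2

# Christoffel symbols of the Levi-Civita connection:
# Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[k, l] *
               (sp.diff(g[j, l], coords[i]) + sp.diff(g[i, l], coords[j])
                - sp.diff(g[i, j], coords[l]))
               for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

f = r**2 * sp.cos(th)  # an arbitrary scalar

# covariant Hessian: (nabla^2 f)_{ij} = d_i d_j f - Gamma^k_{ij} d_k f
hess = sp.Matrix(n, n, lambda i, j: sp.simplify(
    sp.diff(f, coords[i], coords[j])
    - sum(Gamma[k][i][j] * sp.diff(f, coords[k]) for k in range(n))))

assert hess == hess.T  # symmetric, as claimed for a symmetric connection
```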
 
  • #59
quasar987 said:
The Hodge dual maps 2-forms to 1-forms (in three dimensions). Unfortunately, \nabla^2f is not a 2-form; it is a general covariant 2-tensor. In fact, when the connection is symmetric, as is the case for the Levi-Civita connection, \nabla^2f is symmetric (as is the Hessian in R³, by symmetry of the mixed partials!).

So in the Riemannian case, we can't even hope to antisymmetrize \nabla^2f and then apply Hodge: we'll get 0 all the time... :(

Yes, you are right. :frown:
Does this have anything to do with the fact that curls refer to infinitesimal rotations?
 
  • #60
I don't know... It seems like the situation so far is the following:

Given M a manifold with a connection \nabla, if we naively define the curl of a 1-form A\in\Omega^1(M) by \mathrm{curl}(A):=\mathrm{Ant}(\nabla A)\in \Omega^2(M), then there are 2 cases:

i) if the connection is symmetric (i.e. torsion-free), curl(A) = dA (modulo a multiplicative constant). But this is actually independent of the connection and can be defined on any manifold, with or without a connection. And we know that in the Levi-Civita case, this is indeed the curl as defined by Matterwave, which coincides with the actual curl in the R³ case. In this case, curl o grad = 0. So at this point, we are given the option to revise our definition of curl, and we may choose to say it's just "d" after all.

ii) If the connection has torsion, then \mathrm{curl}(A)=dA - \tau(A,\cdot,\cdot), where \tau(A,X,Y)=A(\nabla_XY-\nabla_YX-[X,Y]) is the torsion \left( \begin{array}{c} 2 \\ 1 \end{array}\right)-tensor. In coordinates, this is

\mathrm{curl}(A) = (\partial_iA_j - \partial_jA_i) - (\Gamma_{ij}^k - \Gamma_{ji}^k)A_k

(mod constant). So, if we decide to stick with the covariant definition of curl, then curl is only defined on manifolds with connections, and it has the peculiarity that the failure of curl o grad to vanish is a measure not of the curvature of the connection, but rather of its torsion.
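The coordinate formula above makes this easy to see symbolically: for A = df, the partial-derivative part cancels, so what survives of curl(grad f) is exactly the antisymmetric part of the connection coefficients. A sympy sketch with a toy torsionful connection on R² (the connection and f are my own illustrative choices, not from the thread):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
n = 2

# A toy connection with torsion: only Gamma^x_{xy} = 1, Gamma^x_{yx} = 0.
# (An assumption chosen purely to make the antisymmetric part nonzero.)
Gamma = [[[0] * n for _ in range(n)] for _ in range(n)]
Gamma[0][0][1] = 1  # Gamma^x_{xy}

f = x * y                        # arbitrary scalar; A = grad f = df
A = [sp.diff(f, c) for c in coords]

# curl(A)_{ij} = (d_i A_j - d_j A_i) - (Gamma^k_{ij} - Gamma^k_{ji}) A_k
curl = [[sp.simplify(sp.diff(A[j], coords[i]) - sp.diff(A[i], coords[j])
         - sum((Gamma[k][i][j] - Gamma[k][j][i]) * A[k] for k in range(n)))
         for j in range(n)] for i in range(n)]

# The mixed-partials part cancels, so curl(grad f) measures pure torsion:
# curl(grad f)_{xy} = -(Gamma^x_{xy} - Gamma^x_{yx}) * f_x = -y, nonzero.
assert curl[0][1] == -y and curl[1][0] == y
```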
 
  • #61
quasar987 said:
I don't know... It seems like the situation so far is the following:

Given M a manifold with a connection \nabla, if we naively define the curl of a 1-form A\in\Omega^1(M) by \mathrm{curl}(A):=\mathrm{Ant}(\nabla A)\in \Omega^2(M), then there are 2 cases:

i) if the connection is symmetric (i.e. torsion-free), curl(A) = dA (modulo a multiplicative constant). But this is actually independent of the connection and can be defined on any manifold, with or without a connection. And we know that in the Levi-Civita case, this is indeed the curl as defined by Matterwave, which coincides with the actual curl in the R³ case. In this case, curl o grad = 0. So at this point, we are given the option to revise our definition of curl, and we may choose to say it's just "d" after all.
Why just d? Because we are applying it only to 1-forms, and dA is a 2-form, but we want a 1-form as the result. Actually, Matterwave (and tiny-tim and Wikipedia) defined it as *d. If curl is just d, we should admit that for a curved space curl is just D, which I don't think is correct. BTW, what is Ant?
quasar987 said:
ii) If the connection has torsion, then \mathrm{curl}(A)=dA - \tau(A,\cdot,\cdot), where \tau(A,X,Y)=A(\nabla_XY-\nabla_YX-[X,Y]) is the torsion \left( \begin{array}{c} 2 \\ 1 \end{array}\right)-tensor. In coordinates, this is

\mathrm{curl}(A) = (\partial_iA_j - \partial_jA_i) - (\Gamma_{ij}^k - \Gamma_{ji}^k)A_k

(mod constant). So, if we decide to stick with the covariant definition of curl, then curl is only defined on manifolds with connections, and it has the peculiarity that the failure of curl o grad to vanish is a measure not of the curvature of the connection, but rather of its torsion.
Hmm, this is a pretty intriguing result; however, I was limiting my explorations to Riemannian manifolds with a symmetric connection.
 
  • #62
Ant, also written Alt, is the antisymmetrization operator. It takes a tensor T and spits out an antisymmetric one. If T is already antisymmetric, Ant(T)=T. If T is symmetric, Ant(T)=0. See http://en.wikipedia.org/wiki/Wedge_product#The_alternating_tensor_algebra.

OK, maybe there was a little too much I neglected to say in my last post for it to be intelligible. Let me try again...

Recall: in posts #54-59, we were contemplating defining (in the Riemannian context) the curl of a gradient covector field \nabla f by considering D(\nabla f)=\nabla^2f and then going down to 1-forms using Hodge duality. But then I said that this doesn't quite make sense, because \nabla^2f is not a 2-form. If we want to use Hodge on it, we need to antisymmetrize it first.

Ok, but this discussion focuses on the curl of a gradient. This restriction is not necessary. More generally, we can define the curl of any 1-form A as \mathrm{curl}(A):=*\mathrm{Ant}(\nabla A) (or \mathrm{curl}(A)=*\mathrm{Ant}(D A) if you prefer). It turns out (after computing) that \mathrm{Ant}(\nabla A)=dA, so that our definition is just \mathrm{curl}(A)=*dA after all. So our attempt at a definition of curl via the covariant derivative landed us on the same formula as the one from tiny-tim's post where he defines curl(A) as *dA. In particular, curl o grad = 0 always with our definition.

So that's a little disappointing. But then I noticed that you can back up a little and consider the more general situation of a manifold M with just a connection on it. We may generalize our notion of curl by setting \mathrm{curl}(A):=\mathrm{Ant}(\nabla A). Of course, in this setting we don't have the luxury of a Hodge star *, so we must be content with curl being a 2-form.

Then I noticed that if \nabla is symmetric, then \mathrm{curl}(A)=dA. So modulo a star on the left, this is the same as in the Riemannian case. Also, by setting \mathrm{grad}(f):=\nabla f=df in this setting, we have the familiar curl o grad = 0.

However, if \nabla is not symmetric, then curl(A)=dA - τ(A, , ) where τ is the torsion tensor. And so, we see that curl o grad ≠ 0 in general. In fact, curl o grad = 0 iff τ=0.

Like you, I found this an interesting and intriguing observation!
 
  • #63
Nice recap. Looking back, I should have realized it when Matterwave made clear in a previous post that the formula I posted generalizing the curl was equal to zero with the symmetric connection, which implied it wasn't zero if the connection was asymmetric. But as I said, I was concentrating only on the symmetric case and was happier thinking there was an error somewhere.
 
  • #64
And it is interesting to see that a quick Google search shows this result has been used by physicists (even by Einstein in 1928, "in an attempt to match torsion to the electromagnetic field tensor," according to Wikipedia) to try to come up with some kind of unified field theory of EM and gravity.
 
  • #65
Really! Do you know what 'unified field theory' means? What was Einstein trying to do, exactly?
 
  • #66
quasar987 said:
Really! Do you know what 'unified field theory' means? What was Einstein trying to do, exactly?

Are you kidding? Physicists have been trying to geometrically unify electromagnetism and gravitation (more recently, QM and GR) for ages.
Einstein was obsessed with this from 1915 until his death, without any success. I don't remember the details (it's been some time since I've read Einstein's biographies); the quote in my last post was from the first paragraph of the Wikipedia page on Einstein-Cartan theory.
 
  • #67
But what was Einstein trying to do, exactly? Like, mathematically speaking, do you know what it is that he was trying to construct or find?
 
  • #68
I'm pretty fuzzy on this history, but Einstein was basically trying to find one unifying structure which would describe both gravity and electromagnetism (at the time, these were the only two forces known). Einstein thought the best candidate was a unified (classical) field theory (as opposed to a quantum field theory). This means he was trying to find some field, permeating all of spacetime, which would describe gravity on the one hand and electromagnetism on the other.

It would be elegant if, like the case for electricity and magnetism, these forces were just the 2 sides of the same coin.

Einstein and Maxwell basically succeeded in unifying electricity and magnetism, showing that they basically combine in 4-D spacetime to form a 2-form field (the Faraday tensor).

I believe Einstein was thinking that one could combine gravity into that picture as well.

In fact, Einstein was quite excited for Kaluza who came up with just such a theory (today called Kaluza-Klein theory). Einstein prompted Kaluza (who was sort of a recluse) to publish his theory. The problem with Kaluza's theory was that it involved an extra spatial dimension (the curvature in which, almost magically, reproduces the Maxwell equations!), but he was unable to explain why this dimension was not accessible/observable to us. Klein later remedied this by postulating that the extra spatial dimension was a tiny compact dimension (obeyed periodic boundary conditions).

There are still other problems with the theory of Kaluza and Klein. Being a classical theory, it does not adequately describe the quantum world.

Einstein was never a fan of quantum mechanics, but, it would seem, it's getting more and more difficult to come up with a picture of the universe which is NOT fundamentally quantum. This may have been the great difficulty which Einstein could not overcome. I believe he was always searching for a classical unified field theory, when, as best as we can tell, any unified field theory should be a quantum one.
 
  • #69
Einstein and Maxwell basically succeeded in unifying electricity and magnetism, showing that they basically combine in 4-D spacetime to form a 2-form field (the Faraday tensor).

I wouldn't really credit Einstein and Maxwell with that. I forget who was involved at the beginning of the story. Then Faraday came up with his law, which gave a relationship between changing magnetic fields and electric fields, and Maxwell showed that something similar held for time-varying electric fields. Then it was really Minkowski (according to Penrose), building on Einstein's work, who came up with the idea of space-time, as well as the Faraday tensor in some form.
 
  • #70
Who to credit with what is always a question in the history of physics. Some people would argue that one should not credit Einstein with SR either, and that it was Poincare and Lorentz who "came up with it". But certainly Einstein was the one to unify the ideas of Length contraction and time-dilation, etc., into a coherent theory.

Maxwell was certainly not the first to see the relationship between electricity and magnetism. Ampère's law, for example, already showed some relationship between currents and magnetic fields. Faraday's law does the same thing, but in the other direction (changing magnetic fields give currents).

Obviously, physics is a group effort, so to credit anyone or two people with the unification of E&M is not simple. I chose Einstein and Maxwell because they represent to me the people who came up with the basic structure upon which the unification could be done.
 
  • #71
I just think Minkowski doesn't get enough credit, particularly in the sentence I was referring to. He came up with space-time, completing Einstein's theory.
 
  • #72
Since this thread is still open, I'd like to ask something relevant too!

So let's get back to page 1, shall we? :rolleyes: I mean, before all the exterior algebra discussion came up.

What is actually the physical meaning of a field being both solenoidal and conservative? Is there any famous field with that property in classical theory? What is it like to possess two potentials (of different natures), after all? :-p
Let us stick to the 3D case...
 
  • #73
Since this thread is still open, I'd like to ask something relevant too!

So let's get back to page 1, shall we? I mean, before all the exterior algebra discussion came up.

What is actually the physical meaning of a field being both solenoidal and conservative? Is there any famous field with that property in classical theory? What is it like to possess two potentials (of different natures), after all?
Let us stick to the 3D case...

Yes, that means the scalar potential is a solution of the Laplace equation. The scalar potential exists because it's conservative and solenoidal then says it's a solution of the Laplace equation. The Laplace equation is pretty interesting, physically. Basically, it comes up when there are no sources or sinks for your vector field. So, for example, the steady state solutions (wait until it approaches an equilibrium) of the heat equation satisfy the Laplace equation, except at places where heat is being pumped in or flowing out of your system. Alternatively, you could think of it as the velocity vector field of an incompressible fluid, where no fluid is being pumped in or out anywhere. Or a static electric field, outside of the places where the charge lies.
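A minimal symbolic illustration of this (my own example, not from the thread): take a harmonic potential, and check that its gradient field is automatically both conservative (by construction) and solenoidal (because the potential solves Laplace's equation).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A harmonic scalar potential (a steady-state temperature, say): Laplace's eq. holds
phi = x**2 - y**2
assert sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2) == 0

# Its gradient field is conservative by construction...
v = [sp.diff(phi, c) for c in (x, y, z)]  # (2x, -2y, 0)

# ...and solenoidal precisely because phi is harmonic: div(grad phi) = lap(phi) = 0
div_v = sum(sp.diff(v[i], c) for i, c in enumerate((x, y, z)))
assert sp.simplify(div_v) == 0
```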
 
  • #74
homeomorphic said:
Yes, that means the scalar potential is a solution of the Laplace equation. The scalar potential exists because it's conservative and solenoidal then says it's a solution of the Laplace equation. The Laplace equation is pretty interesting, physically. Basically, it comes up when there are no sources or sinks for your vector field. So, for example, the steady state solutions (wait until it approaches an equilibrium) of the heat equation satisfy the Laplace equation, except at places where heat is being pumped in or flowing out of your system. Alternatively, you could think of it as the velocity vector field of an incompressible fluid, where no fluid is being pumped in or out anywhere. Or a static electric field, outside of the places where the charge lies.
To my knowledge, the static electric field has never been ascribed a vector potential even in those regions. Neither does a vector potential for the temperature gradient make any sense at all. I am asking about a physically observable vector field with the property of TWO potentials, one scalar and one vectorial.

Moreover, if I understand correctly, your assertion is: a function being harmonic implies that its gradient possesses a vector potential? Can you prove that? I am not sure...
 
  • #75
Haven't read most of this thread, but there is no need to be talking about covariant exterior derivatives. The most natural extension of curl to general manifolds is simply d, when it acts on 1-forms.

dd = 0 is the n-dimensional analogue of "div curl = 0" and "curl grad = 0".

If you have a vector field you want to take the curl of, you should first use the metric to turn it into a covector field, take d, and then use the metric to turn the result into a 2-contravariant-index object. Or you can do all of this in one step:

(\mathrm{curl} \; X)^{\mu \nu} = \nabla^\mu X^\nu - \nabla^\nu X^\mu
This antisymmetric 2-index object describes the "circulation" of the vector field X. Circulation happens in 2-planes, which is why the object must have 2 indices. The object must be antisymmetric, because the infinitesimal generators of rotations are antisymmetric. In 3 dimensions, we are able to take advantage of Hodge duality to map it onto a 1-index object, but this does not apply in general.
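In flat R³ with Cartesian coordinates, \nabla^\mu = \partial^\mu, so the 2-index circulation object and its Hodge dual are easy to compute explicitly. A sympy sketch (the vector field is an arbitrary illustrative choice) showing that contracting with the Levi-Civita symbol recovers the ordinary curl vector:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
X = [y * z, -x * z, x * y]  # an arbitrary vector field (illustrative choice)

# (curl X)^{mu nu} = d_mu X_nu - d_nu X_mu  (flat Cartesian, so nabla = partial)
curl2 = sp.Matrix(3, 3, lambda m, n: sp.diff(X[n], coords[m]) - sp.diff(X[m], coords[n]))
assert curl2 == -curl2.T  # antisymmetric, as required

# Hodge duality in 3d: contract with the Levi-Civita symbol to get the usual curl
curl_vec = [sp.simplify(sp.Rational(1, 2) * sum(
    sp.LeviCivita(i, m, n) * curl2[m, n] for m in range(3) for n in range(3)))
    for i in range(3)]
assert curl_vec == [2 * x, 0, -2 * z]  # matches the classical curl of X
```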

(Note: I have assumed zero torsion here. In the presence of torsion, I think I agree with Quasar, but I'd have to think about it. The point is, you want to measure the "circulation" of a vector field, and you have to define what that means.)

The next thing you might be interested in is the Hodge decomposition. The Hodge decomposition states that any n-form \eta can be written as

\eta = d \alpha + \star \, d \star \beta + \gamma
where \alpha is an (n-1)-form, \beta is an (n+1)-form, and \gamma is a harmonic n-form. Harmonic means it is smooth everywhere and solves Laplace's equation:

d \star d \star \gamma + \star \, d \star d \gamma = 0
Since any antisymmetric upper-index tensor can be mapped (via the metric) into an n-form, the Hodge decomposition applies to multivectors as well. In \mathbb{R}^3, there are no harmonic forms, so the Hodge decomposition reduces to the Helmholtz decomposition,

\vec V = \vec \nabla \phi + \vec \nabla \times \vec A
for any vector field \vec V.
 
  • #76
Ben, can you comment on (or link to a proof of) the fact that R^3 has no harmonic forms?
 
  • #77
Harmonic forms correspond to nontrivial cohomology, which R^3 has none of.

Or look at it this way: A harmonic form must solve Laplace's equation, have no singularities, and vanish at infinity. The only such form is the zero form.

As an example of a space that does have harmonic forms: Take two copies of R^3. Cut out the interior of a ball from each. Glue them together along the spherical boundaries we've just created. This makes a manifold with two asymptotic regions, with a tube in the center joining them. Such a space will have one harmonic 2-form (modulo exact forms). It will also have a harmonic 1-form Hodge-dual to this.
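The correspondence between harmonic forms and nontrivial cohomology can also be illustrated in a lower-dimensional example (mine, not from the post): the angle form on the punctured plane is closed and co-closed, hence harmonic, yet not exact. A sympy check:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2

# The angle form d(theta) = (-y dx + x dy) / (x^2 + y^2) on R^2 minus the origin:
# closed and co-closed (hence harmonic), but not exact. It detects the
# nontrivial first cohomology of the punctured plane.
wx = -y / r2
wy = x / r2

closed = sp.simplify(sp.diff(wy, x) - sp.diff(wx, y))    # coefficient of d(omega)
coclosed = sp.simplify(sp.diff(wx, x) + sp.diff(wy, y))  # delta(omega), flat metric
assert closed == 0 and coclosed == 0
```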
 
  • #78
Take the following field for instance:
\vec{v}(x,y,z)=(x+y-z,\ x+y,\ -x-2z)^T
It is both solenoidal and conservative.

Are there any similar fields in physics?
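The claims about this field are easy to verify symbolically; the scalar potential \phi below is one choice I computed (unique up to a constant), and it is indeed harmonic:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
v = [x + y - z, x + y, -x - 2 * z]

# solenoidal: the divergence vanishes
div = sum(sp.diff(v[i], c) for i, c in enumerate((x, y, z)))
assert div == 0

# conservative: the curl vanishes componentwise
curl = [sp.diff(v[2], y) - sp.diff(v[1], z),
        sp.diff(v[0], z) - sp.diff(v[2], x),
        sp.diff(v[1], x) - sp.diff(v[0], y)]
assert curl == [0, 0, 0]

# so v = grad(phi) with phi a solution of Laplace's equation
phi = x**2 / 2 + x * y - x * z + y**2 / 2 - z**2
assert [sp.diff(phi, c) for c in (x, y, z)] == v
assert sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2) == 0
```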
 
  • #79
Ah, in light of Trifis' comment, I should say that on non-compact manifolds, the Hodge decomposition only applies to forms with suitable vanishing conditions at infinity. Trifis' vector field is harmonic! (This is what "both conservative and solenoidal" means). But it does not obey the necessary fall-off conditions at infinity.

In general, "harmonic" means "both closed and co-closed", which is the higher-dimensional analogue of "both conservative and solenoidal". Harmonic forms on higher-dimensional spaces are very important in differential topology, because they are closely related to Betti numbers.

In physics, such things turn up in string theory, where 6 of the 10 dimensions might be wrapped up in a Calabi-Yau manifold. We don't have explicit metrics for most Calabi-Yau manifolds, but we do have information about their Hodge numbers (and hence Betti numbers). So we know something about the number of distinct harmonic forms that exist.
 
  • #80
Once again, not every harmonic field is conservative. Consider another trivial example:
(x+y,y-z,z+x)
This field obviously has a non-vanishing curl, BUT it is harmonic, since its Laplacian is the zero vector.
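Both claims check out symbolically (a quick sympy verification of the example above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
w = [x + y, y - z, z + x]

# non-vanishing curl: the field is NOT conservative
curl = [sp.diff(w[2], y) - sp.diff(w[1], z),
        sp.diff(w[0], z) - sp.diff(w[2], x),
        sp.diff(w[1], x) - sp.diff(w[0], y)]
assert curl == [1, -1, -1]

# ...yet it is harmonic: the componentwise Laplacian is the zero vector
lap = [sp.diff(c, x, 2) + sp.diff(c, y, 2) + sp.diff(c, z, 2) for c in w]
assert lap == [0, 0, 0]
```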

It seems to me that harmonic functions are not related to the question.

PS: An addendum to my previous comment. The \vec{v} field is special because its decomposition consists EITHER of the curl of its vector potential alone OR of the gradient of its scalar potential alone. In other words, this field gets to choose a standalone, unique (if the field vanishes at infinity) component, between two different kinds of potentials!
The physical meaning behind this is all I am looking for.
 
  • #81
That's what I thought: on non-compact manifolds, the Hodge decomposition doesn't follow trivially (or even at all), so it's not necessarily true that the Hodge decomposition is a generalization of the Helmholtz decomposition.
 
  • #82
Ben's covered things pretty well, but I think I can add a little bit more geometric perspective.

In geometric calculus, the analogue of ##dd=0## is ##\nabla \cdot \nabla \cdot A = 0## and ##\nabla \wedge \nabla \wedge A= 0## for any multivector field ##A##.
This comes from the equality of mixed partial derivatives.

When Ben talks about Hodge duality, what this means is that the true object formed by the exterior derivative, ##\nabla \wedge X## as it would be called, is a bivector field, a field of oriented planes, and the normal to each such plane is what we usually call the curl. As was said, there is no unique vector normal to a plane in a space of arbitrary dimension, which is why curl doesn't generalize.

The geometric calculus analogue to the Hodge decomposition is

$$\eta = \nabla \wedge \alpha + \nabla \cdot \beta + \gamma$$

Laplace's equation condition for ##\gamma## can also be simplified somewhat:

$$\nabla \cdot \nabla \wedge \gamma + \nabla \wedge \nabla \cdot \gamma = \nabla^2 \gamma = 0$$

Now, why does this decomposition work? It's because ##\nabla## is invertible under the geometric product--it admits a Green's function. In ordinary 3d space, this Green's function is the familiar ##G(r) = r/(4\pi|r|^3)##, as seen everywhere in electromagnetic theory, which uses it heavily.

Generally, you can decompose the derivative as follows:

$$\nabla \eta = \nabla \cdot \eta + \nabla \wedge \eta = \lambda + \mu$$

Where ##\lambda## is one grade lower (e.g if ##\eta## were a vector, ##\lambda## would be a scalar) and ##\mu## is one grade higher (e.g. a bivector, grade 2).

You can then use the Green's function for the vector derivative to solve for ##\eta##: (note that ##dS', dV'## are multivector measures, not vectors or scalars)

$$\begin{align*}
\oint_{\partial M} G(r-r') dS' \; \eta(r') &= \int_{M} G(r-r')( \nabla' dV') \; \eta(r') \\
&= -\int_{M} \delta(r-r') dV' \eta(r') + \int_M G(r-r') (\nabla' \eta(r')) \\
&= - i \eta(r) + \int_M G(r-r') \; dV'\; \lambda(r') + \int_M G(r-r') \; dV' \; \mu(r')\end{align*}$$

If ##\nabla \eta## were 0, then the value of ##\eta(r)## would be determined everywhere by the surface integral on the far left. This generalizes the Cauchy integral theorem to arbitrary dimensions. Complex analytic functions are ones which, in real vector analysis, have ##\nabla \eta = 0##, both divergence and curl, so to speak. Rearranging, we get

$$i\eta(r) = -\oint_{\partial M} G(r-r') \; dS' \; \eta(r') + \int_M G(r-r') \; dV' \; \nabla' \cdot \eta(r') + \int_M G(r-r') \; dV' \; \nabla' \wedge \eta(r')$$

This is, of course, just a fancy version of the Helmholtz decomposition.
 
  • #83
Every linear differential equation has a Green's function; that is not really relevant for the Hodge decomposition. What is relevant is the inner product defined on \Omega^p T^*M (for M compact, or with suitable fall-off conditions to make this finite):

(\alpha, \beta) \equiv \int_M \alpha \wedge \star \beta
You can show without too much work that any p-form has an orthogonal decomposition with respect to this inner product, which is the Hodge decomposition. The Laplace operator

\Delta \equiv d \delta + \delta d
turns up because it is self-adjoint with respect to this inner product. Here

\delta \equiv (-1)^s \star d \star
where (-1)^s is a sign I can't remember at the moment. It depends on the degree p and the dimension n. The \delta operator corresponds to Muphrid's \nabla \cdot operator.

P.S. Muphrid, can you fix your long formulas so you don't force the page to have horizontal scroll bars? Try using \begin{split} or \begin{align}.
 
  • #84
I don't know; I can see why you're saying the Green's function ought not to matter, but I'm not so sure. Looking over the wiki page on Helmholtz, it seems clear that Green's functions are pivotal there, and I don't feel like there should be a vast difference between that and Hodge.

I think I realize I was going slightly in the wrong direction, so let me try something different. Let's start with the result I had before:

$$i \eta =-\oint_{\partial M} G(r-r') \; dS' \eta(r') + \int_{M} G(r-r') \; dV' \nabla' \eta(r')$$

Let ##\nabla^2 H = \delta(r)## be a Green's function for the Laplacian. Then ##\nabla H = G##. Because all the ##\eta## are functions of ##r'##, not ##r##, we can do

$$i \eta = -\nabla \oint_{\partial M} H(r-r') \; dS' \eta(r') + \nabla \int_{M} H(r-r') \; dV' \; \nabla' \eta(r')$$

The usual Helmholtz decomposition separates the terms by grade. At this point, I'm just going to choose ##\eta## as a vector field for simplicity.

$$\begin{align*}i A &= - \oint_{\partial M} H(r-r') \; dS' \cdot \eta(r') + \int_M H(r-r') \; dV' \; [\nabla' \wedge \eta(r')] \\
i \phi &= -\oint_{\partial M} H(r-r') \; dS' \wedge \eta(r') +\int_M H(r-r') \; dV' \; [\nabla' \cdot \eta(r')]\end{align*}$$

Then ##\eta = \nabla \phi + \nabla \cdot A## as expected.

However: if you instead keep the surface integrals separate from the volume integrals, you get a different decomposition.

$$\begin{align*}
i \alpha &= \int_M H(r-r') \; dV' \; [\nabla' \cdot \eta(r')] \\
i \beta &= \int_M H(r-r') \; dV' \; [\nabla' \wedge \eta(r')] \\
i \gamma &= -\nabla \oint_{\partial M} H(r-r') \; dS' \eta(r')
\end{align*}$$

And you get ##\eta = \nabla \wedge \alpha + \nabla \cdot \beta + \gamma##. It seems sort of arbitrary to break it down this way, but it is guaranteed that ##\nabla^2 \gamma = 0## (and in fact, ##\nabla \gamma = 0##, I think).

I'm not sure if this is actually meaningful or more meandering, but I thought it was an interesting way to connect the Helmholtz decomposition to something that looks like the Hodge.
 
  • #85
To my knowledge, the static electric field has never been ascribed a vector potential even in those regions. Neither does a vector potential for the temperature gradient make any sense at all. I am asking about a physically observable vector field with the property of TWO potentials, one scalar and one vectorial.

What is it that allows you to have a magnetic vector potential? It's the fact that the magnetic field is divergence free. So, what's wrong with the electric field having a vector potential if it is divergence free?

http://en.wikipedia.org/wiki/Solenoidal_vector_field

Why wouldn't a vector potential for the temperature gradient make sense? It's just a vector field. Does it have a good physical meaning? Maybe not. I was just talking about what it's like for a vector field to be solenoidal and conservative, not what it's like if it has two different potentials.

Moreover, if I understand correctly, your assertion is: a function being harmonic implies that its gradient possesses a vector potential? Can you prove that? I am not sure...

Yes. If it's harmonic, that just means the gradient is divergence-free. The Laplacian is div grad of the function. In ℝ^3, that means it has a vector potential. However, the mere fact that there is a vector potential doesn't imply that we should care about or use the vector potential, unless there is some reason to care about it, which is the case for the magnetic field.
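The existence claim can be made concrete with the standard Poincaré-lemma homotopy formula on a star-shaped domain: for divergence-free F, A(r) = \int_0^1 t\,F(tr)\times r\,dt satisfies curl A = F. A sympy sketch (the harmonic \phi and this particular formula are my illustration, not from the thread):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
t = sp.symbols('t', positive=True)

phi = x**2 - y**2                          # a harmonic function
F = [sp.diff(phi, c) for c in (x, y, z)]   # gradient (2x, -2y, 0), divergence-free

def at_t(expr):
    """Evaluate an expression at the scaled point t*r."""
    return expr.subs([(x, t * x), (y, t * y), (z, t * z)], simultaneous=True)

# Homotopy formula: A(r) = Integral_0^1 t * F(t r) x r dt, valid when div F = 0
Ft = [at_t(c) for c in F]
r = [x, y, z]
cross = [Ft[1] * r[2] - Ft[2] * r[1],
         Ft[2] * r[0] - Ft[0] * r[2],
         Ft[0] * r[1] - Ft[1] * r[0]]
A = [sp.integrate(t * c, (t, 0, 1)) for c in cross]

# Verify curl A = F, i.e. A really is a vector potential for the gradient field
curlA = [sp.diff(A[2], y) - sp.diff(A[1], z),
         sp.diff(A[0], z) - sp.diff(A[2], x),
         sp.diff(A[1], x) - sp.diff(A[0], y)]
assert [sp.simplify(ci - fi) for ci, fi in zip(curlA, F)] == [0, 0, 0]
```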
 
  • #86
I think Trifis might be confused by the fact that in R^3 there are harmonic functions but no harmonic forms, so if he is thinking in terms of the usual scenario, like that of the Maxwell equations or classical physics, there are no physical examples of Laplacian vector fields. That's why this question is well placed in the math subforums.
 
  • #87
I take it this distinction between harmonic functions vs. forms has to do with the domain on which the function is valid? It does seem like one can construct a harmonic function by a surface integral, but the function is not necessarily harmonic everywhere, only in some region of interest.

At any rate, the theory of harmonic functions is very rich. As I alluded to earlier, in geometric calculus the Laplace condition can be replaced by a stronger, first-order condition ##\nabla A \equiv \nabla \cdot A + \nabla \wedge A = 0##. Such functions are called monogenic functions. You might notice that, using the decomposition posted earlier, this means for any ##\nabla \eta = 0##,

$$\eta(r) = -i^{-1} \oint_{\partial M} G(r-r') \; dS' \; \eta(r')$$

The value of the function at any given point is entirely determined by its values on some surface.

...wait, haven't we heard that before? Replace surface with curve, and you have a well-known result from complex analysis. Let me rewrite the 2d case:

$$\eta(r) = - \frac{1}{2\pi i} \oint_{C} \frac{1}{r-r'} \; dS' \; \eta(r')$$

It's the Cauchy integral formula for a complex analytic function, but it's now neatly connected to a 3d counterpart.

$$\eta(r) = - i^{-1} \oint_{S} \frac{r-r'}{4\pi|r-r'|^3} \; dS' \; \eta(r')$$

The only difference between spaces is the form of the Green's function for ##\nabla##. It's important to note that, in regions without charge, the electric field is monogenic, and as such, the electric field inside a region is entirely determined by its values on a bounding surface. Going on to special relativity, the electromagnetic field bivector ##F## is also monogenic when there are no charges or currents, though the meaning of a bounding hypersurface is somewhat more complicated to deal with.
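
The 2d statement is easy to test numerically. Here is a minimal sketch (numpy, with an arbitrary entire function, contour, and test point chosen purely for illustration) that approximates the Cauchy integral formula by sampling the circle:

```python
# Sketch (numpy): boundary values determine interior values for an
# analytic function, via the Cauchy integral formula.
import numpy as np

def cauchy_integral(f, a, center=0.0, radius=2.0, n=4000):
    """Approximate (1/(2*pi*i)) * contour integral of f(z)/(z - a) dz
    around a circle; for analytic f and a inside, this equals f(a)."""
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    z = center + radius*np.exp(1j*t)          # sample points on the contour
    dz = 1j*radius*np.exp(1j*t)*(2*np.pi/n)   # dz at each sample
    return np.sum(f(z)/(z - a)*dz) / (2j*np.pi)

f = lambda w: w**2 + 3*w + 1                  # an entire function
a = 0.5 + 0.3j                                # point inside the contour
assert abs(cauchy_integral(f, a) - f(a)) < 1e-10
```

The trapezoidal rule converges very fast for smooth periodic integrands, so even a modest number of samples reproduces the interior value essentially to machine precision.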
 
  • #88
homeomorphic said:
Yes. If it's harmonic, that just means the gradient is divergence-free. The Laplacian is div grad of the function. In ℝ^3, that means it has a vector potential. However, the mere fact that there is a vector potential doesn't imply that we should care about or use the vector potential, unless there is some reason to care about it, which is the case for the magnetic field.
Mea culpa. What I meant to write is: "a function being harmonic implies that its gradient possesses a scalar potential?" And the answer is no, as demonstrated in post #80.
So not every harmonic field has two kinds of potentials!

TrickyDicky said:
I think Trifis might be confused by the fact that in R^3 there are harmonic functions but no harmonic forms, so if he is thinking in terms of the usual scenario, like that of Maxwell's equations or classical physics, there are no physical examples of Laplacian vector fields. That's why this question is well placed in the math subforums.
Post #72 :
"What is actually the physical meaning of having a field, both solenoidal and conservative? Is there any famous field with that property in classical theory?"


So, in conclusion, there isn't any vector field in classical physics that possesses both scalar and vector potentials in a way that is of physical interest.
 
  • #89
Trifis, consider the scalar potential

##V = \frac{1}{r}##
and the vector potential

##\vec A = \cos \theta \, \hat \phi##
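
Whatever spherical-coordinate convention one prefers for writing the vector potential, the key fact is convention-free and easy to verify with sympy: away from the origin, the Coulomb-type field ##-\nabla V## is both curl-free (it is a gradient) and divergence-free, so it admits both kinds of potential locally.

```python
# Sketch (sympy): E = -grad(1/r) is divergence-free away from the
# origin, and curl-free since it is a gradient.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)  # a chart away from the origin
r = sp.sqrt(x**2 + y**2 + z**2)
E = [-sp.diff(1/r, v) for v in (x, y, z)]     # E = r_hat / r^2

div_E = sp.simplify(sum(sp.diff(E[i], v) for i, v in enumerate((x, y, z))))
curl_E = [sp.simplify(sp.diff(E[2], y) - sp.diff(E[1], z)),
          sp.simplify(sp.diff(E[0], z) - sp.diff(E[2], x)),
          sp.simplify(sp.diff(E[1], x) - sp.diff(E[0], y))]
assert div_E == 0 and curl_E == [0, 0, 0]
```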
 
  • #90
Trifis said:
Once again, not every harmonic field is conservative. Consider another trivial example:
(x+y,y-z,z+x)
This field obviously has a non-vanishing curl BUT it is harmonic, since its Laplacian is the 0 vector.

This is why we should be talking about monogenic (i.e. ##\nabla v \equiv \nabla \cdot v + \nabla \wedge v = 0##) functions instead of harmonic ones. All monogenic functions are harmonic, but not all harmonic functions are monogenic. The monogenic condition is stronger, and it captures the notion that the field must be both divergenceless and curlless in a way that the harmonic condition does not.
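
Trifis's example is easy to check symbolically. A sympy sketch confirms the field is harmonic (each component has vanishing Laplacian) yet has nonzero curl, and in fact nonzero divergence too, so it is neither conservative nor solenoidal, hence not monogenic:

```python
# Sketch (sympy): F = (x+y, y-z, z+x) is harmonic but not monogenic.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = [x + y, y - z, z + x]

lap = [sum(sp.diff(Fi, v, 2) for v in (x, y, z)) for Fi in F]
curl = [sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y)]
div = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

assert lap == [0, 0, 0]        # harmonic, componentwise
assert curl == [1, -1, -1]     # not curl-free, so not conservative
assert div == 3                # not divergence-free either
```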
 
  • #91
@Ben Niehoff Yes you're right, I didn't say there aren't any such fields. I did say that we do not attribute any physical meaning to their vector potentials, as we do with the magnetic vector potential for example...
(Btw I hate the θ convention for the azimuth :P)

@Muphrid hmmm aren't monogenic functions just the generalization of analytic ones to higher dimensions? I might have missed something, but where exactly do you prove that both of their derivatives must be zero in this decomposition?
 
  • #92
Trifis said:
@Muphrid hmmm aren't monogenic functions just the generalization of analytic ones to higher dimensions? I might have missed something, but where exactly do you prove that both of their derivatives must be zero in this decomposition?

Geometric algebra allows us to represent complex numbers as being part of an exterior algebra. Basically, ##w(x,y) = u(x,y) + e^{xy} v(x,y)##, where ##e^{xy}## is a bivector.

Now, take the vector derivative of this object.

$$\nabla w= (e^x \partial_x + e^y \partial_y)w = e^x \left[\partial_x u - \partial_y v\right] + e^y \left[\partial_y u + \partial_x v\right]$$

Setting ##\nabla w = 0## enforces the Cauchy-Riemann conditions for complex differentiability. However, instead of working in the realm of complex analysis, one can factor out ##e^x## on the right to get

$$f = we^x = u e^x - v e^y \implies \nabla f = \nabla w e^x = (\nabla w) e^x = 0$$

(This explains the sign change to the y-component that is often necessary when converting between complex analysis and vector fields.) Regardless, ##f## is a vector field, and the condition for analyticity--for integrability--still holds. As I showed in the decomposition posts above, as long as ##\nabla f = 0##, the function is entirely determined by its values on a closed surface, and there is no need for volume integrals to account for source terms. This is exactly analogous to the properties of complex analytic functions. Hence, ##\nabla f = 0## is the generalization of the Cauchy-Riemann condition not only to a real 2d vector space but to arbitrary dimensions.
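
For a concrete instance, take ##w = (x + iy)^2##, so ##u = x^2 - y^2## and ##v = 2xy##. A sympy sketch confirms that both components of ##\nabla w## vanish, i.e. the Cauchy-Riemann conditions hold:

```python
# Sketch (sympy): the two components of the vector derivative
# (the Cauchy-Riemann conditions) vanish for w = z^2.
import sympy as sp

x, y = sp.symbols('x y')
u, v = x**2 - y**2, 2*x*y                   # real/imag parts of (x+iy)^2

e_x_part = sp.diff(u, x) - sp.diff(v, y)    # coefficient of e^x
e_y_part = sp.diff(u, y) + sp.diff(v, x)    # coefficient of e^y
assert sp.simplify(e_x_part) == 0 and sp.simplify(e_y_part) == 0
```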
 