Separating the Dirac Delta function in spherical coordinates

  • #1
The following integral arises in the calculation of the new density of a non-uniform elastic medium under stress:
∫dx ρ(r,θ)δ(x+u(x)-x')
where ρ is a known mass density and u = ru_r+θu_θ a known vector function of spherical coordinates (r,θ) (no azimuthal dependence). How should the Dirac delta function be separated in spherical coordinates taking into account the u(x) dependence?

Answers and Replies

  • #2
Science Advisor
Insights Author
Gold Member
I am not clear on your notation, especially the vector [itex]\mathbf{u}[/itex] (should that be [itex]r\theta \mathbf{u}_\theta[/itex]?), but I can give you some general advice.

Firstly you can derive the following rule for delta functions:
$$\delta(a(t-t_0)) = \frac{1}{|a|} \delta(t-t_0)$$
by integrating: [itex]\int f(t)\delta(a(t-t_0)) dt[/itex] and carrying out the change of variable.
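As a quick sanity check (not part of the derivation — just a numeric sketch using a narrow Gaussian as a nascent delta, with an arbitrary test function, width, and grid), the scaling rule with its [itex]1/|a|[/itex] factor can be verified by quadrature:

```python
import numpy as np

# Nascent delta: a narrow Gaussian of width eps (eps and the grid below
# are arbitrary choices for this check, not part of the identity).
def delta_eps(s, eps=1e-3):
    return np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

a, t0 = 3.0, 0.7
f = np.cos                       # any smooth test function

t = np.linspace(t0 - 0.1, t0 + 0.1, 200001)
dt = t[1] - t[0]
lhs = np.sum(f(t) * delta_eps(a * (t - t0))) * dt   # ∫ f(t) δ(a(t-t0)) dt
rhs = f(t0) / abs(a)                                # expected: f(t0)/|a|
# lhs and rhs agree up to small corrections in the Gaussian width
```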

Then, in single-variable form, for a monotone function [itex]f[/itex] with a zero at [itex]t_0[/itex] (i.e. [itex]f(t_0)=0[/itex]), linearizing about the root gives
$$\delta(f(t)) = \delta( f(t_0) + (t-t_0) f'(t_0)) = \delta( f'(t_0)(t-t_0)) = \frac{1}{|f'(t_0)|} \delta(t-t_0) $$

If the function inside the delta function crosses zero at several values, you get one such contribution, weighted by [itex]1/|f'|[/itex], at each root.
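A similar hedged numeric sketch (again a Gaussian nascent delta, with a made-up inner function and test function) shows the sum over roots, here for [itex]\delta(t^2-1)[/itex] with roots at [itex]\pm 1[/itex] where [itex]|g'(\pm 1)| = 2[/itex]:

```python
import numpy as np

def delta_eps(s, eps=1e-4):
    return np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

g = lambda t: t**2 - 1.0        # inner function: roots at t = ±1, |g'| = 2 there
phi = lambda t: t + 2.0         # arbitrary smooth test function

t = np.linspace(-2.0, 2.0, 2000001)
dt = t[1] - t[0]
lhs = np.sum(phi(t) * delta_eps(g(t))) * dt
rhs = phi(1.0) / 2.0 + phi(-1.0) / 2.0   # one 1/|g'|-weighted term per root
```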

So now that's delta functions of one variable. When you see a delta function of a position vector, what you really have is product of delta functions for each (rectilinear) coordinate.
$$\delta(\vec{x}-\vec{x}_0) = \delta(x-x_0)\delta(y-y_0)\delta(z-z_0)$$
But we should be able to work out how they behave as a whole under coordinate transformations using change of coordinates in an integral.
$$\iiint \rho(\vec{x}) \delta(\vec{x}-\vec{x}_0) d^3x = \rho(\vec{x}_0)$$
Let [itex]\vec{y}(\vec{x})[/itex] be an invertible function in the neighborhood of [itex]\vec{x}_0[/itex] and then consider:
$$\iiint \rho(\vec{x})\delta(\vec{y} - \vec{y}_0) d^3x = $$
$$=\iiint \rho(\vec{x}(\vec{y}))\delta(\vec{y} - \vec{y}_0) \left| \frac{\partial \vec{x}}{\partial \vec{y}}\right| d^3y =$$
$$=\rho(\vec{x}_0)\cdot \left| \frac{\partial \vec{x}}{\partial \vec{y}} \right|_{\vec{x}=\vec{x}_0}$$
where the multiplier is the Jacobian determinant evaluated at the zero point.

[Edit] Oops forgot the last equation: You thus get:
$$ \delta(\vec{y} - \vec{y}_0) = \frac{\delta(\vec{x} - \vec{x}_0) }{\left|\frac{\partial \vec{y}}{\partial \vec{x}} \right|_{\vec{x}_0}}$$

This is how you would work with this in general and you can well see the analogue with the one variable case.
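To make the rule concrete, here is a numeric sketch (the map, density, grid, and Gaussian width below are all made up for the check): integrating a nascent delta of [itex]\vec{y}(\vec{x})-\vec{y}_0[/itex] over [itex]d^2x[/itex] should return [itex]\rho(\vec{x}_0)[/itex] divided by [itex]\left|\frac{\partial \vec{y}}{\partial \vec{x}}\right|[/itex] at [itex]\vec{x}_0[/itex]:

```python
import numpy as np

def delta_eps(s, eps=1e-2):
    return np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Invertible map y(x) near x0 (made up): y1 = 2*x1 + x2^2, y2 = 3*x2
x0 = np.array([0.5, 0.4])
y0 = np.array([2 * x0[0] + x0[1]**2, 3 * x0[1]])
rho = lambda x1, x2: x1 + 2 * x2 + 1.0      # made-up smooth "density"

x1 = np.linspace(x0[0] - 0.1, x0[0] + 0.1, 1201)
x2 = np.linspace(x0[1] - 0.1, x0[1] + 0.1, 1201)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
Y1, Y2 = 2 * X1 + X2**2, 3 * X2
dA = (x1[1] - x1[0]) * (x2[1] - x2[0])
lhs = np.sum(rho(X1, X2) * delta_eps(Y1 - y0[0]) * delta_eps(Y2 - y0[1])) * dA
# |det dy/dx| = |det [[2, 2*x2], [0, 3]]| = 6 everywhere for this map
rhs = rho(x0[0], x0[1]) / 6.0
```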
  • #3
JAmbaugh - many thanks for the detailed reply! I can see where my notation can be confusing: I use r and θ as unit vectors and (r,θ) to denote an explicit dependence on spherical coordinates.

I am familiar with the Jacobian transformation you describe. Unfortunately it leads to a computationally tedious 2D local inversion problem that I had hoped to avoid. As an alternative, I sought a way to separate the delta function into a product of two delta functions (as you do above for Cartesian coordinates), one for the radial and one for the polar degree of freedom; this would lead to two 1D transformations, an easier task. Since the radial (u_r) and polar (u_θ) components of u both depend on (r,θ), it may well not be possible to do so in a straightforward manner. JCM
  • #4
If the argument is simple (I was not sure about that additional component u(x) in the argument...) then yes there's a straightforward identity:

$$\delta(x-x_0)\delta(y-y_0) = \delta(r-r_0)\delta(r_0\theta - r_0\theta_0)$$
As both are in coordinates which are orthonormal at the singular point [itex]x_0 = r_0\cos(\theta_0), y_0 = r_0\sin(\theta_0)[/itex]. You can then use the scaling behavior to get in terms of standard polar coords:
$$\delta(x-x_0)\delta(y-y_0) = \frac{1}{r_0} \delta(r-r_0) \delta(\theta-\theta_0)$$

I don't know if you had that part already or if it helps. You may just need to crank out the math... do keep in mind, though, that since you have a delta function in the integral, you can evaluate the Jacobian at the singular point of the delta function and pull its value outside the integral.
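Here is a hedged numeric check of that [itex]1/r_0[/itex] factorization (Gaussian nascent deltas, an arbitrary test function, and arbitrary grid/width choices): the factored form, integrated against the polar area element [itex]r\,dr\,d\theta[/itex], should reproduce the test function's value at [itex](r_0,\theta_0)[/itex]:

```python
import numpy as np

def delta_eps(s, eps=1e-3):
    return np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

r0, th0 = 1.5, 0.8
f = lambda x, y: x * y + 1.0          # arbitrary smooth test function

r = np.linspace(r0 - 0.01, r0 + 0.01, 401)
th = np.linspace(th0 - 0.01, th0 + 0.01, 401)
R, TH = np.meshgrid(r, th, indexing="ij")
w = delta_eps(R - r0) * delta_eps(TH - th0) / r0   # (1/r0) δ(r-r0) δ(θ-θ0)
dA = (r[1] - r[0]) * (th[1] - th[0])
lhs = np.sum(f(R * np.cos(TH), R * np.sin(TH)) * w * R) * dA  # includes r dr dθ
rhs = f(r0 * np.cos(th0), r0 * np.sin(th0))
```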
  • #5
Ok, all this is well-known to me, but we may now be able to get to the gist of my question with more ease: Given the presence of u(x) in the delta function, the central question is how should your second equation above be modified, or, to put it crudely, how should the components of u be allocated among the separated deltas (let's ignore the scale factor for a moment): Is it δ(r + u_r - r' )δ(θ + u_θ / r - θ')? That doesn't look right. Geometry tells me that it should be

δ( sqrt[ ( r + u_r )^2 + u_θ^2 ] - r' )δ( θ + atan( u_θ / ( r + u_r ) ) - θ' )

but I can't produce a rigorous argument to justify it. Even this remains complicated owing to the r- and θ-dependences in both factors, but it could be solved iteratively.
  • #6
It will be more involved than that, as you must consider not only the components of u but also their dependence on the coordinates.
Let me think....

Let's suppose [itex]\vec{u}(r,\theta) = a(r,\theta)\hat{r} + b(r,\theta)\hat{\theta}[/itex]. Note I'm using the local orthonormal basis moving with the polar coordinates.

The delta function is [itex] \delta( \vec{r}+\vec{u}(\vec{r}) - \vec{r}')[/itex]. Note that it is in general not singular at [itex] \vec{r}=\vec{r}'[/itex] (if your case constrains [itex]\vec{u}[/itex] to be zero at [itex]\vec{r}'[/itex] then let me know as this makes a big difference!!!).
The contribution will occur where the argument is zero. Call that point [itex] \vec{r}_0[/itex].

Now I'll try to be a bit clever and use local linear coordinates about [itex]\vec{r}_0[/itex] tangential to the polar coordinate basis there. So let [itex]\hat{r}_0,\hat{\theta}_0[/itex] be the radial and tangential unit vectors at [itex]\vec{r}_0[/itex]. Define the differential (not infinitesimal!) coordinates:
[itex] dr = r-r_0,\quad d\theta= \theta-\theta_0[/itex]

So the following vector function of position coordinates is tangential to [itex] \vec{r}+\vec{u}(\vec{r}) - \vec{r}'[/itex] to first order at [itex]\vec{r}_0[/itex]:

[itex] d\vec{r}=dr\hat{r}_0 + r_0 d\theta\hat{\theta}_0 + [dr\partial_r a(r_0,\theta_0)+d\theta \partial_\theta a(r_0,\theta_0) ]\hat{r}_0 +[dr\partial_r b(r_0,\theta_0)+d\theta\partial_\theta b(r_0,\theta_0)]\hat{\theta}_0[/itex]

So we have, with combined like terms w.r.t. the differentials:
[tex]d\vec{r} = dr\cdot\left( (1+a_r)\hat{r}+b_r\hat{\theta} \right) + d\theta\cdot\left( a_\theta \hat{r}+(r_0+b_\theta)\hat{\theta}\right)[/tex]

Let's write this simply as [itex] d\vec{r} = dr\,\vec{u} + d\theta\,\vec{v}[/itex] (these [itex]\vec{u},\vec{v}[/itex] are new tangent vectors, not the displacement of post #1).
  • If the two vectors were orthonormal then the delta function would factor directly:
    [tex]\delta(d\vec{r}) = \delta(dr)\,\delta(d\theta)[/tex]
  • If they were orthogonal but not orthonormal then you can simply scale:
    [tex]\delta(d\vec{r}) = \frac{\delta(dr)}{|\vec{u}|}\frac{\delta(d\theta)}{|\vec{v}|}[/tex]
  • In general, for non-orthogonal vectors, one normalizes using the magnitude of the cross product:
    [tex]\delta(d\vec{r}) = \frac{\delta(dr)\,\delta(d\theta)}{\left|\vec{u}\times\vec{v}\right|}[/tex]
This "magnitude of the cross product" is more properly the appropriate Jacobian determinant when you take partials of your local (differential) coordinates.
Note my differentials here are simply tangential coordinates and not specifically the differential notation in the integrals.

I also typed this up rather late in one sitting so do proof my work before counting on it. But I think this will give you enough to work out your specific case.
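A small sketch of how that recipe might be carried out numerically (the displacement components [itex]a,b[/itex], the point [itex](r_0,\theta_0)[/itex], and the finite-difference step are all hypothetical placeholders, not the actual field): form the two tangent vectors multiplying [itex]dr[/itex] and [itex]d\theta[/itex], then take the magnitude of their cross product as the factorization constant:

```python
import numpy as np

# Hypothetical smooth displacement components a(r, θ), b(r, θ), chosen
# only to illustrate the recipe; substitute the actual field.
a = lambda r, th: 0.05 * r * np.cos(th)
b = lambda r, th: 0.03 * np.sin(th)

r0, th0 = 2.0, 0.6        # assumed zero point of the delta's argument
h = 1e-6                  # central finite-difference step

a_r  = (a(r0 + h, th0) - a(r0 - h, th0)) / (2 * h)
a_th = (a(r0, th0 + h) - a(r0, th0 - h)) / (2 * h)
b_r  = (b(r0 + h, th0) - b(r0 - h, th0)) / (2 * h)
b_th = (b(r0, th0 + h) - b(r0, th0 - h)) / (2 * h)

uvec = np.array([1 + a_r, b_r])               # coefficient vector of dr
vvec = np.array([a_th, r0 + b_th])            # coefficient vector of dθ
J = abs(uvec[0] * vvec[1] - uvec[1] * vvec[0])   # |u × v| in 2D
# then δ(d r-vector) factors as δ(dr) δ(dθ) / J at (r0, θ0)
```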
  • #7
Now it's gotten more interesting! I follow your argument and math 'til the three bullet points; there you lose me: I would argue the second bullet point to be the correct answer regardless of whether vectors u and v (in your notation) are or are not orthonormal (as it happens, the functions a(r,θ) and b(r,θ) are orthogonal with respect to integration over the polar angle). What do you see that I fail to see? Thank you for your interest and perseverance.
  • #8
Ok, I was guessing a bit and I may have got the reciprocal of the correct multiplicative factor. Here's me working it out for the general case... not elegantly, but thinking aloud I get the following:

Consider then the change of coordinates from [itex]x,y[/itex] to [itex]u,v[/itex] where [itex]\mathbf{r}=x\hat{i}+y\hat{j}= u(\hat{i}+a\hat{j}) + v(a\hat{i}+\hat{j})[/itex].
I'm changing from an orthonormal basis to an oblique basis in the simplest way I can think of here. You can solve for [itex]u,v[/itex] in terms of [itex] x,y[/itex] by inverting
[itex] x = u+av, \quad y=au+v[/itex]
Thence the differential area element is:
[itex]dx\,dy = J\,du\,dv,[/itex] where [itex] J=\left| \det \left(\begin{array}{cc} 1 & a \\ a & 1\end{array}\right)\right|= |1-a^2|[/itex] is the Jacobian.
Now the relationship between vector-argument delta functions must hold for an arbitrary function [itex]f[/itex] (and here I choose evaluation at [itex]\boldsymbol{0}[/itex] for simplicity):
[tex]f(\boldsymbol{0})=\int f(\mathbf{r})\delta(\mathbf{r})\,dx\,dy = \int f(\mathbf{r})\delta(\mathbf{r})\,J\,du\,dv[/tex]
Ok... now we know for orthonormal coordinates [itex]\delta(\mathbf{r}(x,y))=\delta(x)\delta(y)[/itex]

for non-orthonormal coordinates [itex]\delta(\mathbf{r}(u,v))= C\delta(u)\delta(v)[/itex] for some constant [itex]C[/itex].
[tex]f(\boldsymbol{0})= \int f(\mathbf{r}(u,v))\,C\,\delta(u)\delta(v)\,J\,du\,dv = CJ\,f(\mathbf{r}(0,0))[/tex]
So [itex]C = J^{-1} =\left| \frac{\partial(x,y)}{\partial(u,v)}\right|^{-1}[/itex]
You can view this Jacobian [itex]J[/itex] as the magnitude of the cross product of the two coordinate basis vectors associated with [itex]u[/itex] and [itex]v[/itex].
[itex]\mathbf{r} = u(\hat{i}+a\hat{j})+v(\hat{j}+a\hat{i}) = u \mathbf{r}_u + v\mathbf{r}_v[/itex]
these are the coordinate bases as they are the partials of the parametrized vector in terms of each coordinate.

With this I think the general rule will be:
For 2-vector [itex]\mathbf{r}[/itex] a function of parameters (coordinates) [itex]u,v[/itex] the delta function factors via:
[tex]\delta(\mathbf{r}) = \frac{1}{J}\,\delta(u-u_0)\,\delta(v-v_0) [/tex]
where [itex]u_0,v_0[/itex] are the values at which [itex]\mathbf{r}(u_0,v_0)=\boldsymbol{0}[/itex] and where [itex]J =\left| \mathbf{r}_u\times \mathbf{r}_v\right|[/itex], with [itex] \mathbf{r}_u = \left.\frac{\partial\mathbf{r}}{\partial u}\right|_{u_0,v_0}[/itex] and similarly for [itex]\mathbf{r}_v[/itex]. (Check with polar coordinates: [itex]\mathbf{r}_r=\hat{r}[/itex] and [itex]\mathbf{r}_\theta=r_0\hat{\theta}[/itex], so [itex]J=r_0[/itex], recovering the [itex]1/r_0[/itex] factor above.)
For 3-vectors it is:
[tex]\delta(\mathbf{r})=\frac{1}{J}\,\delta(u-u_0)\,\delta(v-v_0)\,\delta(w-w_0) [/tex]
with [itex]J =| \mathbf{r}_u\cdot(\mathbf{r}_v\times\mathbf{r}_w)|[/itex] the (absolute) scalar triple product.

This way one need not necessarily transform to some orthonormal representation. The triple product is also the magnitude of the 3-form [itex]\mathbf{r}_u\wedge\mathbf{r}_v\wedge\mathbf{r}_w[/itex], which is the volume element for the parallelepiped these vectors form and equates to the Jacobian [itex]\left|\frac{\partial(x,y,z)}{\partial(u,v,w)}\right|[/itex] for transforming from the oblique coordinates to an orthonormal basis.
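As a hedged numeric cross-check of which way the factor goes in the oblique example (nascent Gaussian deltas; the value of [itex]a[/itex], the grid, and the width are arbitrary): integrating [itex]\delta(x(u,v))\,\delta(y(u,v))[/itex] over [itex]du\,dv[/itex] should give [itex]1/J = 1/|1-a^2|[/itex], i.e. the delta picks up a factor [itex]1/J[/itex]:

```python
import numpy as np

def delta_eps(s, eps=1e-2):
    return np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

a = 0.5                                  # obliqueness parameter (arbitrary)
u = np.linspace(-0.2, 0.2, 1601)
v = np.linspace(-0.2, 0.2, 1601)
U, V = np.meshgrid(u, v, indexing="ij")
X, Y = U + a * V, a * U + V              # x = u + a v,  y = a u + v
dA = (u[1] - u[0]) * (v[1] - v[0])
integral = np.sum(delta_eps(X) * delta_eps(Y)) * dA
J = abs(1 - a**2)
# integral ≈ 1/J, consistent with δ(x)δ(y) = δ(u)δ(v)/J
```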
  • #9
Ok, I get your point. BTW, do you see an alternative way to express dr (in last Wednesday's reply) that avoids differentiation of a(r,θ) and b(r,θ) if u itself were a first-order (small) quantity, so that terms quadratic in u are negligible?
  • #10
Upon further reflection, I do not believe that oblique coordinates enter the problem at all. Rather, the local unit vectors [itex](\hat{r}_0,\hat{\theta}_0)[/itex] used to span the vector [itex] d\vec{r}[/itex] (as defined in your next-to-last reply) are not principal axes, but may be rotated by the appropriate orthogonal transformation to achieve the desired separation. To motivate the argument more coherently, consider the limit representation of the delta function
[tex]\delta(d\vec{r})=\displaystyle\lim_{\alpha\rightarrow 0}\frac{1}{2\pi\alpha^2}\exp\left[-\frac{d\vec{r}\cdot d\vec{r}}{2\alpha^2}\right][/tex]
[tex] d\vec{r} \cdot d\vec{r} =A_{11} (r - r_0 )^2 + A_{22}r_0^2(\theta - \theta_0)^2 + 2A_{12}(r - r_0)r_0(\theta - \theta_0) [/tex]
where, e.g.,
[tex] A_{11} = ( 1 + \partial_r a)^2 + (\partial_r b)^2[/tex]
etc.. Moreover, I introduce the symbol [itex]\Delta\vec{r} \equiv ( r - r_0, r_0(\theta-\theta_0))[/itex] and denote the quadratic form [itex] d\vec{r} \cdot d\vec{r}[/itex] as [itex]A(\Delta\vec{r},\Delta\vec{r})[/itex]. The goal is to rotate the basis [itex](\hat{r}_0,\hat{\theta}_0)[/itex] to a new one [itex](\hat{u},\hat{v})[/itex] so that
[tex]A(\Delta\vec{r},\Delta\vec{r}) = \lambda_+ ( u - u_0 )^2 + \lambda_- ( v - v_0 )^2[/tex]
[itex]A[/itex] is real-symmetric and positive definite; it has positive, real eigenvalues so that the above is always possible. Hence,
[tex]\frac{1}{2\pi\alpha^2}\exp\left[-\frac{\lambda_+ ( u - u_0 )^2 + \lambda_- ( v - v_0 )^2}{2\alpha^2}\right] \rightarrow \frac{\delta(u-u_0)\delta(v-v_0)}{\sqrt{\lambda_+\lambda_-}}[/tex]
Since the change of basis is an orthogonal transformation, the Jacobian is unity and the result for the integral I originally posted becomes

[tex]\int d\vec{x}\,\rho(r,\theta)\,\delta(\vec{x}+\vec{u}(\vec{x})-\vec{x}') = \frac{\rho(r_0,\theta_0)}{\sqrt{\det A(\vec{r}_0)}}[/tex]

a result that is easily extended to three or higher dimensions.
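For what it's worth, the [itex]\sqrt{\lambda_+\lambda_-}[/itex] factor here is consistent with the cross-product Jacobian of the earlier replies: [itex]A[/itex] is the Gram matrix of the two tangent vectors, so [itex]\det A = |\vec{u}\times\vec{v}|^2[/itex]. A quick numeric sketch (random stand-in vectors, nothing from the actual problem):

```python
import numpy as np

rng = np.random.default_rng(0)
uvec, vvec = rng.normal(size=2), rng.normal(size=2)   # stand-ins for the tangent vectors
A = np.array([[uvec @ uvec, uvec @ vvec],
              [uvec @ vvec, vvec @ vvec]])            # Gram matrix of (u, v)
lam = np.linalg.eigvalsh(A)                           # eigenvalues λ-, λ+ (ascending)
cross = abs(uvec[0] * vvec[1] - uvec[1] * vvec[0])    # |u × v| in 2D
# sqrt(λ+ λ-) = sqrt(det A) = |u × v|
```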
  • #12
Your third reply gave me a better understanding of the Dirac delta function and got me going in the right direction. Thanks.