# There's a distribution Tf for each function f

1. Dec 9, 2008

### Fredrik

Staff Emeritus
I'm reading the Wikipedia article on distributions. They say that there's a distribution $T_f$ for each function f, defined by

$$\langle T_f,\phi\rangle=\int f\phi dx$$

and that there's a distribution $T_\mu$ for each Radon measure $\mu$, defined by

$$\langle T_\mu,\phi\rangle=\int \phi\, d\mu$$

They define the delta distribution by

$$\langle\delta,\phi\rangle=\phi(0)$$

I feel that there's one thing missing in all of this. I don't see an explanation of the expression

$$\int \delta(x)f(x)dx$$

What does this integral mean? Is there a definition of the integral of a distribution that applies here, or should this just be interpreted as a "code" representing the expression $\langle\delta,\phi\rangle$? (This would mean that the expression has nothing to do with integrals at all, but is written as if it were an integral just to make it look like the expression involving f above).

Also, can someone tell me why you can define a topology on the test function space by defining limits of test functions? (If you answer by referencing a definition or a theorem in a book, make sure it's either "Principles of mathematical analysis" by Walter Rudin or "Foundations of modern analysis" by Avner Friedman, because those are the only ones I've got.)

2. Dec 11, 2008

### crazyjimbo

Re: Distributions

I'm sure you're familiar with the definitions but I'll repeat them for clarity.

If we're working on the real line then a test function is an infinitely differentiable function with compact support. The space of all such test functions is denoted $$D(\mathbb{R})$$.

A distribution is then a linear map $$T : D(\mathbb{R}) \rightarrow \mathbb{R}$$ that is continuous in the following sense: if the $$\phi_n$$ all have support in a common compact set and $$\phi_n^{(k)} \rightarrow \phi^{(k)}$$ uniformly for every $$k \geq 0$$, then $$T(\phi_n) \rightarrow T(\phi)$$. For whatever reason, $$T(\phi)$$ is normally denoted by $$\langle T, \phi \rangle$$.

For any locally integrable function $$f : \mathbb{R} \rightarrow \mathbb{R}$$ we can then define a distribution $$T_f$$ given by $$\langle T_f, \phi \rangle = \int f \phi \, dx$$. The delta distribution however does not come from such a function, and instead $$\langle \delta, \phi \rangle$$ is defined as $$\phi(0)$$. So the expression $$\int \delta (x) \phi (x) dx$$ should never come up.
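The pairing $\langle T_f, \phi \rangle = \int f \phi \, dx$ is easy to play with numerically. Here is a minimal sketch (not from the thread; the bump function, the choice $f(x) = x^2$, and the Riemann sum are my own illustrative choices), showing that $T_f$ is linear in the test function:

```python
import math

def phi(x):
    """A standard test function: a smooth bump supported on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def T(f, test, a=-1.0, b=1.0, n=100_000):
    """<T_f, test> = integral of f * test, by a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * test(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x
lhs = T(f, lambda x: 2.0 * phi(x))  # <T_f, 2 phi>
rhs = 2.0 * T(f, phi)               # 2 <T_f, phi>
print(abs(lhs - rhs) < 1e-9)        # linearity in the test function
```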

3. Dec 11, 2008

### tiny-tim

Welcome to PF!

Hi crazyjimbo! Welcome to PF!

We need members who can help write pages in the new PF Library.

Would you like to start an entry on "distribution"?

4. Dec 11, 2008

### Avodyne

Re: Distributions

crazyjimbo is of course mathematically correct, but let me offer a physicist's perspective, with the caveat that these are somewhat hazy, nonrigorous notions that the mathematicians have souped up into the rigorous theory of distributions.

The "delta function" can be thought of as a function that is sharply peaked at x=0, with total area 1 underneath it. For example,
$$\pi^{-1/2}\varepsilon^{-1}\exp(-x^2/\varepsilon^2)$$
with $\varepsilon$ "small". If you integrate this function times f(x), you will get something that is very close to f(0) (provided that f(x) is smooth on the scale set by $\varepsilon$). The integral notation is then extremely useful, and comes up all the time in situations such as complete sets of states in QM, etc.
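This nascent-delta picture can be checked directly; the following is my own numerical sketch (the choices of $f$ and $\varepsilon$ are arbitrary), smearing the narrow Gaussian against a smooth function:

```python
import math

def delta_eps(x, eps):
    """pi^(-1/2) eps^(-1) exp(-x^2/eps^2): area 1, sharply peaked at 0."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def smeared(f, eps, a=-1.0, b=1.0, n=200_000):
    """Midpoint Riemann sum of delta_eps * f over [a, b]."""
    h = (b - a) / n
    return sum(delta_eps(a + (i + 0.5) * h, eps) * f(a + (i + 0.5) * h)
               for i in range(n)) * h

f = lambda x: math.cos(x) + x  # any function smooth on the scale of eps
for eps in (0.1, 0.01, 0.001):
    print(smeared(f, eps))  # tends to f(0) = 1 as eps shrinks
```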

Last edited: Dec 12, 2008
5. Dec 11, 2008

### HallsofIvy

Staff Emeritus
Re: Distributions

Notice that the article said that for each function, f there is a corresponding distribution. It did NOT say that for each distribution there is a corresponding function! In that sense, the set of distributions (generalized functions) includes the set of functions as a proper subset.

Last edited: Dec 12, 2008
6. Dec 11, 2008

### strangerep

Re: Distributions

The expression $\langle\delta,\phi\rangle$ is a "dual pairing", a concept
which becomes ever more deeply important as you get into advanced QFT.

A function f can be thought of as a vector in an infinite-dimensional
space, indexed by (say) a real variable "x" instead of writing (say)
$f_i$ which you might see for (components of) elements of
finite-dimensional vector spaces.

A physically important concept in vector spaces is the notion of "dual
space". E.g., suppose $v$ is an element of a vector space V with
components $v_i$ with respect to some basis. Then the set of all
possible ways of linearly mapping elements of V to scalars is called
the "dual space of V", denoted $V^*$. For the familiar
boring finite-dimensional vector spaces, the distinction between
$V$ and $V^*$ corresponds to the distinction
between lower and upper indices on vectors, and the two
spaces are in fact isomorphic. When one writes (eg) $w^k v_k$ (scalar product),
this is actually a special case of a "dual pairing" between elements
$w\in V^*$ and $v\in V$. Think of it as w acting on v to
produce a scalar. (Remember that $w$ is really a mapping.)

Now consider that a space of functions of a real variable is really an
infinite-dimensional vector space. You can add two different functions
f and g together to get another function h = f+g, defined by

$$h(x) ~:=~ (f + g)(x) ~:=~ f(x) + g(x)$$

which is precisely analogous to the finite-dimensional case where
we might add two vectors u,v component-wise to get another
vector w = u+v, defined component-wise via:

$$w_i ~:=~ (u + v)_i ~:=~ u_i + v_i$$

Hereafter, I'll write $f(x)$ as $f_x$ to
emphasize this analogy and I'll denote by "F" the inf-dim vector space
of which f is one member.

Now, just as we can define a dual space over a finite-dim vector space,
the same notion makes sense over infinite-dimensional vector spaces,
so $F^*$ is the space of all the linear mappings from
F to scalars.

But what does the dual-pairing look like in the inf-dim case? For
finite-dim vector spaces we just sum the indices, right? So the analogy
for (many) inf-dim spaces is to integrate over the (now real) index.
However, such integration might not be well-defined for every function
space we might imagine, so mathematicians prefer to omit the details of
the dual pairing if not necessary, and just write things like (eg)
$\langle\beta,f\rangle$, where
$\beta\in F^*, f\in F$. Explicitly, this is
$\beta^x f_x$, which in this case really means:

$$\int \beta(x) f(x) dx$$

Many elements of $F^*$ typically arise from elements of F.
That's what the $T_f$ distributions are in the earlier
posts, and it's essentially how Avodyne motivated the delta
distribution -- by considering it as a limit of ordinary functions.
However, in the inf-dim case we often find that $F \subset F^*$,
(unlike the finite-dim case where the two spaces
are isomorphic). Thus, there are elements of the dual space which do
not arise from elements of F, in general. (That's what HallsofIvy pointed out.)
Nevertheless, one can still speak of the dual pairing $\langle\beta,f\rangle$
(where now $\beta$ does not arise from any element of F).

I prefer to consider the delta distribution $\delta(x-y)$
as $\delta(x,y)$ (keeping the indices distinct), or
as $\delta_x^{~y}$. Now the role of the delta distribution
as an identity mapping becomes obvious:

$$\delta_x^{~y} f_y ~:=~ \int \delta(x,y) f(y) dy ~:=~ f(x) ~=:~ f_x$$

So we can think of $\delta\in F\otimes F^*$ or as
$\delta\in Lin(F,F)$ .

There's a distinction of course between $\delta_x^{~y}$
and the finite-dim $\delta_i^{~j}$ in that
$\delta_{x+a}^{~~~y+a} = \delta_x^{~y}$, for all x,y but
this reflects translation invariance of the integral measure
used in the dual pairing in this case.

The moral of this story is that it's more general and powerful
to think of distributions as linear operators acting on
inf-dim vector spaces (function spaces).
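The finite-dimensional analogy above can be made completely concrete: the Kronecker delta $\delta_i^{~j}$ is just the identity matrix, and "summing over the index" reproduces the vector, exactly as $\delta_x^{~y}$ reproduces $f$ under the integral. A toy sketch (my own, with an arbitrary vector):

```python
# Kronecker delta as the identity: delta_i^j f_j = f_i, the discrete
# counterpart of the pairing  delta_x^y f_y = f_x  described above.
n = 4
delta = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
f = [2.0, -1.0, 0.5, 3.0]

# "integrate" (sum) over the index j
result = [sum(delta[i][j] * f[j] for j in range(n)) for i in range(n)]
print(result == f)  # the pairing over j reproduces f
```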

Actually, that question is not well-defined. You need a topology first
before you can define the notion of "limit". Maybe try to re-ask the
question after thinking about the above.

7. Dec 11, 2008

### Fredrik

Staff Emeritus
Re: Distributions

Thank you. I suspected that, but I'm still a bit surprised because I've been seeing that expression "everywhere" in my physics books for so many years. Every time there was a comment about it, it always said roughly what Avodyne just said: that the delta appearing under the integral sign isn't really a function but a distribution. (None of my books defines "distribution".)

What do you mean by $\phi_n^{(k)}$? That part wasn't included on the Wikipedia page. Edit: Ah, I get it. It's the partial derivatives. I don't use that notation myself. I'd normally write $\phi_{n,k}$.

Last edited: Dec 12, 2008
8. Dec 11, 2008

### Fredrik

Staff Emeritus
Re: Distributions

Thanks strangerep. This time I knew just about everything you said already, but it's still interesting to see how someone else thinks about these things. The comment about how the dual space can have members that don't correspond to members of the original vector space helped me see one thing more clearly even though I knew this fact already.

I thought about it before I asked, and I asked because I have only seen this done your way. (Define a topology first, and then use the topology to define limits). But the Wikipedia page (link) says that "It can be given a topology by defining the limit of a sequence of elements of D(U)". The word "it" refers to the set D(U) of test functions, with the obvious vector space structure.

1. In addition to defining a vector space structure on D(U), we also define an inner product by $\langle\phi,\psi\rangle=\int\phi\psi d\mu$ and use the associated metric to define limits in the way that's standard for metric spaces. To do this we need a measure on U, but if U is the real numbers we can use the Lebesgue measure.

2. Maybe it also makes sense to define the limits first and then take the topology to be the coarsest topology in which the limits we have defined as convergent are convergent according to the standard definition.

But I see now that 2 doesn't work in this case. Their definition of a convergent sequence of test functions uses the notion of uniform convergence of a sequence of (partial derivatives of) test functions. That seems circular to me.

9. Dec 12, 2008

### strangerep

Re: Distributions

Oh, I see now... It uses the notion of "uniform convergence" of the test functions and
their derivatives to define the limits, but the notion of uniform convergence relies on
the standard topology of the range of the functions (e.g., R or C). The desired topology
is then the final (finest) topology such that the functions and derivatives are continuous
under that topology, so it's a kind of weak topology (I think).

10. Dec 12, 2008

### Fredrik

Staff Emeritus
Re: Distributions

OK, I feel that I understand the definition well enough now. I'm still uncertain about some details regarding the topology, but I can live with that.

It's funny that expressions that "should never come up" (I'm quoting #2) come up all the time in physics books. I have a follow-up question about that. How do mathematicians express and prove this identity?

$$\delta(x^2-a^2)=\frac{1}{2|x|}\Big(\delta(x-|a|)+\delta(x+|a|)\Big)$$

$$\int_0^\infty\delta(x^2-a^2)f(x)dx=\begin{bmatrix}y=x^2, x=\sqrt y\\dx=\frac{1}{2\sqrt y}dy\end{bmatrix}=\int_0^\infty\delta(y-a^2)f(\sqrt y)\frac{1}{2\sqrt y}dy=\frac{f(\sqrt{a^2})}{2\sqrt{a^2}}=\frac{f(|a|)}{2|a|}$$

$$\int_{-\infty}^0\delta(x^2-a^2)f(x)dx=\begin{bmatrix}y=-x\\dy=-dx\end{bmatrix}=-\int_{\infty}^0\delta(y^2-a^2)f(-y)dy=\int_0^\infty\delta(y^2-a^2)f(-y)dy=\frac{f(-|a|)}{2|a|}$$

$$\int_{-\infty}^\infty\delta(x^2-a^2)f(x)dx=\frac{f(|a|)+f(-|a|)}{2|a|}=\int_{-\infty}^\infty\frac{1}{2|x|}\Big(\delta(x-|a|)+\delta(x+|a|)\Big)f(x)dx$$
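One way to check the identity without any theory is to replace $\delta$ by a narrow Gaussian and integrate numerically. A sketch (my own; the choices of $a$, $f$, and the width $\varepsilon$ are arbitrary):

```python
import math

def delta_eps(x, eps=1e-3):
    """Narrow Gaussian standing in for the delta."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def integrate(h, lo=-3.0, hi=3.0, n=600_000):
    step = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * step) for i in range(n)) * step

a = 1.5                                       # arbitrary nonzero a
f = lambda x: math.exp(-x * x) * (x + 2.0)    # arbitrary smooth f

lhs = integrate(lambda x: delta_eps(x * x - a * a) * f(x))
rhs = (f(a) + f(-a)) / (2 * abs(a))           # the claimed right-hand side
print(abs(lhs - rhs) < 1e-3)
```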

Last edited: Dec 13, 2008
11. Dec 13, 2008

### mathwonk

Re: Distributions

why is this in the linear algebra section?

12. Dec 13, 2008

### Fredrik

Staff Emeritus
Re: Distributions

I'm not sure why I put it here. I suppose "calculus & analysis" would have been the right place to put it. I asked (by using the "report" button) the moderators to move it there when no one had answered the first day, but nothing happened.

13. Dec 13, 2008

### Hurkyl

Staff Emeritus
Re: Distributions

This is right. (At least in the formulation I'm familiar with)

14. Dec 13, 2008

### Hurkyl

Staff Emeritus
Re: Distributions

Once you have definitions, the way to proceed will probably be obvious. The big problem is that if you haven't defined what an expression like $\delta(x^2-a^2)$ might mean, you certainly can't prove any theorems about it! The 'physicist's derivation' you gave is a motivation: we would like to choose a definition so that that calculation works out. And if you don't need a general theory of derivatives and composition, then you could just take identities like that as a definition, and then do a check to make sure they have the properties you want. (And take care not to use any properties you haven't checked.)

Now, if you wanted a general theory of composition, it would probably go something like this. (disclaimer: having never seen it before, I'm working this out from scratch, so it may or may not bear resemblance to how people actually do things)

The final calculation is at the end, after a separator.

First, note that duality gives us, for any linear function $f:V \to W$, a dual function $f^*:W^* \to V^*$. In inner-product-like notation, this is defined by

$$\langle f^*(\omega), v \rangle := \langle \omega, f(v) \rangle$$

This has a similarly simple expression in functional notation, but it will cause some notational confusion with the theory of composition I want to derive, so I won't state it.

Suppose that V and W are spaces of functions on X and Y. If we have a good map $f:X \to Y$, we get another kind of dual mapping $f^* : W \to V$ defined by composition: $f^*(w)(x) = w(f(x))$. And, of course, we get the dual dual mapping $f^{**} : V^* \to W^*$, which I'm going to rename as $f_*$.

$f^*$ here is sometimes called a 'pullback', and $f_*$ a 'pushforward'.

Suppose we have an inner product on a vector space V. For any v in V, the inner product lets us define the 'transpose' of v to be an element of $V^*$ as follows, written in inner-product-like notation on the left, and actual inner-product notation on the right:
$$\langle v^T, w \rangle := \langle v, w \rangle$$. Henceforth, I will not explicitly write the transpose operation.

Note I've done nothing new or specific to this situation -- the above is just basic operations in the arithmetic of functions.

Now, we can already do some calculations! Let's fix a function $f : \mathbb{R} \to \mathbb{R}$, use the standard inner product, and let $\phi$ be a test function. Then, we have $f^*(\phi)(x) = \phi(f(x))$. What about the pushforward map?

Well, let's suppose that f is invertible and increasing, with inverse g. For a test function $\psi$, we can make the following calculation:

$$\langle f_*(\psi), \phi \rangle = \langle \psi, f^*(\phi) \rangle = \int_{-\infty}^{+\infty} \psi(x) \phi(f(x)) \, dx = \int_{-\infty}^{+\infty} \psi(g(y)) \phi(y) g'(y) \, dy = \langle g' g^*(\psi), \phi \rangle$$

thus giving us $f_*(\psi) = g' g^*(\psi)$. (Again, recall that I'm suppressing the transpose operation)
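This pushforward formula can be verified numerically for a concrete invertible increasing map. A sketch (my own; $f(x) = x^3 + x$ is an arbitrary choice, and the Gaussians are stand-ins for test functions, not compactly supported but decaying fast enough for the quadrature):

```python
import math

f = lambda x: x ** 3 + x          # strictly increasing, hence invertible
F_LO, F_HI = -3.0, 3.0            # we only need the inverse on f([-3, 3])

def g(y):
    """Inverse of f by bisection (valid for y in [f(-3), f(3)])."""
    lo, hi = F_LO, F_HI
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gprime(y, h=1e-5):
    """Numerical derivative of the inverse."""
    return (g(y + h) - g(y - h)) / (2 * h)

def integrate(fn, a, b, n=20_000):
    step = (b - a) / n
    return sum(fn(a + (i + 0.5) * step) for i in range(n)) * step

psi = lambda x: math.exp(-x * x)
phi = lambda x: math.exp(-x * x / 4.0)

lhs = integrate(lambda x: psi(x) * phi(f(x)), -3.0, 3.0)  # <psi, f^*(phi)>
rhs = integrate(lambda y: gprime(y) * psi(g(y)) * phi(y),
                f(-3.0), f(3.0))                          # <g' g^*(psi), phi>
print(abs(lhs - rhs) < 1e-4)
```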

We can also vary the calculation slightly to arrive at another interesting result:

$$\langle \psi \circ f, \phi \rangle = \langle f^* (\psi), \phi \rangle = \langle f' f^* (\psi), \phi / f' \rangle = \langle g_* \psi, \phi / f' \rangle = \langle g_* (\psi) / f', \phi \rangle$$

and so I'm inspired to make the following definitions:

Definition: If f is a good function, and $\omega$ is a distribution, then define $f \omega$ by $\langle f \omega, \phi \rangle = \langle \omega, f \phi \rangle$

Definition: If f is a good, increasing function with inverse g, then for any distribution $\omega$, define $\omega \circ f = g_*(\omega) / f'$

The above still works for multivariable test functions and distributions; the appropriate condition on f is that it's invertible with positive Jacobian. At this point, I'm going to assume we have also defined partial integration of multivariable distributions. (i.e. evaluate a 2-variable distribution at a 1-variable test function to produce a 1-variable distribution. This is extremely similar to tensor contraction) I will also assume we've worked out the properties of composition as defined above.

So now, my magic trick to define arbitrary composition distributions is to convert to the invertible case by adding a variable, by virtue of the fact that the following is invertible:

u = x
v = y + f(x)

with Jacobian 1.

Now, if $\omega$ is a distribution, then it can also be regarded as a two-variable distribution by adding a dummy variable. Heuristically speaking, applying the above transformation would give

$$\iint \omega(v) \phi(u) \psi(v - f(u)) \, du \, dv = \iint \omega(y + f(x)) \phi(x) \psi(y) \, dx \, dy$$

Note that this is well defined, because we simply composed a two-parameter distribution with an invertible function! Now, if partial integration with respect to x gives us an honest-to-goodness test function, then we have

$$\iint \omega(y + f(x)) \phi(x) \psi(y) \, dx \, dy = \int g(y) \psi(y) \, dy$$

And so we can make the following definition

Definition: Let $\omega$ be a distribution, $f$ a good function, $\phi$ a test function. Suppose there is a good function g such that, for every test function $\psi$, we have the identity $\langle \omega(y), \phi(x) \psi(y - f(x)) \rangle = \langle g, \psi \rangle$ (where the first inner product is over both variables). Then we define $\langle \omega \circ f, \phi \rangle = g(0)$.

--------------------------------------------------------------------------------------

Now, let's compute
$$\iint \delta(v) \phi(u) \psi(v - u^2 + a^2) \, du \, dv = \int \phi(u) \psi(a^2 - u^2) \, du = \int_{-\infty}^{a^2} \frac{\phi(\sqrt{a^2 - x}) + \phi(-\sqrt{a^2 - x})}{2 \sqrt{a^2 - x}} \psi(x) \, dx$$

And so we have (assuming the integrand is 'good'):
$$\langle \delta(x^2 - a^2), \phi(x) \rangle = \frac{\phi(|a|) + \phi(-|a|)}{2|a|}$$
when a > 0, and finally
$$\delta(x^2 - a^2) = \frac{1}{2|a|}\left( \delta(x - a) + \delta(x + a) \right)$$

Note that $\delta(x^2 - a^2)$ is undefined for a = 0. More interestingly, you can let a be a variable rather than a constant (or maybe a 'variable constant'), and now this expression is distributional in a.

Last edited: Dec 13, 2008
16. Dec 14, 2008

### Fredrik

Staff Emeritus
Re: Distributions

That was very instructive. I found the notation a bit confusing at times, but I was able to understand it. I was also able to use your ideas to find a fairly simple way to define $\delta(f(x))$. We defined $\delta$ by $\langle\delta,\phi\rangle=\phi(0)$, inspired by the expression $\int \delta(x)\phi(x)dx$. We would like to find a way to define a distribution $\delta_f$ so that $\langle\delta_f,\phi\rangle$ corresponds to the expression $\int \delta(f(x))\phi(x)dx$. We start by making the change of variables $y=f(x)$ in that "integral":

$$\int \delta(f(x))\phi(x)dx=\int\delta(y)\phi(f^{-1}(y))(f^{-1})'(y)dy=\phi(f^{-1}(0))(f^{-1})'(0)$$

So (at least when f is continuous and increasing) we can define $\delta_f$ by

$$\langle\delta_f,\phi\rangle=\phi(f^{-1}(0))(f^{-1})'(0)=\langle\delta,(\phi\circ f^{-1})(f^{-1})'\rangle$$

Unfortunately, things get weird when f has lots of zeroes. Here's one generalization of the definition above: If there's no real number r such that every neighborhood of r contains infinitely many members of $f^{-1}(0)$, we can define $\delta_f$ by

$$\langle\delta_f,\phi\rangle=\sum_k \operatorname{sign}(f'(x_k))\,\phi(f_k^{-1}(0))\,(f_k^{-1})'(0)$$

where the $x_k$ are the members of $f^{-1}(0)$ and each $f_k$ is the restriction of $f$ to a small interval containing $x_k$ on which it is invertible.
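Since $\phi(f_k^{-1}(0)) = \phi(x_k)$ and $\operatorname{sign}(f'(x_k))\,(f_k^{-1})'(0) = 1/|f'(x_k)|$, each term of the sum is $\phi(x_k)/|f'(x_k)|$, which can be checked numerically against a nascent delta. A sketch (my own; $f(x) = (x-1)(x+2)$ is an arbitrary choice with two simple zeros):

```python
import math

def delta_eps(x, eps=1e-3):
    """Narrow Gaussian standing in for the delta."""
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def integrate(h, lo=-4.0, hi=3.0, n=700_000):
    step = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * step) for i in range(n)) * step

f = lambda x: (x - 1.0) * (x + 2.0)   # zeros at x = 1 and x = -2
fprime = lambda x: 2.0 * x + 1.0
phi = lambda x: math.exp(-x * x / 8.0)

smeared = integrate(lambda x: delta_eps(f(x)) * phi(x))
exact = sum(phi(xk) / abs(fprime(xk)) for xk in (1.0, -2.0))
print(abs(smeared - exact) < 1e-3)
```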

Last edited: Dec 14, 2008
17. Dec 14, 2008

### Hurkyl

Staff Emeritus
Re: Distributions

If all you care about is delta, you get more wiggle room -- e.g. I think you made essential use of the fact that it has exactly one singularity (which is thus isolated).

I was trying to work out a similar development, but using operations on measures to capture the tricks one wants to do with the integral, and try in that way to avoid any irregularities involved because we're really working with distributions rather than functions. I think I now finally see how it's going to work out!

First, the setup.

Let f : X --> Y be a measurable function. If we have a measure $\mu$ on X, then we have a pushforward measure $f_* \mu$ on Y defined by

$$(f_* \mu)(E) = \mu(f^{-1} (E))$$

for any measurable subset E of Y. (Here, $f^{-1}$ is the inverse image operation on sets -- we don't need an actual inverse for f.) From Royden's real analysis, in the section on mappings of measurable spaces, we have the theorem

$$\int_Y g \, df_* \mu = \int_X g \circ f \, d\mu$$

so this is how change-of-variable is expressed measure theoretically.
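For instance (my own example, not from the thread): take $X = [1,2]$ with Lebesgue measure $\mu$ and $f(x) = x^2$, so $f_*\mu$ lives on $Y = [1,4]$ with density $1/(2\sqrt{y})$, and the two sides of the theorem agree:

```python
import math

def integrate(h, a, b, n=100_000):
    """Midpoint Riemann sum of h over [a, b]."""
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

g = lambda y: math.cos(y)

# Right side of the theorem: integral over X of g∘f against Lebesgue measure.
rhs = integrate(lambda x: g(x * x), 1.0, 2.0)
# Left side: integral over Y against the pushforward, density 1/(2 sqrt(y)).
lhs = integrate(lambda y: g(y) / (2.0 * math.sqrt(y)), 1.0, 4.0)
print(abs(lhs - rhs) < 1e-8)
```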

Now, suppose that $\mu, \nu$ are measures on a space X with the property that $\mu(E) = 0$ implies $\nu(E) = 0$, then there exists a measurable function called the Radon-Nikodym derivative with the property that

$$\nu(E) = \int_E \left[ \frac{d\nu}{d\mu} \right] d\mu$$

this derivative is essentially unique; any two Radon-Nikodym derivatives will disagree on a set of $\mu$-measure zero. This derivative satisfies many of the 'normal' derivative properties, and also for any nonnegative measurable function f,

$$\int f \, d\nu = \int f \left[ \frac{d\nu}{d\mu} \right] d\mu$$

We can combine the two to fully express change-of-variable for ordinary functions in the 'usual' way. If $dx$ denotes the usual Lebesgue measure, and $df^{-1}(x)$ its pullback, as defined above, then

$$\int g(f(x)) \, dx = \int g(x) \, df^{-1}(x) = \int g(x) \left[ \frac{df^{-1}(x)}{dx} \right] \, dx$$

By using the pushforward measure, we have avoided the need to decompose the integral into several parts so that we can use invertible change-of-variable. Not only that, but it covers a lot more badly behaved f's!

As an example, if $f(x) = x^2 - a^2$, then

$$\left[ \frac{df^{-1}(x)}{dx} \right] = \begin{cases} \frac{1}{\sqrt{x + a^2}} & x > -a^2 \\ 0 & x \leq -a^2 \end{cases}$$

Note that this accounts for most points having two preimages -- the actual value of the derivative is exactly what you would get by decomposing into two integrals, applying invertible change-of-variable, and recombining. In fact, that's how I computed it!
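A quick numerical check of this derivative (a sketch; the choice of $g$ and the integration cutoffs are my own, picked so the tails are negligible):

```python
import math

a = 1.0
g = lambda y: math.exp(-y * y)   # arbitrary rapidly decaying g

def integrate(h, lo, hi, n=400_000):
    step = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * step) for i in range(n)) * step

# Left: integral of g(x^2 - a^2) dx over the line (truncated where g vanishes).
lhs = integrate(lambda x: g(x * x - a * a), -6.0, 6.0)
# Right: integral of g against the density 1/sqrt(y + a^2) for y > -a^2.
rhs = integrate(lambda y: g(y) / math.sqrt(y + a * a), -a * a, 35.0)
print(abs(lhs - rhs) < 5e-3)   # modest tolerance: the density is singular at -a^2
```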

This gets more complicated if the left hand side isn't purely a composition. Let $d\mu = \phi dx$. Then, we would have

$$\int g(f(x)) \phi(x) \, dx = \int g \circ f \, d\mu = \int g df_*\mu = \int g(x) \left[ \frac{df_*\mu}{dx} \right](x) \, dx$$

This is awkward because we have absorbed the terms $\phi \circ f^{-1}$ into the Radon-Nikodym derivative. But I suppose they would have been awkward anyways, so it hasn't really become worse.

So how would we compute this derivative? The same way we always would! For example, if $h(x) = x^2 - a^2$, I compute

$$\int f(x^2 - a^2) g(x) \, dx = \int_{-a^2}^{+\infty} f(x) \frac{g(\sqrt{x + a^2}) + g(-\sqrt{x + a^2})}{2 \sqrt{x + a^2}} \, dx$$

and so we have

$$\left[ \frac{dh_*\mu}{dx} \right] = \begin{cases} \frac{g(\sqrt{x + a^2}) + g(-\sqrt{x + a^2})}{2 \sqrt{x + a^2}} & x > -a^2 \\ 0 & x \leq -a^2 \end{cases}$$

All of the above sets us up very nicely for distributions. It's clear that if f is a map and $\varphi$ is a test function, we want to define

$$d\mu = \varphi \, dx$$

$$\langle \omega \circ f , \varphi \rangle = \int \omega(x) \left[ \frac{df_*\mu}{dx} \right](x) \, dx$$

But how do we actually compute it? Simple! We do the calculation exactly as we would if $\omega$ were an honest-to-goodness function, with the only caveat that we're not allowed to actually evaluate an integral until we've put it back into the correct form of an integral over all of space. The reason? The method for computing the derivative itself would be done by putting an arbitrary nonnegative function where $\omega$ is, and computing the integral would be done by replacing that arbitrary function with $\omega$.

So the only remaining problem is what happens if $[ df_*\mu / dx]$ isn't a test function. (which it isn't in the running example, since it's discontinuous at -a^2) I suppose you just need to define something with limits to allow you to compute the inner product of a distribution with something that isn't a test function.

Last edited: Dec 15, 2008
18. Dec 14, 2008

### strangerep

Re: Distributions

I notice it's now been moved to Calculus & Analysis, but this makes me wonder
where discussions on Functional Analysis should go? It's algebra on
infinite-dimensional linear spaces, hence has a foot in both camps.
A question for the moderators, perhaps?

19. Dec 15, 2008

### Fredrik

Staff Emeritus
Re: Distributions

Hurkyl, that looks pretty cool. I'd like to ask a question before I make an effort to try to really understand this.

Did you accidentally swap the measures here? The integral on the left is over Y, but the pullback measure is a measure on X. If the theorem is

$$\int_Y g \, d\mu = \int_X g \circ f \, df^*\mu$$

instead, it's kind of intuitive: If you pull back both the measure and the function, you get the same result.

Would you like to change this too? (It seems to have the same problem).

I've been wondering the same thing. I had to think about it when I was going to start this thread, and I felt that none of the forums was right.

20. Dec 15, 2008

### Hurkyl

Staff Emeritus
Re: Distributions

Grar, I thought I looked over that carefully. For a mapping $\varphi$ (not a test function), Royden describes the operations on measures in terms of what he labels $\Phi$, defined by $\Phi(E) = \varphi^{-1}(E)$, and defines the measure $d\Phi^*\mu$. I had wanted to omit $\Phi$, but I simply got the initial definition of that measure wrong. I've corrected it in the above.

Last edited: Dec 15, 2008