Distributions (generalised functions) basics

  • #1
Cathr
I started studying distribution theory and I am struggling with the understanding of some basic concepts. I would hugely appreciate any help, made as simple as possible, because by now I'm only familiar with the formalism, but not all the meaning behind.

The concepts I am struggling with are the following:

1. As a distribution, the Dirac delta function is defined by ∫ δ(x) φ(x) dx = δ(φ) = φ(0)
There are several things I do not understand here:
1.1. What is the meaning of φ(0)? There is a wide range of bump functions, so the value of φ at zero could vary for each test function, so why do we write that it is equal to the generalized delta function?
1.2. Can we write δ(x) outside the integral? Won't it have the same meaning as δ(φ)?

2. We have (f(0) φ(0))' - basically we want to differentiate the product of two functions.
What we obtain is f(0)'φ(0) + f(0)φ(0)'. The first term equals zero and we are left with f(0) multiplied by the derivative φ(0)'. But by definition, the derivative of the delta function is MINUS the derivative of φ at zero. So the first result, by the classical differentiation, is wrong. I don't understand why...

3. This one gives me the most headaches. The goal is to determine (x-a)δa. To do this let's call f=x-a and T=δa (shouldn't we write Tδa ?).
So we have the following properties: fT(φ)=T(f φ)=f(a)φ(a)=0.
I don't understand the notations, and why we can include f in the parentheses along with φ, when φ is a test function (infinitely differentiable) and f is not.
Also, suppose we look at the function f as a distribution => we write Tf . What should we do if we have:
gTf (φ) = ?
1° Tgf (φ) - we evaluate g as a distribution as well
2° Tf (g φ) - we multiply g with phi as in the previous example.
May someone please clarify the notations and the meaning behind them? That would save my life...

Thank you very much in advance for the answers and for the patience.

EDIT: I think I understood a part of my question 3 - in the end, the two variants that I wrote, 1° and 2°, are the same. However, how do we prove that ∫ (x-a) δ(x-a) φ(x) dx = 0?
 
Last edited:
  • #2
I'll try to answer question 1. For 1.2: no. For 1.1: think of the delta function as the limit (as n becomes infinite) of [tex]f_n(x)[/tex], where [tex]f_n(x)=n,\ \frac{-1}{2n}\le x\le \frac{1}{2n}[/tex] and 0 otherwise.
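A quick numerical sketch of this limit (the Gaussian test function and the grid are my own choices, just for illustration):

```python
import numpy as np

def f_n(x, n):
    # Box of height n and width 1/n centred at 0; its integral is 1 for every n.
    return np.where(np.abs(x) <= 1.0 / (2 * n), float(n), 0.0)

phi = lambda x: np.exp(-x**2)       # a smooth test function with phi(0) = 1

x = np.linspace(-1, 1, 2_000_001)   # fine grid so the narrow boxes are resolved
dx = x[1] - x[0]
for n in (1, 10, 100):
    approx = np.sum(f_n(x, n) * phi(x)) * dx
    print(n, approx)                # approaches phi(0) = 1 as n grows
```

The integrals ∫ f_n(x)φ(x) dx pick out an average of φ over an ever-narrower window around 0, which is why the limit is φ(0).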
 
  • #3
mathman said:
I'll try to answer question 1. For 1.2: no. For 1.1: think of the delta function as the limit (as n becomes infinite) of [tex]f_n(x)[/tex], where [tex]f_n(x)=n,\ \frac{-1}{2n}\le x\le \frac{1}{2n}[/tex] and 0 otherwise.
Thank you for your response!
For 1.2, I guess it is not possible to write δ(x) outside of an integral because it is viewed as a distribution. However, it still seems to have a value for each x, which is why I am confused.
 
  • #4
Cathr said:
Thank you for your response!
For 1.2, I guess it is not possible to write δ(x) outside of an integral because it is viewed as a distribution. However, it still seems to have a value for each x, which is why I am confused.
In the standard layout, distributions "act on" smooth functions of compact support, aka test functions, i.e., they are in their dual space. So the x in ##\delta(x)## is not a Real number, since these operate on test functions, not Real numbers. I hope this is what you meant.
 
  • #5
To me it seems like you have a few misunderstandings, so please forgive my long reply...

One thing to remember is that distributions are not functions (which map numbers to numbers), but functionals (which map functions to numbers). That is what your notation ##\delta(\phi)=\phi(0)## is indicating; for each test function ##\phi## the delta distribution produces the number ##\phi(0)##, which of course depends on the particular test function ##\phi##. In order to keep the distinction clear, some people use square brackets for the functional: ##\delta[\phi] =\phi(0)##. Hopefully that answers your question 1.1.

Of course, we want the regular functions we deal with all the time to be included in this theory of generalized functions. If ##f## is a typical function (like ##\sin##) then the distribution generated by ##f## is often denoted ##T_f##. When the integral is well defined, we define ##T_f[\phi] = \int \phi(x) f(x) dx##. Such distributions are called regular. The delta distribution is not regular because the integral makes no sense. So while it is common to write ##\delta[\phi]=\int \phi(x) \delta(x) \, dx##, the integral symbol does not actually denote an integral here; it is just a symbol to remind you that it is a distribution that produces a number for each ##\phi##.
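A regular distribution is easy to mimic numerically (the grid, cutoffs, and test function here are my own choices for illustration):

```python
import numpy as np

def T(f, phi, lo=-10.0, hi=10.0, num=200_001):
    # Regular distribution generated by f: T_f[phi] ~ integral of f(x) phi(x) dx.
    x = np.linspace(lo, hi, num)
    return np.sum(f(x) * phi(x)) * (x[1] - x[0])

phi = lambda x: np.exp(-x**2)   # smooth, rapidly decaying test function

print(T(np.sin, phi))           # ~0, since sin(x) exp(-x^2) is odd
print(T(np.cos, phi))           # ~ sqrt(pi) * exp(-1/4), the exact integral
```

For the delta distribution no such `f` exists, which is precisely why it is not regular.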

Part of your confusion may simply be notational. Above I have been following your lead and using the ##T[\phi]## notation. As pointed out by WWGD, it is not actually correct to write distributions using function notation, such as ##\delta(x-a)##, but the notation is so incredibly useful that, in my opinion, scientists and engineers should go ahead and do it anyway as long as they remember that distributions are functionals, not functions. For example, it is common to read a physics or engineering book and see a differential equation such as ##g^{\prime} + k\,g = \delta(x)##. What does this mean? It means ##T^\prime_g[\phi] + k T_g[\phi] = \delta[\phi]##, and we are looking for a distributional solution ##T_g## that satisfies the differential equation for all test functions ##\phi##. This is what you should think when you see a delta function by itself. Hopefully this answers 1.2.
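As a concrete instance (my own worked example, not from your text): ##g(x) = H(x)e^{-kx}##, with ##H## the Heaviside step, solves ##g^\prime + k\,g = \delta## distributionally, and a crude numerical check using the definition ##T^\prime[\phi] = -T[\phi^\prime]## agrees:

```python
import numpy as np

k = 2.0
g = lambda x: np.where(x >= 0, np.exp(-k * x), 0.0)   # candidate: H(x) e^{-kx}
phi = lambda x: np.exp(-x**2)                          # test function
dphi = lambda x: -2.0 * x * np.exp(-x**2)              # phi'

x = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]
Tg = lambda h: np.sum(g(x) * h(x)) * dx               # T_g[h] as a numeric integral

# T_g'[phi] + k T_g[phi], with T_g'[phi] computed as -T_g[phi']
lhs = -Tg(dphi) + k * Tg(phi)
print(lhs, phi(0.0))   # both ~1: the equation reproduces delta[phi] = phi(0)
```

Integration by parts on ##\int_0^\infty e^{-kx}\phi'(x)\,dx## is what makes the boundary value ##\phi(0)## appear.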

In order to answer your question 3, you need to know that any given distribution is only a legitimate distribution for a given set of test functions, and the operations allowed on that distribution (eg differentiation) also depend on the set of test functions. For example, ##\delta_a[\phi]=\phi(a)## is only a valid distribution for test functions ##\phi## that are continuous at ##a##. The derivative of the delta distribution, ##\delta_a^\prime[\phi] = -\phi^\prime(a)##, is only legitimate for test functions that have a continuous first derivative at ##a##, etc. Of course it is clunky to think about this all of the time, which is why distribution theory is usually developed by first defining a set of test functions ##\mathcal{D}## that are infinitely differentiable and have compact support (that is, vanish outside a finite interval). Distributions are then defined as linear functionals on test functions in ##\mathcal{D}##. As usual, linear means that for any two numbers ##a## and ##b##, and any ##\phi\in\mathcal{D}## and ##\psi\in\mathcal{D}##, a distribution ##T## must satisfy ##T[a\phi + b\psi] = a T[\phi] + b T[\psi]##. There is also a continuity requirement on distributions that can be important for more advanced parts of the theory, but it can usually be ignored by those of us who just need the basics, so I will ignore it. Anyway, the class of test functions ##\mathcal{D}## produces the most general set of distributions, and such distributions are in a class labeled ##\mathcal{D}^\prime##. But most (if not all) distributions ##T\in \mathcal{D}^\prime## that we usually care about can be extended to wider classes of test functions. Our prime example is the delta distribution, as discussed above.
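A minimal sketch of the "functionals on test functions" picture in code (toy code of my own, not any standard library):

```python
# Toy model: a distribution is just a map from test functions to numbers.
def delta_at(a):
    return lambda phi: phi(a)       # delta_a[phi] = phi(a)

d = delta_at(3.0)
phi = lambda x: x**2
psi = lambda x: x + 1.0

# Linearity check: T[a*phi + b*psi] == a*T[phi] + b*T[psi]
a, b = 2.0, -5.0
combo = lambda x: a * phi(x) + b * psi(x)
print(d(combo), a * d(phi) + b * d(psi))   # both -2.0
```

Note that `d` never produces "a function of x" - its output for each test function is a single number, which is the whole point.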

In general it is not okay to multiply two generalized functions; whenever we want to do that we need to examine each particular case to see if it makes sense. However, if one of the distributions is generated by a continuously differentiable function then it is okay. So if ##g## is continuously differentiable then ##g \, T[\phi] = T[g\phi]##, which makes sense since if ##\phi \in \mathcal{D}## then ##g \,\phi \in \mathcal{D}##. In your case, ##(x-a) \phi(x)## is continuous at ##a## as long as ##\phi## is continuous at ##a##, so ##\delta_a[(x-a) \phi(x)]## is well defined. Explicitly, if we let ##\psi(x)=(x-a)\phi(x)##, then clearly ##\delta_a[\psi] = \psi(a) = (a-a)\phi(a) = 0##.

Jason
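This also bears on the edit in post #1: if ##\delta_a## is approximated by narrow boxes (my own choice of approximating sequence), the "integral" ##\int (x-a)\,\delta(x-a)\,\phi(x)\,dx## can be seen shrinking to zero numerically:

```python
import numpy as np

a = 1.5
phi = lambda x: np.exp(-(x - 1.0)**2)   # any smooth test function

x = np.linspace(a - 1.0, a + 1.0, 2_000_001)
dx = x[1] - x[0]
for n in (1, 10, 100):
    # Narrow box of height n centred at a, approximating delta_a
    box = np.where(np.abs(x - a) <= 1.0 / (2 * n), float(n), 0.0)
    val = np.sum((x - a) * box * phi(x)) * dx
    print(n, val)    # shrinks toward 0 as the box narrows
```

The factor ##(x-a)## vanishes exactly where the box concentrates its weight, which is why the limit is 0.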
 
  • #6
jasonRF said:
To me it seems like you have a few misunderstandings, so please forgive my long reply...
…

Thanks a lot! This makes a lot of sense now, you are very good at explaining! The way I viewed distributions was wrong, mainly because I'm not familiar with functionals, but now it's clearer. I need more practice though...
 
  • #7
Cathr said:
The concepts I am struggling with are the following:

1. As a distribution, the Dirac delta function is defined by ∫ δ(x) φ(x) dx = δ(φ) = φ(0)
Mathematical notation doesn't define anything until we say what that notation means. There is a "magical" aspect to certain types of notation due to the fact that by merely manipulating symbols, we can (with care) get correct answers without thinking about the meaning of the symbols. To employ this magic we sometimes must use notation that is misleading or wrong if interpreted literally.

Without the magic, we could state things this way:

##\delta## is defined as a function whose domain is a set of functions and whose co-domain is the set of real numbers.
The value of ##\delta## evaluated at the function ##\phi## is ##\delta(\phi) = \phi(0)##.

A function whose domain is a set of functions and whose co-domain is a set of real numbers is called a "functional".

For example, we could define a functional ##m(\phi)## as ##m(\phi) = \max_{x \in [0,1]} \phi(x)## = the maximum value of ##\phi(x)## on the interval [0,1] (if the maximum exists).

The functional ##\delta## is a linear functional. E.g. ##\delta ( 3f + 2g) = 3 f(0) + 2 g(0) = 3 \delta(f) + 2 \delta(g) ##

By contrast the functional ##m## is not linear. E.g. ##\max_{x \in [0,1]} (3f + 2g)## need not be equal to ##3 (\max_{x \in [0,1]} f) + 2 (\max_{x \in [0,1]} g)##.

If taken literally, the notation ##\delta(x)## would mean ##\delta## evaluated at the identity function ##\phi(x) = x##, so ##\delta(x)## would be ##\phi(0) = 0##. To be consistent with that interpretation, ##\int \delta(x) f(x) dx ## would be ##\int (0) f(x) dx = \int 0 dx = 0##. It is clear that people who denote ##\delta## as ##\delta(x)## are using some magical notation because the standard interpretation of their notation is not what they mean.

The motivation for writing ##\delta## as ##\delta(x)## comes from the fact that we can define less exotic linear functionals by integrations. For example, if we let ##k(x) = \sin(x)##, we could define a linear functional ##K(f)## by ##K(f) = \int_0^1 k(x) f(x) dx = \int_0^1 \sin(x) f(x) dx##. So when we write ##\delta## as ##\delta(x)## we are pretending that we can define ##\delta(\phi)## by doing an integration where the integrand has the factor ##\delta(x)##.
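For instance (my own numeric illustration), evaluating ##K## at ##f(x) = x##:

```python
import numpy as np

k = np.sin
def K(f, num=1_000_001):
    # K[f] ~ integral of sin(x) f(x) over [0, 1]
    x = np.linspace(0.0, 1.0, num)
    return np.sum(k(x) * f(x)) * (x[1] - x[0])

print(K(lambda x: x))            # integral of x sin(x) on [0,1]
print(np.sin(1.0) - np.cos(1.0)) # the exact value, by integration by parts
```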
There are several things I do not understand here:
1.1. What is the meaning of φ(0)? There is a wide range of bump functions, so the value of phi in zero could vary for each test function, so why do we write that it is equal to the generalized delta function?
##\phi(0)## doesn't denote a single constant. The notation is used to define the value of ##\delta(\phi)## for an arbitrary function ##\phi##.

1.2. Can we write δ(x) outside the integral? Won't it have the same meaning as δ(φ)?

Changing notation doesn't automatically define the meaning of the notation. You can make changes in notation that result in meaningless expressions. You can make changes in notation that have no pre-defined meaning and then define what you mean by the notation. So the question is: what would you (or other people) mean by writing ##\delta(x)## outside the integral sign? This is not a simple question, since writing ##\delta(x)## within the integral sign involves the fiction that ##\delta## can be defined as an integral. I don't know if there is a standard interpretation for writing ##\delta(x)## outside the integral sign.

2. We have (f(0) φ(0))' - basically we want to derive the product of two functions.

Where do we have that? How does that expression involve ##\delta##?

Do you mean that you wish to take the derivative of an expression like ##f(x)\delta##? In that case, you don't have the product of two functions, each of which maps real numbers to real numbers. Remember ##\delta## is not a function whose domain is a set of real numbers.

How to define the derivative of a functional is a technically complicated question. The derivative of a real valued function of a real variable is defined as a limit of a "difference quotient", which is an expression involving real valued functions of real variables. Shall we define the derivative of a functional as a limit of some expression involving functionals? How do your text materials define the derivative of a functional in general? - or do they only define the derivative to the specific functional ##\delta##?
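For reference, the route most texts take (this is the standard definition, not something specific to any one book): for a regular distribution ##T_f## with ##f## continuously differentiable and ##\phi## of compact support, integration by parts gives

$$T_{f'}[\phi] = \int f'(x)\phi(x)\,dx = -\int f(x)\phi'(x)\,dx = -T_f[\phi'],$$

the boundary term vanishing because ##\phi## has compact support. One then *defines* ##T'[\phi] = -T[\phi']## for an arbitrary distribution ##T##, so that in particular ##\delta'[\phi] = -\phi'(0)##.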
3. This one gives me the most headaches. The goal is to determine (x-a)δa.
That would give me a headache too because the notation is unfamiliar. We'd have to see how your text materials define that notation. The line of inquiry can be "animal, vegetable, or mineral?", i.e. what does ##(x-a)\delta_a## denote?:
1) a real number?
2) a real valued function of a real variable?
3) a functional ?

It's useful to implement the concept of "a delta-like functional whose value evaluated at the function f is f(a)" where "a" need not be zero. How do your text materials denote that concept?
 
  • #8
Thanks a lot for your answer! And excuse me for my delayed reply.

Stephen Tashi said:
Where do we have that? How does that expression involve ##\delta##?

Now that I understand distributions better, I see that my question was very badly formulated. It comes from the following reasoning:
We have to differentiate ##f(x)\delta_0##, so I used the property ##f(x)\delta_0(\phi(x))##=##\delta_0(f(x)\phi(x))##. This is equal to ##f(0)\phi(0)##. Since ##f(x)\delta_0##=##f(0)\phi(0)##, I thought their derivatives must be equal as well, but I got different results. For the distributions I used the property ##T'_f (\phi)##=##-T_f (\phi')##, which is the second term of the integration by parts, if we write the distribution as an integral. For the functions, however, I used the standard differentiation and thought it must equal the derivative of the distribution. Not only is this statement wrong, but ##f(0)\phi(0)## is also a constant, so differentiating it would give zero. I had the misconception that functions somehow behave differently when they are seen as distributions (however, that's what they still are - functions evaluated at a point).
I thought it was like the following example: if ##g(y)=2y##, then ##g'(y)=(2y)'=2##, but this doesn't work for distributions, because their output is always a number.
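For what it's worth, the product rule does come out consistent once everything is read distributionally (my own check, using the definition ##T'[\phi] = -T[\phi']##): for smooth ##f##,

$$(f\delta)'[\phi] = -(f\delta)[\phi'] = -\delta[f\phi'] = -f(0)\phi'(0),$$

while applying the product rule ##(f\delta)' = f'\delta + f\delta'## gives

$$f'(0)\phi(0) + \delta'[f\phi] = f'(0)\phi(0) - (f\phi)'(0) = f'(0)\phi(0) - f'(0)\phi(0) - f(0)\phi'(0) = -f(0)\phi'(0).$$

The two agree; the mistake is differentiating the number ##f(0)\phi(0)## instead of the functional itself.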

Stephen Tashi said:
The would give me a headache too because the notation is unfamiliar. We'd have to see how your text materials define that notation. The line of inquiry can be "animal, vegetable, or mineral ?", i.e. what does ##(x-a)\delta_a## denote?:

##(x-a)\delta_a## denotes the function (x-a) multiplied by ##\delta_a##, which is equal to ##\delta_a(x)=\delta_0(x-a)## (in the integral). It's ##\delta##, translated by a, having its maximum value at a.
A way to calculate this is: ##(x-a)\delta_a(\phi)##=##\delta_a((x-a)\phi(x))##, which is equal to zero, but if I have to evaluate it as an integral, I don't really know what to do.

Stephen Tashi said:
##\phi(0)## doesn't denote a single constant. The notation is used to define the value of ##\delta(\phi)## for an arbitrary function ##\phi##.

So this means that the value of ##\delta## at 0 isn't infinity all the time, but that it depends on the test function? If we have ##2\delta(\phi)##, that means that delta evaluates the double of ##\phi##, does it also mean the final output will be the double of ##\delta## for the same ##\phi##?
 
  • #9
Cathr said:
##(x-a)\delta_a## denotes the function (x-a) multiplied by ##\delta_a##, which is equal to ##\delta_a(x)=\delta_0(x-a)##
Why is that true? Is it a theorem or a definition?

A voice of authority https://cds.cern.ch/record/1453294/files/978-3-642-23617-4_BookBackMatter.pdf (p 451 referring to eq A.23) says "this definition".

It's important to think about what the notation ##(x-a)\delta_a## would mean without creating a special definition for it. Yes, it would denote the product of a function (x-a) times a delta functional. However, what would that product represent?

Taken literally, the product would NOT represent a functional. If we evaluate ##(x-a)\delta_a## applied to a test function ##\phi(x)## we get: ## (x-a) \phi(a)## and this is a real valued function of a real variable ##x##, not a functional. If it were a functional the result should be a single real number, not something that is a function of ##x##.

One often hears the phrase "abuse of notation" in reference to writing mathematics. This describes using notation that is ambiguous or outright wrong unless interpreted with special conventions. The voice of authority defines how the notation ##(x-a)\delta_a## is used to denote a functional.
(in the integral). It's ##\delta##, translated by a, having its maximum value at a.
Only if you are writing poetry about mathematics. From an unpoetic viewpoint, ##\delta_a## is not a real valued function of a real variable, so ##\delta_a## has no "maximum value". There are such things as "indicator functions". However, ##\delta_a## is a functional, not an indicator function.

A way to calculate this is: ##(x-a)\delta_a(\phi)##=##\delta_a((x-a)\phi(x))##, which is equal to zero, but if I have to evaluate it as an integral, I don't really know what to do.

I don't know what you mean by "it". Are you asking about how to do some manipulation using integration to prove ##(x-a)\delta_a(\phi) = \delta_a( (x-a) \phi)##? Using the approach in the link I cited, this equality is not a theorem, so it requires no proof. The equality is a definition of what the notation on the left hand side of the equation means. A discussion in terms of integration is motivation for the definition. It isn't "proof" of the definition because a definition is not in need of proof; a definition is not a theorem. Using integrals to motivate the definition is done on the top of p451.

So this means that the value of ##\delta## at 0 isn't infinity all the time, but that it depends on the test function?
"Infinity" is not a real number. ##\delta## is functional; it maps a function to single real number. The "value" of ##\delta## is never "infinity".

If we have ##2\delta(\phi)##, that means that delta evaluates the double of ##\phi##, does it also mean the final output will be the double of ##\delta## for the same ##\phi##?
I'd say yes. Begin by applying the definition given by eq A.23:
##(gT)[\phi] = T[g \phi]##
Let ##T = \delta_a##.
Let ##g## be the constant function ##g(x) = 2##
Then use the fact that ##\delta## is a linear functional.
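Carried out, those steps give

$$(2\delta_a)[\phi] = \delta_a[2\phi] = 2\phi(a) = 2\,\delta_a[\phi],$$

so the output is indeed doubled, by linearity.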
 
Last edited:
  • #10
Thanks for your response, you helped me very much!

Stephen Tashi said:
Why is that true? Is it a theorem or a definition?
…
One often hears the phrase "abuse of notation" in reference to writing mathematics. This describes using notation that is ambiguous or outright wrong unless interpreted with special conventions. The voice of authority defines how the notation ##(x-a)\delta_a## is used to denote a functional.

Actually that's why I was having trouble understanding this product, the notations are very ambiguous.
I'll start studying the notions in the source you provided, thanks again!
 

What are distributions (generalised functions)?

Distributions, also known as generalised functions, are a mathematical concept that extends the notion of a function to a larger class of objects. They are used to represent objects that cannot be defined as functions in the usual sense, such as the Dirac delta function.

What is the difference between a distribution and a traditional function?

A traditional function is a rule that assigns an output number to each input number. A distribution is instead a linear functional: it assigns a number to each test function, and it need not have values at individual points at all. This is what allows distributions to represent objects, including highly singular ones, that no ordinary function can.

How are distributions useful in scientific research?

Distributions are useful for solving problems in many areas of science, such as physics, engineering, and mathematics. They provide a way to model and analyze complex systems and phenomena that cannot be described by traditional functions.

What is the Dirac delta function and how is it related to distributions?

The Dirac delta function, denoted by δ(x), is informally described as being zero everywhere except at x = 0, where it is "infinite"; strictly speaking it is the distribution defined by δ[φ] = φ(0). It is used to model point sources in physics and is closely related to the concept of impulse in classical mechanics.

Can distributions be differentiated or integrated?

Yes. Differentiation of distributions is always possible: it is defined by T′[φ] = −T[φ′], so every distribution has derivatives of all orders in this sense. The classical rules do not all carry over; in particular, the product of two arbitrary distributions is not defined in general, though multiplying a distribution by a smooth function is. Distributions can also be convolved with test functions, which provides one well-defined sense of integrating against them.
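A standard worked example of distributional differentiation (standard material, not taken from the thread above): the Heaviside step ##H## has no classical derivative at 0, yet

$$T_H'[\phi] = -T_H[\phi'] = -\int_0^\infty \phi'(x)\,dx = \phi(0) = \delta[\phi],$$

so ##H' = \delta## in the sense of distributions.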
