Proving Dirac Delta Function Does Not Exist

SUMMARY

The discussion centers on proving that no continuous function can satisfy the defining property of the Dirac delta function. Key observations are that the integral of such a function must equal 1 and that it must be even, which leads to the conclusion that any continuous candidate would have to be zero everywhere except at x=0, where it would have to be infinite. The participants use a sequence of narrowing Gaussian test functions to show that the integral of a continuous candidate times such a test function eventually falls below 1, contradicting the delta property.

PREREQUISITES
  • Understanding of the Dirac delta function and its properties
  • Knowledge of continuous functions and their behavior
  • Familiarity with integral calculus, specifically definite integrals
  • Basic concepts of Gaussian functions and their applications
NEXT STEPS
  • Study the properties of the Dirac delta function in detail
  • Learn about the behavior of continuous functions and their integrals
  • Explore the application of Gaussian functions in approximating the Dirac delta function
  • Investigate the implications of discontinuities in functions related to the Dirac delta function
USEFUL FOR

Mathematicians, physicists, and students studying advanced calculus or functional analysis, particularly those interested in the properties of distributions and the Dirac delta function.

asdf60
How can I prove that no continuous function exists that satisfies the property of the Dirac delta function? I thought it should be pretty easy, but it's actually giving me quite a hard time! I know that the integral of such a function must be 1, and that it must also be even (symmetric about the y-axis). It's also easy to see that, for any single given function, some continuous "delta" works for it, but no one continuous delta function works for all functions at once. Where do I go from here?
 
So by the properties of the delta function, you must mean:

\int_a^b \delta(x) f(x)\, dx = \left\{ \begin{array}{cl} f(0) & \mbox{if } a<0<b\\ -f(0) & \mbox{if } b<0<a\\ 0 & \mbox{otherwise} \end{array} \right.

The last line implies the function must be zero everywhere but x=0 (or, to be more specific, any continuous function must be zero for x≠0 to satisfy this property), and the other two imply it cannot be zero at x=0, so it must be discontinuous. In fact, you could even show that no function at all, continuous or not, satisfies the above conditions, by showing that the value at x=0 would have to be infinite.
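
To fill in the continuity step (a sketch, not part of the original reply): suppose d(x) is continuous and d(x_0) ≠ 0 for some x_0 > 0. By continuity there is a δ with 0 < δ < x_0 such that d keeps the same sign and satisfies |d(x)| > |d(x_0)|/2 on (x_0 - δ, x_0 + δ). Taking a = x_0 - δ, b = x_0 + δ (so the interval misses 0) and f ≡ 1 gives

\left| \int_a^b d(x)\, dx \right| > 2\delta \cdot \frac{|d(x_0)|}{2} = \delta\,|d(x_0)| > 0

which contradicts the "otherwise" case above. The same argument works for x_0 < 0, so a continuous d must vanish at every x ≠ 0.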
 
Unfortunately, that is not the way the function is defined in this problem. The definition given is:

\int_a^b \delta(x) f(x) dx =f(0)
where a = -1, and b = 1, always.

Heh, I can't figure out how to make the limits of integration -1 and 1 in LaTeX.
 
Ok. Also, the function f must be continuous, right? You can define a sequence of Gaussian functions f_n(x) that get narrower and narrower but always have the value 1 at x=0. All you need to show is that, for any continuous function d(x), there is some n above which the integral of d(x)f_n(x) is less than one.
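
Here is a minimal numerical sketch of that idea (not from the original post; the candidate d(x) and the names below are purely illustrative). It fixes one continuous candidate d, narrows the Gaussian test functions f_n, and watches the integral of d(x)f_n(x) over [-1, 1] fall well below 1 = f_n(0):

import numpy as np

def d(x):
    # an arbitrary continuous candidate for the "delta" function (illustrative only)
    return 5.0 * np.cos(3.0 * x) * np.exp(-x**2)

def f(x, n):
    # Gaussian test functions: f_n(0) = 1, width shrinking like 1/n
    return np.exp(-(n * x) ** 2)

# Riemann-sum approximation of the integral on a fine grid over [-1, 1]
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
for n in (1, 10, 100, 1000):
    integral = np.sum(d(x) * f(x, n)) * dx
    print(f"n = {n:4d}:  integral of d*f_n on [-1,1] = {integral:+.6f}   (f_n(0) = 1)")

Whatever continuous d you plug in, the printed values shrink toward 0 as n grows, so they cannot all equal f_n(0) = 1.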

And for bounds on an integral (also powers, subscripts, summation indices, etc.), you need to put braces around the bounds if they are more than one character, for example:

\int_{-1}^{\sum_{k=1}^{\infty} e^{-p_k}} \delta(x) = 1
 
I don't think we'd be allowed to use Gaussian functions; I don't even really know much about them. However, I was thinking about doing something like that, but I still don't quite know how to prove that the integral of d(x)*f(x) will start to be less than 1...
 
You can use any functions that get narrower and narrower. To show the integral becomes smaller than one, you can use the fact that a continuous function d(x) on a closed interval takes on a maximum value M, and the integral of any function times d(x) is at most the integral of that function times M.
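
Spelled out (a sketch of the bound just described, taking f_n(x) = e^{-(nx)^2} as the narrowing test functions and M as a bound for |d(x)| on [-1, 1]):

\left| \int_{-1}^{1} d(x) f_n(x)\, dx \right| \le M \int_{-1}^{1} e^{-(nx)^2}\, dx \le M \int_{-\infty}^{\infty} e^{-(nx)^2}\, dx = \frac{M\sqrt{\pi}}{n}

The right-hand side goes to 0 as n grows, while f_n(0) = 1 for every n, so the defining property \int_{-1}^{1} d(x) f_n(x)\, dx = f_n(0) must fail for all large n.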
 
Right! I quickly dismissed that idea because I thought it assumed d(x) doesn't change sign, but I realize now, after thinking for a second, that it's only necessary that f(x) doesn't change sign, which of course we have control over.

Thanks for the help, and sorry for wasting your time.
 
No problem, and you're hardly wasting my time. For one thing, I answered your question voluntarily. Plus, your question gave me an idea that led me to a thread I posted in the analysis section.
 
