Information content of Dirac delta function

  • Thread starter: friend
  • #1
I understand that the Dirac delta function can be taken as a distribution, and that one can calculate the Shannon entropy, or information content, of a distribution. So what is the information content of the Dirac delta function? I think it is probably identically zero, but I'd like to see a proof. I could not find anything specific online about this, so any help would be appreciated. Thanks.
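(For reference, the quantity being asked about: for a probability density p(x), the differential Shannon entropy is

[tex]h(p) = -\int_{-\infty}^{\infty} p(x) \log p(x)\, \mathrm{d}x[/tex]

and the question is what this becomes when p is a Dirac delta.)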
 

Answers and Replies

  • #2
Those are two entirely different senses of "distribution". The Dirac delta is a distribution in the sense of a generalized function (a linear functional on test functions); Shannon entropy is defined for probability distributions.
 
  • #3
I found this information on the Web from the book:
Ecosystem Ecology: A New Synthesis, edited by David G. Raffaelli and Christopher L. J. Frid, page 46

The author calculates the Shannon entropy of the Dirac delta function to be zero. Actually, he seems to be calculating it for the discrete Kronecker delta. I wonder how one would pass to the continuous case of the Dirac delta. Thanks.
 

Attachments

  • Entropy_of_delta.gif (60 KB)
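(An aside, reconstructing the discrete calculation the attachment refers to rather than copying it: for a distribution concentrated on a single outcome k, i.e. p_i = 1 for i = k and p_i = 0 otherwise, and with the usual convention that 0 log 0 = 0,

[tex]H = -\sum_i p_i \log p_i = -1 \cdot \log(1) = 0[/tex]

so the discrete entropy of the Kronecker delta is exactly zero.)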
  • #4
147
0
The Dirac delta is not a function, so it does not have a well-defined entropy. (It does have a corresponding probability distribution: if you define it via p(X < x) rather than p(X <= x), then p(X < x) = 0 for x <= 0 and p(X < x) = 1 for x > 0. But this distribution is not generated by a probability density function.) However, you could define its entropy as the limit of the entropies of successive approximations to the Dirac delta. These can be taken to be increasingly narrow and tall bell curves, or rectangles of constant unit area. It's easy to show that the limit is zero: in effect it is the limit of x*log(x) as x goes to 0.
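(To make the parenthetical concrete: integrating the delta gives the unit-step cumulative distribution

[tex]F(x) = P(X \le x) = \int_{-\infty}^{x} \delta(t)\, \mathrm{d}t = \begin{cases} 0 & x < 0 \\ 1 & x \ge 0 \end{cases}[/tex]

which jumps at 0 and so has no ordinary density there.)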
 
  • #5
1,448
9
... However, you could define its entropy as the limit of the entropies of successive approximations to the Dirac delta. These can be taken to be increasingly narrow and tall bell curves, or rectangles of constant unit area. It's easy to show that the limit is zero: in effect it is the limit of x*log(x) as x goes to 0.

So if we take the limit after we calculate the entropy of each approximation, it goes to zero, right? If it's not too much trouble, could you show me the math for this situation? Thanks.
 
  • #6
Well, as I said, the Dirac delta distribution can be approximated, for example, by a sequence of rectangles R_n around zero of width 1/n and height n. The entropy of R_n is then

[tex]H_n = - \int_{-1/(2n)}^{1/(2n)} R_n(x) \log\big(R_n(x)\big)\, \mathrm{d}x = - \frac{1}{n} \cdot n \log(n) = -\log(n)[/tex]

So as n goes to infinity you actually get [tex]-\infty[/tex] rather than 0. (But that's okay, because differential entropy, unlike discrete entropy, can be negative.)
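A quick numerical cross-check of this divergence, using the bell-curve approximations mentioned in #4 instead of rectangles (a sketch in Python, not from the thread; it relies on the standard closed form 0.5 ln(2*pi*e*sigma^2) for the differential entropy of a Gaussian with standard deviation sigma):

[code]
import numpy as np

# Approximate the Dirac delta by Gaussians of shrinking width sigma.
# Their exact differential entropy, 0.5*ln(2*pi*e*sigma^2), diverges
# to -infinity as sigma -> 0, matching the -log(n) rectangle result.
for sigma in (1.0, 0.1, 0.01, 0.001):
    x = np.linspace(-10 * sigma, 10 * sigma, 200001)
    p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    dx = x[1] - x[0]
    h_numeric = -np.sum(p * np.log(p)) * dx   # Riemann sum of -∫ p ln p dx
    h_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    print(f"sigma={sigma:6.3f}  numeric={h_numeric:+.4f}  exact={h_exact:+.4f}")
[/code]

Both columns head to [tex]-\infty[/tex] as sigma shrinks, in line with the -log(n) behaviour of the rectangle sequence.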
 
