Information content of Dirac delta function


Discussion Overview

The discussion revolves around the information content of the Dirac delta function, particularly in relation to Shannon entropy. Participants explore the implications of treating the Dirac delta as a distribution and its connection to probability distributions, as well as the mathematical treatment of its entropy in both discrete and continuous contexts.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant suggests that the information content of the Dirac delta function is probably zero and seeks proof for this claim.
  • Another participant points out the distinction between the Dirac delta as a distribution and Shannon entropy as relating to probability distributions.
  • A participant references a source that calculates the Shannon entropy of the Dirac delta function as zero, but questions how this applies to the continuous case.
  • It is argued that the Dirac delta does not have a well-defined entropy since it is not a function, but a limit of approximations could yield an entropy of zero.
  • Further elaboration suggests that the limit of entropies of approximations to the Dirac delta could be calculated, leading to a discussion about whether this limit is indeed zero.
  • Another participant presents a mathematical approximation using rectangles and concludes that the entropy approaches negative infinity, noting that differential entropy can be negative.

Areas of Agreement / Disagreement

Participants express differing views on the information content of the Dirac delta function and its entropy, with no consensus reached on whether it is zero or negative infinity. The discussion remains unresolved regarding the correct interpretation and calculation of entropy in this context.

Contextual Notes

Participants highlight limitations in defining entropy for the Dirac delta function and the dependence on the chosen approximations. The discussion also reflects uncertainty about the transition from discrete to continuous cases in entropy calculations.

friend
I understand that the Dirac delta function can be taken as a distribution. And that one can calculate the Shannon entropy or information content of any distribution. So what is the information content of the Dirac delta function? I think it is probably identically zero, but I'd like to see the proof of it. I could not find anything specific on-line about this. So any help would be appreciated. Thanks.
 
Those are two entirely different senses of "distribution". The Dirac delta is a distribution in the sense of a generalized function; Shannon entropy relates to probability distributions.
 
I found this information on the Web, from the book Ecosystem Ecology: A New Synthesis, edited by David G. Raffaelli and Christopher L.J. Frid, on page 46.

He calculated the Shannon entropy of a Dirac delta function to be zero. Actually, he seems to be calculating it for the discrete Kronecker delta. I wonder how one would pass to the continuous case of the Dirac delta. Thanks.
 

Attachment: Entropy_of_delta.gif
The Dirac delta is not a function, so it does not have a well-defined entropy. (If you define the cumulative distribution as P(X < x) rather than P(X <= x), the delta does correspond to a genuine probability distribution, namely P(X < x) = 0 for x <= 0 and P(X < x) = 1 for x > 0, but that distribution is not generated by a probability density function.) However, you could define its entropy as the limit of the entropies of successive approximations to the Dirac delta. These can be taken to be increasingly narrower and higher bell curves or rectangles of constant unit area. It's easy to show that the limit is zero - the limit is in effect x*log(x) as x goes to 0.
 
Preno said:
... However, you could define its entropy as the limit of the entropies of successive approximations to the Dirac delta. These can be taken to be increasingly narrower and higher bell curves or rectangles of constant unit area. It's easy to show that the limit is zero - the limit is in effect x*log(x) as x goes to 0.

So if we take the limit of the delta function after we calculate its entropy, then it goes to zero, right? If it's not too much trouble for you, could you show me the math for this situation? Thanks.
 
Well, as I said, the Dirac delta distribution can be approximated, for example, by a sequence of rectangles R_n around zero, of width 1/n and height n.

[tex]H_n = - \int_{-\frac{1}{2n}}^{\frac{1}{2n}} R_n(x) \log R_n(x) \, \mathrm{d}x = - \frac{1}{n} \cdot n \log n = -\log n[/tex]

So actually you get [tex]-\infty[/tex] rather than 0. (But that's okay, because differential entropy can be negative.)
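For anyone who wants to check this numerically, here is a minimal sketch (not part of the thread; it assumes NumPy and SciPy are available, and the function names and the particular values of n and sigma are just illustrative). It evaluates the differential entropy of unit-area rectangles of width 1/n and of zero-mean Gaussians of width sigma, both of which approximate the Dirac delta as they narrow.

[code]
# Differential entropy of two families of unit-area approximations to the
# Dirac delta, both of which narrow towards a point mass at zero.
import numpy as np
from scipy.integrate import quad


def rect_entropy(n):
    """Entropy of a rectangle of width 1/n and height n: exactly -log(n)."""
    # The integrand is the constant n*log(n) on an interval of length 1/n.
    return -np.log(n)


def gaussian_entropy_numeric(sigma):
    """Entropy of N(0, sigma^2), evaluated by numerical quadrature."""
    def integrand(x):
        p = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
        return -p * np.log(p)
    value, _ = quad(integrand, -10.0 * sigma, 10.0 * sigma)
    return value


def gaussian_entropy_exact(sigma):
    """Known closed form for a Gaussian: 0.5 * log(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)


if __name__ == "__main__":
    for n in [1, 10, 100, 1000, 10000]:
        print(f"rectangle n={n:6d}:  H_n = {rect_entropy(n):9.3f}")
    for sigma in [1.0, 0.1, 0.01, 0.001]:
        h_num = gaussian_entropy_numeric(sigma)
        h_exact = gaussian_entropy_exact(sigma)
        print(f"gaussian sigma={sigma:7.3f}:  h = {h_num:9.3f}  (closed form {h_exact:9.3f})")
[/code]

The rectangle values reproduce H_n = -log(n) exactly, and the Gaussian closed form 0.5*log(2*pi*e*sigma^2) diverges the same way, so at least for these two families the entropy of the approximations tends to negative infinity rather than zero.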
 
