What is the probability associated with a Dirac delta-like distribution?

  • Context: Graduate 
  • Thread starter: coolnessitself
  • Tags: Dirac, Distribution

Discussion Overview

The discussion revolves around the interpretation of a probability distribution that exhibits infinite probability at a specific point, particularly in the context of a radially symmetric Laplace distribution in two dimensions. Participants explore the implications of such a distribution, its relation to Dirac delta functions, and the challenges of sampling from it.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether sampling from a distribution with infinite probability at r=0 would always yield r=0, given that p(0) is infinitely more likely than any other r.
  • Another participant suggests that the question may relate to different scenarios, such as Monte Carlo simulations or mathematical proofs involving the distribution.
  • A participant describes a jump diffusion process and expresses confusion about how to sample from the radially symmetric distribution p(r) while noting that inverse CDF sampling yields a peak away from r=0.
  • One participant distinguishes between a distribution with an infinite value at a point and a Dirac delta function, suggesting that they do not behave the same way.
  • Another participant provides an example of a random variable with a point mass and discusses how to handle such cases in terms of probability density and point masses.
  • There is a discussion about the implications of having an infinite probability density and how it relates to integrability and sampling methods.

Areas of Agreement / Disagreement

Participants express differing views on how to interpret the infinite probability at a point and its implications for sampling. There is no consensus on whether the distribution behaves like a Dirac delta function or how to properly sample from it.

Contextual Notes

Participants highlight the need for careful consideration of integrability and the mathematical treatment of distributions with infinite values, indicating that traditional probability density functions may not apply directly in this case.

coolnessitself
Hi all,
I have a question about the actual value associated with the probability p(r) where p(r) is infinite for r=0.
I realize that this p(r) can only be a distribution and only exist under an integral, and can't represent a pdf. My p(r) is a radially symmetric Laplace distribution in 2D, centered at the origin:
[tex] p(r) = \frac{1}{2\pi} K_0\left(r\right)[/tex]

where [tex]K_0(\cdot)[/tex] is a modified Bessel function of the second kind (recall that [tex]K_0(0)=\infty[/tex]). This exact distribution isn't really my question; it's just one with nonzero probability around r=0 and infinite 'probability' at 0.
All moments are defined, and
[tex] 2\pi \int\limits_0^{\infty} p(r)\, r\, dr = 1.[/tex]
But if I sample from this distribution, will I always get r=0, since p(0) is infinitely more likely than any other r? I see the relation to a dirac delta, but in that case [tex]p(r)=0[/tex] for all r other than r=0. Here, [tex]p(r\ne 0)>0[/tex]. Does that make any difference?

In the end, I'm trying to assign a probability to any r. This would be easy if all p(r)<=1, but I don't know what to do in this case. Any question I can think of asking hits a roadblock since the pdf doesn't exist, but I guess I'm simply having trouble interpreting what that means if some p(r) are nonzero and finite.
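As a sanity check on that normalization, here is a quick numerical sketch (Python, assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# Check 2*pi * integral_0^inf p(r) r dr = 1 for p(r) = K_0(r)/(2*pi),
# i.e. integral_0^inf r*K_0(r) dr = 1.  The factor r from the 2-D
# area element tames the logarithmic singularity of K_0 at r = 0.
total, err = quad(lambda r: r * k0(r), 0.0, np.inf)
print(total)   # -> 1.0 up to quadrature error
```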
 
coolnessitself said:
This exact distribution isn't really my question, it's just one with nonzero probability around r=0 and infinite 'probability' at 0.
...
But if I sample from this distribution, will I always get r=0, since p(0) is infinitely more likely than any other r?

It would help if you clarified your goal. The scenarios that come to my mind are:

1. You want to write a computer program to do a Monte Carlo simulation and you want to draw samples from this distribution.

2. You are trying to do a mathematical proof and you want to do a step like divide something by p(r) when r is not 0.

3. You are wondering about a Platonic mathematical concept (such as people who like to discuss whether 0.9999... = 1).
 
Hi Stephen,
Essentially I guess it's #1. I have a jump diffusion process

[tex] x(t) - x(0) = \int\limits_0^t f(x(s))ds + \int\limits_0^t g(x(s))dw(s) + J(t)[/tex]

where J(t) specifies both Pr(jump in [t, t+Δ]) and the distribution of the resulting jump sizes. The latter distribution is my radially symmetric p(r) from the previous post.
If I were to simulate this, I'd have to draw samples from p(r). Would I always draw r=0? Seems like no: If I do inverse cdf sampling, I get a nice curve with a peak around r=0.7, but I don't understand why this is the case.

If I were to try and model how x(t) behaves, I might divide a 2d space into a grid and look at the evolution of p(x,t). In this case, the transition from x(t) to a radially symmetric distribution about x(t) (given by p(r)) would be of interest. But are any of these probabilities defined?
 
I don't know about modified Bessel functions of the second kind, so as I read your original post, I was thinking that you had the equivalent of a Dirac delta function. Reading more carefully, I think your question is whether a probability distribution that becomes infinite at some value behaves like a Dirac delta function. I think the answer to that is no. There are functions with vertical asymptotes whose improper integrals nevertheless converge. The Wikipedia article on improper integrals gives the example
[tex]\int_a^c \frac{e^x}{\sqrt{c-x}}\, dx.[/tex]
I think you can sample from the cdf of such a distribution.
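One can check numerically that such an integral is finite despite the asymptote at x = c; a Python sketch (assuming SciPy; the choices a = 0, c = 1 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# The integrand e^x / sqrt(c - x) blows up at x = c, but the
# singularity is of order (c - x)^(-1/2), which is integrable,
# so the improper integral over [a, c] is finite.
a, c = 0.0, 1.0
val, err = quad(lambda x: np.exp(x) / np.sqrt(c - x), a, c)
print(val)   # finite, roughly 4.06 for these endpoints
```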

If you do anything involving the mean, variance etc. of such a distribution you'd have to check that those integrals exist.
 
Hi,
thanks for the link.

So the Bessel function simply appears in a bivariate Laplace distribution, which is what I'm dealing with. I guess from an intuitive standpoint, I'm confused about its pdf. If the pdf becomes infinite at some point, how does that relate to probability? If the pdf maps R to some probability, how is it possible to have an infinite probability around 0?
For a delta function, it makes sense (kinda) that you can have an infinite probability, since when defined as a distribution you're mapping both a set and R to this probability. I don't see a clear analogy to this case, so I feel like there must be a formal way of treating this that I'm glossing over.
 
Imagine a random variable generated this way: I flip a fair coin. If it lands heads, I assign the random variable X the value 6. If it lands tails then I make a draw from a uniform distribution on [0 , 10] and assign X to be that value.

We could represent the density of X as a flat line segment at height y = 0.05 (one half times one tenth), except that at the point x = 6 it would have a vertical spike to y = 1/2. However, this density would not integrate to 1 by the normal way of doing integrals - intuitively, in a Riemann sum, the point x = 6 only matters in a "rectangle" whose width approaches zero.

The way I have seen this handled in applied math courses is that the point x = 6 is given the special distinction of being a "point mass", and it is understood that in doing the integral one must add 1/2 to the total because of the "point mass" probability at x = 6.

I'm sure the measure theory of pure mathematics has ways of justifying this.

For a probability density f(x), it is convenient to think of f(x) as "the probability of x", but the above example illustrates the distinction between a point mass, which is a situation where that is actually true vs the non-point mass case where it is merely a metaphor. Actually, the probability of drawing any particular value of x from a continuous pdf is zero. (I reconcile this with reality by saying that when I draw a random number on a computer, I am really drawing a random interval due to the finite precision of the machine.) So as long as you are not treating the location of a vertical asymptote as any kind of point mass, you don't have any theoretical worries. You might have some numerical methods worries.
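The coin construction can be simulated directly; a Python sketch (assuming NumPy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Heads (prob 1/2): X = 6 exactly (a point mass).
# Tails: X ~ Uniform[0, 10] (density 0.05 on that interval).
n = 200_000
heads = rng.random(n) < 0.5
x = np.where(heads, 6.0, rng.uniform(0.0, 10.0, size=n))

# The atom shows up as a genuinely positive probability of drawing an
# exact value, which never happens for the purely continuous part.
print(np.mean(x == 6.0))   # close to 0.5
```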
 
coolnessitself said:
If the pdf becomes infinite at some point, how does that relate to probability? [...] I feel like there must be a formal way of treating this that I'm glossing over.

The pdf can be infinite in places provided it's integrable. A simpler example would be [tex]f(x) = \frac{1}{2\sqrt{x}}[/tex] for 0 < x ≤ 1, which corresponds to the cdf [tex]F(x)=\sqrt{x}[/tex]. For this example numbers can be drawn from the distribution by cdf inversion: take U uniform on (0,1) and solve F(X)=U.
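Concretely, a Python sketch of that inversion (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)

# f(x) = 1/(2*sqrt(x)) on (0, 1] is unbounded at 0 but integrates to 1;
# its cdf is F(x) = sqrt(x), so the inverse is F^{-1}(u) = u**2.
u = rng.random(100_000)
x = u ** 2

# The infinite density at 0 just means mass piles up near 0;
# P(X = 0) is still zero.  E[X] = 1/3 for this density.
print(x.mean())   # close to 1/3
```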

For your example two options would be to either find an analytic expression for the cdf and perform cdf inversion (numerically if necessary) as described above, or if the distribution has a single mode you could try more general methods such as the ziggurat algorithm (which is how normal random numbers are implemented in MATLAB).
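Since the radial density g(r) = r·K_0(r) does have a single mode, it can be located numerically before setting up such a method; a sketch assuming SciPy (the bracket [0.1, 2] is just a guess that happens to contain the sign change, and K_0'(r) = -K_1(r)):

```python
from scipy.special import k0, k1
from scipy.optimize import brentq

# Mode of g(r) = r*K_0(r): solve g'(r) = K_0(r) - r*K_1(r) = 0,
# using the identity K_0'(r) = -K_1(r).
mode = brentq(lambda r: k0(r) - r * k1(r), 0.1, 2.0)
print(mode)   # a bit above 0.5 -- consistent with a peak away from r = 0
```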
 
