
Sampling a PDF

  May 9, 2016 #1
    Hi

    If I have the CDF ##F(r) = \int_0^{r^2/2} 2\sqrt{t/\pi}\, e^{-t}\, dt##
    and I want to generate samples from the corresponding PDF, would the
    Metropolis-Hastings algorithm be the best alternative?

    Rgds
    rabbed
     
  May 9, 2016 #2

    chiro

    Science Advisor

    Hey rabbed.

    The Metropolis-Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for pseudo-random number generation; it is heavily used in Bayesian statistics.

    You can use a number of techniques to generate random numbers from a distribution (collectively known as Monte Carlo methods), and your algorithm is one of them.

    Are you familiar with basic pseudo-random number generation, where you relate a distribution to the uniform distribution and then use standard U[0,1] techniques?

    https://en.wikipedia.org/wiki/Inverse_transform_sampling
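
    For a distribution whose CDF inverts in closed form, the whole method is a couple of lines. Here is a minimal sketch, using the Exponential(1) distribution as an example (its CDF ##F(x) = 1 - e^{-x}## inverts by hand):

    Code (Python):
    import numpy as np

    # Inverse transform sampling for Exponential(1): the CDF
    # F(x) = 1 - exp(-x) inverts to F^{-1}(u) = -ln(1 - u).
    rng = np.random.default_rng(0)
    u = rng.uniform(size=10_000)    # draws from U[0,1)
    x = -np.log(1.0 - u)            # samples from Exponential(1)
    print(x.mean())                 # should be close to the true mean, 1.0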
     
  May 9, 2016 #3
    Hi chiro

    Yes, I think so. If I have an invertible, closed-form CDF for my PDF, I can create samples from the PDF using a uniform distribution.
    But in my case, I think this is not possible?

    As I'm a beginner, I thought a good strategy would be to learn two methods: the preferred analytic way (inverse CDF) and a numerical method for cases where the analytic way fails. Would Metropolis-Hastings be a good choice?

    Rgds
    rabbed
     
  May 9, 2016 #4

    chiro

    Science Advisor

    Metropolis-Hastings would make more sense if you had studied Bayesian statistics, since it is based on those techniques.

    In Bayesian statistics, the pseudo-random generation is based on Markov chain Monte Carlo (MCMC) theory.

    Basically, the idea behind it (the actual proof is quite involved and takes a bit of effort to make sense of) is that you have conditional probabilities that converge to the stationary distribution.

    So a conditional probability could be represented as P(A|B), and with MCMC there is a technique that lets you recover the marginal distribution (in this case P(A)).

    https://en.wikipedia.org/wiki/Metropolis–Hastings_algorithm
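
    To make that concrete, here is a minimal random-walk Metropolis sketch (the proposal is symmetric, so the Hastings correction drops out). The target is assumed to be the unnormalized chi(3) density ##r^2 e^{-r^2/2}## implied by the CDF in post #1; the unknown normalizing constant cancels in the acceptance ratio, which is exactly what makes this family of algorithms attractive:

    Code (Python):
    import numpy as np

    # Random-walk Metropolis for the unnormalized target
    # p(r) ~ r^2 * exp(-r^2 / 2) on r > 0 (chi with 3 dof).
    def target(r):
        return r * r * np.exp(-r * r / 2.0) if r > 0.0 else 0.0

    rng = np.random.default_rng(1)
    r = 1.0                                   # arbitrary starting point
    samples = []
    for _ in range(50_000):
        proposal = r + rng.normal(scale=0.5)  # symmetric proposal
        # Accept with probability min(1, p(proposal)/p(r));
        # the normalizing constant cancels in this ratio.
        if rng.uniform() < target(proposal) / target(r):
            r = proposal
        samples.append(r)

    chain = np.array(samples[5_000:])         # discard burn-in
    print(chain.mean())                       # chi(3) mean = 2*sqrt(2/pi), about 1.596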

    The MCMC technique, once understood, gives you an idea of how the algorithm does its magic.

    The Bayesian techniques are extremely useful when you have all kinds of complicated dependencies between the distributions, and that is when they are best used.

    Normal pseudo-random generators are good for producing random vectors as long as you have lots of elements in the sample (basically, the more numbers you generate, the better the distribution will be reflected).

    Since you are a beginner, I'll point out that pseudo-random generation is always done with respect to an actual distribution (discrete or continuous), in the form of a mass function or a density function: you either take the exact probabilities from an existing parametric distribution or supply an arbitrary one yourself.

    This means that if you want to implement this in code, you will have to find a way to compute the density function. I would recommend the R platform, since it has lots of code available and does a lot of the really mundane stuff for you.

    It's good that you are curious. My recommendation is to start with the basics (inverse transform sampling), then learn some Bayesian (conditional probability) methods, and then, if you are keen, take a look at the MCMC algorithm and its proof (if you are super keen).

    What probability, mathematics, and statistics have you done?
     
  May 9, 2016 #5
    Hey

    I did some probability at a very basic level ages ago, and since then I got a bachelor's degree in computer science, which involved a lot of math but no probability or statistics. I'm Swedish, so I'm not sure the course levels match yours.
    So what I know at the moment is what I've picked up on the net.

    Not sure what you mean by computing the density function. If I already have the PDF, do you mean writing a function with which I can evaluate the PDF? But what can I do with it after that to get a sample?

    Basically, I want to sample the radius (out of three spherical coordinates, where the other two give me a uniform direction) from a distribution such that the corresponding Cartesian coordinates all turn out to be distributed as a standard normal.
    Having this as reference: http://mathematics.livejournal.com/1144375.html
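
    Something like this sketch of what I'm after, assuming numpy's chisquare generator for the radius (the square root of a chi-square draw with 3 degrees of freedom is a chi(3) radius) rather than inverting my CDF:

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    r = np.sqrt(rng.chisquare(df=3, size=n))     # radius ~ chi(3)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)  # azimuth angle
    cos_theta = rng.uniform(-1.0, 1.0, size=n)   # a uniform direction needs
    sin_theta = np.sqrt(1.0 - cos_theta ** 2)    # cos(theta) ~ U[-1,1]

    x = r * sin_theta * np.cos(phi)
    y = r * sin_theta * np.sin(phi)
    z = r * cos_theta
    for c in (x, y, z):
        print(c.mean(), c.std())                 # each should be near 0 and 1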
     
  May 9, 2016 #6
    Could it be possible to draw u from U[0,1], then sum PDF(x)*dx, increasing x from 0 in steps of dx, until the sum exceeds u, meaning that my sample is x?
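
    In code, I imagine something like this sketch (assuming the chi(3) density from my first post as the target, plus a safety cap on x):

    Code (Python):
    import numpy as np

    # The chi(3) density from post #1 (assumed target; plug in any pdf).
    def pdf(x):
        return np.sqrt(2.0 / np.pi) * x * x * np.exp(-x * x / 2.0)

    def sample_by_accumulation(u, dx=1e-4, x_max=20.0):
        # Step x upward, accumulating pdf(x)*dx, until the running
        # Riemann sum first exceeds u (with a safety cap at x_max).
        total, x = 0.0, 0.0
        while total < u and x < x_max:
            total += pdf(x) * dx
            x += dx
        return x

    rng = np.random.default_rng(3)
    draws = [sample_by_accumulation(rng.uniform()) for _ in range(1_000)]
    print(np.mean(draws))   # chi(3) mean is about 1.596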
     
  May 9, 2016 #7

    FactChecker

    Science Advisor
    Gold Member

    If you cannot find the inverse of the CDF, a simple but possibly inefficient approach is the rejection method. Generate a uniform x over the range of the random variable and a uniform y with 0 < y < max(PDF). Reject x if PDF(x) < y; otherwise accept x. The accepted values of x will have the distribution you want. There are ways to make this more efficient (see https://en.wikipedia.org/wiki/Rejection_sampling ).
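
    A minimal sketch, assuming the chi(3) density from post #1 as the target and truncating its support to [0, 8] so the uniform x has a finite range (both the truncation point and the bound on the PDF are choices specific to this density):

    Code (Python):
    import numpy as np

    # The chi(3) density from post #1 (assumed target).
    def pdf(x):
        return np.sqrt(2.0 / np.pi) * x * x * np.exp(-x * x / 2.0)

    rng = np.random.default_rng(4)
    pdf_max = pdf(np.sqrt(2.0))    # this density peaks at x = sqrt(2)

    samples = []
    while len(samples) < 10_000:
        x = rng.uniform(0.0, 8.0)        # uniform x over the (truncated) support
        y = rng.uniform(0.0, pdf_max)    # uniform y under the bounding box
        if y < pdf(x):                   # keep x when (x, y) falls under the curve
            samples.append(x)

    print(np.mean(samples))   # chi(3) mean is about 1.596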
     
  May 10, 2016 #8

    chiro

    Science Advisor

    Sampling from U[0,1] is quite common and many pseudo-random algorithms exist to do this.

    There is a lot of open-source code implementing pseudo-random number generators for U[0,1], and I'd recommend looking through some of it.

    The inverse transform sampling method can also be applied to the multi-variable case (where the random variables are "entangled" with each other), but it involves a multi-variable transformation (like the substitution with the Jacobian in multi-variable calculus).
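
    The Box-Muller transform is a classic instance: a two-variable change of variables takes two independent U[0,1] draws to two independent standard normals. A minimal sketch:

    Code (Python):
    import numpy as np

    rng = np.random.default_rng(5)
    u1 = rng.uniform(size=10_000)
    u2 = rng.uniform(size=10_000)

    radius = np.sqrt(-2.0 * np.log(1.0 - u1))  # r^2/2 ~ Exponential(1)
    angle = 2.0 * np.pi * u2                   # uniform angle

    z1 = radius * np.cos(angle)                # two independent
    z2 = radius * np.sin(angle)                # standard normals
    print(z1.mean(), z1.std(), z2.mean(), z2.std())   # each near 0 and 1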
     
  May 10, 2016 #9
    Thanks guys, I will do some testing!
     
  May 10, 2016 #10

    Stephen Tashi

    Science Advisor

    Yes.

    The usual way to sample a random variable ##X## with cumulative distribution ##F(X)## is to pick a random value ##u## of a random variable that's uniformly distributed on [0,1]. The sample value of ##X## is ##x = F^{-1}(u)##. I think you are describing exactly that method.
     