Using Metropolis-Hastings for Sampling from a CDF

  • Thread starter: rabbed
  • Tags: CDF, Sampling
SUMMARY

The discussion centers on using the Metropolis-Hastings algorithm for sampling from a cumulative distribution function (CDF) defined as the integral of 2*sqrt(t/pi)*e^(-t) with respect to t from 0 to r^2/2. Participants confirm that Metropolis-Hastings is a suitable method for generating samples from a probability density function (PDF) when the inverse CDF is not available. They emphasize the importance of understanding Bayesian statistics and Markov-Chain Monte-Carlo (MCMC) techniques to implement the algorithm effectively. They also recommend starting with inverse transform sampling and exploring rejection sampling as an alternative method.

PREREQUISITES
  • Understanding of Metropolis-Hastings algorithm
  • Familiarity with Bayesian statistics and MCMC techniques
  • Knowledge of probability density functions (PDF) and cumulative distribution functions (CDF)
  • Basic programming skills, preferably in R for statistical computations
NEXT STEPS
  • Learn about inverse transform sampling and its applications
  • Study rejection sampling techniques and their efficiency
  • Explore the implementation of MCMC algorithms in R
  • Investigate the mathematical foundations of Bayesian statistics
USEFUL FOR

Statisticians, data scientists, and researchers interested in probabilistic modeling and sampling techniques, particularly those working with Bayesian methods and Monte Carlo simulations.

rabbed
Hi

If I have a CDF = integral wrt t from 0 to r^2/2 of 2*sqrt(t/pi)*e^(-t)*dt, but I want to generate samples from the PDF, would the Metropolis-Hastings algorithm be the best alternative?

Rgds
rabbed
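For reference, differentiating this CDF with the chain rule gives the target PDF explicitly (assuming the integrand ##2\sqrt{t/\pi}\,e^{-t}##, which is what makes the CDF tend to 1):

##f(r) = \frac{d}{dr}\int_0^{r^2/2} 2\sqrt{t/\pi}\,e^{-t}\,dt = 2\sqrt{\tfrac{r^2/2}{\pi}}\,e^{-r^2/2}\cdot r = \sqrt{\tfrac{2}{\pi}}\,r^2\,e^{-r^2/2},##

which is the chi distribution with 3 degrees of freedom - the distribution of the radius of a 3D standard normal vector.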
 
Hey rabbed.

The Metropolis-Hastings algorithm is a Bayesian technique for pseudo-random number generation.

You can use a number of techniques to generate random numbers from a distribution (an approach often known as Monte Carlo), and your algorithm is one of those.

Are you familiar with basic pseudo-random number generation, where you relate a distribution to the uniform and then use standard U[0,1] techniques?

https://en.wikipedia.org/wiki/Inverse_transform_sampling
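As a minimal illustration of that method, here is a sketch in R (the platform recommended later in this thread) for the exponential distribution, whose CDF has a closed-form inverse; the function name is just for illustration:

    # Inverse transform sampling for the exponential distribution:
    # F(x) = 1 - exp(-rate * x), so F^{-1}(u) = -log(1 - u) / rate.
    inverse_transform_exp <- function(n, rate = 1) {
      u <- runif(n)          # n draws from U[0,1]
      -log(1 - u) / rate     # map each draw through the inverse CDF
    }
    samples <- inverse_transform_exp(10000, rate = 2)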
 
Hi chiro

Yes, I think so. If I have an invertible, closed-form CDF of my PDF, I can create samples from the PDF by using a uniform distribution.
But in my case, I think this is not possible?

As I'm a beginner, I thought a good strategy was to learn two methods - the preferred analytic way (inverse CDF) and a numerical method for cases where the analytic way fails. Would Metropolis-Hastings be a good choice?

Rgds
rabbed
 
Metropolis-Hastings would make more sense if you studied Bayesian Statistics since it is based on those techniques.

In Bayesian Statistics, the pseudo-random generation is based on an MCMC (Markov-Chain Monte-Carlo) convergence theorem.

Basically, the idea behind it (the actual proof is quite involved and takes a bit of effort to make sense of) is that you have conditional probabilities that converge to the stationary distribution.

So a conditional probability could be represented as P(A|B), and with MCMC there is a technique for finding the marginal distribution (in this case P(A)).

https://en.wikipedia.org/wiki/Metropolis–Hastings_algorithm

The MCMC technique, once understood, gives you an idea of how the algorithm does its magic.

The Bayesian techniques are extremely useful when you have all kinds of complicated dependencies between the distributions, but they are best used when that is actually the case.
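To make this concrete, here is a minimal random-walk Metropolis sketch in R (a special case of Metropolis-Hastings with a symmetric proposal, so the proposal densities cancel in the acceptance ratio), targeting the density from the first post; the function names and tuning values are illustrative assumptions:

    # Target density (up to a normalizing constant, which cancels):
    # f(r) proportional to r^2 * exp(-r^2 / 2) for r > 0.
    target <- function(r) ifelse(r > 0, r^2 * exp(-r^2 / 2), 0)

    metropolis <- function(n, prop_sd = 1, init = 1) {
      x <- numeric(n)
      x[1] <- init
      for (i in 2:n) {
        prop <- x[i - 1] + rnorm(1, sd = prop_sd)  # symmetric random-walk proposal
        # Accept with probability min(1, f(prop) / f(current))
        if (runif(1) < target(prop) / target(x[i - 1])) {
          x[i] <- prop
        } else {
          x[i] <- x[i - 1]
        }
      }
      x
    }

    chain <- metropolis(50000)
    chain <- chain[-(1:1000)]  # discard burn-in; successive samples are correlated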

Normal pseudo-random generators are good for producing random vectors so long as you have lots of elements in the sample (basically, the more numbers you generate, the better the distribution will be reflected).

Since you are a beginner, I'll point out that pseudo-random generation is always done from an actual distribution (discrete or continuous) in the form of a mass function or a density function, and you either have the exact probabilities from an existing parametric distribution or you supply an arbitrary one yourself.

This means that if you want to implement this in code, you will have to find a way to compute the density function. I would recommend the R platform, since it has lots of code available and does a lot of the really mundane stuff for you.

It's good that you are curious. My recommendation is to start off with the basics first (inverse transform sampling), then learn some Bayesian (conditional probability) methods, and then, if you are really keen, take a look at the MCMC algorithm and its proof.

What probability, mathematics, and statistics have you done?
 
Hey

I did some probability at a very basic level ages ago, and since then I got a bachelor's degree in computer science, which involved a lot of math but no probability or statistics. I'm Swedish, so I'm not sure if the course levels are the same as yours.
So what I know at the moment is what I've picked up on the net.

I'm not sure what you mean by computing the density function. If I already have the PDF, do you mean making a function with which I can evaluate the PDF? But what can I do with it after that to get a sample?

Basically, I want to sample the radius (one of three spherical coordinates, where the other two give me a uniform direction) from a distribution such that the corresponding Cartesian coordinates all turn out to be distributed according to a standard normal distribution.
Having this as reference: http://mathematics.livejournal.com/1144375.html
 
Could it be possible to draw u from U[0,1], then sum up PDF(x)*dx, increasing x from 0 in steps of dx until the sum exceeds u, meaning that my sample is x?
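A sketch of that scheme in R (the step size dx and the chi-3 density from the first post are assumptions for illustration) might look like:

    # Accumulate pdf(x) * dx until the running sum exceeds u ~ U[0,1];
    # the x reached at that point is the sample.
    pdf_chi3 <- function(x) sqrt(2 / pi) * x^2 * exp(-x^2 / 2)

    sample_by_summation <- function(dx = 1e-3) {
      u <- runif(1)
      x <- 0
      acc <- 0
      while (acc < u) {
        x <- x + dx
        acc <- acc + pdf_chi3(x) * dx  # Riemann-sum approximation of the CDF
      }
      x
    }
    samples <- replicate(10000, sample_by_summation())

This costs O(1/dx) work per sample, so it is slow compared to inverting the CDF directly, but it becomes a correct approximation as dx shrinks.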
 
If you cannot find the inverse of the CDF, a simple but maybe inefficient way is to use the rejection method. Generate a uniform x in the range of the random variable and a uniform y with 0 < y < max(PDF). Reject x if PDF(x) < y; otherwise accept x. The accepted values of x will have the distribution you want. There are ways to make this more efficient. (See https://en.wikipedia.org/wiki/Rejection_sampling)
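A sketch of this in R, assuming the chi-3 density from the first post, a truncation of the range at x_max = 6 (the density is negligible beyond that), and the bound y_max = 0.6 (the peak is about 0.587, at x = sqrt(2)):

    pdf_chi3 <- function(x) sqrt(2 / pi) * x^2 * exp(-x^2 / 2)

    rejection_sample <- function(n, x_max = 6, y_max = 0.6) {
      out <- numeric(0)
      while (length(out) < n) {
        x <- runif(n, 0, x_max)            # uniform x over the (truncated) range
        y <- runif(n, 0, y_max)            # uniform y under the bounding box
        out <- c(out, x[y < pdf_chi3(x)])  # keep x where y falls under the curve
      }
      out[1:n]
    }
    samples <- rejection_sample(10000)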
 
Sampling from U[0,1] is quite common and many pseudo-random algorithms exist to do this.

You can find a lot of open-source implementations of U[0,1] pseudo-random number generators, and I'd recommend looking at some of them.
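For a sense of what such generators look like inside, here is a minimal linear congruential generator (LCG) in R - the simplest classic U[0,1] scheme, using the well-known Numerical Recipes constants; in real applications you would just use runif:

    # LCG: state <- (a * state + inc) mod m, normalized to [0, 1).
    lcg <- function(n, seed = 42) {
      a <- 1664525; inc <- 1013904223; m <- 2^32
      out <- numeric(n)
      state <- seed
      for (i in 1:n) {
        state <- (a * state + inc) %% m
        out[i] <- state / m
      }
      out
    }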

The inverse-transform sampling method can be applied to the multi-variable case (where the random variables are "entangled" with each other), but it involves a multi-variable transformation (like a substitution in multi-variable calculus, with the Jacobian).
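The classic example of such a transformation is Box-Muller, which turns two independent U[0,1] variables into two independent standard normals via a polar substitution - a minimal R sketch:

    # Box-Muller: r = sqrt(-2 log u1) is a chi-2 radius and theta is a
    # uniform angle; (r cos theta, r sin theta) are independent N(0,1).
    box_muller <- function(n) {
      u1 <- runif(n)
      u2 <- runif(n)
      r <- sqrt(-2 * log(u1))
      theta <- 2 * pi * u2
      c(r * cos(theta), r * sin(theta))  # 2n standard normal samples
    }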
 
Thanks guys, I will do some testing!
 
rabbed said:
Could it be possible to draw u from U[0,1], then sum up PDF(x)*dx, increasing x from 0 in steps of dx until the sum exceeds u, meaning that my sample is x?

Yes.

The usual way to sample a random variable ##X## with cumulative distribution function ##F## is to pick a random value ##u## of a random variable that's uniformly distributed on [0,1]. The sample value of ##X## is ##x = F^{-1}(u)##. I think you are describing exactly that method, with the inverse evaluated numerically.
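It may also be worth noting that the CDF in the first post is a regularized incomplete gamma function, ##F(r) = \gamma(3/2, r^2/2)/\Gamma(3/2)##, which R exposes as pgamma, so ##F^{-1}## is available without any numerical summation:

    # F(r) = pgamma(r^2 / 2, shape = 3/2), hence
    # F^{-1}(u) = sqrt(2 * qgamma(u, shape = 3/2)).
    u <- runif(10000)
    r <- sqrt(2 * qgamma(u, shape = 3/2))  # chi-3 samples via inverse transform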
 
