# Probability density function

Hello! I have been having trouble understanding how the probability density function is calculated, and I need it urgently for my research. Could you start by giving me a definition of
1. Reference measure
2. That 'E' sign (it looks like an epsilon, and I sound very untechnical)
And could you define this for me:

$\Pr[X\in A]=\int_A f\,d\mu$

I never really learnt much about probability theory. Anyway, I am mainly asking how to do the calculations, i.e. I understand the left-hand side of the equation but don't know how to calculate the right-hand side.

chiro
Hello Ashwin_Kumar and welcome to the forums.

Typically, probability density functions are derived from a set of assumptions.

The first thing you have to do is to figure out whether your domain is discrete or continuous: this can be determined by figuring out what you are measuring and if that variable is continuous or discrete.

Given the above there are two ways to get a pdf: the first is an assumption based approach and the second is the empirical approach.

An assumption based approach is just that: you use assumptions and from those you derive your density function.

The first thing you need to do is to make note of the Kolmogorov Axioms for a probability space.

Once you have these, you add extra constraints to get your pdf. For example, a uniform distribution is based on the assumption that every possibility in the domain has the same chance as every other possibility. The binomial is built on the idea that each trial has only two outcomes and that every individual Bernoulli trial within the space is independent of every other one. The Poisson distribution arises as a limiting case of the binomial.
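As a sketch of the assumption-based approach (in Python; the function name and example values are just for illustration), the binomial pmf can be built directly from the independence assumption: each particular sequence of $k$ successes in $n$ trials has probability $p^k(1-p)^{n-k}$, and the binomial coefficient counts how many such sequences there are.

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for n independent Bernoulli(p) trials.

    Independence means each specific sequence with k successes
    has probability p**k * (1-p)**(n-k); comb(n, k) counts how
    many such sequences exist.
    """
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Example: probability of exactly 3 heads in 5 fair coin flips
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```

Summing the pmf over k = 0, ..., n gives 1, which is exactly the Kolmogorov normalization axiom showing up in the derived distribution.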

The empirical approach is where you carry out some kind of test or experiment to get the probabilities. For example, let's say you want to get the pdf for an unbiased die and you don't want to take the uniform assumption for granted. You can use an experiment in conjunction with the law of large numbers to find a pdf based entirely on the results of your experiment.
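Here is a minimal sketch of that idea in Python, using simulated rolls in place of a physical experiment (the function name and roll count are just illustrative):

```python
import random
from collections import Counter

def empirical_pmf(n_rolls, seed=0):
    """Estimate a die's pmf purely from observed rolls.

    By the law of large numbers, each relative frequency
    converges to the true probability as n_rolls grows.
    """
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

# With 100,000 rolls each estimate should be close to 1/6 ≈ 0.1667
print(empirical_pmf(100_000))
```

With a real die you would replace the simulated rolls with recorded outcomes; the frequency calculation is unchanged.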

The above is good when the distribution does not seem to correspond with any of the common assumptions found with distributions.

There are also distributions that are used mostly for analysis like the chi-square and the student t distribution.

For a discrete distribution, the mass function gives you probabilities directly, so no extra work needs to be done. For a continuous distribution, you need to specify an interval (or a collection of intervals) and integrate the density over it to get a probability.

With respect to your equation, if the domain is $\mathbb{R}$ and $\mu$ is the usual (Lebesgue) measure, then you just use standard Riemann-type integration. If it's some other measure, then you use the integration theory for that measure in the same kind of way.
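For the standard case on $\mathbb{R}$, a quick numerical sketch in Python (the function names and the choice of a standard normal density are just for illustration) shows how $\Pr[X\in A]=\int_A f\,d\mu$ reduces to an ordinary Riemann integral when $A$ is an interval:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) with respect to Lebesgue measure."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

def riemann_prob(f, a, b, n=100_000):
    """Approximate Pr[a <= X <= b] = integral of f over [a, b]
    with a midpoint Riemann sum of n subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Pr[-1 <= X <= 1] for a standard normal is about 0.6827
print(riemann_prob(normal_pdf, -1.0, 1.0))
```

The key point is that the abstract integral against $\mu$ becomes the familiar $\int_a^b f(x)\,dx$ once the reference measure is ordinary length on the real line.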

LCKurtz