Estimating joint distributions from marginals

  1. Jul 23, 2012 #1
    Suppose I have the marginal probability density functions of two random variables A and B, P(A) and P(B). Suppose I modelled P(A) and P(B) using a mixture model from some dataset D and obtained a closed-form pdf for each.

    I am interested in finding their joint density function P(A and B) and associated properties such as maxima, minima, etc.

    Ideally, the joint density would be expressed as a closed-form 2D mixture model as well, but this is not critical.

    I could perhaps do something by brute force using Bayes' theorem:

    i.e. I can approximate

    P(A and B) = P(A) P(B | A) = P(B) P(A | B)

    But eventually I need to extend this to higher dimensions, e.g. P(A and B and C and D ... etc.), and this is certainly not a trivial task.
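    For concreteness, here is a rough sketch of what I mean by the factorized approach (the pdf_A and pdf_B_given_A functions below are just stand-ins for whatever fitted mixture densities one actually has; the grid search for the mode is only illustrative):

        import numpy as np

        # Placeholder densities; in practice these would be the fitted mixture pdfs.
        def pdf_A(a):
            # standard normal density, purely illustrative
            return np.exp(-0.5 * a**2) / np.sqrt(2.0 * np.pi)

        def pdf_B_given_A(b, a):
            # illustrative conditional: B | A=a ~ N(0.5*a, 1)
            return np.exp(-0.5 * (b - 0.5 * a)**2) / np.sqrt(2.0 * np.pi)

        # Factorized joint p(a, b) = p(a) * p(b | a), evaluated on a grid
        a_grid = np.linspace(-4, 4, 401)
        b_grid = np.linspace(-4, 4, 401)
        A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
        joint = pdf_A(A) * pdf_B_given_A(B, A)

        # crude numerical search for the mode of the joint
        i, j = np.unravel_index(np.argmax(joint), joint.shape)
        print("approximate mode at a =", a_grid[i], ", b =", b_grid[j])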
     
  2. Jul 23, 2012 #2

    Stephen Tashi
    Science Advisor

    In general, you cannot determine a joint probability distribution when given only the marginal probability distributions, so if your problem can be solved, the solution depends on special circumstances or information that you haven't mentioned. To get the best advice, you should describe the situation completely.

    That [writing P(A and B) = P(A) P(B | A) = P(B) P(A | B)] isn't a mere approximation. It is a theorem.

    Are you saying that you have data that could be used to estimate the conditional probability distributions?
     
  3. Jul 24, 2012 #3
    Well, A and B are two variables that (completely) specify the state of the system. Suppose I've sampled a whole bunch of data points (a, b) such that I can generate their PDFs.

    I can also approximate P(B | A = a1) and P(A | B = b1) by taking a slice of my dataset (e.g. B = b1 ± 0.1) and counting the occurrences of A. However, this can be bad because my entire dataset may be quite small, and using only a subset of it will result in a lot of noise and error.
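    Something like this is what I have in mind for the slicing (the toy data below is only there to make the snippet self-contained; `data` would really be my sampled (a, b) pairs):

        import numpy as np

        # Toy paired samples standing in for the real dataset of (a, b) points
        rng = np.random.default_rng(0)
        a = rng.normal(size=2000)
        b = 0.5 * a + rng.normal(size=2000)
        data = np.column_stack([a, b])

        # Keep only the rows with B within +-0.1 of b1, then look at A there
        b1, half_width = 0.0, 0.1
        a_slice = data[np.abs(data[:, 1] - b1) <= half_width, 0]

        # Normalized histogram of A within the slice as a rough estimate of
        # P(A | B ~ b1); with a small dataset this slice is tiny and noisy.
        counts, edges = np.histogram(a_slice, bins=20, density=True)
        print(len(a_slice), "samples fall in the slice")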
     
  4. Jul 24, 2012 #4

    Stephen Tashi
    Science Advisor

    I think the only convenient cure for small amounts of data is to build a lot of structure into the answer - for example, you might assume the distribution you are trying to determine is from a family of distributions that are defined by only a few parameters and estimate those parameters from the data. To do this you must employ any expert knowledge that you have about the situation. For example, you may know that certain families of distributions have a plausible shape and others don't.
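    As one possible sketch of this (assuming the paired (a, b) samples are available and that scikit-learn is acceptable), a two-component bivariate Gaussian mixture is a family with only a handful of parameters and gives a closed-form joint density once fitted:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Toy paired samples standing in for the (small) real dataset
        rng = np.random.default_rng(1)
        a = rng.normal(size=500)
        b = 0.5 * a + rng.normal(scale=0.8, size=500)
        data = np.column_stack([a, b])

        # A low-parameter family: 2 components with full covariance matrices
        gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        gm.fit(data)

        # score_samples returns log density, so exp gives the fitted joint pdf
        test_points = np.array([[0.0, 0.0], [1.0, 0.5]])
        print(np.exp(gm.score_samples(test_points)))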

    You haven't described the problem clearly, but from your remarks, I conjecture that you are dealing with continuous variates. Some technicalities about your terminology: the value of a continuous probability density function does not give "the probability of" particular values. (For example, think about the uniform probability density function on the interval [0, 1/2], which has constant value 2.) However, I agree that it is often helpful to think about density functions informally that way.

    Observed frequencies of values in a sample are not probabilities (unless you are talking about randomly selecting a value from the sample itself), so you shouldn't use the P(A|B) notation for them. Of course, observed frequencies can be used as estimators of probabilities.
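    To see the density-versus-probability point numerically (a quick check using SciPy, which is assumed to be available):

        from scipy.stats import uniform

        # Uniform distribution on [0, 0.5]: the density is 2 on its support,
        # yet every probability (area under the density) is still at most 1.
        U = uniform(loc=0.0, scale=0.5)
        print(U.pdf(0.25))              # 2.0 -- a density value, not a probability
        print(U.cdf(0.3) - U.cdf(0.2))  # 0.2 -- probability of landing in [0.2, 0.3]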
     