Easy-to-compute posteriors / closure under noisy sampling

  Jul 6, 2013 #1
    I have a question on (I think?) Bayesian statistics.

    Consider the following situation:
    -P is a class of probability measures on some subset A of the real line
    -q is a probability measure on some subset B of the real line
    -f is a function on A×B
    -My prior distribution on the random vector (X,Y) makes X and Y independent, with X~p for some p in P and Y~q.

    I'm interested in cases where, if I observe the realization of f(X,Y), my posterior distribution on X is still an element of P.

    ~~~~~~

    Ultimately, I'm after natural examples of this with P being a pretty small parametrized class (e.g. Gaussian; exponential). My model example is the case where q and every element of P are Gaussian and f is linear.
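    To make that model example concrete, here is a minimal runnable sketch of the Gaussian/linear case; the parameter values and variable names are my own, for illustration only:

    [code]
    # Gaussian/linear model example: prior X ~ N(mu0, s0sq),
    # independent noise Y ~ N(0, t2), observed Z = X + Y.
    # The posterior on X given Z = z is again Gaussian, so the
    # class P of Gaussians is closed under this observation.

    def gaussian_posterior(mu0, s0sq, t2, z):
        """Posterior mean and variance of X given Z = X + Y = z."""
        prec = 1.0 / s0sq + 1.0 / t2        # posterior precision
        var = 1.0 / prec
        mean = var * (mu0 / s0sq + z / t2)  # precision-weighted average
        return mean, var

    # Example: vague prior, fairly precise observation.
    print(gaussian_posterior(mu0=0.0, s0sq=4.0, t2=1.0, z=3.0))
    # -> (2.4, 0.8): the posterior mean is pulled toward the observation.
    [/code]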

    Does anybody here know any nice examples, any related general theory, or even any vaguely related buzzwords that I might search for? I'm totally oblivious here, and I don't have peers to ask.

    Thanks!!

    EN

    p.s. This is my first post here. Please forgive me if I've written too much, given an inappropriate title, or committed any other faux pas.
     
  Jul 6, 2013 #2

    Stephen Tashi

    Science Advisor

    "Conjugate priors"
     
  Jul 6, 2013 #3
    Thanks.
    If I understand correctly, that would apply to the case where X is measurable with respect to Z=f(X,Y), i.e. where I effectively observe the realization of X. In this special case, my question reduces to: "When is P a self-conjugate class?"

    I'm very interested in the case where Y enters my observation (so that X isn't observable). Do you know of a studied generalization of conjugate priors which can handle this?
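    For what it's worth, one textbook conjugate pair can already be phrased in the f(X,Y) form above. The construction below is mine, not from the thread: take Y ~ Uniform(0,1) and f(x,y) = 1 if y ≤ x, so that f(X,Y) is a Bernoulli(X) draw, and the Beta class is closed under the update:

    [code]
    # Sketch (my construction): X ~ Beta(a, b), Y ~ Uniform(0, 1)
    # independent, reported value z = f(X, Y) = 1 if Y <= X else 0,
    # i.e. a Bernoulli(X) draw.  The posterior on X is again Beta,
    # so the Beta class is closed under this noisy observation.

    def beta_update(a, b, z):
        """Posterior Beta parameters after observing z in {0, 1}."""
        return (a + 1, b) if z == 1 else (a, b + 1)

    a, b = 2.0, 2.0              # prior Beta(2, 2) on X
    for z in [1, 1, 0, 1]:       # four reported values
        a, b = beta_update(a, b, z)
    print(a, b)                  # -> 5.0 3.0, i.e. posterior Beta(5, 3)
    [/code]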
     
  Jul 6, 2013 #4

    Stephen Tashi

    Science Advisor

    I don't know results for a situation as general as the one you describe.

    If you consider "multivariate conjugate priors", then the special cases where the variables are independent might apply. The special case where f(x,y) is expressible as f(x,y) = g(x)h(y) might be tractable. I think certain transformations of variables preserve conjugacy, but I don't know about bivariate transformations (x,y) --> (g(x,y), h(x,y)). If your bottom-line objective is more specific than the problem you describe, I suggest you reveal it.
     
  Jul 6, 2013 #5
    Thanks again! That's a good idea, trying to specialize from multivariate conjugate priors.

    My bottom-line objective is a tractable model I can play around with for the following story:
    - Alice knows the realization of a bunch of independent random variables {X, Y_1, Y_2, Y_3,...}, but Bob doesn't.
    - Every day n, Alice tells Bob X*Y_n, 6X+Y_n^2, or some other one-dimensional summary of X and Y_n.
    I want to be able to cleanly write down how Bob's posterior on X changes over time.

    The only example I have so far is the one where everything in sight is Gaussian, and Alice tells Bob a linear combination of X and Y_n.
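    For concreteness, here is a runnable sketch of that Gaussian example, with Bob updating his posterior day by day (all parameter values are my own choices):

    [code]
    import random

    # Sequential version of the Gaussian example: X ~ N(mu, s2) under
    # Bob's prior, Y_n ~ N(0, t2) i.i.d., and on day n Alice reports
    # z_n = X + Y_n.  Bob's posterior on X stays Gaussian, so it is
    # fully summarized by a (mean, variance) pair updated each day.

    random.seed(0)
    true_x = 1.5                  # the realization Alice knows
    t2 = 4.0                      # variance of each Y_n
    mu, s2 = 0.0, 10.0            # Bob's prior on X: N(0, 10)

    for day in range(20):
        z = true_x + random.gauss(0.0, t2 ** 0.5)   # Alice's daily report
        prec = 1.0 / s2 + 1.0 / t2                  # precision update
        mu = (mu / s2 + z / t2) / prec
        s2 = 1.0 / prec

    print(mu, s2)   # mean drifts toward 1.5, variance shrinks toward 0
    [/code]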
     
  Jul 6, 2013 #6

    Stephen Tashi

    Science Advisor

    Do you know what happens in the 1-dimensional case when the observation is a non-linear function of the underlying random variable? (I don't.) The usual development of a conjugate prior assumes you observe [itex] X [/itex], not [itex] X^2 [/itex] or something like that.
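    One can at least compute such a posterior numerically. A grid sketch (my example, not a known result): X ~ N(0,1) observed through [itex] X^2 [/itex] plus a little Gaussian noise. The posterior comes out bimodal, so it has left the Gaussian class, i.e. the Gaussians are not closed under this f:

    [code]
    import math

    # Grid approximation (my example) of the posterior when the
    # observation is a nonlinear function of the underlying variable:
    # X ~ N(0, 1), observed z = X^2 + noise with noise ~ N(0, 0.1^2).

    def normal_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    grid = [i * 0.01 for i in range(-400, 401)]   # x values in [-4, 4]
    z = 2.25                                      # observed value of X^2 + noise
    post = [normal_pdf(x, 0.0, 1.0) * normal_pdf(z, x * x, 0.1) for x in grid]
    total = sum(post)
    post = [p / total for p in post]              # normalize on the grid

    mode_x = grid[post.index(max(post))]
    print(mode_x)   # ~ -1.5; the mirror mode at +1.5 carries equal mass
    [/code]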
     
  Jul 8, 2013 #7
    I don't! In every example I've tried to think through, knowing one of X, Y would (a.s.) determine the other given the observation, e.g. X+3Y or X*Y.

    Thanks, ST. It's helpful to chat through this with somebody a bit.
     
  Jul 9, 2013 #8

    Stephen Tashi

    Science Advisor

    Are you interested in the situation where the reported observable value is always the same function of X and Y, such as X + 3Y, or does the function change from day to day?
     
  Jul 9, 2013 #9
    Probably it would take the same form with a different parameter, e.g. X+kY where k>0 might vary (but be known), though I don't think this should change things much.
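    In the Gaussian case, at least, a known day-varying k just rescales the noise variance in the update, so closure does carry over. A minimal sketch (my parameter choices):

    [code]
    # Gaussian case when Alice reports z_n = X + k_n * Y_n with k_n > 0
    # known but varying by day.  Since k_n * Y_n ~ N(0, k_n^2 * t2), the
    # update is the usual one with the noise variance rescaled by k_n^2.

    def update(mu, s2, z, k, t2):
        noise_var = (k ** 2) * t2
        prec = 1.0 / s2 + 1.0 / noise_var
        return (mu / s2 + z / noise_var) / prec, 1.0 / prec

    mu, s2 = 0.0, 10.0                       # Bob's prior N(0, 10)
    for z, k in [(2.1, 1.0), (1.2, 0.5), (3.0, 2.0)]:
        mu, s2 = update(mu, s2, z, k, t2=1.0)
    print(mu, s2)                            # still a single Gaussian state
    [/code]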
     