Easy-to-compute posteriors / closure under noisy sampling

  • Context: Graduate
  • Thread starter: economicsnerd
  • Tags: closure, sampling

Discussion Overview

The discussion revolves around Bayesian statistics, specifically focusing on the posterior distributions of random variables when observing functions of those variables. Participants explore the implications of observing non-linear functions and the concept of conjugate priors in various contexts, including multivariate cases and specific examples involving independent random variables.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes a scenario involving independent random variables X and Y, with a prior distribution and a function f(X,Y), seeking conditions under which the posterior distribution on X remains within a specified class of probability measures P.
  • Another participant suggests the concept of "conjugate priors" as relevant, noting that it typically applies when X is measurable with respect to the observed Z=f(X,Y).
  • A participant expresses interest in generalizations of conjugate priors that accommodate cases where Y is included in the observation, rather than X alone.
  • Discussion includes the idea of "multivariate conjugate priors" and the potential tractability of special cases where the function f can be expressed in a certain form.
  • One participant outlines a specific scenario involving Alice and Bob, where Alice knows independent random variables and communicates one-dimensional summaries to Bob, seeking to understand how Bob's posterior on X evolves over time.
  • Questions arise regarding the implications of observing non-linear functions of the underlying random variables, with participants noting that traditional conjugate prior development assumes direct observation of X.
  • Another participant reflects on the relationship between X and Y, suggesting that knowing one could almost surely determine the other in certain examples.
  • There is a discussion about whether the function used to summarize the observations remains constant or varies over time, with a participant proposing that it might take a similar form with different parameters.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and interest in the concepts discussed, with some proposing ideas and others seeking clarification. No consensus is reached on the specific conditions or examples that would satisfy the original question posed.

Contextual Notes

Participants acknowledge the complexity of the problem, particularly in relation to the assumptions about the functions and the nature of the observations. The discussion remains open-ended regarding the applicability of conjugate priors in the scenarios described.

economicsnerd
I have a question on (I think?) Bayesian statistics.

Consider the following situation:
- P is a class of probability measures on some subset A of the real line.
- q is a probability measure on some subset B of the real line.
- f is a function on A×B.
- My prior distribution on the random variable (X,Y) is an independent draw with X~p for some p in P and Y~q.

I'm interested in cases where: if I observe the realization of f(X,Y), my posterior distribution on X is still an element of P.

~~~~~~

Ultimately, I'm after natural examples of this with P being a pretty small parametrized class (e.g. Gaussian; exponential). My model example is the case where q and every element of P are Gaussian and f is linear.
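The Gaussian-linear model case can be written out explicitly. Below is a minimal sketch in my own notation (not from the thread): X ~ N(mu, s2) and Y ~ N(nu, t2) independent, with the observation Z = aX + bY. Since (X, Z) is jointly Gaussian, conditioning on Z gives a Gaussian posterior on X, so the Gaussian class is closed under this observation.

```python
# Gaussian-linear closure: X ~ N(mu, s2), Y ~ N(nu, t2) independent,
# observe Z = a*X + b*Y.  (X, Z) is jointly Gaussian, so X | Z = z is
# Gaussian, i.e. the Gaussian class P is closed under this observation.

def posterior_on_x(mu, s2, nu, t2, a, b, z):
    """Posterior N(mean, var) of X after observing Z = a*X + b*Y = z."""
    var_z = a * a * s2 + b * b * t2      # Var(Z)
    cov_xz = a * s2                      # Cov(X, Z)
    gain = cov_xz / var_z
    mean = mu + gain * (z - (a * mu + b * nu))
    var = s2 - gain * cov_xz
    return mean, var
```

For example, with standard-normal X and Y and a = b = 1, observing z = 2 gives posterior N(1, 1/2).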

Does anybody here know any nice examples, any related general theory, or even any vaguely related buzzwords that I might search for? I'm totally oblivious here, and I don't have peers to ask.

Thanks!

EN

p.s. This is my first post here. Please forgive me if I've written too much, given an inappropriate title, or committed any other faux pas.
 
economicsnerd said:
or even in any vaguely related buzzwords that I might search?

"Conjugate priors"
 
Thanks.
Stephen Tashi said:
"Conjugate priors"
If I understand correctly, that would apply to the case where X is measurable with respect to Z=f(X,Y), i.e. where I observe the realization of X. In this special case, my question would reduce to the question, "When is P a self-conjugate class?"

I'm very interested in the case where Y enters my observation (so that X isn't observable). Do you know of a studied generalization of conjugate priors which can handle this?
 
I don't know results for a situation as general as the one you describe.

If you consider "multivariate conjugate priors" then the special cases when the variables are independent might apply. The special case when f(x,y) is expressible as f(x,y) = g(x)h(y) might be tractable. I think certain transformations of variables preserve conjugacy, but I don't know about bivariate transformations (x,y) --> (g(x,y), h(x,y)). If your bottom-line objective is more specific than the problem you describe, I suggest you reveal it.
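One tractable instance of the f(x,y) = g(x)h(y) suggestion, sketched under the assumption that the log-normal family is an acceptable class P: if X and Y are independent log-normal and the observation is Z = XY, then taking logs reduces the problem to the Gaussian-linear case, so the posterior on X is again log-normal.

```python
import math

# A tractable instance of f(x,y) = g(x)h(y) (an assumption-laden sketch,
# not from the thread): X, Y independent log-normal, observe Z = X*Y.
# Then log Z = log X + log Y is the Gaussian-linear case, so the
# posterior on X is again log-normal.

def lognormal_posterior(mu_x, s2x, mu_y, s2y, z):
    """Posterior (mean, var) of log X after observing Z = X*Y = z."""
    log_z = math.log(z)
    gain = s2x / (s2x + s2y)             # s2x + s2y = Var(log Z)
    mean = mu_x + gain * (log_z - mu_x - mu_y)
    var = s2x * (1.0 - gain)
    return mean, var
```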
 
Thanks again! That's a good idea, trying to specialize from multivariate conjugate priors.

My bottom-line objective is a tractable model I can play around with for the following story:
- Alice knows the realization of a bunch of independent random variables {X, Y_1, Y_2, Y_3,...}, but Bob doesn't.
- Every day n, Alice tells Bob X*Y_n, 6X+Y_n^2, or some other one-dimensional summary of X and Y_n.
I want to be able to cleanly write down how Bob's posterior on X changes over time.

The only example I have so far is the one where everything in sight is Gaussian, and Alice tells Bob a linear combination of X and Y_n.
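That one clean example can be coded directly. A sketch assuming X ~ N(mean0, var0), fresh independent noise Y_n ~ N(0, t2) each day, and Alice reporting Z_n = X + k_n * Y_n with known k_n > 0; Bob's posterior on X stays Gaussian, so he only has to track (mean, var).

```python
# Sequential Gaussian example: each day Bob observes Z_n = X + k_n*Y_n
# with fresh Y_n ~ N(0, t2) and known k_n > 0.  His posterior on X
# stays Gaussian, so updating it means updating two numbers.

def bob_update(mean, var, k, t2, z):
    """One day's update of Bob's Gaussian posterior from Z = X + k*Y."""
    noise = k * k * t2                  # variance contributed by k*Y_n
    gain = var / (var + noise)          # weight placed on today's report
    return mean + gain * (z - mean), var * (1.0 - gain)

# Two days of identical reports z = 2 starting from a N(0, 1) prior:
mean, var = 0.0, 1.0
for z in (2.0, 2.0):
    mean, var = bob_update(mean, var, 1.0, 1.0, z)
```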
 
economicsnerd said:
- Every day n, Alice tells Bob X*Y_n, 6X+Y_n^2, or some other one-dimensional summary of X and Y_n.

Do you know what happens in the 1-dimensional case when the observation is a non-linear function of the underlying random variable? (I don't.) The usual development of a conjugate prior assumes you observe [itex]X[/itex], not [itex]X^2[/itex] or something like that.
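As an illustration of what goes wrong (my example, not from the thread): with a N(0,1) prior on X and an exact observation of X², the posterior on X is a symmetric two-point mixture on ±√z, which leaves the Gaussian class immediately.

```python
import math

# Why non-linear observation breaks Gaussian closure: with a N(0,1)
# prior on X, observing X*X = z exactly leaves posterior mass only on
# -sqrt(z) and +sqrt(z); by symmetry of the prior each point gets
# weight 1/2 -- a two-point mixture, not a Gaussian.

def posterior_support(z):
    """Posterior of X given X**2 = z under a symmetric N(0,1) prior."""
    r = math.sqrt(z)
    return [(-r, 0.5), (r, 0.5)]
```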
 
Stephen Tashi said:
Do you know what happens in the 1-dimensional case when the observation is a non-linear function of the underlying random variable? (I don't.) The usual development of a conjugate prior assumes you observe [itex]X[/itex], not [itex]X^2[/itex] or something like that.

I don't! In every example I've tried to think through, knowing the observation and one of X, Y would (a.s.) determine the other, e.g. X+3Y, X*Y.

Thanks, ST. It's helpful to chat through this with somebody a bit.
 
Are you interested in the situation where the reported observable value is always the same function of X and Y, such as X + 3Y, or does the function change from day to day?
 
Probably it would take the same form with a different parameter, e.g. X+kY where k>0 might vary (but be known), though I don't think this should change things much.
 