Easy-to-compute posteriors / closure under noisy sampling

In summary, the conversation discusses conjugate priors and their application to cases where the observation is a function of two independent random variables. The main goal is to find natural examples of this with a small parametrized class, such as Gaussian or exponential distributions. The conversation also touches on multivariate conjugate priors and on handling situations where one of the variables is not observable, and it considers how a non-linear observation function might affect the development of a conjugate prior.
  • #1
economicsnerd
I have a question on (I think?) Bayesian statistics.

Consider the following situation:
- P is a class of probability measures on some subset A of the real line.
- q is a probability measure on some subset B of the real line.
- f is a function on A×B.
- My prior distribution on the random variable (X,Y) is an independent draw, with X ~ p for some p in P and Y ~ q.

I'm interested in cases where, if I observe the realization of f(X,Y), my posterior distribution on X is still an element of P.

~~~~~~

Ultimately, I'm after natural examples of this with P a pretty small parametrized class (e.g. Gaussian, exponential). My model example is the case where q and everything in P are Gaussian and f is linear.
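For concreteness, here's a minimal sketch of that model example (all parameter names here are illustrative): X ~ N(mu0, s0^2) a priori, Y ~ N(nu, t^2) independent, and I observe Z = aX + Y. The posterior on X is again Gaussian:

[code]
# Minimal sketch of the Gaussian-linear model example (parameter names
# are illustrative): X ~ N(mu0, s0^2), Y ~ N(nu, t^2) independent,
# observed Z = a*X + Y. The posterior on X given Z = z is Gaussian.

def gaussian_linear_posterior(mu0, s0, nu, t, a, z):
    """Posterior mean and std of X after observing z = a*X + Y."""
    prior_prec = 1.0 / s0**2       # precision of the prior on X
    like_prec = a**2 / t**2        # information Z carries about X
    post_var = 1.0 / (prior_prec + like_prec)
    post_mean = post_var * (mu0 * prior_prec + a * (z - nu) / t**2)
    return post_mean, post_var**0.5

# Prior X ~ N(0, 1), noise Y ~ N(0, 2^2), observe Z = 3X + Y = 4.5:
print(gaussian_linear_posterior(0.0, 1.0, 0.0, 2.0, 3.0, 4.5))
[/code]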

Does anybody here know any nice examples, any related general theory, or even any vaguely related buzzwords that I might search? I'm totally oblivious here, and I don't have peers to ask.

Thanks!

EN

p.s. This is my first post here. Please forgive me if I've written too much, given an inappropriate title, or committed any other faux pas.
 
  • #2
economicsnerd said:
or even any vaguely related buzzwords that I might search?

"Conjugate priors"
 
  • #3
Thanks.
Stephen Tashi said:
"Conjugate priors"
If I understand correctly, that would apply to the case where X is measurable with respect to Z=f(X,Y), i.e. where I observe the realization of X. In this special case, my question would reduce to the question, "When is P a self-conjugate class?"

I'm very interested in the case where Y enters my observation (so that X isn't observable). Do you know of a studied generalization of conjugate priors which can handle this?
 
  • #4
I don't know results for a situation as general as the one you describe.

If you consider "multivariate conjugate priors", then the special cases where the variables are independent might apply. The special case where f(x,y) is expressible as f(x,y) = g(x)h(y) might be tractable. I think certain transformations of variables preserve conjugacy, but I don't know about bivariate transformations (x,y) → (g(x,y), h(x,y)). If your bottom-line objective is more specific than the problem you describe, I suggest you reveal it.
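One separable case that does work out, as a sketch: if X and Y are lognormal and f(x,y) = xy, taking logs reduces the problem to the Gaussian-linear case, so the lognormal family is closed under this multiplicative observation (parameter names below are illustrative):

[code]
import math

# Sketch of the separable case f(x,y) = x*y with lognormal variables:
# log X ~ N(mu0, s0^2), log Y ~ N(nu, t^2) independent. Observing
# Z = X*Y gives log Z = log X + log Y, i.e. the Gaussian-linear case,
# so the posterior on log X (hence on X) stays in the same family.

def lognormal_product_posterior(mu0, s0, nu, t, z):
    """Posterior mean and std of log X after observing z = x*y."""
    prior_prec = 1.0 / s0**2
    like_prec = 1.0 / t**2
    post_var = 1.0 / (prior_prec + like_prec)
    post_mean = post_var * (mu0 * prior_prec + (math.log(z) - nu) * like_prec)
    return post_mean, post_var**0.5

print(lognormal_product_posterior(0.0, 1.0, 0.0, 0.5, 2.0))
[/code]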
 
  • #5
Thanks again! That's a good idea, trying to specialize from multivariate conjugate priors.

My bottom-line objective is a tractable model I can play around with for the following story:
- Alice knows the realization of a bunch of independent random variables {X, Y_1, Y_2, Y_3,...}, but Bob doesn't.
- Every day n, Alice tells Bob X*Y_n, 6X+Y_n^2, or some other one-dimensional summary of X and Y_n.
I want to be able to cleanly write down how Bob's posterior on X changes over time.

The only example I have so far is the one where everything in sight is Gaussian, and Alice tells Bob a linear combination of X and Y_n.
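
For reference, in that example the update has a clean closed form. Assuming X ~ N(μ_0, σ_0^2) a priori, the Y_n are i.i.d. N(0, τ^2), and Alice reports z_n = aX + bY_n, Bob's posterior after day n is N(μ_n, σ_n^2) with

[tex]\frac{1}{\sigma_n^2} = \frac{1}{\sigma_{n-1}^2} + \frac{a^2}{b^2\tau^2}, \qquad \mu_n = \sigma_n^2\left(\frac{\mu_{n-1}}{\sigma_{n-1}^2} + \frac{a z_n}{b^2\tau^2}\right).[/tex]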
 
  • #6
economicsnerd said:
- Every day n, Alice tells Bob X*Y_n, 6X+Y_n^2, or some other one-dimensional summary of X and Y_n.

Do you know what happens in the 1-dimensional case when the observation is a non-linear function of the underlying random variable? (I don't.) The usual development of a conjugate prior assumes you observe [itex] X [/itex], not [itex] X^2 [/itex] or something like that.
 
  • #7
Stephen Tashi said:
Do you know what happens in the 1-dimensional case when the observation is a non-linear function of the underlying random variable? (I don't.) The usual development of a conjugate prior assumes you observe [itex] X [/itex], not [itex] X^2 [/itex] or something like that.

I don't! In every example I've tried to think through, the observation together with one of X, Y would (a.s.) determine the other, e.g. X+3Y or X*Y.

Thanks, ST. It's helpful to chat through this with somebody a bit.
 
  • #8
Are you interested in the situation where the reported observable value is always the same function of X and Y, such as X + 3Y, or does the function change from day to day?
 
  • #9
Probably it would take the same form with a different parameter, e.g. X + kY where k > 0 might vary from day to day (but be known). I don't think this should change things much.
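
A quick simulation sketch of that day-by-day updating, assuming (as above) X ~ N(mu0, s0^2), Y_n ~ N(0, t^2) i.i.d., and reports z_n = X + k_n*Y_n with the k_n known but varying (all names illustrative):

[code]
import random

def run_days(mu0, s0, t, ks, x_true, seed=0):
    """Bob's posterior (mean, std) on X after one report per k in ks."""
    rng = random.Random(seed)
    mu, var = mu0, s0**2
    for k in ks:
        z = x_true + k * rng.gauss(0.0, t)   # Alice's report for the day
        obs_prec = 1.0 / (k * t)**2          # precision of z about X
        new_var = 1.0 / (1.0 / var + obs_prec)
        mu = new_var * (mu / var + z * obs_prec)   # conjugate update
        var = new_var
    return mu, var**0.5

# X = 1.7, unknown to Bob; known day-by-day multipliers k_n:
print(run_days(0.0, 1.0, 0.5, [1.0, 2.0, 0.5, 1.5, 1.0], x_true=1.7))
[/code]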
 

1. What are easy-to-compute posteriors?

Easy-to-compute posteriors are posterior distributions that can be written down in closed form, typically because the prior is conjugate to the likelihood: the posterior stays in the same parametrized family as the prior, so Bayesian updating reduces to recomputing a few parameters from the new evidence or data.
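
A canonical example is the Beta-Bernoulli pair, where the update is just addition (a minimal sketch):

[code]
# Beta(a, b) prior on a coin's bias p, Bernoulli likelihood: the
# posterior is again Beta, so updating is a parameter recalculation.

def beta_bernoulli_update(a, b, heads, tails):
    """Posterior Beta parameters after observing the given flip counts."""
    return a + heads, b + tails

print(beta_bernoulli_update(1, 1, heads=7, tails=3))  # Beta(8, 4)
[/code]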

2. Can easy-to-compute posteriors be used in real-world applications?

Yes, easy-to-compute posteriors have practical applications in various fields such as finance, medicine, and engineering. They are particularly useful in situations where quick and accurate estimation of probabilities is required.

3. How do easy-to-compute posteriors handle noisy sampling?

In this context, "closure under noisy sampling" means that even when the observation is a noisy function of the underlying variable, the posterior remains in the same parametrized family as the prior, so the update is still a simple parameter recalculation. When that closure fails, the exact posterior can still be approximated, for example by Markov chain Monte Carlo (MCMC) sampling, but it is no longer "easy to compute" in the conjugate sense.
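
As a concrete illustration of what noise can do to closure (a hedged sketch, not a general result): if p ~ Beta(a, b) and each Bernoulli(p) flip is observed through a channel that flips the bit with known probability eps, a single noisy observation turns the Beta prior into a two-component Beta mixture, so closure holds for the larger family of finite Beta mixtures:

[code]
import math

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_after_noisy_one(a, b, eps):
    """Mixture (weight, (a, b)) pairs for p given a single noisy W = 1."""
    # Likelihood P(W=1 | p) = (1-eps)*p + eps*(1-p); each term, times the
    # Beta(a, b) prior density, integrates to a Beta component:
    w1 = math.log(1 - eps) + log_beta(a + 1, b)   # -> Beta(a+1, b)
    w2 = math.log(eps) + log_beta(a, b + 1)       # -> Beta(a, b+1)
    m = max(w1, w2)
    z = math.exp(w1 - m) + math.exp(w2 - m)       # normalize in log space
    return [(math.exp(w1 - m) / z, (a + 1, b)),
            (math.exp(w2 - m) / z, (a, b + 1))]

print(posterior_after_noisy_one(1.0, 1.0, eps=0.1))  # weights 0.9 / 0.1
[/code]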

4. Are easy-to-compute posteriors applicable to all types of data?

Easy-to-compute posteriors can be applied to a wide range of data types, including continuous, discrete, and categorical data. However, the specific methods and techniques used to compute these posteriors may vary depending on the type of data and the underlying assumptions of the model.

5. How do easy-to-compute posteriors differ from traditional statistical methods?

Easy-to-compute posteriors differ from traditional (frequentist) statistical methods in that they follow Bayesian principles: prior knowledge is encoded as a distribution and updated as new data arrive. Their practical appeal is that, in the conjugate case, each update is as cheap as a parameter recalculation, which makes sequential settings like the one in this thread tractable.
