How to fit a given function to blurred data points?

Summary:
Fitting the parameters of a function family to data represented by probability distributions, rather than by precise coordinates, is a problem that can be approached with Bayesian statistics. A fully Bayesian solution multiplies the likelihood of the parameters by their prior distributions to obtain the posterior probability distribution. Classical statistics typically maximizes the likelihood instead; in either framework the error structure need not be Gaussian. The discussion notes potential computational challenges, especially with many or highly correlated parameters, but the cost generally scales linearly with the number of data points. Hierarchical modeling and error-in-variables approaches are useful strategies for this kind of problem.
sceptic said:
Is there an elaborated theory or method for fitting the parameters of a function family to data given as probability distributions of the data points, rather than as precisely known, error-free coordinates? I think this is a very general problem, so I hope it has already been solved.

Important:

I would like a general method that works with any kind of probability distribution around the data points, not just a Gaussian, which can be described by a single error value such as its variance.

I would like to use all of the information that is available, i.e. a fully Bayesian solution without unnecessary estimation.
 
In classical statistics, you would set up the likelihood for your parameters and maximize it.
Bayesian statistics is similar: you multiply the likelihood by the prior distributions of the parameters to obtain the posterior probability distribution of the parameters.
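In symbols (a generic sketch, not tied to any particular error model): if ##\theta## are the parameters of the function family, ##x_1,\dots,x_n## the observations, and ##p(x_i \mid \theta)## the probability of observation ##i## under the model, then
$$p(\theta \mid x_1,\dots,x_n) \;\propto\; p(\theta)\,\prod_{i=1}^{n} p(x_i \mid \theta).$$
Classical maximum likelihood keeps only the product ##\prod_i p(x_i \mid \theta)## and maximizes it, in practice by minimizing the negative log-likelihood ##-\sum_i \ln p(x_i \mid \theta)##.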
 
Yes, I know the principles. But I need a practical example with equations, maybe a book chapter or a paper that treats this kind of problem. What keywords should I search for? The distributions can all be of the same type, just not Gaussian. Is it practically computable at all? Or does the problem, in general, explode exponentially with the number of data-point distributions?
 
"Likelihood" and "maximum likelihood" are good keywords (in practice one usually minimizes the negative log-likelihood). The point of maximal likelihood is usually found with iterative approximations.
sceptic said:
Maybe in general for lots of data point distributions the problem can exponentially explode, or can't?
No, it typically scales linearly with the number of data points (because you have to evaluate the likelihood for each data point). Many free parameters can make the problem time-consuming, especially if they are highly correlated.
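To make the "one likelihood term per data point" statement concrete, here is a minimal numerical sketch (my own illustration, not from the thread; the straight-line model and the Laplace error distributions are arbitrary assumptions) of a maximum-likelihood fit where every point carries its own non-Gaussian error distribution:

Code:
# Minimal sketch: maximum-likelihood fit of y = a*x + b where each y_i has its
# own (here Laplace-distributed, i.e. non-Gaussian) error distribution.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Synthetic data: true line y = 2x + 1, Laplace noise with per-point scale b_i.
x = np.linspace(0.0, 10.0, 50)
b_i = rng.uniform(0.2, 1.0, size=x.size)      # per-point error scales
y = 2.0 * x + 1.0 + rng.laplace(scale=b_i)

def neg_log_likelihood(params):
    a, b = params
    mu = a * x + b
    # One log-likelihood term per data point: cost grows linearly with n.
    return -np.sum(stats.laplace.logpdf(y, loc=mu, scale=b_i))

result = optimize.minimize(neg_log_likelihood, x0=[1.0, 0.0], method="Nelder-Mead")
print(result.x)    # estimated (a, b)

Only the logpdf term is specific to the error model; any distribution whose density you can evaluate at each point works the same way.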
 
There is a concept of an "error-in-variables" model that deals with this kind of thing, although I'd probably just take a hierarchical approach. As an example, suppose that we have observed points ##(x_1,\dots,x_n)## from a normal distribution ##N(\mu,\tau^2)## which we assume are actually measured with normally distributed error ##N(0,\sigma^2_i)##. If ##x_i## has true value ##\mu_i## (which is unobserved), then we have ##x_i \sim N(\mu_i, \sigma^2_i)##, so the full model for the mean is
$$\mu_i = \mu + \epsilon_i, \qquad \text{where } \epsilon_i \sim N(0, \tau^2)$$
or, combining the two levels,
$$x_i = \mu + e_i + \epsilon_i, \qquad \text{where } e_i \sim N(0, \sigma^2_i)$$
Basically, we just model the error at two different levels.
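One standard consequence (not spelled out in the post, but it follows directly when both levels are normal) is that the latent ##\mu_i## can be integrated out analytically:
$$x_i \sim N(\mu,\; \tau^2 + \sigma^2_i),$$
which makes explicit that the two error levels simply add in variance.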

A similar regression model might take the form

$$y_i = \alpha + \beta \mu_i + \epsilon_i$$

Note that you can assume any kind of error structure you want; it doesn't have to be normal. The same general approach would still apply.
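Here is a minimal numerical sketch of that error-in-variables regression (my own illustration, with made-up data, normal errors at both levels, weak priors, and a MAP fit rather than full posterior sampling):

Code:
# Sketch: error-in-variables regression y_i = alpha + beta*mu_i + eps_i with
# x_i = mu_i + e_i, e_i ~ N(0, sigma_i^2) known. Fit by maximizing the joint
# posterior (MAP) over (alpha, beta, log_tau, mu_1..mu_n).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n = 40

# Synthetic data: latent true x-values mu_i, noisy observations x_i, and y_i.
mu_true = rng.uniform(0.0, 10.0, n)
sigma_i = rng.uniform(0.3, 0.8, n)                  # known x measurement errors
x = mu_true + rng.normal(0.0, sigma_i)
y = 1.5 + 0.7 * mu_true + rng.normal(0.0, 0.5, n)   # alpha=1.5, beta=0.7, tau=0.5

def neg_log_posterior(params):
    alpha, beta, log_tau = params[:3]
    mu = params[3:]                                  # latent true x-values
    tau = np.exp(log_tau)
    lp = np.sum(stats.norm.logpdf(x, loc=mu, scale=sigma_i))              # x_i | mu_i
    lp += np.sum(stats.norm.logpdf(y, loc=alpha + beta * mu, scale=tau))  # y_i | mu_i
    # Weak Gaussian priors on alpha and beta; flat prior on log_tau.
    lp += stats.norm.logpdf(alpha, 0.0, 10.0) + stats.norm.logpdf(beta, 0.0, 10.0)
    return -lp

start = np.concatenate(([0.0, 0.0, 0.0], x))         # start the mu_i at the observed x_i
result = optimize.minimize(neg_log_posterior, start, method="L-BFGS-B")
alpha_hat, beta_hat = result.x[0], result.x[1]
tau_hat = np.exp(result.x[2])
print(alpha_hat, beta_hat, tau_hat)

For a genuinely fully Bayesian answer you would sample this joint posterior instead of maximizing it (e.g. with an MCMC tool such as Stan or PyMC), and you could replace either normal logpdf with whatever per-point distribution actually describes the "blurred" data; the structure of the sum does not change.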
 