Hello all,

To begin, I'm having a lot of trouble keeping my brain in the Bayesian view and not letting it revert to a Frequentist way of thinking (not to mention trouble with unbinned MLE estimation). If I say something wrong, please let me know. My questions are as follows:

1) In the Frequentist view, the mean of the distribution of fitted estimators is the true value. If it doesn't match the true value you put into your pseudo-experiments, you have a bias, and you need to correct for it when you analyze the real experimental data. In the Bayesian view, it doesn't make sense for there to be multiple data sets. How do you test for bias in the Bayesian paradigm?

2) The Bayesian paradigm comes with a built-in way of dealing with systematic uncertainty: you can let your unobservable (nuisance) parameter range over the values within its uncertainty via Monte Carlo and look at the resulting spread in your fitted estimator. However, papers usually quote the estimator as a central value plus a statistical error plus a systematic error. Does the statistical error even exist in the Bayesian paradigm? The data are what they are, and identical experiments don't exist, so there is no way to determine the spread of your fitted estimator over many data sets that differ only through Poisson fluctuations.

I'm sure I'll have more questions, but that's it for now. I would greatly appreciate any help in understanding this. Thanks again!
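To make question 1 concrete, here is the kind of frequentist bias check I have in mind, as a toy Python sketch. The Gaussian-sigma example, sample sizes, and all numbers are made up purely for illustration; the point is just "generate many pseudo-experiments at a known truth, fit each one, and compare the mean of the estimators to the truth":

```python
import numpy as np

rng = np.random.default_rng(0)
true_sigma = 2.0    # "true" value injected into the pseudo-experiments
n_events = 10       # small sample so the bias is clearly visible
n_pseudo = 20000    # number of pseudo-experiments

# MLE of a Gaussian sigma (divides by n, not n-1), which is known to be
# biased low for small samples
estimates = []
for _ in range(n_pseudo):
    data = rng.normal(0.0, true_sigma, n_events)
    estimates.append(np.sqrt(np.mean((data - data.mean()) ** 2)))

bias = np.mean(estimates) - true_sigma
print(f"mean fitted sigma = {np.mean(estimates):.3f}, bias = {bias:.3f}")
```

If the mean of the fitted estimators comes out below `true_sigma`, that offset is the bias one would correct for before looking at real data. My question is what replaces this many-data-sets procedure on the Bayesian side.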
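And for question 2, this is roughly what I mean by propagating a systematic via Monte Carlo: hold the observed data fixed (Bayesian-style), draw the nuisance parameter from its prior many times, and look at the spread of the resulting estimator. The "detector efficiency" nuisance parameter, its Gaussian prior, and all numbers here are assumptions I invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 100                       # observed counts: the data, held fixed
eff_mean, eff_sigma = 0.9, 0.05   # assumed efficiency and its uncertainty

# Draw the nuisance parameter (efficiency) from its assumed Gaussian
# prior and recompute the rate estimate for each draw
eff = rng.normal(eff_mean, eff_sigma, 50000)
eff = eff[(eff > 0.0) & (eff < 1.0)]   # keep only physical efficiencies
rates = n_obs / eff

print(f"rate = {rates.mean():.1f} with spread {rates.std():.1f}")
```

The spread in `rates` is what I would call the systematic uncertainty. What I don't see is where a separate *statistical* error would come from in this picture, since the data never vary.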