# Universal Oscillations?

## Main Question or Discussion Point

Hey all,
http://iopscience.iop.org/1538-3881/149/4/137/
This could easily be due to observational error, but I was wondering whether something such as this is possible in our current models of expansion and dark energy without having to fine-tune the parameters depending on time?

That was interesting (for me, anyway, interesting at the same level that string theories are interesting).
I guess they must go on from there to eliminate the likelihood of the observational errors you mentioned, and also of that irritating fact that random data can include apparent features, correspondences, and subsets that appear self-similar for no particular reason.

Chalnoth
> Hey all,
> http://iopscience.iop.org/1538-3881/149/4/137/
> This could easily be due to observational error, but I was wondering whether something such as this is possible in our current models of expansion and dark energy without having to fine-tune the parameters depending on time?
This kind of idea pops up every once in a while. I can virtually guarantee you that it's just a matter of bad statistical analysis (I can't access the refereed paper there... do they have an arXiv preprint?).

Chronos
Gold Member
Chalnoth
See arxiv.org/abs/1502.06140 for the preprint.
Wow. That's some terrible statistical analysis. From the paper:

> A 7 HHz (±1 HHz) signal appeared 272 times, or 5.4% (±0.3%) of the time. The 95% confidence interval for these trials is 5.4% (±0.6%). Next, a 7 HHz signal was introduced at 1/10 the noise peak amplitude. 1000 trials were run. A 7 HHz signal level at least twice the noise was seen 52.1% of the time. Thus the likelihood of the dominant signal at 7 HHz being real is approximately 10/1 using this filtering.
That's absurdly incorrect: taking the ratio of probabilities like that is completely invalid.

Furthermore, with this kind of analysis, the correct comparison is not to the probability of that specific signal in the simulated data, but to the probability of seeing any signal of similar magnitude. They could, for example, have used Bayesian evidence to get a handle on just how likely it is to see a pattern of similar strength in their simulated samples. Sadly, statistical inference is not unambiguous: you always have to make assumptions about the probability distributions. But there are wildly incorrect ways of doing it, and this is one of them.
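To make that point concrete, here is a toy Monte Carlo (my own sketch, not the paper's method; the bin count and threshold are arbitrary assumptions) showing why "a peak at this one frequency" and "a peak at any frequency" have very different probabilities — the look-elsewhere effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20_000
n_freqs = 50       # assumed number of independent frequency bins searched
threshold = 3.0    # a peak must exceed this many noise sigmas

# White-noise amplitudes in each frequency bin, for each simulated dataset
amps = np.abs(rng.standard_normal((n_trials, n_freqs)))

# Probability of a peak in one pre-chosen bin vs. in any bin at all
p_fixed = np.mean(amps[:, 7] > threshold)
p_any = np.mean(amps.max(axis=1) > threshold)

print(f"P(>3-sigma peak at one fixed frequency): {p_fixed:.4f}")
print(f"P(>3-sigma peak at any frequency):       {p_any:.4f}")
```

With 50 bins searched, the chance of *some* spurious peak is tens of times larger than the chance of a peak at one nominated frequency, which is why comparing against the specific-frequency probability overstates the significance.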

In addition, the actual probability distribution of the data is generally not going to be Gaussian, and purely Gaussian simulations will tend to underestimate the frequency of spurious signals. So a 2-sigma result like this is almost certainly far less significant than it appears at first glance.
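A quick illustration of that last point (my own toy numbers, nothing to do with the paper's actual data): heavy-tailed noise crosses a 3-sigma cut far more often than a Gaussian simulation predicts, so calibrating against Gaussian noise understates the false-alarm rate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
threshold = 3.0

gauss = rng.standard_normal(n)
# Student-t noise with 4 dof, rescaled to unit variance: same width, fatter tails
heavy = rng.standard_t(df=4, size=n) / np.sqrt(4 / (4 - 2))

print(f"Gaussian noise beyond 3 sigma:     {np.mean(np.abs(gauss) > threshold):.4f}")
print(f"Heavy-tailed noise beyond 3 sigma: {np.mean(np.abs(heavy) > threshold):.4f}")
```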

A case of looking for something and then finding what you expected?
Cherry-picking? Confirmation bias?
I have no idea, but I will follow this for a while and see how the discussion develops.

Okay,
Judging by your replies, I suppose something like this cannot be explained by our current models?

Chalnoth