# Instability of an atomic clock

Niles
Hi

When I read "popular" articles on atomic clocks, journalists often write that the clock loses 1 second in XX million/billion years. But when I look at professional papers, people talk about a fractional instability of e.g. 10^-14 τ^(-1/2), and the authors never use the former way of characterizing the clocks. How does one go from one "representation" to the other?

Gold Member
You can't. At least not properly.
The correct way to measure the stability of an oscillator is to measure the Allan deviation; this comes in the form of a plot with the fractional stability on the y-axis and the integration time on the x-axis. Hence, there is no single number that can be used to characterize an oscillator/clock.
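As a sketch of what that plot is built from: the (non-overlapping) Allan deviation of a record of fractional-frequency samples can be computed by averaging the data in blocks of length m and taking half the mean squared difference of adjacent block averages. This is a minimal illustration, not any particular lab's analysis code:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    at an averaging time of m basic sample intervals (tau = m * tau0)."""
    n = len(y) // m                               # number of complete blocks
    y_bar = y[:n * m].reshape(n, m).mean(axis=1)  # block averages
    # Allan variance: half the mean squared difference of adjacent averages
    avar = 0.5 * np.mean(np.diff(y_bar) ** 2)
    return np.sqrt(avar)
```

For white frequency noise this falls off as τ^(-1/2), which is exactly where the "10^-14 τ^(-1/2)" form in the papers comes from: the prefactor is the deviation at τ = 1 s.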

What you CAN do, of course, is take the stability at the integration time used to steer the actual clock, and then calculate what this means in terms of, say, the change per million years.
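To make that conversion concrete, here is the arithmetic with a hypothetical fractional frequency error of 10^-16 (an illustrative number, not any particular clock's specification). A constant fractional offset accumulates linearly with elapsed time:

```python
# Hypothetical numbers for illustration, not any particular clock's spec
fractional_offset = 1e-16               # assumed fractional frequency error
seconds_per_year = 365.25 * 24 * 3600   # ~3.156e7 s

# A constant fractional offset accumulates linearly with elapsed time:
error_per_million_years = fractional_offset * seconds_per_year * 1e6
years_to_lose_one_second = 1.0 / (fractional_offset * seconds_per_year)

print(error_per_million_years)    # ~3.2e-3 s accumulated per million years
print(years_to_lose_one_second)   # ~3.2e8, i.e. "1 s in ~300 million years"
```

This is how the "loses 1 second in X years" headline numbers are generated: pick one point on the stability curve and extrapolate linearly.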

Niles
> You can't. At least not properly.
> The correct way to measure the stability of an oscillator is to measure the Allan deviation; this comes in the form of a plot with the fractional stability on the y-axis and the integration time on the x-axis. Hence, there is no single number that can be used to characterize an oscillator/clock.
>
> What you CAN do, of course, is take the stability at the integration time used to steer the actual clock, and then calculate what this means in terms of, say, the change per million years.

Thanks. I tried to do the last thing you suggest, and I just set τ = 1 s as an example. So I get the fractional instability 10^-14 for a 1 s measurement time. Does this mean that the clock loses 10^-16 % of a second during 1 second?

Best,
Niles.

Mentor
10^-14 = 10^-12 % (since 10^-2 = 1%). While accuracy usually increases with measurement time, there are technical and physical limits too, and those determine the minimal deviation.
Note that this deviation is not fixed - otherwise you could simply correct the error (and it would not be an uncertainty).
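The percentage conversion is just a factor of 10^2:

```python
fractional_instability = 1e-14
as_percent = fractional_instability * 100   # 1% = 10^-2, so multiply by 10^2
# 10^-14 expressed as a percentage is 10^-12 %
```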