Understanding the Accuracy and Precision of the Definition of Second

The definition of a second is based on the cesium-133 atom completing 9,192,631,770 oscillations, aligning with previous time standards. While cesium clocks are highly accurate, with some models achieving an uncertainty of one second over 60 million years, various factors contribute to potential errors, including electronic synchronization and relativistic effects from atomic motion. The accuracy of atomic clocks can be affected by environmental conditions, such as gravity, but adjustments are made to ensure consistency across different locations. Clocks at sea level tick uniformly, while those at higher altitudes require corrections due to variations in gravitational potential. Ultimately, the precision of timekeeping relies on comparing multiple clocks to identify and account for discrepancies.
darkar
The definition of a second is the time needed for a cesium-133 atom to perform 9,192,631,770 complete oscillations.

Why 9,192,631,770 complete oscillations? Is it because that count gives an interval of 1 s that is about the same as the old definition?

And the cesium clock is claimed to be accurate to a single second over 3 million years. Couldn't we just take this error into account, so that there would be no error?
 
darkar said:
The definition of a second is the time needed for a cesium-133 atom to perform 9,192,631,770 complete oscillations.

Why 9,192,631,770 complete oscillations? Is it because that count gives an interval of 1 s that is about the same as the old definition?

Yes
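
To put a number on it: the hyperfine transition frequency of cesium-133 is defined as exactly 9,192,631,770 Hz, so one oscillation lasts about 1.1e-10 s and counting that many of them gives exactly one second. A minimal sketch of the arithmetic (plain Python, the only input being the frequency fixed by the SI definition):

Code:
# Duration of one cesium-133 hyperfine oscillation, from the SI definition.
CS_FREQ_HZ = 9_192_631_770           # exact by definition of the second

period_s = 1.0 / CS_FREQ_HZ          # ~1.088e-10 s per oscillation
print(f"One oscillation lasts {period_s:.3e} s")
print(f"Counting {CS_FREQ_HZ:,} of them gives {CS_FREQ_HZ * period_s:.10f} s")

The count was chosen so that this interval matched the second already in use, which is why the number looks so arbitrary.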

And the cesium clock is claimed to be accurate to a single second over 3 million years. Couldn't we just take this error into account, so that there would be no error?

That error figure is representative. Basically, one builds not one but a whole bunch of clocks and sees how well they agree with each other. Individual clocks will exhibit both systematic and random errors. If you built a whole bunch of clocks, after 3 million years you'd expect any one of them to be about a second away from the average.
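
A toy illustration of the "build a bunch of clocks and compare them" idea (the ensemble size and the Gaussian error model are assumptions of mine, not how a standards lab actually characterizes its clocks):

Code:
import random

SIGMA_S = 1.0    # assumed: each clock drifts ~1 s (1-sigma) over 3 million years

# Simulate the accumulated error of an ensemble of independent clocks
# and see how far a typical clock sits from the ensemble average.
clocks = [random.gauss(0.0, SIGMA_S) for _ in range(100)]
mean = sum(clocks) / len(clocks)
typical = sum(abs(c - mean) for c in clocks) / len(clocks)
print(f"Typical deviation from the ensemble average: {typical:.2f} s")

Averaging over the ensemble gives a better timescale than any single clock, but it can't remove an error that is common to all of them, which is why the systematic effects below matter.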

I don't know all of the error sources in cesium clocks, some of the ones that I've heard of are:

1) the phase-locked loop losing synchronization. The design of the clock is that there is an external microwave source which is "tuned" to match the resonance frequency of the atoms. A phase-locked loop keeps the microwave source tuned to the right frequency. The atoms may be keeping good time, but the electronics has to "keep up". This depends on the design of the electronics and measurement circuitry. A key factor is the observation time - the time over which an error of 1 count can be measured by the electronics. In our best clocks, this is as high as 1 second, which means that the electronics might add or subtract a count from the actual count kept by the atoms over a period of about a second. (A rough number for the size of a one-count error is sketched after item 2 below.)

2) the motion of the cesium atoms themselves, which causes relativistic time dilation effects. This is kept to a minimum by cooling the cesium atoms to ultra-cold temperatures.
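
Two back-of-the-envelope numbers for these effects (my own arithmetic, using the 1-second observation time mentioned in item 1 and assumed temperatures for item 2, not figures taken from the NIST page):

Code:
import math

CS_FREQ_HZ = 9_192_631_770     # cycles per second, exact by definition
K_B = 1.380649e-23             # Boltzmann constant, J/K
C = 299_792_458.0              # speed of light, m/s
M_CS = 132.905 * 1.66054e-27   # mass of a cesium-133 atom, kg

# 1) One miscounted cycle over a 1-second observation time, as a fractional error.
obs_time_s = 1.0
print(f"One-count error over {obs_time_s} s: {1.0 / (CS_FREQ_HZ * obs_time_s):.1e}")

# 2) Time dilation v^2 / (2 c^2) for the rms thermal speed at a given temperature.
def dilation(temp_k):
    v_rms = math.sqrt(3 * K_B * temp_k / M_CS)
    return v_rms ** 2 / (2 * C ** 2)

for temp in (300.0, 1e-6):     # room-temperature beam vs. laser-cooled fountain
    print(f"T = {temp:g} K: time dilation ~ {dilation(temp):.1e}")

A single miscounted cycle per second corresponds to a fractional error of about 1e-10, which is why the electronics has to do much better than one count and why long averaging helps; cooling the atoms from room temperature to microkelvin temperatures shrinks the motional time dilation from roughly 3e-13 to something entirely negligible.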

Currently the main US cesium fountain clock is expected to have an uncertainty of 1 second in 60 million years, about 20x better than the figure you quoted. This has been improving as more and more sources of error are tracked down and eliminated.

http://tf.nist.gov/cesium/fountain.htm

is the source for much of this information.
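
For a sense of what those figures mean as a frequency requirement, "one second in N years" is just 1 divided by the number of seconds in N years (my own arithmetic, not a quote from the NIST page):

Code:
SECONDS_PER_YEAR = 365.25 * 86400

def fractional_uncertainty(seconds_lost, years):
    """Accumulated error expressed as a fraction of the elapsed interval."""
    return seconds_lost / (years * SECONDS_PER_YEAR)

print(f"1 s in  3 million years: {fractional_uncertainty(1, 3e6):.1e}")
print(f"1 s in 60 million years: {fractional_uncertainty(1, 60e6):.1e}")

So the figure you quoted corresponds to a fractional frequency uncertainty of about 1e-14, and the current fountain figure to about 5e-16.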
 
I got some other questions.

Well, if we define 1 s using the atomic clock, then how can there be uncertainty in its accuracy, since we are using it as the reference?

And, after going through the website, I see it mentions that gravity plays a part in the atomic clock mechanism. So if two places have different gravity, then 1 s at one place is not the same as 1 s at the other! How do they determine the second then?

Thanks
 
darkar said:
I got some other questions.

Well, if we define 1 s using the atomic clock, then how can there be uncertainty in its accuracy, since we are using it as the reference?

And, after going through the website, I see it mentions that gravity plays a part in the atomic clock mechanism. So if two places have different gravity, then 1 s at one place is not the same as 1 s at the other! How do they determine the second then?

Thanks

The standard is the vibration of the cesium atom under ideal conditions. Real clocks are limited both in the accuracy with which they measure the vibration of the atoms, and in the ideality of the conditions under which the vibrations are measured. Hence real clocks have some uncertainty.

As far as gravity goes, all clocks on the geoid (which basically means at sea level) tick at the same rate. The contributions of clocks that are significantly above sea level (like Denver) have to be, and are, adjusted for the fact that they run at a different rate. Note that it is not the force of gravity that is important, but the gravitational potential.
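
To see the size of that altitude effect, the weak-field formula Δf/f ≈ gΔh/c² with Denver's roughly 1600 m elevation (my own assumed example, not a figure from the thread) gives a rate offset of order 1e-13:

Code:
G_SURFACE = 9.80665        # m/s^2, standard gravity
C = 299_792_458.0          # m/s
SECONDS_PER_YEAR = 365.25 * 86400

def rate_offset(height_m):
    """Fractional rate difference of a clock height_m above the geoid (weak-field approx)."""
    return G_SURFACE * height_m / C ** 2

h_denver_m = 1600.0        # assumed elevation of Denver
frac = rate_offset(h_denver_m)
print(f"Fractional rate offset:  {frac:.2e}")
print(f"Accumulated over a year: {frac * SECONDS_PER_YEAR * 1e6:.1f} microseconds")

That is tiny in everyday terms but enormous compared to the uncertainties of the best clocks, which is why the altitude correction has to be applied before clocks at different elevations can contribute to the same timescale.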
 
The reason we know there is error is that if you compare two cesium clocks, they won't say exactly the same thing. Compare a lot of cesium clocks and you'll get a good idea of what the error is.
 
