# Negative Leap Second, Not Y2K

Staff Emeritus
If you are using a well-vetted structure, like time_t, and well-vetted routines like difftime, what you suggest will work.

The problem apparently is that people don't use difftime(time_t x, time_t y) for the difference between the times x and y, but rather roll their own.

As you say, NTP handles this fine, and it deals with clocks that are too far ahead millions of times each day with no problems. The difficulty seems to be that some code where time-keeping is mission critical (a) bypasses NTP, and (b) bypasses the well-vetted, built-in functionality described above.

anorlunda
Gold Member
For us the implementers, the algorithm is clear:
• If NTP (or GPS or whatever) tells us that we should add a leap second, the last minute before midnight should contain 61 seconds.
• If we are told that we should subtract a leap second (or add a negative leap second), the last minute before midnight should contain 59 seconds.
Ours is not to decide why, ours is to implement according to the specifications.
And it is naive views like that that cause the problems.

Many of these problems stem from the fact that most operating systems (and POSIX systems in particular) do not store time as minutes and seconds; they store time as the notional number of seconds since some arbitrary point in time. For reasons which quickly become obvious, this notional clock is not adjusted for leap seconds, so that for instance noon on 1 Jul 2015 is exactly 86,400 seconds after noon on 30 Jun 2015, and 2015-07-01T00:00:01 (stored as 1435708801) is exactly 2 seconds after 2015-06-30T23:59:59 (stored as 1435708799).

But because a leap second was declared in that interval, all the clocks on the machine have to fit 3 seconds into that 2 second period - or perhaps in some other period when their NTP daemon catches up. This means (among other things) that an event (such as a file modification) that is recorded as happening at 2015-06-30T23:59:59 may have happened after an event recorded at 2015-07-01T00:00:00.

Gold Member
If you are using a well-vetted structure, like time_t, and well-vetted routines like difftime, what you suggest will work.
difftime is not going to help: if timestamps are unexpectedly out of sequence, it will still report time moving backwards.

The difficulty seems to be that some code where time-keeping is mission critical (a) bypasses NTP, and (b) bypasses the well-vetted, built-in functionality described above.
If time-keeping is mission critical, it is absolutely essential that you bypass (any clock synchronised by) NTP and use a clock which measures (fractions of) seconds rather than movement along an arbitrary non-monotonic scale.

Some discussion can be seen at https://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp which contains the excellent summary paragraph:

Which method for correcting the system clock on leap second should you choose? It depends on your applications. If they don't require the system time to be monotonic, use ntpd or chronyd in their default configuration. There will be a backward step, but the clock will be off only for one second. If the time must be monotonic and the requirements on its accuracy are not very strict, you can use ntpd with the -x option to slew the clock instead. With chronyd you can set the leapsecmode option to slew. If there is also a requirement to keep the clocks on multiple systems close to each other, consider using chronyd configured with a smaller maximum slew rate. If that is not an option, you can run your own leap smearing NTP server with chronyd, but be careful to not mix it with other servers.
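The chronyd options named in that paragraph map onto chrony.conf directives roughly as follows (a sketch, not a recommended production configuration; the pool name is a placeholder):

```
# /etc/chrony.conf -- leap-second handling (sketch)
pool pool.ntp.org iburst

# Slew the clock across the leap second instead of stepping it backwards
leapsecmode slew

# Cap the slew rate (ppm) so multiple systems correct at a similar pace
maxslewrate 1000
```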

Staff Emeritus
Some discussion can be seen at
Thanks for sharing. Who could fail to read and digest all that? [SARCASM]

Who might be tempted to implement t2 - t1 as a simple subtraction in code, treating time no differently than any other variable?

"And it is naive views like that that cause the problems."

Well, I do not agree. And, of course, you never change the real-time clock. What you change is the offset used in reporting the local time, just as you would change the offset when going to and from DST (which, by the way, is a much greater source of confusion than leap seconds).

Gold Member
And, of course, you never change the real-time clock. What you change is the offset used in reporting the local time, just as you would change the offset when going to and from DST
Oh, if only that were true; unfortunately it is the opposite of what normally happens in mainstream operating systems, which change the "real-time" clock to match UTC. On Linux systems this is generally done by ntpd, which by default will rewind the clock by a second without even telling you.

The worst nuisance is when files and events are timestamped with the "local clock" instead of UTC. Entering and leaving DST makes the local clock skip an hour forward or jump back an hour, respectively.

Melbourne Guy
Let it go for a couple millennia and we'll fix it when the next daylight savings time change comes around.
Or, for some of us Down Under, much faster than that. Our clocks go back an hour every year, and have been doing so for many years, and none of my electronic devices - digital radios, PVR, phones, Linux / Windows PCs - that use external time servers to sync have ever had a problem.

Is this different to that process?

Staff Emeritus
https://arstechnica.com/science/202...-be-abandoned-by-2035-for-at-least-a-century/
A near-unanimous vote on Friday in Versailles, France, by parties to the International Bureau of Weights and Measures (BIPM in its native French) on Resolution 4 means that starting in 2035, the leap second, the remarkably complicated way of aligning the earth's inconsistent rotation with atomic-precision timekeeping, will see its use discontinued. Coordinated Universal Time, or UTC, will run without them until 2135. It was unclear whether any leap seconds might occur before then, though it seems unlikely.
Leap seconds R.I.P. But the wait until 2035 sounds strange. Perhaps someone can interpret the following from the same article.

The assumption is that within those 100 years, time-focused scientists (metrologists) will have found a way to synchronize time as measured by humans to time as experienced by our planet orbiting the Sun.

Gold Member
"Now that we survived the Y2K bug, we need to be ready for the next one: the Y10K bug. Let's not get caught with our pants down again."
- DaveC426913, 02001

Gold Member
But the wait until 2035 sounds strange.
I gather it is the earliest date that could be agreed - the Russians wanted later (it appears that they believe, mistakenly, that leap seconds give GLONASS an advantage) whereas nearly everyone else wanted it sooner, ideally now. The resolution allows for it to be sooner, it is to be agreed in 2026, so all we need to do is not have any leap seconds before then and common sense will prevail (in BIPM? I'll not place any bets). See the extract below from the published resolutions: https://www.bipm.org/documents/2012...2022.pdf/281f3160-fc56-3e63-dbf7-77b76500990f.
The General Conference on Weights and Measures (CGPM)...
• decides that the maximum value for the difference (UT1-UTC) will be increased in, or before, 2035,
• requests that the CIPM consult with the ITU, and other organizations that may be impacted by this decision in order to
• propose a new maximum value for the difference (UT1-UTC) that will ensure the continuity of UTC for at least a century,
• prepare a plan to implement by, or before, 2035 the proposed new maximum value for the difference (UT1-UTC),
• propose a time period for the review by the CGPM of the new maximum value following its implementation, so that it can maintain control on the applicability and acceptability of the value implemented,
• draft a resolution including these proposals for agreement at the 28th meeting of the CGPM (2026),

anorlunda
Gold Member
Leap seconds R.I.P. But the wait until 2035 sounds strange. Perhaps someone can interpret the following from the same article.
The assumption is that within those 100 years, time-focused scientists (metrologists) will have found a way to synchronize time as measured by humans to time as experienced by our planet orbiting the Sun.

It sounds like they don't have a clue about appropriate implementation, and are hoping someone will come up with a "Bright Idea" by then.

Gold Member
It sounds like they don't have a clue about appropriate implementation, and are hoping someone will come up with a "Bright Idea" by then.

I don't know where the journalist got "the assumption is that within those 100 years, time-focused scientists (metrologists) will have found a way to synchronize time as measured by humans to time as experienced by our planet orbiting the Sun" from, and the assumption is unnecessary, because leap seconds serve no useful purpose whatsoever.

It has been suggested that after 100 years UTC may have drifted by up to a minute but
1. this depends on predictions about the rotation of the Earth, about which we now realise we understand less than we thought we did
2. IT DOESN'T MATTER. If someone happens to celebrate their 121st birthday by repeating a party they held at dawn on their 21st birthday, at the same time and in the same place, they are not going to notice that sunrise is a minute later than it was 100 years ago.

Staff Emeritus
It has been suggested that after 100 years UTC may have drifted by up to a minute but
Ouch. That suggests that they might want a leap minute after a century of programmers being out of practice and having forgotten how to handle a leap.

Oh well, the Technological Singularity should happen before then, so there will be no more human programmers.

Staff Emeritus