What is the risk of infrequent events and outdated GPS systems?

In summary, recent news reports that the GPS week-number rollover on April 6 could have side effects for older GPS systems. While newer receivers have been programmed to accommodate the epoch change, older ones may see timing data jump by roughly 19.7 years and emit corrupt navigation data. The issue is not limited to one day, and it can also affect other systems that rely on GPS for timing and control functions. The article's proposed remedy is to widen the internal week counter from 10 to 13 bits, but the thread argues this would make the problem worse: shorter intervals between rollover events would keep manufacturers and users vigilant and limit the consequences of any failure. The discussion also touches on the public perception of risk, citing Ted Koppel's book "Lights Out" as an example of how a risk the public has never experienced invites panic and demagoguery.
  • #1
anorlunda
From today's news:
https://arstechnica.com/information...vent-on-april-6-could-have-some-side-effects/

Most newer GPS receivers will shrug off the rollover because they’ve been programmed to accommodate the epoch change. But older systems won’t—and this may prove to have some interesting side-effects, as timing data suddenly jumps by 19.7 years. The clock change won’t directly affect location calculations. But if GPS receivers don’t properly account for the rollover, the time tags in the location data could corrupt navigation data in other ways.

But navigation isn't the only concern. There are many systems that use the time for other purposes—cellular networks, electrical utilities, and other industrial systems use GPS receivers for timing and control functions. Since many of these systems have extremely long lifecycles, they’re the ones most likely to have not been updated.

The rollover issue isn’t limited to one day. Because of the way some manufacturers accounted for the rollover date in the past—by hard-coding a date correction into receivers’ firmware—their systems might fail at some arbitrary future date. Some have already succumbed: in July of 2017, an older NovAtel GPS system failed, and while the company issued a notice months earlier warning users to upgrade firmware, many remained ignorant of the notice until it happened. Motorola OncoreUT+ systems and some receivers using Trimble’s GPS engines also have failed over the past three years for similar reasons.
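To make the mechanics concrete: the legacy navigation message carries only a 10-bit week number, so a receiver sees the week modulo 1024 and must guess how many rollovers have elapsed, typically by comparing against some built-in reference date. A minimal Python sketch of that disambiguation (the function and pivot logic are illustrative, not any vendor's actual firmware):

```python
from datetime import datetime

GPS_EPOCH = datetime(1980, 1, 6)  # start of GPS time
WEEK_ROLLOVER = 1024              # 10-bit week counter wraps here

def resolve_gps_week(broadcast_week, reference_date):
    """Disambiguate a 10-bit GPS week number.

    Picks the rollover count that places the broadcast week nearest a
    trusted reference date (e.g. a firmware build date). Hypothetical
    helper for illustration only.
    """
    elapsed_weeks = (reference_date - GPS_EPOCH).days // 7
    # Round to the nearest multiple of 1024 relative to the reference.
    rollovers = (elapsed_weeks - broadcast_week + WEEK_ROLLOVER // 2) // WEEK_ROLLOVER
    return broadcast_week + rollovers * WEEK_ROLLOVER

# A receiver whose reference date is stale by more than ~half a rollover
# period picks the wrong count, and its clock jumps back 1024 weeks:
resolve_gps_week(0, datetime(2019, 4, 7))  # current reference: week 2048
resolve_gps_week(0, datetime(2005, 1, 1))  # stale reference: week 1024
```

This is exactly the failure mode described above: the location math still works, but the resolved week (and hence the time tags) lands 19.7 years in the past.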

There is an entire class of risks that I could label "infrequent events". The mother of them all was Y2K. (Y2K's bad consequences were avoided via massive publicity, money, and remedial efforts.) What they have in common is that the very long time between events causes manufacturers, consumers, everyone to grow slack in vigilance. The irony is that the longer the time between events, the greater the risk. More dependable = more risky. That sounds contradictory.

I wrote before on PF that in some cases we should intervene to increase the resilience of industry and consumers. https://www.physicsforums.com/threads/staged-blackouts.922146/ One of the comments on that thread is that the same thinking should apply to GPS.

IoT (the Internet of Things) makes the problem worse. We own, or will own, smart devices which we never expect to update to the latest software revision: light bulbs, smart wall plugs, appliances, automobiles ... Indeed, we might buy them in a store, but the manufacturers are nameless, faceless people who market wholesale goods on alibaba.com. There is almost no hope of contacting those manufacturers in the future.

The article says that the remedy for GPS is to increase the internal week counter from 10 bits to 13 bits. I argue that will make the problem worse! They should shorten it, so that date rollover events happen frequently enough that we are all confident no large-scale negative consequences will occur. Longer intervals allow more nameless, faceless manufacturers to come and go, and to be forgotten, before the consequences of their lack of vigilance become evident.
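For scale, the interval between rollovers is just the counter's capacity in weeks. A quick back-of-envelope calculation (13 bits is the figure the article gives for the widened counter):

```python
def rollover_interval_years(counter_bits):
    """Years between week-counter rollovers for a given counter width."""
    weeks = 2 ** counter_bits   # counter wraps after this many weeks
    return weeks * 7 / 365.25   # convert weeks to (average) years

print(f"10-bit counter: {rollover_interval_years(10):.1f} years")  # ~19.6
print(f"13-bit counter: {rollover_interval_years(13):.1f} years")  # ~157.0
```

At 157 years between events, no receiver, manufacturer, or engineer in service today would ever see a rollover twice, which is exactly the vigilance problem described above.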
 
  • #2
That is an interesting take on risk.
One thing to consider is that a failure which is sufficiently delayed is more likely to be in a system that will become obsolete and replaced before the failure occurs.
Another thing to consider is that restoring from old software backups can reintroduce bugs that fail later. My job during Y2K was to change all the modify dates on backup tapes so that they could be retrieved correctly if needed. My programs that converted all the backup tapes to usable tapes kept me swapping tapes around the clock for weeks.
 
  • #3
Another factor I failed to mention in the OP is the public perception of risk as opposed to the objective measure of risk. More frequent events serve to bolster public confidence that we can deal with it.

If people are suddenly made aware of a risk previously unknown to them, they are vulnerable to panic and demagoguery. An example is Ted Koppel's book Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath. Koppel used his famous name to make it a best seller. The book basically says that we are screwed. Hackers will utterly destroy the power grid and bring down civilization; build your underground bunker and stock up on machine gun ammo for the post-apocalypse world. The public can't judge the technical merits of such a topic, but because they never experienced a national blackout, they are prepared to fear the worst.
 
  • #4
@anorlunda has it right: we need shorter rollover intervals or perfect systems. Since we can't expect perfection, we should expect maintenance cycles such as this.

With respect to the national grid, NOVA showed how a power generator and its controller could be hacked, allowing unnamed actors to manipulate the operating frequency and drive the device into an unstable resonant frequency that destroys the generator.
 
  • #5
jedishrfu said:
With respect to the national grid, NOVA showed how a power generator and its controller could be hacked, allowing unnamed actors to manipulate the operating frequency and drive the device into an unstable resonant frequency that destroys the generator.
I am familiar with that demo. Do you see what a huge stretch that is to extrapolate it to regional or national blackouts?

The power grid is designed to survive multiple simultaneous failures. It gets demonstrated with every major weather event. The 1998 ice storm in Canada and the US knocked down about 300,000 poles and isolated dozens of generators. Yet the blackout did not extend beyond the ice storm's boundaries.
 
  • #6
This is true, but a computer attack from a state actor could be markedly different if you consider the Morris worm or the IBM online Christmas card fiasco. We are protected from that mayhem, but there are other zero-day exploits to come.
 
  • #8
But being on constant high alert creates a sort of awareness fatigue, which creates its own issues.
 
  • #9
jedishrfu said:
This is true, but a computer attack from a state actor could be markedly different if you consider the Morris worm or the IBM online Christmas card fiasco. We are protected from that mayhem, but there are other zero-day exploits to come.
Stuxnet...
 
  • #10
Beyond Stuxnet, that was so yesterday... :-)
 

1. What is meant by the "infrequent events" class of risk?

It refers to risks tied to events that recur on very long cycles, such as date rollovers, counter wraparounds, and epoch changes like Y2K. The events themselves are predictable, but they happen so rarely that vigilance lapses between occurrences.

2. How is this class different from other types of risk?

With most risks, greater reliability means less danger. With infrequent events the opposite holds: the longer the interval between occurrences, the more manufacturers disappear, firmware goes unmaintained, and institutional memory fades, so the eventual event does more damage.

3. What are some examples of infrequent-event risks?

Y2K is the canonical example. The GPS 10-bit week-number rollover, which occurs roughly every 19.7 years, is another, as are receivers whose hard-coded date corrections fail at arbitrary future dates.

4. How is the risk of infrequent events assessed?

Since the dates of rollovers and epoch changes are known in advance, assessment is largely an inventory problem: identifying deployed systems with long lifecycles, auditing their firmware, and tracking vendor advisories.

5. How can individuals and organizations mitigate these risks?

By applying firmware updates, heeding manufacturer notices, and testing systems against known rollover dates before they arrive. The thread also argues for a design-level mitigation: keep rollover intervals short enough that each generation of engineers and users experiences one.
