Can computer control systems be relied upon for critical processes?

  • Thread starter: BobG
  • Tags: Computers
Summary
Computer control systems can be reliable for critical processes, but their effectiveness often depends on human oversight. Techniques like triple redundancy help mitigate sensor failures by cross-verifying readings from multiple sensors. Despite advancements, there remains a widespread reluctance to fully trust computers in decision-making roles, especially in high-stakes environments like nuclear power plants. Human operators are typically required to intervene, as they can better assess unique situations that computers may not handle well. Ultimately, while computers can outperform humans in many tasks, the need for human judgment remains crucial in critical applications.
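
To make the triple-redundancy point above concrete, here is a minimal 2-out-of-3 voting sketch in Python. Everything in it (the function name `vote_2oo3`, the tolerance, the example readings) is illustrative rather than drawn from any real control system; the one design point it tries to capture is that when no two sensors agree, the code escalates to an operator instead of guessing.

```python
# Minimal 2-out-of-3 ("triple modular redundancy") voting sketch.
# All names, readings, and the tolerance below are illustrative only.

def vote_2oo3(a: float, b: float, c: float, tolerance: float) -> float:
    """Return a value backed by at least two of three sensor readings.

    Readings that agree within `tolerance` are averaged; if no pair
    agrees, raise so a human operator (or a safe-shutdown routine)
    can take over rather than letting the computer guess.
    """
    pairs = [(a, b), (a, c), (b, c)]
    agreeing = [p for p in pairs if abs(p[0] - p[1]) <= tolerance]
    if not agreeing:
        raise ValueError("no two sensors agree; escalate to an operator")
    x, y = agreeing[0]  # a real system might also flag the outlier sensor
    return (x + y) / 2.0


# Example: sensor C has drifted, but A and B still agree.
print(vote_2oo3(100.2, 100.4, 87.0, tolerance=1.0))  # -> roughly 100.3
```
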
  • #61
jarednjames said:
And usually a far more reliable and honest output than you'd get otherwise. :wink:

That's because this alteration breaks other subprograms, such as Inhibitions and ThingsBestLeftUnsaid. :smile:
 
  • #62
DaveC426913 said:
That's because this alteration breaks other subprograms, such as Inhibitions and ThingsBestLeftUnsaid. :smile:

:smile:
 
  • #63
DaveC426913 said:
Of course you can. Consider the essence of hacking. Anything you can do to a computer could be done to a human easily enough.

Alter his programming? Sure. Give him alcohol. (With the same input, we now get different output.)

Insert a pernicious subprogram? Sure. Shower him with propaganda, changing his political values (his output may change to something covert that does not benefit the system, and may hurt it.)

It's not the same, not as easy, not as reliable... just ask the CIA and every military in the modern world... people are too variable.

Yeah, stick them with amphetamines and barbiturates, or Versed and scopolamine, and you'll get something (who knows what), and you can go 'Clockwork Orange' on them, but really it's not that simple.

In a few minutes many people here could insert a routine into these forums to cause a temporary breakdown, or gain administrative privileges. There is no equivalent for humans that isn't M.I.C.E., doesn't take time, and doesn't have uncertain outcomes.

*Bribery falls under MICE.
 
  • #64
nismaratwork said:
It's not the same, not as easy, not as reliable... just ask the CIA and every military in the modern world... people are too variable.

Yeah, stick them with amphetamines and barbiturates, or Versed and scopolamine, and you'll get something (who knows what), and you can go 'Clockwork Orange' on them, but really it's not that simple.

In a few minutes many people here could insert a routine into these forums to cause a temporary breakdown, or gain administrative privileges. There is no equivalent for humans that isn't M.I.C.E., doesn't take time, and doesn't have uncertain outcomes.

*Bribery falls under MICE.

But you're bifurcating bunnies and missing the point.

Simply put, humans are, like computers, susceptible to alterations in their expected tasks.


(I just heard on the news about a Washington Airport Tower Controller that "crashed" without a "failover system" in place. :biggrin:
http://www.suite101.com/content/air-traffic-controller-sleeps-while-jets-race-toward-white-house-a361811 )
 
Last edited by a moderator:
  • #65
DaveC426913 said:
But you're bifurcating bunnies and missing the point.

Simply put, humans are, like computers, susceptible to alterations in their expected tasks.


(I just heard on the news about a Washington Airport Tower Controller that "crashed" without a "failover system" in place. :biggrin:
http://www.suite101.com/content/air-traffic-controller-sleeps-while-jets-race-toward-white-house-a361811 )

Oh, don't get me wrong, humans fail, but consider what Stuxnet did compared to what it would take human agents to accomplish.

Hacking is a big deal: it affords precise control, or at least a range of precision options that can be covertly and rapidly implemented from a distance. A person can fall asleep (ATC), be drunk, or even be crooked, but they will show signs of it, and a good observer can catch them. It is far easier to program something malicious than it is to induce a human to commit massive crimes in situ, with no hope of escape.

edit: "bifurcating bunnies" :smile: Sorry, I forgot to acknowledge that. Ever see a show called 'Father Ted'? It's an Irish program, and one episode involves a man who is going to LITERALLY split hares...
*He doesn't; the bunnies live to terrorize a bishop.
 
Last edited by a moderator:
  • #66
nismaratwork said:
It is far easier to program something malicious than it is to induce a human to commit massive crimes in situ, with no hope of escape.
It's just a matter of scale. Same principle, different effort. Doesn't change the things that need to be in place to prevent it (like having a failover system! :eek: http://news.yahoo.com/s/ap/20110324/ap_on_bi_ge/us_airport_tower )
 
Last edited by a moderator:
  • #67
DaveC426913 said:
It's just a matter of scale. Same principle, different effort. Doesn't change the things that need to be in place to prevent it (like having a failover system! :eek: http://news.yahoo.com/s/ap/20110324/ap_on_bi_ge/us_airport_tower )

Call me impressed by scale. :-p


Still... ATCs are stupidly overworked...
 
Last edited by a moderator:
  • #68
This is not really apropos of anything currently being said, but a thought did occur to me relating to this issue of 'trust' and to BobG's original question, which was about trusting the computer to the point of making no provision for human override. All this computer technology is usually credited as a spin-off of the space race, and the point is that there was significant computer control on the Apollo missions. Doubtless BobG would point out that the missions were flown by human intelligence, but there were significant and vital systems that were computer controlled. A former boss of mine from many years ago, when we were first getting to grips with computer-controlled systems, would respond to anyone a little too insistent with the objection 'but what if it fails?' by pointing out that if one of those working on the Apollo missions had asked 'but what if it fails?', the answer would have been 'it mustn't fail'.

And, in point of fact, the issue with industrial control systems is not just one of safety; the key issue is reliability. Industrial plants usually calculate their efficiency as actual output against projected capacity, and in the West, for the most part, efficiencies well in excess of 90% are expected. If computer control systems were that unreliable, or that prone to falling over, production managers would have no compunction whatever about depositing them in the nearest skip. The major imperative for using computer control systems, of course, is reduced labour costs. But they would not have found such widespread use if they were anything like as vulnerable to failure as some contributors to this thread seem to believe.
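
For what it's worth, the efficiency figure described above is just actual output divided by projected capacity; a trivial sketch with made-up numbers:

```python
# Illustrative figures only; nothing here comes from a real plant.
projected_capacity_tonnes = 10_000   # rated output for the period
actual_output_tonnes = 9_300         # what was actually produced

efficiency = actual_output_tonnes / projected_capacity_tonnes
print(f"plant efficiency: {efficiency:.1%}")  # -> plant efficiency: 93.0%
```
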
 
