Can computer control systems be relied upon for critical processes?

  • Thread starter: BobG
  • Tags: Computers
AI Thread Summary
Computer control systems can be reliable for critical processes, but their effectiveness often depends on human oversight. Techniques like triple redundancy help mitigate sensor failures by cross-verifying readings from multiple sensors. Despite advancements, there remains a widespread reluctance to fully trust computers in decision-making roles, especially in high-stakes environments like nuclear power plants. Human operators are typically required to intervene, as they can better assess unique situations that computers may not handle well. Ultimately, while computers can outperform humans in many tasks, the need for human judgment remains crucial in critical applications.
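As a concrete illustration of the triple-redundancy technique mentioned in the summary, here is a minimal median-voter sketch in Python; the sensor values are invented for the example:

```python
def vote(a, b, c):
    """Median of three redundant sensor readings: a single faulty
    sensor is outvoted by the other two."""
    return sorted((a, b, c))[1]

# Hypothetical reactor-loop temperatures in kelvin; sensor 2 has failed high.
print(vote(301.2, 874.0, 300.9))  # -> 301.2, the bad reading is outvoted
```

Real voters also flag the disagreeing channel for maintenance, but the median alone is what masks the fault.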
  • #51
DaveC426913 said:
Put a third way: if you put a less reliable system (1% failure) in charge of a more reliable system (.01% failure), then the whole system is only as reliable as the less reliable system (1% failure). So no, not necessarily safer.

Argument for argument's sake.

Closed loop control with a man on the override button is distinctly safer than manual control of a complex system. Automated control gives the man less opportunity to make errors because he has fewer actions to perform.

Risk = probability * number of events * outcome.

If he has to do 1000 operations manually with a 1% error rate = 10 errors.
Or 100 operations with the computer handling the rest = 1 error.
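Spelling that arithmetic out (using the post's hypothetical 1% error rate and operation counts):

```python
def expected_errors(operations, error_rate):
    # Risk model from the post: probability * number of events
    return operations * error_rate

print(expected_errors(1000, 0.01))  # fully manual: 10.0 expected errors
print(expected_errors(100, 0.01))   # computer handles the rest: 1.0 expected error
```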
 
  • #52
xxChrisxx said:
Argument for argument's sake.

Closed loop control with a man on the override button is distinctly safer than manual control of a complex system. Automated control gives the man less opportunity to make errors because he has fewer actions to perform.

Risk = probability * number of events * outcome.

If he has to do 1000 operations manually with a 1% error rate = 10 errors.
Or 100 operations with the computer handling the rest = 1 error.

Chernobyl.
 
  • #53
xxChrisxx said:
Argument for argument's sake.

Closed loop control with a man on the override button is distinctly safer than manual control of a complex system. Automated control gives the man less opportunity to make errors because he has fewer actions to perform.

Risk = probability * number of events * outcome.

If he has to do 1000 operations manually with a 1% error rate = 10 errors.
Or 100 operations with the computer handling the rest = 1 error.
Agreed. There's an interplay. I was just pointing out that it's not as ideal as a human overriding a device only when the device announces it is failing.
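A toy sketch of that caveat, with entirely made-up failure modes and rates: a policy of overriding only when the device announces a fault never even sees the silent failures.

```python
import random

def device_step():
    """Hypothetical device: some faults are announced, others are silent."""
    r = random.random()
    if r < 0.01:
        return "FAULT", None   # announced failure: alarm raised, human can override
    if r < 0.02:
        return "OK", 999.0     # silent failure: bad output, no alarm at all
    return "OK", 42.0          # normal operation

alarms = silent_misses = 0
for _ in range(100_000):
    status, value = device_step()
    if status == "FAULT":
        alarms += 1            # override-on-announcement catches these
    elif value != 42.0:
        silent_misses += 1     # these never trigger the override
print(alarms, silent_misses)   # roughly equal: half the faults are invisible
```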
 
  • #54
nismaratwork said:
Chernobyl.

Nothing in the world is truly idiot-proof. We should also make a distinction between error and blunder.
 
  • #55
DaveC426913 said:
This meme that 'a computer can only do what its programmer tells it to do' is fallacious. It is ignorant of the phenomenon of emergent behaviour.

Not arguing with you, but could you give some examples before I rush off into the wide expanse that is Google? (I'm really interested in this sort of thing.)
 
  • #56
jarednjames said:
Not arguing with you, but could you give some examples before I rush off into the wide expanse that is Google? (I'm really interested in this sort of thing.)
Um.

Can Conway's Game of Life be trusted to generate patternless iterations that do not lend themselves to analysis and comparison to life?

Does the programmer, when he writes the half dozen or so lines it requires to invoke CGoL, get held accountable for the behaviour of patterns like the gun (http://en.wikipedia.org/wiki/Gun_(cellular_automaton))?

Is it meaningful to say that this computer program is "only doing what its programmer told it to do"?


If so, then the principle can be scaled up to cosmic proportions. The universe exhibits predictable and trustworthy behaviour at all times because it is only doing what the laws of physics allow it to do.


The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that "design" and "organization" can spontaneously emerge in the absence of a designer. For example, philosopher and cognitive scientist Daniel Dennett has used the analogue of Conway's Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws governing our own universe.
http://en.wikipedia.org/wiki/Conway's_Game_of_Life#Origins
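For anyone who wants to try it, the whole engine really is only a handful of lines. This is a generic minimal sketch (not any particular implementation), seeded with a glider:

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: the rules above say nothing about "movement", yet every
# 4 generations this five-cell pattern reappears shifted one cell
# diagonally -- behaviour the programmer never explicitly wrote.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, translated
```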
 
  • #57
xxChrisxx said:
Nothing in the world is truly idiot-proof. We should also make a distinction between error and blunder.

Fair enough, but blunder is in our nature as well, and once we stop trusting our systems (a form of DaveC's example) we're in trouble. It's not a simple thing, even with mechanical safeties.

@DaveC: there is another thing: you can't yet hack people, but you can hack a computer. That is something that undermines all of this.
 
  • #58
DaveC426913 said:
Um.

Can Conway's Game of Life be trusted to generate patternless iterations that do not lend themselves to analysis and comparison to life?

Does the programmer, when he writes the half dozen or so lines it requires to invoke CGoL, get held accountable for the behaviour of patterns like the gun (http://en.wikipedia.org/wiki/Gun_(cellular_automaton))?

Is it meaningful to say that this computer program is "only doing what its programmer told it to do"?


If so, then the principle can be scaled up to cosmic proportions. The universe exhibits predictable and trustworthy behaviour at all times because it is only doing what the laws of physics allow it to do.


http://en.wikipedia.org/wiki/Conway's_Game_of_Life#Origins

Thanks.

@Nismar: People can be bribed.
 
  • #59
nismaratwork said:
@DaveC: there is another thing: you can't yet hack people...
Of course you can. Consider the essence of hacking. Anything you can do to a computer could be done to a human easily enough.

Alter his programming? Sure. Give him alcohol. (With the same input, we now get different output.)

Insert a pernicious subprogram? Sure. Shower him with propaganda, changing his political values (his output may change to something covert that does not benefit the system, and may hurt it.)
 
  • #60
DaveC426913 said:
(With the same input, we now get different output.)

And usually a far more reliable and honest output than you'd get otherwise. :wink:
 
  • #61
jarednjames said:
And usually a far more reliable and honest output than you'd get otherwise. :wink:

That's because this alteration breaks other subprograms, such as Inhibitions and ThingsBestLeftUnsaid. :smile:
 
  • #62
DaveC426913 said:
That's because this alteration breaks other subprograms, such as Inhibitions and ThingsBestLeftUnsaid. :smile:

:smile:
 
  • #63
DaveC426913 said:
Of course you can. Consider the essence of hacking. Anything you can do to a computer could be done to a human easily enough.

Alter his programming? Sure. Give him alcohol. (With the same input, we now get different output.)

Insert a pernicious subprogram? Sure. Shower him with propaganda, changing his political values (his output may change to something covert that does not benefit the system, and may hurt it.)

It's not the same, not as easy, not as reliable... just ask the CIA and every military in the modern world... people are too variable.

Yeah, stick them with amphetamines and barbiturates, or Versed and scopolamine, and you'll get something (who knows what), and you can go 'Clockwork Orange' on them, but really it's not that simple.

In a few minutes many people here could insert a routine into these forums to cause a temporary breakdown, or gain administrative privileges. There is no equivalent for humans that isn't M.I.C.E. (money, ideology, coercion, ego), and that takes time and has uncertain outcomes.

*bribery falls under MICE
 
  • #64
nismaratwork said:
It's not the same, not as easy, not as reliable... just ask the CIA and every military in the modern world... people are too variable.

Yeah, stick them with amphetamines and barbiturates, or Versed and scopolamine, and you'll get something (who knows what), and you can go 'Clockwork Orange' on them, but really it's not that simple.

In a few minutes many people here could insert a routine into these forums to cause a temporary breakdown, or gain administrative privileges. There is no equivalent for humans that isn't M.I.C.E. (money, ideology, coercion, ego), and that takes time and has uncertain outcomes.

*bribery falls under MICE

But you're bifurcating bunnies and missing the point.

Simply put, humans are, like computers, susceptible to alterations in their expected tasks.


(I just heard on the news about a Washington Airport Tower Controller that "crashed" without a "failover system" in place. :biggrin:
http://www.suite101.com/content/air-traffic-controller-sleeps-while-jets-race-toward-white-house-a361811 )
 
  • #65
DaveC426913 said:
But you're bifurcating bunnies and missing the point.

Simply put, humans are, like computers, susceptible to alterations in their expected tasks.


(I just heard on the news about a Washington Airport Tower Controller that "crashed" without a "failover system" in place. :biggrin:
http://www.suite101.com/content/air-traffic-controller-sleeps-while-jets-race-toward-white-house-a361811 )

Oh, don't get me wrong, humans fail, but consider what Stuxnet did compared to what it would take human agents to accomplish.

Hacking is a big deal: it affords precise control, or at least a range of precision options that can be covertly and rapidly implemented from a distance. A person can fall asleep (ATC), or be drunk, or even crooked, but they will show signs of this and a good observer can catch it. It is far easier to program something malicious than it is to induce a human to commit massive crimes in situ, with no hope of escape.

edit: "bifurcating bunnies" :smile: Sorry I forgot to aknowledge that. Ever see a show called 'Father Ted'? Irish program, and one episode involves a man who is going to LITERALLY split hares...
*he doesn't, the bunnies live to terrorize a bishop
 
  • #66
nismaratwork said:
It is far easier to program something malicious than it is to induce a human to commit massive crimes in situ, with no hope of escape.
It's just a matter of scale. Same principle, different effort. Doesn't change the things that need to be in place to prevent it (like having a failover system! http://news.yahoo.com/s/ap/20110324/ap_on_bi_ge/us_airport_tower :eek:).
 
  • #67
DaveC426913 said:
It's just a matter of scale. Same principle, different effort. Doesn't change the things that need to be in place to prevent it (like having a failover system! http://news.yahoo.com/s/ap/20110324/ap_on_bi_ge/us_airport_tower :eek:).

Call me impressed by scale. :-p


Still... ATCs are stupidly overworked...
 
  • #68
This is not really apropos of anything currently being said, but a thought relating to this topic did occur to me, concerning this issue of 'trust' and BobG's original question, which was about trusting the computer to the point of making no provision for human override. All this computer technology is usually described as a spin-off of the space race, and there was significant computer control on the Apollo missions. Doubtless BobG would point out that the missions were flown by human intelligence, but there were significant and vital systems that were computer controlled. A former boss of mine from many years ago, when we were first getting to grips with computer controlled systems, would answer anyone a little too insistent with the objection 'but what if it fails?' by pointing out that if one of those working on the Apollo missions had said 'but what if it fails?', the answer would have been 'it mustn't fail'.

And, in point of fact, the issue with industrial control systems is not just one of safety. The key issue really is reliability. Industrial plants usually calculate their efficiency in terms of actual output against projected capacity, and in the West, for the most part, efficiencies well in excess of 90% are expected. If computer control systems were that unreliable, or that prone to falling over, production managers would have no compunction whatever about depositing them in the nearest skip. The major imperative for using computer control systems, of course, is reduced labour costs. But they would not have found such widespread use if they were anything like as vulnerable to failure as some contributors to this thread seem to believe they are.
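As a rough, entirely hypothetical illustration of how little room that efficiency target leaves for control-system failures:

```python
hours_per_year = 24 * 365   # continuous process, hypothetical plant
target = 0.90               # actual output / projected capacity

loss_budget = hours_per_year * (1 - target)
print(loss_budget)          # 876.0 hours/year for ALL losses combined

# If control-system outages may consume, say, a tenth of that budget,
# the control system must be up ~99% of the time before any other
# source of lost production is even counted.
control_share = loss_budget * 0.1
print(1 - control_share / hours_per_year)  # -> 0.99
```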
 