Pattonias said:
In defense of the Chernobyl reactor design, several back-up safety measures were disabled while the idiots were running their experiment. Any improvement to prevent that aspect of the disaster would require you to make it impossible to override those safety measures. I assume that is what they changed, in addition to other improvements.
This would be an interesting trade-off - and extremely unlikely. What happens if a sensor starts malfunctioning and giving false readings?
I'm not that familiar with nuclear power facilities, but a malfunctioning sensor is always a potential threat to a computerized program. Garbage in, garbage out. A good, safe program responds to problems that can be anticipated, but leaves unique malfunctions for humans to solve.
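To make that concrete, here's a rough sketch of what I mean (Python, with made-up limits and sensor names that have nothing to do with how real plant instrumentation software is actually written): the program handles the sensor failures somebody anticipated, and anything it can't explain gets handed to the humans instead of papered over.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    value: float     # e.g. coolant temperature in deg C
    sensor_id: str

# Hypothetical plausibility limits, chosen only for illustration.
MIN_TEMP, MAX_TEMP = 0.0, 400.0   # anything outside is physically implausible
MAX_STEP = 25.0                   # max believable change between samples

def validate(current: Reading, previous: Optional[Reading]) -> str:
    """Classify a reading as 'ok' or 'anticipated_fault'."""
    # Anticipated fault: the value is outside anything physically plausible,
    # so the program can safely distrust this channel.
    if not (MIN_TEMP <= current.value <= MAX_TEMP):
        return "anticipated_fault"
    # Anticipated fault: the value jumped faster than the process possibly
    # could, which usually points to a wiring or transmitter problem.
    if previous is not None and abs(current.value - previous.value) > MAX_STEP:
        return "anticipated_fault"
    return "ok"

def reconcile(primary_status: str, redundant_status: str) -> str:
    """Decide what the automation does with two validated channels."""
    if primary_status == "ok":
        return "use_primary"
    if redundant_status == "ok":
        return "use_redundant"    # anticipated: one bad channel, fall back
    # Both channels look wrong in a way the designers didn't anticipate:
    # stop guessing and hand the problem to the operators.
    return "alarm_operators"

if __name__ == "__main__":
    prev = Reading(285.0, "T-101A")
    curr = Reading(900.0, "T-101A")   # impossible spike: a failed sensor
    backup = Reading(287.5, "T-101B")
    print(reconcile(validate(curr, prev), validate(backup, None)))  # use_redundant
```

The point is the last branch: the automation only covers the failure modes somebody thought of ahead of time, and everything else raises an alarm rather than getting "fixed" by the software.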
Interesting difference in approaches between Chernobyl and TMI. At TMI, the emphasis was on understanding the underlying problem causing the indications the operators were seeing. Instead of focusing on making the reactor safe during an anomaly, the operators focused on diagnosing and correcting the underlying problem. The fix afterwards was to 'dumb down' operations and rely more on checklists that directed actions to avoid a disaster rather than to fix the underlying problem. At Chernobyl, the attitude all along was that the operators should follow directions (via checklists or engineers) and weren't expected to have the knowledge to diagnose underlying problems.
When you're talking about humans and group dynamics (which is almost more important than the knowledge level of the operators), there are no perfect solutions. You can minimize the chance of errors, but you never get it down to zero.
I don't know enough of the details of either situation to make an authoritative comment about either solution, but I generally prefer relying on knowledge over checklists. Regardless, all too often organizations focus on only these two options, since they're easy to quantify, and miss the areas where they could truly make improvements.
It's hard to train teamwork, communication, discipline, and ownership. These qualities play a more important role in crew errors than most people give them credit for. The crew that defers to the engineers conducting an important test, not realizing that the engineer is trusting the crew to stop his test if it creates a situation the crew considers unsafe. The crew that misses the underlying problem because every single person on the crew is looking at the same spectacular symptoms of the problem - i.e. a lack of discipline has reduced an entire crew to accomplishing no more than a single person could. The crew that just doesn't understand what they're saying to each other, either because so much communication is unspoken and assumed, or because some crew members are bullied and unwilling to speak up, etc.
These aren't the type of things that are ever likely to be reduced to zero. A very good organization at least gets them close to zero, while a dysfunctional organization has chronic problems like these. They're also the type of things that would be hard to measure during a safety inspection.