The steps of Chernobyl and effects of radiation

  • Thread starter the_force
Yes - but one of the problems with Chernobyl is that the operators were improvising
because they didn't have procedures.
My above points relate to the TMI event, not Chernobyl.

Chernobyl was entirely different. There were several extreme examples of non-conservative decisions by the operators, all made in a reactor which was very intolerant of mistakes (positive void coefficient, graphite-tipped rods, no spatial control). This is a pretty solid example of why 'cowboying' at the controls is a bad idea.
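To see why a positive void coefficient makes a reactor so unforgiving, here is a deliberately crude toy iteration in Python (not an RBMK model; the feedback gain and the linear power-to-void relation are invented purely to show the effect of the sign): extra boiling adds reactivity, which raises power, which boils more coolant, whereas a negative coefficient damps the same perturbation.

# Toy illustration only -- NOT a reactor model. The gain and relations are made up
# purely to show the effect of the sign of the void coefficient of reactivity.
def run(void_coefficient, steps=10):
    """Iterate a small power perturbation under void-reactivity feedback."""
    power = 1.05                                        # start 5% above an equilibrium power of 1.0
    for _ in range(steps):
        reactivity = void_coefficient * (power - 1.0)   # extra voiding adds (or removes) reactivity
        power *= 1.0 + reactivity                       # crude per-step power response
    return power

print("positive void coefficient:", round(run(+0.5), 2))   # perturbation grows -> runaway
print("negative void coefficient:", round(run(-0.5), 2))   # perturbation dies out -> self-stabilizing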


And yes Paulanddiw, I'm Homer's ass double.
 

Morbius

Science Advisor
Dearly Missed
My above points relate to the TMI event, not Chernobyl.
Homer,

TMI was another case where the accident was initiated by "unthinking" operators.

When I was a student at MIT, Professor Kemeny, who led the investigation of the TMI
accident, gave a seminar about the findings of his commission.

At one point in the seminar, Professor Kemeny stated that when he toured the TMI control
room, he asked the operators for a "steam table" - a book that gives the equation of state
for water at various conditions and tells whether the water is in the liquid or gaseous phase or
a mixture at the specified conditions.

It took the operators about 30 - 45 minutes to find a steam table!!!!

Professor Kemeny stated that the operators at TMI were absolutely CLUELESS about
what part of the phase space of the equation of state for water the reactor coolant was in.

He asked if anyone had read the chronology of the accident in the newspaper. He said
that if you had, then you would know exactly how clueless the operators were.

I knew EXACTLY what he meant. I remember reading the account of the TMI accident
in my morning copy of the Boston Globe. They listed a chronology of events by time.
At one point, the chronology stated that the operators had stabilized the reactor
at a pressure "X" and temperature "Y".

I wondered how far from boiling conditions they were when they stabilized the reactor.
After all, that's what you are trying to prevent in a PWR like TMI; you don't want the
coolant to boil. It creates regions of vapor that won't cool the fuel properly; and hence
the reactor is subject to meltdown.

I reached for my copy of Keenan & Keyes' Steam Tables sitting on the filing cabinet that
adjoined my office desk. When I checked the specified conditions of pressure "X" and
temperature "Y" - I found that those conditions were right ON the saturation line.

The operators hadn't "stabilized" the reactor at all. The pressure and temperature weren't
changing because the coolant was boiling - the very condition that the operators should have
been trying to prevent. [ Prof. Kemeny confirmed that was EXACTLY what he was
talking about ].
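For readers who want to see what that check amounts to, here is a minimal Python sketch of the kind of lookup described above. The tabulated points are rounded values from standard steam tables, and the reading at the bottom is purely hypothetical - the actual TMI values are left as "X" and "Y" above.

# Minimal sketch of a saturation-line check -- not a substitute for real steam tables.
# The table holds rounded values from standard steam tables; the example reading
# at the bottom is hypothetical, not the actual TMI "X" and "Y".
SAT_TABLE = [  # (pressure [MPa], saturation temperature [deg C])
    (1.0, 179.9), (2.0, 212.4), (4.0, 250.4), (6.0, 275.6),
    (8.0, 295.0), (10.0, 311.0), (12.0, 324.7), (14.0, 336.7), (16.0, 347.4),
]

def t_sat(p_mpa):
    """Crudely interpolate the saturation temperature at pressure p_mpa."""
    if p_mpa <= SAT_TABLE[0][0]:
        return SAT_TABLE[0][1]
    for (p1, t1), (p2, t2) in zip(SAT_TABLE, SAT_TABLE[1:]):
        if p_mpa <= p2:
            return t1 + (t2 - t1) * (p_mpa - p1) / (p2 - p1)
    return SAT_TABLE[-1][1]

def subcooling_margin(p_mpa, t_coolant_c):
    """Degrees C below saturation; zero or negative means the coolant is boiling."""
    return t_sat(p_mpa) - t_coolant_c

# A hypothetical "stabilized" reading that sits right on the saturation line:
p, t = 8.0, 295.0
print(f"Subcooling margin at {p} MPa, {t} C: {subcooling_margin(p, t):.1f} C")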

It was 90 minutes after the onset of the accident, and the operators STILL had NOT
consulted a steam table when they noted vibration of the primary coolant pumps.

The vibration of the pumps was an important CLUE - the pumps were vibrating because
they were pumping a two-phase mixture of steam and water. This two-phase mixture
was cooling the core and preventing a meltdown.

However, the operators didn't realize the significance of this clue - they didn't ask why
the pumps might be vibrating. The operators' response was to shut down the pumps
without thinking about the implications of what that shutdown might do.

It was when the pumps were shut down that the fate of the reactor was sealed. Without
the circulation of even the two-phase steam / water mixture - the zirconium cladding
tubes melted, the reactor core was destroyed, and the accident became the
severe event it was.

The Three Mile Island accident was caused by operators that didn't THINK!!!

Prior to TMI, the philosophy was that all one had to do was to give the operators the
information they needed on their displays, as well as total command of the system,
including the ability to over-ride any of the automatic systems; and the operators would
do the right thing.

That proved to be a faulty philosophy. We had operators that were reacting, and taking
actions WITHOUT being cognizant of the current state of the system or what the
consequences would be of their actions. They just DID; they didn't THINK!!!!

Yes - there were some mechanical failures at TMI; a sticking pressure relief valve, and
some faulty indicators. However, Professor Kemeny stated that the accident was
recoverable even with these mechanical problems.

According to Professor Kemeny, what turned what should have been a minor accident
into a major calamity were the actions of operators that acted WITHOUT THINKING!!!

Dr. Gregory Greenman
Physicist
 

Morbius

Science Advisor
Dearly Missed
It is very easy in hindsight to see what could have been done. The wording in the previous posts saying the operators did not think or were stupid comes across as very harsh and implies they were actually incompetent. It is indeed true that at certain times their actions caused or contributed to accidents, but I don't think this warrants calling them generally unthinking or labelling them as stupid.
curie,

It was clear from the seminar by Professor Kemeny that I attended that he DID
consider them UNTHINKING!!! He didn't go as far as to say they were incompetent -
but I would.

As far as the safety of a PWR goes, "Job 1" has to be to ensure that there is liquid water
covering the core at all times. For the operators to be unaware of the current
state of the coolant because they hadn't bothered to consult a steam table is
INCOMPETENT, at least in my book.

They were acting in unplanned (escalating into extreme) conditions & no matter how much training one has in so-called "dynamic risk assessment", emergency response, whatever the latest fad is, etc., under those conditions nothing can be guaranteed, including even the operators' own recognition that nothing is guaranteed.
It's their JOB to be ready to respond to unplanned conditions. I expect a nuclear power
plant operator to be on par with an airline pilot.

Contrast the performance of the operators at Three Mile Island with the performance of
the flight crew of United Airlines Flight 232 that crashed at Sioux City in 1989:

http://en.wikipedia.org/wiki/United_Airlines_Flight_232

Due to the catastrophic failure of the fan disk in the #2 engine in the tail, the resultant
shrapnel severed lines in all the hydraulic systems that control the aircraft. The flight
crew was left with only the throttles for engines #1 and #3.

They were able to improvise a way of controlling and navigating the aircraft to the nearest
airport; and they were ALMOST totally successful at landing. The aircraft rolled right
just prior to touchdown and the right wing tip hit the ground. The subsequent events
resulted in the breakup of the aircraft.

However, the heroic actions of the flight crew resulted in 62% of those aboard surviving;
when this situation could easily have resulted in the loss of all aboard.

That's the type of performance that I would expect from nuclear power plant operators.

Unfortunately, we can't expect airline flight crew class performance from nuclear power
plant operators; and the plants have been modified to prevent operators from making
egregious mistakes. We now rely more on the engineered control systems than we
do on thinking operators.

Dr. Gregory Greenman
Physicist
 
Morbius, try Googling "pilot error"... it turns out your example might be a bit biased.

American Airlines Flight 587 lost its tail and plummeted into a New York City neighborhood in November 2001, killing 265 people, because the co-pilot improperly used the rudder to try to steady the plane, federal safety investigators ruled Tuesday.

Your words:
Unfortunately, we can't expect airline flight crew class performance from nuclear power plant operators;
Look friend, it's people like you that incite groundless fears about the nuclear power industry. Operators are tested extensively, several times per year, and if they fail, their license is taken away. Unlike your jet pilot example, operators are not encouraged to do all they can to keep "it up in the air", as usually the safest and most conservative thing to do is shut down and then figure out the problem. Operator heroics tend to lead them into a knowledge-based decision-making process outside of procedures, and that's when mistakes happen. Example: Chernobyl!!

Morbius, I'm sure you are a borderline genius on the engineering and science of an NPP, but you are about 20 years behind the times in your thinking about operating a nuclear power plant. Your expectation that the core's safety should lie in the "improvisational skills" of an operator demonstrates the same flawed approach as the industry had back in the day of Three Mile Island. Thankfully the industry has come a long way.
 

Morbius

Science Advisor
Dearly Missed
Morbius, I'm sure you are a borderline genius on the engineering and science of an NPP, but you are about 20 years behind the times in your thinking about operating a nuclear power plant. Your expectation that the core's safety should lie in the "improvisational skills" of an operator demonstrates the same flawed approach as the industry had back in the day of Three Mile Island. Thankfully the industry has come a long way.
Homer,

WRONG!!! WRONG!!! WRONG!!!

Where did you EVER get the idea that I wanted to put my trust in the operators????

When I was at Argonne; I was on the team that designed the Integral Fast Reactor
to be "inherently safe"!!!

Where did you get the idea that I was making the operators the first line of defense???

The Kemeny seminar I attended had the effect on me that I would NOT TRUST a
nuclear power plant operator any farther than I could throw him/her. That's why I
worked on inherently safe reactors.

I'm NOT 20 years behind the times - I was at the FOREFRONT of the inherently safe
movement.

Just because I expect more from the operators - does NOT mean I want to rely on them.

You TOTALLY MISREAD my philosophy of nuclear safety.

I don't want to rely on operators for safety - but I expect that they would be better than
what they are. They do NOT have the professionalism that one expects from airline
pilots.

Besides, the DC-10 is a very safe aircraft; but when an unpredicted catastrophic
failure befell one, it was good that United had very good pilots aboard.

McDonnell-Douglas wasn't counting on the pilots either.

Dr. Gregory Greenman
Physicist
 

Morbius

Science Advisor
Dearly Missed
Morbius, try Googling "pilot error"... it turns out your example might be a bit biased.
Homer,

I'm well familiar with the existence of pilot error. In fact, the majority of aircraft accidents
are due to pilot error, not mechanical failure.

Nuclear power plants in the past were much less dependent on the skills of the operator
for their safety than airliners - and over the last 20 years, the industry has moved further in that
same direction.

The pilot of an aircraft has much more control over the system, as well as having a
more "dynamic" job in the sense that although flights are fairly routine, they are never
as routine as the operation of a power plant.

However, is it unreasonable for the operators of a nuclear power plant to understand
the equation of state of water? The operators at Three Mile Island showed less
understanding of the system than I would expect from someone who operates
the steam heating plant for a large building.

Designers of nuclear power plants do all they can to design safety into the system.

What's wrong with having trained operators that are allies in that mission; as opposed
to the fools at Three Mile Island who didn't know enough to look at a steam table when
the pressure and temperature of the system "stabilized"?

Dr. Gregory Greenman
Physicist
 

russ_watters

Mentor
Ironically, you guys are arguing against each other using examples that cover both sides of the equation - which I guess is fitting since you are really on the same team anyway! For the plane crashes as well as the nuclear plant accidents, they are almost always a combination of operator error, procedures, and design issues. Both are so safe that they virtually require multiple simultaneous failures for accidents to occur.

-The Airbus with the tail ripped off was a combination of bad procedures/training and bad design (either the tail should have been stronger or the fly-by-wire control envelope tighter).
-The Sioux City crash was caused by a design failure, but Morbius, it was kept aloft mostly by an instructor who was not part of the flight crew, but happened to be aboard at the time. Had the plane frisbee-d in and killed everyone, no one would have blamed the pilots, the situation was so far from what should have been survivable.

The bottom line - for both planes and nuclear plants - is that so much engineering and training goes into their construction and operation that for major accidents to occur usually requires a combination of major, simultaneous failures in operation, training (procedures), design, etc.

So please take a deep breath here and remember that we're all on the same team.
 

Morbius

Science Advisor
Dearly Missed
The Sioux City crash was caused by a design failure, but Morbius, it was kept aloft mostly by an instructor who was not part of the flight crew, but happened to be aboard at the time. Had the plane frisbee-d in and killed everyone, no one would have blamed the pilots, the situation was so far from what should have been survivable.
Russ,

Both Captain Haynes, the pilot in command, and Captain Fitch, the training instructor who
handled the throttles, share the credit for limiting the loss of life; it really took ALL 3
members of the flight crew working as a team to save that craft. It would be erroneous
to single one out.

I agree that the situation was grave; and by "all rights" everyone on that flight "should"
be dead. However, even in the face of such seemingly impossible odds - the hydraulic
systems gone, the aircraft controllable only by varying engine thrust, so that it had to
maneuver through a series of turns - the pilots were able to limit the loss of life in this
all but impossible situation.

That's why it contrasts nicely with the Three Mile Island accident, which by rights should
have been a non-event. At Three Mile Island, the automatic systems were doing the
right things to mitigate the accident; until the operators overrode the automatic systems.

When the operators overrode the automatic systems; they did so without a good
understanding of what was transpiring in the system. Their cardinal rule is to keep
the coolant from boiling - yet they never consulted the tables that would have told
them how close to the "abyss" they were.

So in one case, we have an almost impossible problem that was greatly mitigated by the
fine work of the flight crew. The other was a recoverable problem that was aggravated
into a full-scale catastrophe by operators who were clueless as to what they were doing.

One is a very good example of how operators can help and save the day; the other
is how operators can BE the problem.

Dr. Gregory Greenman
Physicist
 
curie
curie,

.... I expect a nuclear power
plant operator to be on par with an airline pilot....
I don't know exactly what training etc. airline pilots need to have, but in my country, in both the power & research reactors, the actual operators, ie the people who drive the desk, are effectively little more than specialised techs. They do not have to be the intellectual equivalent of someone with a technical degree & while perhaps some of them do go much further than a vague understanding of the physics, I would expect most of them to just absorb enough from the training they receive in order to keep their operating licences. That is not to diss them - ultimately it's the same as any job, most people drift into a regime where they do enough to get by. This is why it is the job of the people with the numerous degrees etc. to design the plant such that unusual events are minimised or, when they do occur, human response is minimised as far as possible to reduce these "unthinking" responses.

Maybe this is a terminology thing - do you mean something to the effect of a Physics Supervisor rather than an actual bog-standard operator?
 

Morbius

Science Advisor
Dearly Missed
Maybe this is a terminology thing - do you mean something to the effect of a Physics Supervisor rather than an actual bog-standard operator?
curie,

There should be an SRO - Senior Reactor Operator - on duty; and an SRO should understand the
physics and operation of the reactor, in my opinion.

Perhaps some ROs are just "technicians" - in the sense that they just implement procedures that are
written down for them - "In case of 'A' - do 'B'". However, if we are going to depend on the operators
to also be able to handle unique situations which crop up that haven't been anticipated in the
procedures - then you need someone that really understands the reactor.

Dr. Gregory Greenman
Physicist
 
curie
curie,

There should be an SRO - Senior Reactor Operator - on duty; and an SRO should understand the
physics and operation of the reactor, in my opinion.
Yes, in my opinion too. However, I've seen in many facilities that operators progress to the senior role and that to do this, they just have to tick more boxes, not necessarily have the increased intellectual capacity and understanding that such a role should merit. Indeed, if they had that in the first place they probably wouldn't have started as operators. The people with the real understanding needed tend to be in different roles not concerned primarily with the ins & outs of day-to-day running, ie not requiring them to actually be in the control room. These people tend to be on call for emergencies & of course, by the time they get to a useful position, many decisions & actions may already have been taken that lead the situation down a particular path. This is played out time & time again in the emergency exercises that facilities are required to run.

I find the notable exception to be in n-powered marine craft, where the defence staffing hierarchy is more favourable, and of course an on-call team cannot be relied on!
 

Morbius

Science Advisor
Dearly Missed
I find the notable exception to be in n-powered marine craft, where the defence staffing hierarchy is more favourable, and of course an on-call team cannot be relied on!
curie,

What I find puzzling is that I understand that a goodly fraction of nuclear power plant operators are
former nuclear propulsion operators in the US Navy.

I do know that the officers on board nuclear powered ships in the US Navy are trained in the
US Navy's "Nuclear Power School".

Perhaps the difference is that on a ship, the knowledgeable personnel are only minutes away from
the reactor controls. After all, how long does it take to get from the forward torpedo room to the
engineering spaces on a nuclear sub? Not very long. Even on a big Nimitz-class nuclear carrier;
it can't take very long for any of the engineering officers to arrive in the reactor control center.

So on a naval vessel, knowledgeable people can be on the scene quickly before the situation gets
out of hand.

Even so; in the case of Three Mile Island; the "point of no return" in the accident was when the
operators shut down the main coolant pumps. Up to that time, the accident could have been
reversed. The shutdown of the main coolant pumps happened 90 minutes into the incident.

Dr. Gregory Greenman
Physicist
 
I think I should chime in here and comment on a few points about the modern-day training involved. Where we are, the training is broken into 3 basic parts. The first deals with the general science involved with the plant: mechanical equipment, Rx physics, thermodynamics, etc. This phase is about half a year and we are tested internally and by the regulator. The second phase is the plant-specific material. This is a fairly huge amount of material that, as you might imagine, is extremely in-depth and frankly not easy at all. Again, testing is done internally and by the regulator. The third mainly has to do with putting it all together in the simulator, where events are fired at the candidates and their responses are scrutinized internally and by the regulator. The whole process generally takes about 3 years.

Candidates selected for the program go through a selection process based on aptitude test results, past performance, interview, knowledge tests, etc. Although much care is put into the selection, there is only about a 50% success rate through the program, which speaks to its difficulty.

Many licensed engineers take the same program, and failure rates are equivalent. Many engineers comment that the training is more intense than their university ever was.

Once training is complete, the licensed operator has just begun his career full of testing, as continual training and testing occurs several times per year, and each time the operator’s license is on the line.

Make no mistake; the licensed operator is the person you want at the controls.
 

Morbius

Science Advisor
Dearly Missed
Make no mistake; the licensed operator is the person you want at the controls.
Homer,

Unfortunately, the two biggest accidents - Three Mile Island and Chernobyl - were situations
where the operators actually precipitated the accident.

As per a seminar I attended given by the lead investigator of the Three Mile Island accident,
Prof. Kemeny; the TMI operators utterly failed to consider the information they had about the
thermodynamic state of the coolant, and how far away from boiling they were. The TMI
operators erroneously thought they had "stabilized" the reactor because coolant temperatures
stopped changing. In reality, the coolant temperature stopped changing because the coolant
was boiling.

So here we had a dire situation in that the coolant was boiling away approaching melting
conditions; and the operators were clueless as to the dire circumstances. They then
precipitated the final plunge to meltdown by shutting down the coolant pumps.

The operators at Chernobyl appeared to be unaware of the problems of trying to operate a
reactor in the middle of a Xenon transient.
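For readers unfamiliar with the xenon transient: after a large power reduction, the iodine-135 already in the core keeps decaying into the strong neutron absorber xenon-135, while the burn-off of xenon by the now-reduced flux drops, so the poison concentration rises for hours before it decays away. Below is a minimal Python sketch of the standard iodine/xenon balance equations, using representative textbook constants and an illustrative flux rather than anything RBMK-specific.

import math

# Representative textbook constants (approximate, not plant-specific).
LAM_I = math.log(2) / (6.57 * 3600)   # I-135 decay constant, 1/s
LAM_X = math.log(2) / (9.14 * 3600)   # Xe-135 decay constant, 1/s
GAMMA_I, GAMMA_X = 0.064, 0.0025      # effective fission yields of I-135 and Xe-135
SIGMA_X = 2.6e-18                     # Xe-135 absorption cross-section, cm^2 (~2.6e6 barns)
SIGMA_F = 0.05                        # illustrative macroscopic fission cross-section, 1/cm

def step(iodine, xenon, flux, dt):
    """One explicit-Euler step of the usual dI/dt and dX/dt balance equations."""
    fission_rate = SIGMA_F * flux
    d_iodine = GAMMA_I * fission_rate - LAM_I * iodine
    d_xenon = (GAMMA_X * fission_rate + LAM_I * iodine
               - LAM_X * xenon - SIGMA_X * flux * xenon)
    return iodine + d_iodine * dt, xenon + d_xenon * dt

phi_full, dt = 3e13, 60.0             # illustrative full-power flux (n/cm^2/s) and 1-min step

iodine = xenon = 0.0
for _ in range(int(3 * 24 * 3600 / dt)):     # ~3 days at full power -> near-equilibrium xenon
    iodine, xenon = step(iodine, xenon, phi_full, dt)
xenon_eq = xenon

peak = xenon
for _ in range(int(24 * 3600 / dt)):         # 24 h after cutting power to 10%
    iodine, xenon = step(iodine, xenon, 0.1 * phi_full, dt)
    peak = max(peak, xenon)

print(f"Xe-135 peaks at roughly {peak / xenon_eq:.2f}x its full-power level after the cut")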

Both reactors required more knowledgeable people at the controls than those available,
because in both cases, training notwithstanding, the operators were woefully inadequate,
and not up to dealing with incidents which should have been non-events save for the
poor quality of the personnel at the controls.

Dr. Gregory Greenman
Physicist
 
"Operators responded by reducing the flow of replacement water. Their training told them that the pressurizer water level was the only dependable indication of the amount of cooling water in the system. Because the pressuriser level was increasing, they thought the reactor system was too full of water. Their training told them to do all they could to keep the pressuriser from filling with water. If it filled, they could not control pressure in the cooling system and it might rupture."

http://www.uic.com.au/nip48.htm

There was a huge change in 'common knowledge' after this, in that it is now commonly known to PWR operators that a high pressurizer level in these scenarios is caused by coolant voiding.
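That "level goes up while inventory goes down" behaviour is just volume bookkeeping in a nearly fixed-volume primary: steam voids forming in the loops displace liquid into the pressurizer, so the indicated level rises even while mass is escaping through the stuck-open relief valve. A minimal Python sketch with purely illustrative volumes (not TMI plant data):

# Illustrative volume bookkeeping only -- the loop volume and pressurizer area are invented.
V_LOOP = 300.0   # m^3 of primary coolant volume outside the pressurizer (hypothetical)
A_PZR = 4.0      # m^2 of pressurizer cross-sectional area (hypothetical)

def level_rise(void_fraction):
    """Apparent pressurizer level rise when a given void fraction forms in the loops."""
    displaced_liquid = void_fraction * V_LOOP   # m^3 of water pushed out of the loops
    return displaced_liquid / A_PZR             # m of indicated level rise

for alpha in (0.005, 0.01, 0.02):
    print(f"void fraction {alpha:.1%}: indicated level rises ~{level_rise(alpha):.2f} m, "
          "even though total coolant inventory may be falling")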

If I ever see someone in the control room with steam tables in their hand, I'm going to assume it's either you or the prof, and call security! :smile:
 

Morbius

Science Advisor
Dearly Missed
"Operators responded by reducing the flow of replacement water. Their training told them that the pressurizer water level was the only dependable indication of the amount of cooling water in the system.
Homer,

That's the PROBLEM - the operators at TMI were so damnably STUPID that the
pressurizer level seemed to be ALL they cared about. If the operators had paid some attention
to the reactor pressure and temperature and consulted a steam table - then they would have
known that the coolant was boiling!!! If the coolant is boiling, then the coolant is being replaced
with vapor that doesn't have the capacity to properly cool the core. That's equivalent to letting
the water level fall too low.

If all we cared about were something as simple as pressurizer level - then we could replace the
operators with some feedback relays. The reason we have operators is to deal with circumstances
that the automation is too stupid to deal with.

Unfortunately, the Three Mile Island operators were about as stupid as the automation they were
supposed to be back-stopping.

The Three Mile Island operators were supposed to THINK - to use their BRAINS.

The operators are NOT supposed to be trained monkeys that just follow a script. Operators have
to understand the system and how it works. I compare nuclear power plant operators to the
pilots of commercial airliners. The reason we have pilots aboard airliners and not just rely on an
autopilot - is that humans are better than machines at dealing with unexpected circumstances.

Consider the crash of United Airlines 232 in Sioux City, IA in 1989. The catastrophic failure of
the fan disk in the #2 engine, and the shrapnel thus produced, ended up disabling all the hydraulic
control systems on the plane. Captain Alfred Haynes and his crew, along with Captain Dennis Fitch,
a DC-10 training instructor who just happened to be aboard, were able to improvise methods to gain
a certain amount of control of the mortally stricken DC-10. They were able to bring the plane into
Sioux City and save the majority of those aboard, and minimize loss of life. THAT's the type of
performance and professionalism that we need at the controls of powerful machines.

Instead of that, at Three Mile Island we had a crew of operators who monomaniacally focused on
a SINGLE operational parameter and didn't pay attention to what the other indicators in the control room
were telling them. Because they didn't consult steam tables, they didn't understand the meaning of
the indications that were presented to them. Because they were so clueless as to the condition of
the plant - they shut down the primary coolant pumps; an action that led to the meltdown. The operators
overrode the automatic systems.

If we give the operators the authority and ability to override the automatic systems; then they better
be SMARTER than the automatic systems. The Three Mile Island operators proved that assumption
to be false.

That's why many of the responses to the TMI accident were to TAKE AWAY the authority of the
operators to override the automatic systems. We no longer assume that the operators know better
than the automatics and let them trump the decisions of the automated systems.

No - the nuclear design engineers now take the view that we CAN NOT trust operators to do the right
thing in emergency situations. Therefore, the current operators no longer have the ability to override
certain safety systems.

Dr. Gregory Greenman
Physicist
 

nrqed

Science Advisor
Homework Helper
Gold Member
Homer,
.....
No - the nuclear design engineers now take the view that we CAN NOT trust operators to do the right
thing in emergency situations. Therefore, the current operators no longer have the ability to override
certain safety systems.

Dr. Gregory Greenman
Physicist
Very, very interesting post. Thanks to Dr. Greenman for posting this.

I hope that the irony of explaining this to Homer Simpson is not lost on anyone :-)
 

vanesch

Staff Emeritus
Science Advisor
Gold Member
Very, very interesting post. Thanks to Dr. Greenman for posting this.

I hope that the irony of explaining this to Homer Simpson is not lost on anyone :-)
:rofl: :rofl: :rofl:
 
So Morbius, do you work around or know any NPP operators? Have you seen them in action outside of "The China Syndrome"? Or at least have any insight into operations since 1979?
 

Morbius

Science Advisor
Dearly Missed
or at least have any insight into operations since 1979?
Homer,

I sincerely hope that operators today are better than the boneheads at Three Mile Island who
just about killed an entire industry with their stupidity.

Dr. Gregory Greenman
Physicist
 

vanesch

Staff Emeritus
Science Advisor
Gold Member
I sincerely hope that operators today are better than the boneheads at Three Mile Island who just about killed an entire industry with their stupidity.
I'm not sure it is their fault. After all, TMI finally proved the robustness of the design of a nuclear power plant in the West: the worst accident in Western history in decades did not produce a single victim - something other industries cannot claim.
The counter-example was Chernobyl, which showed that if you make a stupid design and do stupid things with it, things really can go very sour - but not as sour as some fantasies claimed.

So objectively, TMI should have reassured the public that nuclear power, at least in the West, is rather safe. It didn't. I think several factors played a role at the same time. There was an ideological movement in the '70s (which later partly became the green movement) that instrumentalised its ideological battle (which was anarchist-inspired) against everything that was technological, state-driven, or had links or origins in the military-industrial complex. Nuclear power was of course a perfect target.
But there was also the obscure side of the nuclear industry, partly through its links to the military, and also a kind of justified mistrust by the public of their scientists and leaders, who had oversold several aspects of technology instead of giving a more moderate and accurate view of things.

All of this meant that the confidence of the public was lost.
 

Morbius

Science Advisor
Dearly Missed
After all, TMI finally proved the robustness of the design of a nuclear power plant in the West: the worst accident in Western history in decades did not produce a single victim - something other industries cannot claim.
vanesch,

I agree wholeheartedly. One of the problems for the nuclear industry, ironically, is its good safety record.

Without a "modest number" of accidents [ whatever that means ], the public doesn't see the safety.

For example, take the airline industry. There's a crash every few years, every million or so flights.
People get an idea of what the risk of flying is.

For the nuclear industry, there were no accidents for many years. However, the anti-nukes
successfully promulgated the idea that there was a BIG, BAD accident just waiting to happen.

Since the public hadn't experienced an actual accident - the public perception of an accident was
formed by the wild fantasies of the anti-nukes. The nuclear industry couldn't dispel that with evidence
since there hadn't been an accident.

Then along comes Three Mile Island. Because of the above, it scared many people for a week.
The analysis of the actual consequences that came later did not percolate into the public's mind.
How many people have actually read the Rogovin Report?

So the public came away with the perception that nuclear accidents can happen, and that we got
"lucky" and "dodged a bullet" with Three Mile Island.

So not having any accidents, or having just a single accident doesn't help "normalize" the public's
perception of the risks of nuclear power. The fantasies can run free.

Additionally, I believe it was Henry Kendall of the UCS that rejected the argument that Three Mile
Island shows how safe nuclear power is. His statement was that you don't prove how safe nuclear
power is by having accidents.

My take is somewhere in the middle. The accident at Three Mile Island bolsters the safety of the
things that worked, and detracts from the things that didn't work. Clearly, the wisdom of having a
containment building is shown by Three Mile Island, especially in comparison to Chernobyl.

So much of the equipment performed well - the containment building and the systems that served to mitigate
the accident - all were shown to be worthy of the confidence we place in them.

The things that didn't work, were the stuck valve and the operators. So the valves and operators
were shown to be less reliable than what one would have hoped for.

On balance, I agree with you: Three Mile Island was a non-event as far as public safety is concerned.

Unfortunately, I don't think that's how the public perceives it.

Dr. Gregory Greenman
Physicist
 
curie
What keeps coming up is that Morbius is calling the operators stupid etc., but they clearly were not stupid enough to fail the operator training that was in place. So is the underlying reason for the accidents not the single fact that the operators were incompetent, but rather that the system which allowed them to become operators was incompetent? And is that why certain measures have since been introduced to stop them over-riding safety mechanisms? Apologies if that was what you were getting at all along, but the language used obscured it if that was the case.

Incidentally, in my country, which I'm pretty sure is actually the first country to have introduced nuclear regulation, operators are not tested by the regulator. The training is intense but in no way comparable to a technical university course - otherwise we would never find anyone to fill the roles, which doesn't sound very good! The exception is the marine nuclear power training, where it does indeed equal or exceed university-level training. Also, it is common here for ex-naval staff to go into regulation over the regular industry-type jobs - probably a power thing!
 
I had an interesting course on Human Error, and I found the article below to be a real eye-opener. It's a bit of a read, but worth it for giving "finger pointers" a deeper view to consider. It's a document from a human factors specialist, in three parts:

Thoughts on the New View of Human Error Part I: Do Bad Apples Exist?
by Heather Parker, Human Factors Specialist, System Safety, Civil Aviation, Transport Canada
The following article is the first of a three-part series describing some aspects of the “new view” of human error (Dekker, 2002). This “new view” was introduced to you in the previous issue of the Aviation Safety Letter (ASL) with an interview by Sidney Dekker. The three-part series will address the following topics:

Thoughts on the New View of Human Error Part I: Do Bad Apples Exist?
Thoughts on the New View of Human Error Part II: Hindsight Bias
Thoughts on the New View of Human Error Part III: “New View” Accounts of Human Error
http://www.tc.gc.ca/CivilAviation/publications/tp185/4-06/Pre-flight.htm#HumanError
Before debating if bad apples exist, it is important to understand what is meant by the term “bad apple.” Dekker (2002) explains the bad apple theory as follows: “complex systems would be fine, were it not for the erratic behaviour of some unreliable people (bad apples) in it, human errors cause accidents—humans are the dominant contributor to more than two-thirds of them, failures come as unpleasant surprises—they are unexpected and do not belong in the system—failures are introduced to the system only through the inherent unreliability of people.”
The application of the bad apple theory, as described above by Dekker (2002) makes great, profitable news, and it is also very simple to understand. If the operational errors are attributable to poor or lazy operational performance, then the remedy is straightforward—identify the individuals, take away their licences, and put the evil-doers behind bars. The problem with this view is that most operators (pilots, mechanics, air traffic controllers, etc.) are highly competent and do their jobs well. Punishment for wrongdoing is not a deterrent when the actions of the operators involved were actually examples of “right-doing”—the operators were acting in the best interests of those charged to their care, but made an “honest mistake” in the process; this is the case in many operational accidents.

Can perfect pilots and perfect AMEs function in an imperfect system?
This view is a more complex view of how humans are involved in accidents. If the operational errors are attributable to highly competent operational performance, how do we explain the outcome and how do we remedy the situation? This is the crux of the complex problem—the operational error is not necessarily attributable to the operational performance of the human component of the system—rather the operational error is attributable to, or emerges from, the performance of the system as a whole.
The consequences of an accident in safety-critical systems can be death and/or injury to the participants (passengers, etc.). Society demands operators be superhuman and infallible, given the responsibility they hold. Society compensates and cultures operators in a way that demands they perform without error. This is an impossibility—humans, doctors, lawyers, pilots, mechanics, and so on, are fallible. It should be the safety-critical industry’s goal to learn from mistakes, rather than to punish mistakes, because the only way to prevent mistakes from recurring is to learn from them and improve the system. Punishing mistakes only serves to strengthen the old view of human error; preventing true understanding of the complexity of the system and possible routes for building resilience to future mistakes.
To learn from the mistakes of others, accident and incident investigations should seek to investigate how people’s assessments and actions would have made sense at the time, given the circumstances that surrounded them (Dekker, 2002). Once it is understood why their actions made sense, only then can explanations of the human–technology–environment relationships be discussed, and possible means of preventing recurrence can be developed. This approach requires the belief that it is more advantageous to safety if learning is the ultimate result of an investigation, rather than punishment.
In the majority of accidents, good people were doing their best to do a good job within an imperfect system. Pilots, mechanics, air traffic controllers, doctors, engineers, etc., must pass rigorous work requirements. Additionally, they receive extensive training and have extensive systems to support their work. Furthermore, most of these people are directly affected by their own actions, for example, a pilot is onboard the aircraft they are flying. This infrastructure limits the accessibility of these jobs to competent and cognisant individuals. Labelling and reprimanding these individuals as bad apples when honest mistakes are made will only make the system more hazardous. By approaching these situations with the goal of learning from the experience of others, system improvements are possible. Superficially, this way ahead may seem like what the aviation industry has been doing for the past twenty years. However, more often than not, we have only used different bad apple labels, such as complacent, inattentive, distracted, unaware, to name a few; labels that only seek to punish the human component of the system. Investigations into incidents and accidents must seek to understand why the operator’s actions made sense at the time, given the situation, if the human performance is to be explained in context and an understanding of the underlying factors that need reform are to be identified. This is much harder to do than anticipated.
In Part II, the “hindsight bias” will be addressed; a bias that often affects investigators. Simply put, hindsight means being able to look back, from the outside, on a sequence of events that lead to an outcome, and letting the outcome bias one’s view of the events, actions and conditions experienced by the humans involved in the outcome (Dekker, 2002). In Part III, we will explore how to write accounts of human performance following the “new view” of human error.
Part II:
Hindsight Bias
Have you ever pushed on a door that needed to be pulled, or pulled on a door that needed to be pushed—despite signage that indicated to you what action was required? Now consider this same situation during a fire, with smoke hampering your sight and breathing. Why did you not know which way to move the door? There was a sign; you’ve been through the door before. Why would you not be able to move the door? Imagine that because of the problem moving the door, you inhaled too much smoke and were hospitalized for a few days. During your stay in the hospital, an accident investigator visits you. During the interview, the investigator concludes you must have been distracted, such that you did not pay attention to the signage on the door, and that due to your experience with the door, he cannot understand why you did not move the door the right way. Finally, he concludes there is nothing wrong with the door; that rather, it was your unexplainable, poor behaviour that was wrong. It was your fault.
The investigator in this example suffered from the hindsight bias. With a full view of your actions and the events, he can see, after the fact, what information you should have paid attention to and what experience you should have drawn from. He is looking at the scenario from outside the situation, with full knowledge of the outcome. Hindsight means being able to look back, from the outside, on a sequence of events that lead to an outcome you already know about; it gives you almost unlimited access to the true nature of the situation that surrounded people at the time; it also allows you to pinpoint what people missed and shouldn’t have missed; what they didn’t do but should have done (Dekker, 2002).
Thinking more about the case above, put yourself inside the situation and try to understand why you had difficulty exiting. In this particular case, the door needed to be pulled to exit because it was an internal hallway door. Despite a sign indicating the need to pull the door open (likely put there after the door was installed) the handles of the door were designed to be pushed—a horizontal bar across the middle of the door. Additionally, in a normal situation, the doors are kept open by doorstops to facilitate the flow of people; so you rarely have to move the door in your normal routine. In this particular case, it was an emergency situation, smoke reduced your visibility and it is likely you were somewhat agitated due to the real emergency. When looking at the sequence of actions and events from inside the situation, we can explain why you had difficulty exiting safely: a) the design of the door, b) the practice of keeping the fire doors open with doorstops, c) the reduced visibility, and d) the real emergency, are all contributing and underlying factors that help us understand why difficulty was encountered.
According to Dekker (2002), hindsight can bias an investigation towards conclusions that the investigator now knows (given the outcome) that were important, and as a result, the investigator may assess people’s decisions and actions mainly in light of their failure to pick up the information critical to preventing the outcome. When affected by hindsight bias, an investigator looks at a sequence of events from outside the situation with full knowledge of the events and actions and their relationship to the outcome (Dekker, 2002).
The first step in mitigating the hindsight bias is to work towards the goal of learning from the experience of others to prevent recurrence. When the goal is to learn from an investigation, understanding and explanation is sought. Dekker (2002) recommends taking the perspective from “inside the tunnel,” the point of view of people in the unfolding situation. The investigator must guard him/herself against mixing his/her reality with the reality of the people being investigated (Dekker, 2002). A quote from one investigator in a high-profile accident investigation states: “…I have attempted at all times to remind myself of the dangers of using the powerful beam of hindsight to illuminate the situations revealed in the evidence. Hindsight also possesses a lens which can distort and can therefore present a misleading picture: it has to be avoided if fairness and accuracy of judgment is to be sought.” (Hidden, 1989)
Additionally, when writing the investigation report, any conclusions that could be interpreted as coming from hindsight must be supported by analysis and data; a reader must be able to trace through the report how the investigator came to the conclusions. In another high-profile accident, another investigator emphatically asked: “Given all of the training, experience, safeguards, redundant sophisticated electronic and technical equipment and the relatively benign conditions at the time, how in the world could such an accident happen?” (Snook, 2000). To mitigate the tendency to view the events with hindsight, this investigator ensured all accounts in his report clearly stated the goal of the analyses: to understand why people made the assessments or decisions they made—why these assessments of decisions would have made sense from the point of view of the people inside the situation. Learning and subsequent prevention or mitigation activities are the ultimate goals of accident investigation—having agreement from all stakeholders on this goal will go a long way to mitigating the hindsight bias.
Dekker, S., The Field Guide to Human Error Investigations, Ashgate, England, 2002.
Dekker, S., The Field Guide to Understanding Human Error, Ashgate, England, 2006.
Hidden, A., Investigation into the Clapham Junction Railway Accident, Her Majesty’s Stationery Office, London, England, 1989.
Snook, S. A., Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq, Princeton University Press, New Jersey, 2000.
 
Part III

“New View” Accounts of Human Error
The “old view” of human error has its roots in human nature and the culture of blame. We have an innate need to make sense of uncertainty, and find someone who is at fault. This need has its roots in humans needing to believe “that it can’t happen to me.” (Dekker, 2006)
The tenets of the “old view” include (Dekker, 2006):
Human frailties lie behind the majority of remaining accidents. Human errors are the dominant cause of remaining trouble that hasn’t been engineered or organized away yet.
Safety rules, prescriptive procedures and management policies are supposed to control this element of erratic human behaviour. However, this control is undercut by unreliable, unpredictable people who still don’t do what they are supposed to do. Some bad apples keep having negative attitudes toward safety, which adversely affects their behaviour. So not attending to safety is a personal problem; a motivational one; an issue of mere individual choice.
The basically safe system, of multiple defences carefully constructed by the organization, is undermined by erratic people. All we need to do is protect it better from the bad apples.
What we have learned thus far though, is that the “old view” is deeply counterproductive. It has been tried for over two decades without noticeable effect (e.g. the Flight Safety Foundation [FSF] still identifies 80 percent of accidents as caused by human error); and it assumes the system is safe, and that by removing the bad apples, the system will continue to be safe. The basic attribution error is the psychological way of describing the “old view.” All humans have a tendency, when examining the behaviour of other people, to overestimate the degree to which their behaviour results from permanent characteristics, such as attitude or personality, and to underestimate the influence of the situation.
“Old view” explanations of accidents can include things like: somebody did not pay enough attention; if only somebody had recognized the significance of this indication, of that piece of data, then nothing would have happened; somebody should have put in a little more effort; somebody thought that making a shortcut on a safety rule was not such a big deal, and so on. These explanations conform to the view that human error is a cause of trouble in otherwise safe systems. In this case, you stop looking any further as soon as you have found a convenient “human error” to blame for the trouble. Such a conclusion and its implications are thought to get to the causes of system failure.

“Old view” investigations typically single out particularly ill-performing practitioners; find evidence of erratic, wrong or inappropriate behaviour; and bring to light people’s bad decisions, their inaccurate assessments, and their deviations from written guidance or procedures. They also often conclude how frontline operators failed to notice certain data, or did not adhere to procedures that appeared relevant only after the fact. If this is what they conclude, then it is logical to recommend the retraining of particular individuals, and the tightening of procedures or oversight.

Why is it so easy and comfortable to adopt the “old view”? First, it is cheap and easy. The “old view” believes failure is an aberration, a temporary hiccup in an otherwise smoothly-performing, safe operation. Nothing more fundamental, or more expensive, needs to be changed. Second, in the aftermath of failure, pressure can exist to save public image; to do something immediately to return the system to a safe state. Taking out defective practitioners is always a good start to recovering the perception of safety. It tells people that the mishap is not a systemic problem, but just a local glitch in an otherwise smooth operation. You are doing something; you are taking action. The fatal attribution error and the blame cycle are alive and well. Third, personal responsibility and the illusions of choice are two other reasons why it is easy to adopt this view. Practitioners in safety-critical systems usually assume great personal responsibility for the outcomes of their actions. Practitioners are trained and paid to carry this responsibility. But the flip side of taking this responsibility is the assumption that they have the authority, and the power, to match the responsibility. The assumption is that people can simply choose between making errors and not making them—independent of the world around them. In reality, people are not immune to pressures, and organizations would not want them to be. To err or not to err is not a choice. People’s work is subject to and constrained by multiple factors.

To actually make progress on safety, Dekker (2006) argues that you must realize that people come to work to do a good job. The system is not basically safe—people create safety during normal work in an imperfect system. This is the premise of the local rationality principle: people are doing reasonable things, given their point of view, focus of attention, knowledge of the situation, objectives, and the objectives of the larger organization in which they work. People in safety-critical jobs are generally motivated to stay alive and to keep their passengers and customers alive. They do not go out of their way to fly into mountainsides, to damage equipment, to install components backwards, and so on. In the end, what they are doing makes sense to them at that time. It has to make sense; otherwise, they would not be doing it. So, if you want to understand human error, your job is to understand why it made sense to them, because if it made sense to them, it may well make sense to others, which means that the problem may show up again and again. If you want to understand human error, you have to assume that people were doing reasonable things, given the complexities, dilemmas, tradeoffs and uncertainty that surrounded them. Just finding and highlighting people’s mistakes explains nothing. Saying what people did not do, or what they should have done, does not explain why they did what they did.

The “new view” of human error was born out of recent insights in the field of human factors, specifically the study of human performance in complex systems and normal work. What is striking about many mishaps is that people were doing exactly the sorts of things they would usually be doing—the things that usually lead to success and safety. People were doing what made sense, given the situational indications, operational pressures, and organizational norms existing at the time. Accidents are seldom preceded by bizarre behaviour.

To adopt the “new view,” you must acknowledge that failures are baked into the very nature of your work and organization; that they are symptoms of deeper trouble or by-products of systemic brittleness in the way you do your business. (Dekker, 2006) It means having to acknowledge that mishaps are the result of everyday influences on everyday decision making, not isolated cases of erratic individuals behaving unrepresentatively. (Dekker, 2006) It means having to find out why what people did back there actually made sense, given the organization and operation that surrounded them. (Dekker, 2006)

The tenets of the “new view” include (Dekker, 2006):

Systems are not basically safe. People in them have to create safety by tying together the patchwork of technologies, adapting under pressure, and acting under uncertainty. Safety is never the only goal in systems that people operate. Multiple interacting pressures and goals are always at work. There are economic pressures, and pressures that have to do with schedules, competition, customer service, and public image. Trade-offs between safety and other goals often have to be made with uncertainty and ambiguity. Goals, other than safety, are easy to measure. However, how much people borrow from safety to achieve those goals is very difficult to measure. Trade-offs between safety and other goals enter, recognizably or not, into thousands of little and larger decisions and considerations that practitioners make every day. These trade-offs are made with uncertainty, and often under time pressure. The “new view” does not claim that people are perfect, that goals are always met, that situations are always assessed correctly, etc. In the face of failure, the “new view” differs from the “old view” in that it does not judge people for failing; it goes beyond saying what people should have noticed or could have done. Instead, the “new view” seeks to explain “why.” It wants to understand why people made the assessments or decisions they made—why these assessments or decisions would have made sense from their point of view, inside the situation. When you see people’s situation from the inside, as much like these people did themselves as you can reconstruct, you may begin to see that they were trying to make the best of their circumstances, under the uncertainty and ambiguity surrounding them. When viewed from inside the situation, their behaviour probably made sense—it was systematically connected to features of their tools, tasks, and environment.

“New view” explanations of accidents can include things like: why did it make sense to the mechanic to install the flight controls as he did? What goals was the pilot considering when he landed in an unstable configuration? Why did it make sense for that baggage handler to load the aircraft from that location? Systems are not basically safe. People create safety while negotiating multiple system goals. Human errors do not come unexpectedly. They are the other side of human expertise—the human ability to conduct these negotiations while faced with ambiguous evidence and uncertain outcomes.

“New view” explanations of accidents tend to have the following characteristics:

Overall goal: In “new view” accounts, the goal of the investigation and accompanying report is clearly stated at the very beginning of each report: to learn.
Language used: In “new view” accounts, contextual language is used to explain the actions, situations, context and circumstances. Judgment of these actions, situations, and circumstances is not present. Describing the context, the situation surrounding the human actions, is critical to understanding why those human actions made sense at the time.
Hindsight bias control employed: The “new view” approach demands that hindsight bias be controlled to ensure investigators understand and reconstruct why things made sense at the time to the operational personnel experiencing the situation, rather than saying what they should have done or could have done.
Depth of system issues explored: “New view” accounts are complete descriptions of the accidents from the one or two human operators whose actions directly related to the harm, including the contextual situation and circumstances surrounding their actions and decisions. The goal of “new view” investigations is to reform the situation and learn; the circumstances are investigated to the level of detail necessary to change the system for the better.
Amount of data collected and analyzed: “New view” accounts often contain significant amounts of data and analysis. All sources of data necessary to explain the conclusions are to be included in the accounts, along with supporting evidence. In addition, “new view” accounts often contain photos, court statements, and extensive background about the technical and organizational factors involved in the accidents. “New view” accounts are typically long and detailed because this level of analysis and detail is necessary to reconstruct the actions, situations, context and circumstances.
Length and development of arguments (“leave a trace”): “New view” accounts typically leave a trace throughout the report from data (sequence of events), analysis, findings, conclusion and recommendations/corrective actions. As a reader of a “new view” account, it is possible to follow from the contextual descriptions to the descriptions of why events and actions made sense to the people at the time, to in some cases, conceptual explanations. By clearly outlining the data, the analysis, and the conclusions, the reader is made fully aware of how the investigator drew their conclusions.

“New view” investigations are driven by one unifying principle: human errors are symptoms of deeper trouble. This means a human error is a starting point in an investigation. If you want to learn from failures, you must look at human errors as:

A window on a problem that every practitioner in the system might have;
A marker in the system’s everyday behaviour; and
An opportunity to learn more about organizational, operational and technological features that create error potential.

Reference: Dekker, S., The Field Guide to Understanding Human Error, Ashgate, England, 2006.
 
