Reactor scram due to inadvertent opening of multiple safety relief valves


Discussion Overview

The discussion revolves around the implications of a simulated reactor incident where safety relief valves were inadvertently opened, leading to coolant blowback into the reactor. Participants explore potential solutions to prevent such occurrences, focusing on human error, safety systems, and engineering proposals.

Discussion Character

  • Exploratory
  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant suggests a control module to override safety release valves as a cost-effective solution.
  • Others argue against overriding safety systems, emphasizing the importance of preventing human error through better training and procedures.
  • Another participant proposes a redundant system or failsafe measure for the safety systems to prevent unnecessary valve openings.
  • Concerns are raised about the potential costs associated with temperature changes and reactor testing following the incident, suggesting a need for concrete prevention measures.
  • It is noted that safety systems are designed to activate automatically, regardless of intent, and that any override system could still be vulnerable to human error.

Areas of Agreement / Disagreement

Participants generally agree that preventing human error is crucial, but there is disagreement on the best approach to achieve this. Some advocate for training improvements, while others explore engineering solutions like redundant systems.

Contextual Notes

The discussion includes assumptions about the significance of the incident and the necessity for prevention measures, which may not reflect the actual risk level indicated by the INES rating.

d01100001
I'm an engineering physics student writing a mock proposal for a class. I've based the proposal on a specific company that builds reactors and trains personnel for navy vessels, and on this article: http://www.world-nuclear-news.org/nerliste.aspx?id=11724

Essentially, it says that during a simulation the safety release valves were mistakenly opened, causing a blowback of coolant into the reactor.

My question is, in the hypothetical situation wherein this was less of a random "oops" and more of a probable mistake, what would the best course of action be to rectify the situation?

I have proposed a control module to override the functions of the safety release valves. Would this be the most cost-effective route?

Thank you for any and all help!
 

It sounds from the description in your link like the problem was caused by preventable human error. Overriding the function of safety valves sounds like a dangerously bad idea. A better approach would be to improve worker training and procedures to ensure this type of accident doesn't occur in the future. That would avoid the extremely time-consuming and costly measure of having to redesign and reanalyze the entire plant's safety systems.
 
I agree with QuantumPion. Overriding the function of a safety system is not a good idea. Preventing inadvertent operation would be better, and that may simply require better training.

Looking at the WNN article, it indicates that the event has an INES Rating = 1, which is the lowest level.
 
Thank you for the input.

I'd like to add to the question now though...

I initially went with the retraining thought myself, but in this assignment I have to use a bit of poetic license by making a couple of assumptions. First, I'm operating under the assumption that the problem is more significant than it is. And second, I must use a solution that involves a process relating to my studies, i.e. engineering.

How about a redundant system or a failsafe measure for the "SEHR ADS" that would prevent the safety valves from opening without need?

Furthermore, as a point of argument on a somewhat wider scale:
While clearly this is a random, isolated mistake, and the low INES rating makes it sound like a minuscule threat, wouldn't the temperature change described and the subsequent testing of the reactor cost a *great* deal of money? Possibly enough money to justify an argument for concrete prevention of such a mistake?
The premise I'm hoping to convey in the proposal is the worth of prevention.
And though chasing after every source of human error is understandably naive of me, I'm using it analogously for the field of engineering as a whole...
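
One conventional engineering answer to "prevent the valves from opening without need" is coincidence voting rather than an override: require agreement among independent instrument channels before actuating. As a hedged illustration only (the channel count and names are assumptions of mine, not from the article or any real plant design), two-out-of-three voting might be sketched as:

```python
def vote_2oo3(channel_a: bool, channel_b: bool, channel_c: bool) -> bool:
    """Actuate only when at least two of three independent channels demand it.

    A single spurious channel (e.g. one accidentally left in a test state)
    cannot open the valves by itself, yet a genuine demand seen by two or
    more channels still actuates - the safety function is preserved rather
    than overridden.
    """
    return sum((channel_a, channel_b, channel_c)) >= 2

print(vote_2oo3(True, False, False))  # False: one spurious channel is ignored
print(vote_2oo3(True, True, False))   # True: genuine demand on two channels
```

The design intent is that redundancy raises the bar for *spurious* actuation without ever blocking a *real* demand, which is the opposite of an override module.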
 
d01100001 said:
How about a redundant system or a failsafe measure for the "SEHR ADS" that would prevent the safety valves from opening without need?

Safety systems have no idea whether you really mean to activate them or not, they are just designed to work. The whole point of a fail-safe mechanism is that it always actuates if anything goes wrong, to protect the plant.

From the description of that OE, it sounds like the workers were testing the actuation signal for the SEHRS but missed the step that would have prevented the real signal from occurring. Whatever override system you want to design is still vulnerable to this same type of human error - i.e. forgetting to override a real signal during a test.
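
To make the fail-safe point concrete, here is a minimal, purely illustrative sketch (the function and signal names are assumptions, not from any real design): a relief valve held closed by an active signal opens on *any* loss of that signal, whether the cause is a real transient or a missed step during a test, because the logic has no notion of operator intent.

```python
def relief_valve_open(hold_closed_signal: bool) -> bool:
    # Fail-safe (de-energize-to-actuate): losing the hold-closed signal,
    # for any reason - power failure, broken wire, or a technician missing
    # a step during a test - opens the valve. The logic cannot distinguish
    # a "real" demand from human error; it is designed not to try.
    return not hold_closed_signal

print(relief_valve_open(False))  # True: signal lost, valve opens regardless of intent
print(relief_valve_open(True))   # False: signal present, valve stays closed
```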