rmattila said:
If you aim at
- operational occurrence rate of less than once per year
- limiting functions capable of preventing 99 % of occurrences from propagating into accidents
- safety functions capable of preventing core damage in 99 % of accidents
- containment capable of preventing a large release in 99 % of meltdowns
you will not have to deal with smaller probabilities than 1e-2. Instead, you will have to deterministically ensure that each of the levels reaches its goal and is independent of the other levels. If this can be guaranteed, the probability of a large release is around 1e-6 per annum, which is generally deemed acceptable.
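The arithmetic behind that 1e-6 figure can be sketched in a few lines. This is purely illustrative, using the round numbers from the post (an initiator frequency of once per year and a 1e-2 failure probability per level), and it only holds under the stated assumption that the levels are independent:

```python
# Defense-in-depth sketch with the post's illustrative numbers
# (not plant-specific data).
initiator_frequency = 1.0  # operational occurrences per year (< 1/yr target)

level_failure_probs = [
    1e-2,  # limiting functions fail to stop propagation into an accident
    1e-2,  # safety functions fail to prevent core damage
    1e-2,  # containment fails to prevent a large release
]

# Assuming the levels fail independently, frequencies simply multiply.
large_release_frequency = initiator_frequency
for p in level_failure_probs:
    large_release_frequency *= p

# large_release_frequency is ~1e-6 per annum
```

The point is that no single level needs to be demonstrated beyond the 1e-2 range; the overall target emerges from the product, provided independence can actually be guaranteed.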
Old plants fail miserably on the last bullet, and have to compensate for their inability to deal with the consequences of a severe accident with even more improved preventive measures. This inevitably results in a more challenging safety case to demonstrate an acceptable safety level.
Your generalization about old plants fails by counterexample. TMI-2 was a meltdown accident in an older plant with no large release.
Chernobyl was a deliberately initiated event, the safety systems had been deliberately disabled, and there was no containment.
Fukushima was a 1E-3 event (earthquake/tsunami); the safety systems worked initially, but failed due to lack of protection from flooding. I am not sure that the containment failures at Fukushima would have occurred in a plant that didn't have such a glaring design deficiency.
And finally on your risk targets. PRA has exactly those kinds of targets. The problem comes when estimating probabilities for rare events and anticipating all the threats. TEPCO underestimated the threat of tsunami and didn't design turbine buildings to resist flooding. I understand they had completed the IPE level analysis (internal events) and some level of IPEEE analysis (External Events). I am guessing their results showed that they met the acceptable risk goal you describe.
The 1E-3 tsunami greater than 5.5 m was not recognized. Failures of individual safety systems and power sources probably already met or exceeded your 1E-2 failure probabilities, but the analysis had not recognized the common failure mode of flooding. Suddenly all of the safety systems had a failure probability of 1.0. Containment never had a chance.
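The effect of that unrecognized common failure mode can be sketched numerically. This is a hypothetical illustration, not TEPCO's actual PRA: it assumes three nominally independent safety trains, each with the 1e-2 failure probability discussed above, and a 1e-3 per annum flood that defeats all of them at once:

```python
# Hypothetical common-cause illustration (assumed numbers, not a real PRA).
p_train = 1e-2   # failure probability of one safety train
n_trains = 3     # nominally independent trains

# If the trains really were independent, all failing together is very rare.
p_all_fail_independent = p_train ** n_trains   # 1e-6

# A shared hazard breaks the independence assumption entirely.
p_flood = 1e-3               # the unrecognized tsunami/flood frequency
p_all_fail_given_flood = 1.0  # flooding disables every train at once

# Total-probability decomposition: the flood path dominates.
p_all_fail = (p_flood * p_all_fail_given_flood
              + (1 - p_flood) * p_all_fail_independent)
```

With these assumed numbers, the common-cause path alone contributes about 1e-3, roughly a thousand times the "independent" estimate, which is exactly why a missed shared hazard wrecks the defense-in-depth arithmetic.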
The biggest lesson to be learned from Fukushima is the importance of safety culture at all levels: from national regulators, to vendors, to utility management, to operators, engineers, technicians, security, and general laborers. The single biggest tool for safety is maintaining a questioning attitude. Thus when the geological evidence of large tsunamis was made known, the question should have triggered action by regulators, managers, and technical staff. It didn't, and I attribute that to complacency and a lack of integrity. Anyone with knowledge of the risk who didn't force the issue is at fault.
I don't know how to measure safety culture or include it in a PRA. Rare accidents are obvious triggers of attention to safety, but the real need is to use every issue, equipment failure, and problem, no matter how small, as a similar trigger. If a breaker trips or a fuse blows, don't just reset or replace it. Consider the circumstances when it blew. Was the operation abnormal? Was the circuit overloaded? Is the fuse or breaker the right size or rating? This sort of thinking becomes a habit. If you know people who have worked in the US nuclear industry, you may have observed this kind of thinking. It is why I believe that Fukushima technical solutions will be applied in US plants. It is not enough to simply import technical solutions from Japan; the safety culture must also be part of the mix.