russ_watters said:
Summary: Discussion of the TMI accident and recent Netflix documentary about it.
What, exactly, was wrong with the polar crane? Did they even know/was it just "we didn't check enough"?
According to the Netflix summary, the polar crane had been rebuilt, but the summary doesn't state 'how' or 'why' it was rebuilt. Was it built with additional capacity? What was the protocol/procedure for qualifying it?
When Bechtel announced they would be lifting the lid off the Unit 2 core to repair it without testing the newly rebuilt polar crane, Parks’ concerns became dire. He knew that if the crane broke, the entirety of the East Coast of the United States would be contaminated and forced to evacuate.
Parks may have been reflecting on his naval career. The statement I bolded is nonsense.
Ostensibly, the original polar crane had been used to install the upper vessel head (lid). If the 'rebuilt' polar crane had additional capacity, then it would be able to handle the upper head with additional margin. I'm sure that Bechtel was dismissive of Parks, and perhaps they minimized the risk, or they had done an analysis to indicate that the crane was sound.
Unit 2 had been commissioned at the end of 1978, and apparently that was rushed. I'd have to look back through my records, but I believe Unit 1 was in its second cycle.
When the Unit 2 reactor became operational in March 1978, it experienced a number of start-up difficulties. On March 28, the very first day of its startup, one of the reactor coolant pumps failed. A pressure release valve had opened and stayed open, causing coolant to leak out of the system. During its first year of operation, Unit 2 experienced twenty reactor "trips," immediate shutdowns of the reactor due to any malfunction. Despite these early setbacks in the testing phase of the reactor, the Nuclear Regulatory Commission deemed Unit 2's performance satisfactory for commercial operation. Unit 2 finished its testing phase and began full commercial operation on December 30, 1978.
https://pabook.libraries.psu.edu/li...e-articles/disaster-averted-three-mile-island
At the time of the accident in March 1979, Unit 2 had accumulated about 62 effective full power days (EFPD) of operation, which included startup and low power testing during 1978, so the inventory of fission products was nowhere near that of Chernobyl Unit 4 or the Fukushima units. The burnup on the TMI-2 fuel was probably on the order of 2 to 3 GWd/tU. Yet there was enough decay heat that when there was no flow in the coolant, the core essentially became adiabatic, i.e., there was no heat transfer, so the fuel heated up and began to disintegrate (rapid unstable oxidation of cladding and fuel through reaction with the hot coolant). Some of the fuel did interact with the core baffle (the plate structure that forms the outer boundary of the core), the baffle did fail in one location, and some core debris did spill.
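Just to put the decay-heat point into rough numbers, here is my own back-of-envelope sketch (not from the documentary or the article): one common form of the Way-Wigner decay-heat correlation, with an assumed rated power of ~2772 MWt, ~62 EFPD of operation, and round-number values for the UO2 mass and heat capacity.

Python:
P0 = 2772e6          # rated thermal power, W (assumed nominal TMI-2 value)
T_op = 62 * 86400.0  # operating time at full power, s (~62 EFPD)
t = 3600.0           # time after scram, s (1 hour, for illustration)

# Way-Wigner decay-heat correlation: P/P0 ~ 0.0622*(t^-0.2 - (t + T_op)^-0.2)
frac = 0.0622 * (t**-0.2 - (t + T_op)**-0.2)
P_decay = frac * P0                       # roughly 26 MW at t = 1 h

# Adiabatic heat-up rate if all of that power stays in the fuel
m_UO2 = 94.0e3       # kg of UO2 in the core (assumed round number)
cp_UO2 = 300.0       # J/(kg K), rough high-temperature value
dTdt = P_decay / (m_UO2 * cp_UO2)         # roughly 0.9 K/s

print(f"decay heat fraction: {frac:.2%}")
print(f"decay power:         {P_decay/1e6:.0f} MW")
print(f"adiabatic heat-up:   {dTdt:.2f} K/s")

At roughly 1% of rated power an hour after scram, an uncooled core heats up on the order of 1 K/s, which is why loss of coolant flow becomes a core-damage problem in minutes to hours rather than days.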
On March 26, 1979, Walter Creitz, president of Met Ed, wrote in an op-ed article that the plants were being "operated in a way that places top priority on safety." The nuclear accident at Three Mile Island occurred just two days later.
Ref: Ibid.
Certainly, such a statement followed by a severe accident undermined the credibility of GPU management and the industry. There was a cascade of operational errors due to poor practice and training.
. . . An emergency cooling water system should have started automatically, but it did not. Due to a maintenance error following a test of this backup system, critical valves were left closed, in violation of NRC regulations. The closed valves prevented this emergency cooling system from engaging.
When the secondary loop stopped flowing and the backup system failed to engage, the cooling effect was lost and the primary loop began to overheat. Pressure inside this loop increased by nearly 5%, which immediately caused the reactor core to automatically shut down and halt all nuclear processes. However, the latent heat of the core continued to rise, and it soon reached a temperature high enough to cause a potential meltdown. As pressure inside the primary cooling loop continued to increase, an automatic pressure relief valve opened to release steam. This valve should have closed again automatically when pressure decreased, but it remained stuck open. When the valve failed to close, steam and cooling water poured out of the valve into a holding tank in the basement of the building. This was the same valve that had malfunctioned during initial reactor tests in March of 1978.
Inside the control room, alarm bells rang and warning lights flashed rapidly. Instruments available to the reactor operators showed confusing and contradictory information. There was no signal that the pressure relief valve was stuck open. It was also impossible to measure the level of coolant in the core, since there was no gauge available to the operators that gave a proper reading. One of the operators, Craig Faust, later remarked, "I would have liked to have thrown away the alarm panel. It wasn't giving us any useful information." As the level of water in the main cooling system dropped rapidly, an emergency core cooling system started automatically and began pumping water into the core at a rate of one thousand gallons per minute. When the operators recognized that this was happening, they mistakenly believed the core was going to overflow with water, which would damage the reactor. This fear made the situation worse when the operators decided to shut down the emergency cooling system.
Ibid.
I recall that when the core temperature indicators were off-scale, someone put a voltmeter across the leads/terminals of the temperature indicator in order to get an estimate of the temperature, and some were skeptical about the temperature indication. However, the estimate was consistent with core damage.
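For what it's worth, the arithmetic behind that voltmeter trick is simple. Here is a rough sketch; I'm assuming chromel-alumel (Type K) core-exit thermocouples and a crude constant sensitivity, whereas the real conversion uses tabulated thermocouple polynomials and a proper reference-junction correction.

Python:
# Sketch of the voltmeter trick described above: read the raw
# thermocouple EMF and back out an approximate temperature.

SEEBECK_MV_PER_C = 0.041     # ~41 uV/degC, rough average for Type K (assumed)

def emf_to_temp(emf_mV, cold_junction_C=25.0):
    """Very rough temperature estimate from a raw thermocouple EMF."""
    return cold_junction_C + emf_mV / SEEBECK_MV_PER_C

# A reading of ~40 mV would have implied a temperature on the
# order of 1000 degC, consistent with core damage.
print(emf_to_temp(40.0))     # ~1000 degC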
To be sure, there were deficiencies in the plant design, e.g., the hot legs in the primary loops (between the RPV outlet nozzles and the once-through steam generators (OTSGs)) were vulnerable to steam pockets, which prevented natural circulation. Other steam generator designs, with bottom entry of the hot leg, were less susceptible.
Operator training was poor, which was endemic in the industry at the time. One result of the accident was better training and more staff in the control room, including a shift technical advisor (STA). Reactor/plant simulators became a more significant part of training.
IEEE Spectrum did a special issue on the event. Special issue: Three Mile Island and the future of nuclear power
https://ieeexplore.ieee.org/document/6368288
I remember later articles as more information became available.
As for melting, I don't believe the fuel melted so much as it oxidized from metal to ceramic oxide, which is a good indication that there was cooling water in the core, though it was boiling or mostly steam. With respect to the melting temperatures of stainless steel (~1375 - 1400°C), Zircaloy (~1850°C) and UO2 fuel (~2840°C), the materials oxidize/corrode well below those melting temperatures, especially in steam. By the time the fuel temperature reaches about 1000°C (a homologous temperature of 0.77 for stainless steel and 0.60 for Zircaloy), the materials become very soft and flow under low stress, and they oxidize very rapidly. Fe, Cr and Ni become more readily soluble in Zr, even without the chemical oxidation reaction with H2O, which is the origin of the hydrogen of concern. In a hydrogen atmosphere, Zr would hydride rapidly, which then makes it more susceptible to oxidation.
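A quick sketch of the homologous-temperature arithmetic, using the rounded melting points quoted above; the Zr-steam reaction in the comment is just the standard textbook one, not something from the documentary.

Python:
def homologous(T_c, T_melt_c):
    """Homologous temperature T/T_melt, with both converted to kelvin."""
    return (T_c + 273.15) / (T_melt_c + 273.15)

T_fuel = 1000.0                        # core temperature of interest, deg C
print(homologous(T_fuel, 1375.0))      # stainless steel, ~0.77
print(homologous(T_fuel, 1850.0))      # Zircaloy, ~0.60
print(homologous(T_fuel, 2840.0))      # UO2, ~0.41

# The steam oxidation that generates the hydrogen of concern is
# essentially Zr + 2 H2O -> ZrO2 + 2 H2 (strongly exothermic),
# and it accelerates sharply above roughly 1200 deg C.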