Prompt neutrons, delayed neutrons, chain reaction control

Could you elaborate? Is that some nuclear-operations aspect of ASME, or simply ASME limits on temperature-change rates for metal structures, steam boilers and the like? I'm curious what rate would apply to molten salt vessels.
The ASME boiler and pressure vessel code for nuclear pressure vessels assumes normal and upset operations are always within 100 degF per hour, and that only emergency or faulted conditions exceed that limit. This is part of the vessel stress analysis. There are typically a certain number of pre-analyzed cycles for exceeding these limits; however, normal operations keep you within the limit.

For something like LFTR where you have no pressure, it will likely be very different as you don't have steam generation and high pressure.
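As a rough illustration of what that 100 degF/hr normal-operation limit implies for heatup time, here is a minimal sketch; the start and end temperatures are illustrative assumptions, not plant data:

```python
# Rough heatup-time estimate under the ASME 100 degF/hr normal-operation limit.
# The start/end temperatures below are illustrative assumptions, not plant data.
RATE_LIMIT_F_PER_HR = 100.0   # assumed normal/upset heatup-rate limit

def heatup_hours(t_start_f: float, t_end_f: float, rate=RATE_LIMIT_F_PER_HR) -> float:
    """Minimum time to heat from t_start_f to t_end_f without exceeding the rate limit."""
    return max(0.0, t_end_f - t_start_f) / rate

# e.g. cold shutdown (~70 degF) to a typical PWR coolant operating temperature (~550 degF)
print(f"{heatup_hours(70, 550):.1f} hours")  # 4.8 hours
```

So even a cold-to-hot heatup spans several hours when held to the rate limit, which is why the limit shapes normal startup schedules.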

mheslep
mheslep
Gold Member
... They run at low pressure, right?
One atm, plus the hydrostatic pressure of the fluid, is the assumption by several of the MSR firms. Their vessel challenge is proving long-term vessel life given corrosion from the salt at ~700C, things like free fluorine, and, I suppose, with fuel dissolved in the salt, a higher radiation load on the MSR vessel than is encountered with light water reactors.

The stress analysis for them might look a lot different than the analyses for BWR / PWR vessels, where the pressures are medium/high (1000 / 2500 psia).

mheslep
Gold Member
The ASME boiler and pressure vessel code for nuclear pressure vessels assumes normal and upset operations are always within 100 degF per hour, and that only emergency or faulted conditions exceed that limit. This is part of the vessel stress analysis. There are typically a certain number of pre-analyzed cycles for exceeding these limits; however, normal operations keep you within the limit.

For something like LFTR where you have no pressure, it will likely be very different as you don't have steam generation and high pressure.
Thanks for the response.

@Astronuc , no, I was thinking of loss of water in an accident scenario like TMI, where, if I remember correctly from what I have read, the operators mismanaged the water level because the gauges gave wrong readings. So in that sense, less dense water, or no water at all, is part of the negative feedback, since there are then far fewer thermal neutrons; most are fast, which slows or stops the chain reaction?

Now a few more questions I just want to clear out of the way. I am reading that even though the rate of power increase in a reactor is limited by the thermal/mechanical stress limits of the materials, as far as the nuclear reaction and reactor stability are concerned it would actually be better to increase power much more rapidly to the stable designed output. Why is this so, and is it true? The reason I read is that if the reactor were brought to full power too slowly, there would be too many fission neutrons and the reactivity could get dangerously high. Why is this so?

Is it because when you increase reactor power (no matter how slowly), which essentially means increasing its neutron flux, you have to operate at a k > 1 condition, and a k > 1 condition yields an exponentially ever-increasing neutron rate from both the prompt and the delayed neutrons, so that after a given time the delayed neutrons would be too many?

And another thing about prompt neutrons that I want to clear up. I have basically understood that when operating a reactor, whether at a stable power level or an increasing one, you always want to keep the prompt neutrons at or below some maximum, say 0.9. Then you know how many delayed neutrons you will get, and after how long on average, so you can be sure what the reactor power increase will be, since isotope half-lives don't change: given a certain number of prompt neutrons, you know the average delayed-neutron/energy increase at any point during the ramp. But once you get past a given threshold of prompt neutrons, say above a coefficient of 1, the energy increase no longer waits for any delayed neutrons but is driven entirely by prompt ones, and since they are so fast at splitting the next nucleus, and the next, and so on, the reaction gets out of hand time-wise?

Astronuc
Staff Emeritus
@Astronuc , no, I was thinking of loss of water in an accident scenario like TMI, where, if I remember correctly from what I have read, the operators mismanaged the water level because the gauges gave wrong readings. So in that sense, less dense water, or no water at all, is part of the negative feedback, since there are then far fewer thermal neutrons; most are fast, which slows or stops the chain reaction?
Well, with regard to power changes, I was addressing 'normal' operation as opposed to anomalies or accidents. When water in the core becomes less dense, it moderates less, and that will cause the power to decrease. In the case of TMI, where water is lost, the event usually triggers a shutdown or 'scram' of the reactor, i.e., the control rods are inserted into the core, and the reactor power decreases. However, there remains 'decay heat' from the decay of fission products.

The accident began about 4 a.m. on Wednesday, March 28, 1979, when the plant experienced a failure in the secondary, non-nuclear section of the plant (one of two reactors on the site). Either a mechanical or electrical failure prevented the main feedwater pumps from sending water to the steam generators that remove heat from the reactor core. This caused the plant's turbine-generator and then the reactor itself to automatically shut down. Immediately, the pressure in the primary system (the nuclear portion of the plant) began to increase. In order to control that pressure, the pilot-operated relief valve (a valve located at the top of the pressurizer) opened. The valve should have closed when the pressure fell to proper levels, but it became stuck open. Instruments in the control room, however, indicated to the plant staff that the valve was closed. As a result, the plant staff was unaware that cooling water was pouring out of the stuck-open valve.
The initiating event was a 'loss of feedwater' to the steam generator, followed by a turbine trip. The primary system began to overheat since the secondary side was not removing heat from the primary coolant system.
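To give a feel for how large that decay heat is right after shutdown, here is a sketch using the classic Way-Wigner rule of thumb; the one-year operating time is an assumption, and this is a back-of-envelope estimate, not a licensing-grade (e.g. ANS-5.1) calculation:

```python
# Decay-heat fraction after shutdown via the Way-Wigner approximation:
#   P/P0 ~ 0.066 * (t^-0.2 - (t + T)^-0.2), t = seconds since shutdown,
#   T = prior operating time in seconds. A textbook rule of thumb only.
def decay_heat_fraction(t_after_shutdown_s: float, t_operating_s: float) -> float:
    t, T = t_after_shutdown_s, t_operating_s
    return 0.066 * (t**-0.2 - (t + T)**-0.2)

T_op = 365 * 24 * 3600.0  # assume one year at full power before shutdown
for t in (1.0, 60.0, 3600.0, 24 * 3600.0):
    print(f"{t:>8.0f} s after scram: {100 * decay_heat_fraction(t, T_op):.2f} % of full power")
```

Roughly 6% of full power one second after scram, falling to about 1% after an hour: still megawatts in a power reactor, which is why coolant loss matters even after shutdown.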

Now a few more questions I just want to clear out of the way. I am reading that even though the rate of power increase in a reactor is limited by the thermal/mechanical stress limits of the materials, as far as the nuclear reaction and reactor stability are concerned it would actually be better to increase power much more rapidly to the stable designed output. Why is this so, and is it true? The reason I read is that if the reactor were brought to full power too slowly, there would be too many fission neutrons and the reactivity could get dangerously high. Why is this so?
There is some confusion expressed here. Reactor heatup from cold temperatures (room or somewhat hotter) is normally done at zero core power in PWRs, whereas BWRs may actually use nuclear heat to start heating up. PWR vessels must hold higher pressure (~155-158 bar) as compared to BWR vessels (72-74 bar). Once the reactor is at operating conditions, power ascension is usually unrestricted up to about 20 to 30% power for a PWR, but there may be restrictions self-imposed by the operator to rates of 10%, 15% or 30% per hour, and there is usually a hold at about 15% to get the turbine rolling and synchronized with the grid. I'm not sure about BWRs. Beyond low power, power ascension rates may be 3 to 5% per hour, or slightly faster if the fuel is considered conditioned. Some German PWRs can increase power at a rate of 60%/hr up to a particular threshold, which is plant specific.

To increase power, PWR operators will gradually dilute the boric acid (B-10) in the coolant, since most of the control rods are withdrawn. One group of control rods may be partially inserted to control axial power shape, but will be gradually withdrawn before full power.

BWRs have most control rods fully withdrawn during startup. There will be at least two groups, one inserted deeply and the other shallowly, to control the power. Slight adjustments are made as the core heats up and fuel temperature and steam voids in the core reduce moderation.

Delayed neutrons represent a fraction (~0.007, or 0.7%) of the neutron population, and this fraction is assigned a value of $1.00 of reactivity. Reactivity is a measure of how k differs from 1 when the core configuration changes, e.g., when neutron absorbers are added to or removed from the reactor. There are also values of reactivity assigned to core conditions, e.g., fuel temperature and coolant (moderator) density. Normally, reactivity changes (increases) are on the order of cents or low fractions of $1, since a small change can increase power at a sufficient rate.

So with delayed neutrons representing about 0.007 of the neutron population, the prompt neutrons represent 0.993. If the reactivity is increased by $1.00, the reactor would be prompt critical, which is a forbidden state in a reactor. We do plan for the case where there is a reactivity transient greater than $1.00, e.g., if a control rod is ejected from the core while the reactor is critical.
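The dollar convention above can be sketched numerically. This is a minimal illustration; the βeff value of 0.0065 is a representative figure for a U-235-fueled LWR and varies with fuel type and burnup:

```python
# Reactivity expressed in dollars: rho($) = rho / beta_eff.
# beta_eff ~ 0.0065 is a representative LWR value (an assumption, varies with fuel/burnup).
BETA_EFF = 0.0065

def reactivity_from_k(k_eff: float) -> float:
    """rho = (k - 1) / k, dimensionless."""
    return (k_eff - 1.0) / k_eff

def dollars(rho: float, beta=BETA_EFF) -> float:
    return rho / beta

for k in (1.001, 1.0065, 1.010):
    rho = reactivity_from_k(k)
    state = "PROMPT CRITICAL" if dollars(rho) >= 1.0 else "delayed critical"
    print(f"k = {k:.4f}  rho = {1e5*rho:6.1f} pcm = {dollars(rho):5.2f} $  -> {state}")
```

Note how small the margin is: k only has to exceed 1 by roughly βeff (well under 1%) before the core no longer needs delayed neutrons to sustain the chain reaction.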

Ok, I have another question with regard to this. If the prompt vs delayed neutron balance is so important for a safe power increase in a reactor, and the prompt neutrons cannot be allowed to exceed 0.99 or thereabouts of the reactivity coefficient k, then how do the reactor operators monitor this so precisely?
I assume there are not just 100 or 1000 prompt neutrons in a reactor at a k = 1 or k > 1 condition; there are probably thousands if not millions of them, correct? But Geiger-Muller tubes, to the best of my knowledge, aren't precise enough to give a specific count of such a vast number when the flux is large. I assume there is continuous electrical conduction between the anode and the cathode in the tube once the particle count is above a certain threshold, so a dosimeter can then only base its relative particle count on the resistance or voltage drop in the tube, or so?

Anyway, I am puzzled by how the reactor operators can keep the reactivity coefficient so precise, given how many neutrons there are in each fraction of a second, and how that count can be managed by simple mechanical control rods that are driven into or out of the active zone.

mfb
Mentor
There are neutron detectors that work with much higher flux rates.
You can monitor the temperature changes.
You can measure the radiation at various places outside the core.
Computer models simulate the reactor and allow predictions about k based on all the measurements.
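One standard way those measurements translate into a reactivity number is via the reactor period: watch how fast the flux reading rises, and infer reactivity from the inhour relation. Here is a one-delayed-group sketch; βeff, the effective precursor decay constant, and the generation time are representative LWR values assumed for illustration:

```python
# Estimating reactivity from a measured reactor period (one-delayed-group inhour relation):
#   rho = Lambda/T + beta/(1 + lambda*T)
# beta_eff, lambda_eff, and Lambda below are representative LWR values (assumptions).
import math

BETA_EFF = 0.0065      # effective delayed neutron fraction
LAMBDA_EFF = 0.08      # effective one-group precursor decay constant, 1/s
GEN_TIME = 2.0e-5      # prompt neutron generation time Lambda, s

def reactivity_from_period(period_s: float) -> float:
    return GEN_TIME / period_s + BETA_EFF / (1.0 + LAMBDA_EFF * period_s)

doubling_time = 60.0                    # suppose the detectors show power doubling every 60 s
period = doubling_time / math.log(2.0)  # asymptotic period T
rho = reactivity_from_period(period)
print(f"T = {period:.1f} s, rho = {1e5*rho:.0f} pcm ({rho/BETA_EFF:.2f} $)")
```

A leisurely 60-second doubling time corresponds to only about a tenth of a dollar of reactivity, which is why operators can control the core through slow flux trends rather than counting individual neutrons.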

Ok, I have another question with regard to this. If the prompt vs delayed neutron balance is so important for a safe power increase in a reactor, and the prompt neutrons cannot be allowed to exceed 0.99 or thereabouts of the reactivity coefficient k, then how do the reactor operators monitor this so precisely?
I assume there are not just 100 or 1000 prompt neutrons in a reactor at a k = 1 or k > 1 condition; there are probably thousands if not millions of them, correct? But Geiger-Muller tubes, to the best of my knowledge, aren't precise enough to give a specific count of such a vast number when the flux is large. I assume there is continuous electrical conduction between the anode and the cathode in the tube once the particle count is above a certain threshold, so a dosimeter can then only base its relative particle count on the resistance or voltage drop in the tube, or so?

Anyway, I am puzzled by how the reactor operators can keep the reactivity coefficient so precise, given how many neutrons there are in each fraction of a second, and how that count can be managed by simple mechanical control rods that are driven into or out of the active zone.
Prompt neutron ratios are controlled by core design. We operators never have to worry about them in terms of reactor safety.

The thermal neutron flux in a full power reactor at a power plant is in the 10^13 neutrons per cm^2 per second range. This is an absolutely massive number of neutrons. It's not like 10-100.

We operators don't need to do anything. The core design maintains stability under normal and transient conditions as far as prompt/delayed neutrons are concerned. The system stays thermodynamically stable. Moderator effects and temperature effects and sometimes control systems are used to hold things steady, and transients which result in a reactivity spike are analyzed for worst-case conditions to ensure the core doesn't go prompt critical.

There are certain events where you may have localized prompt criticality but the total core is stable or subcritical. For example a control rod drop accident in a BWR can do this. You have localized fuel damage but no gross core damage.
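To put that 10^13 n/cm^2/s figure in perspective, here is a back-of-envelope conversion from flux to power density; the macroscopic fission cross section is an illustrative round number, not data for any specific core:

```python
# Back-of-envelope: what a thermal flux of ~1e13 n/cm^2/s implies for power density.
# SIGMA_F and PHI are illustrative round numbers, not values for any specific core.
PHI = 1.0e13           # thermal neutron flux, n/cm^2/s (order of magnitude from the post)
SIGMA_F = 0.1          # assumed macroscopic fission cross section of the fuel region, 1/cm
E_FISSION_J = 3.2e-11  # ~200 MeV released per fission, in joules

fission_rate = PHI * SIGMA_F              # fissions per cm^3 per second
power_density = fission_rate * E_FISSION_J
print(f"{fission_rate:.1e} fissions/cm^3/s  ->  {power_density:.0f} W/cm^3")
```

Around 10^12 fissions per cubic centimeter per second, i.e. tens of watts per cm^3: counting individual neutrons is hopeless, which is why high-flux instruments measure currents and rates of change instead.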

mheslep
Ok, so I gather from what you say, and also from what I have read, that most second- and latest-generation reactors are calculated and designed such that under normal operating conditions they cannot go supercritical; in other words, their coolant and solid moderators combined make it impossible to go supercritical to the point of bomb-like chain-reaction speed? So what happens, for example, in a PWR or BWR if the coolant is lost, like at Fukushima or TMI? If I remember correctly, even then the design doesn't allow for supercriticality; it's only that with no coolant, the decay heat eventually melts the fuel cladding and turns the fuel into a pool of lava at the base of the reactor vessel, where it generates gases by reacting with the surrounding metals. These are either vented out of the vessel, or they can cause pressure damage to the vessel, or hydrogen formation and a hydrogen gas explosion, but still no criticality explosion, correct?

So if the reactor has built-in moderators, like graphite in some reactors and/or water/heavy water in others, then why are control rods needed at all? Do they simply serve the purpose of stopping the reactor when needed, and of reactivity increase/decrease, i.e., power level up or down?
So technically, let's imagine that in a modern second- or third-generation plant you by accident (no matter how unlikely) withdraw, say, all control rods. What happens? Does the reactor design keep the power at some maximum level and never allow a prompt critical condition, or is such a situation still theoretically possible? I would imagine that in a PWR the maximum level would be constrained by the water evaporating and hence fewer thermal neutrons being produced, which slows the U-235 reaction?

One more thing: I assume that in some older reactor designs, like the rather infamous Soviet beast by the name of RBMK, a huge responsibility rested on the operators. It seems that in the RBMK the design was such that moving the control rods, combined with manipulating the water level in the reactor, basically allowed it to go from its minimum 500 MW thermal all the way to prompt-critical, nuclear-bomb mode, correct? So would it be correct to say the design had a flaw in that it wasn't made foolproof, at least not to the point where a few, or maybe even one, mechanical actions couldn't lead to an apocalyptic power excursion? What would happen to an RBMK, as compared to a PWR or BWR, if all conditions were normal except that all control rods were somehow withdrawn? (I do realize that a PWR's safety systems might not allow the withdrawal of all rods without being bypassed.)

I understand I am asking many questions here, but I would like to know about the neutron measurements during operation of the active zone. If I gather correctly from what you say, if the reactor is built such that it normally keeps its reactivity contained, then the neutron detectors serve a mostly informational and approximate role? Because, given the huge number of neutrons in the active zone, is there any detector that can accurately show the neutron count? I have a hard time imagining how a Geiger-Muller tube or a scintillator could measure such a high neutron flux, because the avalanche conduction in the tube, or the photon/electron multiplication in the scintillator, would be continuous, and all the power-supply current through the anode and cathode would simply be used at its maximum. Isn't this the case?

P.S. I talked to a local nuclear engineer, and he thinks the first explosion at Chernobyl reactor 4 was a weak but genuinely nuclear explosion rather than a simple steam explosion. What do you think? I must say there was an awful lot of damage for a steam explosion: huge reinforced-concrete pillars and walls were thrown aside like sticks. The roof of the reactor hall wasn't particularly strong, just a simple industrial type, but the walls and support structure were fairly sturdy, so I don't know. Has anyone any estimates of the power yield of that explosion?
Eyewitness accounts say the blast was very strong and was heard miles away; local residents even called the station asking what was going on.

Thanks.

etudiant
Gold Member
Will defer to the experts we benefit from on this site. They will have better insight.
My understanding is that the Chernobyl reactor did go up in power by about a thousand times in an instant as some of the control rods were being reinserted, because the ends of the control rods did not absorb neutrons, but only slowed them. That was enough power to ensure world's most effective steam explosion as all the coolant was flashed into very high temperature steam in a fraction of a second.
A nuclear explosion by itself is just a heat source, the particles streaming from the fission or fusion reactions may be lethal, but are not really that damaging, a reality that has of course stimulated the development of neutron bombs. Here the steam explosion did the damage.
Chernobyl was a nuclear excursion, but not an explosion, because the reactor blew apart before there could be a nuclear explosion. It underscores that the problem in making a nuclear bomb is how to keep the components together long enough for the explosion to occur.
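The speed of such an excursion can be sketched with basic point kinetics: once reactivity exceeds β, the prompt period is roughly τ = Λ/(ρ - β) and power grows like exp(t/τ). The numbers below (β, generation time, and a 2 $ insertion) are purely illustrative assumptions, not an RBMK analysis:

```python
# How fast power grows once a core is prompt supercritical:
#   prompt period  tau ~ Lambda / (rho - beta),   power ~ exp(t / tau).
# beta, Lambda, and the 2 $ insertion are illustrative assumptions only.
import math

BETA = 0.0065       # delayed neutron fraction (assumed)
GEN_TIME = 1.0e-4   # prompt neutron generation time Lambda, s (assumed)
RHO = 2.0 * BETA    # suppose 2 $ of reactivity is inserted

tau = GEN_TIME / (RHO - BETA)        # prompt period, s
t_x1000 = tau * math.log(1000.0)     # time for a 1000x power rise
print(f"tau = {tau*1e3:.1f} ms; power rises x1000 in {t_x1000:.2f} s")
```

With these assumed values the power multiplies a thousandfold in roughly a tenth of a second, consistent with the "in an instant" description above, and far faster than any mechanical shutdown system can act.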

Astronuc
Staff Emeritus
Ok, so I gather from what you say, and also from what I have read, that most second- and latest-generation reactors are calculated and designed such that under normal operating conditions they cannot go supercritical; in other words, their coolant and solid moderators combined make it impossible to go supercritical to the point of bomb-like chain-reaction speed? So what happens, for example, in a PWR or BWR if the coolant is lost, like at Fukushima or TMI? If I remember correctly, even then the design doesn't allow for supercriticality; it's only that with no coolant, the decay heat eventually melts the fuel cladding and turns the fuel into a pool of lava at the base of the reactor vessel, where it generates gases by reacting with the surrounding metals. These are either vented out of the vessel, or they can cause pressure damage to the vessel, or hydrogen formation and a hydrogen gas explosion, but still no criticality explosion, correct?

So if the reactor has built-in moderators, like graphite in some reactors and/or water/heavy water in others, then why are control rods needed at all? Do they simply serve the purpose of stopping the reactor when needed, and of reactivity increase/decrease, i.e., power level up or down?
There are a number of important technical aspects here. One aspect is normal reactor operation with safeguards to check abnormal situations. The other is abnormal or accident conditions, and how reactors are designed to mitigate adverse consequences.

Here is a reasonably good discussion of criticality, and particularly prompt criticality.
http://www.nuclear-power.net/nuclea.../reactor-criticality/prompt-critical-reactor/

A commercial reactor could go prompt critical if the reactivity increase in the core exceeds the effective delayed neutron fraction (βeff). When a control rod or controlling neutron absorber is withdrawn or removed from the core, keff increases above 1. The reactor is configured so that changes in k, or Δk, are less than βeff. Operators are concerned about operational errors or mishaps where too much reactivity is added to the core.
There are certain events where you may have localized prompt criticality but the total core is stable or subcritical. For example a control rod drop accident in a BWR can do this. You have localized fuel damage but no gross core damage.
Hiddencamper provided an example of localized prompt criticality which would result in local fuel damage, but still allow the reactor to shutdown while maintaining coolability. The PWR 'control rod ejection' is the PWR analog to BWR control rod drop accident.

We are also concerned about accidents in which large amounts of reactivity (e.g., the reactivity addition is >> βeff) are inserted in the core. But I probably need to explain how power responds to reactivity insertions.

Control rods are required in order to assure 'shutdown' of the nuclear reactor, and maintain k < 1. In PWRs, control rods typically sit above the core during operation, although some special control rods may be inserted to facilitate power distribution or are used during power maneuvering (see my previous post on load-following); otherwise, reactivity control is maintained with soluble boron in the coolant, in conjunction with burnable poisons (neutron absorbers) in the fuel. In contrast, BWRs use control rods during operation, since they cannot use soluble boron in the coolant like PWRs, due to the boiling in the core. BWR fuel also uses burnable poisons in the fuel.
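The soluble-boron scheme can be sketched numerically. The differential boron worth used here (-10 pcm per ppm) is a typical order of magnitude for a PWR, not a plant-specific value:

```python
# Sketch of PWR reactivity control via boron dilution. The boron worth below
# (-10 pcm per ppm) is a typical order of magnitude, not a plant-specific value.
BORON_WORTH_PCM_PER_PPM = -10.0   # assumed differential boron worth
BETA_EFF = 0.0065                 # representative effective delayed neutron fraction

def reactivity_change_pcm(ppm_initial: float, ppm_final: float) -> float:
    """Reactivity added when soluble boron goes from ppm_initial to ppm_final."""
    return BORON_WORTH_PCM_PER_PPM * (ppm_final - ppm_initial)

# Diluting from 1200 ppm to 1190 ppm adds a small, controllable amount of reactivity:
d_rho = reactivity_change_pcm(1200.0, 1190.0)      # +100 pcm
print(f"{d_rho:+.0f} pcm = {d_rho / 1e5 / BETA_EFF:+.2f} $")
```

A 10 ppm dilution is worth only about 0.15 $ with these assumed values, which illustrates why slow boron adjustment is a comfortable way to maneuver power while keeping reactivity changes well below the $1.00 prompt-critical threshold.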