Prompt neutrons, delayed neutrons, chain reaction control

Summary
The discussion centers on the mechanics of nuclear reactors, particularly the roles of prompt and delayed neutrons in fission. Prompt neutrons, released immediately during fission, make up nearly all of the neutron population and carry higher average energy, while delayed neutrons, emitted by the decay of fission products seconds later, are what make the chain reaction slow enough to control. Reactors are designed so that prompt neutrons alone cannot sustain the chain reaction; the small delayed fraction provides the margin within which control rods and feedback mechanisms regulate power. The energy distribution of these neutrons affects their likelihood of causing fission, with delayed neutrons being born at lower energies and therefore less effective at causing fast fissions. Understanding these dynamics is essential for reactor control and safety.
  • #31
OK, I have another question about this: if the prompt vs. delayed neutron balance is so important for a safe power increase in a reactor, and the prompt-neutron contribution to the multiplication factor k cannot be allowed to exceed roughly 0.99, then how do reactor operators monitor this so precisely?
I assume there are not just 100 or 1,000 prompt neutrons when the reactor is at k = 1 or k > 1; there are probably thousands if not millions of them, correct? But Geiger-Müller tubes, to the best of my knowledge, aren't precise enough to give a specific count of such a vast number when the flux is large. I assume there is essentially continuous conduction between the anode and the cathode of the tube once the particle rate exceeds a certain threshold, so a dosimeter can then only infer a relative count from the resistance or voltage drop across the tube, or something like that?

Anyway, I am puzzled by how reactor operators can hold the reactivity so precisely, given how many neutrons there are in every fraction of a second, and how that can be managed with simple mechanical control rods driven into or out of the active zone.
 
  • #32
There are neutron detectors that work with much higher flux rates.
You can monitor the temperature changes.
You can measure the radiation at various places outside the core.
Computer models simulate the reactor and allow predictions about k based on all the measurements.
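
As a rough sketch of how reactivity can be inferred from detector readings (generic textbook constants, not any plant's actual instrumentation or procedure): excore detectors give a count rate roughly proportional to power, the reactor period follows from how that rate changes over time, and a one-delayed-group form of the inhour relation converts the period to reactivity.

```python
import math

# One-delayed-group constants; illustrative textbook values, not plant data
BETA = 0.0065        # effective delayed neutron fraction (U-235 fuelled core)
LAMBDA_EFF = 0.08    # effective delayed-neutron precursor decay constant, 1/s

def reactor_period(rate_1, rate_2, dt):
    """Estimate the asymptotic period T from two detector count rates taken
    dt seconds apart, assuming exponential behaviour P(t) = P0 * exp(t / T)."""
    return dt / math.log(rate_2 / rate_1)

def reactivity_from_period(T):
    """One-group inhour approximation for long periods: rho ~ beta / (1 + lambda*T)."""
    return BETA / (1.0 + LAMBDA_EFF * T)

# Example: detector rate creeps from 1.00e5 to 1.02e5 counts/s over 10 s
T = reactor_period(1.00e5, 1.02e5, 10.0)     # ~500 s period
rho = reactivity_from_period(T)              # ~1.6e-4, i.e. ~16 pcm
print(f"period ~ {T:.0f} s, reactivity ~ {rho * 1e5:.0f} pcm ({rho / BETA:.3f} $)")
```

The point of the sketch is only that small reactivities show up as long, easily measured periods, so precision in neutron counting is not what the control problem actually hinges on.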
 
  • #33
Lacplesis said:
OK, I have another question about this: if the prompt vs. delayed neutron balance is so important for a safe power increase in a reactor, and the prompt-neutron contribution to the multiplication factor k cannot be allowed to exceed roughly 0.99, then how do reactor operators monitor this so precisely?
I assume there are not just 100 or 1,000 prompt neutrons when the reactor is at k = 1 or k > 1; there are probably thousands if not millions of them, correct? But Geiger-Müller tubes, to the best of my knowledge, aren't precise enough to give a specific count of such a vast number when the flux is large. I assume there is essentially continuous conduction between the anode and the cathode of the tube once the particle rate exceeds a certain threshold, so a dosimeter can then only infer a relative count from the resistance or voltage drop across the tube, or something like that?

Anyway, I am puzzled by how reactor operators can hold the reactivity so precisely, given how many neutrons there are in every fraction of a second, and how that can be managed with simple mechanical control rods driven into or out of the active zone.

Prompt neutron fractions are controlled by core design. We operators never have to worry about them in terms of reactor safety.

The thermal neutron flux in a full power reactor at a power plant is in the 10^13 neutrons per cm^2 per second range. This is an absolutely massive number of neutrons. It's not like 10-100.
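
A quick back-of-the-envelope sketch of just how massive that is (assumed round numbers, not data for any specific plant): flux is neutron density times neutron speed, so n = φ/v, and multiplying by a core volume of a few tens of cubic metres gives the standing neutron population.

```python
# Back-of-the-envelope neutron population estimate (illustrative round numbers)
phi = 1e13            # thermal flux, neutrons / (cm^2 * s)
v_thermal = 2.2e5     # speed of a 0.025 eV thermal neutron, cm/s
core_volume = 3.0e7   # ~30 m^3 of core, expressed in cm^3 (assumed round number)

n_density = phi / v_thermal          # ~4.5e7 neutrons per cm^3
n_total = n_density * core_volume    # ~1.4e15 neutrons in the core at any instant
print(f"{n_density:.1e} n/cm^3, ~{n_total:.1e} neutrons in the core")
```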

We operators don't need to do anything. The core design maintains stability under normal and transient conditions as far as prompt/delayed neutrons are concerned; the system stays stable through thermodynamic feedback. Moderator effects, temperature effects, and sometimes control systems are used to hold things steady, and transients that result in a reactivity spike are analyzed for worst-case conditions to ensure the core doesn't go prompt critical.

There are certain events where you may have localized prompt criticality but the total core is stable or subcritical. For example a control rod drop accident in a BWR can do this. You have localized fuel damage but no gross core damage.
 
  • #35
OK, so I gather from what you say, and also from what I have read over the years, that most second- and latest-generation reactors are designed so that under normal operating conditions they cannot go supercritical; in other words, their coolant and solid moderators combined make it impossible to reach anything like a bomb-type chain reaction speed? So what happens, for example, in a PWR or a BWR if the coolant is lost, as at Fukushima or TMI? If I remember correctly, even then the design doesn't allow supercriticality; it's just that with no coolant the decay heat eventually melts the fuel cladding and turns the fuel into a mass of lava at the base of the reactor vessel, where it reacts with the surrounding metals to produce gases that are either vented out of the vessel or cause pressure damage, or hydrogen formation and a hydrogen gas explosion, but still no criticality explosion, correct?

So if the reactor has built-in moderators, like graphite in some reactors and light or heavy water in others, then why are the control rods needed at all? Are they simply there to stop the reactor when needed and to raise or lower reactivity, i.e. the power level?

So technically, let's imagine that in a modern second- or third-generation plant you accidentally (however unlikely) withdraw all the control rods. What happens? Does the reactor design hold the power at some maximum level and never allow a prompt critical condition, or is such a situation still theoretically possible? I would imagine that in a PWR the maximum level would be limited by the water boiling away, and hence less thermal neutron production, which slows the U-235 reaction?

One more thing: I assume that in some older designs, like the rather infamous Soviet beast by the name of RBMK, a huge responsibility rested on the operators. It seems that in the RBMK the design was such that moving the control rods, combined with manipulating the water level in the reactor, basically allowed the reactor to go from its minimum of 500 MW thermal all the way to a prompt critical, bomb-like condition, correct? So would it be correct to say the design had a flaw, in that it wasn't made foolproof, at least not to the point where a few, or maybe even one, mechanical action couldn't lead to an apocalyptic power excursion? What would happen to an RBMK, compared with a PWR or BWR, if all conditions were normal except that all the control rods were somehow withdrawn? (I do realize that a PWR's safety systems might not allow the withdrawal of all rods without being bypassed.)

I realize I'm asking a lot of questions, but I would also like to know about neutron measurements during operation of the active zone. If I gather correctly from what you say, if the reactor is built so that it normally keeps its own reactivity in check, then the neutron detectors serve a mostly informational and approximate role? Given the huge number of neutrons in the active zone, is there any detector that can accurately show the neutron count? I have a hard time imagining how a Geiger-Müller tube or a scintillator could measure such a high neutron flux, because the avalanche current in the tube, or the photon/electron multiplication in the scintillator, would be continuous and the power-supply current through the anode and cathode would simply be at its maximum. Isn't that the case?

P.S. I talked to a local nuclear engineer and he thinks the first explosion of Chernobyl reactor 4 was a weak but genuinely nuclear explosion rather than a simple steam explosion. What do you think? I must say there was an awful lot of damage for a steam explosion: huge reinforced-concrete pillars and walls were thrown aside like sticks. The roof of the reactor hall wasn't particularly strong, just a simple industrial type, but the walls and support structure were fairly sturdy, so I don't know. Does anyone have an estimate of the yield of that explosion?
Eyewitness accounts say the blast was very strong and was heard miles away; local residents even called the station asking what was going on. Thanks.
 
  • #36
Will defer to the experts we benefit from on this site. They will have better insight.
My understanding is that the Chernobyl reactor did go up in power by about a thousand times in an instant as some of the control rods were being reinserted, because the tips of the control rods did not absorb neutrons but only moderated (slowed) them. That was enough power to produce the world's most effective steam explosion, as all the coolant was flashed into very high-temperature steam in a fraction of a second.
A nuclear explosion by itself is just a heat source; the particles streaming from the fission or fusion reactions may be lethal, but they are not in themselves especially destructive, a reality that has of course stimulated the development of neutron bombs. Here the steam explosion did the damage.
Chernobyl was a nuclear excursion, but not a nuclear explosion, because the reactor blew itself apart before a true nuclear explosion could develop. It underscores that the hard problem in building a nuclear bomb is keeping the components together long enough for the explosion to occur.
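
To put a rough number on "in an instant" (a minimal point-kinetics sketch with assumed generic constants, not an analysis of the actual accident): once a core is prompt supercritical, delayed neutrons no longer limit the response and power grows roughly as exp((ρ − β)t/Λ), where Λ is the prompt neutron generation time.

```python
import math

# Illustrative point-kinetics numbers, not an analysis of the actual accident
BETA = 0.0065        # effective delayed neutron fraction
GEN_TIME = 1e-3      # prompt neutron generation time, s (graphite core, assumed)
rho = 0.010          # inserted reactivity, well above beta => prompt supercritical

alpha = (rho - BETA) / GEN_TIME        # prompt exponential growth rate, 1/s
t_1000x = math.log(1000.0) / alpha     # time for power to rise 1000-fold
print(f"growth rate ~ {alpha:.1f} 1/s, 1000x power in ~ {t_1000x:.2f} s")
# With these assumed numbers: ~3.5 1/s, so roughly 2 s to multiply power 1000x.
```

Even with the relatively long generation time of a graphite-moderated core, the rise is far faster than any mechanical control system can follow.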
 
  • #37
Lacplesis said:
OK, so I gather from what you say, and also from what I have read over the years, that most second- and latest-generation reactors are designed so that under normal operating conditions they cannot go supercritical; in other words, their coolant and solid moderators combined make it impossible to reach anything like a bomb-type chain reaction speed? So what happens, for example, in a PWR or a BWR if the coolant is lost, as at Fukushima or TMI? If I remember correctly, even then the design doesn't allow supercriticality; it's just that with no coolant the decay heat eventually melts the fuel cladding and turns the fuel into a mass of lava at the base of the reactor vessel, where it reacts with the surrounding metals to produce gases that are either vented out of the vessel or cause pressure damage, or hydrogen formation and a hydrogen gas explosion, but still no criticality explosion, correct?

So if the reactor has built-in moderators, like graphite in some reactors and light or heavy water in others, then why are the control rods needed at all? Are they simply there to stop the reactor when needed and to raise or lower reactivity, i.e. the power level?
There are a number of important technical aspects here. One aspect is normal reactor operation with safeguards to check abnormal situations. The other is abnormal or accident conditions, and how reactors are designed to mitigate adverse consequences.

Here is a reasonably good discussion of criticality, and particularly prompt criticality.
http://www.nuclear-power.net/nuclea.../reactor-criticality/prompt-critical-reactor/

A commercial reactor could go prompt critical if the reactivity increase in the core exceeds the effective delayed neutron fraction (βeff). When a control rod or other neutron absorber is withdrawn or removed from the core, keff increases above 1. The reactor is configured so that changes in k, or Δk, stay less than βeff. Operators are concerned about operational errors or mishaps in which too much reactivity is added to the core.
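
As a small numerical illustration of that criterion (βeff ≈ 0.0065 is a typical value for a fresh U-235 fuelled core; the real value varies with fuel composition and burnup): reactivity is ρ = (keff − 1)/keff, and the core is prompt critical once ρ ≥ βeff, i.e. once the chain reaction can be sustained on prompt neutrons alone.

```python
# Illustrative check of the prompt-criticality criterion rho >= beta_eff
BETA_EFF = 0.0065    # typical fresh U-235 core value; varies with burnup

def reactivity(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

for k in (1.002, 1.005, 1.010):
    rho = reactivity(k)
    state = "PROMPT CRITICAL" if rho >= BETA_EFF else "delayed critical (controllable)"
    print(f"k_eff = {k:.3f}: rho = {rho * 1e5:.0f} pcm -> {state}")
```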
Hiddencamper said:
There are certain events where you may have localized prompt criticality but the total core is stable or subcritical. For example a control rod drop accident in a BWR can do this. You have localized fuel damage but no gross core damage.
Hiddencamper provided an example of localized prompt criticality which would result in local fuel damage but still allow the reactor to shut down while maintaining coolability. The PWR 'control rod ejection' accident is the analog of the BWR control rod drop accident.

We are also concerned about accidents in which large amounts of reactivity (e.g., the reactivity addition is >> βeff) are inserted in the core. But I probably need to explain how power responds to reactivity insertions.
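
In the meantime, here is a one-delayed-group sketch (assumed generic constants) of the response to a step insertion below βeff: power first "prompt jumps" by roughly β/(β − ρ) and then rises on a stable period T ≈ (β − ρ)/(λeff·ρ), which for small insertions is tens of seconds to minutes, slow enough for control systems and operators to follow.

```python
# One-delayed-group, prompt-jump approximation with illustrative constants
BETA = 0.0065        # effective delayed neutron fraction
LAMBDA_EFF = 0.08    # effective precursor decay constant, 1/s

def step_response(rho):
    """Prompt jump factor and stable period for a step insertion rho < BETA."""
    jump = BETA / (BETA - rho)                   # immediate fractional power rise
    period = (BETA - rho) / (LAMBDA_EFF * rho)   # asymptotic reactor period, s
    return jump, period

for rho_pcm in (50, 100, 300):
    jump, T = step_response(rho_pcm * 1e-5)
    print(f"+{rho_pcm} pcm: prompt jump x{jump:.2f}, then period ~ {T:.0f} s")
```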

Control rods are required in order to assure 'shutdown' of the nuclear reactor and to maintain k < 1. In PWRs, control rods typically sit above the core during operation, although some special control rods may be inserted to shape the power distribution or are used during power maneuvering (see my previous post on load following); otherwise, reactivity control is maintained with soluble boron in the coolant, in conjunction with burnable poisons (neutron absorbers) in the fuel. In contrast, BWRs use control rods during operation, since boiling in the core prevents them from using soluble boron in the coolant as PWRs do. BWR fuel also contains burnable poisons.
 
