
Concurrent decay calculations in an operating reactor

  1. Nov 23, 2014 #1
    I can’t find yield data for the concurrent creation and decay of fission products in an operating reactor. All the references I can find on the net are for the decay of a fixed lump. In an operating reactor the lump is (for a while) growing at the same time it is decaying. So, I started my own model in a spreadsheet.

    For an isotope with a half-life of one year, and a reactor that operates for one year, I find that 72.1% of the isotope remains at shutdown time (ignoring any parent decays). After ten years of operation, this isotope’s population levels off at double that, 144% of its initial yield.

    Is this a universal figure? Is the equilibrium point for any isotope 44% more than its initial yield? The math seems fairly straightforward, though I never got the hang of partial differential equations. Is there an exact solution?

    Also, there is a rule-of-thumb: "After ten years, the radioactivity drops to effectively zero." Is there a similar rule: "After ten half-lives in a reactor, an isotope is at equilibrium?"
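    For reference, here is the constant-source model my spreadsheet implements, written out as a short Python sketch (assuming a constant production rate of one atom per unit time, no parent contributions, and no neutron absorption). If I've done the algebra right, the closed form N(t) = (P/λ)(1 − e^(−λt)) reproduces both of my numbers, and the equilibrium ratio is 1/ln 2 ≈ 1.4427:

    ```python
    import math

    def inventory(half_life, run_time, production_rate=1.0):
        """Atoms present after running a constant source for `run_time`,
        with production at `production_rate` atoms per unit time and
        decay constant lambda = ln(2) / half_life.
        Closed form: N(t) = (P/lambda) * (1 - exp(-lambda * t))."""
        lam = math.log(2) / half_life
        return (production_rate / lam) * (1.0 - math.exp(-lam * run_time))

    # One-year half-life, one year of operation: remaining / total produced
    print(inventory(1.0, 1.0))    # ~0.721, i.e. 72.1% remains at shutdown
    # After many half-lives the population levels off at 1/ln(2):
    print(inventory(1.0, 100.0))  # ~1.443, i.e. 144.3% of one year's yield
    ```

    So under these assumptions the 44% excess at equilibrium would be the same for any isotope, since 1/ln 2 does not depend on the half-life.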
  3. Nov 23, 2014 #2


    Staff Emeritus
    Science Advisor

    There is not an exact (or analytical) solution, since the isotopic vector is constantly changing with transmutation and depletion. When the power level changes, the rate of transmutation, and with it the production of radionuclides, changes as well.
  4. Nov 23, 2014 #3
    OK. Without transmutation, and with no contributions from decaying parents, what is the estimate for cumulative yield with respect to initial yield for an isotope that has reached equilibrium?
  5. Nov 23, 2014 #4


    Staff Emeritus
    Science Advisor

    Well, ignoring the physics of the system, one can write a simple ODE for dNi(t)/dt = production rate - decay rate = yi * Nf(t)σfΦ - λiNi(t), where

    yi = fission yield of the particular nuclide
    Nf(t) = fissile atom density, which is decreasing as a function of time
    σf = microscopic fission cross-section for the fissile isotope
    Φ = neutron flux
    λi = decay constant of the particular radionuclide
    Ni(t) = atomic density of radionuclide of interest, i.e., one is solving for this.

    At equilibrium, i.e., the rate of change is zero, the production rate = decay rate.

    Note that Nf starts at some initial value, and starts decreasing with fission, while Ni(t) starts at zero and increases.
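    As a rough illustration (the values below are arbitrary and chosen only to show the behavior, not to represent any real reactor), a simple Euler integration of the ODE above shows Ni(t) rising toward the quasi-equilibrium where production ≈ decay, while Nf slowly depletes:

    ```python
    import math

    # Illustrative values only; units arbitrary but mutually consistent.
    y      = 0.06        # fission yield of the nuclide of interest
    sigmaf = 1e-22       # microscopic fission cross-section, cm^2
    phi    = 1e13        # neutron flux, n/cm^2-s
    lam    = math.log(2) / (8.0 * 86400)   # decay constant, 8-day half-life
    Nf     = 1e21        # initial fissile atom density, atoms/cm^3
    Ni     = 0.0         # radionuclide density, starts at zero
    dt     = 3600.0      # one-hour time steps

    for _ in range(24 * 365):              # one year of operation
        dNf = -Nf * sigmaf * phi * dt      # fissile depletion
        dNi = (y * Nf * sigmaf * phi - lam * Ni) * dt
        Nf += dNf
        Ni += dNi

    # Near equilibrium the production rate ~ decay rate:
    print(y * Nf * sigmaf * phi, lam * Ni)
    ```

    After many half-lives the two printed rates agree to within a fraction of a percent; the small residual reflects the slow depletion of Nf.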

    In reality, a fission product radionuclide may be produced by decay (usually beta, but could be positron emission or electron capture), or neutron capture, as well as being transformed by neutron capture, in addition to direct production by fission and transformation (loss) by decay.

    Furthermore, the neutron flux is defined by an energy spectrum ranging from fast neutrons (MeVs) down to thermal energies (~0.025 eV). In practice, in order to determine fission and transmutation rates, the microscopic cross-sections σ(E) and neutron flux Φ(E) must be integrated over the energy range. The situation in-core is further complicated by fast fission of fertile nuclides, e.g., U-238, and transmutation of U into transuranic isotopes of Np, Pu, Am, Cm, . . .

    Decay chains cannot be ignored.
    Last edited: Nov 23, 2014
  6. Nov 23, 2014 #5
    I'm not ignoring decay chains. I just haven't gotten to them yet. And I'll handle absorption after that. I'm trying to separate the problem of tracking cumulative yields into manageable pieces. To put this in context, I am trying to simulate the isotopic composition of the salt in a molten salt reactor as the salt mix "matures" over several years of operation.
    The idea is to reduce the calculations for each decay chain by "collapsing" the short-lived isotopes onto the first long-lived isotope in the chain. I would like to collapse all of the isotopes with half-lives described in terms of seconds, minutes, and hours. This would reduce my calculation load by nearly 90%. I would only need to do a "live" simulation for isotopes that live for days or years. The smallest time-step in my model would be ten days, so I can "bulk load" (not ignore) the isotopes with a half-life of a day or less.
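    Here is a minimal sketch of that collapsing step, with a hypothetical helper and an illustrative one-day cutoff. The chain data are (name, half-life in seconds, yield) tuples, parent first; the Zn-70 direct yield is backed out so the total matches my 6.25e-9 figure:

    ```python
    def collapse_chain(chain, cutoff_s=86400.0):
        """Collapse a linear decay chain: members with half-lives below
        `cutoff_s` are assumed to reach equilibrium 'instantly', so their
        fission yields are credited to the next longer-lived member.
        A half-life of None marks a stable nuclide."""
        collapsed = []
        carried = 0.0
        for name, t_half, y in chain:
            if t_half is not None and t_half < cutoff_s:
                carried += y            # fold this member's yield downstream
            else:
                collapsed.append((name, t_half, y + carried))
                carried = 0.0
        return collapsed

    # The A = 70 chain (yields per fission, U-235 thermal, illustrative):
    chain_70 = [("Cu-70m1", 47.0, 4.46e-9),
                ("Cu-70",    4.5, 1.49e-9),
                ("Zn-70",   None, 3.0e-10)]
    print(collapse_chain(chain_70))   # everything lands on stable Zn-70
    ```

    With the one-day cutoff, both copper isotopes collapse onto Zn-70, which then accumulates at the combined 6.25e-9 atoms per fission.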
    Collapsing some Copper:
    For atomic weight 70, the decay chain is 70-Cu-m1 to 70-Cu to stable 70-Zn. (Other isotopes in this chain jump to chain 69 when 70-Ni decays via β-n decay.) The copper isotopes come to equilibrium less than ten minutes after the reactor starts. Thereafter, the zinc accumulates at a rate of 6.25e-9 atoms per fission (U-235 thermal). This is the sum of the collapsed copper and native zinc initial yields.
    My (highly aggregated) simulation operates at one fission per second, so it would take 5 years to produce a whole 70-Zn atom. At that point I turn the reactor off and want to calculate the amounts of 70-Cu-m1 and 70-Cu. The first has a half-life of 47 seconds and an initial yield of 4.46e-9. For the last 47 seconds of operation, (47)*(4.46e-9) 70-Cu-m1 nuclei are produced. 72.1% of those survive to the last second. That's 1.5e-7 nuclei. For 70-Cu the value is (4.5)*(1.49e-9)*(0.721) = 4.8e-9 nuclei.
    Considering nine more half-lives - if that is acceptable - I estimate that twice as much of the 70-Cu-m1 survives from time -470 seconds to reactor stop. For 70-Cu, only the last 45 seconds of operation matter. These contribute (a bit) to the heat that needs to be dispersed after the reactor has stopped.
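    That arithmetic can be packaged up. The sketch below (a hypothetical helper, values from my numbers above) uses the exact integral of production over the last ten half-lives, (R/λ)(1 − e^(−10·λ·T½)) ≈ R·T½/ln 2, which is indeed about twice the single-half-life estimate:

    ```python
    import math

    def shutdown_inventory(yield_per_fission, half_life_s, fission_rate=1.0,
                           window_half_lives=10):
        """Nuclei of a short-lived fission product present at shutdown,
        counting only production during the last `window_half_lives`
        half-lives of operation (earlier production has decayed away)."""
        lam = math.log(2) / half_life_s
        rate = yield_per_fission * fission_rate   # nuclei produced per second
        window = window_half_lives * half_life_s
        return (rate / lam) * (1.0 - math.exp(-lam * window))

    # One fission per second, U-235 thermal yields from my data:
    cu70m1 = shutdown_inventory(4.46e-9, 47.0)  # ~3.0e-7, about twice the
                                                # single-half-life 1.5e-7
    cu70   = shutdown_inventory(1.49e-9, 4.5)   # ~9.7e-9
    print(cu70m1, cu70)
    ```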
    Is this a reasonable approach? 72.1%? Doubling that? Ten half-lives?
  7. Nov 24, 2014 #6


    Science Advisor
    Gold Member

    Trying to determine the isotopic composition of reactor fuel by hand is completely impractical. Before computers, e.g., during the Manhattan Project, they probably had dozens of trained physicists working on such calculations full-time, and even their results were only rough approximations. There are too many variables that cannot be simplified in the way you are describing. There are computer codes which calculate what you are trying to do, the most popular being ORIGEN-ARP, part of SCALE. I believe there is a version available for students.
  8. Nov 24, 2014 #7
    I'm thinking that the problem is a lot simpler if the fuel isn't fixed in place. How much of ORIGEN's code is devoted to the composition of each fuel pellet, which stays in place for ~18 months? How much sim time is devoted to rearranging half-used fuel bundles in different configurations at the end of that 18 months? A lot of that goes away if you assume the fuel is homogeneous.
    More important, I assume many of the questions asked of ORIGEN deal with neutron economy. That is not the purpose of my simulation at all.

    The questions I am trying to answer do not need an engineering level simulation. For example, what is the waste stream if the reactor produces 92% slow and 8% fast neutrons, and the fuel is 4% enriched Uranium? What if we vary the fuel? What if the spectrum is harder? What if we use the waste from a slow reactor (or today's waste) in a faster reactor? Can we design a fleet of reactors that maximize some design criteria? If so, then we plug into ORIGEN to finalize the engineering. It's a requirements-based approach.

    I am assuming that the design space is flexible enough that we can find exact designs to meet just about any reasonable requirements. Getting the politicians to agree on the requirements - now that's a problem. It's also a problem that minimizing waste is about third on the list of requirements. Money and safety should rank higher. But one does what one can to get the discussion going.
  9. Nov 24, 2014 #8


    Science Advisor
    Gold Member

    None. ORIGEN-S is an isotopic depletion code which generates problem-dependent cross sections. Furthermore, TRITON can be used to generate 2D or 3D transport solutions for depletion analyses. These codes (or the MCNP equivalents, with which I am less familiar) are the tools of the trade for solving the problems you are inquiring about. The purpose of ORIGEN is to calculate isotope production and decay chains for reactors, spent fuel, or any other application.

    These specific questions do require an engineering-level simulation. Now maybe there's someone out there who has already done the work and can answer your questions qualitatively, but I am no expert in LFTRs myself.
  10. Nov 24, 2014 #9


    Staff Emeritus
    Science Advisor

    Firstly, 70Ni would decay by β- to 70Cu. The atomic weight/mass doesn't change much with beta decay, i.e., Δm or ΔE << 1 amu. Instead, the atomic number changes by +1, so Ni isotopes decay to Cu isotopes by beta decay.

    I would recommend forgoing isotopes with yields less than 1E-4, or perhaps 1E-6, and half-lives of less than 1 minute, since they represent a small fraction and decay rapidly. Start with A ~ 75 or 76 up to 160, or A = 80 to 156. It is the isotopes with half-lives of days to years that one should be concerned with.

    The OP mentions an operating reactor. Well, those are primarily LWRs with UO2.

    Liquid-fueled reactors, to the extent that they exist, are experimental. In addition, one would have to decide whether the fuel is flowing, and if so, whether it is recirculating with or without f.p. extraction. The neutron spectrum is important since it affects cross-sections and, to some extent, fission yields. Transmutation is another issue, since transuranics shift the Z- and A-distributions upward slightly. For example, Pu-239 produces less Kr than U-235, so the Xe/Kr ratio is different.

    Simulating the isotopic matrix of a LFR is a bit more complicated than a fixed core system, since one has to deal with separation processes in addition to fission, transmutation and decay.
  11. Nov 25, 2014 #10
    Indeed, extraction of the fission products is another facet of MSR operation that I need to handle. Like neutron absorption, I'm putting that off until I nail down a simple decay-only methodology.
    I have data for the rarer and short-lived isotopes. Removing them is slightly error-prone. The computer doesn't care. The reduction in computation time I mentioned only deals with validation and verification. I will run some data through a spreadsheet to see if the main model comes up with the same answers.
    Sorry, I'm reading 100% B-n from the TORI_2 database at Lawrence Livermore. I'm not double (triple) checking everything, but now I see NuDat from Brookhaven has 100% B-, as does JEFF report 20. Good catch.
    I wish I could find amu data equivalent to the yield, half-life, and branching data I have already found, especially for the excited states. That's a secondary project (and decay energies would be more relevant anyway), but do you have any recommendations?