Analyses in HEP and bugs in codes

  1. Dec 29, 2016 #1

    ChrisVer

    Gold Member

    I have a stupid question that has been bugging me for quite some time, and I wouldn't post it if I could give myself a reasonable answer, but here it goes:
    In order to make an analysis in HEP, people rely on codes/tools/frameworks and things like that... Here goes my thinking:
    1. Suppose an analysis A1 was published in 2012 using a code X.
    2. In 2013 a bug is spotted in that code X, which needs fixing and is fixed.
    3. How reliable is the result of the analysis A1, since it ran with that bug present?
    Doesn't the fact that no code is perfect and there are always bugs lurking around (and that's why those codes are still being developed today) make previous years' analyses less reliable for not having spotted them?
     
  3. Dec 29, 2016 #2

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    The natural (and somewhat unsatisfactory) answer is that it depends on the nature of the bug. If it is a bug in the tail of some distribution it probably is not going to matter much ... unless what you study is exactly that tail.
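    A toy sketch of that point (my own illustration, all numbers and names invented): a hypothetical "bug" that mis-weights only the tail of a falling spectrum barely shifts a bulk quantity like the mean, but strongly shifts a quantity measured in that tail.
    Code (Python):
    import numpy as np

    rng = np.random.default_rng(0)
    energies = rng.exponential(scale=10.0, size=100_000)  # toy falling energy spectrum

    # Correct event weights vs. hypothetical "buggy" weights that inflate the tail
    w_good = np.ones_like(energies)
    w_bug = np.where(energies > 50.0, 1.5, 1.0)

    def mean_energy(w):
        # weighted mean of the spectrum (a "bulk" quantity)
        return np.average(energies, weights=w)

    def tail_fraction(w, cut=50.0):
        # weighted fraction of events above a high cut (a "tail" quantity)
        return w[energies > cut].sum() / w.sum()

    print("mean energy:   good %.3f  buggy %.3f" % (mean_energy(w_good), mean_energy(w_bug)))
    print("tail fraction: good %.5f  buggy %.5f" % (tail_fraction(w_good), tail_fraction(w_bug)))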

    This is not restricted to HEP. It occurs in other fields too. Just this year it turned out there was a significant bug in the software used for functional MRI studies ...
     
  4. Dec 29, 2016 #3

    mfb

    2016 Award

    Staff: Mentor

    Analyses rarely rely on a single piece of code. A second method is used to cross-check the result of the main method, and sometimes a third method is used. Some analyses even have two teams working completely independently for a while and comparing their results afterwards.
    The worst bugs are those that lead to small deviations. If they lead to large deviations anywhere (and they usually do), they are easy to spot.
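    A minimal sketch of such a cross-check (my own toy example, not any experiment's actual procedure): estimate the same quantity with two independent methods and confirm they agree within the statistical uncertainty.
    Code (Python):
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.exponential(scale=2.0, size=10_000)  # toy "measured" lifetimes

    # Method 1: maximum-likelihood estimate of the exponential mean
    tau_mle = data.mean()

    # Method 2: independent estimate from the median (median = tau * ln 2)
    tau_med = np.median(data) / np.log(2)

    # Cross-check: the two results should agree within the statistical uncertainty
    stat_unc = data.std(ddof=1) / np.sqrt(len(data))
    print(f"MLE estimate:    {tau_mle:.3f}")
    print(f"median estimate: {tau_med:.3f}")
    print(f"difference / stat. uncertainty: {(tau_mle - tau_med) / stat_unc:.2f}")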
     
  5. Dec 29, 2016 #4
    This is not always true, and bugs (which are sometimes, but not always, caught) can often lead to incorrect results being published.

    This is true of experimental and theoretical work.

    In some fortunate cases, mistakes are found. For example, someone repeats a calculation and does not find agreement.

    You often find these bugs are large enough to warrant an erratum.

    The fact that some errata exist probably means there are also mistakes in published results which are never found...
     
  6. Dec 29, 2016 #5

    mfb

    2016 Award

    Staff: Mentor

    Oh, for sure they exist. I found one example myself when I checked code used in a previous publication. We checked its influence; it was something like 1/100 of the statistical uncertainty - small enough to ignore (and also so small that the cross-checks didn't catch it). The follow-up analysis with a larger dataset had the bug fixed, of course.
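    A back-of-the-envelope sketch of that kind of check (all numbers invented for illustration): compare the shift the bug causes to the statistical uncertainty of the measured yield.
    Code (Python):
    # Hypothetical numbers: how large is the shift caused by a bug compared
    # with the Poisson (statistical) uncertainty of the measured yield?
    n_events = 1_000_000
    eff_correct = 0.80000
    eff_buggy   = 0.80002   # invented selection efficiency with the bug present

    yield_correct = n_events * eff_correct
    yield_buggy   = n_events * eff_buggy

    shift = abs(yield_buggy - yield_correct)
    stat_unc = yield_correct ** 0.5   # sqrt(N) uncertainty on the yield

    print(f"shift caused by the bug: {shift:.1f} events")
    print(f"statistical uncertainty: {stat_unc:.1f} events")
    print(f"shift / stat. unc.:      {shift / stat_unc:.3f}")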

    Errata in HEP are rare, while at the same time the analyses get checked over and over again. The rate of relevant bugs has to be very low.
     
  7. Dec 29, 2016 #6

    Vanadium 50

    Staff Emeritus
    Science Advisor
    Education Advisor

    There are surely errors in Geant (I say this because each new version has corrected errors found in previous versions), and it's pretty much the only program of its scope and kind. The experiments try to mitigate this by
    • Looking at known distributions to ensure that any undiscovered errors are small (a toy sketch of such a check follows this list)
    • Using data-driven backgrounds whenever feasible
    • Reporting which version was used in the publication, so that if a serious error were found, the community would know how seriously to take the result
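    A minimal sketch of the first bullet (toy Gaussian "data" and "simulation", not a real Geant validation): histogram a well-understood control distribution in data and in simulation and check that they agree.
    Code (Python):
    import numpy as np

    rng = np.random.default_rng(1)
    data_mass = rng.normal(91.2, 2.5, size=20_000)    # toy "data" around a known peak
    sim_mass  = rng.normal(91.2, 2.5, size=200_000)   # toy "simulation" of the same peak

    bins = np.linspace(80.0, 102.0, 45)
    n_data, _ = np.histogram(data_mass, bins=bins)
    n_sim,  _ = np.histogram(sim_mass,  bins=bins)
    n_sim_scaled = n_sim * (n_data.sum() / n_sim.sum())   # normalize simulation to data

    # Coarse agreement check: chi-square of data vs. scaled simulation
    # (data statistical uncertainty only, ignoring the simulation's own uncertainty)
    mask = n_data > 0
    chi2 = np.sum((n_data[mask] - n_sim_scaled[mask]) ** 2 / n_data[mask])
    ndof = int(mask.sum()) - 1
    print(f"chi2 / ndof = {chi2:.1f} / {ndof} = {chi2 / ndof:.2f}")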
     