I have a stupid question that has been bugging me for quite some time, and I wouldn't post it if I could give myself a reasonable answer, but here it goes: to perform an analysis in HEP, people rely on codes, tools, frameworks, and the like. Here is my thinking:

1. Suppose an analysis A1 was published in 2012 using a code X.
2. In 2013 a bug is spotted in code X and subsequently fixed.
3. How reliable is the result of analysis A1, given that it was run while that bug was present?

Doesn't the idea that no code is perfect and there are always bugs lurking (which is why these codes are still being developed today) make previous years' analyses less reliable for not having spotted them?