No one understands every component of the experiments in full detail, and no one needs to: every component has some group that does understand all the details, and those groups work together.
Let's consider the Higgs discovery as an example.
There is a group monitoring the muon detectors: checking that the temperature of every module is right, checking that everything works and so on. The experts in that group have to know what to do if some power supply fails - but they don't have to know anything about the Higgs. They make the muon detector status available to others in the collaboration.
There is a group responsible for the software that finds muons in the collision data. They take the muon detector status into account. Their software produces data like "here was a muon with this energy and flight direction, there was another muon with that energy and flight direction". They don't have to know what to do if a power supply fails, they just have to know how to account for a module that didn't work. They also don't have to know anything about the Higgs.
There is a group checking the results of the previous group: if the software finds a muon, how likely is it to be an actual muon and not something else? What fraction of muons stays undetected? How precise is the muon energy estimate?
There is a group looking for Higgs bosons decaying to four muons. They use the results of the previous groups: the muon-finding software and the information on how often the candidates are actually muons, how many muons go undetected, and how precise the energy estimates are. They don't have to know every detail about the detector itself.
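As a rough illustration of how the previous groups' results get used, here is a minimal sketch of correcting an observed muon count for detection efficiency and fake candidates. All the numbers are made up for illustration - the real corrections are far more detailed (they depend on momentum, detector region, run conditions, and so on).

```python
import math

# Hypothetical numbers, for illustration only (not real detector values).
n_observed = 480    # muon candidates found by the reconstruction software
efficiency = 0.95   # probability that a real muon is detected (from the checking group)
fake_rate = 0.02    # fraction of candidates that are not real muons

# Estimate the true number of muons produced:
# first remove the fakes, then correct for the undetected fraction.
n_real_candidates = n_observed * (1.0 - fake_rate)
n_true = n_real_candidates / efficiency

# Statistical uncertainty from Poisson counting, scaled the same way.
stat_uncertainty = math.sqrt(n_observed) * (1.0 - fake_rate) / efficiency

print(f"estimated true muons: {n_true:.1f} +/- {stat_uncertainty:.1f}")
```

The point is that the Higgs group only needs the efficiency, the fake rate, and their uncertainties as well-defined inputs - not the details of how the detector produced them.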
In 2012, they got a possible detection of the Higgs boson: "We have X more events than expected around a mass of 125 GeV, with an uncertainty of Y."

Independent of the muon groups, there are similar groups responsible for detecting photons.
There is a group looking for Higgs bosons decaying to two photons. In 2012, they also got a possible detection of the Higgs boson.

There is a group combining the two independent results. They check whether the two partial results are compatible with two possible decays of a single particle, whether the observed numbers of events fit the expectations for the Higgs boson, and so on. They have to know the analyses well - but not in every detail. In 2012, they combined the two results and found: "We have a significant result: there is a new particle at a mass of about 125 GeV."
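To give a flavor of what "combining the two results" means, here is a deliberately naive sketch: a counting-experiment significance per channel, with independent channels combined in quadrature. The event counts are invented, and the real analyses use full likelihood fits over the mass spectra with systematic uncertainties - this only shows the basic idea.

```python
import math

# Invented, illustrative event counts - not real ATLAS/CMS data.
channels = {
    "H -> 4 muons":   {"observed": 13,  "expected_background": 5.0},
    "H -> 2 photons": {"observed": 190, "expected_background": 140.0},
}

def simple_significance(observed, background):
    """Naive counting significance in sigma: excess over sqrt(background)."""
    return (observed - background) / math.sqrt(background)

z_values = [simple_significance(c["observed"], c["expected_background"])
            for c in channels.values()]

# For independent channels, these naive significances add in quadrature.
combined = math.sqrt(sum(z * z for z in z_values))

for name, z in zip(channels, z_values):
    print(f"{name}: {z:.1f} sigma")
print(f"combined: {combined:.1f} sigma")
```

Two channels that are individually only "possible detections" can together cross the discovery threshold - which is exactly why the combination group exists.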
They made this result available to the rest of the collaboration, and everyone could check the analysis. The muon groups verified that their results were used properly, the photon groups checked that their results were used properly, statistics experts checked the combination, and so on. Finally, when everyone was happy with everything, it was made public - and checked by people outside the collaboration.
All this was done independently both in ATLAS and CMS, with the same result from both collaborations.
The analysis was repeated later with larger datasets, and later still at a higher collision energy - with improved analysis methods, with different people working on it, and so on. In addition, all the steps described above have multiple internal cross-checks of their own. The "Higgs to muons" analysis was not done once - every step was done at least twice, often with different methods, to verify that (a) there are no bugs in the code and (b) the methods used are reliable. The same goes for the "Higgs to photons" analysis, the combination, and all the other steps.

This is an extremely simplified description of how the collaborations work. There are many more groups involved, but the main idea is the same: have experts for everything, and let them produce well-checked and well-defined results that can be used by other groups who don't have to know how all the details work.
The Higgs discovery was the work of more than 1000 people per experiment, each with their small contribution in the group they worked in, checking every step multiple times.

Theorists are yet another step removed: they don't have to know any of that. They don't have to know how to exchange a power supply, how to find muons in collision data, or anything like that. All they typically need is the publication: "Ah, there were so many Higgs bosons decaying to muons, and so many Higgs bosons decaying to photons, and the Higgs mass is 125 GeV" - together with the experimental uncertainties on all those values.
newjerseyrunner said:
LHC shares and hosts a lot of data. I am sure they didn't do it from scratch, I would assume that it's just a RAID array and apache.
Hundreds of Petabytes on a simple RAID, accessible and analyzable by thousands of users? I want to see that RAID.
The Grid has a lot of custom software.