Day-to-day logistics of what happens at the LHC

  • #1
DiracPool
Hello, I've done some research and can't seem to get a read on the day-to-day logistics of what happens at the LHC. Specifically, say with ATLAS, do the researchers prepare a proton collision event for a specific day and time, and when that event is done, spend the next few days or so analyzing the data?

Or is the experiment continuously running, i.e., is it just a steady stream of particles colliding 24/7, all day long, with the researchers continually chasing the data by developing algorithms to tease out the interesting interactions? Thanks.
 
  • #2


The latter. The LHC collides particles 24/7 when operating. To date it has produced over 1 trillion collision events. The amount of data generated is so high that the individual detectors are programmed to pass along only "interesting" events and to ignore the rest. That is, they record only certain events that the scientists want, as the amount of data generated in each collision is staggering and would overwhelm the data-processing capabilities of the LHC and its supporting equipment if every single detection event were recorded in its entirety.
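The selection described above works like a trigger: a fast decision function keeps an event only if it looks interesting, and everything else is discarded. Here is a toy sketch of that idea; the event fields, threshold, and rates are invented for illustration and bear no relation to real LHC trigger menus, which run in custom hardware and large software farms.

```python
import random

# Invented threshold for the example, not a real trigger criterion.
PT_THRESHOLD = 25.0  # GeV

def passes_trigger(event):
    """Toy trigger decision: keep the event if its leading pT is high."""
    return max(event["pt"]) > PT_THRESHOLD

def collide(n_events, seed=0):
    """Generate toy events, each with a few particles of random pT."""
    rng = random.Random(seed)
    for _ in range(n_events):
        yield {"pt": [rng.expovariate(1 / 10.0) for _ in range(4)]}

# Only the events that pass the trigger are "recorded".
recorded = [ev for ev in collide(100_000) if passes_trigger(ev)]
print(f"kept {len(recorded)} of 100000 events "
      f"({100 * len(recorded) / 100_000:.1f}%)")
```

Even in this toy, most events are thrown away at the trigger stage, which is the point: only the retained fraction ever reaches permanent storage.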

For the life of me I can't remember where I found all this information. If I find it again I'll get you a link.
 
  • #3


Thanks Drakkith.
 
  • #5
Just to flesh this out a bit.

The accelerator is NOT in operation 24/7, i.e. it is not constantly colliding. There is a lot of time when it is down for light maintenance, awaiting re-injection, etc.

Typically speaking, you're looking at 30 minutes for injection, 30 minutes for the ramp to 4 TeV, and then 30-45 minutes to achieve stable beams, after which it starts delivering recordable, useful data.

The triggers designed to pre-select interesting events are generally very stable. Changing them means changing efficiencies, and makes life much harder. Triggers will only be changed if absolutely necessary - e.g. if a major flaw is found in a trigger.
 
  • #6
browha said:
The triggers designed to pre-select interesting events are generally very stable. Changing them means changing efficiencies, and makes life much harder. Triggers will only be changed if absolutely necessary - e.g. if a major flaw is found in a trigger.
Nice theory :/
Triggers can change with running conditions, physics goals and so on. And every improvement of the reconstruction can lead to trigger changes, too.

The LHC has had ~1700 hours in stable beams since April 1, which corresponds to ~30% overall efficiency. Excluding technical stops and machine development blocks, this increases to something like ~40%.
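The quoted figures can be sanity-checked with a back-of-the-envelope calculation, taking the ~1700 stable-beam hours and the efficiency percentages at face value:

```python
# Back-of-the-envelope check of the quoted duty-cycle figures.
stable_beam_hours = 1700
overall_efficiency = 0.30        # ~30% including all downtime
operational_efficiency = 0.40    # ~40% excluding technical stops / MD

# Total elapsed time implied by the overall efficiency.
elapsed_hours = stable_beam_hours / overall_efficiency
elapsed_days = elapsed_hours / 24
print(f"~{elapsed_hours:.0f} h elapsed, i.e. ~{elapsed_days:.0f} days")

# Time actually available for physics (technical stops / MD excluded).
available_days = stable_beam_hours / operational_efficiency / 24
print(f"~{available_days:.0f} days of available machine time")
```

So ~1700 hours at ~30% efficiency corresponds to roughly 236 elapsed days, consistent with a period running from April 1 into late autumn.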
 
  • #7
Yes, but at the same time, to keep analyses consistent, you avoid changing the triggers unless absolutely necessary. Triggers were changed a lot going from 7 TeV to 8 TeV because of cross-section changes: for example, more pre-scaling on softer triggers was introduced to suppress rates (some lines are now prescaled to 1%), but the 'optimal' scenario would be a uniform implementation of a trigger over one 'state'. I can't speak for how it is implemented on ATLAS, but I do know they currently quote something like six or seven discrete data-taking periods (Run A onwards).

Within what I've seen done at LHCb, I can only think of two analyses where the triggers were changed within a period of data taking: one because a serious flaw in the trigger made the trigger efficiency extremely low, and another where the pre-scaling killed the event yield for a very rare decay process. Most of the triggers are loose enough that changes in the reconstruction are propagated to a later stage, rather than affecting the triggers (at least in my own experience). If you look at LHCb, for example, the L0 trigger is almost a 'crude' trigger, essentially just looking for high-pT events (not quite true, but fundamentally), which is then refined by the HLT. And ultimately, interesting events are interesting events. LHCb was built with muons in mind, and that remains true today: muons get triggered on.
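The pre-scaling mentioned above means accepting only a fixed fraction of the events that fire a given trigger line, so that high-rate "soft" lines don't flood the readout. A minimal sketch of the idea; the line name and prescale factor are invented for illustration, and real prescalers live inside the experiments' trigger systems:

```python
class PrescaledLine:
    """Toy trigger line that accepts 1 in N of the events that fire it.

    A deterministic counter is used here, similar in spirit to how
    prescalers are commonly implemented in trigger hardware/software.
    """

    def __init__(self, name, prescale):
        self.name = name
        self.prescale = prescale  # accept 1 out of every `prescale` firings
        self._count = 0

    def accept(self):
        """Return True for every `prescale`-th firing of this line."""
        self._count += 1
        return self._count % self.prescale == 0

# A line "prescaled to 1%" keeps 1 in 100 firings.
soft_line = PrescaledLine("toy_soft_pt_line", prescale=100)
accepted = sum(soft_line.accept() for _ in range(10_000))
print(f"{soft_line.name}: accepted {accepted} of 10000 firings")
```

The cost, as discussed in the thread, is event yield: a rare decay sitting behind a 1% prescale loses 99% of its signal, which is exactly why prescales on the relevant lines sometimes have to be revisited mid-run.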

And, yes. That sounds about right. Last year the general motif was to try to run 12 hours of physics collisions and 12 hours of machine development to allow increasing the number of bunches (amongst other things). This year is much more focused on data accumulation. But the point is, it is not in operation 24/7.
 
  • #8
but the 'optimal' scenario would be uniform implementation of a trigger over one 'state'
Well, reality can be far from this optimal scenario. I saw analyses that had ~5 different triggers, corresponding to different data-taking periods or other conditions within a single year (LHCb). OK, I think this was 2010, when luminosity was increased by several orders of magnitude during the year and the triggers had to adapt to that. But 2-3 different trigger settings within a year is not uncommon for 2011/2012 data. L0 and HLT1 are fine; HLT2, with its many individual trigger lines, can change.
 
  • #9
Ah ha.

But a lot of those early LHCb triggers were entirely MC-driven, and the rates don't correspond to what you get in data, for example. Or you find out, as has happened recently, that a couple of triggers are much worse in their reconstruction efficiency than you would expect.

2010 was ultimately just detector commissioning, and that applies as much to the triggers. In 2011 the only triggers I'm personally aware of being changed were altered either for pre-scaling to suppress rates or because of a serious flaw in them. It's not so nice having to adapt your analysis for multiple data-taking periods, etc.

Stripping lines can change after the fact, yes, but that's a different story: the actual information is still written to disk and stored at a Tier 1 site somewhere.
 

What is the purpose of the LHC?

The LHC (Large Hadron Collider) is the world's largest and most powerful particle accelerator, designed to study the fundamental building blocks of matter and the forces that govern them. Its main purpose is to recreate the conditions that existed in the early universe, just fractions of a second after the Big Bang, in order to better understand the origins of our universe.

How does the LHC work?

The LHC works by accelerating subatomic particles, such as protons or lead ions, to almost the speed of light, using radio-frequency cavities to accelerate them and powerful superconducting electromagnets to steer them around the ring. The particles are then collided at four different points along the 27-kilometer circular tunnel, where they release enormous amounts of energy that can be observed and studied by detectors.

What kind of research is conducted at the LHC?

The LHC is used for a wide range of research, including studying the properties of the Higgs boson, searching for new particles and forces, and exploring the mysteries of dark matter and dark energy. Scientists also use the LHC to test the predictions of various theories, such as the Standard Model of particle physics.

How are the experiments at the LHC planned and carried out?

The experiments at the LHC are planned and carried out by a large international team of scientists and engineers. They use sophisticated computer simulations and data analysis techniques to design and optimize experiments, and then use the LHC to collect and analyze data. The results are then shared and discussed within the scientific community to further our understanding of the universe.

What are some challenges faced in operating the LHC?

Operating the LHC is a complex and challenging task. Some of the challenges include maintaining the stability of the beams, minimizing beam losses, and managing the enormous amounts of data produced by the experiments. There are also technical challenges in maintaining and upgrading the LHC's equipment, as well as ensuring the safety of both the equipment and the people working at the facility.
