What are the challenges faced by the LHC in the initial data-taking of 2017?

  • Thread starter: mfb
  • Tags: 2017, LHC
Summary:
The LHC has declared "stable beams," but initial collision rates are low at 0.2% of the design rate, requiring weeks of checks before reaching optimal levels. Experiments are beginning to collect data, with current limitations on collision rates affecting data processing capabilities. Safety is a primary concern, as the stored energy in the beams can cause significant heating, necessitating cautious ramp-up procedures. The beam dump, designed to handle high-energy impacts, operates under strict conditions to prevent activation and ensure safety. As commissioning progresses, the LHC aims to increase bunch numbers and collision rates, with scrubbing runs planned to improve beam quality.
  • #91
It is. Most collisions are quite soft, however, and most analyses look for hard interactions that produce high energy particles.

With charged particles (especially muons and electrons) you have nice tracks pointing to the right primary vertex. With uncharged particles it is more difficult.
The worst case is the transverse momentum balance, where you use conservation of momentum to look for particles that don't interact with the detector at all (see here, the part on supersymmetry). You can easily get a wrong result if you assign particles to the wrong primary vertex.
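The momentum-balance idea can be illustrated with a toy calculation (the numbers and the helper name are hypothetical; real analyses use full event reconstruction):

```python
import math

# In the transverse plane the visible momenta should sum to zero, so any
# imbalance ("missing pT") hints at invisible particles - but only if every
# particle is assigned to the right collision vertex.
def missing_pt(particles):
    """particles: list of (px, py) in GeV. Returns the magnitude of missing pT."""
    px = -sum(p[0] for p in particles)
    py = -sum(p[1] for p in particles)
    return math.hypot(px, py)

# Two visible particles balancing each other: no missing pT.
print(missing_pt([(30.0, 0.0), (-30.0, 0.0)]))  # 0.0
# Drop one of them (e.g. wrongly assigned to another vertex)
# and 30 GeV suddenly appears "missing".
print(missing_pt([(30.0, 0.0)]))  # 30.0
```

This is why a wrong vertex assignment can fake a missing-momentum signal.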

All four big detectors will replace/upgrade their innermost detectors to handle more collisions per bunch crossing in the future.

---

ATLAS and CMS reached 25/fb; with a few more protons per bunch we reached 140% of the design luminosity and very stable running conditions. The better focusing is in place and works.

---

Edit Friday: 126 billion protons per bunch, which should be a new record. About 160% of the design luminosity at the start of the run - with just 1916 bunches (1909 colliding in ATLAS and CMS). About 60 (inelastic) proton-proton collisions per bunch crossing (75 if we count elastic scattering).
BCMS could increase this even more.

The detectors were designed for 25 collisions per bunch crossing.

LHCb reached 1/fb.
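The quoted pileup of about 60 can be cross-checked from the stated luminosity and bunch count. A sketch (the cross section and revolution frequency are standard values I am assuming, not from the post):

```python
# mu = L * sigma_inel / (n_colliding * f_rev)
L_inst = 1.6 * 1.0e34   # 160% of the design luminosity, in 1/(cm^2 s)
sigma_inel = 80e-27     # assumed ~80 mb inelastic pp cross section at 13 TeV, in cm^2
f_rev = 11245.0         # assumed LHC revolution frequency in Hz
n_colliding = 1909      # bunches colliding in ATLAS/CMS (from the post)

mu = L_inst * sigma_inel / (n_colliding * f_rev)
print(round(mu))  # close to the quoted ~60 collisions per bunch crossing
```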
 
Last edited:
  • Likes: dlgoff
  • #92
A total of 32/fb collected for ATLAS and CMS. 4/fb in the last week, a record speed, clearly visible in the luminosity plots (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) as well.
6 weeks of data-taking left; at 3/fb per week we will end up with 50/fb.

For both ATLAS and CMS, the machine can now deliver more than 60 simultaneous collisions per bunch crossing - too many for the experiments, so they ask to limit that to about 60. Further improvements this year won't increase the peak luminosity, but they can increase the time this luminosity can be maintained (afterwards it goes down as usual, eventually the beams get dumped and a new fill starts). For next year the number of bunches can be increased again, increasing the luminosity without increasing the number of collisions per bunch crossing.

Edit: Plot.
The run starts at the maximal luminosity (region A), here 180% of the design value, to find the position for head-on collisions of the beams. Then the beams are quickly shifted a bit with respect to each other to reduce the luminosity to the target of 150% of the design value (region B). After several minutes, when the luminosity has dropped by about 1% (due to the loss of protons in collisions and decreasing focusing), the beams are shifted back a little to reach the target again. This repeats until the beams are colliding head-on again. Afterwards the machine cannot deliver the luminosity target any more, and the luminosity goes down over time (region C). Reducing the crossing angle helps a bit to keep the luminosity higher later in the run.

The high-luminosity LHC will use this method extensively, probably with most of the time spent in region B.
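The leveling pattern in regions B and C can be sketched with a toy model. All numbers here are illustrative assumptions, not machine parameters, and region A (finding the head-on position) is not modeled:

```python
import math

def delivered_luminosity(t_hours, peak=1.8, target=1.5, tau_hours=15.0):
    """Delivered luminosity in units of the design value, t_hours into a fill.

    While the head-on luminosity exceeds the target, the beams are kept
    slightly separated so delivery stays at the target (region B); once it
    drops below the target, delivery follows the natural decay (region C).
    """
    head_on = peak * math.exp(-t_hours / tau_hours)  # assumed exponential burn-off
    return min(head_on, target)

# Leveling lasts until peak * exp(-t/tau) = target, i.e. t = tau * ln(peak/target).
t_level = 15.0 * math.log(1.8 / 1.5)
print(round(t_level, 1))  # hours of leveled running in this toy model
```

The longer the leveling plateau, the more integrated luminosity per fill, which is why the HL-LHC plans to spend most of its time in region B.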

lumileveling.png
 
Last edited:
  • Likes: vanhees71, Lord Crc and Drakkith
  • #93
mfb said:
The LHC could get a very unusual record this year: The luminosity record for any type of collider.
Electrons and positrons are much lighter than protons. That means they emit more synchrotron radiation when they travel around in a circular collider. Faster electrons radiate more and get slower in the process. That is a very effective "cooling" mechanism, as a result you can put the electrons very close together, increasing the luminosity. KEKB set the world record of 2.11·10^34/(cm²·s) in 2009 - with electrons/positrons.
The LHC could reach this value in 2017 - with protons, where it is much harder. As far as I know, it would be the first proton-collider ever to set the absolute luminosity record.
The LHC might have achieved it. The achieved value is consistent with 2.11·10^34/(cm²·s) within the uncertainty of the calibration (a few percent). Unfortunately the run statistics record the leveled luminosity, not the peak value, so there is a bit of guesswork involved.
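A rough sketch of the mass scaling behind the quoted "cooling" argument (approximate particle masses assumed): at a fixed bending radius the synchrotron energy loss per turn scales as (E/m)^4, so at the same beam energy electrons radiate enormously more than protons.

```python
# Approximate masses in GeV (assumed values):
m_e = 0.000511
m_p = 0.938272
ratio = (m_p / m_e) ** 4
print(f"{ratio:.1e}")  # ~1e13: why the record was much harder with protons
```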

Some screen captures of the live display for three different runs, not necessarily with the highest achieved values:
lumia.png
lumi0.png
lumi.png


Maybe we'll get a more official statement for the luminosity values soon.
Edit: See below.

The last few days we had a couple of long runs with luminosity leveling and then many hours more of collisions and not too much time between the runs. Great conditions to collect data. ATLAS and CMS accumulated 36.5/fb, and there are 5 weeks of data-taking left. 45/fb seem easy, we'll probably get more than 50/fb, even 55/fb are not impossible.

For 2018, I expect that both ATLAS and CMS will try to optimize their algorithms to handle even more collisions per bunch crossing (pileup), just in case it becomes important. The gas issue should get fixed, which means the LHC can get filled with more bunches, so the same luminosity can be achieved with a lower pileup (= no need for luminosity leveling). Well, maybe we get both together: more bunches and so much pileup that leveling is important...

The LHC had a day of xenon-xenon collisions last week. Nothing surprising here, it will be a nice data point in between proton (small) and lead (big).

Edit: A new run just started. The shown luminosity exceeded the record set by KEKB.

lumirecord.png


Edit2: 218% of the design luminosity in the following run. Looks like the LHC has the record.
 
Last edited:
  • Likes: Lord Crc
  • #94
0.93/fb collected in the last 24 hours. About the number of collisions the Tevatron experiments collected in a year. In addition, the LHC collisions are at 13 TeV instead of 2 TeV.

This is way beyond the expectations the machine operators or the experiments had for 2017, especially with the vacuum problem mentioned earlier.

And it is nearly certain that the luminosity record is there! Note the comment on the right.

lumirecord.png
 
  • Likes: fresh_42
  • #95
mfb said:
This is way beyond the expectations the machine operators or the experiments had for 2017, especially with the vacuum problem mentioned earlier.

It's alright if you're on an experiment that doesn't mind pile-up. Personally I was hoping for ~2.0/fb for LHCb from >2,600-bunch beams, but it looks like we might get ~1.7/fb like last year.
 
  • #96
ATLAS and CMS do mind pileup - see the luminosity leveling done for them. Sure, they can work with a much higher pileup than LHCb, which in turn can work with a much higher pileup than ALICE.
For LHCb, all the improvements in the number of protons per bunch and in focusing are useless; only the number of bunches counts - and there the vacuum issue determines the limit. The current availability is still excellent, and 1.7/fb is close to 2/fb.

Edit: We had more instances of 0.9x/fb in 24 hours. That happens only if everything is perfect and two long runs follow each other without any issues during re-filling. Unless they manage to keep the luminosity leveling even longer (from even more protons per bunch?), it is unlikely to increase this year. That still gives a rate of more than 5/fb per week.
 
  • #97
ATLAS and CMS reached 45/fb, LHCb accumulated 1.55/fb.
During the week from Oct 16 to Oct 22 ATLAS and CMS collected 5.2/fb, about half the integrated luminosity the Tevatron experiments got in 20 years.
2.5 weeks left for regular data taking.

The high data rate is great for measurements/searches of rare events, but it is also challenging for the detectors, related infrastructure and some analyses.
  • The readout electronics was not designed for such a high rate of interesting events - the triggers have to get more selective. This doesn't matter much if you are looking for new very heavy particles (events with a lot of energy in the detectors are rare, they are always kept), but it hurts the analyses studying/searching for lighter particles where you have to find the signal events in a lot of other background events. In addition, there are now more background collisions even in the signal events.
  • More collisions at the same time make it harder to identify particles properly and lead to more misreconstructed objects, especially if the algorithms were not designed for it.
  • The high data rate leads to a delay in the software trigger stage. Based on Run 1 (2010-2012) it was expected that the experiments can take data about 1/3 of the time. A trigger system that only runs live would be idle 2/3 of the time. To avoid this, ATLAS, CMS and LHCb all implemented deferred triggers: Some events that cannot be studied in time are simply written to a temporary storage and processed later. If the LHC has stable beams 1/3 of the time this gives a huge boost in processing power - up to a factor 3. That means the trigger algorithms can get more complex and time-consuming. But now the LHC collides protons 2/3 of the time (https://lpc.web.cern.ch/lumiplots_2017_pp.htm), and suddenly this system can only give up to a factor 1.5. The result is a backlog of data that still needs processing. It can be processed after regular data taking ends.
  • The simulations done in advance don't represent the data accurately. They were made according to the expected running conditions, which means a lower pileup and more bunches in the machine than the actual conditions now. This can be fixed later with additional simulation datasets.
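The deferred-trigger arithmetic in the bullet above can be written out explicitly (a sketch; `deferral_boost` is a hypothetical helper name):

```python
# If stable beams are present a fraction f of the time, parking events to disk
# lets the trigger farm also use the idle fraction (1 - f), boosting the
# effective processing power per hour of beam time by 1/f.
def deferral_boost(live_fraction):
    return 1.0 / live_fraction

print(round(deferral_boost(1 / 3), 2))  # 3.0 : the Run 1 expectation
print(round(deferral_boost(2 / 3), 2))  # 1.5 : with beams 2/3 of the time
```

With beams in twice as often, the headroom halves, which is exactly the backlog problem described above.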
An interesting case study will be the decays ##B_s \to \mu \mu## and ##B^0 \to \mu \mu##. They are always measured together, as they have the same final state and nearly the same energy in the decay. Both are extremely rare (predicted: 3.6 parts in a billion and 1 part in 10 billion, respectively). A deviation from the predictions would be very interesting in the context of other anomalies. The first decay has been found, but the measurement accuracy is still poor, and the first clear detection of the second decay is still open.

For LHCb, the B mesons are heavy particles and the trigger is designed to look for muon pairs, so it has a high efficiency to find these decays - but LHCb has a low overall number of collisions. For ATLAS and CMS, the B mesons are light particles and the trigger has difficulties finding them, so the efficiency is low - but these experiments have a high number of collisions. In Run 1, both approaches led to roughly the same sensitivity, with LHCb a bit ahead of the other experiments. We'll see what this looks like with Run 2 (2015-2018). I expect all three experiments to make relevant contributions. LHCb has a better energy resolution, so it performs better at seeing a small ##B^0 \to \mu \mu## peak directly next to the ##B_s \to \mu \mu## signal. Here is an image: red is ##B_s##, green is ##B^0##, the rest is background. By Run 3 (2021+) at the latest, I expect LHCb to be much better than the other experiments.
 
  • Likes: odietrich, Lord Crc and Amrator
  • #98
The LHC will end the run 1 week early, on December 4th. This is to give CMS access to its pixel detector before CERN's end-of-the-year shutdown.
 
  • Likes: Amrator and mfb
  • #99
Meanwhile the LHC makes extra long runs. 0.77/fb for ATLAS, 0.74/fb for CMS, 0.033/fb for LHCb in 27 hours.
50/fb collected by ATLAS and CMS, 1.7/fb by LHCb.

Regular data-taking will end on Friday, then we get special runs for two weeks, followed by a week of machine development, and then the usual winter shutdown. No lead collisions this year.
Various things will need fixes, upgrades and so on. The 16L2 issue discussed earlier will be investigated, the CMS pixel detector can be accessed.

First collisions in 2018 are expected for March to April.
 
  • Likes: Lord Crc
  • #100
The LHC Page 1 shows that they are currently running tests at 2.51 TeV. Why this particular energy?
 
  • #101
It's the energy used for proton "reference" runs, with the same per-nucleon energy as the lead-ion collisions.
 
  • Likes: vanhees71 and websterling
  • #102
In numbers: Protons can be accelerated to 6.5 TeV each, so lead with its 82 protons can reach 82*6.5 TeV per nucleus; with its 208 nucleons this gives 6.5*82/208 = 2.56 TeV per nucleon. The difference from 2.51 TeV is probably a rounding error.
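The per-nucleon arithmetic, including the same formula with the actual Run 2 Pb beam energy of 6.37 Z TeV mentioned further down in the thread:

```python
# E_per_nucleon = E_proton_equivalent * Z / A for lead (Z = 82, A = 208).
Z, A = 82, 208
print(round(6.5 * Z / A, 2))   # 2.56 TeV per nucleon for a 6.5 TeV proton beam
print(round(6.37 * Z / A, 2))  # 2.51 TeV, matching the reference-run energy
```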

Result of the high energy high luminosity proton run: 51/fb for ATLAS and CMS, 1.75/fb for LHCb, 0.017/fb for ALICE.
And a luminosity world record, the first time a proton-proton collider achieved it.
 
  • Likes: vanhees71
  • #103
The Pb beams in Run 2 have been 6.37 Z TeV per ion for some reason.
 

Attachments

  • Screenshot from 2017-11-11 16-04-47.png
  • #104
Ah, right. There was some reason; if I remember correctly, it was chosen to match the nucleus-nucleus center-of-mass energy of earlier proton-lead collisions or something like that. The better comparability was more important than 2% in energy.
 
  • Likes: vanhees71
  • #105
The high ##\beta^*## run is canceled because the background levels are too high for the Roman Pot experiments. Therefore the LHC will return to 13 TeV proton physics from 22nd to 26th November. CMS will level at a lower luminosity to make the intervention easier.
 
  • Likes: mfb
  • #106
The 2017 run ended this morning and the machine is being shut down.

lhc1.png
 

Last edited:
  • Likes: Lord Crc and mfb
  • #108
Updates and outlook:

Sector 1-2, which had the 16L2 problem, has been partially warmed up. It is expected that nearly all of the gas in it is gone now.
For 2018, 138 days of proton-proton running are planned (compared to 127 in 2017), with an expected luminosity of 60/fb for ATLAS and CMS.
It is expected that both ATLAS and CMS will want to keep the pileup at about 60 interactions per bunch crossing. Without the 16L2 issue we get up to 2544 colliding bunches, or 2.15·10^34/(cm²·s) luminosity, 215% of the design value. If that works well and the experiments are happy with more pileup, the machine operators have some ideas for how they could possibly go to 250% to 280% of the design value. Such a high initial luminosity makes faster re-filling more interesting for ATLAS/CMS. LHCb mainly wants long fills and is not interested in these high luminosity values, so some compromise has to be found.
 
  • Likes: Greg Bernhardt
  • #109
mfb said:
Updates and outlook:
Will need to start a 2018 thread soon :)
 
  • Likes: sandy stone
  • #110
With 2018 data-taking probably. I don't expect so many events in 2018, however. Unless something unexpected comes up the focus will be on doing more of the same to get as many collisions as possible before the longer shutdown.

SuperKEKB/Belle II are expected to start data-taking in 2018, that will be new.
 
  • Likes: vanhees71
