
LHC starts 2017 data-taking

  1. Jul 12, 2017 #81

    dlgoff

    Science Advisor
    Gold Member

    @mfb and @Vanadium 50

    Thanks for your replies. I don't mean to hijack this thread, but these things are what I live for.
     
  2. Jul 12, 2017 #82

    dlgoff

    Science Advisor
    Gold Member

    Speaking of composition (materials): don't these very low pressures evaporate some components, or degrade them?
     
  3. Jul 12, 2017 #83

    mfb

    2016 Award

    Staff: Mentor

    Steel and copper (outside the experiments) and beryllium (at the experiments) don't evaporate notably, especially at cryogenic temperatures (some parts of the beam pipe are at room temperature, however). The LHCb VELO detector uses an AlMg3 foil; I don't know about that one, but it has a small surface area anyway. I don't see how vacuum would degrade these materials.
     
  4. Jul 16, 2017 #84

    mfb

    2016 Award

    Staff: Mentor

    Recovery from the technical stop is still ongoing. The RF system (radio frequency cavities that accelerate the beam) cannot handle the 2556 bunches we had before; the problem is under investigation. With 2317 bunches it works, so for now the LHC is running with this lower number of bunches. That is still enough to collect a lot of collisions. ATLAS and CMS reached 7/fb, LHCb collected 0.24/fb.


    I made a thread about results from EPS.

    Edit on Wednesday: Finally back at 2556 bunches.
     
    Last edited: Jul 19, 2017
  5. Jul 31, 2017 #85

    mfb

    2016 Award

    Staff: Mentor

    Both ATLAS and CMS reached 10/fb, about 1/4 of the 2016 dataset. 16 more weeks of data-taking are planned. At a pessimistic 2/(fb*week) we get the same number of collisions as last year; at an optimistic 3.5/(fb*week) we get 65% more.
    If everything in the LHC worked perfectly 100% of the time, more than 5/(fb*week) would be possible, but that is unrealistic for such a complex machine.
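
The arithmetic behind these estimates, as a quick sketch (the numbers are taken from the paragraph above):

```python
# Projecting the 2017 dataset: current total plus (rate x remaining weeks).
collected_fb = 10.0   # integrated luminosity so far, in /fb
weeks_left = 16       # planned data-taking weeks remaining

for rate in (2.0, 3.5):  # pessimistic and optimistic /fb per week
    total = collected_fb + rate * weeks_left
    print(f"{rate}/(fb*week) -> {total:.0f}/fb by the end of the year")
# 2.0/(fb*week) gives 42/fb, close to the 40/fb of 2016;
# 3.5/(fb*week) gives 66/fb, about 65% more than 2016.
```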


    We had a short break for machine development and van-der-Meer scans:
    Cross section measurements are an important part of the physics program, and they require an accurate luminosity estimate. The estimate available from normal machine operation has an uncertainty of a few percent. That is good enough for the machine operators, but for physics you want a smaller uncertainty - 2% is nice, 1% is better. The luminosity depends on a couple of machine parameters:
    $$\mathcal{L} = \frac{N_1 N_2 f N_b S}{4 \pi \sigma_x \sigma_y}$$
    ##f## is the revolution frequency - fixed and known to many decimal places.
    ##N_b## is the number of bunches per beam - known exactly.
    ##N_1## and ##N_2## are the numbers of protons in the bunches, they can be measured via the electromagnetic fields they induce when moving around the ring.
    ##S \leq 1## is a factor that takes the crossing angle into account, it can be calculated precisely. See also post 58.
    ##\sigma_x## and ##\sigma_y## are the widths of the bunches in x/y direction. There is no good direct way to measure that accurately.
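
As a sketch, here are roughly LHC-like numbers plugged into the formula above (all parameter values below are my illustrative assumptions, not measured machine values):

```python
import math

def luminosity(n1, n2, f, n_b, s, sigma_x, sigma_y):
    """Instantaneous luminosity in cm^-2 s^-1 for Gaussian bunches,
    following L = N1*N2*f*Nb*S / (4*pi*sigma_x*sigma_y)."""
    return n1 * n2 * f * n_b * s / (4 * math.pi * sigma_x * sigma_y)

# Illustrative, roughly LHC-like values (assumptions for this sketch):
L = luminosity(
    n1=1.2e11, n2=1.2e11,          # protons per bunch
    f=11245,                       # revolution frequency in Hz
    n_b=2500,                      # colliding bunch pairs
    s=0.85,                        # crossing-angle reduction factor
    sigma_x=16e-4, sigma_y=16e-4,  # beam widths in cm (~16 micrometers)
)
print(f"L = {L:.2e} cm^-2 s^-1")   # order 1e34, i.e. around the design value
```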

    To estimate the width of the bunches, the machine operators shift the relative positions of the beams around at the collision points while the experiments monitor the collision rate as function of the shift. A fit to the observed rates leads to the widths. This procedure was named after Simon van der Meer.
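
A minimal toy version of such a scan (the beam width, scan points and noise level are invented for illustration; a real analysis fits both planes and handles non-Gaussian tails):

```python
import math, random

# Toy van-der-Meer scan: two Gaussian beams of (unknown) width sigma are
# separated by delta; for equal beams the rate falls off as a Gaussian of
# width sqrt(2)*sigma, i.e. rate(delta) = R0 * exp(-delta^2 / (4 sigma^2)).
true_sigma = 16e-4   # cm, assumed toy beam width (~16 micrometers)
R0 = 1.0             # arbitrary peak rate
random.seed(42)

separations = [i * 8e-4 for i in range(-5, 6)]   # scan points in cm
rates = [R0 * math.exp(-d**2 / (4 * true_sigma**2)) * random.gauss(1, 0.01)
         for d in separations]

# Fit: log(rate) is linear in delta^2 with slope -1/(4 sigma^2).
xs = [d**2 for d in separations]
ys = [math.log(r) for r in rates]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs)**2)
sigma_fit = math.sqrt(-1 / (4 * slope))
print(f"fitted sigma = {sigma_fit*1e4:.2f} um (true: {true_sigma*1e4:.0f} um)")
```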
     
  6. Aug 11, 2017 #86

    mfb

    2016 Award

    Staff: Mentor

    A few updates: The LHC experiments got collisions at a high rate, and the machine operators found some methods to improve the rate further.

    ATLAS and CMS reached 15.5/fb, 11 days after they had 10/fb; this means 0.5/fb per day, or 3.5/fb per week.
    From Wednesday 6:46 to Thursday 6:46 this week we had a record of 0.83/fb in 24 hours. For comparison: in these 24 hours, the LHC experiments produced 4 times as many Higgs bosons and 8 times as many top quarks as the Tevatron experiments did in their 20 years of operation.

    LHCb surpassed 0.5/fb, nearly 1/3 of the 2016 dataset.

    The stepwise reduction of the crossing angle discussed earlier was studied in more detail. Previously the angle was reduced in steps of 10 microradians (150 -> 140 -> 130 -> ...), which increases the collected data by about 3.5%. The process now works so smoothly that it became possible to reduce the angle in steps of 1 microradian, always following the optimal value. This increases the number of collisions by an additional 1.5%. That doesn't sound like much, but all these small improvements add up.

    The number of protons per bunch went up a bit. We reached a record of ##3.1 \cdot 10^{14}## protons per beam at high energy, or 320 MJ per beam. Correspondingly, the initial luminosity reached a new record, 174% of the design value.
    The machine operators tried to get even more, but that led to problems, so they added a day of scrubbing.
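
The quoted stored energy follows directly from the proton count and the beam energy (6.5 TeV per proton is the 2017 value):

```python
# Stored energy of one beam: protons per beam times energy per proton.
protons_per_beam = 3.1e14
energy_per_proton_J = 6.5e12 * 1.602e-19   # 6.5 TeV converted to joules
beam_energy_MJ = protons_per_beam * energy_per_proton_J / 1e6
print(f"{beam_energy_MJ:.0f} MJ per beam")  # ~320 MJ, as quoted above
```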

    Another thing discussed is the focusing of the beams at the collision points. Based on the analysis of the machine development block, it can be improved a bit more, which could increase the luminosity by ~20%: 1.74 × 1.2 ≈ 2.09. There is still hope for the absolute luminosity record!
     
  7. Sep 1, 2017 #87

    mfb

    2016 Award

    Staff: Mentor

    ATLAS and CMS reached 20/fb. We have gained 4.5/fb since the previous post 21 days ago, or 1.5/fb per week, even below the pessimistic estimate from above. You can see this clearly in the luminosity plots as well.

    A problem appeared in a region called 16L2, which led to the dump of many fills, often before collisions started. Although the cause is not well understood, the process is always the same: some beam particles are lost in this region, and a few milliseconds (tens of revolutions) later many more particles are lost roughly at the opposite side of the ring - more than is acceptable, which triggers a beam dump. This can happen with either beam 1 or beam 2, although they fly in separate beam pipes in 16L2.
    The problem had already appeared early in the year, but until August the dump rate could be managed by slightly adjusting the control magnets in this region. With increasing beam currents it became more problematic, and the machine operators wanted to get rid of it. The losses look gas-induced: gas can stick to a part called the "beam screen" and get released during the run, and collisions of the beam with gas particles lead to the observed losses. The usual approach is to heat this beam screen so the gas evaporates and either gets pumped out or sticks to even colder parts of the beam pipe, where it stays.
    That was done on August 10 - and then everything got worse. Now more than half of the fills were dumped due to 16L2, even with lower numbers of bunches. The smaller fraction of time in stable beams, plus the reduced number of bunches, led to the slower accumulation of collision data over the last three weeks. The leading hypothesis is gas in other components of 16L2 that redistributed when the beam screen was heated, leading to even more gas there.

    What to do?
    • The problem could be solved by heating up the whole sector and pumping it out properly. That would probably take 2-3 months, doing it now would mean most of the time planned for data-taking this year is gone. Unless data-taking becomes completely impossible this won't be done before the winter shutdown.
    • The machine operators see if there is a stable running condition that works for now. The last few runs with 1550 bunches were promising, at this rate the LHC would be limited to ~2/fb per week, but that is still a reasonable rate that would double the 2016 dataset by the end of the year.
    • Gaps between bunches can reduce losses, e.g. "8 bunches in a row, then 4 slots free, then 8 bunches in a row, then 4 slots free, ...". This might be tested. It would also mean the number of bunches has to be reduced compared to the initial plan, but if it reduces the number of dumps sufficiently it can be worth it.
    • There are some special runs planned/proposed for 2018, some at lower energies and some with a very low collision rate, for something like 1 week in total. They might be shifted to 2017 as they won't be affected by the 16L2 issue as much as the regular operation at high energy and collision rate.
    • The machine operators discuss what else can be done.
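
The bunch/empty pattern from the third point can be sketched with a toy generator (simplified: the real LHC filling scheme also contains injection gaps and the abort gap, so the actual bunch count is lower):

```python
def make_pattern(n_slots, filled=8, empty=4):
    """Toy '8 bunches, 4 free' filling pattern: repeat 8 filled slots
    then 4 empty ones. Returns a list of 0/1 flags, one per bunch slot."""
    pattern = []
    while len(pattern) < n_slots:
        pattern += [1] * filled + [0] * empty
    return pattern[:n_slots]

slots = make_pattern(3564)      # the LHC ring has 3564 bunch slots
print("bunches:", sum(slots))   # 2/3 of the slots are filled in this toy
```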

    LHC report: Something in the nothing
     
  8. Sep 13, 2017 #88

    mfb

    2016 Award

    Staff: Mentor

    ATLAS and CMS reached about 24/fb.
    The mitigation approaches, especially the "8 bunches, then 4 slots free, repeat" pattern, worked: in the last few days ~2/3 of the time could be spent on data-taking. The luminosity is lower, but still at the design value. There are still some dumps due to 16L2, but they don't break everything any more.

    A scheduled machine development block started, followed by a few days of technical stop. About 9 weeks of data-taking are left in 2017. Unless there are new ideas for solving the 16L2 issue, I guess they will just keep the current configuration; it should lead to about 2-2.5/fb per week, so we will still get more than the 40/fb of last year.

    LHC Report: operation with holes
     
  9. Sep 24, 2017 #89

    mfb

    2016 Award

    Staff: Mentor

    Back to data-taking. Currently 1900 bunches, with the "8 bunches, 4 free, repeat" pattern. Initial luminosity was 120% the design value, quite nice for the relatively low number of bunches.

    The machine operators work around the 16L2 issue:
    • Combine the bunch/empty pattern with BCMS, a different way to prepare the beam in the preaccelerators. This will reduce the number of bunches to 1800, but give more collisions per bunch crossing. This will be tested in the next few days.
    • Focus the beam better at the collision point. This was tested during the machine development block, and the operators are confident they can do this (technically: reduce ##\beta^*## from 40 cm to 30 cm).
    • Move some special runs from 2018 to 2017:
      • collisions of xenon ions, probably for a day in November
      • proton-proton collisions at lower energy and lower collision rate to cross-check heavy ion results (as they have a lower energy per nucleon) and for some studies that don't care much about energy (or even work better at low energy) but suffer from many simultaneous collisions. About two weeks in December.

    Other news: The machine development block included a run where some bunch crossings led to 100 simultaneous collisions in ATLAS and CMS, compared to 40-50 during normal operation. This is an interesting test for future running conditions (~150-200 expected for the HL-LHC upgrade). These are averages; individual bunch crossings vary in the number of collisions, of course. An average of 100 means you have events with more than 130 collisions.
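
The statement about fluctuations can be illustrated with Poisson statistics (a simplification, since bunch-to-bunch intensity differences broaden the spread further):

```python
import math

def poisson_sf(k, mu):
    """P(N > k) for a Poisson distribution with mean mu, computed by
    summing the probability mass function term by term."""
    term = math.exp(-mu)   # P(N = 0)
    cdf = term
    for n in range(1, k + 1):
        term *= mu / n     # P(N = n) from P(N = n-1)
        cdf += term
    return 1.0 - cdf

p = poisson_sf(130, 100)
print(f"P(more than 130 collisions | mean 100) = {p:.4f}")
# a fraction of a percent - but with tens of millions of bunch crossings
# per second, such events happen all the time
```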
     
  10. Sep 25, 2017 #90
    I assume it must be very challenging to track the collision products back to the specific collision that produced them.
     
  11. Sep 25, 2017 #91

    mfb

    2016 Award

    Staff: Mentor

    It is. Most collisions are quite soft, however, and most analyses look for hard interactions that produce high energy particles.

    With charged particles (especially muons and electrons) you have nice tracks pointing to the right primary vertex. With uncharged particles it is more difficult.
    The worst case is the transverse momentum balance, where you use conservation of momentum to infer particles that don't interact with the detector at all (see here, the part on supersymmetry). You can easily get a wrong result if you assign particles to the wrong primary vertex.

    All four big detectors will replace/upgrade their innermost detectors to handle more collisions per bunch crossing in the future.

    ---

    ATLAS and CMS reached 25/fb, with a bit more protons per bunch we reached 140% of the design luminosity and very stable running conditions. The better focusing is in place and works.

    ---

    Edit Friday: 126 billion protons per bunch, which should be a new record. About 160% of the design luminosity at the start of the run - with just 1916 bunches (1909 colliding in ATLAS and CMS). About 60 (inelastic) proton-proton collisions per bunch crossing (75 if we count elastic scattering).
    BCMS could increase this even more.
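
The ~60 collisions per crossing can be cross-checked from the luminosity; a quick estimate (assuming an inelastic cross section of about 80 mb at 13 TeV, a commonly quoted value):

```python
# Average pileup mu = L * sigma_inel / (n_colliding * f_rev)
L = 1.6e34            # cm^-2 s^-1: 160% of the 1e34 design luminosity
sigma_inel = 80e-27   # cm^2 (80 mb, assumed inelastic cross section)
n_colliding = 1909    # bunch pairs colliding in ATLAS/CMS
f_rev = 11245         # revolutions per second

mu = L * sigma_inel / (n_colliding * f_rev)
print(f"average pileup: {mu:.0f} collisions per crossing")  # ~60, as quoted
```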

    The detectors were designed for 25 collisions per bunch crossing.

    LHCb reached 1/fb.
     
    Last edited: Sep 29, 2017
  12. Oct 7, 2017 #92

    mfb

    2016 Award

    Staff: Mentor

    A total of 32/fb collected for ATLAS and CMS. 4/fb in the last week, a record speed, clearly visible in the luminosity plots as well.
    6 weeks of data-taking are left; at 3/fb per week we will end up with 50/fb.

    For both ATLAS and CMS, the machine can now deliver more than 60 simultaneous collisions per bunch crossing - too many for the experiments, so they ask to limit that to about 60. Further improvements this year won't increase the peak luminosity, but they can increase the time this luminosity can be maintained (afterwards it goes down as usual, eventually the beams get dumped and a new fill starts). For next year the number of bunches can be increased again, increasing the luminosity without increasing the number of collisions per bunch crossing.

    Edit: Plot.
    The run starts at the maximal luminosity (region A), here 180% of the design value, to find the position for head-on collisions of the beams. Then the beams are quickly shifted a bit with respect to each other to reduce the luminosity to the target of 150% of the design value (region B). After several minutes, when the luminosity has dropped by about 1% (due to protons lost in the collisions and slowly degrading focusing), the beams are shifted back a little to reach the target again. This repeats until the beams are colliding head-on again. Afterwards the machine cannot deliver the target luminosity any more, and the luminosity goes down over time (region C). Reducing the crossing angle helps a bit to keep the luminosity higher later in the run.

    The high-luminosity LHC will use this method extensively, probably with most of the time spent in region B.
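
A toy simulation of this leveling scheme (the decay time and step size are invented for illustration):

```python
import math

# Toy luminosity leveling: the machine's potential luminosity decays over
# time; while it exceeds the target, the beams are kept slightly separated
# so the delivered luminosity stays at the target (region B). Once the
# potential drops below the target, delivery follows the decay (region C).
target = 1.5    # in units of the design luminosity
initial = 1.8   # peak potential at the start of the fill (region A)
tau_h = 10.0    # assumed exponential decay time in hours

delivered = []
for step in range(120):            # 12 hours in 6-minute steps
    t = step * 0.1                 # hours into the fill
    potential = initial * math.exp(-t / tau_h)
    delivered.append(min(potential, target))

leveled_hours = 0.1 * sum(1 for x in delivered if x >= target)
print(f"time spent leveled at the target: {leveled_hours:.1f} h")
```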

    [Attached image: lumileveling.png - luminosity leveling over one fill]
     
    Last edited: Oct 9, 2017
  13. Oct 17, 2017 #93

    mfb

    2016 Award

    Staff: Mentor

    The LHC might have taken the record for the highest instantaneous luminosity of any collider. The achieved value is consistent with the previous record of ##2.11 \cdot 10^{34}\,\mathrm{cm^{-2}\,s^{-1}}## within the uncertainty of the calibration (a few percent). Unfortunately the run statistics record the leveled luminosity, not the peak value, so there is a bit of guesswork involved.

    Some screen captures of the live display for three different runs, not necessarily with the highest achieved values:
    [Attached images: lumia.png, lumi0.png, lumi.png - live luminosity displays]

    Maybe we'll get a more official statement for the luminosity values soon.
    Edit: See below.

    The last few days we had a couple of long runs with luminosity leveling, then many more hours of collisions, and not too much time between the runs. Great conditions to collect data. ATLAS and CMS accumulated 36.5/fb, and there are 5 weeks of data-taking left. 45/fb seems easy; we'll probably get more than 50/fb, and even 55/fb is not impossible.

    For 2018, I expect that both ATLAS and CMS will try to optimize their algorithms to handle even more collisions per bunch crossing (pileup) just in case it becomes important. The gas issue should get fixed, which means the LHC can get filled with more bunches, so the same luminosity can be achieved with a lower pileup (= no need for luminosity leveling). Well, maybe we get both together: more bunches and so much pileup that leveling is important...


    The LHC had a day of xenon-xenon collisions last week. Nothing surprising here; it will be a nice data point in between protons (small) and lead (big).


    Edit: A new run just started. The shown luminosity exceeded the record set by KEK.

    [Attached image: lumirecord.png - live display showing the record luminosity]

    Edit2: 218% of the design luminosity in the following run. Looks like the LHC has the record.
     
    Last edited: Oct 17, 2017
  14. Oct 18, 2017 #94

    mfb

    2016 Award

    Staff: Mentor

    0.93/fb collected in the last 24 hours - about the number of collisions the Tevatron experiments collected in a year. In addition, the LHC collisions are at 13 TeV instead of 2 TeV.

    This is way beyond the expectations the machine operators or the experiments had for 2017, especially with the vacuum problem mentioned earlier.


    And it is nearly certain that the luminosity record is there! Note the comment on the right.

    [Attached image: lumirecord.png - live display with the record comment]
     
  15. Oct 18, 2017 #95
    It's alright if you're on an experiment that doesn't mind pile-up. Personally I was hoping for ~2.0/fb for LHCb from >2,600-bunch beams, but it looks like we might get ~1.7/fb like last year.
     
  16. Oct 18, 2017 #96

    mfb

    2016 Award

    Staff: Mentor

    ATLAS and CMS do mind pileup - see the luminosity leveling done for them. Sure, they can work with a much higher pileup than LHCb, which in turn can handle a much higher pileup than ALICE.
    For LHCb, all the improvements in the number of protons and in the focusing are useless; only the number of bunches counts, and there the vacuum issue sets the limit. The current availability is still excellent, and 1.7/fb is close to 2/fb.


    Edit: We had more instances of 0.9x/fb in 24 hours. That happens only if everything is perfect and two long runs follow each other without any issues during re-filling. Unless they manage to keep the luminosity leveling even longer (from even more protons per bunch?), it is unlikely to increase this year. That gives a rate of more than 5/fb per week, however.
     
  17. Oct 30, 2017 #97

    mfb

    2016 Award

    Staff: Mentor

    ATLAS and CMS reached 45/fb, LHCb accumulated 1.55/fb.
    During the week from Oct 16 to Oct 22 ATLAS and CMS collected 5.2/fb, about half the integrated luminosity the Tevatron experiments got in 20 years.
    2.5 weeks left for regular data taking.

    The high data rate is great for measurements/searches of rare events, but it is also challenging for the detectors, related infrastructure and some analyses.
    • The readout electronics was not designed for such a high rate of interesting events - the triggers have to get more selective. This doesn't matter much if you are looking for new very heavy particles (events with a lot of energy in the detectors are rare, they are always kept), but it hurts the analyses studying/searching for lighter particles where you have to find the signal events in a lot of other background events. In addition, there are now more background collisions even in the signal events.
    • More collisions at the same time make it harder to identify particles properly and lead to more misreconstructed objects, especially if the algorithms were not designed for it.
    • The high data rate leads to a delay in the software trigger stage. Based on Run 1 (2010-2012), it was expected that the experiments would take data about 1/3 of the time. A trigger system that only runs live would be idle 2/3 of the time. To avoid this, ATLAS, CMS and LHCb all implemented deferred triggers: some events that cannot be studied in time are simply written to temporary storage and processed later. If the LHC has stable beams 1/3 of the time, this gives a huge boost in processing power - up to a factor of 3. That means the trigger algorithms can be more complex and time-consuming. But now the LHC collides protons 2/3 of the time (the average in October), and suddenly this system can only give up to a factor of 1.5. The result is a backlog of data that still needs processing. It can be processed after regular data-taking ends.
    • The simulations done in advance don't represent the data accurately. They were made according to the expected running conditions, which means a lower pileup and more bunches in the machine than the actual conditions now. This can be fixed later with additional simulation datasets.
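
The deferred-trigger arithmetic from the third point, made explicit (ignoring storage limits and processing overhead, which is a simplification):

```python
def processing_boost(stable_beam_fraction):
    """With deferred triggering, events recorded during stable beams can
    also be processed during the downtime, so the usable processing time
    per hour of recorded data grows by 1 / stable_beam_fraction
    (ignoring storage limits and overhead)."""
    return 1.0 / stable_beam_fraction

print(f"{processing_boost(1/3):.1f}")  # Run-1-like duty cycle: a factor of 3
print(f"{processing_boost(2/3):.1f}")  # October 2017: only a factor of 1.5
```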
    An interesting case study will be the decays ##B_s \to \mu \mu## and ##B^0 \to \mu \mu##. They are always measured together, as they have the same final state and nearly the same energy in the decay. Both are extremely rare (predicted: 3.6 parts in a billion and 1 part in 10 billion, respectively). A deviation from the predictions would be very interesting in the context of other anomalies. The first decay has been found, but the measurement accuracy is still poor, and the first clear detection of the second decay is still open.

    For LHCb, the B mesons are heavy particles and the trigger is designed to look for muon pairs, so it has a high efficiency for finding these decays - but LHCb has a low overall number of collisions. For ATLAS and CMS, the B mesons are light particles and the trigger has difficulties finding them, so the efficiency is low - but these experiments have a high number of collisions. In Run 1, both approaches led to roughly the same sensitivity, with LHCb a bit ahead of the other experiments. We'll see how this looks with Run 2 (2015-2018); I expect all three experiments to make relevant contributions. LHCb has a better energy resolution, so it performs better at seeing a small ##B^0 \to \mu \mu## peak directly next to the ##B_s \to \mu \mu## signal. Here is an image: red is ##B_s##, green is ##B^0##, the rest is background. By Run 3 (2021+) at the latest, I expect LHCb to be much better than the other experiments.
     
  18. Nov 2, 2017 #98

    Vanadium 50

    Staff Emeritus
    Science Advisor
    Education Advisor

    The LHC will end the run 1 week early, on December 4th. This is to allow CMS access to its pixel detector before CERN's end-of-year shutdown.
     
  19. Nov 7, 2017 #99

    mfb

    2016 Award

    Staff: Mentor

    Meanwhile the LHC is making extra-long runs: 0.77/fb for ATLAS, 0.74/fb for CMS, and 0.033/fb for LHCb in 27 hours.
    50/fb collected by ATLAS and CMS, 1.7/fb by LHCb.

    Regular data-taking will end on Friday, then we get special runs for two weeks, followed by a week of machine development, and then the usual winter shutdown. No lead collisions this year.
    Various things will need fixes, upgrades and so on. The 16L2 issue discussed earlier will be investigated, the CMS pixel detector can be accessed.

    First collisions in 2018 are expected for March to April.
     
  20. Nov 10, 2017 #100
    The LHC Page 1 shows that they are currently running tests at 2.51 TeV. Why this particular energy?
     


