What are the challenges faced by the LHC in the initial data-taking of 2017?

  • Thread starter: mfb
  • Tags: 2017, LHC
  • #51
mfb said:
Stable beams with 600 bunches, 30% of the design luminosity for ATLAS/CMS, 125% of the (lower) design luminosity for LHCb.
Just want to say: I love your running commentary. :oldbiggrin:
 
  • Like
Likes vanhees71, mfb and fresh_42
  • #52
Thanks.
I add it when something has happened since the last post - or make a new post if it is a major milestone.

The 2017 luminosity plots (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) are now online. LHCb data seems to be missing.

ATLAS had some issues with its magnet in the muon system, it was switched off last night. As long as the luminosity is low, that is not a large loss, and analyses that don't need muons can probably use the data. The magnet is running again now.

We'll probably get some more collisions with 600 bunches next night.
Edit: There they are. 30% design luminosity again.

Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.
 
Last edited:
  • Like
Likes member 563992, fresh_42 and vanhees71
  • #53
mfb said:
Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.

Was just wondering since I noticed the beam went up:

[Plot: integrated pp luminosity for 2017, as of 7 June]
 
  • #54
That should be the run from the night leading into Monday.
 
  • Like
Likes member 563992
  • #55
What units is luminosity measured in?

The graph shows fb - a physical explanation, please.

And what is a fill number?
 
  • #56
Inverse femtobarn (fb^-1). I wrote an Insights article about it.
1/fb corresponds to roughly 10^14 collisions.
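
As a quick cross-check of that number (the ~80 mb inelastic pp cross-section at 13 TeV is an assumed round value, not something quoted in this thread):

Python:
# Rough conversion from integrated luminosity to number of inelastic collisions:
# N = L_int * sigma_inel. The cross-section value is an assumed round number.
MB_TO_FB = 1e12          # 1 mb = 1e-3 b, 1 fb = 1e-15 b, so 1 mb = 1e12 fb
sigma_inel_mb = 80.0     # inelastic pp cross-section at 13 TeV, roughly 80 mb (assumption)
lumi_int_invfb = 1.0     # integrated luminosity in fb^-1

n_collisions = lumi_int_invfb * sigma_inel_mb * MB_TO_FB
print(f"~{n_collisions:.0e} inelastic collisions per {lumi_int_invfb:.0f} fb^-1")
# prints ~8e+13, i.e. roughly 10^14 collisions, as stated above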

Fill number is just counting how often protons have been put in the machine. After beams are dumped, the number is increased by 1 for the next protons to go in. "Fill 5750" is more convenient than "the protons we had in the machine at June 5 from 8:23 to 13:34".
 
  • Like
Likes member 563992 and houlahound
  • #57
After a few days of scrubbing, we are back to data-taking.

Scrubbing went well. We had a record number of 3.37*10^14 protons per beam, with 2820 bunches per beam (slightly exceeding the design value of 2808).
The heating of the magnets went down by ~40% in the most problematic region, enough to continue operation with higher beam intensities.

Currently there is a short (1 hour) run with 10 bunches per beam, then they'll complete the 600 bunch step (~3 hours), and then go on with 900 and 1200 bunches. Each step gets 20 hours of stable beams to verify nothing goes wrong. These two steps combined should deliver about 0.5/fb worth of data. Progress in the first weeks is always a bit slow, but the dataset is starting to get interesting.

Edit: We got 900 bunches. 68% design luminosity, about 50 inelastic ("destructive") proton-proton collisions per bunch crossing (design: ~25). Unfortunately the beam was dumped after just 20 minutes of data-taking for safety reasons. Now they are working on a cooling issue, which will take several hours.

Edit2: More 900 bunches (980 actually), nearly 0.15/fb of data collected on Wednesday. We'll probably get 1200 late Thursday to Friday.
 
Last edited:
  • Like
Likes Lord Crc and odietrich
  • #58
We had some runs with 900-980 bunches in the last two days, about 65% the design luminosity. Each step gets 20 hours before the number of bunches is increased. 900 is done now, the next step is 1200 bunches, probably this evening.
Edit in the evening: Stable beams with 1225 bunches, 75% the design luminosity. A bit lower than expected.

ATLAS and CMS both reached 0.5/fb of data. Not much compared to last year's 40/fb, but we are still in the very early phase of data-taking.
The machine operators found another way to increase the number of collisions a bit. The bunches have to cross at a minimum angle to avoid additional (parasitic) collisions away from the design collision point. That means the bunches don't overlap completely (see this image). With the HL-LHC in 2025+ it is planned to "rotate" the bunches, but that needs additional hardware that is not available now.
In long runs (many hours), the number of protons per bunch goes down over time - some are collided, some are lost elsewhere in the machine. That means the long-range interactions get less problematic, and the crossing angle can be reduced. This increases the number of collisions by a few percent. It does not change the maximal luminosity, but it reduces the drop of the luminosity over time.
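
A minimal sketch of the geometry behind this, using the standard reduction factor for Gaussian bunches crossing at an angle (all numbers are illustrative guesses, not the actual 2017 settings; the real gain in collected data is smaller because the angle is only reduced late in a fill):

Python:
import math

def reduction_factor(half_angle_rad, bunch_length_m, beam_size_m):
    """Geometric luminosity reduction F = 1 / sqrt(1 + (phi * sigma_z / sigma_xy)^2)
    for Gaussian bunches crossing at half-angle phi - a common approximation."""
    piwinski = half_angle_rad * bunch_length_m / beam_size_m
    return 1.0 / math.sqrt(1.0 + piwinski ** 2)

sigma_z = 0.075    # bunch length ~7.5 cm (assumption)
sigma_xy = 12e-6   # transverse beam size at the collision point ~12 um (assumption)
for half_angle_urad in (150, 140, 130, 120):
    F = reduction_factor(half_angle_urad * 1e-6, sigma_z, sigma_xy)
    print(f"half crossing angle {half_angle_urad} urad -> F = {F:.2f}")
# Shrinking the angle pushes F back toward 1, i.e. a few percent more
# collisions per second for the same bunches.
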
The LHC could get a very unusual record this year: The luminosity record for any type of collider.
Electrons and positrons are much lighter than protons. That means they emit more synchrotron radiation when they travel around in a circular collider. Faster electrons radiate more and get slower in the process. That is a very effective "cooling" mechanism; as a result you can put the electrons very close together, increasing the luminosity. KEKB set the world record of 2.11*10^34/(cm^2*s) in 2009 - with electrons/positrons.
The LHC could reach this value in 2017 - with protons, where it is much harder. As far as I know, it would be the first proton-collider ever to set the absolute luminosity record.
KEKB is currently being upgraded, and the new version (SuperKEKB) is supposed to reach 100*10^34/(cm^2*s), way above everything the LHC can achieve, but it will probably need until late 2018 to beat its old record, and several more years to reach its design value. There is a small time window where the LHC could get the record for a while.
 
Last edited:
  • Like
Likes Lord Crc and QuantumQuest
  • #59
The LHC is progressing fast this week. The 20 hours at 1200 bunches were completed today, and the machine switched to 1550 bunches. Collisions in ATLAS and CMS reached ~100% the design luminosity this evening. If everything goes well, we get 1800 bunches on Monday, clearly exceeding the design luminosity.

The luminosity record last year was 140% the design value, with a naive scaling we need 2150 bunches to reach this, and 2820 bunches will give 180% the design luminosity. Similar to last year, I expect that the luminosity goes up more, as they'll implement more and more improvements. The absolute luminosity record is certainly realistic.

Both experiments have collected 1.25/fb of data now, and the integrated luminosity (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) is going up rapidly.

Edit: They shortened the 1550 bunch step and went directly to 1740 bunches after just ~10 hours. Initial luminosity ~110% the design value.
 
Last edited:
  • Like
Likes member 563992, arivero, vanhees71 and 1 other person
  • #60
2029 bunches!
125% of the design luminosity. Approaching the 2016 record.

ATLAS and CMS now have 2.2/fb, twice the data they had three days ago. It is also half the total 2015 dataset.

The machine operators expect that they can go to 2300 bunches without issues. Afterwards the heat load from the electrons in the beam pipe could get too high. Then we need more scrubbing or simply more data-taking at 2300 bunches - that acts as scrubbing as well.
 
  • Like
Likes vanhees71 and Drakkith
  • #61
mfb said:
Then we need more scrubbing or simply more data-taking at 2300 bunches - that acts as scrubbing as well.

What do you mean? What does data-taking have to do with scrubbing?
 
  • #62
Scrubbing = have as many protons as possible circulating in the machine.
Data-taking = have as many protons as possible at high energy circulating in the machine.
The second approach has fewer protons, as higher energy means the magnets get more heat (that is the problem reduced by scrubbing).

Scrubbing runs have 2820 bunches, data-taking might be limited to 2300. The latter is not so bad - especially as it means more data keeps coming in. And that is what counts.

2.3/fb, 10 hours in stable beams already. We might get 2300 bunches as early as Wednesday evening.

Edit 13:00 CERN time: 21 hours in stable beams, 0.5/fb in less than a day, the beam will get dumped soon. New records for this year. And enough time to go up another step, to 2173 bunches.

Edit on Thursday 12:00: 2173 bunches, initial luminosity was about 140% the design value. At the level of the 2016 record. 2.8/fb in total. We'll get 2317 bunches later today, probably with a new luminosity record.
 
Last edited:
  • Like
Likes member 563992, hsdrop, Amrator and 3 others
  • #63
We have a new all-time luminosity record! Over the night, a fill with 2317 bunches had more than 140% the design luminosity. Close to 150%.

Unfortunately, the LHC encountered multiple issues in the last day, so the overall number of collisions collected was very low (just 1 hour of stable beams since yesterday afternoon). One of these issues led to a lower number of protons in the ring than usual - we can get new luminosity records once that is fixed.

The heat in the magnets is now close to its limit, so I expect data-taking at 2300 bunches for a while before the beam pipe is "clean" enough to put even more protons in.

Edit: They decided that the margin is large enough. 2460 bunches! And a bit more than 150% the design luminosity.
 
Last edited:
  • Like
Likes member 563992, Imager, odietrich and 1 other person
  • #64
The current fill has 2556 bunches. Is this a record? I looked but didn't find the max from 2016.

Also, earlier in the fill there was a notice about a failed 'Hobbit scan'. What's a Hobbit scan?
 
  • #65
websterling said:
The current fill has 2556 bunches. Is this a record? I looked but didn't find the max from 2016.
It is a record at 13 TeV. I don't know if we had more at the end of 2012, when they did some first tests with 25 ns bunch spacing (most of 2012 used 50 ns spacing, which limits you to ~1400 bunches).

No idea about the Hobbit scan. They are not new, but apart from the LHC status page I only find amused Twitter users.

The LHC started its first phase of machine development, followed by a week of technical stop. Data-taking will probably resume July 10th. Schedule: https://beams.web.cern.ch/sites/beams.web.cern.ch/files/schedules/LHC_Schedule_2017.pdf

In the last days we had a couple of runs starting at ~150% design luminosity, with a record of 158%. The initial phase of rapid luminosity increase is over. While the machine operators will try to increase the luminosity a bit more, this is basically how most of the year will look now.

ATLAS and CMS got 6.3/fb so far. For comparison: last year they collected about 40/fb.
LHCb collected 0.24/fb. Last year it was 1.9/fb.
https://lpc.web.cern.ch/lumiplots_2017_pp.htm
In both cases, the final 2017 dataset will probably be similar to the 2016 dataset. In 2018 we will get a bit more than that; 2019 and 2020 are reserved for machine and detector upgrades. Long-term schedule. With the 2017 dataset you can improve some limits a bit and the precision of some measurements a bit, but many studies will aim for an update after 2018.

LHC report: full house for the LHC
 
Last edited:
  • Like
Likes websterling
  • #67
A nice start for EPS.

The first baryon with two heavy quarks. It needs the production of two charm/anticharm pairs in the collision, with the two charm quarks having similar momenta - that makes the production very rare.

Now that ##ccu## (= quark content) has been found (with a very clear signal peak), ##ccd## should be possible to find as well - the mass should be extremely similar, but it has a shorter lifetime and a larger background. ##ccs## will be much more challenging - it needs an additional strange quark, which makes it even rarer. In addition, its lifetime should be even shorter.

Baryons with bottom and charm together: A naive estimate would suggest one such baryon per 20 double-charm baryons. That is probably too optimistic. The lifetime could be a bit longer. Maybe with data from 2016-2018?
Two bottom: Another factor 20, maybe more. Good luck.

--------------------

ATLAS showed updated results for Higgs decays to bottom/antibottom. Consistent with the Standard Model expectation, the significance went up a bit, now at 3.6 standard deviations. If CMS also shows new results, we can probably get 5 standard deviations in a combination. It is not surprising, but it would still be nice to have a good measurement of how often this decay happens.

I'll have to look through the EPS talks for more, but I didn't find the time for it today.
The technical stop is nearly done, the machine will continue operation tomorrow.
 
  • Like
Likes Amrator
  • #68
mfb said:
In addition, its lifetime should be even shorter.

Why? The strange lifetime is much, much longer than the charm. I'd expect that, up to final state effects, the lifetimes would be about the same.
 
  • #69
When they find those mesons or hadrons, and claim a discovery, I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...
 
  • #70
I don't know how large the effect would be, but the ##ccs## baryon should have a larger overall mass, although its decay products could have a higher mass as well. I didn't draw Feynman diagrams and I certainly didn't calculate it.
ChrisVer said:
When they find those mesons or hadrons, and claim a discovery, I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...
They found 300 in the whole Run 2 dataset. Double charm production is rare, and getting both charm quarks into the same hadron is rare even for this rare process.
 
  • #71
ChrisVer said:
I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...

Do you always denigrate the accomplishments of others?
 
  • #72
Vanadium 50 said:
Do you always denigrate the accomplishments of others?
That'd be offensive behavior on my part. Nope, I don't denigrate their or any other discovery...
I am just wondering what factors made it take so long. We have undeniably found heavier particles, so the machines that produced those heavy particles could also produce the ccu ...
 
  • #73
The mass is not everything that matters. See the top discovery long before the Higgs discovery.
The cross section, the decays, the backgrounds - all these things matter.

Could LHCb have seen a hint of this particle in Run 1? Probably. But manpower is limited, and they probably didn't look into this particular channel at that time.
Could other experiments have seen it before? Every other experiment has a much smaller dataset for heavy baryons. Probably not, at least not with high significance.

Edit: Beam is back in the machine. Some issues with the accelerating cavities are delaying operation. We'll probably get collisions on Sunday, with a rapidly increasing number of bunches, and get back to full intensity on Monday.
 
Last edited:
  • Like
Likes fresh_42
  • #74
I don't think I've ever seen anything here on PF about the LHC's beam tube vacuum. Seeing the problems/leaks I'm having with my little rough vacuum system at around 2 or 3 mTorr, how do you guys maintain what I assume is an ultrahigh vacuum on such a huge system? @mfb and @Vanadium 50.

Thanks

Edit: Thanks again guys. You've given me information about the beam pipe vacuum I would have never known about.
 
Last edited:
  • #75
Lots of pumps, lots of getters, and the fact that it's at cryogenic temperatures helps - residual gas tends to freeze.
 
  • Like
Likes dlgoff
  • #76
~800 getter pumps, plus various others.
The pressure in the beam pipe is 1 to 10 nPa. A weaker vacuum would mean too many protons get lost. That would be bad for the magnets (heat load) and for the luminosity (the runs are several hours long, and most protons should survive that long).

Some more issues with the machine delayed the recovery from the technical stop. We had 600 bunches overnight, now the cryogenics system has another issue. Once that is fixed, a few hours with 1300 bunches will be needed, plus a few more hours of tests, then the machine goes back to the previous record of 2556 bunches and the experiments can resume regular data-taking at full luminosity.
 
  • Like
Likes Drakkith and dlgoff
  • #77
mfb said:
The pressure in the beam pipe is 1 to 10 nPa.
:oldsurprised: I'm blown away by this. Numbers do tell.
 
  • #78
To put that in perspective, this is comparable to the lunar atmosphere. (It's better than 1 nPa at the IP, 10 nPa in the arcs, and the moon is about 0.3 nPa if I remember right. The moon's atmosphere is mostly argon; the LHC's residual gas is atomic hydrogen, molecular hydrogen, helium and possibly CO.)
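
To translate those pressures into gas densities, a small ideal-gas sketch (the temperatures are my own rough assumptions - roughly room temperature around the experiments, a cold beam screen in the arcs):

Python:
BOLTZMANN = 1.380649e-23   # J/K

def molecules_per_cm3(pressure_pa, temperature_k):
    """Ideal-gas number density n = p / (k_B * T), converted from m^-3 to cm^-3."""
    return pressure_pa / (BOLTZMANN * temperature_k) * 1e-6

# Assumed, illustrative conditions:
print(f"IP,  ~1 nPa at ~300 K: {molecules_per_cm3(1e-9, 300):.1e} molecules/cm^3")
print(f"arc, ~10 nPa at ~20 K: {molecules_per_cm3(10e-9, 20):.1e} molecules/cm^3")
# Of order 10^5 to a few 10^7 molecules per cm^3 - tiny compared to air,
# but enough that beam-gas collisions matter over runs lasting many hours.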
 
  • Like
Likes dlgoff
  • #79
The LHC beam pipe should be the largest vacuum of that quality (150 m^3).
The LIGO vacuum is much larger (10,000 m^3), but it has ~100 nPa.
The LHC magnets are in an insulation vacuum (to limit heat transfer) - 9000 m^3, but at a "high" pressure of about 100 µPa (1 µTorr).

For smaller volumes, it is possible to make the vacuum orders of magnitude better. Pump out the system, close all exits, and then cool everything down until all remaining atoms freeze out at the walls. BASE (also at CERN) has a vacuum so good that they can store antiprotons for more than a year without annihilations. The expected number of remaining gas atoms is zero in their 1.2 liter vacuum chamber, and there is no tool that can detect any remaining gas. They didn't observe annihilations; based on that they set an upper limit of ~1 fPa on the remaining pressure, or 3 atoms per cubic centimeter. Here is an article about it.
 
  • Like
Likes vanhees71 and dlgoff
  • #80
And not only does the pressure of the residual gas vary around the ring, but so does its composition.
 
  • Like
Likes dlgoff
  • #81
@mfb and @Vanadium 50

Thanks for your replies. I don't mean to hijack this thread, but these things are what I live for.
 
  • #82
Vanadium 50 said:
... but so does its composition.
Speaking of composition (materials): don't these very low pressures cause some components to evaporate, or degrade them?
 
  • #83
Steel and copper (outside the experiments) and beryllium (at the experiments) don't evaporate notably, especially at cryogenic temperatures (some parts of the beam pipe are at room temperature, however). The LHCb VELO detector uses an AlMg3 foil - no idea about that, but it has a small surface area anyway. I don't see how vacuum would degrade these materials.
 
  • Like
Likes dlgoff
  • #84
Recovery from the technical stop is still ongoing. The RF system (radio frequency cavities to accelerate the beam) cannot handle the 2556 bunches we had before; the problem is under investigation. With 2317 bunches it works, so for now the LHC is running with this lower number of bunches. Still enough to collect a lot of collisions. ATLAS and CMS reached 7/fb, LHCb collected 0.24/fb.

I made a thread about results from EPS.

Edit on Wednesday: Finally back at 2556 bunches.
 
Last edited:
  • #85
Both ATLAS and CMS reached 10/fb, about 1/4 of the 2016 dataset. 16 more weeks of data-taking are planned. At a pessimistic 2/(fb*week), we get the same number of collisions as last year; at an optimistic 3.5/(fb*week) we get 65% more.
If everything in the LHC worked perfectly 100% of the time, more than 5/(fb*week) would be possible, but that is unrealistic with such a complex machine.

We had a short break for machine development and van der Meer scans:
Cross section measurements are an important part of the physics program, and they require an accurate luminosity estimate. What the machine can deliver from normal operation has an uncertainty of a few percent. That is fine for the machine operators, but for physics you want the uncertainty to be smaller - 2% is nice, 1% is better. The luminosity depends on a couple of machine parameters:
$$\mathcal{L} = \frac{N_1 N_2 f N_b S}{4 \pi \sigma_x \sigma_y}$$
##f## is the revolution frequency - fixed and known to many decimal places.
##N_b## is the number of bunches per beam - known exactly.
##N_1## and ##N_2## are the numbers of protons in the bunches, they can be measured via the electromagnetic fields they induce when moving around the ring.
##S \leq 1## is a factor that takes the crossing angle into account, it can be calculated precisely. See also post 58.
##\sigma_x## and ##\sigma_y## are the widths of the bunches in x/y direction. There is no good direct way to measure that accurately.

To estimate the width of the bunches, the machine operators shift the relative positions of the beams around at the collision points while the experiments monitor the collision rate as a function of the shift. A fit to the observed rates leads to the widths. This procedure was named after Simon van der Meer.
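
A small numerical sketch of both pieces - plugging assumed, roughly LHC-like numbers into the formula, and a toy van der Meer scan fitted for the width (none of the values below are official machine parameters):

Python:
import math, random

# --- Evaluate the luminosity formula with assumed, LHC-like numbers ---
N1 = N2 = 1.2e11      # protons per bunch (assumption)
f_rev = 11245.0       # revolution frequency in Hz (from the ~26.7 km circumference)
N_b = 2500            # colliding bunch pairs (assumption)
S = 0.8               # crossing-angle factor (assumption)
sigma_x = sigma_y = 1.2e-3   # beam width at the collision point in cm (~12 um, assumption)

L = N1 * N2 * f_rev * N_b * S / (4 * math.pi * sigma_x * sigma_y)
print(f"L = {L:.2e} cm^-2 s^-1")                     # of order 10^34
print(f"  = {L * 86400 * 1e-39:.2f} fb^-1 per day")  # 1 fb^-1 = 1e39 cm^-2, if sustained 24 h

# --- Toy van der Meer scan: separate the beams, record rates, fit the width ---
Sigma_true = 17e-4    # convolved beam overlap width in cm (~17 um, assumption)
seps = [i * 5e-4 for i in range(-8, 9)]              # beam separations in cm
rates = [math.exp(-0.5 * (d / Sigma_true) ** 2) * (1 + random.gauss(0, 0.01))
         for d in seps]                              # Gaussian rate + 1% noise

# For a Gaussian, ln(rate) is linear in d^2 with slope -1/(2 Sigma^2):
xs = [d * d for d in seps]
ys = [math.log(r) for r in rates]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
Sigma_fit = math.sqrt(-1.0 / (2.0 * slope))
print(f"vdM fit: width = {Sigma_fit * 1e4:.1f} um (true value {Sigma_true * 1e4:.1f} um)")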
 
  • Like
Likes strangerep, odietrich and fresh_42
  • #86
A few updates: The LHC experiments got collisions at a high rate, and the machine operators found some methods to improve the rate further.

ATLAS and CMS reached 15.5/fb. That is 11 days since they had 10/fb, which means 0.5/(fb*day) or 3.5/fb per week.
From Wednesday 6:46 to Thursday 6:46 this week we had a record of 0.83/fb in 24 hours. For comparison: in these 24 hours, the LHC experiments had 4 times the number of Higgs bosons and 8 times the number of top quarks that the Tevatron experiments had - in their 20 years of operational history.

LHCb surpassed 0.5/fb, nearly 1/3 of the 2016 dataset.

The stepwise reduction of the crossing angle, discussed earlier, was studied in more detail. Previously it was reduced in steps of 10 µrad (150 -> 140 -> 130 -> ...). That increases the collected data by about 3.5%. The process now works so smoothly that it became possible to reduce it in steps of 1 µrad, always following the optimal angle. This increases the number of collisions by an additional 1.5%. That doesn't sound like much, but all these small improvements add up.

The number of protons per bunch went up a bit. We reached a record of 3.1*10^14 protons per beam at high energy, or 320 MJ per beam. Correspondingly, the initial luminosity reached a new record: 174% the design value.
The machine operators tried to get even more, but that led to problems, so they added a day of scrubbing.

Another thing discussed is the focusing of the beams at the collision points. Based on the analysis of the machine development block, it can be improved a bit more. That could increase the luminosity by ~20%. 1.74*1.2=2.09. There is still hope for the absolute luminosity record!
 
  • Like
Likes Lord Crc and stoomart
  • #87
ATLAS and CMS reached 20/fb. We have gained 4.5/fb since the previous post 21 days ago, or 1.5/fb per week - even below the pessimistic estimate from above. You can see this clearly in the luminosity plots (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) as well.

A problem appeared in a region called 16L2, which led to the dump of many fills, often before collisions even started. Although the cause is not well understood, the process is always the same: some beam particles are lost in this region, and a few milliseconds (tens of revolutions) later many more particles are lost roughly at the opposite side of the ring - more than acceptable, which triggers a beam dump. This can happen with either beam 1 or beam 2, although they fly in separate beam pipes in 16L2.
The problem had appeared early in the year already, but until August the dump rate could be managed by adjusting the control magnets in this region a bit. With increasing beam currents it got more problematic, and the machine operators wanted to get rid of the problem. The losses look gas-induced: gas can stick to a part called the "beam screen" and get released during the run, and the collision of the beam with gas particles leads to the observed losses. The usual approach is to heat this beam screen so that all the gas evaporates and gets pumped out or sticks to even colder parts of the beam pipe, where it stays.
That was done on August 10 - and then everything got worse. Now more than half of the fills were dumped due to 16L2, even at lower numbers of bunches. The smaller fraction of time in stable beams plus the reduced number of bunches led to the slower accumulation of collision data in the last three weeks. The leading hypothesis is gas in other components of 16L2 that redistributed when heating the beam screen and other components, leading to even more gas there.

What to do?
  • The problem could be solved by heating up the whole sector and pumping it out properly. That would probably take 2-3 months; doing it now would mean most of the time planned for data-taking this year is gone. Unless data-taking becomes completely impossible, this won't be done before the winter shutdown.
  • The machine operators are checking whether there is a stable running condition that works for now. The last few runs with 1550 bunches were promising; at this rate the LHC would be limited to ~2/fb per week, but that is still a reasonable rate that would double the 2016 dataset by the end of the year.
  • Gaps between bunches can reduce losses, e.g. "8 bunches in a row, then 4 slots free, then 8 bunches in a row, then 4 slots free, ...". This might be tested. It would also mean the number of bunches has to be reduced compared to the initial plan, but if it reduces the number of dumps sufficiently it can be worth it.
  • There are some special runs planned/proposed for 2018, some at lower energies and some with a very low collision rate, for something like 1 week in total. They might be shifted to 2017, as they won't be affected by the 16L2 issue as much as regular operation at high energy and collision rate.
  • The machine operators are discussing what else can be done.

LHC report: Something in the nothing
 
  • Like
Likes Amrator, arivero, stoomart and 4 others
  • #88
ATLAS and CMS reached about 24/fb.
The mitigation approaches, especially the "8 bunches, then 4 slots free, repeat" pattern, worked; in the last days ~2/3 of the time could be spent on data-taking. The luminosity is lower, but still at the design value. There are still some dumps due to 16L2, but they don't break everything any more.

A machine development block started (schedule: https://beams.web.cern.ch/sites/beams.web.cern.ch/files/schedules/LHC_Schedule_2017.pdf), followed by a few days of technical stop. About 9 weeks of data-taking are left in 2017. Unless there are new ideas on how to solve the 16L2 issue, I guess they will just keep the current configuration; it should lead to about 2-2.5/fb per week, so we will still get more than the 40/fb of last year.

LHC Report: operation with holes
 
  • Like
Likes Amrator, Lord Crc, protonsarecool and 1 other person
  • #89
Back to data-taking. Currently 1900 bunches, with the "8 bunches, 4 free, repeat" pattern. Initial luminosity was 120% the design value, quite nice for the relatively low number of bunches.

The machine operators work around the 16L2 issue:
  • Combine the bunch/empty pattern with BCMS, a different way to prepare the beam in the preaccelerators. This will reduce the number of bunches to 1800, but give more collisions per bunch crossing. This will be tested in the next few days.
  • Focus the beam better at the collision point. This was tested during the machine development block, and the operators are confident they can do this (technically: reduce ##\beta^*## from 40 cm to 30 cm).
  • Move some special runs from 2018 to 2017:
    • collisions of xenon ions, probably for a day in November
    • proton-proton collisions at lower energy and lower collision rate to cross-check heavy ion results (as they have a lower energy per nucleon) and for some studies that don't care much about energy (or even work better at low energy) but suffer from many simultaneous collisions. About two weeks in December.

Other news: The machine development block included a run where bunch crossings led to an average of 100 simultaneous collisions in ATLAS and CMS, compared to 40-50 during normal operation. This is an interesting test for future running conditions (~150-200 expected for the HL-LHC upgrade). These are averages; individual bunch crossings vary in the number of collisions, of course. An average of 100 means you also get events with more than 130 collisions.
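
To put a number on that last statement, a quick Poisson estimate (treating the number of collisions per crossing as Poisson-distributed with mean 100, which ignores bunch-to-bunch differences):

Python:
import math

def prob_more_than(k, mu):
    """P(N > k) for a Poisson distribution with mean mu (pmf summed in log space)."""
    cdf = sum(math.exp(-mu + n * math.log(mu) - math.lgamma(n + 1)) for n in range(k + 1))
    return 1.0 - cdf

p = prob_more_than(130, 100)
print(f"P(more than 130 collisions | mean 100) = {p:.1e}")
# Of order 10^-3 per bunch crossing - rare, but with tens of millions of
# bunch crossings per second such events still happen constantly.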
 
  • Like
Likes arivero and Amrator
  • #90
I assume it must be very challenging to track the collision products back to the specific collision that produced them.
 
  • #91
It is. Most collisions are quite soft, however, and most analyses look for hard interactions that produce high energy particles.

With charged particles (especially muons and electrons) you have nice tracks pointing to the right primary vertex. With uncharged particles it is more difficult.
The worst case is the transverse momentum balance, where you use conservation of momentum to look for particles that don't interact with the detector at all (see here, the part on supersymmetry). You can easily get a wrong result if you assign particles to the wrong primary vertex.

All four big detectors will replace/upgrade their innermost detectors to handle more collisions per bunch crossing in the future.

---

ATLAS and CMS reached 25/fb. With a bit more protons per bunch we reached 140% of the design luminosity and very stable running conditions. The better focusing is in place and works.

---

Edit Friday: 126 billion protons per bunch, should be a new record. About 160% the design luminosity at the start of the run - with just 1916 bunches (1909 colliding in ATLAS and CMS). About 60 (inelastic) proton-proton collisions per bunch crossing (75 if we count elastic scattering).
BCMS could increase this even more.

The detectors were designed for 25 collisions per bunch crossing.

LHCb reached 1/fb.
 
Last edited:
  • Like
Likes dlgoff
  • #92
A total of 32/fb collected for ATLAS and CMS. 4/fb in the last week - a record speed, clearly visible in the luminosity plots (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) as well.
6 weeks of data-taking left; at 3/fb per week we will end up with 50/fb.

For both ATLAS and CMS, the machine can now deliver more than 60 simultaneous collisions per bunch crossing - too many for the experiments, so they ask to limit that to about 60. Further improvements this year won't increase the peak luminosity, but they can increase the time this luminosity can be maintained (afterwards it goes down as usual, eventually the beams get dumped and a new fill starts). For next year the number of bunches can be increased again, increasing the luminosity without increasing the number of collisions per bunch crossing.

Edit: Plot.
The run starts with the maximal luminosity (region A), here 180% the design value, to find the position for head-on collisions of the beams. Then the beams are quickly shifted a bit with respect to each other to reduce the luminosity to the target of 150% the design value (region B). After several minutes, when the luminosity dropped by about 1% (due to a loss of protons in the collisions and decreasing focusing), the beams are shifted back a little bit to reach the target again. This repeats until the beams are colliding head-on again. Afterwards the machine is not able to deliver this luminosity target any more, and the luminosity goes down over time (region C). Reducing the crossing angle helps a bit to keep the luminosity higher later in the run.

The high-luminosity LHC will use this method extensively, probably with most of the time spent in region B.
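
A toy version of the leveling logic described above (the decay time, the target and the step size are invented for illustration; the brief head-on period at the very start, region A, is skipped):

Python:
import math

peak = 1.8           # initial head-on capability, in units of the design luminosity (assumption)
target = 1.5         # leveling target requested by the experiments (assumption)
tau_hours = 12.0     # effective luminosity lifetime (assumption)
dt = 0.1             # time step in hours

head_on = peak       # what the machine could deliver head-on at this moment
for step in range(241):                     # follow the fill for 24 hours
    t = step * dt
    delivered = min(head_on, target)        # leveled while head-on capability > target
    region = "B (leveled)" if head_on > target else "C (decay)"
    if step % 20 == 0:                      # print every 2 hours
        print(f"t = {t:4.1f} h   head-on {head_on:.2f}   delivered {delivered:.2f}   region {region}")
    # crude model: the head-on capability decays exponentially as protons burn off
    head_on *= math.exp(-dt / tau_hours)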

[Plot: luminosity leveling over one fill - regions A, B and C]
 
Last edited:
  • Like
Likes vanhees71, Lord Crc and Drakkith
  • #93
mfb said:
The LHC could get a very unusual record this year: The luminosity record for any type of collider.
Electrons and positrons are much lighter than protons. That means they emit more synchrotron radiation when they travel around in a circular collider. Faster electrons radiate more and get slower in the process. That is a very effective "cooling" mechanism; as a result you can put the electrons very close together, increasing the luminosity. KEKB set the world record of 2.11*10^34/(cm^2*s) in 2009 - with electrons/positrons.
The LHC could reach this value in 2017 - with protons, where it is much harder. As far as I know, it would be the first proton-collider ever to set the absolute luminosity record.
The LHC might have achieved it. The achieved value is consistent with 2.11*10^34/(cm^2*s) within the uncertainty of the calibration (a few percent). Unfortunately the run statistics record the leveled luminosity, not the peak value, so there is a bit of guesswork involved.

Some screen captures of the live display for three different runs, not necessarily with the highest achieved values:
[Three screenshots of the live luminosity display]


Maybe we'll get a more official statement for the luminosity values soon.
Edit: See below.

The last few days we had a couple of long runs with luminosity leveling and then many more hours of collisions, and not too much time between the runs. Great conditions to collect data. ATLAS and CMS accumulated 36.5/fb, and there are 5 weeks of data-taking left. 45/fb seems easy, we'll probably get more than 50/fb, and even 55/fb is not impossible.

For 2018, I expect that both ATLAS and CMS will try to optimize their algorithms to handle even more collisions per bunch crossing (pileup), just in case it becomes important. The gas issue should get fixed, which means the LHC can be filled with more bunches, so the same luminosity can be achieved with a lower pileup (= no need for luminosity leveling). Well, maybe we get both together: more bunches and so much pileup that leveling is important...

The LHC had a day of xenon-xenon collisions last week. Nothing surprising here - it will be a nice data point in between protons (small) and lead (big).

Edit: A new run just started. The shown luminosity exceeded the record set by KEKB.

[Screenshot of the live luminosity display showing the new record value]


Edit2: 218% the design luminosity in the following run. Looks like the LHC has the record.
 
Last edited:
  • Like
Likes Lord Crc
  • #94
0.93/fb collected in the last 24 hours - about the number of collisions the Tevatron experiments collected in a year. In addition, the LHC collisions are at 13 TeV instead of 2 TeV.

This is way beyond the expectations the machine operators or the experiments had for 2017, especially with the vacuum problem mentioned earlier.

And it is nearly certain that the luminosity record is there! Note the comment on the right.

[Screenshot of the luminosity display; note the comment on the right]
 
  • Like
Likes fresh_42
  • #95
mfb said:
This is way beyond the expectations the machine operators or the experiments had for 2017, especially with the vacuum problem mentioned earlier.

It's alright if you're on an experiment that doesn't mind pile-up. Personally I was hoping for ~2.0/fb for LHCb from >2,600-bunch beams, but it looks like we might get ~1.7/fb like last year.
 
  • #96
ATLAS and CMS do mind pileup - see the luminosity leveling done for them. Sure, they can work with a much higher pileup than LHCb, which in turn runs with a much higher pileup than ALICE.
For LHCb, all the improvements in the number of particles and the focusing are useless; only the number of bunches counts - and there the vacuum issue determines the limit. The current availability is still excellent, and 1.7/fb is close to 2/fb.

Edit: We had more instances of 0.9x/fb in 24 hours. That happens only if everything is perfect and two long runs follow each other without any issues during re-filling. Unless they manage to keep the luminosity leveling going even longer (from even more protons per bunch?), this is unlikely to increase this year. It still gives a rate of more than 5/fb per week, however.
 
  • #97
ATLAS and CMS reached 45/fb, LHCb accumulated 1.55/fb.
During the week from Oct 16 to Oct 22 ATLAS and CMS collected 5.2/fb, about half the integrated luminosity the Tevatron experiments got in 20 years.
2.5 weeks left for regular data taking.

The high data rate is great for measurements/searches of rare events, but it is also challenging for the detectors, related infrastructure and some analyses.
  • The readout electronics was not designed for such a high rate of interesting events - the triggers have to get more selective. This doesn't matter much if you are looking for new very heavy particles (events with a lot of energy in the detectors are rare and are always kept), but it hurts analyses studying or searching for lighter particles, where you have to find the signal events among a lot of background events. In addition, there are now more background collisions even in the signal events.
  • More collisions at the same time make it harder to identify particles properly and lead to more misreconstructed objects, especially if the algorithms were not designed for it.
  • The high data rate leads to a delay in the software trigger stage. Based on Run 1 (2010-2012) it was expected that the experiments would take data about 1/3 of the time. A trigger system that only runs live would be idle 2/3 of the time. To avoid this, ATLAS, CMS and LHCb all implemented deferred triggers: some events that cannot be studied in time are simply written to a temporary storage and processed later. If the LHC has stable beams 1/3 of the time this gives a huge boost in processing power - up to a factor 3. That means the trigger algorithms can get more complex and time-consuming. But now the LHC collides protons 2/3 of the time (https://lpc.web.cern.ch/lumiplots_2017_pp.htm), and suddenly this system can only give up to a factor 1.5 (see the sketch after this list). The result is a backlog of data that still needs processing. It can be processed after regular data taking ends.
  • The simulations done in advance don't represent the data accurately. They were made according to the expected running conditions, which means a lower pileup and more bunches in the machine than the actual conditions now. This can be fixed later with additional simulation datasets.
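
The deferred-trigger arithmetic from the list above, spelled out (a sketch of the reasoning only, assuming the buffered events can fully use the time without stable beams):

Python:
def deferred_trigger_headroom(stable_beam_fraction):
    """If events are buffered and processed during the time without stable beams,
    the CPU time available per hour of data-taking grows from 1 to 1/fraction."""
    return 1.0 / stable_beam_fraction

print(f"Run 1 planning, beams ~1/3 of the time: up to x{deferred_trigger_headroom(1/3):.1f}")
print(f"2017 reality,   beams ~2/3 of the time: up to x{deferred_trigger_headroom(2/3):.1f}")
# x3.0 vs x1.5: the same deferral scheme buys only half the headroom,
# hence the processing backlog mentioned above.
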
An interesting case study will be the decays ##B_s \to \mu \mu## and ##B^0 \to \mu \mu##. They are always measured together as they have the same final state and nearly the same energy in the decay. Both are extremely rare (predicted: 3.6 parts in a billion and 1 part in 10 billion, respectively). A deviation from the predictions would be very interesting in the context of other anomalies. The first decay has been found, but the measurement accuracy is still poor, and the first clear detection of the second decay is still open.
For LHCb, the B mesons are heavy particles and the trigger is designed to look for muon pairs, so it has a high efficiency to find these decays - but LHCb has a low overall number of collisions. For ATLAS and CMS, the B mesons are light particles and the trigger has difficulties finding them, so the efficiency is low - but these experiments have a high number of collisions. In Run 1, both approaches led to roughly the same sensitivity, with LHCb a bit ahead of the other experiments. We'll see how this looks with Run 2 (2015-2018). I expect all three experiments to make relevant contributions. LHCb has a better energy resolution, so it performs better in seeing a small ##B^0 \to \mu \mu## peak directly next to the ##B_s \to \mu \mu## signal. Here is an image, red is ##B_s##, green is ##B^0##, the rest is background. By Run 3 (2021+) at the latest, I expect LHCb to be much better than the other experiments.
 
  • Like
Likes odietrich, Lord Crc and Amrator
  • #98
The LHC will end the run 1 week early, on December 4th. This is to allow CMS access to their pixel detector before CERN's end-of-year shutdown.
 
  • Like
Likes Amrator and mfb
  • #99
Meanwhile the LHC makes extra long runs. 0.77/fb for ATLAS, 0.74/fb for CMS, 0.033/fb for LHCb in 27 hours.
50/fb collected by ATLAS and CMS, 1.7/fb by LHCb.

Regular data-taking will end on Friday, then we get special runs for two weeks, followed by a week of machine development, and then the usual winter shutdown. No lead collisions this year.
Various things will need fixes, upgrades and so on. The 16L2 issue discussed earlier will be investigated, the CMS pixel detector can be accessed.

First collisions in 2018 are expected for March to April.
 
  • Like
Likes Lord Crc
  • #100
The LHC Page 1 shows that they are currently running tests at 2.51 TeV. Why this particular energy?
 
