Just want to say: I love your running commentary.
I add it when something happened since the last post - or make a new post if it is a major milestone.
The luminosity plots are now online. LHCb data seems to be missing.
ATLAS had some issues with its magnet in the muon system, it was switched off last night. As long as the luminosity is low, that is not a large loss, and analyses that don't need muons can probably use the data. The magnet is running again now.
We'll probably get some more collisions with 600 bunches next night.
Edit: There they are. 30% design luminosity again.
Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.
Was just wondering since I noticed the beam went up:
That should be the run from the night to Monday.
What units is luminosity measured in?
Graph shows fb, physical explanation please.
Inverse femtobarn (fb^-1). I wrote an Insights article about it.
1/fb corresponds to roughly 10^14 collisions.
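For readers who want to check the arithmetic: with an inelastic proton-proton cross section of roughly 80 mb at 13 TeV, a quick estimate (round figures, not official values) gives:

```python
# Rough estimate of inelastic collisions in 1/fb of integrated luminosity.
# Assumes an inelastic pp cross section of ~80 mb at 13 TeV (round figure).
INELASTIC_XSEC_MB = 80.0      # millibarn
MB_TO_FB = 1e12               # 1 mb = 10^12 fb
integrated_lumi_invfb = 1.0   # one inverse femtobarn

collisions = INELASTIC_XSEC_MB * MB_TO_FB * integrated_lumi_invfb
print(f"{collisions:.1e}")    # ~8e13, i.e. roughly 10^14
```

The exact number depends on which cross section you count, but the order of magnitude is 10^14 per 1/fb.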
Fill number is just counting how often protons have been put in the machine. After beams are dumped, the number is increased by 1 for the next protons to go in. "Fill 5750" is more convenient than "the protons we had in the machine at June 5 from 8:23 to 13:34".
After a few days of scrubbing, we are back to data-taking.
Scrubbing went well. We had a record number of 3.37*10^14 protons per beam, with 2820 bunches per beam (slightly exceeding the design value of 2808).
The heating of the magnets went down by ~40% in the most problematic region, enough to continue operation with higher beam intensities.
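As a quick sanity check on the record numbers above (rounded figures):

```python
# Protons per bunch during the record scrubbing run quoted above.
protons_per_beam = 3.37e14
bunches = 2820

protons_per_bunch = protons_per_beam / bunches
print(f"{protons_per_bunch:.2e}")  # ~1.2e11 protons per bunch
```

That is close to the nominal LHC bunch intensity of about 1.15*10^11 protons.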
Currently there is a short (1 hour) run with 10 bunches per beam; then they'll complete the 600 bunch step (~3 hours), and then go on with 900 and 1200 bunches. Each step gets 20 hours of stable beams to verify nothing goes wrong. These two steps combined should deliver about 0.5/fb worth of data. Progress in the first weeks is always a bit slow, but the dataset is starting to get interesting.
Edit: We got 900 bunches. 68% design luminosity, about 50 inelastic ("destructive") proton-proton collisions per bunch crossing (design: ~25). Unfortunately the beam was dumped after just 20 minutes of data-taking for safety reasons. Now they are working on a cooling issue, which will take several hours.
Edit2: More 900 bunches (980 actually), nearly 0.15/fb of data collected on Wednesday. We'll probably get 1200 late Thursday to Friday.
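The ~50 collisions per bunch crossing quoted for the 900-bunch fill can be cross-checked with the standard pileup estimate mu = L*sigma/(n_b*f_rev). A rough sketch, assuming a design luminosity of 10^34/(cm^2*s), an inelastic cross section of ~80 mb, and the LHC revolution frequency of 11245 Hz:

```python
# Cross-check of the ~50 inelastic collisions per bunch crossing quoted above.
# Assumed round figures: design luminosity 1e34 /cm^2/s, inelastic cross
# section ~80 mb, LHC revolution frequency 11245 Hz.
DESIGN_LUMI = 1e34         # cm^-2 s^-1
XSEC_CM2 = 80e-27          # 80 mb in cm^2
F_REV = 11245.0            # revolutions per second

n_bunches = 900
lumi = 0.68 * DESIGN_LUMI  # 68% of design, as in the fill above

# Average number of inelastic collisions per bunch crossing (pileup):
mu = lumi * XSEC_CM2 / (n_bunches * F_REV)
print(round(mu, 1))        # roughly 50, matching the number quoted
```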
We had some runs with 900-980 bunches in the last two days, about 65% the design luminosity. Each step gets 20 hours before the number of bunches is increased. 900 is done now, the next step is 1200 bunches, probably this evening.
Edit in the evening: Stable beams with 1225 bunches, 75% the design luminosity. A bit lower than expected.
ATLAS and CMS both reached 0.5/fb of data. Not much compared to last year's 40/fb, but we are still in the very early phase of data-taking.
The machine operators found another way to increase the number of collisions a bit. The bunches have to cross at a small angle to avoid additional collisions away from the design interaction point. That means the bunches don't overlap completely (see this image). With the HL-LHC in 2025+ it is planned to "rotate" the bunches with crab cavities, but that additional hardware is not available now.
In long runs (many hours), the number of protons per bunch goes down over time - some are collided, some are lost elsewhere in the machine. That means the long-range interactions get less problematic, and the crossing angle can be reduced. This increases the number of collisions by a few percent. It does not change the maximal luminosity, but it reduces the drop of the luminosity over time.
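The effect can be sketched with the usual geometric reduction factor F = 1/sqrt(1 + (theta_c*sigma_z/(2*sigma_x))^2), where theta_c is the full crossing angle, sigma_z the bunch length, and sigma_x the transverse beam size at the interaction point. The numbers below are illustrative, roughly LHC-like values, not the actual machine settings:

```python
import math

# Geometric luminosity reduction from the crossing angle - a sketch of the
# standard factor F = 1/sqrt(1 + (theta_c * sigma_z / (2*sigma_x))^2).
# All numbers are illustrative, roughly LHC-like, not operational settings.
def reduction_factor(theta_c_rad, sigma_z_m, sigma_x_m):
    """theta_c: full crossing angle; sigma_z: bunch length;
    sigma_x: transverse beam size at the interaction point."""
    piwinski = theta_c_rad * sigma_z_m / (2.0 * sigma_x_m)
    return 1.0 / math.sqrt(1.0 + piwinski**2)

full_angle = reduction_factor(300e-6, 0.08, 12e-6)  # ~300 urad crossing angle
reduced = reduction_factor(280e-6, 0.08, 12e-6)     # angle lowered late in a fill
gain_pct = 100.0 * (reduced / full_angle - 1.0)
print(f"gain: {gain_pct:.1f}%")  # a few percent more luminosity
```

Reducing the angle once the bunch intensity has dropped recovers a few percent of luminosity, consistent with the statement above.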
The LHC could get a very unusual record this year: The luminosity record for any type of collider.
Electrons and positrons are much lighter than protons. That means they emit more synchrotron radiation when they travel around in a circular collider. Faster electrons radiate more and get slower in the process. That is a very effective "cooling" mechanism, as a result you can put the electrons very close together, increasing the luminosity. KEKB set the world record of 2.11*10^34/(cm^2*s) in 2009 - with electrons/positrons.
The LHC could reach this value in 2017 - with protons, where it is much harder. As far as I know, it would be the first proton-collider ever to set the absolute luminosity record.
KEKB is currently being upgraded, and the new version (SuperKEKB) is supposed to reach 100*10^34/(cm^2*s), far above everything the LHC can achieve, but it will probably take until late 2018 to beat its old record, and several more years to reach its design value. There is a small time window where the LHC could hold the record for a while.
The LHC is progressing fast this week. The 20 hours at 1200 bunches were completed today, and the machine switched to 1550 bunches. Collisions in ATLAS and CMS reached ~100% the design luminosity this evening. If everything goes well, we get 1800 bunches on Monday, clearly exceeding the design luminosity.
The luminosity record last year was 140% the design value, with a naive scaling we need 2150 bunches to reach this, and 2820 bunches will give 180% the design luminosity. Similar to last year, I expect that the luminosity goes up more, as they'll implement more and more improvements. The absolute luminosity record is certainly realistic.
Both experiments collected 1.25/fb of data now, and the trend is going upwards rapidly.
Edit: They shortened the 1500 bunch step and went directly to 1740 after just ~10 hours. Initial luminosity ~110% the design value.
125% of the design luminosity. Approaching the 2016 record.
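The naive scaling used above (peak luminosity proportional to the number of bunches, anchored at ~100% design with 1550 bunches) can be written out; real fills deviate because bunch intensity, emittance and crossing angle matter as well:

```python
# Naive bunch-number scaling of the peak luminosity, anchored to the
# ~100% design luminosity reached with 1550 bunches above.
# Real fills deviate: bunch intensity, emittance and crossing angle matter too.
REF_BUNCHES = 1550
REF_LUMI_PCT = 100.0  # percent of design luminosity

def naive_lumi_pct(n_bunches):
    return REF_LUMI_PCT * n_bunches / REF_BUNCHES

for n in (2150, 2820):
    print(n, round(naive_lumi_pct(n)))  # ~140% and ~180% of design
```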
ATLAS and CMS now have 2.2/fb, twice the data they had three days ago. It is also half the total 2015 dataset.
The machine operators expect that they can go to 2300 bunches without issues. Afterwards the heat load from the electrons in the beam pipe could get too high. Then we need more scrubbing or simply more data-taking at 2300 bunches - that acts as scrubbing as well.
What do you mean? What does data-taking have to do with scrubbing?
Scrubbing = have as many protons as possible circling in the machine.
Data-taking = have as many protons as possible at high energy circling in the machine
The second approach has fewer protons, as higher energy means the magnets get more heat (that is the problem reduced by scrubbing).
Scrubbing runs have 2820 bunches, data-taking might be limited to 2300. The latter is not so bad - especially as it means more data keeps coming in. And that is what counts.
2.3/fb, 10 hours in stable beams already. We might get 2300 bunches as early as Wednesday evening.
Edit 13:00 CERN time: 21 hours in stable beams, 0.5/fb in less than a day, the beam will get dumped soon. New records for this year. And enough time to go up another step, to 2173 bunches.
Edit on Thursday 12:00: 2173 bunches, initial luminosity was about 140% the design value. At the level of the 2016 record. 2.8/fb in total. We'll get 2317 bunches later today, probably with a new luminosity record.
We have a new all-time luminosity record! Over the night, a fill with 2317 bunches had more than 140% the design luminosity. Close to 150%.
Unfortunately, the LHC encountered multiple issues in the last day, so the overall number of collisions collected was very low (just 1 hour of stable beams since yesterday afternoon). One of these issues led to a lower number of protons in the ring than usual - we can get new luminosity records once that is fixed.
The heat in the magnets is now close to its limit, I expect that we get data-taking at 2300 bunches for a while before the beam pipe is "clean" enough to put even more protons in.
Edit: They decided that the margin is large enough. 2460 bunches! And a bit more than 150% the design luminosity.
The current fill has 2556 bunches. Is this a record? I looked but didn't find the max from 2016.
Also, earlier in the fill there was a notice about a failed 'Hobbit scan'. What's a Hobbit scan?
It is a record at 13 TeV. I don't know if we had more at the end of 2012 where they did some first tests with 25 ns bunch spacing (most of 2012 had 50 ns, where you are limited to ~1400 bunches).
No idea about the Hobbit scan. They are not new, but apart from the LHC status page I only find amused Twitter users.
The LHC started its first phase of machine development, followed by a week of technical stop. Data-taking will probably resume July 10th. Here is the schedule.
In the last few days we had a couple of runs starting at ~150% design luminosity, with a record of 158%. The initial phase of rapid luminosity increase is over. While the machine operators will try to increase the luminosity a bit more, this is basically how most of the year will look now.
ATLAS and CMS got 6.3/fb so far. For comparison: Last year they collected about 40/fb.
LHCb collected 0.24/fb. Last year it was 1.9/fb.
In both cases, the final 2017 dataset will probably be similar to the 2016 dataset. In 2018 we will get a bit more than that, 2019 and 2020 are reserved for machine and detector upgrades. Long-term schedule. With the 2017 dataset you can improve some limits a bit, you can improve the precision of some measurements a bit, but many studies will aim for an update after 2018.
LHC report: full house for the LHC
So mfb, what can you tell us about the recent discovery? https://press.cern/press-releases/2...nce-observation-new-particle-two-heavy-quarks
A nice start for EPS.
The first baryon with two heavy quarks. It needs the production of two charm/anticharm pairs in the collision, with the charm quarks having similar momenta - that makes the production very rare.
Now that ##ccu## (=quark content) is found (with a very clear signal peak), ##ccd## should be possible to find as well - the mass should be extremely similar, but it has a shorter lifetime and a larger background. ##ccs## will be much more challenging - it needs an additional strange quark, that makes it even rarer. In addition, its lifetime should be even shorter.
Baryons with bottom and charm together: A naive estimate would suggest one such baryon per 20 double-charm baryons. That is probably too optimistic. The lifetime could be a bit longer. Maybe with data from 2016-2018?
Two bottom: Another factor 20, maybe more. Good luck.
ATLAS showed updated results for Higgs decays to bottom/antibottom. Consistent with the Standard Model expectation, the significance went up a bit, now at 3.6 standard deviations. If CMS also shows new results, we can probably get 5 standard deviations in a combination. It is not surprising, but it would still be nice to have a good measurement of how often this decay happens.
I'll have to look through the EPS talks for more, but I didn't find the time for it today.
The technical stop is nearly done, the machine will continue operation tomorrow.
Why? The strange lifetime is much, much longer than the charm lifetime. I'd expect that, up to final state effects, the lifetimes would be about the same.
When they find those mesons or hadrons, and claim a discovery, I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...
I don't know how large the effect would be, but it should have a larger overall mass, although its decay products could have a higher mass as well. I didn't draw Feynman diagrams and I certainly didn't calculate it.
They found 300 in the whole Run 2 dataset. Double charm production is rare, and both charm in the same hadron is rare even for this rare process.
Do you always denigrate the accomplishments of others?
That'd be an offensive behavior from my side. Nope, I don't denigrate their or any discovery...
I am just wondering what factors made it take so long. We have undeniably found heavier particles, so the machines that produced those heavy particles could also produce the ccu ...
The mass is not everything that matters. See the top discovery long before the Higgs discovery.
The cross section, the decays, the backgrounds - all these things matter.
Could LHCb have seen a hint of this particle in Run 1? Probably. But manpower is limited, they probably didn't look into this particular channel at that time.
Could other experiments have seen it before? Every other experiment has a much smaller dataset for heavy baryons. Probably not, at least not with a high significance.
Edit: Beam is back in the machine. Some issues with the accelerating cavities are delaying operation. We'll probably get collisions on Sunday, with a rapidly increasing number of bunches, and be back at full intensity on Monday.
I don't think I've ever seen anything here on PF about the LHC's beam tube vacuum. Seeing the problems/leaks I'm having with my little rough vacuum system @ around 2 or 3 mTorr, how do you guys maintain, I'm assuming an ultrahigh vacuum, on such a huge system? @mfb and @Vanadium 50.
Edit: Thanks again guys. You've given me information about the beam pipe vacuum I would have never known about.
Lots of pumps, lots of getters, and the fact that it's at cryogenic temperatures helps - residual gas tends to freeze.