What do you mean? What does data-taking have to do with scrubbing?
Scrubbing = have as many protons as possible circulating in the machine.
Data-taking = have as many protons as possible circulating in the machine at high energy.
The second approach has fewer protons, as higher energies mean the magnets get more heat (that is the problem scrubbing reduces).
Scrubbing runs have 2820 bunches, data-taking might be limited to 2300. The latter is not so bad - especially as it means more data keeps coming in. And that is what counts.
2.3/fb, 10 hours in stable beams already. We might get 2300 bunches as early as Wednesday evening.
Edit 13:00 CERN time: 21 hours in stable beams, 0.5/fb in less than a day, the beam will get dumped soon. New records for this year. And enough time to go up another step, to 2173 bunches.
Edit on Thursday 12:00: 2173 bunches, initial luminosity was about 140% of the design value. At the level of the 2016 record. 2.8/fb in total. We'll get 2317 bunches later today, probably with a new luminosity record.
We have a new all-time luminosity record! Over the night, a fill with 2317 bunches reached more than 140% of the design luminosity - close to 150%.
Unfortunately, the LHC encountered multiple issues in the last day, so the overall number of collisions collected was very low (just 1 hour of stable beams since yesterday afternoon). One of these issues led to a lower number of protons in the ring than usual - we can get new luminosity records once that is fixed.
The heat load in the magnets is now close to its limit; I expect we'll get data-taking at 2300 bunches for a while before the beam pipe is "clean" enough to put even more protons in.
Edit: They decided that the margin is large enough. 2460 bunches! And a bit more than 150% of the design luminosity.
The current fill has 2556 bunches. Is this a record? I looked but didn't find the max from 2016.
Also, earlier in the fill there was a notice about a failed 'Hobbit scan'. What's a Hobbit scan?
It is a record at 13 TeV. I don't know if we had more at the end of 2012, when they did some first tests with 25 ns bunch spacing (most of 2012 had 50 ns, which limits you to ~1400 bunches).
No idea about the Hobbit scan. They are not new, but apart from the LHC status page I only find amused Twitter users.
The LHC started its first phase of machine development, followed by a week of technical stop. Data-taking will probably resume July 10th. Here is the schedule.
In the last few days we had a couple of runs starting at ~150% of the design luminosity, with a record of 158%. The initial phase of rapid luminosity increase is over. While the machine operators will try to increase the luminosity a bit more, this is basically how most of the year will look now.
ATLAS and CMS got 6.3/fb so far. For comparison: last year they collected about 40/fb.
LHCb collected 0.24/fb. Last year it was 1.9/fb.
In both cases, the final 2017 dataset will probably be similar to the 2016 dataset. In 2018 we will get a bit more than that, 2019 and 2020 are reserved for machine and detector upgrades. Long-term schedule. With the 2017 dataset you can improve some limits a bit, you can improve the precision of some measurements a bit, but many studies will aim for an update after 2018.
LHC report: full house for the LHC
So mfb, what can you tell us about the recent discovery? https://press.cern/press-releases/2...nce-observation-new-particle-two-heavy-quarks
A nice start for EPS.
The first baryon with two heavy quarks. It needs the production of two charm/anticharm pairs in the collision, with the two charm quarks at similar momenta - that makes the production very rare.
Now that ##ccu## (=quark content) is found (with a very clear signal peak), ##ccd## should be possible to find as well - the mass should be extremely similar, but it has a shorter lifetime and a larger background. ##ccs## will be much more challenging - it needs an additional strange quark, which makes it even rarer. In addition, its lifetime should be even shorter.
Baryons with bottom and charm together: A naive estimate would suggest one such baryon per 20 double-charm baryons. That is probably too optimistic. The lifetime could be a bit longer. Maybe with data from 2016-2018?
Two bottom: Another factor 20, maybe more. Good luck.
ATLAS showed updated results for Higgs decays to bottom/antibottom. Consistent with the Standard Model expectation, the significance went up a bit, now at 3.6 standard deviations. If CMS also shows new results, we can probably get 5 standard deviations in a combination. It is not surprising, but it would still be nice to have a good measurement of how often this decay happens.
I'll have to look through the EPS talks for more, but I didn't find the time for it today.
The technical stop is nearly done, the machine will continue operation tomorrow.
Why? The strange lifetime is much, much longer than the charm. I'd expect that, up to final state effects, the lifetimes would be about the same.
When they find those mesons or baryons and claim a discovery, I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...
I don't know how large the effect would be, but it should have a larger overall mass, although its decay products could have a higher mass as well. I didn't draw Feynman diagrams and I certainly didn't calculate it.
They found about 300 in the whole Run 2 dataset. Double-charm production is rare, and getting both charm quarks into the same hadron is rare even for this rare process.
Do you always denigrate the accomplishments of others?
That'd be an offensive behavior from my side. Nope, I don't denigrate their or any discovery...
I am just wondering what factors made it take so long. We have undeniably found heavier particles, so the machines that produced those heavy particles could also produce the ccu ...
The mass is not everything that matters. See the top discovery long before the Higgs discovery.
The cross section, the decays, the backgrounds - all these things matter.
Could LHCb have seen a hint of this particle in Run 1? Probably. But manpower is limited; they probably didn't look into this particular channel at that time.
Could other experiments have seen it before? Every other experiment has a much smaller dataset for heavy baryons. Probably not, at least not with a high significance.
Edit: Beam is back in the machine. Some issues with the accelerating cavities delay the operation. We'll probably get collisions on Sunday, with rapidly increasing number of bunches, and get back to the full intensity on Monday.
I don't think I've ever seen anything here on PF about the LHC's beam tube vacuum. Seeing the problems/leaks I'm having with my little rough vacuum system @ around 2 or 3 mTorr, how do you guys maintain, I'm assuming an ultrahigh vacuum, on such a huge system? @mfb and @Vanadium 50.
Edit: Thanks again guys. You've given me information about the beam pipe vacuum I would have never known about.
Lots of pumps, lots of getters, and the fact that it's at cryogenic temperatures helps - residual gas tends to freeze.
~800 getter pumps, plus various others.
The pressure in the beam pipe is 1 to 10 nPa. A weaker vacuum would mean too many protons get lost. That would be bad for the magnets (heat load) and for the luminosity (the runs are several hours long, most protons should survive that long).
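As a rough sanity check (my own back-of-the-envelope, not an official LHC number), the ideal gas law ##n = P/(k_B T)## translates those pressures into residual gas densities. The temperatures below are my assumptions: a cryogenic beam screen at roughly 5 K in the cold arcs, and room temperature for the warm sections.

```python
# Residual gas density from the ideal gas law, n = P / (k_B * T).
# Pressures are from the post above; the temperatures are assumptions.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def number_density(pressure_pa, temperature_k):
    """Gas molecules per cubic metre for an ideal gas."""
    return pressure_pa / (k_B * temperature_k)

for p, t, label in [(1e-9, 5.0, "1 nPa at 5 K (assumed cold arc)"),
                    (1e-8, 300.0, "10 nPa at 300 K (assumed warm section)")]:
    n = number_density(p, t)
    print(f"{label}: {n:.1e} molecules/m^3 = {n * 1e-6:.1e} per cm^3")
```

Either way the result is millions of molecules per cubic centimeter - still a vastly emptier space than any vacuum chamber you'd run at mTorr.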
Some more issues with the machine delayed the recovery from the technical stop. We had 600 bunches overnight, and now the cryogenics system has another issue. Once that is fixed, a few hours with 1300 bunches will be needed, plus a few more hours of tests; then the machine goes back to the previous record of 2556 bunches and the experiments can resume regular data-taking at the full luminosity.
I'm blown away by this. Numbers do tell.
To put that in perspective, this is comparable to the lunar atmosphere. (It's better than 1 nPa at the interaction points and about 10 nPa in the arcs; the Moon's surface pressure is about 0.3 nPa if I remember right. The Moon's atmosphere is mostly argon; the LHC residual gas is atomic hydrogen, molecular hydrogen, helium and possibly CO.)
The LHC beam pipe should be the largest vacuum of that quality (150 m³).
The LIGO vacuum is much larger (10,000 m³), but it is at ~100 nPa.
The LHC magnets are in an insulation vacuum (to limit heat transfer) - 9000 m³, but at a "high" pressure of about 100 µPa (1 µTorr).
For smaller volumes, it is possible to make the vacuum orders of magnitude better. Pump out the system, close all exits, and then cool everything down until all remaining atoms freeze out at the walls. BASE (also at CERN) has a vacuum so good that they can store antiprotons for more than a year without annihilations. The expected number of remaining gas atoms is zero in their 1.2-liter vacuum chamber, and there is no tool that can detect any remaining gas. They didn't observe annihilations; based on that, they set an upper limit of ~1 fPa on the remaining pressure, or 3 atoms per cubic centimeter. Here is an article about it.
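Out of curiosity, we can invert the same ideal gas law to see how the two quoted BASE numbers (~1 fPa and 3 atoms per cm³) relate. The trap temperature of a few kelvin is my assumption, not a number from the article.

```python
# Inverting the ideal gas law, P = n * k_B * T, to cross-check the quoted
# BASE limits: ~1 fPa and ~3 atoms per cm^3. The trap temperatures are
# assumed cryogenic values, not numbers from the post.
k_B = 1.380649e-23  # Boltzmann constant, J/K

n = 3e6  # 3 atoms per cm^3, expressed per m^3
for T in (1.5, 5.0, 10.0):  # assumed trap temperatures, K
    p = n * k_B * T
    print(f"T = {T:4.1f} K -> P = {p:.1e} Pa ({p * 1e15:.2f} fPa)")
```

The two limits agree at the order-of-magnitude level for any plausible cryogenic temperature, which is all such an upper limit claims.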
And not only does the pressure of the residual gas vary around the ring, but so does its composition.