Data Collection Begins: Monitoring the Diphoton Excess
Thread starter: mfb
Data collection can begin! Tonight the luminosity ("collision rate") was negligible (0.05% of the design value), but it should go up quickly as more and more bunches are filled in for later runs.

By August we might know if the diphoton excess is something real or just an extremely weird statistical fluctuation.

Edit: Another run, now with 0.2% the design luminosity.
Edit2: Another run, 0.4% of the design luminosity.
 
mfb said:
0.05% of the design value
mfb said:
Edit: Another run, now with 0.2% the design luminosity.
I daresay that that is a big jump.
 
Well, still irrelevant. We had about 3/fb of integrated luminosity last year, collected at about half the design luminosity over several weeks. Adding 0.0006/fb from this morning does not really help. It is time for the machine operators to verify that nothing stands in the way of higher luminosity, and time for the experiments to check that everything works as expected. Ramping up the luminosity should be much easier this year, thanks to the experience gained last year.

We'll get another run with something like 0.2% of the design luminosity tonight, then the LHC will do "scrubbing". The name is more fitting than it might seem: the ring is filled with a lot of particles that are kept in the machine for as long as possible. Some of them will hit the beam pipe and remove imperfections there. You don't want this happening too much during regular operation at full energy (it would be bad for the magnets), so it is done at a lower energy. Scrubbing will probably take 3-4 days. Afterwards the LHC returns to delivering stable beams, with a quickly increasing luminosity.
 
mfb said:
We'll get another run with something like 0.2% of the design luminosity tonight, then the LHC will do "scrubbing". [...]
Never heard of that before! So, this “scrubbing” is pretty much just a way to “clean” the vacuum?
 
Mainly the beam pipe, but yes.

Protons hitting the beam pipe release electrons. Those electrons can have a high energy, impacting the beam pipe again and releasing more electrons... and if electrons come close to the proton bunches, they can get accelerated, hit the beam pipe again, and release yet more electrons. This effect is called the "electron cloud". It is not a runaway effect, but it leads to a significant heat load in the beam pipe and the magnets around it. The magnets are superconducting; if they absorb too much heat they quench (stop being superconducting). The number of electrons released goes down over time, and scrubbing tries to accelerate this process as much as possible. At the low energy (= low magnetic field), the magnets tolerate more heat than at full energy.

Edit 28th: Now 50 bunches, 2% design luminosity.
 
After the weasel incident (see e.g. BBC), which damaged a transformer and led to a CERN-wide power cut, everything is up and running again. Collisions resumed, and we are back to about 2% design luminosity (with 50 colliding bunches), with plans to increase that quickly. Tomorrow night we will probably get 300 bunches, for more than 10% of the design luminosity. That will start to matter for physics analyses.

Afterwards we'll see how fast the luminosity and the amount of collected data can go up. One of the preaccelerators has an issue with its vacuum, which limits the number of bunches that can be delivered to the LHC. It is unclear how fast that gets fixed, and how many bunches they manage to inject while the issue is still there - certainly more than ~300, but certainly not as many as planned (up to ~2700).

For reference: last year the LHC reached up to ~50% the design luminosity. All the values are for ATLAS and CMS, LHCb has a lower luminosity and ALICE has a much lower luminosity.

Edit: Sunday afternoon: Stable beams with 300 bunches, ~10% design luminosity for ATLAS/CMS (LHCb: 1/4 of their design luminosity).
 
Interesting stuff man, keep us posted.
 
They just went to 600 bunches, 21% design luminosity for ATLAS/CMS (for LHCb: 50% of their lower design value). As with the previous steps, they want 20 hours of stable beams at that intensity before they move on, probably to ~900 bunches (~1/3 design luminosity). Each step takes about 2 to 3 days and typically adds ~300 bunches.

At higher beam intensities, things get a bit more problematic. The electron cloud in the beam pipe builds up more (see post #3, scrubbing), which will probably slow down the intensity increase beyond 1500 bunches and might require additional time for scrubbing. Another potential issue appeared in one of the preaccelerators (the SPS): it has a small vacuum leak. Gas in the beam pipe leads to losses of protons, which heats all the elements around it - not good. It currently limits the number of bunches the SPS can hold at the same time, which will in turn limit the number of bunches it can inject into the LHC. It is unclear when exactly this limit will be hit, and whether the leak can be repaired before that.

For the experiments, it is a race against the clock. The most important conference this summer is ICHEP (3rd to 10th of August). All the experiments want to present new results and improve on the precision of 2015. Take as much data as possible? Well, you still have to analyze it: the more data you include (= the more relevant your result might become), the less time you have for the analysis (= less time for all the necessary cross-checks, especially for important results).
Last year both ATLAS and CMS presented first results 6 weeks after data-taking ended, which would point to June 22nd. That is soon, if you take into account that there will be machine development and a technical stop in between (~2 weeks), and that the LHC is still running at low collision rates. For possible impact on the diphoton excess, see here.

Collected luminosity so far for ATLAS+CMS, excluding the current run: 84/pb = 0.084/fb. (LHCb: 4.4/pb)
As comparison: last year we had 4/fb.
What is this weird unit?
 
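For readers puzzled by the notation: /pb and /fb are inverse picobarns and inverse femtobarns, the usual units of integrated luminosity (1/fb = 1000/pb). A minimal sketch of the bookkeeping, with a made-up cross section just to show how expected event counts follow from integrated luminosity:

```python
# Integrated luminosity bookkeeping - a minimal sketch.
# "/pb" and "/fb" are inverse picobarns and inverse femtobarns; 1/fb = 1000/pb.
# Expected number of events for a process: N = cross_section * integrated_luminosity.
# The cross section below is a made-up placeholder, not a real measurement.

PB_PER_FB = 1000.0  # 1 fb^-1 = 1000 pb^-1

def to_inverse_fb(lumi_inverse_pb: float) -> float:
    """Convert integrated luminosity from pb^-1 to fb^-1."""
    return lumi_inverse_pb / PB_PER_FB

def expected_events(cross_section_pb: float, lumi_inverse_pb: float) -> float:
    """N = sigma * L_int, with sigma in pb and L_int in pb^-1."""
    return cross_section_pb * lumi_inverse_pb

if __name__ == "__main__":
    lumi_pb = 84.0                 # the 84/pb quoted above
    print(to_inverse_fb(lumi_pb))  # -> 0.084 (i.e. 0.084/fb)
    # Hypothetical 1 pb process: ~84 expected events in 84/pb of data.
    print(expected_events(1.0, lumi_pb))
```
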
How long would it take to repair the leak in the SPS? I'm guessing it needs to be heated up and cooled down in a similar way to the main ring itself, and finding the culprit may take some time?

What I'm trying to get an idea of is how few bunches they can live with before it's better to take the repair downtime.
 
  • #10
Lord Crc said:
How long would it take to repair the leak in the SPS?
That is unclear.
The leak is at the beam dump, and the SPS does not use superconducting magnets, so heating/cooling times are not an issue.

Two good runs with 600 bunches per beam increased the collected integrated luminosity to 195/pb. The step to 900 bunches is planned for the weekend.
 
  • #11
is the IBL turned on? :biggrin:
 
  • #12
The ATLAS IBL? I don't know, if you work for ATLAS ask your coworkers. Why should it be off?
 
  • #13
mfb said:
That is unclear.
The leak is at the beam dump, and the SPS does not use superconducting magnets, so heating/cooling times are not an issue.
Thanks, for some reason I thought the Super bit had something to do with superconducting... No idea where that came from, I'm blaming a lack of caffeine.

IIRC they upgraded the SPS beam dump last year, maybe just coincidence?
 
  • #14
There is a smaller and older Proton Synchrotron; they just named the next, bigger machine the Super Proton Synchrotron.
Lord Crc said:
IIRC they upgraded the SPS beam dump last year, maybe just coincidence?
Most things get upgraded frequently. Found this meeting from last year about upgrading the SPS beam dump.
 
  • #15
mfb said:
There is a smaller and older Proton Synchrotron, they just named the next bigger machine Super Proton Synchrotron.
Yea I know about PS, I just really need to stop posting too early in the morning I think :)

In any case I find it impressive that they don't have more issues, given the complexity of the whole thing. That said it must be really frustrating for everyone involved to have this string of issues given the earlier hints of something new.

Anyway, thanks again.
 
  • #16
900 bunches in now, initial luminosity for ATLAS/CMS was 30% the design luminosity. Total integrated luminosity as of now: 290/pb.
LHCb values are about 5% of the ATLAS/CMS values.

For ATLAS and CMS, the two experiments with the highest luminosity, the bunches are made to collide head-on. As the bunches lose protons over time (from collisions in the experiments but also from losses in the machine) and the focussing of the bunches gets worse, they start with a high luminosity which then goes down over the lifetime of a fill (typically a few hours).
LHCb cannot handle the high collision rate the LHC could deliver. There, the beams are shifted a bit so they don't collide head-on. As the intensity and focus quality go down, the shift is reduced, so the collision rate stays constant and LHCb always operates at its optimal collision rate.
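A toy model of the two running modes described above, contrasting a head-on fill whose luminosity decays over time with a levelled fill held at a fixed target; the decay time and target values here are illustrative assumptions, not LHC parameters:

```python
import math

# Toy comparison of head-on running vs luminosity levelling, as described above.
# The decay time and target values are illustrative assumptions, not LHC numbers.

def headon_luminosity(t_hours: float, l0: float, tau_hours: float = 15.0) -> float:
    """Head-on collisions (ATLAS/CMS style): luminosity decays over the fill.
    A simple exponential with an assumed decay time tau is used here."""
    return l0 * math.exp(-t_hours / tau_hours)

def levelled_luminosity(t_hours: float, l0: float, target: float,
                        tau_hours: float = 15.0) -> float:
    """Levelled collisions (LHCb style): the beams are offset so the delivered
    luminosity stays at 'target' as long as the head-on potential exceeds it."""
    potential = headon_luminosity(t_hours, l0, tau_hours)
    return min(potential, target)

if __name__ == "__main__":
    l0 = 1.0       # head-on luminosity at the start of the fill (arbitrary units)
    target = 0.3   # levelling target, well below the head-on value
    for t in range(0, 25, 4):
        print(t, round(headon_luminosity(t, l0), 3),
                 round(levelled_luminosity(t, l0, target), 3))
```
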
Lord Crc said:
That said it must be really frustrating for everyone involved to have this string of issues given the earlier hints of something new.
In 2008, we hoped to have 14 TeV collisions in 2009 or even 2008. One or two weeks of delay don't really matter in the long run.
 
  • #17
As a layman I'm trying really hard to understand what you folks are talking about. But don't dumb it down too much, because then I won't learn much. However, I have a question. It may sound dumb, but here goes: what happens if a magnet fails while the protons are circulating? Will the protons "hit the wall" or another magnet and do a lot of damage? Or is there some kind of back-up in place to keep them on track? Or does it only seem like a lot of energy because of the density, but is actually not a big deal? I was under the impression that a freight train was flying through that thing.
Those things must be synchronized pretty tightly in order for that energetic mass to stay 'on track'.
 
  • #18
If a magnet begins to fail, the beam is steered to the dump. This takes about 3 microseconds.
 
  • #19
  • #20
The beam has a lot of energy and could burn a hole through the machine if it were not contained within the beam pipe. The bending magnets store even more energy, however - and that energy does not disappear at once. The magnets are superconducting coils in a closed circuit; during operation they do not need additional power - as long as they stay cold, they work. If they get too warm, the coils develop a resistance and the current starts to drop - but slowly enough that the beam can be dumped before the magnetic field gets too far away from its design value.
 
  • #21
Vanadium 50 said:
If a magnet begins to fail, the beam is steered to the dump. This takes about 3 microseconds.

The kicker rise times are 3 microseconds. Machine protection dumps happen within 3 orbits (~300 microseconds) of something going wrong.
 
  • #22
That's right - the steering takes 3 microseconds, and then the travel time to the dump is whatever it is.
 
  • #23
Well, the travel time is up to one orbit. The kicker magnet can only ramp up in the abort gap (a region without bunches), so in the worst case we have to wait nearly one orbit until the kicker magnet can ramp up, and then another orbit until all bunches are out. That leaves one orbit time (~90 µs) for the accelerator system to decide that the beams have to be dumped. Not much time, given that the signals do not travel faster than the particles (but have a shorter path).

Edit: 470/pb, the 900-bunch step is done, next will be 1177 bunches tonight. Some issues in the preaccelerators degraded the beam quality in the last runs, so the luminosity might be a bit lower than the 40% you could expect from a linear extrapolation. Maybe 30% to 35%.
Edit2: Stable beams with 1177 bunches in the night to Wednesday. Initial luminosity was somewhere between 30% and 35%. Just 4 hours of stable beams, unfortunately.
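A quick back-of-envelope check of these timing numbers, using the ~26.7 km ring circumference and assuming the protons travel essentially at the speed of light:

```python
# Back-of-envelope check of the dump timing discussed above.
# The LHC ring circumference is ~26,659 m; the protons travel essentially at c.

C_LIGHT = 299_792_458.0        # m/s
RING_CIRCUMFERENCE = 26_659.0  # m

orbit_time_us = RING_CIRCUMFERENCE / C_LIGHT * 1e6
print(f"one orbit ~ {orbit_time_us:.0f} microseconds")  # ~89 us

# Worst case sketched above: ~1 orbit to decide, up to ~1 orbit waiting for the
# abort gap, and ~1 orbit until all bunches are extracted -> ~3 orbits.
worst_case_us = 3 * orbit_time_us
print(f"worst-case dump ~ {worst_case_us:.0f} microseconds")  # ~270 us, i.e. "within 3 orbits"
```
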
 
  • #24
One of the preaccelerators (the PS) had a fire or something similar Friday morning, and won't work before Wednesday. It was decided to keep the current fill in as long as possible. Data-taking started at around 5:00, now we are at 21 hours. Not the longest fill ever (yet? I think the record is a bit below 30 hours), but still a lot of data: 190/pb for ATLAS and CMS so far, 14/pb for LHCb. For ATLAS and CMS, luminosity dropped from 35% design value to 19%, while LHCb is running constantly at its chosen luminosity (~90% of its design value).

Collected integrated luminosities as of now:
ATLAS/CMS: 690/pb
LHCb: 46/pb
~30% of that from the last 24 hours!
 
  • #25
mfb said:
Not the longest fill ever (yet? I think the record is a bit below 30 hours)!

The longest fill was #2006 at 25:59:08
 
  • #26
This fill, #4947, is now the longest ever! Right now it's 33:22:05 of stable beam.
 
  • #27
dukwon said:
The longest fill was #2006 at 25:59:08
According to today's morning meeting, the longest fill in stable beams was #1058 in April 2010, with a duration of 30:17. Well, we broke that record: 4:35 to 14:55 the next day is 34:20 so far.

The integrated luminosity record for a single fill is 290/pb, set in a ~24-hour fill last year (also with a preaccelerator issue). We are at 265/pb so far; a few hours more and we might break it.

Edit: Trip in sector 78 (power supply issue) at 15:59, final luminosity values 272/pb for ATLAS, ~265/pb for CMS, 22.2/pb for LHCb.
35 hours 24 minutes in stable beams.
 
  • #28
The PS works again, and there is beam in the LHC. They'll check that everything is working with 3 bunches this evening, followed by a quick run with 600 bunches, and then back to 1200 bunches early Friday morning.
 
  • #29
After a very short run with 1200 bunches, the decision was made to go directly to 1465 bunches. Collisions just started, with an initial luminosity of 45% the design value for ATLAS/CMS, and 110% of the lower design value for LHCb.

The next step afterwards will be 1752 bunches, planned for today or tomorrow if nothing goes wrong. The SPS vacuum issue is still there, but the machine operators found ways to get many bunches in even with that limitation.
 
  • #30
Thanks for the updates.

Ignoring the current SPS issues, why the bunch ramp-up "profile"? That is, why ramp up with more and more bunches like that, rather than doing a couple of runs with a few bunches and then going full beans?
 
  • #31
See posts #8, #3 and #5. Machine safety is one thing; the other issue (now dominant) is the heat load on the magnets. Heat load limits the number of bunches - the heat load per bunch goes down over time, but that is not a very fast process.

Over 100/pb collected so far today.
Edit: Beam got dumped due to a network failure. 105/pb for ATLAS/CMS, 6/pb for LHCb. They'll go to 1750 bunches now, probably reaching more than 50% design luminosity. The record last year was 51%, so we are heading towards a luminosity record at 13 TeV.
 
Last edited:
  • #32
We have a luminosity record!
Probably. The luminosity measurements are not that accurate and the values are close: ATLAS shows 53% design luminosity, CMS 51%; the difference is mainly a different calibration.

Stable beams with 1752 bunches.

Edit: And gone after 15 minutes :(. Some problem with the electricity supply.

The LHC will continue to take data until Tuesday, then make a two-week break from data collection: one week for machine development (to improve the luminosity later on) and one week for work in the tunnel. More collisions are planned for June 13th.

Edit 2: After 1752 (needs at least one more run for a few hours), the next step is 1824, then 2040. Both still work with the SPS issue. Injection might take longer but the LHC usually gets priority over other uses of the SPS preaccelerator.

Edit 3: 9 hours of stable beams with 1752 bunches overnight (Su->Mo), which probably means we go to 1824 later today. ATLAS and CMS have collected a bit more than 1/fb in total now, ~15% of that last night. LHCb is at 70/pb.
 
  • #33
mfb said:
We have a luminosity record!

Although I'm still working on a supporting framework for studying physics later this year (or in other words I just don't comprehend the above very well right now), I still find your enthusiasm just so very adorable and it makes me want to join the celebration too! :smile:

Off to googling some more terms...
:partytime:
 
  • #34
:partytime:

Even more data, and new records.
1752 bunches but with more protons per bunch yesterday afternoon -> 60% of the design luminosity, and 200/pb=0.2/fb more integrated luminosity.
1824 bunches now, initial luminosity was 66% the design value.
The heat load on the magnets due to the high-intensity beam is significant now. The next steps after 2000 bunches will probably take much longer; the heat load goes down over time, slowly allowing more bunches to be filled in.

ATLAS and CMS collected 1.4/fb so far, compared to ~4/fb last year.
LHCb doesn't benefit that much from the better running conditions this year; most of their analyses will probably wait for the full 2016 dataset - for them, a quick ramp-up of the collected data rate is not that critical.

The machine development break got shifted to collect more data. The new plan is not fixed yet, but this week will certainly be available for data-taking.
 
  • #35
Interesting, thanks for updating us!
What integrated luminosity do people aim for, or realistically hope for, in the analyses presented this summer?
 
  • #36
Dr.AbeNikIanEdL said:
What integrated luminosity do people aim for, or realistically hope for, in the analyses presented this summer?
I don't think the collaborations made that public, and it can also depend on the individual analyses. Two numbers as comparison:

  • ATLAS and CMS showed the first results on last year's data 6 weeks after the end of (proton collision) data-taking. The date was set in advance, so the experiments were confident they could get their fast analyses done within 6 weeks.*
  • The Higgs boson discovery in July 2012: The high-priority analysis of 2012. At the time the discovery was announced, they had data up to 2-3 weeks before the presentations. Both collaborations were really pushing to include as much data as possible, so that is probably a lower limit.

6 weeks before ICHEP would be the 21st of June, or three weeks from now. 2-3 weeks before ICHEP would give 6-7 weeks from now. A good week of data-taking now is probably 1/fb. So if we have the technical stop but not the machine development block, the lower estimate would be 2 weeks of data-taking, for 3.5/fb in total; if the technical stop is shortened and the machine development gets moved to the end of July (and merged with the next block), we might get 6 weeks of data-taking and end up with 7.5/fb. Maybe even a bit more if everything runs very smoothly. Probably something between those values, unless some problem comes up.

*The analyses start earlier, usually even before data-taking, with simulated events; it's not as if the whole analysis could be done within a few weeks. But some parts of the analysis (in particular, all final results...) need the full dataset, and that determines the timescale.
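A minimal sketch of the estimate above: the 1/fb per good week is the rate quoted in the post, and the ~1.5/fb starting point is inferred from the totals quoted earlier in the thread.

```python
# Reproducing the rough estimate above: total dataset for the summer analyses is
# (what is already on tape) + (weeks of data-taking before the cut-off) * (fb^-1 per good week).
# The 1/fb-per-week rate is from the post; the 1.5/fb starting point is inferred from its totals.

def projected_dataset(current_fb: float, weeks_of_datataking: float,
                      fb_per_week: float = 1.0) -> float:
    return current_fb + weeks_of_datataking * fb_per_week

current = 1.5  # roughly what ATLAS/CMS have on tape at the time of the post, in fb^-1

# Scenario 1: cut-off 2-3 weeks before ICHEP, with the technical stop and
#             machine development eating into the time -> ~2 weeks of running.
print(projected_dataset(current, 2))   # ~3.5/fb

# Scenario 2: shortened technical stop, machine development moved to late July
#             -> ~6 weeks of running.
print(projected_dataset(current, 6))   # ~7.5/fb
```
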

A decision about the technical stop will probably be made later today.
 
  • #37
News:
  • The technical stop will be shortened as much as possible, from 5-7 days to something like 2-2.5, starting Tuesday.
  • The machine development block originally scheduled for this week gets shifted significantly. A second block is scheduled for July 25 - July 29, those two might merge - data collected that late won't be included in results shown at ICHEP in August anyway.
  • We are now at 2040 bunches. Further steps will probably take much more time.
  • Initial luminosity this afternoon was shown as ~80% of the design value for ATLAS and ~73% of it for CMS. The truth is probably somewhere in between. This is about at the record set in 2012, when we had 77% of the design luminosity, but at a lower energy back then. The collision rate per unit luminosity rises with energy, so we certainly have a new record in terms of collision rate.
Integrated luminosity for ATLAS/CMS: 1.7/fb, 0.23/fb of that from Wednesday.
 
  • #38
The current record for stable-beams duration (and the previous record) both occurred when there were upstream problems that prevented a refill. Assuming there is nothing preventing refilling the LHC, what (if any) are the criteria for doing a dump and refill?
 
  • #39
Most fills end due to machine protection - some heat load is too high, some connection got lost, and so on. Apart from that: the number of protons in the ring goes down over time, and the beam quality gets worse. Both lead to a decreasing luminosity over time for ATLAS and CMS, typically with a half-life of ~20-30 hours this year. After a while it becomes more efficient to dump the beam and refill. Ideally this takes 2-3 hours; sometimes it takes more.
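A small sketch of that trade-off, assuming a simple exponential luminosity decay; the half-life and turnaround values are just the rough figures quoted above:

```python
import math

# Sketch of the dump-and-refill trade-off described above: the instantaneous
# luminosity decays during a fill (half-life ~20-30 h this year), and a refill
# costs a 2-3 h turnaround with no data. Average luminosity per calendar hour
# as a function of the fill length shows where re-filling starts to pay off.
# Exponential decay is an assumption; real fills are messier.

def integrated_per_fill(fill_hours: float, half_life_hours: float) -> float:
    """Integral of L0 * 2^(-t/T_half) from 0 to fill_hours, with L0 = 1."""
    tau = half_life_hours / math.log(2)
    return tau * (1.0 - math.exp(-fill_hours / tau))

def average_luminosity(fill_hours: float, half_life_hours: float = 25.0,
                       turnaround_hours: float = 2.5) -> float:
    """Integrated luminosity per calendar hour, including the refill dead time."""
    return integrated_per_fill(fill_hours, half_life_hours) / (fill_hours + turnaround_hours)

if __name__ == "__main__":
    best = max(range(2, 40), key=average_luminosity)
    print("best fill length (h):", best)
    for h in (5, 10, 15, 20, 30):
        print(h, round(average_luminosity(h), 3))
```
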

2.1/fb in total, 0.35/fb from Thursday. Two fills today got lost quickly, the next attempt is on the way.
 
  • #40
how many protons go in each fill?
 
  • #41
Typically 115 billion protons per bunch, 2040 bunches per beam, and 2 beams => 4.7×10^14 protons, or 0.8 nanograms, about the mass of a white blood cell.
The stored energy in that small amount of matter is 500 MJ, twice the kinetic energy of an 85-ton Boeing 737 at takeoff.
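A quick check of these numbers (the proton mass and the eV-to-joule conversion are standard constants; everything else is taken from the post above):

```python
# Checking the numbers above: protons per fill, their mass, and the stored beam energy.
PROTONS_PER_BUNCH = 115e9
BUNCHES_PER_BEAM = 2040
BEAMS = 2
PROTON_MASS_KG = 1.6726e-27
ENERGY_PER_PROTON_J = 6500e9 * 1.602e-19   # 6.5 TeV per proton, converted to joules

n_protons = PROTONS_PER_BUNCH * BUNCHES_PER_BEAM * BEAMS
mass_ng = n_protons * PROTON_MASS_KG * 1e12            # kg -> nanograms
stored_energy_mj = n_protons * ENERGY_PER_PROTON_J / 1e6

print(f"protons per fill: {n_protons:.2e}")             # ~4.7e14
print(f"total mass: {mass_ng:.2f} ng")                  # ~0.8 ng
print(f"stored beam energy: {stored_energy_mj:.0f} MJ") # ~500 MJ
```
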
 
  • #42
The LHC is back after the technical stop. 2040 bunches as before; we won't get more until the SPS vacuum leak is solved. That still leaves room to improve the beam focus a bit, so the initial luminosity was around 85% of the design luminosity an hour ago.

ATLAS and CMS are at 3.0/fb integrated luminosity now, that is nearly the size of the 2015 dataset, and that should increase fast now.
 
  • #43
Thanks for the updates.

I've been looking at the beam status page (Vistars) every now and then, and it seems to take about 2-3 hours from when the beam is dumped until the next beam gets going again. If that's correct (and not just me misinterpreting something), why the long down-time?
 
  • #44
At least 2-3 hours, sometimes longer.

Here is a description from 2010. The main parts that need time:

- the magnets have to be ramped down to allow injection at 450 GeV (~20 min)
- the magnets have some hysteresis, their current state depends on what happened in the past. The curvature of the proton beam has to be correct to 1 part in a million, so you really want to be sure the magnets have the right magnetic field. If there was an issue with the magnets in the previous run, the magnets have to be brought to a known state again, which means they have to be ramped up and down once (~40 min, if necessary).
- the machine operators have to verify everything is in the expected state - for the machine, for the preaccelerators (same control room) and for the experiments (different control rooms, they have to call the experiments and those have to give permission for injection) - a few minutes.
- a "probe beam" is injected - very few protons, to verify that they cycle as expected and that the beam doesn't get lost - a few minutes.
- the 2040 bunches have to be made and accelerated by the preaccelerators. This happens in steps of 72 bunches now, and every group needs about a minute; if nothing goes wrong this takes ~30 minutes.
- the energy is ramped up from 450 GeV to 6500 GeV. Ramping up the dipole magnets needs about 20 minutes.
- the beams have to get focused, which involves ramping up superconducting quadrupole magnets. About 20 minutes again.
- once the machine operators verify again that everything is as expected, they let the beams collide at the experiments (before that they are kept separated) and find the ideal spot for the highest collision rate for ATLAS and CMS, and a lower rate for LHCb and ALICE. That takes about 10 minutes.

If you add those things, even in the ideal case it needs 2 hours. Usually something needs longer for various reasons.
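Adding up the rough step durations listed above (a sketch; the per-step minutes are the approximate figures from the list, not official values):

```python
# Summing the turnaround steps listed above (ideal case, in minutes).
turnaround_steps_min = {
    "ramp magnets down to injection energy (450 GeV)": 20,
    "pre-cycle magnets if needed (hysteresis)": 40,   # only if the previous run had magnet issues
    "checks and permissions from experiments": 5,
    "probe beam": 5,
    "inject ~2040 bunches in groups of 72": 30,
    "energy ramp 450 GeV -> 6500 GeV": 20,
    "squeeze (focus the beams)": 20,
    "adjust collision points, go to stable beams": 10,
}

ideal_min = sum(t for step, t in turnaround_steps_min.items()
                if "if needed" not in step)  # skip the optional pre-cycle
print(ideal_min / 60, "hours in the ideal case")                           # ~1.8 h
print(sum(turnaround_steps_min.values()) / 60, "hours with a pre-cycle")   # ~2.5 h
```
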
The run that started last night is at 0.32/fb, adding another 10% to the total dataset this year. It is still ongoing, chances are good it will break some record later.
As comparison: The LHC produced more Higgs bosons today (literally: this Sunday) than the Tevatron did in 20 years.
 
  • #45
mfb said:
chances are good it will break some record later.

The plan is to terminate it in about four hours. If it goes that long, it may be the first intentional termination this year.
 
  • #46
They had programmed dumps before this year (in particular, to go to more bunches), but I don't know if that included fills with 2040 bunches.

0.40/fb now, a new record for "per fill", "per 24 hours" and "per day". Will probably rise to ~0.45/fb for ATLAS.
 
  • #47
Thanks a lot mfb for the detailed response, very interesting.
 
  • #48
A CERN article about the recent data collection and records

The last run, from yesterday until this morning, delivered 0.50/fb of data. ATLAS and CMS now have 4/fb in total.

Edit: Next run started ~7 pm, initial luminosity was about 93% design luminosity.

There is just one week of planned interruption until September (https://espace.cern.ch/be-dep/BEDepartmentalDocuments/BE/LHC_Schedule_2016.pdf), so let's extrapolate a bit (optimistic)...

[Attached plot: lumi.png - optimistic extrapolation of the integrated luminosity]
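For the curious, a toy version of such an optimistic extrapolation; the assumed sustained rate per week is a guess, not an official projection:

```python
# A toy version of the optimistic extrapolation plotted above (the attached lumi.png).
# Inputs are assumptions: a sustained collection rate guessed from the recent
# ~0.5/fb days (scaled down for downtime) and the single planned week of interruption.

CURRENT_TOTAL_FB = 4.0      # ATLAS/CMS total quoted above
RATE_FB_PER_WEEK = 2.0      # assumed sustained rate - an optimistic guess
INTERRUPTION_WEEKS = 1.0    # planned interruption until September

def projected_total(weeks_from_now: float) -> float:
    running_weeks = max(weeks_from_now - INTERRUPTION_WEEKS, 0.0)
    return CURRENT_TOTAL_FB + running_weeks * RATE_FB_PER_WEEK

for w in (0, 4, 8, 12):     # roughly now until early September
    print(w, "weeks:", projected_total(w), "fb^-1")
```
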
 
  • #49
~6.3/fb integrated luminosity (average over ATLAS and CMS), more than 1.5 times the size of the 2015 dataset. Combining both years, both experiments now have more than 10/fb. For most analyses, this gives better statistics than the 20/fb at the lower energy in 2012, and more collisions are coming in.

The LHC operators made a large table as an overview of the runs of the last month: slides, table alone. The last run, which collected a bit more than 0.5/fb, is not included there yet.
SB = stable beams, needed for data-taking
B1, B2 = number of protons in beam 1 and 2
Unit conversions:
L peak: given in units of 10^33 cm^-2 s^-1, so 10 would be the LHC design luminosity.
1000/pb = 1/fb.
 
  • #50
Thanks for the update.

Roughly how much data would they need to say something definitive regarding the 750 GeV bump? I'm thinking "yeah, something is there" vs "nope, just fluctuations".
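As a rough rule of thumb (not an answer from the thread): for a counting experiment both signal and background grow with the integrated luminosity, so the expected significance of a real signal scales roughly like the square root of the dataset size. A sketch with a placeholder 2015 significance, ignoring systematics and the look-elsewhere effect:

```python
import math

# Rule-of-thumb sketch: if the 2015 excess were a real signal, its expected
# significance would grow roughly like S/sqrt(B) ~ sqrt(L_int).
# The reference significance below is a placeholder, not the experiments' number,
# and systematics and the look-elsewhere effect are ignored entirely.

L_2015_FB = 3.2   # roughly the size of the 2015 13 TeV dataset quoted in the thread
Z_2015 = 3.5      # placeholder local significance seen in 2015

def expected_significance(lumi_fb: float) -> float:
    """Expected significance if the excess were a real signal (sqrt-of-L scaling)."""
    return Z_2015 * math.sqrt(lumi_fb / L_2015_FB)

for lumi in (3.2, 6.5, 10.0, 15.0):
    print(f"{lumi:5.1f}/fb -> ~{expected_significance(lumi):.1f} sigma (if real)")
```
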
 
