Data Collection Begins: Monitoring the Diphoton Excess

Thread starter: mfb
  • #1
Data collection can begin! Overnight the luminosity ("collision rate") was negligible (0.05% of the design value), but it should go up quickly as more and more bunches are injected for the coming runs.

By August we might know if the diphoton excess is something real or just an extremely weird statistical fluctuation.

Edit: Another run, now with 0.2% of the design luminosity.
Edit2: Another run, 0.4% of the design luminosity.
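For a rough feeling of why August is realistic: in a counting experiment, the expected significance of a real signal grows roughly with the square root of the integrated luminosity. A minimal sketch (Python; the 2015 significance is a placeholder, not an official number, and the ~3/fb 2015 dataset is an approximation):

```python
from math import sqrt

def projected_significance(z_2015, lumi_2015_fb, extra_lumi_fb):
    """Toy scaling: significance of a real signal grows ~sqrt(integrated luminosity)."""
    return z_2015 * sqrt((lumi_2015_fb + extra_lumi_fb) / lumi_2015_fb)

# Placeholder inputs: ~3/fb collected in 2015 and a hypothetical local
# significance for the 2015 excess (illustrative only, not a measured value).
z_2015 = 3.0        # sigma, made-up value
lumi_2015 = 3.0     # /fb
for extra in (3.0, 6.0, 10.0):
    z = projected_significance(z_2015, lumi_2015, extra)
    print(f"+{extra:.0f}/fb of 13 TeV data -> ~{z:.1f} sigma if the excess is real")
```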
 
  • #2
mfb said:
0.05% of the design value
mfb said:
Edit: Another run, now with 0.2% of the design luminosity.
I daresay that that is a big jump.
 
  • #3
Well, still irrelevant. We had about 3/fb of integrated luminosity last year, collected at about half the design luminosity over several weeks. Adding 0.0006/fb of luminosity from this morning does not really help. It is time for the machine operators to verify that nothing is in the way of higher luminosity, and time for the experiments to check that everything works as expected. Ramping up the luminosity should be much easier this year, thanks to the experience gained last year.

We'll get another run with something like 0.2% of the design luminosity tonight, then the LHC will do "scrubbing". The name is more fitting than it might seem: the ring is filled with a lot of particles that are kept in the machine for as long as possible. Some of them will hit the beam pipe and remove imperfections there. You don't want this happening too much during regular operation at full energy (it would be bad for the magnets), so it is done at a lower energy. Scrubbing will probably take 3-4 days. Afterwards the LHC returns to delivering stable beams, with a quickly increasing luminosity.
 
  • #4
mfb said:
We'll get another run with something like 0.2% of the design luminosity tonight, then the LHC will do "scrubbing". The name is more fitting than it might seem: the ring is filled with a lot of particles that are kept in the machine for as long as possible. Some of them will hit the beam pipe and remove imperfections there. You don't want this happening too much during regular operation at full energy (it would be bad for the magnets), so it is done at a lower energy. Scrubbing will probably take 3-4 days. Afterwards the LHC returns to delivering stable beams, with a quickly increasing luminosity.
Never heard of that before! So, this “scrubbing” is pretty much just a way to “clean” the vacuum?
 
  • #5
Mainly the beam pipe, but yes.

Protons hitting the beam pipe release electrons. Those electrons can have a high energy, impacting the beam pipe again, releasing more electrons... if electrons come close to the proton bunches, they can get accelerated, hit the beam pipe again, release more electrons... this effect is called "electron cloud". It is not a runaway effect, but it leads to significant heat load in the beam pipe and the magnets around it. The magnets are superconducting, if they get too much heat they quench (stop being superconducting). The number of electrons released goes down over time, scrubbing is trying to accelerate this process as much as possible. At the low energy (=low magnetic field), the magnets tolerate more heat than at the full energy.
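A toy picture of the buildup and of what scrubbing changes (this is not an accelerator-physics simulation; the yield values are made up for illustration): each bunch passage multiplies the electron population by an effective secondary-emission yield, and scrubbing slowly pushes that yield towards or below 1, at which point the cloud stops growing.

```python
def electrons_after(yield_per_passage, n_passages, saturation=1e9):
    """Electrons left after n bunch passages, starting from a single seed electron.

    Multiplicative growth, capped at a saturation level (in reality space
    charge limits the cloud; here it is just a hard cap).
    """
    electrons = 1.0
    for _ in range(n_passages):
        electrons = min(electrons * yield_per_passage, saturation)
    return electrons

# Illustrative effective yields: above 1 the cloud grows, below 1 it dies out.
for sey in (1.3, 1.05, 0.9):  # made-up values, e.g. before / during / after scrubbing
    print(f"effective yield {sey}: {electrons_after(sey, 100):.3g} electrons after 100 passages")
```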

Edit (28th): Now 50 bunches, 2% of the design luminosity.
 
  • #6
After the weasel incident (see e.g. BBC), which damaged a transformer and led to a CERN-wide power cut, everything is running again. Collisions resumed, and we are back to about 2% of the design luminosity (with 50 colliding bunches), with plans to increase that quickly. Tomorrow night we should get 300 bunches, for more than 10% of the design luminosity. That will start to matter for physics analyses.

Afterwards we'll see how fast the luminosity and the amount of collected data can go up. One of the preaccelerators has an issue with its vacuum, which limits the number of bunches that can be delivered to the LHC. It is unclear how fast that gets fixed, and how many bunches they can manage to inject while the issue is still there. Certainly more than ~300, but certainly not as many as planned (up to ~2700) until the vacuum issue is fixed.

For reference: last year the LHC reached up to ~50% of the design luminosity. All the values are for ATLAS and CMS; LHCb has a lower luminosity and ALICE a much lower one.

Edit: Sunday afternoon: Stable beams with 300 bunches, ~10% of the design luminosity for ATLAS/CMS (LHCb: 1/4 of their design luminosity).
 
  • #7
Interesting stuff man, keep us posted.
 
  • #8
They just went to 600 bunches, 21% of the design luminosity for ATLAS/CMS (for LHCb: 50% of their lower design value). As with the previous steps, they want 20 hours of stable beams at that intensity before they move on, probably to ~900 bunches (~1/3 of the design luminosity). Each step takes about 2 to 3 days and typically adds ~300 bunches.
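Just as arithmetic on those numbers, a naive projection (the electron-cloud and SPS issues discussed just below will slow the real ramp-up beyond ~1500 bunches):

```python
# Naive projection from the cadence above: ~300 extra bunches every 2-3 days.
current_bunches = 600
target_bunches = 2700        # planned maximum mentioned earlier in the thread
bunches_per_step = 300
days_per_step = 2.5          # midpoint of "2 to 3 days"

steps = (target_bunches - current_bunches) / bunches_per_step
print(f"~{steps:.0f} more steps, roughly {steps * days_per_step:.0f} days to the "
      "planned maximum, if nothing slows it down")
```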

At higher beam intensities, things get a bit more difficult. Electrons in the beam pipe become more of a problem (see post #3, scrubbing), which will probably slow down the intensity increase beyond 1500 bunches and might require additional time for scrubbing. Another potential issue appeared in one of the preaccelerators (SPS): it has a small vacuum leak. Gas in the beam pipe leads to losses of protons, which heats all the elements around it - not good. It currently limits the number of bunches the SPS can hold at the same time, which will then limit the number of bunches it can inject into the LHC. It is unclear when exactly this limit will be hit, and whether the leak can be repaired before that.

For the experiments, it is a race against the clock. The most important conference this summer is ICHEP (3rd to 10th of August). All the experiments want to present new results and improve the precision compared to 2015. Take as much data as possible? Well, you still have to analyze it: the more data you include (= the more relevant your result might become), the less time you have for the analysis (= less time for all the necessary cross-checks, especially for important results).
Last year both ATLAS and CMS presented first results 6 weeks after data-taking ended; that would point to June 22nd. That is soon, if you take into account that there will be machine development and a technical stop in between (~2 weeks), and that the LHC is still running at low collision rates. For possible impact on the diphoton excess, see here.

Collected luminosity so far for ATLAS+CMS, excluding the current run: 84/pb = 0.084/fb. (LHCb: 4.4/pb)
For comparison: last year we had 4/fb.
What is this weird unit?
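In short, an integrated luminosity is a measure of "how many collisions": multiplying it by a process cross section gives the expected number of events, and 1/fb = 1000/pb. A minimal sketch (the ~50 pb cross section is just an illustrative ballpark for 13 TeV Higgs production):

```python
def expected_events(cross_section_pb, integrated_lumi_per_pb):
    """Expected number of events: N = cross section x integrated luminosity."""
    return cross_section_pb * integrated_lumi_per_pb

higgs_xsec_pb = 50.0  # rough ballpark for Higgs production at 13 TeV, illustrative
for label, lumi_per_pb in (("2016 so far", 84.0), ("2015 total", 4000.0)):
    n = expected_events(higgs_xsec_pb, lumi_per_pb)
    print(f"{label} ({lumi_per_pb:.0f}/pb): ~{n:,.0f} Higgs bosons produced")
```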
 
  • #9
How long would it take to repair the leak in the SPS? I'm guessing it needs to be heated up and cooled down in a similar way to the main ring itself, and finding the culprit may take some time?

What I'm trying to get an idea of is how few bunches they can live with before it's better to take the repair downtime.
 
  • #10
Lord Crc said:
How long would it take to repair the leak in the SPS?
That is unclear.
The leak is at the beam dump, and the SPS does not use superconducting magnets, so heating/cooling times are not an issue.

Two good runs with 600 bunches per beam increased the collected integrated luminosity to 195/pb. The step to 900 bunches is planned for the weekend.
 
  • #11
Is the IBL turned on? :biggrin:
 
  • #12
The ATLAS IBL? I don't know; if you work for ATLAS, ask your coworkers. Why would it be off?
 
  • #13
mfb said:
That is unclear.
The leak is at the beam dump, and the SPS does not use superconducting magnets, so heating/cooling times are not an issue.
Thanks. For some reason I thought the "Super" bit had something to do with superconducting... No idea where that came from; I'm blaming a lack of caffeine.

IIRC they upgraded the SPS beam dump last year, maybe just coincidence?
 
  • #14
There is a smaller and older Proton Synchrotron; they just named the next bigger machine the Super Proton Synchrotron.
Lord Crc said:
IIRC they upgraded the SPS beam dump last year, maybe just coincidence?
Most things get upgraded frequently. Found this meeting from last year about upgrading the SPS beam dump.
 
  • #15
mfb said:
There is a smaller and older Proton Synchrotron; they just named the next bigger machine the Super Proton Synchrotron.
Yeah, I know about the PS; I just really need to stop posting so early in the morning, I think :)

In any case, I find it impressive that they don't have more issues, given the complexity of the whole thing. That said, it must be really frustrating for everyone involved to have this string of issues, given the earlier hints of something new.

Anyway, thanks again.
 
  • #16
900 bunches in now, with an initial luminosity for ATLAS/CMS of 30% of the design value. Total integrated luminosity as of now: 290/pb.
LHCb values are about 5% of the ATLAS/CMS values.

For ATLAS and CMS, the two experiments with the highest luminosity, the bunches are made to collide head-on. As the bunches lose protons (from collisions in the experiments but also from losses in the machine) and their focusing degrades, the luminosity starts high and then goes down over the lifetime of a fill (typically a few hours).
LHCb cannot handle the high collision rate the LHC could deliver. There, the beams are shifted a bit so they don't collide head-on. As the intensity and focus quality go down, the shift is reduced, so the collision rate stays constant and LHCb always operates at its optimal collision rate.
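A minimal sketch of the difference between the two modes (toy decay law, illustrative numbers only): head-on luminosity decays over the fill, while a levelled experiment caps its luminosity, and the cap holds for as long as the raw luminosity stays above it.

```python
from math import exp

def head_on_luminosity(t_hours, peak, lifetime_hours=30.0):
    """Toy model: head-on luminosity decaying over a fill (illustrative decay constant)."""
    return peak * exp(-t_hours / lifetime_hours)

def levelled_luminosity(t_hours, peak, target):
    """LHCb-style levelling: the beam offset is reduced over the fill so the
    delivered luminosity stays at the target as long as the beams allow it."""
    return min(head_on_luminosity(t_hours, peak), target)

# Numbers in percent of the ATLAS/CMS design luminosity, purely illustrative.
for t in range(0, 21, 4):
    print(f"t={t:2d} h   head-on: {head_on_luminosity(t, 35.0):5.1f}%"
          f"   levelled: {levelled_luminosity(t, 35.0, 5.0):5.1f}%")
```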
Lord Crc said:
That said it must be really frustrating for everyone involved to have this string of issues given the earlier hints of something new.
In 2008, we hoped to have 14 TeV collisions in 2009 or even 2008. One or two weeks of delay don't really matter in the long run.
 
  • #17
As a layman I'm trying really hard to understand what you folks are talking about. But don't dumb it down too much, because then I won't learn much. However, I have a question. It may sound dumb, but here goes. What happens if a magnet fails while the protons are circulating? Will the protons "hit the wall" or another magnet and do a lot of damage? Or is there some kind of back-up in place to keep them on track? Or does it only seem like a lot of energy because of the density, but is actually not a big deal? I was under the impression that a freight train was flying through that thing.
Those things must be synchronized pretty tightly in order for that energetic mass to stay 'on track'.
 
  • #18
If a magnet begins to fail, the beam is steered to the dump. This takes about 3 microseconds.
 
  • #19
  • #20
The beam has a lot of energy, and could burn a hole through the machine if it were not contained within the beam pipe. The bending magnets store even more energy, however - and that energy does not disappear at once. The magnets are superconducting coils in a closed circuit; during operation they do not need additional power - as long as they stay cold, they work. If they get too warm, the coils develop a resistance and the current starts to drop - but slowly enough that the beam can be dumped before the magnetic field gets too far away from its design value.
 
  • #21
Vanadium 50 said:
If a magnet begins to fail, the beam is steered to the dump. This takes about 3 microseconds.

The kicker rise times are 3 microseconds. Machine protection dumps happen within 3 orbits (~300 microseconds) of something going wrong.
 
  • #22
That's right - the steering takes 3 microseconds, and then the travel time to the dump is whatever it is.
 
  • #23
Well, the travel time is up to one orbit. The kicker magnet can only ramp up in the abort gap (a region without bunches), so in the worst case we have to wait nearly one orbit until the kicker magnet can ramp up, and then another orbit until all bunches are out. That leaves about one orbit time (~90 µs) for the accelerator systems to decide that the beams have to be dumped. Not much time, given that the signals do not travel faster than the particles (but have a shorter path).

Edit: 470/pb, the 900-bunch step is done; next will be 1177 bunches tonight. Some issues in the preaccelerators degraded the beam quality in the last runs, so the luminosity might be a bit lower than the 40% you could expect from a linear extrapolation - maybe 30% to 35%.
Edit2: Stable beams with 1177 bunches in the night to Wednesday. Initial luminosity was somewhere between 30% and 35%. Just 4 hours of stable beams, unfortunately.
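A quick back-of-the-envelope check of those timescales, just from the ring circumference (~26.7 km) and protons moving essentially at the speed of light:

```python
# Rough check of the dump timescales quoted above.
circumference_m = 26_659.0       # LHC ring circumference, ~26.7 km
c_m_per_s = 299_792_458.0        # protons at 6.5 TeV are ultrarelativistic

orbit_us = circumference_m / c_m_per_s * 1e6
print(f"one orbit:           ~{orbit_us:.0f} microseconds")
print(f"three orbits (dump): ~{3 * orbit_us:.0f} microseconds")
print("kicker rise time:     ~3 microseconds (fits in the abort gap)")
```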
 
  • #24
One of the preaccelerators (the PS) had a fire or something similar on Friday morning, and won't work again before Wednesday. It was decided to keep the current fill in as long as possible. Data-taking started at around 5:00; we are now at 21 hours. Not the longest fill ever (yet? I think the record is a bit below 30 hours), but still a lot of data: 190/pb for ATLAS and CMS so far, 14/pb for LHCb. For ATLAS and CMS, the luminosity dropped from 35% of the design value to 19%, while LHCb is running constantly at its chosen luminosity (~90% of its design value).

Collected integrated luminosities as of now:
ATLAS/CMS: 690/pb
LHCb: 46/pb
~30% of that from the last 24 hours!
 
  • #25
mfb said:
Not the longest fill ever (yet? I think the record is a bit below 30 hours)!

The longest fill was #2006 at 25:59:08
 
  • #26
This fill, #4947, is now the longest ever! Right now it's 33:22:05 of stable beam.
 
  • #27
dukwon said:
The longest fill was #2006 at 25:59:08
According to today's morning meeting, the longest in stable beams was #1058 in April 2010, with a duration of 30:17. Well, we broke that record: 4:35 to 14:55 the next day makes 34:20 so far.

The integrated luminosity record for a single fill is 290/pb, set in a ~24-hour fill last year (also with a preaccelerator issue). We are at 265/pb so far; a few more hours and we might break it.

Edit: Trip in sector 78 (power supply issue) at 15:59, final luminosity values 272/pb for ATLAS, ~265/pb for CMS, 22.2/pb for LHCb.
35 hours 24 minutes in stable beams.
 
  • #28
The PS works again, and there is beam in the LHC. They'll check that everything is working with 3 bunches this evening, followed by a quick run with 600 bunches, and then it's back to 1200 bunches early Friday morning.
 
  • #29
After a very short run with 1200 bunches, the decision was made to go directly to 1465 bunches. Collisions just started, with an initial luminosity of 45% of the design value for ATLAS/CMS, and 110% of the lower design value for LHCb.

The next step afterwards will be 1752 bunches, planned for today or tomorrow if nothing goes wrong. The SPS vacuum issue is still there, but the machine operators found ways to get many bunches in even with that limitation.
 
  • #30
Thanks for the updates.

Ignoring the current SPS issues, why the bunch ramp-up "profile"? That is, why ramp up with more and more bunches like that, rather than doing a couple of runs with a few bunches and then going to full beans?
 
  • #31
See posts #8, #3 and #5. Machine safety is one thing; the other issue (now dominant) is the heat load on the magnets. Heat load limits the number of bunches - the heat load per bunch goes down over time, but that is not a very fast process.

Over 100/pb collected so far today.
Edit: Beam got dumped due to a network failure. 105/pb for ATLAS/CMS, 6/pb for LHCb. They'll go to 1750 bunches now, probably reaching more than 50% of the design luminosity. The record last year was 51%, so we are heading towards a luminosity record at 13 TeV.
 
  • #32
We have a luminosity record!
Probably. The luminosity measurements are not that accurate and the values are close: ATLAS shows 53% of the design luminosity, CMS 51%; the difference is mainly a different calibration.

Stable beams with 1752 bunches.

Edit: And gone after 15 minutes :(. Some problem with the electricity.

The LHC will continue to take data until Tuesday, then take a two-week break from data collection: one week for machine development (to improve the luminosity later on) and one week for work in the tunnel. More collisions are planned for June 13th.

Edit 2: After 1752 bunches (which needs at least one more run of a few hours), the next step is 1824, then 2040. Both still work despite the SPS issue. Injection might take longer, but the LHC usually gets priority over other uses of the SPS preaccelerator.

Edit 3: 9 hours of stable beams with 1752 bunches overnight (Sunday to Monday), which probably means we go to 1824 later today. ATLAS and CMS have collected a bit more than 1/fb in total now, ~15% of that in the last night. LHCb is at 70/pb.
 
  • #33
mfb said:
We have a luminosity record!

Although I'm still working on a supporting framework for studying physics later this year (or in other words I just don't comprehend the above very well right now), I still find your enthusiasm just so very adorable and it makes me want to join the celebration too! :smile:

Off to googling some more terms...
:partytime:
 
  • #34
:partytime:

Even more data, and new records.
1752 bunches, but with more protons per bunch, yesterday afternoon -> 60% of the design luminosity, and 200/pb = 0.2/fb more integrated luminosity.
1824 bunches now; the initial luminosity was 66% of the design value.
The heat load on the magnets from the high-intensity beam is significant now. The next steps after 2000 bunches will probably take much longer. The heat load goes down over time, slowly allowing more bunches to be filled in.
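For reference, the rough scaling behind those numbers: peak luminosity is linear in the number of bunches and quadratic in the protons per bunch. A minimal sketch with the standard formula and nominal-ish parameters (all values here are illustrative, not the actual machine settings):

```python
from math import pi

def peak_luminosity(n_bunches, protons_per_bunch, sigma_x_m, sigma_y_m,
                    f_rev_hz=11_245.0, geometric_factor=0.85):
    """Peak luminosity estimate in cm^-2 s^-1.

    L = f_rev * n_b * N^2 / (4 pi sigma_x sigma_y), times a reduction factor
    for the crossing angle: linear in the number of bunches, quadratic in the
    bunch population.
    """
    sigma_x_cm, sigma_y_cm = sigma_x_m * 100, sigma_y_m * 100
    return (f_rev_hz * n_bunches * protons_per_bunch**2 * geometric_factor
            / (4 * pi * sigma_x_cm * sigma_y_cm))

# Illustrative parameters close to nominal (beam size ~17 micrometres at the collision point).
design = peak_luminosity(2808, 1.15e11, 16.6e-6, 16.6e-6)
now = peak_luminosity(1824, 1.15e11, 16.6e-6, 16.6e-6)
print(f"nominal-like:  {design:.2e} cm^-2 s^-1")
print(f"1824 bunches:  {now:.2e} cm^-2 s^-1  (~{100 * now / design:.0f}% of it)")
```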

ATLAS and CMS have collected 1.4/fb so far, compared to ~4/fb in all of last year.
LHCb doesn't benefit as much from the better running conditions this year; most of their analyses will probably wait for the full 2016 dataset, so for them a quick ramp-up of the collected data rate is not as critical.

The machine development break got shifted to collect more data. The new plan is not fixed yet, but this week will certainly be available for data-taking.
 
  • #35
Interesting, thanks for updating us!
What integrated luminosity are people aiming for, or realistically hoping for, in the analyses presented this summer?
 

1. What is the diphoton excess and why is it significant?

The diphoton excess is a statistical anomaly observed in Large Hadron Collider (LHC) data: an unexpectedly high number of events in which two photons are produced. This could potentially be a sign of new physics beyond the Standard Model. It is significant because it could lead to a better understanding of the fundamental building blocks of our universe.

2. How is data collected in the LHC?

Data is collected in the LHC through the use of detectors, which are specialized instruments that measure the properties of particles produced in collisions. These detectors are made up of different layers that each perform a specific function, such as tracking the paths of particles or measuring their energy and momentum.

3. What is the role of monitoring in data collection?

Monitoring is crucial in data collection as it allows scientists to continuously check the performance of the detectors and ensure that the data being collected is of high quality. This helps to identify any issues or anomalies that may arise, such as background noise or malfunctioning equipment, and allows for adjustments to be made in real-time.

4. How is the diphoton excess being studied?

The diphoton excess is being studied through the analysis of the data collected by the LHC detectors. Scientists are looking for patterns and trends in the data that could indicate the presence of new particles or interactions. This involves using statistical methods and comparing the data to theoretical predictions.

5. What are the potential implications of the diphoton excess?

If the diphoton excess is confirmed to be a real signal and not just a statistical fluctuation, it could lead to the discovery of new particles or interactions that are not predicted by the Standard Model. This could greatly advance our understanding of the fundamental laws of nature and potentially open up new avenues for research and technology.
