What are the challenges faced by the LHC in the initial data-taking of 2017?

  • Thread starter: mfb
  • Tags: 2017, LHC

Summary
The LHC has declared "stable beams," but initial collision rates are low at 0.2% of the design rate, requiring weeks of checks before reaching optimal levels. Experiments are beginning to collect data, with current limitations on collision rates affecting data processing capabilities. Safety is a primary concern, as the stored energy in the beams can cause significant heating, necessitating cautious ramp-up procedures. The beam dump, designed to handle high-energy impacts, operates under strict conditions to prevent activation and ensure safety. As commissioning progresses, the LHC aims to increase bunch numbers and collision rates, with scrubbing runs planned to improve beam quality.
  • #31
How did they determine the specific number of accelerator stages to build (three rings, I think) and the specific diameter of each ring plus the length of the linac? E.g., why not more smaller rings, or fewer big ones? I know it's optimized, but is there a simple way to explain the physics, or did a simulation just come up with this configuration?

Also, at what angle do the counter-rotating beams collide? It doesn't look head-on, judging by the geometry at the beam crossover points.
 
  • #32
Most of the accelerators used as boosters today were front-line research machines in the past, now repurposed. It's not optimal, but it's a lot cheaper than ripping out the old accelerator and putting in a new one that is 10% bigger or smaller.
 
Likes houlahound
  • #33
To put this in perspective, at what time after the big bang would this sort of energy be seen, theoretically?
 
  • #34
houlahound said:
Also, at what angle do the counter-rotating beams collide? It doesn't look head-on, judging by the geometry at the beam crossover points.
It depends on the running conditions; typically 300 µrad, or about 0.017 degrees. The angle is necessary to avoid collisions with the previous/following bunch (relative to the bunch they should collide with) - see the first image here, marked "long range". A crossing angle of 300 µrad over half the bunch spacing leads to a separation of 1.1 mm at 3.75 m from the collision point.
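A quick numerical check of those figures (a sketch; the 25 ns bunch spacing is the standard LHC design value, not stated in the post):

```python
# A minimal sketch checking the separation quoted above.
c = 299792458.0               # speed of light, m/s
bunch_spacing = 25e-9 * c     # ~7.5 m between successive bunches
d = bunch_spacing / 2         # ~3.75 m: first "long range" encounter point
crossing_angle = 300e-6       # rad, full angle between the two beams
separation = crossing_angle * d
print(f"separation at {d:.2f} m: {separation * 1e3:.2f} mm")  # ~1.12 mm
```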
Adrian59 said:
To put this in perspective, at what time after the big bang would these sort of energies be seen, theoretically?
Somewhere in the first pico- to nanoseconds, depending on the process studied.

After some problems with power supplies and other hardware, we had another run with stable beams this morning: 300 bunches, 17% of the design luminosity.
We might get collisions with 600 bunches during the night.
 
Likes mheslep and houlahound
  • #35
The crossing angles are around 300 microradians. One important aspect of designing an accelerator complex is that you don't want huge increases in energy at a single stage; keeping it to a factor of 20 or less is good practice.
 
Likes houlahound
  • #36
Thanks for the explanations and links - most interesting.

If money weren't a factor, what would be the optimal configuration to get the beam up to energy, and how is this determined? I guess I could ask the same about rocket stages - are the same physics principles, based in thermodynamics, at work?

How does the 20% figure come about?

LHC fanboy here.
 
  • #37
20%? Do you mean the factor of 20? The magnets have to adjust their magnetic field according to the beam energy very accurately (10⁻⁵ precision) to keep the particles on track; at very low fields (relative to the maximum) that can be challenging. You also have to take into account whether your particle speed still changes notably during the acceleration process.

If money weren't a factor, you could build a 15 km long linear accelerator leading directly into the LHC. Then you could fill it in two steps (one per ring), in seconds instead of 20 minutes, and with more bunches. Or, if we get rid of any realism, make the linear accelerator ~300 km long and inject the protons directly at their maximal energy. Then you also save the 20 minutes of ramping up and 20 minutes of ramping down.
The beam dump would need some serious upgrades to handle a higher turnaround.
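For scale, a minimal sketch of where a number like ~300 km comes from, assuming an ILC-like average accelerating gradient of roughly 30 MV/m (an assumed value, not from the post):

```python
# Rough length of a single-pass linac reaching LHC beam energy (a sketch;
# the ~30 MV/m average gradient is an assumed ILC-like value).
target_energy_mev = 6500.0 * 1e3   # 6500 GeV in MeV
gradient_mv_per_m = 30.0           # assumed average accelerating gradient
active_length_km = target_energy_mev / gradient_mv_per_m / 1e3
print(f"active accelerating length: ~{active_length_km:.0f} km")  # ~217 km
# Add focusing, diagnostics and overhead between cavities and the tunnel
# grows toward the ~300 km quoted above.
```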
 
  • #38
Construction, design, beam steering, beam intensity, collision geometry, etc. would be optimal with a linac in a world of no constraints?

Rings are the compromise solution to real-world constraints?

Is there any possibility of building a research facility that would then become an alternative structure post-research - e.g., build a big linac straight through the Alps, north to south, which could then become a commercial transport tunnel when the research is completed?
 
  • #39
houlahound said:
Construction, design, beam steering, beam intensity, collision geometry, etc. would be optimal with a linac in a world of no constraints?

Rings are the compromise solution to real-world constraints?

Is there any possibility of building a research facility that would then become an alternative structure post-research - e.g., build a big linac straight through the Alps, north to south, which could then become a commercial transport tunnel when the research is completed?
The rock temperature reaches 50°C or so in the new Gotthard base tunnel. I am just trying to imagine how you would cool an entire tunnel to 0.3 K or so, over 57 km! And that is just one mountain. My guess is it would be easier to construct a linear accelerator in Death Valley than under the Alps.
 
Likes houlahound
  • #40
A circular machine has two advantages over a linac. The first is cost - it lets you use the small part that actually accelerates again and again on the same proton. Superconducting magnets are expensive, but accelerating structures are even more expensive. The second is beam quality - by requiring each proton to return to the same spot (within microns) every orbit, you get a very high quality beam. This is done by setting up a complex negative feedback scheme: if a particle drifts to the left, it feels a force to the right, and vice versa. Linacs don't do this - a beam particle that drifts to the left keeps going to the left, and if your accelerator is long enough to be useful, it's likely that this drifting particle hits a wall.

Proposals for future linacs include something called "damping rings" so that before the final acceleration, you can get the beam going in a very, very straight line.

The factor of ~20 comes about for several reasons. One is, as mfb said, problems with persistent fields. If your magnets are good to 10 ppm at flattop, and the ring has an injection energy 10% of flattop, at injection they are only good to 100 ppm. Make that 5% and now it's 200 ppm. The lower the energy, the harder it is to inject.

Even without this problem, it would still be harder to inject because the beam is physically larger (we say it has more emittance).

Finally, there is some accelerator physics that makes you want to keep this ratio small. There is something called "transition", where you essentially go from pulling on the beam to pushing on it. At the exact moment of transition, you can't steer the beam, so you lose it after a fraction of a second. The bigger the energy range, the more likely you are to have to go through transition. The LHC is above transition, but if you injected at a low enough energy, you'd have to go through transition. That energy is probably of order 50-100 GeV.
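A sketch of the persistent-field arithmetic above - a fixed absolute field error hurts more, relatively, the lower the injection field is:

```python
# Relative field error at injection (a sketch of the arithmetic above).
# Assume the absolute field error is fixed by the magnet, so the relative
# error scales inversely with the injection-to-flattop energy ratio.
flattop_error_ppm = 10.0
for injection_fraction in (0.10, 0.05):
    injection_error_ppm = flattop_error_ppm / injection_fraction
    print(f"inject at {injection_fraction:.0%} of flattop: "
          f"~{injection_error_ppm:.0f} ppm")  # 100 ppm, then 200 ppm
```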
 
Likes houlahound
  • #41
Vanadium 50 said:
This is done by setting up a complex negative feedback scheme: if a particle drifts to the left, it feels a force to the right, and vice versa. Linacs don't do this - a beam particle that drifts to the left keeps going to the left, and if your accelerator is long enough to be useful, it's likely that this drifting particle hits a wall.
I guess you mean quadrupole (and potentially higher-order) magnets? Long linear accelerators do this as well.
They just keep the beam together; they don't reduce the emittance (as damping rings do for electrons), but the LHC doesn't reduce that either.
houlahound said:
Is there any possibility of building a research facility that would then become an alternative structure post-research - e.g., build a big linac straight through the Alps, north to south, which could then become a commercial transport tunnel when the research is completed?
500 km through the Alps to replace about 35 km of LHC plus preaccelerators, built at convenient spots near Lake Geneva? Even if the tunnels were wide enough to be used for transport afterwards (they are not), and even if there were demand for a 500 km tunnel, that project would be way too expensive for particle physics or transportation. And that is just the tunnel - you would also need 500 km of accelerating structures. There is absolutely no way to fund that.
 
Likes houlahound
  • #42
mfb said:
I guess you mean quadrupole (and potentially higher-order) magnets?

That, plus things like stochastic cooling. Yes, you can add correctors to linear accelerators, but the ratio of corrector length to acceleration length is much higher in a circular accelerator. Perhaps the two most directly comparable accelerators are LEP and SLC at the Z pole. Despite the fact that the electrons underwent significant synchrotron radiation, LEP still ended up with a smaller beam energy spread than SLC.

So I think my statement is borne out: the requirement that the beam come back around the ring to the same point it started from gives you better beam quality, and that is an advantage a circular design has over a linear one.
 
  • #43
For electrons, synchrotron radiation is a great cooling method. For protons it is not - protons in the LHC have a cooling time of days, but they don't stay in the machine that long. The FCC would be the first proton machine where synchrotron cooling becomes relevant.
They tried to get collisions with 600 bunches over the night, but didn't achieve it due to powering issues. The plan is to get 600 bunches next night.
 
  • #44
What does a simple conservation-of-energy equation look like at the LHC at the point of collision?

Proton energy = ionisation energy + rest mass + bremsstrahlung losses + relativistic energy + Coulomb energy + nuclear binding energy + ...?
 
  • #45
Apart from the rest mass, none of these apply to protons colliding in a vacuum. The rest mass contributes 0.94 GeV to the total proton energy of 6500 GeV. You can call the remaining 6499.06 GeV "kinetic energy" if you like.

The machine operators are preparing the machine for collisions with 600 bunches now.
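On the energy bookkeeping above, a minimal sketch in relativistic terms (the 0.938 GeV proton rest energy is a standard value, slightly more precise than the 0.94 GeV quoted):

```python
# Energy bookkeeping for one 2017 LHC proton (a sketch of the numbers above).
total_energy = 6500.0      # GeV per proton
rest_energy = 0.938        # GeV, proton rest mass energy m_p * c^2
kinetic_energy = total_energy - rest_energy
gamma = total_energy / rest_energy   # Lorentz factor, E = gamma * m * c^2
print(f"kinetic: {kinetic_energy:.2f} GeV, gamma: {gamma:.0f}")  # ~6499 GeV, ~6930
```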
 
Likes member 563992
  • #46
Huh?

To have the protons smash, surely you need to overcome both the Coulomb and binding energies, at least?
They are not zero.
 
  • #47
Binding energy of what? There is nothing bound.

The Coulomb potential between the protons is of the order of 0.001 GeV, completely negligible. Nuclear binding energies, if anything were bound, would be of the same order of magnitude.
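A sketch of where a figure of order 0.001 GeV comes from, assuming the protons must approach to roughly a proton radius (the ~1 fm separation is my assumption):

```python
# Coulomb potential energy of two protons ~1 fm apart (a sketch; the 1 fm
# separation is an assumed "proton-sized" distance).
coulomb_const_mev_fm = 1.44   # e^2/(4*pi*eps0) = alpha*hbar*c, in MeV*fm
separation_fm = 1.0
potential_gev = coulomb_const_mev_fm / separation_fm / 1e3
print(f"Coulomb barrier ~ {potential_gev:.4f} GeV")  # ~0.0014 GeV
```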
 
Likes nikkkom
  • #48
Binding energy to break the nucleus apart in collision.
 
  • #49
There is just one nucleon in the nucleus; there is nothing to break apart.
The protons are not broken into pieces in any meaningful way. Completely new particles are created in the collision.
Stable beams with 600 bunches, 30% of the design luminosity for ATLAS/CMS, 125% of the (lower) design luminosity for LHCb.
 
Likes vanhees71
  • #50
houlahound said:
Binding energy to break the nucleus apart in collision.

This would apply only to ion beams (Pb).
But anyway, please realize that at energies of 3-7 TeV per nucleon, any binding energy of the nucleus is utterly insignificant. Even the entire rest energy of the nucleus is much lower than a "kinetic" energy of that magnitude (it's about 0.03% of it).
 
  • #51
mfb said:
Stable beams with 600 bunches, 30% of the design luminosity for ATLAS/CMS, 125% of the (lower) design luminosity for LHCb.
Just want to say: I love your running commentary.
 
Likes vanhees71, mfb and fresh_42
  • #52
Thanks.
I add it when something has happened since the last post - or make a new post if it is a major milestone.

The luminosity plots (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) are now online. LHCb data seems to be missing.

ATLAS had some issues with its magnet in the muon system, it was switched off last night. As long as the luminosity is low, that is not a large loss, and analyses that don't need muons can probably use the data. The magnet is running again now.

We'll probably get some more collisions with 600 bunches next night.
Edit: There they are. 30% design luminosity again.

Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.
 
Last edited:
Likes member 563992, fresh_42 and vanhees71
  • #53
mfb said:
Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.

I was just wondering, since I noticed the luminosity went up:

[Plot: integrated pp luminosity vs. date, 7 June 2017]
 
  • #54
That should be the run from the night leading into Monday.
 
Likes member 563992
  • #55
What units is luminosity measured in?

The graph shows fb; a physical explanation, please.

Fill number?
 
  • #56
Inverse femtobarn (fb⁻¹). I wrote an Insights article about it.
1/fb corresponds to roughly 10¹⁴ collisions.
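A sketch of that conversion, assuming an inelastic proton-proton cross section of roughly 80 mb at LHC energies (a commonly quoted ballpark, not stated in the post):

```python
# Collisions per inverse femtobarn (a sketch; the ~80 mb inelastic pp
# cross section is an assumed ballpark value).
integrated_lumi_inv_fb = 1.0
sigma_inelastic_mb = 80.0
sigma_inelastic_fb = sigma_inelastic_mb * 1e12   # 1 mb = 10^12 fb
n_collisions = integrated_lumi_inv_fb * sigma_inelastic_fb
print(f"~{n_collisions:.0e} collisions per 1/fb")  # ~8e13, roughly 10^14
```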

The fill number just counts how often protons have been put into the machine. After the beams are dumped, the number is increased by 1 for the next protons to go in. "Fill 5750" is more convenient than "the protons we had in the machine on June 5 from 8:23 to 13:34".
 
Likes member 563992 and houlahound
  • #57
After a few days of scrubbing, we are back to data-taking.

Scrubbing went well. We had a record number of 3.37×10¹⁴ protons per beam, with 2820 bunches per beam (slightly exceeding the design value of 2808).
The heating of the magnets went down by ~40% in the most problematic region, enough to continue operation with higher beam intensities.
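For scale, the record intensity implies a bunch population right around the nominal LHC design value (the ~1.15×10¹¹ figure is standard LHC documentation, not from the post):

```python
# Protons per bunch implied by the record beam intensity above (a sketch).
protons_per_beam = 3.37e14
bunches_per_beam = 2820
protons_per_bunch = protons_per_beam / bunches_per_beam
print(f"~{protons_per_bunch:.2e} protons per bunch")  # ~1.20e11
# Close to the nominal LHC bunch population of ~1.15e11.
```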

Currently there is a short (1 hour) run with 10 bunches per beam; then they'll complete the 600-bunch step (~3 hours), and then go on with 900 and 1200 bunches. Each step gets 20 hours of stable beams to verify nothing goes wrong. These two steps combined should deliver about 0.5/fb worth of data. Progress in the first weeks is always a bit slow, but it is starting to become an interesting dataset.

Edit: We got 900 bunches. 68% of the design luminosity, with about 50 inelastic ("destructive") proton-proton collisions per bunch crossing (design: ~25). Unfortunately the beam was dumped after just 20 minutes of data-taking for safety reasons. Now they are working on a cooling issue, which will take several hours.

Edit2: More 900 bunches (980 actually), nearly 0.15/fb of data collected on Wednesday. We'll probably get 1200 late Thursday to Friday.
 
Last edited:
Likes Lord Crc and odietrich
  • #58
We had some runs with 900-980 bunches in the last two days, at about 65% of the design luminosity. Each step gets 20 hours before the number of bunches is increased. 900 is done now; the next step is 1200 bunches, probably this evening.
Edit in the evening: Stable beams with 1225 bunches, 75% of the design luminosity. A bit lower than expected.

ATLAS and CMS both reached 0.5/fb of data. Not much compared to last year's 40/fb, but we are still in the very early phase of data-taking.
The machine operators found another way to increase the number of collisions a bit. The bunches have to cross at some minimum angle to avoid additional collisions away from the design collision point. That means the bunches don't overlap completely (see this image). With the HL-LHC in 2025+ it is planned to "rotate" the bunches, but that needs additional hardware not available now.
In long runs (many hours), the number of protons per bunch goes down over time - some are collided, some are lost elsewhere in the machine. That means the long-range interactions become less problematic, and the crossing angle can be reduced. This increases the number of collisions by a few percent. It does not change the maximal luminosity, but it reduces the drop in luminosity over time.
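The geometric cost of the crossing angle is usually written as a reduction factor F = 1/√(1 + φ²), where φ = (θc/2)(σz/σ*) is the Piwinski angle. A sketch with assumed 2017-like beam parameters (ballpark values, not from the post):

```python
import math

# Geometric luminosity reduction from the crossing angle (a sketch; all
# parameter values are assumed ballpark figures, not from the post).
theta_c = 300e-6     # rad, full crossing angle
sigma_z = 0.075      # m, RMS bunch length (assumed)
sigma_star = 12e-6   # m, RMS transverse beam size at the IP (assumed)
phi = (theta_c / 2) * (sigma_z / sigma_star)   # Piwinski angle
F = 1 / math.sqrt(1 + phi ** 2)
print(f"reduction factor F ~ {F:.2f}")  # ~0.73 with these numbers
# Shrinking the crossing angle as the bunches burn off recovers part of this.
```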
The LHC could get a very unusual record this year: The luminosity record for any type of collider.
Electrons and positrons are much lighter than protons. That means they emit more synchrotron radiation when they travel around in a circular collider. Faster electrons radiate more and get slower in the process. That is a very effective "cooling" mechanism; as a result you can put the electrons very close together, increasing the luminosity. KEKB set the world record of 2.11×10³⁴/(cm²·s) in 2009 - with electrons/positrons.
The LHC could reach this value in 2017 - with protons, where it is much harder. As far as I know, it would be the first proton-collider ever to set the absolute luminosity record.
KEKB is currently being upgraded, and the new version (SuperKEKB) is supposed to reach 100×10³⁴/(cm²·s), way above anything the LHC can achieve, but it will probably take until late 2018 to beat its old record, and several more years to reach its design value. There is a small time window where the LHC could hold the record for a while.
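On the synchrotron-radiation point above: at fixed beam energy and bending radius, the radiated power scales as 1/m⁴, which is why electrons cool effectively and protons barely do. A quick sketch:

```python
# Synchrotron radiation: electron vs. proton at the same energy and bending
# radius (a sketch; power scales as gamma^4, i.e. as 1/m^4 at fixed energy).
m_e = 0.000511   # GeV, electron rest energy
m_p = 0.938      # GeV, proton rest energy
ratio = (m_p / m_e) ** 4
print(f"electrons radiate ~{ratio:.1e} times more power")  # ~1.1e13
```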
 
Last edited:
Likes Lord Crc and QuantumQuest
  • #59
The LHC is progressing fast this week. The 20 hours at 1200 bunches were completed today, and the machine switched to 1550 bunches. Collisions in ATLAS and CMS reached ~100% of the design luminosity this evening. If everything goes well, we get 1800 bunches on Monday, clearly exceeding the design luminosity.

The luminosity record last year was 140% of the design value; with naive scaling we need 2150 bunches to reach this, and 2820 bunches would give 180% of the design luminosity. As last year, I expect the luminosity to go up further as they implement more and more improvements. The absolute luminosity record is certainly realistic.
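A sketch of that naive scaling, assuming luminosity grows linearly with the number of bunches while per-bunch parameters stay fixed:

```python
# Naive linear bunch-number scaling (a sketch of the estimate above).
record_fraction = 1.40     # 2016 record: 140% of design luminosity
bunches_for_record = 2150  # bunches needed to match it, per the post
for n_bunches in (2150, 2820):
    fraction = record_fraction * n_bunches / bunches_for_record
    print(f"{n_bunches} bunches -> {fraction:.0%} of design luminosity")
# 2150 -> 140%, 2820 -> ~184% (the "180%" quoted above)
```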

Both experiments have collected 1.25/fb of data now, and the luminosity plot (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) is climbing rapidly.

Edit: They shortened the 1550-bunch step and went directly to 1740 bunches after just ~10 hours. Initial luminosity ~110% of the design value.
 
Last edited:
Likes member 563992, arivero, vanhees71 and 1 other person
  • #60
2029 bunches!
125% of the design luminosity. Approaching the 2016 record.

ATLAS and CMS now have 2.2/fb, twice the data they had three days ago. It is also half the total 2015 dataset.

The machine operators expect that they can go to 2300 bunches without issues. Afterwards the heat load from the electrons in the beam pipe could get too high. Then we need more scrubbing or simply more data-taking at 2300 bunches - that acts as scrubbing as well.
 
Likes vanhees71 and Drakkith
