How is the Tevatron able to improve their Higgs boson mass exclusion range?

  • Thread starter humanino
  • Tags
    Higgs
  • #1
humanino
After improving their statistics and analysis, the Tevatron has obtained a narrower exclusion range for the Higgs boson mass:
Combined CDF and D0 Upper Limits on Standard Model Higgs-Boson Production with 2.1 - 5.4 fb-1 of Data
We combine results from CDF and D0 on direct searches for a standard model (SM) Higgs boson (H) in ppbar collisions at the Fermilab Tevatron at sqrt(s)=1.96 TeV. Compared to the previous Tevatron Higgs search combination more data have been added and some previously used channels have been reanalyzed to gain sensitivity. We use the latest parton distribution functions and gg->H theoretical cross sections when comparing our limits to the SM predictions. With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0, the 95% C.L. upper limits on Higgs boson production are a factor of 2.70 (0.94) times the SM cross section for a Higgs boson mass of m_H=115 (165) GeV/c^2. The corresponding median upper limits expected in the absence of Higgs boson production are 1.78 (0.89). The mass range excluded at 95% C.L. for a SM Higgs is 163<m_H<166 GeV/c^2, with an expected exclusion of 159<m_H<168 GeV/c^2.
I am unsure how Alain Connes feels now.
I have difficulty understanding how improved statistics can ever decrease your sensitivity. To be honest, this may cast doubt on the Tevatron's credibility. It certainly turns the spotlight toward CERN, which has better sensitivity in the lower mass range.

On the other hand, Alain Connes' prediction relies on the "big desert" hypothesis. Generally speaking, if the Higgs (or whatever plays its role) lies in a higher mass range (that is, if we reject the big desert), its width will be larger and we will face the same problem as with the a0/sigma in QCD.
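To put a rough number on the width argument, here is a back-of-the-envelope sketch (my own, not from the paper) using the standard tree-level formula for the partial width Gamma(H -> W+W-); it only illustrates how quickly the width grows with the mass.

Code:
# Tree-level SM partial width for H -> W+ W- (on-shell W bosons):
#   Gamma = (G_F * m_H^3) / (8*sqrt(2)*pi) * sqrt(1-4x) * (1 - 4x + 12x^2),
# with x = (m_W / m_H)^2. Back-of-the-envelope illustration only.
import math

G_F = 1.166e-5  # Fermi constant [GeV^-2]
m_W = 80.4      # W boson mass [GeV]

def gamma_H_to_WW(m_H):
    """Tree-level Gamma(H -> WW) in GeV, valid for m_H > 2*m_W."""
    x = (m_W / m_H) ** 2
    if 4 * x >= 1.0:
        return 0.0  # below the on-shell WW threshold
    return (G_F * m_H ** 3) / (8 * math.sqrt(2) * math.pi) \
           * math.sqrt(1 - 4 * x) * (1 - 4 * x + 12 * x ** 2)

for m_H in (170.0, 300.0, 500.0):
    print("m_H = %5.0f GeV  ->  Gamma(H->WW) ~ %6.2f GeV" % (m_H, gamma_H_to_WW(m_H)))

At 170 GeV this gives a width of roughly 0.4 GeV, while at 500 GeV the WW channel alone is already tens of GeV wide: a broad bump instead of a narrow resonance, which is exactly the problem alluded to above.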
 
  • #2
I am not sure if I understand.

The LHC energy range (or better: the accessible Higgs mass range) overlaps with the Tevatron's, but the LHC can search for the Higgs at higher masses. I don't know where the constraints on the higher masses come from, but I would speculate that they are excluded not by direct searches but by results sensitive to one- and two-loop corrections involving the Higgs mass.

Is Connes' prediction precise to within a few GeV? Does this really rule out the big desert? What happens beyond the Tevatron's mass range? Are there any results for alternative theories, e.g. a top-quark condensate?
 
  • #3
tom.stoer said:
...

Is Connes' prediction precise to within a few GeV? Does this really rule out the big desert? What happens beyond the Tevatron's mass range? Are there any results for alternative theories, e.g. a top-quark condensate?

In rough outline: Connes predicted a Higgs mass of 170 GeV. Then a few months ago Fermilab seemed to rule this out, so Connes was unhappy; his theory of the SM particles seemed to be falsified.

Now Fermilab has taken back that exclusion zone. The further data did not confirm the earlier trend; perhaps the earlier data had some random bias. I don't know why, but they now have a smaller exclusion zone which no longer contains 170 GeV.

Humanino or someone else can explain the details; since nobody else has replied yet, I only want to describe the broad outlines as I remember them.

So now it is possible that Connes is feeling happier about his model. His prediction of 170 GeV is not excluded. In a way it puts him back where he was two years ago.
 
  • #4
humanino said:
I have difficulty understanding how improved statistics can ever decrease your sensitivity. To be honest, this may cast doubt on the Tevatron's credibility.

Improved statistics decreased the limit, not the sensitivity (i.e., the expected limit). The expected limit improved while the actual limit got worse, which means there are more events in the second half of the data than in the first half.
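To illustrate the distinction, here is a toy counting-experiment sketch (all numbers invented; the real combination uses the CLs method over many channels with systematic uncertainties, which this does not attempt to reproduce). The "expected" limit is what you would quote if you observed exactly the predicted background; an upward fluctuation worsens the observed limit even while more data improves the expected one.

Code:
# Toy illustration of expected vs. observed upper limits in a single
# Poisson counting experiment. Invented numbers; not the CDF/D0 analysis.
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, b, cl=0.95):
    """Classical upper limit on the signal s: solve P(N <= n_obs | s+b) = 1-cl."""
    s_lo, s_hi = 0.0, 100.0
    for _ in range(60):  # bisection; the CDF is monotone in s
        s = 0.5 * (s_lo + s_hi)
        if poisson_cdf(n_obs, s + b) > 1.0 - cl:
            s_lo = s
        else:
            s_hi = s
    return 0.5 * (s_lo + s_hi)

b = 10  # expected background in the first half of the data
print("expected limit (n_obs = b):         %.1f" % upper_limit(b, b))
print("observed limit (upward fluct., 15): %.1f" % upper_limit(15, b))

# Doubling the data doubles b. The expected limit in *events* grows,
# but divided by the doubled luminosity it still improves.
print("expected limit with 2x data:        %.1f" % upper_limit(2 * b, 2 * b))

In this toy the expected limit grows only like the square root of the background, so per unit of luminosity it keeps improving, while an upward fluctuation in the observed counts can push the observed limit well above the expected one, which is the pattern seen in the combination.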
 
  • #5
Vanadium 50 said:
Improved statistics decreased the limit, not the sensitivity (i.e., the expected limit). The expected limit improved while the actual limit got worse, which means there are more events in the second half of the data than in the first half.
So they were really unlucky with an unlikely statistical fluctuation in their background, which has now disappeared. Besides, it seems they also improved their background simulation.

The reason I point out the low mass range, where the Tevatron is less sensitive, is that there is more disagreement there now.

In any case, it all looks to me like statistical effects.

Where is Zz? :smile:
 
  • #6
Vanadium 50 said:
Improved statistics decreased the limit, not the sensitivity (i.e., the expected limit). The expected limit improved while the actual limit got worse, which means there are more events in the second half of the data than in the first half.

The US LHC blog had a brief post on this last week (http://blogs.uslhc.us/?p=3048), with some nice plots and a good quick explanation.
 
  • #7
Moving this discussion out of the "Beyond the standard model" forum amounts to betting that the single scalar Higgs (the one in the standard model) will be found. I think most of the community does not believe that, and for the record I want to note here that this place was not my initial choice.
 
  • #8
Thank you for mentioning the US LHC blog post, as that was quite helpful.

Forgive me for asking, as I am not a high energy physicist, but what do they mean when they quote the number of collisions/events as:
"With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0"

How can they not know precisely how many fb-1 of data they are analyzing?
 
  • #9
I don't know who moved it or why, but surely a search is part of HEP. Also, the BSM discussions tend to be more "stringy" in nature (or on string alternatives).
 
  • #10
Vanadium 50 said:
I don't know who moved it or why, but surely a search is part of HEP. Also, the BSM discussions tend to be more "stringy" in nature (or on string alternatives).
One thing I was wondering about originally was the implications for Connes' model. I would be interested to know whether his 170 GeV is unmovably rigid, or whether it essentially relies on the "big desert" hypothesis (the assumption that no new physics appears between the electroweak and the Planck scales). If it relies on such a hypothesis, I would like to know whether there is any prospect of removing it, for instance by coming up with a SUSY SO(10) flavor of his model. I am not asking how the prediction would change in that case, but whether anybody has information as to whether it is feasible at all.
 
  • #11
JustinLevy said:
Forgive me for asking, as I am not a high energy physicist, but what do they mean when they quote the number of collisions/events as:
"With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0"

How can they not know precisely how many fb-1 of data they are analyzing?

The final results combine lots of different analyses, which have been performed with different amounts of data. Details are in
http://arxiv.org/abs/0911.3930
and references therein (in particular, tables II and III on page 5).
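In other words, the quoted fb-1 is a range because each channel entering the combination froze its dataset at a different point in time. A tiny illustration (channel names and luminosities invented; the real list is in the tables cited above):

Code:
# Why the paper quotes a luminosity *range*: each analysis channel in the
# combination was frozen with a different dataset. All numbers invented;
# see tables II and III of arXiv:0911.3930 for the real ones.
channels = {
    "WH -> l nu bb":      5.0,  # fb^-1 analyzed by this channel
    "ZH -> ll bb":        4.2,
    "H -> WW -> lnu lnu": 5.4,
    "H -> tau tau":       2.1,
}
lumis = channels.values()
print("With %.1f-%.1f fb-1 of data analyzed" % (min(lumis), max(lumis)))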
 

1. What is the Tevatron?

The Tevatron was a particle accelerator located at the Fermi National Accelerator Laboratory (Fermilab) in Illinois, United States. It was in operation from 1983 to 2011 and was used to accelerate protons and antiprotons close to the speed of light for high-energy particle collisions.

2. What is a Higgs boson?

The Higgs boson is a subatomic particle associated with the mechanism that gives other elementary particles their mass. It was first proposed by physicist Peter Higgs (among others) in the 1960s, and its existence was confirmed by experiments at the Large Hadron Collider (LHC) in 2012.

3. What were the Tevatron Higgs searches?

The Tevatron Higgs searches were a series of experiments conducted at the Tevatron accelerator to search for evidence of the Higgs boson. These experiments involved colliding protons and antiprotons at high energies and analyzing the resulting data for any signs of the Higgs boson's existence.

4. Did the Tevatron contribute to the discovery of the Higgs boson?

Yes, the Tevatron played a crucial role in the discovery of the Higgs boson. Although the LHC ultimately confirmed its existence, the Tevatron provided valuable data and constraints that helped narrow down the mass range of the Higgs boson and guide the search at the LHC.

5. Why was the Tevatron shut down?

The Tevatron was shut down in 2011 because it had reached its maximum potential for scientific discovery and was no longer cost-effective to operate. It was also becoming obsolete compared to newer and more powerful accelerators, such as the LHC.
