How is the Tevatron able to improve its Higgs boson mass exclusion range?

humanino
After improving its statistics and analysis, the Tevatron gets a narrower exclusion range for the Higgs boson mass:
Combined CDF and D0 Upper Limits on Standard Model Higgs-Boson Production with 2.1 - 5.4 fb-1 of Data
We combine results from CDF and D0 on direct searches for a standard model (SM) Higgs boson (H) in ppbar collisions at the Fermilab Tevatron at sqrt(s)=1.96 TeV. Compared to the previous Tevatron Higgs search combination more data have been added and some previously used channels have been reanalyzed to gain sensitivity. We use the latest parton distribution functions and gg->H theoretical cross sections when comparing our limits to the SM predictions. With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0, the 95% C.L. upper limits on Higgs boson production are a factor of 2.70 (0.94) times the SM cross section for a Higgs boson mass of m_H=115 (165) GeV/c^2. The corresponding median upper limits expected in the absence of Higgs boson production are 1.78 (0.89). The mass range excluded at 95% C.L. for a SM Higgs is 163<m_H<166 GeV/c^2, with an expected exclusion of 159<m_H<168 GeV/c^2.
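For reference, a quick way to read those numbers (a minimal sketch of my own, not from the paper): the quoted factors are the ratio R = (95% C.L. upper limit on the production cross section) / (SM prediction), and a mass point is excluded at 95% C.L. when the observed R drops below 1.

Code:
# Minimal sketch: reading the quoted ratios R = sigma_95 / sigma_SM.
# A mass point is excluded at 95% C.L. when the observed R < 1.
limits = {
    115: {"observed": 2.70, "expected": 1.78},  # m_H in GeV/c^2, values from the abstract
    165: {"observed": 0.94, "expected": 0.89},
}
for m_H, R in limits.items():
    excluded = R["observed"] < 1.0
    print(f"m_H = {m_H} GeV/c^2: observed R = {R['observed']:.2f}, "
          f"expected R = {R['expected']:.2f}, excluded at 95% C.L.: {excluded}")

With these numbers, 115 GeV/c^2 is far from excluded (the limit sits at 2.7 times the SM rate), while 165 GeV/c^2 is just excluded, consistent with the quoted 163-166 GeV/c^2 window.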
I am unsure how Alain Connes feels now.
I have difficulty understanding how improved statistics can ever decrease your sensitivity. To be honest, this may cast doubt on the Tevatron's credibility. It certainly turns the spotlight toward CERN, which has better sensitivity in the lower mass range.

On the other hand, Alain Connes' prediction relies on the "big desert" hypothesis. Generally speaking, if the Higgs (or whatever plays the role of the Higgs) sits in a higher mass range (i.e. if we reject the big desert), its width will be larger and we are going to face the same problem as with the a0/sigma in QCD.
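To make the width comment quantitative, here is a rough numerical sketch (my own illustration, tree-level formula only, no higher-order corrections, and only the WW channel) showing that the width of a heavy SM-like Higgs grows roughly like m_H^3:

Code:
import math

# Rough sketch: tree-level Gamma(H -> W+W-), just to illustrate that the
# width of a heavy SM-like Higgs grows roughly like m_H^3.
G_F = 1.1663787e-5   # Fermi constant [GeV^-2]
m_W = 80.4           # W mass [GeV]

def gamma_H_to_WW(m_H):
    """Tree-level Gamma(H -> W+W-) in GeV, for m_H above the WW threshold."""
    x = (m_W / m_H) ** 2
    if 4 * x >= 1.0:
        return 0.0   # on-shell decay kinematically closed
    return (G_F * m_H**3) / (8 * math.sqrt(2) * math.pi) \
           * math.sqrt(1 - 4 * x) * (1 - 4 * x + 12 * x**2)

for m_H in (170, 300, 600, 1000):
    print(f"m_H = {m_H:4d} GeV  ->  Gamma(H -> WW) ~ {gamma_H_to_WW(m_H):6.1f} GeV")

Around 170 GeV this gives a width of a few hundred MeV, while near 1 TeV the WW channel alone already gives a width of a few hundred GeV, so a very heavy Higgs is no longer a narrow resonance.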
 
I am not sure if I understand.

The LHC energy range (or better: the accessible Higgs mass range) overlaps with the Tevatron's, but the LHC can also search for the Higgs at higher masses. I don't know where the constraints on the higher masses come from, but I would speculate that they are disfavored not by direct searches but by precision results sensitive to one- and two-loop corrections involving the Higgs mass.
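For what it's worth, my understanding (a schematic sketch; I am deliberately not quoting the exact coefficient) is that the indirect constraint is broad precisely because precision electroweak quantities such as Delta_r depend on the Higgs mass only logarithmically at one loop,

Delta_r ~ c * (G_F m_W^2)/(8 sqrt(2) pi^2) * ln(m_H^2/m_W^2) + ...   for m_H >> m_W, with c of order one,

which is why the global fits give a fairly loose upper bound on m_H rather than pinning it down.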

Is Connes' program sensitive at the level of a few GeV? Does this really rule out the big desert? What happens beyond the Tevatron's mass range? Are there any results for alternative theories, e.g. a top-quark condensate?
 
tom.stoer said:
...

Is Connes' program sensitive at the level of a few GeV? Does this really rule out the big desert? What happens beyond the Tevatron's mass range? Are there any results for alternative theories, e.g. a top-quark condensate?

In rough outline, Connes predicted a Higgs mass of about 170 GeV. Then, a few months ago, Fermilab seemed to rule this out. So Connes was unhappy: his theory of the SM particles seemed to be falsified.

Now Fermilab has taken back part of that exclusion zone. The additional data did not bear out some trend or other; perhaps the earlier data had some random bias. I don't know why, but they now have a smaller exclusion zone, which no longer contains 170 GeV.

Humanino or someone else will explain the details; I only want to describe the broad outline as I remember it, since nobody else has replied yet.

So now it is possible that Connes is feeling happier about his model. His prediction of 170 GeV is not excluded. In a way, it puts him back where he was two years ago.
 
humanino said:
I have difficulty understanding how improved statistics can ever decrease your sensitivity. To be honest, this may cast doubt on the Tevatron's credibility.

Improved statistics changed the observed limit, not the sensitivity (i.e. the expected limit). The expected limit improved, the observed limit got worse, and what this means is that there are more events in the second half of the data than in the first half.
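Here is a toy counting-experiment sketch of that point (my own illustration, using a classical Poisson upper limit rather than the actual CDF/D0 CLs machinery): doubling the data improves the median (expected) limit per unit of luminosity, but an upward fluctuation in the observed count makes the observed limit worse.

Code:
import math

def pois_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def s_upper(n_obs, b, cl=0.95, step=0.01):
    """Classical upper limit on the mean signal yield s: the smallest s
    such that P(N <= n_obs | s + b) <= 1 - cl."""
    s = 0.0
    while pois_cdf(n_obs, s + b) > 1 - cl:
        s += step
    return s

# Background of 10 events per unit of luminosity.
# Dataset 1: L = 1, observe exactly the background expectation (10).
# Dataset 2: L = 2, background expectation 20, but the count fluctuates up to 26.
for lumi, b, n_obs in [(1.0, 10, 10), (2.0, 20, 26)]:
    expected = s_upper(b, b) / lumi       # limit at the median background-only count (~b)
    observed = s_upper(n_obs, b) / lumi   # limit from the actual count
    print(f"L = {lumi:.0f}: expected limit ~ {expected:.1f}, "
          f"observed limit ~ {observed:.1f}  (signal events per unit luminosity)")

With these toy numbers the expected limit per unit luminosity improves when the data set doubles, while the observed limit gets worse because of the upward fluctuation -- exactly the pattern in the Tevatron combination.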
 
Vanadium 50 said:
Improved statistics changed the observed limit, not the sensitivity (i.e. the expected limit). The expected limit improved, the observed limit got worse, and what this means is that there are more events in the second half of the data than in the first half.
So they were really unlucky with an unlikely statistical fluctuation in their background, which has now disappeared. Besides, it seems they also improved their background simulation.

The reason I point out the low mass range, where the Tevatron is less sensitive, is that there is more disagreement there now.

In any case, it all looks to me like statistical effects.

Where is Zz ? :smile:
 
Vanadium 50 said:
Improved statistics changed the observed limit, not the sensitivity (i.e. the expected limit). The expected limit improved, the observed limit got worse, and what this means is that there are more events in the second half of the data than in the first half.

The US LHC blog had a brief post on this last week (http://blogs.uslhc.us/?p=3048), with some nice plots and a good quick explanation.
 
Moving this discussion out of the "Beyond the Standard Model" forum amounts to betting that the single scalar Higgs (the one in the standard model) will be found. I think most of the community does not believe in that, and for the record I want to note here that this place was not my initial choice.
 
Thank you for mentioning the US LHC blog post, as that was quite helpful.

Forgive me for asking, as I am not a high energy physicist, but what do they mean when they quote the number of collisions/events as:
"With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0"

How can they not know precisely how many fb-1 of data they are analyzing?
 
I don't know who moved it or why, but surely a search is part of HEP. Also, the BSM discussions tend to be more "stringy" in nature (or on string alternatives).
 
Vanadium 50 said:
I don't know who moved it or why, but surely a search is part of HEP. Also, the BSM discussions tend to be more "stringy" in nature (or on string alternatives).
One thing I was wondering about originally was mostly the implications for Connes' model. I would be interested to know whether his 168 GeV prediction is absolutely rigid, or whether it essentially relies on the "big desert" hypothesis (the assumption that no new physics appears between the electroweak and the Planck scales). If it relies on such a hypothesis, I would like to know whether there is any prospect of removing it, say by coming up with a SUSY SO(10) flavor of his model. I am not asking how the prediction would change in that case, but whether anybody has information on whether that is feasible at all.
 
JustinLevy said:
Forgive me for asking, as I am not a high energy physicist, but what do they mean when they quote the number of collisions/events as:
"With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0"

How can they not know precisely how many fb-1 of data they are analyzing?

The final results combine lots of different analyses, which have been performed with different amounts of data. Details are in
http://arxiv.org/abs/0911.3930
and references therein (in particular, tables II and III on page 5).
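To add a small illustration of that point (a sketch of my own with made-up numbers, not the actual channels or the CLs procedure used in the paper): each channel enters the combination with its own integrated luminosity and efficiency, and the combined likelihood for a given cross section is just the product of the per-channel Poisson terms, so there is no single luminosity to quote.

Code:
import math

# Illustrative only: three fake channels with different integrated
# luminosities L_i, efficiencies eps_i, backgrounds b_i and observed counts n_i.
channels = [
    # (L_i [fb^-1], eps_i, expected background, observed events)
    (2.0, 0.8, 12.0, 14),
    (4.8, 0.5,  9.0,  7),
    (5.4, 0.3,  5.0,  6),
]

def log_likelihood(sigma):
    """Combined Poisson log-likelihood (constants dropped) for cross section sigma."""
    ll = 0.0
    for L, eps, b, n in channels:
        mu = sigma * L * eps + b      # expected events in this channel
        ll += n * math.log(mu) - mu
    return ll

# Crude scan for the best-fit cross section; each channel simply contributes
# with its own amount of data.
sigmas = [0.01 * k for k in range(1, 500)]
best = max(sigmas, key=log_likelihood)
print(f"best-fit sigma ~ {best:.2f} (arbitrary units)")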
 
