
Tevatron Higgs searches

  1. Nov 25, 2009 #1
    After improving their statistics and analysis, the Tevatron gets a narrower exclusion range for the Higgs boson mass
    Combined CDF and D0 Upper Limits on Standard Model Higgs-Boson Production with 2.1 - 5.4 fb-1 of Data
    I am unsure how Alain Connes feels now.
    I have difficulty understanding how improved statistics can ever decrease your sensitivity. To be honest, this may cast doubt on the Tevatron's credibility. It certainly turns the spotlight toward CERN, which has better sensitivity in the lower mass range.

    On the other hand, Alain Connes' prediction relies on the "big desert" hypothesis. Generally speaking, if the Higgs (or whatever plays the Higgs role) sits in a higher mass range (i.e. if we reject the big desert), its width will be larger and we will face the same problem as with the a0/sigma in QCD.
  3. Nov 26, 2009 #2


    I am not sure if I understand.

    The LHC energy range (or better: mass range for the Higgs) overlaps with the Tevatron's, but the LHC can also search for the Higgs at higher energies. I don't know where the constraints on the higher masses come from, but I would speculate that they are not set by direct searches but by results sensitive to one- and two-loop corrections involving the Higgs mass.

    Is Connes' prediction sensitive at the level of a few GeV? Does this really rule out the big desert? What happens beyond the Tevatron's mass range? Are there any results for alternative theories, e.g. a top-quark condensate?
  4. Nov 26, 2009 #3



    In rough outline, Connes predicted a Higgs mass of 170 GeV. Then, a few months ago, Fermilab seemed to rule this out. So Connes was unhappy: his theory of the SM particles seemed to be falsified.

    Now Fermilab has taken back that exclusion zone. Further data did not confirm some trend or other; perhaps the earlier data had some random bias. I don't know why, but they now have a smaller exclusion zone which no longer contains 170 GeV.

    Humanino or someone else will explain the details. Since nobody else has replied yet, I only want to describe the broad outlines as I remember them.

    So now it is possible that Connes is feeling happier about his model. His prediction of 170 GeV is not excluded. In a way, this puts him back where he was two years ago.
  5. Nov 26, 2009 #4

    Vanadium 50


    Improved statistics decreased the limit, not the sensitivity (i.e. the expected limit). The expected limit improved while the actual limit got worse, which means there are more events in the second half of the data than in the first half.
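    To see how an observed limit can be worse than the expected limit even as the data set grows, here is a toy sketch of a single-bin counting experiment. This is not the actual CDF/D0 machinery (they combine many channels with a CLs-style method); the background level and observed count below are hypothetical, just to illustrate that an upward fluctuation in the data weakens the observed limit relative to the median expectation.

    ```python
    import math
    import numpy as np

    def upper_limit(n_obs, b, cl=0.95, step=0.01):
        """Classical upper limit on a signal s in a counting experiment
        with known background b: the smallest s such that
        P(N <= n_obs | mean = b + s) <= 1 - cl."""
        def pois_cdf(n, mu):
            # P(N <= n) for N ~ Poisson(mu), summed term by term
            term = math.exp(-mu)
            total = term
            for k in range(1, n + 1):
                term *= mu / k
                total += term
            return total

        s = 0.0
        while pois_cdf(n_obs, b + s) > 1 - cl:
            s += step
        return s

    rng = np.random.default_rng(42)
    b = 10.0  # hypothetical expected background count

    # Expected limit: median of the limits obtained in background-only
    # pseudo-experiments (what you'd quote as the "sensitivity").
    limits = sorted(upper_limit(int(n), b) for n in rng.poisson(b, 1000))
    expected = limits[len(limits) // 2]

    # Observed limit with an upward fluctuation: seeing 15 events over a
    # background of 10 leaves room for more signal, so the limit is weaker.
    observed = upper_limit(15, b)
    print(f"expected limit ~ {expected:.1f}, observed limit ~ {observed:.1f}")
    ```

    Running this, the observed limit comes out noticeably larger (weaker) than the expected one, which is the situation Vanadium 50 describes: the sensitivity improved, but the data themselves pushed the quoted exclusion the other way.
    
    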
  6. Nov 26, 2009 #5
    So they were really unlucky with an unlikely statistical fluctuation in their background, which has now disappeared. Besides, it seems they have also improved their background simulation.

    The reason I point out the low mass range, where the Tevatron is less sensitive, is that there is more disagreement there now.

    In any case, it all looks to me like statistical effects.

    Where is Zz? :smile:
  7. Nov 27, 2009 #6
    The US LHC blog had a brief post on this last week (http://blogs.uslhc.us/?p=3048), with some nice plots and a good quick explanation.
  8. Nov 29, 2009 #7
    Moving this discussion out of the "Beyond the standard model" forum amounts to betting that the single scalar Higgs (the one in the standard model) will be found. I think most of the community does not believe that, and for the record I want to note that this place was not my initial choice.
  9. Nov 30, 2009 #8
    Thank you for mentioning the US LHC blog post, as that was quite helpful.

    Forgive me for asking, as I am not a high-energy physicist, but what do they mean when they quote the amount of data analyzed as:
    "With 2.0-4.8 fb-1 of data analyzed at CDF, and 2.1-5.4 fb-1 at D0"

    How can they not know precisely how many fb-1 of data they are analyzing?
  10. Nov 30, 2009 #9

    Vanadium 50


    I don't know who moved it or why, but surely a search is part of HEP. Also, the BSM discussions tend to be more "stringy" in nature (or about string alternatives).
  11. Nov 30, 2009 #10
    One thing I was wondering originally was mostly the implications for Connes' model. I would be interested to know whether his 168 GeV is rigidly fixed, or whether it essentially relies on the "big desert" hypothesis (the assumption that no new physics appears between the electroweak and Planck scales). If it relies on such a hypothesis, I would like to know whether there is any prospect of removing it, for instance by coming up with a SUSY SO(10) flavor of his model. I am not asking how the prediction would change in that case, but whether anybody has information as to whether it is feasible at all.
  12. Nov 30, 2009 #11
    The final results combine lots of different analyses, which have been performed with different amounts of data. Details are in the combination paper linked in the first post, and references therein (in particular, tables II and III on page 5).