Statistical Analysis in High-Energy Physics: Methods and Applications

Summary:
In high-energy physics, parameter estimation for the Standard Model typically relies on maximum likelihood fits, though practices vary among experimental collaborations such as ATLAS and CMS. Particle physicists usually quote results as confidence intervals, such as a 95% confidence range for a quark mass, derived from likelihood-based methods rather than textbook z- or t-tests. As signals become harder to detect, more advanced techniques like neural networks and boosted decision trees are increasingly used. Different research groups favor different statistical approaches; some, like UTFit, prefer Bayesian methods. The Particle Data Group provides a centralized reference for comparing the various methods and their results.
iibewegung
Hi,

Could anyone tell me what type of statistical test is used to estimate the parameters of the Standard Model?

I hear many particle physicists say things like "we have 95% confidence that this quark mass falls between A and B," and what immediately comes to mind are the methods of statistical hypothesis testing, such as the z-test or t-test.

Any explanations or comments about the statistical analysis being done on the data that we get from colliders?
Thanks in advance!
 
When I was a grad student in HEP many years ago, we commonly used the maximum likelihood method. Don't press me on the details, though. It's been a long time since I did this stuff.
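For concreteness, here is a minimal sketch of what an unbinned maximum likelihood fit looks like in practice, written in Python with numpy/scipy on a toy Gaussian "mass peak". The dataset, the model, and the numbers are all invented for illustration, and this is not any collaboration's actual analysis code; the 95% interval at the end uses the usual profile-likelihood rule 2·ΔNLL < 3.84 (the 95% quantile of a chi-squared distribution with one degree of freedom).

Python:
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import norm

# Toy "measurements" of a mass peak (GeV); the true values are made up.
rng = np.random.default_rng(42)
data = rng.normal(loc=91.2, scale=2.5, size=1000)

def nll(params):
    """Negative log-likelihood for a Gaussian model with mean mu and width sigma."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

# Maximize the likelihood by minimizing the NLL.
fit = minimize(nll, x0=[90.0, 2.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
print(f"mu_hat = {mu_hat:.3f} GeV, sigma_hat = {sigma_hat:.3f} GeV")

# Approximate 95% CL interval for mu from the profile likelihood:
# scan mu, minimize over sigma at each point, and keep the region where
# 2*(NLL - NLL_min) < 3.84 (95% quantile of chi^2 with one degree of freedom).
def profile_nll(mu):
    return minimize_scalar(lambda s: nll([mu, s]),
                           bounds=(0.1, 10.0), method="bounded").fun

mus = np.linspace(mu_hat - 0.5, mu_hat + 0.5, 201)
two_delta_nll = np.array([2.0 * (profile_nll(m) - fit.fun) for m in mus])
inside = mus[two_delta_nll < 3.84]
print(f"95% CL interval for mu: [{inside.min():.3f}, {inside.max():.3f}] GeV")

Real analyses add backgrounds, nuisance parameters, and systematic uncertainties on top of this, but the likelihood machinery is the same basic idea.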
 
Different groups can and do use different methods. I (as a theorist) imagine that debating what methods to use is a big part of life in the major experimental collaborations (ATLAS, CMS, CDF, DØ, etc.), which are the ones who directly analyze collider data. I believe standard procedure is for smaller working groups to perform analyses that they then try to get approved by various committees. As signals get harder to see, more elaborate methods (neural nets, decision trees) have to be used.
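Just to make "decision trees" concrete: below is a toy sketch of a boosted decision tree separating an invented "signal" from "background" with scikit-learn. The two discriminating variables and their Gaussian shapes are assumptions made up for the example; real analyses train on many kinematic variables inside dedicated frameworks.

Python:
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Two toy discriminating variables per event; signal and background are just
# Gaussian blobs with different means, purely for illustration.
signal = rng.normal(loc=[1.0, 0.5], scale=0.8, size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = signal, 0 = background

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_train, y_train)
print("classification accuracy on the test sample:", bdt.score(X_test, y_test))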

Here are some slides you may find interesting: http://physics.bu.edu/~tulika/Teaching/Spring09/lectures/Lecture8.pdf

At a higher level, various groups (UTFit at http://www.utfit.org/, etc.) perform statistical analyses of many experimental results, again using different methods. (The UTFit group likes Bayesian methods, for instance.)
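To make the Bayesian flavour concrete, here is a toy sketch of a posterior and a 95% credible interval for the same kind of Gaussian-mean problem as above, with a flat prior and the width held fixed to keep it one-dimensional. This is just a grid calculation for illustration, not UTFit's actual machinery (which combines many correlated measurements).

Python:
import numpy as np
from scipy.stats import norm

# Same style of toy data as in the maximum likelihood example above.
rng = np.random.default_rng(42)
data = rng.normal(loc=91.2, scale=2.5, size=1000)
sigma = 2.5   # width assumed known, so the posterior is one-dimensional

# Posterior on a grid: a flat prior means the posterior is proportional
# to the likelihood.
mu_grid = np.linspace(90.5, 91.9, 2001)
log_post = np.array([norm.logpdf(data, loc=m, scale=sigma).sum() for m in mu_grid])
post = np.exp(log_post - log_post.max())

# Normalize numerically and read off the central 95% credible interval.
dmu = mu_grid[1] - mu_grid[0]
post /= post.sum() * dmu
cdf = np.cumsum(post) * dmu
lo = mu_grid[np.searchsorted(cdf, 0.025)]
hi = mu_grid[np.searchsorted(cdf, 0.975)]
print(f"95% credible interval for mu: [{lo:.3f}, {hi:.3f}] GeV")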

The Particle Data Group (http://pdg.lbl.gov/) is the one-stop spot for compiling and comparing the various approaches and their results.
 