Statistical Analysis in High-Energy Physics: Methods and Applications

  • Thread starter: iibewegung
  • Tags: HEP, Statistical
iibewegung:
Hi,

Could anyone tell me what type of statistical test is used to estimate the parameters of the Standard Model?

I hear many particle physicists say, e.g., "we have 95% confidence that this quark mass falls between A and B," and what immediately comes to mind are statistical hypothesis-testing methods like the z-test or t-test.

Any explanations or comments about the statistical analysis done on the data we get from colliders?
Thanks in advance!
 
When I was a grad student in HEP many years ago, we commonly used the maximum likelihood method. Don't press me on the details, though. It's been a long time since I did this stuff. :redface:
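To make the maximum likelihood idea concrete, here is a minimal toy sketch (not any specific experiment's analysis): estimating a particle lifetime from simulated exponential decay times, with an approximate 95% confidence interval from the asymptotic variance of the estimator. The lifetime value and sample size are illustrative assumptions, not real data.

```python
# Toy maximum-likelihood example: estimate an exponential decay lifetime tau
# from simulated decay times, then quote an approximate 95% confidence
# interval. TRUE_TAU and N are made-up values for illustration only.
import math
import random

random.seed(42)
TRUE_TAU = 2.2   # hypothetical lifetime (arbitrary units)
N = 10_000
times = [random.expovariate(1.0 / TRUE_TAU) for _ in range(N)]

# For the exponential pdf f(t) = (1/tau) exp(-t/tau), maximizing the
# log-likelihood sum_i [-log(tau) - t_i/tau] gives tau_hat = mean(t_i).
tau_hat = sum(times) / N

# Asymptotic (Fisher-information) standard error of tau_hat is tau_hat/sqrt(N),
# so an approximate 95% interval is tau_hat +/- 1.96 standard errors.
se = tau_hat / math.sqrt(N)
lo, hi = tau_hat - 1.96 * se, tau_hat + 1.96 * se
print(f"tau_hat = {tau_hat:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Real collider analyses are far more involved (unbinned fits over many observables, nuisance parameters, profile likelihoods), but this is the same "95% confidence interval from a likelihood fit" logic in miniature.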
 
Different groups can and do use different methods. I (as a theorist) imagine that debating what methods to use is a big part of life in the major experimental collaborations (ATLAS, CMS, CDF, DØ, etc.), which are the ones who directly analyze collider data. I believe standard procedure is for smaller working groups to perform analyses that they then try to get approved by various committees. As signals get harder to see, more elaborate methods (neural nets, decision trees) have to be used.

Here are some slides you may find interesting: http://physics.bu.edu/~tulika/Teaching/Spring09/lectures/Lecture8.pdf

At a higher level, various groups (http://www.utfit.org/, etc.) perform statistical analyses of many experimental results, again using different methods. (The UTfit group likes Bayesian methods, for instance.)

The Particle Data Group (http://pdg.lbl.gov/) is the one-stop spot for compiling and comparing the various approaches and their results.
 
