Boosted Decision Trees algorithm

Thread summary:
The Boosted Decision Trees (BDT) algorithm is used in particle physics to improve the identification of signals, such as neutral pions, by combining correlated discriminating variables into a single, more effective discriminant. Training begins with a one-dimensional cut on the variable that best separates the signal and background samples, followed by recursive cuts on further variables until a subsample falls below a minimum number of events. This threshold is crucial: very small subsamples lead to conclusions driven by statistical fluctuations and to misleadingly good signal-to-background ratios. Objects are classified according to whether they land in a signal-like or background-like subsample, and wrongly classified candidates are reweighted ("boosted") until a predefined number of trees has been built. Understanding the stopping criterion tied to subsample size is essential for the statistical validity of the classification.
ChrisVer:
I am not sure whether this belongs here, in statistical mathematics, or in the computing forum... feel free to move it. I am posting it here because I am trying to understand the algorithm as it is used in particle physics (e.g. the identification of neutral pions in ATLAS).

As I read it:

In general we have some (correlated) discriminating variables and we want to combine them into a single, more powerful discriminant. For that we use the Boosted Decision Tree (BDT) method.
The method is trained on an input of signal (the cluster closest to each ##\pi^0##, with a selection on the cluster-pion distance ##\Delta R < 0.1##) and background samples (the rest).
The training starts by applying a one-dimensional cut on the variable that provides the best discrimination between the signal and background samples.
This is then repeated in both the passing and failing subsamples using the next most powerful variable, until the number of events in a subsample falls to a minimum number of objects.
Objects are then classified as signal or background depending on whether they are in a signal-like or background-like subsample. The result defines a tree.
The process is repeated with wrongly classified candidates weighted higher (boosted), and it stops when the predefined number of trees has been reached. The output is a likelihood estimator of whether the object is signal or background. (A sketch of this loop is shown below.)
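Here is a minimal Python sketch of the training loop as I understand it, assuming NumPy; the names (grow_node, boost, min_node_size) and the use of the Gini index to rank cuts are my illustrative choices, not the actual ATLAS/TMVA implementation:

```python
import numpy as np

def gini(y, w):
    """Weighted Gini impurity of a node (y = 1 for signal, 0 for background)."""
    tot = w.sum()
    if tot == 0:
        return 0.0
    p = w[y == 1].sum() / tot          # weighted signal purity of the node
    return 2 * p * (1 - p)

def grow_node(X, y, w, min_node_size=100, depth=0, max_depth=4):
    """Recursively apply the best one-dimensional cut to a (sub)sample.
    Splitting stops when no cut can leave at least min_node_size events
    on each side -- the stopping criterion quoted in the text above."""
    if len(y) < 2 * min_node_size or depth >= max_depth:
        # Too few events to split further: label the leaf by majority weight.
        return {"leaf": w[y == 1].sum() >= w[y == 0].sum()}

    parent = gini(y, w)
    best_gain, best_var, best_thr = 0.0, None, None
    for j in range(X.shape[1]):              # scan every discriminating variable
        for thr in np.unique(X[:, j]):
            left = X[:, j] < thr
            # Reject cuts that would leave a subsample below the minimum size.
            if left.sum() < min_node_size or (~left).sum() < min_node_size:
                continue
            wl, wr = w[left].sum(), w[~left].sum()
            gain = parent - (wl * gini(y[left], w[left]) +
                             wr * gini(y[~left], w[~left])) / (wl + wr)
            if gain > best_gain:
                best_gain, best_var, best_thr = gain, j, thr

    if best_var is None:                     # no admissible cut -> leaf
        return {"leaf": w[y == 1].sum() >= w[y == 0].sum()}
    left = X[:, best_var] < best_thr
    return {"var": best_var, "thr": best_thr,
            "lo": grow_node(X[left], y[left], w[left], min_node_size, depth + 1, max_depth),
            "hi": grow_node(X[~left], y[~left], w[~left], min_node_size, depth + 1, max_depth)}

def predict(node, x):
    """Follow the cuts down to a leaf and return its signal/background label."""
    while "leaf" not in node:
        node = node["lo"] if x[node["var"]] < node["thr"] else node["hi"]
    return node["leaf"]

def boost(X, y, n_trees=10, min_node_size=100):
    """AdaBoost-style loop: regrow trees, weighting misclassified events higher."""
    w = np.ones(len(y)) / len(y)
    trees, alphas = [], []
    for _ in range(n_trees):
        tree = grow_node(X, y, w, min_node_size)
        pred = np.array([predict(tree, x) for x in X])
        err = w[pred != y].sum()
        if err == 0 or err >= 0.5:           # perfect or useless tree: stop early
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))   # boost the mistakes
        w /= w.sum()
        trees.append(tree)
        alphas.append(alpha)
    return trees, alphas    # a weighted vote over the trees is the BDT output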

My questions:
Starting with the best discriminating variable, the algorithm checks whether some cut on it is satisfied, leading to YES or NO.
Then, in each of the two resulting boxes, another check is done with some other variable, and so on... However, I don't understand how this can stop somewhere rather than continuing until all the variables have been checked (I don't understand the phrase: "until the number of events in a certain subsample has reached a minimum number of objects").
A picture of what I am trying to describe is shown below (the weird names are just the names of the variables; S: signal, B: background):
[Attached image: an example decision tree, with successive variable cuts branching into signal (S) and background (B) leaves]
 
If you have just 100 events (an arbitrary number) left in a category, the statistical fluctuations are too large to draw reliable conclusions. You would end up defining your cuts based on random fluctuations, and the resulting ##S/(S+B)## values would look much better than they really are.
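A quick numerical illustration of this point (a sketch, assuming NumPy; the numbers are invented): draw "signal" and "background" from the same distribution, so the true ##S/(S+B)## of any cut is 0.50, then tune a threshold on just 100 events:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                 # the small subsample from the example
sig = rng.normal(0.0, 1.0, n // 2)      # "signal" ...
bkg = rng.normal(0.0, 1.0, n // 2)      # ... and "background": same distribution

best_purity, best_thr = 0.0, 0.0
for thr in np.sort(np.concatenate([sig, bkg])):
    s = (sig > thr).sum()
    b = (bkg > thr).sum()
    if s + b >= 20 and s / (s + b) > best_purity:   # keep at least a few events
        best_purity, best_thr = s / (s + b), thr

print(f"apparent S/(S+B) after tuning: {best_purity:.2f} (cut at {best_thr:.2f})")
print("true S/(S+B) of any cut here:  0.50")
# Applying the tuned cut to a fresh, larger sample would give ~0.50 again:
# the apparent gain was a statistical fluctuation, which is exactly why the
# tree stops splitting once a subsample gets too small.
```

Because the maximisation over thresholds picks out an upward fluctuation, the apparent purity comes out above the true 0.50; the minimum-leaf-size requirement exists precisely to stop the tree from chasing such fluctuations.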
 
