I would like a second opinion on a solution that was presented to us. As background, we have a system that computes the total number of records for each file it receives, and a new file arrives every 5 minutes. For monitoring purposes, the problem is how to determine whether a newly arrived file is a low-volume file, meaning its record count is very low compared to the other files.

The proposed solution is as follows. First, compute the mean (mean A) of the total record counts from the past 6 months. Then sort those counts from lowest to highest, take the 30 lowest values, and compute their mean (mean B). Mean B divided by mean A is called the minimum index point. When a new file arrives, its record count is divided by mean A, and the result is compared against the minimum index point. If the value is lower than the minimum index point, an alarm is generated informing the user that the current file is a low-volume file.

Is there any flaw in this method? Or is there a more efficient solution for this particular problem?
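To make the proposed method concrete, here is a minimal Python sketch of the steps described above. The function and parameter names are my own for illustration; the original description does not name them.

```python
import statistics

def is_low_volume(new_count, historical_counts, k=30):
    """Flag a file as low-volume using the proposed thresholding method.

    new_count: record count of the newly arrived file.
    historical_counts: record counts of files from the past 6 months.
    k: number of lowest counts used to build the threshold (30 in the proposal).
    """
    mean_a = statistics.mean(historical_counts)   # mean A: overall mean
    lowest_k = sorted(historical_counts)[:k]      # the k lowest counts
    mean_b = statistics.mean(lowest_k)            # mean B: mean of the lowest k
    min_index_point = mean_b / mean_a             # the "minimum index point"
    value = new_count / mean_a                    # new file's ratio to mean A
    return value < min_index_point                # alarm if below the threshold
```

Note that since both sides of the final comparison are divided by the same mean A, the rule is algebraically equivalent to simply checking whether the new count is below mean B (the mean of the 30 lowest historical counts).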