Statistical assessment of the quality of event detection

StarWars
Hello.

I developed an algorithm to detect events in the time domain, and I want to assess how well it performs.

The problem is related to the time duration of the data.

Each file contains data with a duration of hundreds of minutes, and I have dozens of files.

Instead of calculating the specificity and the sensitivity of this algorithm for the entire data set, I was thinking of choosing random samples.

My question is:

What is the correct approach to obtain a statistically valid analysis?

Thank you.
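
One concrete version of the random-sampling idea is to draw fixed-length segments at random from the files, label each sampled segment by hand (event present or not), run the detector on it, and tally the results in a 2x2 table from which sensitivity and specificity follow. A minimal sketch in Python, where the audio loader, the detector, and the manual labelling function are hypothetical placeholders the user would have to supply:

```python
import numpy as np

def evaluate_on_random_segments(files, load_audio, detector, label_segment,
                                segment_seconds=10.0, n_segments=200, seed=0):
    """Draw random fixed-length segments from the files, compare the detector's
    verdict with a manually assigned label, and tally a 2x2 confusion table."""
    rng = np.random.default_rng(seed)
    tp = fp = tn = fn = 0
    for _ in range(n_segments):
        path = files[rng.integers(len(files))]              # pick a file at random
        signal, fs = load_audio(path)                       # user-supplied loader
        seg_len = int(segment_seconds * fs)
        start = int(rng.integers(0, len(signal) - seg_len)) # random start sample
        segment = signal[start:start + seg_len]
        detected = bool(detector(segment, fs))              # algorithm flags an event?
        truth = bool(label_segment(path, start, seg_len))   # human says an event is there?
        if truth and detected:
            tp += 1
        elif truth and not detected:
            fn += 1
        elif detected:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity, (tp, fp, tn, fn)
```

Sensitivity is TP/(TP+FN) and specificity is TN/(TN+FP) over the sampled segments; confidence intervals for those proportions then quantify how much the random sampling limits the precision of the estimates.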
 
Unfortunately, applications of statistics involve subjective judgements. If you want practical advice about a valid statistical approach, you need to give more practical details of the situation. For example, what are you concerned about: the number of events in a file, or the exact time at which an event occurs? Do you have information about when an event "really" happened versus when the algorithm said it happened?
 
I am studying sounds in the time domain. Usually the signal has a low-amplitude profile, just noise. Sometimes a sound is generated and there is an increase in the signal amplitude.

The goal of the algorithm is to detect this increase in the signal amplitude. Unfortunately, the generation of a sound can be treated as random. A low-amplitude profile may last for minutes or even hours without a single sound being generated; at other times there may be a sequence of sounds lasting several minutes, with only a few seconds between sound n and sound n+1.
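
For concreteness, a detector of this general kind often amounts to comparing a short-time RMS estimate against a threshold tied to the noise floor. The sketch below is only an illustration of that idea, not the poster's actual algorithm; the window length and threshold factor are arbitrary assumptions:

```python
import numpy as np

def detect_bursts(signal, fs, window_seconds=0.05, threshold_factor=4.0):
    """Flag windows whose RMS exceeds a multiple of the median window RMS.
    Returns the start times (in seconds) of the flagged windows."""
    win = max(1, int(window_seconds * fs))
    n_windows = len(signal) // win
    frames = np.asarray(signal[:n_windows * win], dtype=float).reshape(n_windows, win)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))      # short-time RMS per window
    noise_floor = np.median(rms)                     # robust estimate of the quiet level
    flagged = np.where(rms > threshold_factor * noise_floor)[0]
    return flagged * win / fs                        # window start times in seconds
```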

I am concerned with the quality of the detection, i.e. sensitivity and specificity. In other words, I want to know whether a generated sound is detected or missed, and whether a sound is "detected" when no sound was actually generated.

I do not have any prior information, only what the algorithm gives me.

Thank you
 
StarWars said:
I do not have any prior information, only what the algorithm gives me.

If that means you have no way to compare the algorithm's detections to real events, then I think you should resort to simulating data containing "real" events and seeing how well the algorithm detects them. To simulate such data, you need an algorithm (a model of the noise and of the sounds) to do the simulation.
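
As a rough illustration of that idea: generate noise with a known number of louder bursts injected at known times, run the detector, and count how many injected bursts it finds and how many detections have no burst nearby. The noise model, burst shape, and amplitudes below are arbitrary assumptions, not a recommendation for this particular signal:

```python
import numpy as np

def simulate_recording(duration_s=600.0, fs=8000, n_events=20,
                       burst_s=0.2, burst_sigma=3.0, seed=0):
    """White Gaussian noise with n_events louder bursts injected at random
    times; returns the signal, the sample rate, and the true burst start
    times in seconds."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    signal = rng.normal(0.0, 1.0, n)                        # background noise
    burst_len = int(burst_s * fs)
    true_starts = np.sort(rng.integers(0, n - burst_len, n_events))
    for s in true_starts:
        signal[s:s + burst_len] += rng.normal(0.0, burst_sigma, burst_len)
    return signal, fs, true_starts / fs

def score_detections(detected_times, true_times, tolerance_s=0.5):
    """Count true events that have at least one detection within tolerance_s."""
    detected_times = np.asarray(detected_times, dtype=float)
    hits = sum(np.any(np.abs(detected_times - t) <= tolerance_s)
               for t in true_times)
    return hits, len(true_times)
```

Running the detector over many such simulated recordings gives empirical estimates of sensitivity (fraction of injected bursts found) and of the false-alarm rate (detections with no injected burst nearby).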

If you want to do statistical hypothesis testing on each set of data, you need a "null hypothesis", which could be that no sounds are present and that the data are generated by some specific random process. You need a way to compute the probability of getting similar data when those assumptions are true. If you have no algorithm or formula to compute this probability, then you can't do hypothesis testing.
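
For example, if the null hypothesis were that a window of data is zero-mean Gaussian noise with a known (or separately estimated) standard deviation, that probability can be computed in closed form from the windowed energy. A minimal sketch, assuming that Gaussian-noise null; the window and noise level are inputs the user must supply:

```python
import numpy as np
from scipy.stats import chi2

def noise_only_p_value(window, noise_sigma):
    """P-value for the null hypothesis 'this window is pure zero-mean Gaussian
    noise with standard deviation noise_sigma'.  Under that null,
    sum(x_i**2) / sigma**2 follows a chi-squared distribution with
    len(window) degrees of freedom."""
    window = np.asarray(window, dtype=float)
    statistic = np.sum(window ** 2) / noise_sigma ** 2
    # Survival function: probability of seeing this much energy (or more)
    # from noise alone.  A small value argues against "noise only".
    return chi2.sf(statistic, df=window.size)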

There may be situations in engineering and science where people have developed standard methods of dealing with your problem. You can try asking about your problem in the engineering or science sections of the forum and give more details.
 