- #1
xeon123
I have a set of tests, and each test has a different size.
I'll show an example.
Code:
Test1
size - time (seconds)
100 - 10
100 - 23
100 - 17
200 - 37
200 - 42
200 - 47
300 - 53
300 - 53
300 - 53
For each size, I took the average of the three runs.
Code:
Average1
size - average time (seconds)
100 - 16
200 - 42
300 - 53
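In Python terms (just a sketch, assuming the statistics module), this is what I mean by taking the average per size; note that the exact average for size 100 is 50/3, roughly 16.7.
Code:
from statistics import mean

# Test1 results: size -> the three measured run times in seconds
test1 = {
    100: [10, 23, 17],
    200: [37, 42, 47],
    300: [53, 53, 53],
}

# Average run time per size (the "Average1" table above)
averages = {size: mean(times) for size, times in test1.items()}
print(averages)  # roughly {100: 16.67, 200: 42, 300: 53}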
And for each size in Test1, I took the standard deviation of the three runs. Does it make sense to calculate the mean of those standard deviations?
Can averaging the standard deviations prove that all runs took a similar time? For example, does a low average of the standard deviations mean that the three results for size 100 were similar to each other, the three results for size 200 were similar to each other, and the three results for size 300 were similar to each other?
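In Python terms again (just a sketch with the same assumptions, using the sample standard deviation from statistics.stdev), this is the computation I'm asking about:
Code:
from statistics import mean, stdev

# Same Test1 data as in the sketch above
test1 = {
    100: [10, 23, 17],
    200: [37, 42, 47],
    300: [53, 53, 53],
}

# Sample standard deviation of the three runs for each size
std_per_size = {size: stdev(times) for size, times in test1.items()}
print(std_per_size)                  # roughly {100: 6.51, 200: 5.0, 300: 0.0}

# The quantity in question: the mean of the per-size standard deviations
print(mean(std_per_size.values()))   # roughly 3.84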