Median of a terabyte dataset

  • #1
Hi all

I am working with a huge dataset that is too large to store in memory all at once.

I have been looking at ways to estimate the median of this dataset and have found approaches such as the median of medians, but these methods still require me to store a lot of the data.

For efficiency, I wanted to be able to derive something as basic as the median as the data is being processed. I found this paper:

http://www.cs.wustl.edu/~jain/papers/ftp/psqr.pdf

It's from way back, but it seems to do what I need.

Basically this method stores only 5 markers from the data set: the minimum, the maximum, the current median estimate, and one point below and one above the median.

A parabolic fit is made between the median and the two points above and below it, and these markers are updated as the dataset gets processed. In the examples the paper gives, they work through one of the equations they present.

It's on page 1085, equation 5. The example they provide is on page 1081. When using their formula I get 4.46 for the second-row, fourth-column entry of the new height matrix. Does anyone know where this extra 0.1 came from?
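In case it helps, here is how I read their parabolic update (equation 5) in code; the variable names are mine, not the paper's, and d is the +1/-1 direction the marker has to move:

```python
def parabolic_update(q_prev, q, q_next, n_prev, n, n_next, d):
    """Piecewise-parabolic (P^2) prediction of a new marker height.

    q_prev, q, q_next are the heights of a marker and its two neighbours,
    n_prev, n, n_next are their current positions (counts), and d is +1 or -1
    depending on which way the marker moves. This is my reading of
    equation 5; the variable names are mine, not the paper's.
    """
    return q + (d / (n_next - n_prev)) * (
        (n - n_prev + d) * (q_next - q) / (n_next - n)
        + (n_next - n - d) * (q - q_prev) / (n - n_prev)
    )
```

As far as I can tell, this prediction is only kept when it stays strictly between the neighbouring heights; otherwise the paper falls back to linear interpolation.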

Thanks for thinking!
 

Answers and Replies

  • #2
I've never done anything like this, so this idea might be silly, and it also requires that you know a bit about the data distribution...

Anyway, the idea I had was to use an online clustering algorithm, and then compute the median of the clusters in the end. That is, each cluster would have a "point count". When adding a point (a sample value), if the point is close enough to an existing cluster, it would add 1 to the closest cluster's point count. If it's not close to any existing cluster, a new cluster is added.

To compute the approximate median, the point counts are used to find the cluster sitting at the middle of the data, i.e. the one whose cumulative count reaches half the total.

The point distance function (which computes the distance between a sample and a cluster centre) needs to be adjusted to your expected data set, in particular the effective "cluster radius", and you might even want to make it non-linear (logarithmic, for example).

From some quick tests here it seems to work quite well as long as you get the "cluster radius" approximately right. It also shouldn't be too hard to re-cluster with a larger radius, which would make the algorithm somewhat adaptive. I tested it on uniformly distributed data, log-normally distributed data, and spiky data (90% uniform[0,1] and 10% exponentially distributed data multiplied by 100 to simulate a large spike).
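Just to make the idea concrete, here is a rough sketch in Python; the names and the fixed radius are placeholders, and in practice you'd tune the distance function as described above:

```python
import bisect

def add_point(clusters, counts, x, radius):
    """Add one sample to a sorted list of 1-D cluster centres.

    clusters is a sorted list of centre values and counts holds the
    matching point counts. If x falls within `radius` of the nearest
    centre, that cluster absorbs it; otherwise a new cluster is started.
    The names and the fixed radius are placeholders -- tune for your data.
    """
    i = bisect.bisect_left(clusters, x)
    best, best_dist = None, radius
    for j in (i - 1, i):                      # centres just below/above x
        if 0 <= j < len(clusters) and abs(clusters[j] - x) <= best_dist:
            best, best_dist = j, abs(clusters[j] - x)
    if best is None:
        clusters.insert(i, x)                 # start a new cluster at x
        counts.insert(i, 1)
    else:
        counts[best] += 1

def approx_median(clusters, counts):
    """Return the centre of the cluster holding the middle sample."""
    half = sum(counts) / 2.0
    running = 0
    for centre, count in zip(clusters, counts):
        running += count
        if running >= half:
            return centre
    return None
```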

Again, might be silly and might not work with the data distribution.
 
  • #3
jim mcnamara
Mentor
Knuth Vol. 2 has algorithms for reading data linearly and calculating the mean, variance, standard deviation, etc., all on the fly. The algorithm reads a data stream, an array, or whatever you give it.

It is called the online variance algorithm. See:

http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
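A minimal sketch of that single-pass (Welford-style) update, in case it helps:

```python
def online_mean_variance(stream):
    """Single-pass (Welford-style) mean and variance.

    Keeps only three running values, so it works on a stream that is far
    too large to hold in memory.
    """
    count, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)          # uses the *updated* mean
    variance = m2 / (count - 1) if count > 1 else float("nan")
    return mean, variance
```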
 
  • #4
Hey all,

Turns out linear data approximation is a bad idea! It makes the code take 6 times as long to run as it otherwise would. I'm now using just the quantiles with the P² method to see what I end up with; it severely reduces the run time (and my headache, hopefully).

Thank you for your replies and suggestions.

Duane
 
