
Median of a terabyte dataset

  1. Sep 29, 2013 #1
    Hi all

    I am working with a huge dataset that is too large to store in memory all at the same time.

    I have been looking at estimating the median of this dataset and have found things such as the median of medians, etc., but these methods still require me to store a lot of the data.

    For efficiency, I wanted to be able to derive something as basic as the median as the data is being processed. I found this paper:

    http://www.cs.wustl.edu/~jain/papers/ftp/psqr.pdf

    It's from way back, but it seems to do what I want.

    Basically this method stores only five markers from the data set: the maximum, the minimum, the current median estimate, and a point below and a point above the median.

    A parabolic fit is made between the median and the two points above and below it, and these markers are updated as the dataset gets processed. In the examples the paper gives, they work through a solution to one of the equations they present.
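
    Here is how I read the parabolic update step, written out as Python in case it helps pin down my question (the function and variable names are mine, not the paper's):

        # My reading of the P^2 parabolic marker adjustment:
        # q[i] are the marker heights, n[i] their current positions, and
        # d is +1 or -1 depending on which way marker i has to move.
        def parabolic_update(q, n, i, d):
            return q[i] + d / (n[i + 1] - n[i - 1]) * (
                (n[i] - n[i - 1] + d) * (q[i + 1] - q[i]) / (n[i + 1] - n[i])
                + (n[i + 1] - n[i] - d) * (q[i] - q[i - 1]) / (n[i] - n[i - 1])
            )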

    It's on page 1085, equation 5. The example they provide is on page 1081. When using their formula I get 4.46 for the second row, fourth column entry of the new height matrix. Does anyone know where the extra 0.1 came from?

    Thanks for thinking!
     
  3. Oct 3, 2013 #2
    I've never done anything like this, so this idea might be silly, and it also requires that you know a bit about the data distribution...

    Anyway, the idea I had was to use an online clustering algorithm, and then compute the median of the clusters in the end. That is, each cluster would have a "point count". When adding a point (a sample value), if the point is close enough to an existing cluster, it would add 1 to the closest cluster's point count. If it's not close to any existing cluster, a new cluster is added.

    When computing the approximated median the point count is used to find the cluster which approximates the median.

    The point distance function (computing the distance between two points) needs to be tuned to your expected data set, for example the effective "cluster radius", and maybe even made non-linear (logarithmic, for example).

    From some quick tests here it seems to work quite OK as long as you get the "cluster radius" approximately right, however it shouldn't be too hard to perform re-clustering using a larger cluster radius, thus making the algorithm somewhat adaptive. I tested it on uniformly distributed data, log-normal distributed data and spike-y data (90% uniform[0,1] and 10% exponentially distributed data multiplied by 100 to simulate a large spike).
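
    Here is a rough sketch of the idea in Python (the radius value and all the names are just mine, for illustration):

        # Online 1-D clustering: each sample either joins the closest existing
        # cluster (if it is within `radius`) or starts a new one; the approximate
        # median is then read off the cumulative cluster counts.
        def add_sample(clusters, x, radius=0.05):
            # clusters: dict mapping cluster centre -> point count
            if clusters:
                nearest = min(clusters, key=lambda c: abs(x - c))
                if abs(x - nearest) <= radius:
                    clusters[nearest] += 1
                    return
            clusters[x] = 1

        def approx_median(clusters):
            total = sum(clusters.values())
            running = 0
            for centre in sorted(clusters):
                running += clusters[centre]
                if running >= total / 2:
                    return centre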

    Again, might be silly and might not work with the data distribution.
     
  4. Oct 3, 2013 #3

    jim mcnamara

    Staff: Mentor

    Knuth Vol. 2 has algorithms for reading data linearly and calculating a mean, variance, standard deviation, etc., all on the fly. The algorithm reads a datastream or an array or whatever.

    It is called the online variance algorithm. See:

    http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
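
    A one-pass Python sketch of that online mean/variance recurrence (Welford's method, as described on that page):

        # Welford's online algorithm: single pass over the stream, O(1) memory.
        def online_mean_variance(stream):
            n = 0
            mean = 0.0
            m2 = 0.0  # running sum of squared deviations from the current mean
            for x in stream:
                n += 1
                delta = x - mean
                mean += delta / n
                m2 += delta * (x - mean)
            variance = m2 / (n - 1) if n > 1 else float("nan")
            return mean, variance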
     
  5. Oct 4, 2013 #4
    Hey all,

    Turns out linear data approximation is a bad idea! It makes the code take six times as long to run as it otherwise would. I'm using just the quantiles with the P2 method to see what I end up with; it severely reduces the run time (and my headache, hopefully).

    Thank you for your replies and suggestions.

    Duane
     