How can I calculate the median of a terabyte dataset efficiently?

  • Context: Graduate
  • Thread starter: physical101
  • Tags: Median

Discussion Overview

The discussion revolves around methods for efficiently calculating the median of a terabyte-sized dataset that cannot be fully stored in memory. Participants explore various approaches, including online algorithms and clustering techniques, while addressing the challenges associated with processing large datasets.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant discusses a method from a paper that estimates the median by storing only five variables and updating them as data is processed, but questions an unexpected value in their calculations.
  • Another participant proposes using an online clustering algorithm to compute the median of clusters, suggesting that the method could adapt based on the data distribution and cluster radius.
  • A third participant references algorithms from Knuth's work for calculating variance and other statistics on the fly, indicating that these might be relevant for the median calculation.
  • A later reply critiques linear data approximation as inefficient, suggesting that using quantiles may improve runtime significantly.

Areas of Agreement / Disagreement

Participants express differing views on the effectiveness of various methods for calculating the median, with no consensus on the best approach. Some methods are challenged as inefficient, while others are proposed as alternatives.

Contextual Notes

Participants mention specific algorithms and methods without resolving the limitations or assumptions inherent in their approaches, such as the dependence on data distribution and the computational efficiency of different techniques.

physical101
Hi all

I am working with a huge dataset that is too large to store in memory all at once.

I have been looking at estimating the median of this dataset and have found approaches such as the median of medians, but these methods still require me to store a lot of the data.

For efficiency, I wanted to be able to derive something as basic as the median as the data is being processed. I found this paper:

http://www.cs.wustl.edu/~jain/papers/ftp/psqr.pdf

It's from way back, but it seemed to be able to do what I wanted.

Basically this method stores only 5 values from the data set: the maximum, the minimum, the current estimate of the median, and a point below and a point above it.

A parabolic fit is made between the median marker and the two points above and below it, and these values are updated as the dataset gets processed. In the examples the paper gives, they work through a solution to one of the equations they present.

It's equation 5 on page 1085, and the example they provide is on page 1081. When using their formula I get 4.46 for the entry in the second row, fourth column of the new height matrix. Does anyone know where this extra 0.1 came from?
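
In case it helps show what I mean, here is a rough Python sketch of how I have been reading the algorithm. The variable names and the 0-based bookkeeping are mine, not the paper's notation, so please treat it as a sketch rather than a faithful copy of their pseudocode:

```python
# Rough sketch of a P^2-style median estimator as I read the paper.
# Marker names and indexing are my own, not the paper's notation.

class P2Median:
    def __init__(self):
        self.q = []                              # marker heights (values)
        self.n = [0, 1, 2, 3, 4]                 # actual marker positions
        self.np = [0.0, 1.0, 2.0, 3.0, 4.0]      # desired marker positions
        self.dn = [0.0, 0.25, 0.5, 0.75, 1.0]    # desired-position increments
        self.count = 0

    def add(self, x):
        self.count += 1
        if self.count <= 5:                      # first five observations seed the markers
            self.q.append(x)
            self.q.sort()
            return
        # find the cell the new observation falls into, updating the min/max markers
        if x < self.q[0]:
            self.q[0] = x
            k = 0
        elif x >= self.q[4]:
            self.q[4] = x
            k = 3
        else:
            k = 0
            while x >= self.q[k + 1]:
                k += 1
        for i in range(k + 1, 5):                # shift positions of markers above the cell
            self.n[i] += 1
        for i in range(5):                       # advance the desired positions
            self.np[i] += self.dn[i]
        # adjust the three interior markers if they drifted from their targets
        for i in (1, 2, 3):
            d = self.np[i] - self.n[i]
            if (d >= 1 and self.n[i + 1] - self.n[i] > 1) or \
               (d <= -1 and self.n[i - 1] - self.n[i] < -1):
                d = 1 if d > 0 else -1
                qn = self._parabolic(i, d)       # the parabolic adjustment (equation 5)
                if not (self.q[i - 1] < qn < self.q[i + 1]):
                    qn = self._linear(i, d)      # fall back to linear interpolation
                self.q[i] = qn
                self.n[i] += d

    def _parabolic(self, i, d):
        n, q = self.n, self.q
        return q[i] + d / (n[i + 1] - n[i - 1]) * (
            (n[i] - n[i - 1] + d) * (q[i + 1] - q[i]) / (n[i + 1] - n[i])
            + (n[i + 1] - n[i] - d) * (q[i] - q[i - 1]) / (n[i] - n[i - 1]))

    def _linear(self, i, d):
        return self.q[i] + d * (self.q[i + d] - self.q[i]) / (self.n[i + d] - self.n[i])

    def median(self):
        if self.count >= 5:
            return self.q[2]                     # the middle marker tracks the median
        return sorted(self.q)[len(self.q) // 2] if self.q else None
```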

Thanks for thinking!
 
I've never done anything like this, so this idea might be silly, and it also requires that you know a bit about the data distribution...

Anyway, the idea I had was to use an online clustering algorithm, and then compute the median of the clusters in the end. That is, each cluster would have a "point count". When adding a point (a sample value), if the point is close enough to an existing cluster, it would add 1 to the closest cluster's point count. If it's not close to any existing cluster, a new cluster is added.

When computing the approximated median, the point counts are used to find the cluster whose cumulative count reaches half of the total points; that cluster's value approximates the median.

The point distance function (computing the distance between two points) needs to be adjusted depending on your expected data set, in particular the effective "cluster radius", and you could maybe even make it non-linear (logarithmic, for example).

From some quick tests here it seems to work quite well as long as you get the "cluster radius" approximately right. It also shouldn't be too hard to re-cluster using a larger cluster radius, which would make the algorithm somewhat adaptive. I tested it on uniformly distributed data, log-normally distributed data and spiky data (90% uniform[0,1] and 10% exponentially distributed data multiplied by 100 to simulate a large spike).

Again, might be silly and might not work with the data distribution.
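
Something along these lines is what I have in mind; a rough Python sketch, where the fixed radius and the names are just placeholders, and for real data you would want a smarter distance function and the re-clustering step described above:

```python
# Rough sketch of the online clustering idea for 1-D values.
# The fixed "radius" is a guess that has to be tuned to the data.

class ClusterMedian:
    def __init__(self, radius):
        self.radius = radius
        self.centers = []      # one representative value per cluster
        self.counts = []       # how many points were assigned to each cluster

    def add(self, x):
        if self.centers:
            # find the closest existing cluster
            i = min(range(len(self.centers)), key=lambda j: abs(self.centers[j] - x))
            if abs(self.centers[i] - x) <= self.radius:
                self.counts[i] += 1
                return
        # no cluster is close enough, so start a new one at this value
        self.centers.append(x)
        self.counts.append(1)

    def median(self):
        # walk the clusters in value order until half of all points are covered
        order = sorted(range(len(self.centers)), key=lambda j: self.centers[j])
        half = sum(self.counts) / 2.0
        seen = 0
        for j in order:
            seen += self.counts[j]
            if seen >= half:
                return self.centers[j]
```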
 
Hey all,

Turns out linear data approximation is a bad idea! It takes the code 6 times as long to run as it otherwise would. I am using just the quantiles with the P2 to see what I end up with, which severely reduces the run time (and my headache, hopefully).
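
For anyone curious, this is roughly how I'm driving it now; the P2Median class is just the sketch from my first post and the file name is made up, so the same caveats apply:

```python
# Hypothetical usage: one pass over the data, keeping only the five markers.
est = P2Median()
with open("huge_dataset.txt") as f:    # placeholder file name
    for line in f:
        est.add(float(line))
print("approximate median:", est.median())
```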

Thank you for your replies and suggestions.

Duane
 
