feynomite
Hello all. This problem is mostly math-based, but it's being applied to a programming algorithm.
I'm working on a problem with a massive number of normalized vectors in a massive number of dimensions, and I need to quickly compute the nearest neighbors of any one vector. More precisely, I need to find the closest 20 or so vectors, in as close to the correct order as possible. I was able to come up with two approximate solutions, but I don't know how to work out their margin of error, or really what they mean geometrically. They just happen to give decent results.
The vectors occupy a total space of about 15,000 dimensions, but (thankfully) any given vector is:
1) normalized
2) has positive non-zero values only
3) exists in only up to 10 dimensions
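For concreteness, here's one way such vectors could be represented in Python. The dict-based layout and the `normalize` helper are my own illustration (nothing in the problem prescribes them): each vector is a dict mapping dimension index to its positive component, with at most 10 entries out of the ~15,000 possible dimensions.

```python
import math

def normalize(v):
    """Scale a sparse vector (dict of dim -> value) to unit Euclidean norm."""
    norm = math.sqrt(sum(x * x for x in v.values()))
    return {dim: x / norm for dim, x in v.items()}

# A vector living in only 3 of the ~15,000 dimensions:
a = normalize({3: 1.0, 17: 2.0, 42: 2.0})
```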
To find the nearest neighbor, I understand that I'm looking for the vector with the shortest distance to the search vector. To do this, I compute the distance to every vector that shares at least one dimension with the search vector. All other vectors are orthogonal to it and therefore exactly sqrt(2) away (since both are unit length). I then sort by the distances.
Simple geometry tells me the distance between vectors a and b is:
let S(v) be the set of dimensions in which vector v has non-zero values
[tex]Distance(a, b) = \sqrt{\sum_{n \in S(a) \cup S(b)} (a_n - b_n)^2 }[/tex]
Unfortunately this takes a large number of calculations: for each of the up to 19 dimensions in the union (I'm ignoring the trivial case where a and b are orthogonal), I have to take the difference between the components, square it, and add it to the total. That's a lot of work.
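A direct sketch of that exact distance, assuming the same sparse-dict representation (dimension index -> value, with missing dimensions treated as zero):

```python
import math

def distance(a, b):
    """Exact Euclidean distance, summed over S(a) ∪ S(b)."""
    dims = set(a) | set(b)  # at most 19 dims when a and b share at least one
    return math.sqrt(sum((a.get(n, 0.0) - b.get(n, 0.0)) ** 2 for n in dims))
```

Orthogonal unit vectors indeed come out at sqrt(2), e.g. `distance({0: 1.0}, {1: 1.0})`.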
My approximation is as follows: I sum the minimums of each common dimensional component and treat the result as a "commonality" score:
[tex]ApproxCommonality(a, b) = \sum_{n \in S(a) \cap S(b)} \min(a_n, b_n) [/tex]
This is much faster because [tex]S(a) \cap S(b)[/tex] has at most 10 elements, and the work on each iteration is only the min() function. I take these scores and sort from highest to lowest. The problem is, I don't know how far off the resulting order is from the correct one. How well does this algorithm work?
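A minimal sketch of that score, again under the assumed sparse-dict representation:

```python
def approx_commonality(a, b):
    """Sum of min components over the shared dimensions S(a) ∩ S(b)."""
    return sum(min(a[n], b[n]) for n in set(a) & set(b))
```

The loop touches only the intersection, so at most 10 terms, with no squaring or square root.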
Yet another approximation I've found is simply the dot product between a and b:
[tex]ApproxCommonality2(a, b) = \sum_{n \in S(a) \cap S(b)} a_n b_n [/tex]
This is very fast as well. But does sorting all vectors by their dot product with "a" (highest to lowest) give the same order as sorting them by distance to "a" (lowest to highest)? If not, what's the margin of error?
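The corresponding sketch, for completeness (same assumed representation; only the shared dimensions contribute, since every other product term is zero):

```python
def approx_commonality2(a, b):
    """Dot product of a and b, restricted to S(a) ∩ S(b)."""
    return sum(a[n] * b[n] for n in set(a) & set(b))
```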
I'd appreciate your input on this, and other suggested algorithms would be great as well.