I am trying to match little square patches in an image. You can imagine that these patches have been "vectorized", in that their values are reordered consistently into a 1D array. At first glance it seems reasonable to simply do a Euclidean-distance-style comparison of two of these arrays to get a "similarity" measure. This works fine in many cases: the "best" patch according to this metric looks very much like the query patch.

However, there is a degenerate case (which actually happens quite often in practice) where a patch matches extremely well to a near-constant patch whose values are near the average of the query patch. For example, say I have a picture of a house. I would expect a patch of grass to match really well to other patches of grass. But what I see happening is that occasionally a patch of grass will match to a "smooth/solid" patch, say from the side of the house (if the smooth patch happens to have values near the average of the grass patch). This match is *obviously* wrong to a human observer, so I want my distance metric to account for it.

These cases can be detected by comparing the variance of the query patch to the variance of the best matching patch: if the variances are very different but the Euclidean distance is very low, then we probably have this case. What I am looking for is some "real/official" metric that takes this into account inside the metric. Of course I could do a simple:
Difference = EuclideanDistance + lambda * VarianceDistance
but then I would have to learn lambda, which would require training data (known good pairs of patches, etc.) - which is very annoying :)
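To make this concrete, here is a rough NumPy sketch of what I mean. The patch size, the random "grass" values, and the lambda value are all made up for illustration; the printed numbers are approximate.

    import numpy as np

    def euclidean_distance(a, b):
        # Plain L2 distance between two vectorized patches.
        return np.linalg.norm(a - b)

    def variance_distance(a, b):
        # Absolute difference of the two patch variances.
        return abs(np.var(a) - np.var(b))

    def combined_distance(a, b, lam):
        # The ad-hoc combination above: lambda weights the variance
        # term, and is the value that would have to be learned.
        return euclidean_distance(a, b) + lam * variance_distance(a, b)

    # The degenerate case: a textured "grass" patch vs. a solid patch
    # sitting exactly at the grass patch's mean value.
    rng = np.random.default_rng(0)
    grass1 = rng.normal(0.5, 0.15, size=81)   # 9x9 patch, vectorized
    grass2 = rng.normal(0.5, 0.15, size=81)   # another grass patch
    solid = np.full(81, grass1.mean())        # smooth patch at the mean

    # The solid patch beats the genuine grass patch on Euclidean
    # distance alone, even though it is obviously the wrong match...
    print(euclidean_distance(grass1, solid))   # ~1.3
    print(euclidean_distance(grass1, grass2))  # ~1.9
    # ...but the variance term flags it.
    print(variance_distance(grass1, solid))    # ~0.02 (vs. ~0 for grass pairs)

    # With a hand-picked lambda, the combined metric prefers the grass
    # patch again - but that choice of lambda is exactly the problem:
    lam = 50.0
    print(combined_distance(grass1, solid, lam))   # ~2.5
    print(combined_distance(grass1, grass2, lam))  # ~2.0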
Any thoughts/comments/suggestions?
Thanks!
David