This question was triggered by the fact that Adam Riess is making his lecture rounds at a local university. Wanting to be prepared, I pulled out his old Nobel lecture, which nicely describes the techniques used for the high-z supernova (SN) measurements. I was particularly interested in how they estimated each SN's "true" apparent magnitude by applying correction factors such as the K-correction, host-galaxy dust extinction, Milky Way dust extinction, and others (see the schematic correction chain below).

In the lecture he emphasized the need to subtract the host galaxy's light from the SN image so that the SN brightness is not overestimated. He ended up using custom software developed by Brian Schmidt as a grad student. All well and good, but he didn't explain how that subtraction was calibrated. It occurred to me that if the software preferentially subtracted too much light from images of fainter hosts, the SNe would appear "too faint" relative to what a non-accelerating universe predicts, and an erroneous "acceleration" could be inferred.

The other teams must have encountered the same problem, and surely they didn't use the same software to subtract the host galaxy's light. Either way, the technique still has to be calibrated to rule out any apparent-magnitude-dependent bias (I sketch the kind of test I have in mind at the end of this post). I couldn't find any details in the literature. Does anybody know how this is done?
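For concreteness, the correction chain I have in mind is, schematically (my notation, not necessarily theirs):

$$ m_{\rm corr} = m_{\rm obs} - K(z) - A_{\rm MW} - A_{\rm host}, $$

where $m_{\rm obs}$ already depends on how the host's light was subtracted, so any magnitude-dependent error in that step propagates directly into the inferred distance modulus $\mu = m_{\rm corr} - M$.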
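And to make the calibration question concrete, here is a toy injection-recovery test of the sort I would expect: inject a fake point source of known magnitude onto a host galaxy, run a template subtraction, and check whether the recovered magnitude is biased as a function of the input magnitude. Everything here (the Gaussian host and PSF, the zeropoint, the simple aperture photometry) is my own simplified assumption, not a description of either team's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

SIZE = 64            # postage-stamp image is SIZE x SIZE pixels
PSF_SIGMA = 1.5      # point-spread-function width, in pixels
SKY_NOISE = 5.0      # per-pixel Gaussian sky noise (counts)
ZEROPOINT = 30.0     # m = ZEROPOINT - 2.5*log10(flux in counts)

def gaussian2d(size, x0, y0, sigma, total_flux):
    """Circular 2-D Gaussian image with the given total flux."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return total_flux * g / g.sum()

def aperture_flux(img, x0, y0, radius):
    """Sum the counts inside a circular aperture."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mask = (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2
    return img[mask].sum()

def run_trial(sn_mag, host_flux):
    """Inject a fake SN of known magnitude onto a host galaxy,
    subtract an independently noisy host-only template, and
    return the magnitude recovered from the difference image."""
    x0 = y0 = SIZE / 2
    sn_flux = 10 ** (-0.4 * (sn_mag - ZEROPOINT))
    host = gaussian2d(SIZE, x0, y0, sigma=6.0, total_flux=host_flux)
    sn = gaussian2d(SIZE, x0, y0, sigma=PSF_SIGMA, total_flux=sn_flux)

    # "Science" image (host + SN) and host-only "template", each with
    # its own realization of sky noise.
    science = host + sn + rng.normal(0, SKY_NOISE, (SIZE, SIZE))
    template = host + rng.normal(0, SKY_NOISE, (SIZE, SIZE))

    diff = science - template  # the template-subtraction step
    flux = aperture_flux(diff, x0, y0, radius=3 * PSF_SIGMA)
    return ZEROPOINT - 2.5 * np.log10(max(flux, 1e-3))

# Recovered-minus-injected magnitude versus injected magnitude:
# any trend here is the magnitude-dependent bias I am worried about.
for sn_mag in (19.0, 20.0, 21.0, 22.0, 23.0):
    biases = [run_trial(sn_mag, host_flux=5e5) - sn_mag for _ in range(200)]
    print(f"m_in = {sn_mag:.1f}: mean bias = {np.mean(biases):+.3f} mag")
```

In this toy the template contains exactly the same host as the science image, so the host cancels perfectly and any residual bias comes only from noise and aperture losses. My question is essentially: what is the real-data analogue of this test, where the subtraction itself could fail differently for faint hosts?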