I am interested in evaluating light intensity variation in a digital image. A colleague wants to apply an inverse-square-law correction to account for distance variation, but I am trying to justify that the inverse square law does not apply in this case.

My reasoning: treat each pixel as a detector with a fixed acceptance angle, so the source area it sees varies as the square of the distance from the detector. The source is much larger than the area covered by a single pixel at the source distance. If the distance is doubled, the intensity contributed by any patch of the source falls to 1/4, but the source area covered by the pixel increases by a factor of 4, so the two effects cancel as long as the pixel's projected footprint lies entirely on the source in both cases.

My colleague is a very smart guy, but I can't see the flaw in my logic. The concept of a focused detector in the context of the inverse square law doesn't seem to be covered in any of the references I've found. I'd be grateful for a second (third?) opinion.
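To make the cancellation concrete, here is a minimal numeric sketch of the argument (the function name, radiance value, and acceptance half-angle are all hypothetical, chosen only for illustration): the pixel's footprint area on a uniform extended source grows as distance squared, while the per-unit-area contribution falls off as the inverse square, so their product is independent of distance.

```python
import math

def pixel_signal(distance, radiance=1.0, half_angle_rad=0.001):
    """Signal collected by one pixel viewing a uniform extended source.

    The footprint area on the source grows as distance**2, while the
    per-unit-area contribution falls as 1/distance**2, so the product
    is constant for any distance (while the footprint stays on the source).
    """
    # Area of the pixel's circular footprint on the source plane.
    footprint_area = math.pi * (distance * math.tan(half_angle_rad)) ** 2
    # Inverse-square falloff of each unit area of source.
    per_area_contribution = radiance / distance ** 2
    return footprint_area * per_area_contribution

s_near = pixel_signal(1.0)
s_far = pixel_signal(2.0)  # doubled distance
print(s_near, s_far)  # the two signals are equal: the distance factors cancel
```

This is only a toy model of a uniform source, of course, but it shows why no inverse-square correction appears for an extended source that overfills the pixel's field of view.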