
How does the inverse square law apply to a focused detector?

  1. Nov 28, 2018 #1
    I am interested in evaluating light intensity variation in a digital image. A colleague wants to apply an inverse square law correction to account for distance variation. I am trying to justify that in this case, the inverse square law does not apply.

    Treating each pixel as a detector: each pixel has a fixed acceptance angle, so the area it senses on the source varies as the square of the distance from the detector.

    The source is much larger than the area covered by a pixel at the source distance. If the distance is doubled, the intensity would be 1/4, but the source area covered by a pixel would also increase by a factor of 4, so the two effects cancel, as long as the pixel's projected area falls completely on the source in both cases.
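
    A quick numerical sketch of that cancellation (all values invented; assuming a uniform extended source, a fixed pixel acceptance angle, and a fixed lens aperture):

    Code (Python):
    # Toy model: a pixel with a fixed acceptance (solid) angle viewing a
    # large uniform source of radiance L. All numbers are made up.
    pixel_solid_angle = 1e-8    # sr, fixed by the optics, not by distance
    source_radiance = 1000.0    # W / m^2 / sr, uniform across the source
    aperture_area = 1e-4        # m^2, fixed lens aperture

    for distance in (10.0, 20.0, 40.0):                     # metres
        footprint_area = pixel_solid_angle * distance**2    # m^2 seen on the source
        aperture_solid_angle = aperture_area / distance**2  # sr, seen from the source
        power = source_radiance * footprint_area * aperture_solid_angle
        print(f"d = {distance:5.1f} m  power at pixel = {power:.3e} W")

    # The d^2 growth of the footprint cancels the 1/d^2 aperture falloff,
    # so the printed power is the same at every distance.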

    My colleague is a very smart guy but I can't see the flaw in my logic. The concept of a focused detector in the context of the inverse square law doesn't seem to be covered in any of the references I've found. I'd be grateful for a second (third?) opinion.
     
  3. Nov 28, 2018 #2

    PeterDonis

    Staff: Mentor

    Can you be more specific about what is in the image, and what sort of distance variation is present?
     
  4. Nov 28, 2018 #3

    Nugatory

    Staff: Mentor

    And what is doing the focusing?
     
  5. Nov 28, 2018 #4

    russ_watters

    Staff: Mentor

    This is a common source of arguments in amateur astronomy circles, but I too am not clear on the setup you are describing.

    A hint/guess though:
    It is notable that the intensity of a galaxy in an image is not strongly correlated with its distance, due to the effect you describe. E.g., a galaxy that is further away sends less light to a telescope/camera, but that light is projected onto a smaller area of the sensor, largely cancelling out the reduction. That enables Hubble (and amateurs) to take photos of galaxies at vastly different distances in one frame.
     
  6. Nov 28, 2018 #5
    The source would typically be a large slab of hot steel, 20 - 30 m from the camera. Focussing is done with a commercial camera lens. The light source is many pixels wide at all of the distances in question.
     
  7. Nov 28, 2018 #6
    I would think that if the object's distance from the camera is more than three or four times the diameters of both the object and the light-gathering aperture, the ISL would be a good approximation. This is a rule of thumb for radiation detectors. The reasoning is that every point on the observed object then sees about the same subtended solid angle of the aperture, which follows the ISL.
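
    A rough numerical check of that rule of thumb (a sketch of my own, treating the source as a flat disc of point emitters viewed on-axis and ignoring obliquity factors):

    Code (Python):
    import numpy as np

    # Sum 1/r^2 contributions from a disc of point emitters at an on-axis
    # detector, and compare with the single-point ISL estimate.
    R = 1.0                            # disc radius (arbitrary units)
    r = np.linspace(0.0, R, 2000)      # radii of thin annuli
    dA = 2.0 * np.pi * r * (R / 2000)  # annulus areas

    for d in (1.0, 2.0, 4.0, 8.0, 16.0):        # detector distance, units of R
        exact = np.sum(dA / (d**2 + r**2))      # summed point-source flux
        isl = np.sum(dA) / d**2                 # whole disc lumped at its centre
        print(f"d = {d:5.1f} R  ISL error = {abs(isl - exact) / exact:.3%}")

    # The diameter is 2 R, so "three or four diameters" is d = 6-8 here,
    # where the error is already down to roughly 1% or less.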
     
  8. Nov 28, 2018 #7

    russ_watters

    Staff: Mentor

    I think you are (correctly) describing the total amount of light hitting the detector, but the OP is describing the intensity of the light.
     
  9. Nov 28, 2018 #8

    russ_watters

    Staff: Mentor

    I would say your interpretation of how the photography works is correct. The inverse square law applies to the total amount of light hitting the detector, not the intensity (pixel brightness value).
     
  10. Nov 28, 2018 #9
    I am thinking of two extreme cases and ignoring everything in between:
    1. A point source with a detector that has a very wide field of view. The ISL clearly applies. In the lighting industry they use a rule of thumb that for distances greater than 5x the source size the ISL is a good enough approximation, but their detector is typically averaging incident light over a half-sphere.
    2. A large source that fills the entire detector sensing area. If the distance increases enough, the source would no longer fill the detector and the ISL would start to apply again (see the sketch below).

    I guess another way to think about it would be to say that a focussed detector changes the boundary between near and far field conditions?
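
    A toy model of that crossover (numbers invented; a fixed-field-of-view detector watching a uniform disc source):

    Code (Python):
    import math

    # Toy detector with a fixed field of view watching a uniform disc source:
    # the response is flat while the source overfills the field of view, then
    # rolls over to 1/d^2 once the whole source fits inside it (case 2 above).
    fov_half_angle = 0.01  # rad, detector half-angle (invented)
    R = 1.0                # source radius (arbitrary units)

    for d in (10, 50, 100, 200, 400, 800):
        seen_radius = min(R, d * math.tan(fov_half_angle))  # part of source in view
        signal = seen_radius**2 / d**2                      # seen area / d^2
        print(f"d = {d:4d}  relative signal = {signal:.3e}")

    # Below d = R / tan(0.01) = 100 the signal is constant; beyond that it
    # falls as R^2 / d^2, i.e. the ISL regime begins.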
     
  11. Nov 28, 2018 #10

    jbriggs444

    Science Advisor

    So if the image illuminates multiple pixels, the inverse square law manifests in terms of the number of pixels illuminated. Double the distance and quarter the image area and, accordingly, the number of illuminated pixels. But if the image illuminates only a fraction of a pixel, the inverse square law manifests in terms of the intensity at that pixel. Double the distance and quarter the intensity.
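
    A sketch of those two regimes (toy numbers of my own, assuming a fixed focal length and pixel pitch, and thin-lens scaling):

    Code (Python):
    # Two regimes for a source of fixed physical size as distance grows: while
    # the image spans many pixels, per-pixel intensity stays constant and the
    # ISL shows up as a shrinking pixel count; once the image is smaller than
    # one pixel, that pixel's intensity itself falls as 1/d^2.
    source_width = 1.0   # m (invented)
    focal_length = 0.05  # m
    pixel_pitch = 5e-6   # m

    for d in (25, 100, 1_000, 10_000, 40_000):         # metres
        image_width = source_width * focal_length / d  # thin-lens scaling
        pixels_across = image_width / pixel_pitch
        if pixels_across >= 1.0:
            regime = f"resolved: ~{pixels_across**2:.0f} px, constant per-pixel intensity"
        else:
            regime = "sub-pixel: per-pixel intensity now falls as 1/d^2"
        print(f"d = {d:6d} m  {regime}")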
     
  12. Nov 28, 2018 #11

    Nugatory

    Staff: Mentor

    But to maintain constant intensity at a single pixel as the distance increases we would have to focus the decreased amount of light on a corresponding smaller area of the pixel array, no? It's not clear to me from the original post and followup what the geometry is and whether it has this property.
     
  13. Nov 28, 2018 #12

    russ_watters

    Staff: Mentor

    Agreed. For astronomy, what you describe manifests in a difference between imaging individual stars vs entire galaxies.
     
  14. Nov 28, 2018 #13

    russ_watters

    Staff: Mentor

    Yes.
    My read is that the imaging system is not changing, but the subject is moving relative to the camera. So if you double the distance, it covers 1/4 as many pixels.
     
  15. Nov 29, 2018 #14
    Thanks all for your input.

    I like jbriggs444's explanation: "So if the image illuminates multiple pixels, the inverse square law manifests in terms of the number of pixels illuminated."

    I think I'm satisfied enough to move forward on an experiment to test it!
     
  16. Dec 1, 2018 #15

    sophiecentaur

    Science Advisor
    Gold Member
    2017 Award

    A pixel is a sample, and sampling theory says that, as long as you are sampling at a high enough rate, the original signal can be reconstructed. So I am a bit reluctant to accept an argument based just on the number of pixels. There are so many false assumptions about resolution being based just on pixel count that I am always wary. I think that you can (should) ignore the pixels, and also their acceptance angle, for any distant source.
    For an unfocussed point source (i.e. no lens), the energy flux through a given area will follow the ISL exactly. So, in terms of pixels, the same number are illuminated all the time and you just integrate. When you focus with a lens, the total flux through the lens will also follow the ISL, and the sensor array will measure that total as long as all the light from lens to image is intercepted (which light from a distant point will be). That argument applies to a single large detector (a bolometer, for instance), and I think a large array would deliver the same answer.
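
    One way to see that integration argument numerically (a sketch of my own, assuming an idealised linear sensor with no noise or saturation):

    Code (Python):
    import numpy as np

    # Spread a fixed number of photons over spots of different sizes on a
    # pixel grid; an ideal linear sensor returns the same integrated total
    # whether the light lands on one pixel or on many.
    rng = np.random.default_rng(0)
    total_photons = 100_000
    grid = 64                           # pixels per side

    for spot_sigma in (0.3, 2.0, 8.0):  # spot size in pixels
        xy = rng.normal(loc=grid / 2, scale=spot_sigma, size=(total_photons, 2))
        ix = np.clip(xy.astype(int), 0, grid - 1)
        counts = np.zeros((grid, grid))
        np.add.at(counts, (ix[:, 0], ix[:, 1]), 1)
        print(f"sigma = {spot_sigma:4.1f} px  pixels lit = "
              f"{np.count_nonzero(counts):5d}  total counts = {counts.sum():.0f}")

    # The integrated count is identical in every case; only its distribution
    # across pixels changes.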
     
  17. Dec 1, 2018 #16

    russ_watters

    Staff: Mentor

    While I'm not entirely sure what you are getting at, I'm not, strictly speaking, using pixels themselves, except as a proxy for area or angle. Intensity can be measured in photons per unit area or angular area. With the imaging system fixed, they all vary together by the appropriate relations.

    As I and others noted, that assumption gets inaccurate when what you are imaging is smaller than a pixel, but that is not an issue here.
     
    Last edited: Dec 1, 2018
  18. Dec 1, 2018 #17

    sophiecentaur

    Science Advisor
    Gold Member
    2017 Award

    I tend to shy away from photons and pixels because they can cloud the issue at times. But, now that they have been introduced, the following argument should apply. The lens gathers a number of photons per second. The CMOS or CCD elements are pretty well linear. The overall photon count will be the same whether one or sixteen elements record it.
    I know that's not necessarily the full answer to the question, but I think it at least nails part of it.
     
  19. Dec 1, 2018 #18
    To think about this another way:
    You can easily demonstrate that if you have a square source that occupies 16 pixels in your detector field of view and you double the viewing distance it will occupy 4 pixels. If the ISL applies then the total energy hitting the entire detector array also drops by a factor of 4 so that the intensity of the light hitting the remaining 4 pixels must remain constant.
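
    The same bookkeeping in a few lines of Python (just restating the numbers above):

    Code (Python):
    # Double the distance: image area and total energy each drop by 4x,
    # so energy per illuminated pixel is unchanged.
    pixels_near, energy_near = 16, 1.0          # arbitrary starting values
    pixels_far = pixels_near / 2**2             # image area shrinks as 1/d^2
    energy_far = energy_near / 2**2             # total energy follows the ISL
    per_pixel_near = energy_near / pixels_near  # 0.0625
    per_pixel_far = energy_far / pixels_far     # 0.0625 -> intensity unchanged
    print(pixels_far, energy_far, per_pixel_far / per_pixel_near)  # 4.0 0.25 1.0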
     
  20. Dec 2, 2018 #19

    sophiecentaur

    Science Advisor
    Gold Member
    2017 Award

    If we are discussing "pixels" then the implication is that a lens is involved (camera or telescope). The aperture of the lens determines how much incident light power reaches the sensor or array. There's no doubt (?) that the energy into the lens aperture is subject to the ISL at large distances. A small image must be assumed if we have already accepted that the object is far enough away for the ISL. All the photons going into the lens will fall on sensor elements (quantum efficiencies in the region of 90%+ are typical of good sensors). Sensor elements have a linear response (volts per received photon is constant), so whether the image falls on just one element or several, the integral over the image will give the same result. So the photons/pixels issue will not affect the result compared with the classical idea of power through the lens.
    This is not an answer to the whole question, of course, but it does clear up a potential misunderstanding. (Intuition and experience with normal photography can get in the way.) If you use high magnification on a star image, you will see the image become larger and appear less bright because it's spread out, but that's another issue, to do with the non-linear way the eye (plus our perception) works (CMOS elements are not subjective). There is always an optimum magnification for viewing, and we tend to choose an image size that's just bordering on the diffraction limit. Pale, fuzzy stars and planets are 'no fun'.

    The actual size of the image of a 'point' source will be diffraction limited, and there will be an Airy disc distribution of the light falling on the sensor. The size in mm on the sensor will depend on the focal length of the lens, and the subtended angle (in terms of the object) will be
    $$\sin\theta \approx 1.22\,\frac{\lambda}{d}$$
    (from Wiki, where $\lambda$ is the wavelength and $d$ is the aperture diameter), and if the scope records the whole of the disc, the total flux will be recorded.
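
    For concreteness, plugging example numbers into that relation (values of my own choosing, using the small-angle form theta ≈ 1.22 λ/d):

    Code (Python):
    # Airy disc: angular radius of the first minimum is ~1.22 * lambda / d,
    # and its physical radius on the sensor is f * theta for small angles.
    wavelength = 550e-9  # m, green light (example value)
    aperture_d = 0.10    # m, 100 mm aperture (example value)
    focal_length = 1.0   # m (example value)

    theta = 1.22 * wavelength / aperture_d  # radians
    spot_radius = focal_length * theta      # metres on the sensor
    print(f"theta = {theta:.2e} rad, spot radius = {spot_radius * 1e6:.1f} um")
    # ~6.7e-6 rad and ~6.7 um here: comparable to a pixel, so a star's light
    # may land on one pixel or a handful, depending on the optics.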

    The f-number of a lens ("focal ratio") doesn't affect the brightness of a star image; that is set just by the aperture. However, for regular photographs of extended scenes, the f-number does determine how bright the final image looks.

    If we take an extended but distant object, the above argument will apply to the whole input energy flux if it's considered point by point. In fact, if your telescope's field is filled by the Moon and you then go halfway there, the ISL will not appear to apply, because light from the outer parts of the Moon's image will no longer hit the sensor or your eye.
     