
Can better sensors extract unlimited info from old lens?

  1. Feb 12, 2012 #1
    I placed this in the quantum physics forum, but since no comments came, either it belongs here or it is too trivial.
    Please comment.

    I encountered this problem in my photography forum.
    A new camera model comes out with a much higher-megapixel sensor (the Nikon D800, 36 Mpix). I argue that such a sensor will need new lenses to be meaningful. Others claim that a given lens's performance (here, resolution) is always utilized further by a new, higher-resolution sensor. They invoke Fourier transforms, modulation transfer functions, etc. I am not an engineer, more of an amateur philosopher. To me it smells of a Zeno paradox (a "quantum Zeno effect"?). Surely there must be a limit on the amount of information an ever-improving recording medium (the sensor) can extract from the same old lens? Or not?
    Or does this issue belong to classical physics?
    I mean, in my mind I see a lemon running out of juice at some point of the squeeze.
    Isn't the information coming through any given lens limited, then? No more photons to be converted into electrons? On one hand it's a question of the quantum efficiency of the CMOS sensor, but on the other, of a physical information limit of the light field?
    So are those Fourier transforms of MTFs accurate descriptions, or are we at the point where the technology stumbles on quanta?


    "... As I wrote the formula is an approximation. For accurate results one would need to multiply the MTFs by means of Fourier transforms.
    Every image will show more resolution on the D800 with any lens at any aperture than on previous Nikon's.
    The resolution of a system is a function of the resolution of it's parts, not only a single component.
    The resolution of a lens plus sensor system can be approximated by:

    1/Total resolution = 1/lens resolution + 1/sensor resolution. Or

    Total resolution = (lens resolution)x(sensor resolution) / [ (lens resolution) + (sensor resolution) ]. "....
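
    A minimal Python sketch of that reciprocal-sum rule (the 70 and 100 lp/mm figures below are purely illustrative values, not measurements of any real lens or sensor):

```python
def total_resolution(lens_lpmm: float, sensor_lpmm: float) -> float:
    """Approximate system resolution from 1/total = 1/lens + 1/sensor."""
    return (lens_lpmm * sensor_lpmm) / (lens_lpmm + sensor_lpmm)

# Example: a 70 lp/mm lens on a 100 lp/mm sensor.
# The result is always below the weaker component.
print(total_resolution(70, 100))  # ~41.2 lp/mm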


    "Resolution figures are meaningless without accompanying contrast data.
    One has to put somewhere the cutoff line: what is acceptable. In cinematography Zeiss Master Prime delivering 70lp/mm at 70% contrast is the standard bearer (at 25k U$ and several pounds)
    In photography I am willing to accept 50% contrast as a minimum performance measure.
    If somebody wants to go down to MTF 25 or even 10--his choice. He will be able to point to "extra resolution" on his cranked up monitor viewed at 100%.
    The concept of a lens delivering an ever-increasing stream of visual information as the recording medium becomes more and more capable, is philosophically intriguing, to say the least. A "quantum Zeno effect" of sorts."


    "If you would have read the thread I linked to further, you would have found that the approximation is based on multiplications of MTFs by means of Fourier transforms, You can pick any frequency (resolution) you want. Higher frequencies are generally used as a measure of resolution and lower as contrast.

    The approximate formula is still valid and serves as a good first approximation to explain the principles of what can be expected for a system of lens plus sensor.The optical perfomance of a system of two components (lens and sensor) is not determined just by a single component but by both."

    You can think of it as two MTFs stacked on top of each others."
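
    A rough numeric sketch of that multiplication, using hypothetical model curves (a Gaussian for the lens, a |sinc| for a 5 um pixel aperture; all parameters are assumptions chosen for illustration):

```python
import numpy as np

f = np.linspace(0.1, 200, 2000)            # spatial frequency, lp/mm
mtf_lens = np.exp(-(f / 90.0) ** 2)        # model lens: 50% contrast near 75 lp/mm
mtf_sensor = np.abs(np.sinc(f * 0.005))    # model 5 um (0.005 mm) pixel aperture

mtf_system = mtf_lens * mtf_sensor         # cascaded MTFs multiply

def mtf50(curve):
    """Frequency at which a curve first drops below 50% contrast."""
    return f[np.argmax(curve < 0.5)]

# The system MTF50 is always below the lens MTF50, because the
# sensor's MTF is <= 1 at every frequency.
print(mtf50(mtf_lens), mtf50(mtf_system))
```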
  3. Feb 12, 2012 #2

    Andy Resnick

    Science Advisor
    Education Advisor

    It's hard for me to figure out what you are asking, but the sentence I pulled out from the post ("Every image will show more resolution on the D800 with any lens at any aperture") is false. A blurry image will still look blurry at increased sampling. I haven't seen any side-by-side comparisons of the new D800 and (say) the D7000, but the claim is that some high-quality lenses will show improved performance on the D800.

    What exactly are you asking?
  4. Feb 12, 2012 #3
    There is a strong belief among some photo pundits that a higher-resolution sensor will always extract extra resolution from all existing lenses. This is supposedly based on Fourier transforms of the modulation transfer functions of the imaging chain.
    Is it true?
    Do math models of imaging sciences give support to such claims?

    I have had the impression that there is something like a lens performance limit, independent of the measuring (recording) component of the imaging chain. If a lens delivers 70 lp/mm at MTF70 as measured on MTF test equipment, that should be it. No sensor should extract 90 lp/mm at MTF70 from that lens.
    Am I wrong?
  5. Feb 12, 2012 #4

    Andy Resnick

    Science Advisor
    Education Advisor


    On the contrary, the results demonstrate that the claims are false. See, for example, "Analysis of Sampled Imaging Systems".

    If I understand you, then you are correct- the lens has become the limiting factor in your system performance.
  6. Feb 13, 2012 #5
    Thanks. I ordered the book. Time to brush up on opto-electrical engineering.
  7. Feb 13, 2012 #6



    Staff: Mentor

    Note, this is a common problem in astrophotography. There are online and downloadable calculators for matching a camera to a telescope based on resolution. The short version, though, is that it does you no good to have a camera with much higher resolution than your telescope.
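
    As a back-of-the-envelope sketch of what those calculators compute (standard plate-scale and Dawes-limit formulas; the aperture, focal length, and pixel size below are made-up example values):

```python
def image_scale_arcsec_per_px(pixel_um: float, focal_mm: float) -> float:
    """Plate scale: 206265 arcsec per radian, pixel pitch over focal length."""
    return 206.265 * pixel_um / focal_mm

def dawes_limit_arcsec(aperture_mm: float) -> float:
    """Dawes' empirical resolution limit for a telescope objective."""
    return 116.0 / aperture_mm

# Example: 5 um pixels, 1000 mm focal length, 200 mm aperture.
scale = image_scale_arcsec_per_px(5.0, 1000.0)  # ~1.03 arcsec/px
limit = dawes_limit_arcsec(200.0)               # ~0.58 arcsec
# Nyquist-style rule of thumb: want about 2 px across the limit.
print(scale, limit, scale <= limit / 2)  # False: these pixels undersample this scope
```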
  8. Feb 13, 2012 #7
    Thank you for the info. Could you pass me a link to such calculators?
    It's good to know that the issue is understood and long since resolved among professionals.
    Out of curiosity, how do you define the resolving power of astrophotography glass? In lp/mm?
  9. Feb 13, 2012 #8

    Andy Resnick

    Science Advisor
    Education Advisor

    There's no single best metric for 'resolving power', because that concept is ill-defined. One could specify the point spread function over the exit pupil, but in the context of adaptive optics, it likely makes more sense to specify the actual wavefront at the exit pupil.
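
    For what it's worth, the link between the pupil and the point spread function is a Fourier transform; here is a minimal numeric sketch (ideal circular aperture, with the grid size and pupil radius chosen arbitrarily for illustration):

```python
import numpy as np

n = 512
y, x = np.indices((n, n)) - n // 2
pupil = (np.hypot(x, y) <= 32).astype(float)  # unaberrated circular pupil

# Incoherent PSF = |FT(pupil)|^2, here the familiar Airy pattern.
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
psf /= psf.max()

# An aberrated wavefront would enter as a phase term on the pupil:
# pupil * np.exp(1j * phase), which is why specifying the wavefront
# carries more information than a single resolution number.
print(psf.shape, psf.max())
```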
  10. Feb 13, 2012 #9
    Like counting all the photons and their energies?
  11. Feb 13, 2012 #10

    Andy Resnick

    Science Advisor
    Education Advisor

    I don't understand what you mean. Typically, photon counting is not used as a wide-field imaging method (although it is used in confocal-type imaging).
  12. Feb 13, 2012 #11


    Staff Emeritus
    Science Advisor

    In a "perfect" system it would be the smallest spot the telescope or camera is capable of making based on the diameter of the objective and the wavelength of light. Attempting to decrease pixel sizes to less than about half the size of the airy disc would be a waste, as it would not get any more detail. Note that real systems will have imperfections and aberrations such as dispersion, coma, astigmatism, and others that will degrade the image even further.

  13. Feb 14, 2012 #12
    Excuse my unscientific language, I am just an amateur.
    I was thinking of a model that describes resolution in terms of photons registered versus photons sent out from a strict test pattern: how many photons travel through the lens and arrive at the proper positions in space.
    Or maybe it's total nonsense.
  14. Feb 14, 2012 #13



    Staff: Mentor

    http://www.newastro.com/downloads/index.php [Broken]

    There is some debate in these links about just how many pixels you need to cover the telescope's resolution limit, but they all agree that the resolution limit does not depend on the number of pixels. In other words: one common method for assessing resolution is being able to "split" a double star (see both stars separately). No number of additional pixels (beyond a well-matched camera/scope pairing) will change whether a certain telescope can split a certain double star.

    Also, in astrophotography there is a very serious downside to too many pixels: the pixels don't actually touch each other, so the more pixels you have, the more of the imaging surface is covered by the borders between pixels instead of by the pixels themselves. That reduces the sensitivity of the camera, in addition to the loss in sensitivity caused by the smaller pixels. This issue also exists in regular photography; it just isn't as important there.
  15. Feb 14, 2012 #14


    Science Advisor
    Homework Helper

    There is a counter-argument for more pixels, as a way of reducing the effect of noise in the sensor: spread the noise over a wider spatial frequency range and filter out the part that can't possibly be "signal" because it lies beyond the resolving power of the optics.

    But for "normal" photography I don't think that has much relevance, especially considering that many digital images are passed through lossy compression schemes (e.g. JPEG) in any case.

    FWIW, this principle is applied in other digital data acquisition systems. For example, pro audio recording and processing often uses sampling rates much higher than human hearing can resolve, and much higher than what is finally stored on CDs or DVDs, in order to minimize noise.
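
    A toy demonstration of that oversample-then-filter idea (the signal, noise level, and cutoff are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 256)           # "real" content, well below cutoff
noisy = signal + 0.5 * rng.standard_normal(n)  # broadband sensor noise

spectrum = np.fft.rfft(noisy)
cutoff = n // 64          # pretend the optics pass nothing above this bin
spectrum[cutoff:] = 0     # discard frequencies the optics cannot deliver
filtered = np.fft.irfft(spectrum, n)

# Residual error drops sharply, since most noise power lay above the cutoff.
print(np.std(noisy - signal), np.std(filtered - signal))
```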