# Optics - maximum lens resolution and diffraction limit

Hi guys,
I have a very simple question that arose in a photo forum in Brazil.
Someone posted a formula to calculate the maximum resolution possible for a camera lens based on its aperture: 1800/N = (maximum resolution in lp/mm),
where 1800 is a constant and N is the f-stop of the lens.
This means that any lens at f/22 would have a maximum resolution of 1800/22 ≈ 81.8 lp/mm.
Looking at this formula, I thought to myself that it can't be right, because the diffraction that limits the maximum resolution is a function of the wavelength of light and the real aperture size, and while the wavelength has a real physical value, the f-stop doesn't.
Based on this formula, a 5 mm focal length lens at f/22, which has a real aperture of 0.23 mm, would have the same maximum resolution as a 2000 mm lens at f/22, which has a real aperture diameter of 9 cm.
Can this be correct?
They claim there is a factor by which the focal length influences the diffraction, making a 0.23 mm hole have the same effect on light as a 9 cm hole.
Can you help me and point out what is right or wrong in these assumptions?
Alex
Sorry if the question is too simple.
PS - I really tried working through Wikipedia to make sense of this formula but couldn't find anything.

Andy Resnick
There's a lot of misinformation about this subject, unfortunately. The most important thing to know is that "maximum resolution" is not a well-defined (i.e. quantitative) metric of performance. Ok, now then...

There are two imaging limits that can be easily analyzed- one is the closest separation between two mutually incoherent point sources, which leads to the Rayleigh limit or Rayleigh criterion.

The other limit corresponds to the spatial frequency cutoff. "Line pairs" is a poor measure of this limit because (1) a line pair is not a periodic object, and (2) line pairs are not sinusoidally varying in space- there are multiple spatial frequencies present. However, 3- and 4-bar targets are used because of the trade-off between manufacturing ease and accuracy. Lastly, there are some real performance differences between coherent and incoherent imaging systems (just as the Rayleigh criterion varies depending on the degree of mutual coherence between the two points).

Another subtlety is that oftentimes, the resolution limits correspond to *angles* rather than separation distance (or spatial wavelength). However, the role of the f/# and numerical aperture is the same, and the spatial frequency cutoff is related to the Rayleigh criterion.

That's enough for now: do you follow? I can provide a lot more information, but I need to know more about where you are coming from.

Hi Andy,
Thanks for your response. Yes, I can follow you, and I understand that when analyzed in depth this is a really complex question. If you are asking about my background: I have a major in Biology, and during my undergraduate course I worked a little on X-ray protein crystallography, so diffraction is not such a clouded subject for me. I also have some background in physics, since my dad was a solid-state physicist, and when I was a child he taught me through play in his lab. I have Calculus I & II, but the more specific physics equations, like general relativity, are really hard for me. That's where I come from. I've been a photographer for 25 years, since living exclusively on science in Brazil isn't easy - people don't think you have a life between grants.
So I came with a very simple practical question:
By the formula above, any lens at f/32 will be limited to 56.25 lp/mm (1800/32), and that's the case both for a 2000 mm lens (62.5 mm hole) and for a 5 mm lens (0.15 mm hole). Anyone who has a simple compact camera knows that from f/5.6 to f/8 the aperture already has a huge impact on the image, but at f/8 the formula above gives a limiting resolution of 225 lp/mm, very far from any limitation due to aperture.
Really, thanks for responding, and sorry for my English. Someone told me that bad English is the universal language of science...
Best,
Alex

Andy Resnick
Alex,

I'm not exactly sure what you are asking, but consider:

Typically, imaging systems are specified by the f/#, which for a single lens is the focal length divided by the lens diameter. It's worth stating that the lens itself is the aperture in this case. For an adjustable aperture stop, you can generate a range of f/#s.

In the days of film, magnification was not really considered, because the film grain was not a limiting factor (usually). Now with digital images, things get really complicated really quickly. I'll assume that's not your intent.

In terms of "image quality", people will judge an image to be good or bad based not on the cutoff frequency, but having high contrast in the mid range. It's possible to have a low-contrast image with a very high cutoff frequency, and people will typically call that image 'worse' than an image with a lower cutoff frequency but higher contrast. In terms of information, the 'worse' image has more, but that's not how we judge 'image quality'.

http://www.schneiderkreuznach.com/knowhow/opt_quali_e.htm [Broken]
http://www.schneiderkreuznach.com/knowhow/digfoto_e.htm [Broken]

Am I helping?

Andy,
You are helping a lot. Just being reminded that terms like "lp/mm" and "image quality" don't have a precise physical definition, breaking some criteria that photographers always take for granted, is good for a more scientific approach.
I'm not interested in entering the "film vs. digital" debate; there is always too much misinformation on the subject. The two links you provided I already read a long time ago, and I consider them among the clearest texts explaining that digital sensors and film place different demands on lenses. A good film lens isn't necessarily good for digital.
My question is more practical: all experienced photographers know that a lens has its best performance at some aperture, and that if the lens is stopped down further the image gets worse. People are also aware that short focal lengths worsen the image faster when stopped down than long lenses do.
With the formula 1800/N, the results show a maximum of 642 lp/mm at f/2.8; f/4 - 450; f/5.6 - 321; f/8 - 225; f/11 - 163; f/16 - 112.5; f/22 - 81 lp/mm.
This limiting resolution will hardly be seen on film, since 160 lp/mm is the best that can be achieved on ISO 100 B&W film with an extremely fine-grain developer; most common development doesn't pass 80 lp/mm. And digital sensors, with their Bayer arrays and anti-aliasing filters, will be far from the limits that the formula (1800/N) points to.
Consider a test where you use a heavy tripod and correct the exposure time to compensate for the loss of light at the aperture. Changing only the aperture will result in a very noticeable difference in the image, even with "normal" development, and that can't be caused by the Rayleigh criterion, since its limits are far beyond what we will perceive.
Since the only variable we are changing is the lens aperture, what is the real cause of this worsening of the image?
The case is at its most extreme if we think of small compact cameras (1/2.5" sensors), because they use very short focal lengths and small real apertures.

http://www.luminous-landscape.com/tutorials/understanding-series/u-diffraction.shtml

Thanks a lot for your help,
Best,
Alex

Andy Resnick
I'm still not sure I am understanding you.

Let me try this: "best performance" for a lens often means stopping it down below full aperture, around 25% or so- again, that's not a hard number. Stopping down the lens can increase the contrast at the expense of decreasing the cutoff frequency. For slight stopping down, the trade-off works out. As you continue to stop down the lens, the lowered cutoff frequency becomes more noticeable.

I wonder if you are talking about the relationship between magnification and resolution? For example, you state that the best film resolution is 160 lp/mm (note, it's line pairs PER millimeter- it's a frequency), but if my lens system magnifies a 1000 lp/mm target, I can still image it quite clearly. The "maximum usable magnification" is often taken to be about (1000 x numerical aperture).

One other note- the decreased resolution at high f/# is not caused by scattering from the aperture stop diaphragm blades: that is 'glare' or 'glint'. The resolution loss is properly attributed to the point spread function, the size of which is related to the size of the aperture. Decrease the stop diameter and you increase the size of the point spread function (Airy disc), which leads to increased blurring and a decreased cutoff frequency.
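To put rough numbers on that (a quick sketch, assuming green light at 550 nm and using the first-zero diameter 2.44λN for the Airy disc):

```python
# Airy disc diameter at the image plane grows linearly with f-number:
# d = 2.44 * wavelength * N  (first dark ring to first dark ring).

def airy_disc_diameter_um(f_number, wavelength_um=0.55):
    """Assumed green light (0.55 um) unless told otherwise."""
    return 2.44 * wavelength_um * f_number

for n in (2.8, 8, 22):
    print(f"f/{n}: Airy disc ~ {airy_disc_diameter_um(n):.1f} um")
```

At f/22 the disc is already around 30 µm across - many times the typical grain or pixel scale - which is why stopping far down visibly blurs the image.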

Hi!

I also participate in the Brazilian photography forum where this question originally cropped up, and hope the original poster doesn't mind my breaking the subject into a few smaller questions, which may be easier to answer. Hopefully the problem becomes clearer this way. I have also included some questions that were not explicitly mentioned by Alex, but which were present in the original topic at our forum.

So...assuming a hypothetical perfect lens or mirror, and ignoring for the time being any limitations of the sensor or film onto which the image is being projected:

1) Is the 1800/N a valid approximation for diffraction-limited resolution (given in lp/mm)?

2) Assuming #1 above is correct, where does the "1800" constant come from?

3) If the formula is an approximation, what variable or variables are being "fixed" or "approximated" so that such a simple formula can be derived from more complex ones? What is the validity range for this formula?

4) Does the formula give an equally good approximation at any given focal length? For instance, would 20 mm and 2000 mm lenses have comparable (same?) hypothetical diffraction limits at the same f-stop, say f/32 (please note that, at the same f-stop, the 2000 mm lens would have a 100 times wider physical aperture than the 20 mm one)? How about a several-metres-wide visible-light mirror telescope at f/32?

5) How about lenses of same focal length (say 50mm) but designed for different systems (say 35mm and 645 medium-format). Ignoring the obvious difference in the enlargement factor for both systems, which may scale down the net effect of diffraction for the larger format, would both lenses still have the same hypothetical diffraction limit (in lp/mm) at the same f-stop?

Of course in the real world, at least in what concerns everyday photography, diffraction is just one factor (and rarely the most important one) affecting the resolution of a system. I understand lens imperfections, the anti-alias filter, the sensor pixels, noise, algorithms in the camera, and maybe other factors in the optical and imaging chain are usually more important than diffraction in determining the useful resolution of a camera and lens system. Also, "lp/mm" may be an old-fashioned way to describe resolution - MTF curves give us much more information - but I think it may still be valid as a simple figure of merit.

Marcos

mgb_phys
Homework Helper
1) Is the 1800/N a valid approximation for diffraction-limited resolution (given in lp/mm)?
2) Assuming #1 above is correct, where does the "1800" constant come from?
In the diffraction limit of a lens, the angular separation between two resolvable points
is 1.22 × wavelength / diameter.
To image an object at infinity, the lens sits one focal length from the image plane, so by similar triangles the separation between the points on the film is
1.22 × wavelength × focal length / diameter,
and since focal length / diameter is N, we have
smallest feature on film = 1.22 × wavelength × N

Assuming the wavelength is in nm, there are
1,000,000 / (1.22 × wavelength × N) lines/mm; this gives 1800/N for a wavelength of 455 nm.

This is roughly the photographic 'B' band, so about the shortest wavelength, and hence the maximum resolution possible with photographic film.
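The arithmetic above is easy to check; a minimal sketch in plain Python (using the 455 nm wavelength quoted above):

```python
# The 1800/N rule of thumb falls out of the Rayleigh criterion:
# smallest feature on film = 1.22 * wavelength * N, so with the
# wavelength in nm there are 1e6 / (1.22 * wavelength * N) lines/mm.

def cutoff_lp_mm(f_number, wavelength_nm=455):
    return 1e6 / (1.22 * wavelength_nm * f_number)

# The "constant" is 1e6 / (1.22 * 455), i.e. roughly 1800:
print(round(1e6 / (1.22 * 455)))   # -> 1801

for n in (8, 22, 32):
    print(f"f/{n}: {cutoff_lp_mm(n):.1f} lp/mm")
```

This reproduces the figures discussed in the thread: about 225 lp/mm at f/8, 82 lp/mm at f/22 and 56 lp/mm at f/32.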

3) If the formula is an approximation, what variable or variables are being "fixed" or "approximated" so that such a simple formula can be derived from more complex ones? What is the validity range for this formula?
As Andy said, resolution in line pairs is a manufacturer's unit - it doesn't really apply to the real world.
Its main use is telling you that the company selling you an f/16 inspection system is probably lying.

4) Does the formula give an equally good approximation at any given focal length? For instance, would 20 mm and 2000 mm lenses have comparable (same?) hypothetical diffraction limits at the same f-stop, say f/32 (please note that, at the same f-stop, the 2000 mm lens would have a 100 times wider physical aperture than the 20 mm one)? How about a several-metres-wide visible-light mirror telescope at f/32?
If they are both diffraction limited then yes.
In practice other optical aberrations increase quickly with aperture. It's very easy to make a 2 mm diameter phone-camera lens diffraction limited, more difficult for an SLR lens, and very, very difficult for a 10 m telescope.

5) How about lenses of same focal length (say 50mm) but designed for different systems (say 35mm and 645 medium-format). Ignoring the obvious difference in the enlargement factor for both systems, which may scale down the net effect of diffraction for the larger format, would both lenses still have the same hypothetical diffraction limit (in lp/mm) at the same f-stop?
Again, if both are diffraction limited, there is no difference.
In practice larger-format lenses are built for more expensive cameras, and to preserve image quality against the other aberrations (which get much worse with larger image size) you tend to make them slower (i.e. larger N) - so a 50mm f/1.2 for a 35mm camera will be similar to a 50mm f/2.8 for a 6x6 Hasselblad and a 50mm f/5.6 for a plate camera.

Thanks Andy and MGB,
Where can I find a diffraction-limited lens in the real world? Are there many of them out there? Are some Leica or Rodenstock lenses that claim to be diffraction limited really so?
Andy, I am not considering magnification, just the effect of the aperture ring on real-world lenses.
I am just trying to figure out why stopping down the aperture worsens the image. As you say, most lenses gain in "image quality" when you stop them down two or three stops, but after that point they start to lose. Is it possible to calculate this "worsening" for real-world lenses?
Thanks,
Alex

mgb_phys
Homework Helper
Where can I find a diffraction-limited lens in the real world?
Most modern lenses will be diffraction limited within a couple of stops of minimum aperture, but all this means is that the diffraction gets worse and at small apertures is big enough to outweigh all the other aberrations. Very few lenses are diffraction limited wide open!

Ironically most cell phone camera lenses are diffraction limited simply because their aperture and image size are so small.

As you say, most lenses gain in "image quality" when you stop them down two or three stops, but after that point they start to lose. Is it possible to calculate this "worsening" for real-world lenses?
Yes, but you would need the optical design of the lens and a ray tracing package like Zemax or OSLO

MGB,
Thanks for pointing me to the OSLO package. I downloaded the OSLO-EDU version to play a little with the subject on some very well-known lens designs (Elmarits, Sonnars).
For some modern lenses whose designs are not well known, will the analysis only be possible using the designer's actual prescription, or can it be done from lens schematics?
Really, really, thanks,
Best,
Alex

Andy Resnick
Thanks Andy and MGB,
Where can I find a diffraction-limited lens in the real world? Are there many of them out there? Are some Leica or Rodenstock lenses that claim to be diffraction limited really so?
Andy, I am not considering magnification, just the effect of the aperture ring on real-world lenses.
I am just trying to figure out why stopping down the aperture worsens the image. As you say, most lenses gain in "image quality" when you stop them down two or three stops, but after that point they start to lose. Is it possible to calculate this "worsening" for real-world lenses?
Thanks,
Alex

As MGB points out, any lens becomes diffraction-limited as the aperture is reduced: camera obscuras are diffraction-limited. Modern lens manufacturers can produce very fast lenses that are extremely close to the diffraction limit- high end oil immersion microscope objectives, for example, have very low aberrations.

I need to explicitly state (again) that "image quality" and "maximum resolution" have nothing to do with each other.

mgb_phys
Homework Helper
As you said you can get details for classic lenses. I doubt you will find the optical design of the latest Nikon/Canon zoom lens, unless it's in the patent - but Japanese patents are fairly hard to read.

As Andy said, resolution is only really the deciding factor for something like a microscope or a wafer printer.
For a camera lens, chromatic aberration is probably the most important, then geometric distortion and coma.
Interestingly, another significant design feature of a camera lens is how an out-of-focus point looks, called 'bokeh'. It's difficult to quantify, but subjectively it makes a great lens - especially in portraits.

Thanks Andy and MBG, I think it's all much clearer now!
As Andy said, resolution and line pairs are a manufacturer's unit - it doesn't really apply to the real world. Its main use is telling you that the company selling you an f/16 inspection system is probably lying.
Anyway, at least I think the formula tells us that we can't get past that value(*), so it's probably not much use to stop a lens down below the point where the diffraction limit predicted by the formula meets the pixel pitch of the sensor(**), right?

(*): ok, I've done a bit of web searching, and found out that the Rayleigh criterion is somewhat arbitrary, and that modern equipment may be a bit better than the human eye at distinguishing overlapping Airy discs. Advanced algorithms and techniques are used in astrophysics and in chemistry to try to resolve information past the Rayleigh limit.

(**): Bayer array sensors used in digital photography can make it a bit difficult to characterise "pixel pitch". Some references say we should use the actual pixel pitch/2, but I think the current demosaic algorithms are so complex, and some seem so good at exploiting the reasonably high colour-channel correlation found in everyday photography, that "pixel pitch/2" may be an underestimate of the actual spatial resolution achievable by these sensors.

One last question from me, if I may. Going back to this passage in particular:
4) Does the formula give an equally good approximation at any given focal length? For instance, would 20 mm and 2000 mm lenses have comparable (same?) hypothetical diffraction limits at the same f-stop, say f/32 (please note that, at the same f-stop, the 2000 mm lens would have a 100 times wider physical aperture than the 20 mm one)?
If they are both diffraction limited then yes.
In each case above the diffraction was generated by approximately circular openings of very different sizes, by lenses of very different focal lengths. How come the Airy disc (and therefore the diffraction limit) at the image plane is the same?

mgb_phys
Homework Helper
Anyway, at least I think the formula tells us that we can't get past that value(*), so it's probably not much use to stop a lens down below the point where the diffraction limit predicted by the formula meets the pixel pitch of the sensor(**), right?
Yes, except it's much more complicated for a digital SLR.
In a CCD there is cross-talk between pixels, so if the beam arrives at the sensor at too steep an angle it will spread out into neighboring pixels under the surface. On a CMOS detector there is a tiny lens over each pixel, which brings different problems.
Then (as you say) there is a Bayer filter, so each pixel is really part of a group of four. Except it's not as simple as that, because the reconstruction algorithm understands the spatial scale of the image and averages across adjacent pixels to fill in the color.
Then there is an anti-aliasing filter to blur the image slightly, so that a small dot doesn't land on a single Bayer filter color - otherwise all your diffraction-limited points would be R, G, or B depending on where they hit. This also stops the moiré effect of stripes that are at the sensor pitch.
In high-end cameras the lens also tells the camera about its aberrations so the computer can correct the image.
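As a back-of-envelope illustration of the point where the diffraction limit "meets" the pixel pitch (a sketch: the pitches are made-up example values, not any particular camera, and I take two pixels per line pair for Nyquist):

```python
# Solve 2.44 * wavelength * N = 2 * pitch for N: beyond this f-stop the
# Airy disc spans more than one line pair's worth of pixels.

def f_stop_where_diffraction_bites(pitch_um, wavelength_um=0.55):
    return 2 * pitch_um / (2.44 * wavelength_um)

for pitch in (2.0, 4.3, 6.4):   # illustrative pixel pitches, in microns
    print(f"{pitch} um pitch -> ~f/{f_stop_where_diffraction_bites(pitch):.1f}")
```

Note how small-pitch compact sensors run into diffraction almost wide open, which matches the earlier remark about phone cameras.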

One victim of the lp/mm figure is Olympus (IMHO the best optical engineering maker): they chose a sensor for their digital SLRs that is only half the size of 35mm film. But because optical aberrations (astigmatism, coma, spherical, etc.) go as the 2nd, 3rd, 4th power of image size, they can make lenses with much lower aberrations - however, these smaller lenses hit the diffraction limit sooner. Since the pixel size is also smaller this doesn't matter - except in a list of specs on a website.
They also do little touches like making the output of the lens telecentric - i.e. the light hits the sensor at right angles, so you don't have any of the adverse interactions between a fast lens and the micro-lenses on the chip. They also used to refuse to use sensors with microlenses, although that's changed in the newer models to get video.

I've done a bit of web searching, and found out that the Rayleigh criterion is somewhat arbitrary, and that modern equipment may be a bit better than the human eye at distinguishing overlapping Airy discs. Advanced algorithms and techniques are used in astrophysics and in chemistry to try to resolve information past the Rayleigh limit.
I once had a research proposal to do a kind of speckle-interferometry imaging turned down by a technical committee who informed me that everyone should know a telescope contains no information below the Rayleigh limit - idiots!
If you know what you are looking for, e.g. a binary star, then you can use information from a very blurred image at much less than the Rayleigh limit to tell you the binary separation.

The approach is to start with the picture you expect, calculate the blurred image from the theoretical lens design, compare it to the image from the camera and see how close they are - then change your expected picture slightly - and repeat.
(PS: it still doesn't do what you see on CSI!)
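A toy 1-D version of that loop (everything here is invented for illustration: a Gaussian stands in for the real PSF, and a brute-force search over candidate separations stands in for the fitting):

```python
import math

def psf(x, sigma=4.0):
    # Stand-in point spread function - a Gaussian much wider than the
    # separation we want to recover, so the raw blob looks unresolved.
    return math.exp(-x * x / (2 * sigma * sigma))

def blurred_binary(separation, n=81):
    # Forward model: two equal point sources blurred by the PSF.
    c = n // 2
    return [psf(x - (c - separation / 2)) + psf(x - (c + separation / 2))
            for x in range(n)]

observed = blurred_binary(3.0)   # the "camera" image; separation unknown

# Compare the observation against rendered candidates and keep the best fit.
best_err, best_sep = min(
    (sum((o - m) ** 2 for o, m in zip(observed, blurred_binary(s))), s)
    for s in [i * 0.5 for i in range(13)])
print("recovered separation:", best_sep)   # -> 3.0, well below the PSF width
```

The recovered separation (3 units) is well inside the PSF width (sigma = 4), which is the sense in which model fitting gets past the Rayleigh limit - provided you already know the object is a binary.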

mgb_phys
Homework Helper
In each case above the diffraction was generated by approximately circular openings of very different sizes, by lenses of very different focal lengths. How come the Airy disc (and therefore the diffraction limit) at the image plane is the same?
Because the focal length is also bigger.
So a larger aperture gives a smaller diffraction "angle" resolution - but over the greater distance (longer focal length) to the image plane, the spot has grown to the same "linear" size.

Another way to look at it - the larger lens has a smaller diffraction spot, but you are looking at it with greater magnification.
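Putting numbers on the 20 mm vs 2000 mm case (a sketch, assuming 550 nm light and f/32 for both):

```python
# Same f-stop, very different apertures: the diffraction *angle* differs
# by 100x, but the *linear* Airy spot at the image plane is identical.
WAVELENGTH_MM = 550e-6   # assumed green light, 550 nm, in millimetres

spot_mm = {}
for focal_mm in (20, 2000):
    aperture_mm = focal_mm / 32                      # f/32 in both cases
    theta_rad = 1.22 * WAVELENGTH_MM / aperture_mm   # Rayleigh angle
    spot_mm[focal_mm] = theta_rad * focal_mm         # angle x focal length
    print(f"{focal_mm} mm lens: aperture {aperture_mm:.2f} mm, "
          f"spot {spot_mm[focal_mm] * 1000:.1f} um")
# Both spots come out to 1.22 * wavelength * N, about 21.5 um.
```

The aperture cancels out of the spot size: 1.22λ(f/D) × D/f-free, leaving only 1.22λN, which is why the f-stop alone sets the diffraction limit at the image plane.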

One victim of the lp/mm figure is Olympus (IMHO the best optical engineering maker): they chose a sensor for their digital SLRs that is only half the size of 35mm film. But because optical aberrations (astigmatism, coma, spherical, etc.) go as the 2nd, 3rd, 4th power of image size, they can make lenses with much lower aberrations - however, these smaller lenses hit the diffraction limit sooner. Since the pixel size is also smaller this doesn't matter - except in a list of specs on a website. They also do little touches like making the output of the lens telecentric - i.e. the light hits the sensor at right angles, so you don't have any of the adverse interactions between a fast lens and the micro-lenses on the chip. They also used to refuse to use sensors with microlenses, although that's changed in the newer models to get video.
Quite a few interesting points! I was aware 4/3 lenses were near telecentric, and that this is a good thing for digital sensors for the reason you stated, but I had no idea optical aberrations went down for smaller formats. I had a vague understanding that optical aberrations would scale up and down proportionately to the format the lenses were designed for. Also, I didn't know the sensors used in Olympus 4/3 cameras didn't use to have microlenses.
however, these smaller lenses hit the diffraction limit sooner. Since the pixel size is also smaller this doesn't matter - except in a list of specs on a website.
At the wide aperture end, however, it may be more difficult for 4/3 lenses to achieve the same level of background blurring as in larger formats, since much wider apertures would be needed!

I have a micro-4/3 camera (Panasonic GF1), and I understand the near-telecentric feature of the traditional 4/3 lenses has been lost in lenses designed for the new format. Distortion, vignetting and chromatic aberration are "corrected" in software for m-4/3 lenses with surprisingly usable results, but I know it's not the "real thing" one would get with a properly corrected lens.

The approach is to start with the picture you expect, calculate the blurred image from the theoretical lens design, compare it to the image from the camera and see how close they are - then change your expected picture slightly - and repeat.
Great! Very interesting!

In each case above the diffraction was generated by approximately circular openings of very different sizes, by lenses of very different focal lengths. How come the Airy disc (and therefore the diffraction limit) at the image plane is the same?
Because the focal length is also bigger.
So a larger aperture gives a smaller diffraction "angle" resolution - but over the greater distance (longer focal length) to the image plane, the spot has grown to the same "linear" size.

Another way to look at it - the larger lens has a smaller diffraction spot, but you are looking at it with greater magnification.
Makes perfect sense to me!

I once had a research proposal to do a kind of speckle-interferometry imaging turned down by a technical committee who informed me that everyone should know a telescope contains no information below the Rayleigh limit - idiots!
If you know what you are looking for, e.g. a binary star, then you can use information from a very blurred image at much less than the Rayleigh limit to tell you the binary separation.

I hope you don't abandon the idea; the possibilities for this kind of research would be big. Think: if Einstein had depended on a "technical committee" in Germany, relativity theory would never have come out.
I am really enjoying playing with OSLO-EDU. Unfortunately it doesn't have the features of the full version for working with wavelength and is restricted to 9 elements; since I will never make professional use of the package, buying the full version would be indulging my curiosity too much.
How close can the theoretical ray-trace model be to a real lens?
Thanks,
Alex

mgb_phys
Homework Helper
I hope you don't abandon the idea; the possibilities for this kind of research would be big.
That was a long time ago; we took the first images of the surface of another star, and the technique is now being used on a lot of large telescopes.

I am really enjoying playing with OSLO-EDU
There is also a book from SPIE by Bruce somebody that teaches optical design using OSLO.
The standard desktop package in industry is ZEMAX, which has a much nicer GUI than OSLO.
They do an edu version, but it is pretty tightly controlled, restricted to institutes.
Then the big gun is a system called CODE V; we used to use it to model microwave stuff because it can do fields as well as ray tracing.

You can essentially perfectly model all the geometric effects of a real lens, although modelling the effects in the detector is trickier. This is how digital cameras can fix distortions in software.
Higher-end packages will even do tolerancing, where they give you the effect on image quality of different grades of lens surface finish and small errors in positioning.
For really precise stuff like wafer mask printers or astronomy cameras, the lens maker will give you the exact details for the particular melt batch of glass your lenses are made from - so you can fine-tune the design.