
Resolving Power

  1. Jul 20, 2005 #1
    I want a theoretical reason. When using microscopes, we can magnify the image as much as we want by combining lenses in such a way that all aberrations are removed. But magnification is not the solution, because there is something called 'resolving power': the ability of the microscope to differentiate between the two ends of a bacterium. As we magnify further, the two end points that smoothly define the boundaries of the image are smeared, and it becomes difficult to make out the two points. I think the reason is that, by Fermat's principle, the rays from both end points of the bacterium take approximately the same time to reach the focusing point, so they give approximately the same smeared images. So the solution lies in making the rays from the two end points reach the focusing point at different times.


    I found in a book that this difference in time interval for both rays should be more than one time period.

    But what puzzles me is this: since the wavelength of light is very small compared to the instrument used, we should be able to study light using geometrical optics rather than its wave character, so what exactly do they mean by a "one time period difference"?

    Last edited: Jul 20, 2005
  3. Jul 20, 2005 #2
    When you want to assemble an image using waves, you can't do it using
    waves that are longer (crest-to-trough) than the details of the image you want to assemble.

    This is a very crude analogy, but since you are using a computer it should make
    sense: if your computer monitor only has 100 dots/inch, it can't display any feature
    smaller than 1/100th of an inch.
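
    The analogy above can be made concrete with a few lines of arithmetic. The function name and dpi value below are hypothetical, just to illustrate the point:

    ```python
    # Illustration of the monitor analogy: a display's dot pitch sets the
    # smallest feature it can render, just as a wave's length sets the
    # smallest detail an image assembled from that wave can carry.

    def smallest_feature_mm(dots_per_inch: float) -> float:
        """Smallest renderable feature (one dot) in millimetres."""
        inch_in_mm = 25.4
        return inch_in_mm / dots_per_inch

    # At 100 dots/inch, one dot is 1/100th of an inch, i.e. 0.254 mm.
    print(smallest_feature_mm(100))
    ```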
  4. Jul 21, 2005 #3

    Claude Bile


    If you had an infinite numerical aperture, you could resolve an object with infinite precision. The problem is that the best numerical apertures available to us are around 1.5. This means that the resolution we are able to achieve in the far field is approximately [itex] \lambda/2 [/itex], so if we are imaging something using a 500 nm source, the finest detail we can resolve is about 250 nm.
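
    A quick sketch of the far-field limit being described, using the standard Abbe expression [itex] d = \lambda / (2 \, \mathrm{NA}) [/itex] (the NA = 1 case reproduces the [itex] \lambda/2 [/itex] figure above; the numbers are illustrative):

    ```python
    # Abbe diffraction limit: smallest resolvable separation d = lambda / (2 * NA).
    # With NA = 1, this gives the lambda/2 figure quoted above (250 nm at 500 nm);
    # an oil-immersion objective with NA ~ 1.5 pushes somewhat below that.

    def diffraction_limit(wavelength_m: float, numerical_aperture: float) -> float:
        """Smallest resolvable separation in the far field, in metres."""
        return wavelength_m / (2.0 * numerical_aperture)

    print(diffraction_limit(500e-9, 1.0))  # -> 2.5e-07, i.e. 250 nm
    print(diffraction_limit(500e-9, 1.5))  # roughly 167 nm
    ```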

    There are a few theories as to why this is so; essentially, the theory depends on what criterion you use to define an object as being resolved.

    Basically, if your numerical aperture is finite, you cannot image something with infinite precision, because you have lost some of the scattered light and hence some of the information about the object (this is commonly referred to as Abbe's theory of imaging).

    Note that these restrictions apply only to the far field. In the near field (roughly defined as distances smaller than [itex] \lambda [/itex]), resolution is limited only by the aperture of our detector and the distance from the source. Provided the signal we are trying to detect is reasonably stable with time, we can obtain images with resolutions that exceed the maximum resolution allowed in the far field. For more info, I suggest doing a Google search on SNOM (or NSOM), which stands for Scanning Near-field Optical Microscopy.
