Q: Resolving Power - Theory & Wave Character

  • Thread starter: Dr.Brain
  • Tags: Power
AI Thread Summary
The discussion focuses on the concept of resolving power in microscopy, emphasizing that magnification alone does not improve resolution due to the limitations of light waves. It highlights that the ability to distinguish between two points, such as the ends of a bacterium, is affected by the time it takes for light rays from those points to converge, suggesting a need for different time intervals for improved resolution. The conversation references the finite numerical aperture of microscopes, which limits resolution to approximately half the wavelength of light used, exemplified by a maximum resolution of 250 nm for a 500 nm light source. Additionally, it notes that in the near-field, resolution can exceed far-field limits, depending on the detector's aperture and proximity to the source. The discussion encourages further exploration of techniques like Scanning Near-field Optical Microscopy (SNOM) for enhanced imaging capabilities.
Dr.Brain
I want a theoretical reason. When using microscopes, we can magnify the image as much as we want by combining lenses so that all aberrations are removed. But magnification is not the solution, because there is something called 'resolving power': the ability of the microscope to distinguish, say, the two ends of a bacterium. As we magnify further, the two end points that smoothly define the boundaries of the image get smeared out, and it becomes difficult to tell the two points apart. I think the reason is that, as per Fermat's principle, the rays from the two end points of the bacterium take approximately the same time to reach the focal point, so they give approximately the same smeared images. So the solution lies in making the rays from the two end points reach the focal point at different times.

--------------------------------------------------------------------

I found in a book that, for the two points to be resolved, this difference in time between the two sets of rays should be more than one time period of the light.

But what puzzles me is this: since the wavelength of light is very small compared to the size of the instrument, we should be able to treat light with 'geometrical optics' rather than its 'wave character'. So what exactly do they mean by a difference of "one time period"?
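One way to make the "one time period" condition concrete (this is my reading of it, not a quote from the book): a time difference of one period corresponds to an optical path difference of one wavelength, which is precisely where the wave character enters,

$$\Delta t > T = \frac{\lambda}{c} \;\;\Longleftrightarrow\;\; \Delta L = c\,\Delta t > \lambda .$$

In other words, the rays that the lens collects from the two end points must differ in optical path by at least one wavelength before their images can be told apart.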

BJ
 
When you want to assemble an image using waves, you can't do it using waves that are longer (crest-to-trough) than the detail you want to resolve.

This is a very crude analogy, but since you are using a computer it should make sense: if your monitor has only 100 dots per inch, it can't display detail smaller than 1/100th of an inch.
 
If you had an infinite numerical aperture, you could resolve an object with infinite precision. The problem is that the best numerical apertures available to us are around 1.5. This means that the resolution we can achieve in the far field is approximately ##\lambda/2##, so if we are imaging something using a 500 nm source, the finest detail we can resolve is about 250 nm.
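Written out as a worked figure (using the standard Abbe form of the diffraction limit; the ##\lambda/2## statement above amounts to taking an effective NA of about 1):

$$d = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500~\mathrm{nm}}{2 \times 1} = 250~\mathrm{nm}.$$

With an oil-immersion objective of NA ≈ 1.5 the same expression gives roughly 167 nm, so the 250 nm quoted above is a round, conservative figure.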

There are a few ways of arriving at this limit; essentially, the exact figure depends on what criterion you use to define an object as being resolved.

Basically, if your numerical aperture is finite you cannot image something with infinite precision, because you have lost some of the scattered light and hence some of the information about the object (this is commonly referred to as Abbe's theory of imaging).

Note that these restrictions apply only to the far field. In the near field (roughly defined as distances smaller than ##\lambda##), resolution is limited only by the aperture of the detector and its distance from the source. Provided the signal we are trying to detect is reasonably stable in time, we can obtain images with resolution exceeding the maximum allowed in the far field. For more info, I suggest doing a Google search on SNOM (or NSOM), which stands for Scanning Near-field Optical Microscopy.

Claude.
 