Optics - Obtaining structure of an object using a laser

AI Thread Summary
The discussion revolves around using laser diffraction patterns to determine the microscopic structure of objects through Fourier transforms. Participants express confusion about converting pixel measurements from the Fourier transform into real-world distances, particularly regarding the relationship between pixel size, scattering angles, and the wavelength of light used. They highlight that the Fourier transform primarily captures amplitude, necessitating phase information for accurate reconstruction of the original object. Techniques like coherent diffraction imaging and ptychography are mentioned as methods that can achieve high resolutions. The conversation emphasizes the importance of understanding the geometry of the setup, including sensor distance and pixel dimensions, to correctly interpret the data.
The_Foetus
Hi,

Recently I did an experiment to try to discover what some objects look like microscopically, using a laser and looking at their diffraction patterns. We used the fact that the intensity profile you obtain is the Fourier transform of the object you're shining the laser through, so we can recover what the original object looks like by taking the Fourier transform of the image.

However, looking back over it, I'm getting very confused about the mathematics of converting the distances in our Fourier transform to real distances.

We used a camera and MATLAB to do the Fourier transform, and we got a conversion of about 50 pixels/mm for the camera. The difficulty comes when using the coordinates in MATLAB. For instance, for one of our objects, the Fourier transform returned a coordinate distance between peaks of 100 pixels (or 1/pixels?). We used k = (2π/λf)x, but do we then get the real distance between peaks as 100·(λf/2π), or what? Apologies if this sounds a bit confusing; I'll try to compose my sentences better if it's too unclear.
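For concreteness, a minimal sketch of the unit bookkeeping, assuming a 2f geometry where the camera sits in the back focal plane of a transform lens; the focal length, wavelength, and image size below are placeholder values rather than numbers from the experiment:

Code:
% Sketch only: placeholder numbers. Assumes the camera is in the back focal
% plane of a transform lens, so a camera position x corresponds to an
% object-plane spatial frequency x/(lambda*f).
lambda = 633e-9;     % wavelength [m] (assumed HeNe)
f      = 0.100;      % focal length of the transform lens [m] (placeholder)
p      = 20e-6;      % camera pixel pitch [m] (50 pixels/mm)
M      = 1024;       % image width in pixels (placeholder)
n      = 100;        % peak separation measured in the FFT of the camera image [bins]

% One FFT bin spans a camera-plane frequency of 1/(M*p) cycles per metre,
% and a camera-plane frequency u corresponds to an object-plane distance
% lambda*f*u, so:
d_obj = lambda * f * n / (M * p);   % object-plane separation [m], about 0.31 mm here

On this reading, the bin-to-length conversion involves the total image size M as well as the pixels-per-mm figure, which may be where the confusion about units comes from.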
 
I'm not exactly sure about your setup and data analysis, but if I understand correctly, your image is better thought of as 'reciprocal space' (or angle) rather than 'pixels/mm', and 50 pixels/mm sounds like you have ginormous pixels... Anyhow, it should be 'pixels/steradian' (or 'pixels/radian' in one dimension), and the FFT image would then have units of length (and 1/pixels).

I forget the exact conversion factor, but IIRC, say your sensor subtends 0.01 radian (the actual number will depend on the sensor size and distance from the object). Then each pixel of the FFT image will represent 2π/0.01 · λ units of length. I think; I need to verify this, but my references are at the office.
 
Believe it or not, this technique is frequently used with x-rays. It is called coherent diffraction imaging. You can get resolutions down to 10 or 20 nanometers with this.

http://en.wikipedia.org/wiki/Coherent_diffraction_imaging

A related technique is Ptychography

http://en.wikipedia.org/wiki/Ptychography

The main problem with the Fourier transform is that you only measure the amplitude of the Fourier signal. The phase is unknown.
To do the back-transformation from the diffraction signal to the real space object you need both amplitude and phase.

One way of recovering the phase is to oversample, i.e. to measure many more amplitudes than are needed for the reconstruction,
and use this redundancy to find a phase/amplitude solution that is consistent with all observed amplitudes.
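A minimal sketch of that idea in MATLAB, using the simple error-reduction flavour of iterative phase retrieval with a simulated object and an assumed known support (array size, test object, and iteration count are arbitrary):

Code:
% Error-reduction phase retrieval, illustrative sketch only.
% Simulate an oversampled diffraction measurement (amplitudes only), then
% try to recover the object from those amplitudes plus a support constraint.
N = 256;                              % oversampled array; the object fills only part of it
true_obj = zeros(N);
true_obj(100:140, 110:160) = 1;       % simple rectangular test object
support  = true_obj > 0;              % assume the object's extent (support) is known

measured_amp = abs(fft2(true_obj));   % the "measurement": amplitude only, phase discarded

obj = rand(N);                        % random starting guess
for it = 1:500
    F   = fft2(obj);
    F   = measured_amp .* exp(1i*angle(F));  % Fourier constraint: impose measured amplitude, keep current phase
    obj = real(ifft2(F));
    obj(~support) = 0;                % real-space constraint: zero outside the support
    obj(obj < 0)  = 0;                % optional positivity constraint
end
% obj should now approximate true_obj, up to the usual trivial ambiguities.

Real reconstructions usually use more robust update rules (e.g. Fienup's hybrid input-output) and a looser support estimate, but the alternating-projection structure is the same.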
 
M Quack said:
<snip>
The main problem with the Fourier transform is that you only measure the amplitude of the Fourier signal. <snip>
I think you mean 'intensity' instead of 'amplitude'.

Yes, with the increasing availability of area detectors, this technique (also called static light scattering) is gaining popularity. Unfortunately, I couldn't find the scale factor that provides the proper units: there's the scattering vector q = 4π sin(θ/2)/λ and the corresponding length l = 2π/q, but none of my image processing books provide a clear relationship between a digital image of (say) a Laue pattern and the corresponding real-space length scale obtained by a DFT.
 
Yes, the intensity of the light, which gives you the amplitude of the electric field by taking the square root...

BTW, in Bragg's law the scattering angle (the angle between the incident and exit beams) is usually denoted 2θ (θ is the Bragg angle), so q = 4π sin(θ)/λ.

To get the scattering angle 2θ you need to know the sample-to-detector distance:

tan(2θ) = (number of pixels) × (pixel size) / (sample-to-detector distance).

The "number of pixels" is the distance, in pixels, from your measurement point to the point where the incident beam hits the detector, assuming the detector is perpendicular to the incident beam.
 