Optics - Obtaining structure of an object using a laser

  • Context: Graduate
  • Thread starter: The_Foetus
  • Tags: Laser, Optics, Structure

Discussion Overview

The discussion revolves around the use of laser diffraction patterns to obtain microscopic structures of objects, focusing on the mathematical conversion of Fourier transform results into real-world distances. Participants explore the implications of their experimental setup, data analysis, and the theoretical foundations of coherent diffraction imaging.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes an experiment using laser diffraction to recover the structure of objects through Fourier transforms, expressing confusion about converting pixel measurements to real distances.
  • Another participant suggests that the image should be interpreted in terms of reciprocal space rather than pixels/mm, proposing that the units should be pixels/steradian or pixels/radian.
  • A third participant introduces the concept of coherent diffraction imaging, noting its application in achieving high resolutions and mentioning the challenge of recovering phase information from amplitude measurements in Fourier transforms.
  • Further clarification is provided regarding the distinction between intensity and amplitude in the context of Fourier transforms, with a focus on the relationship between scattering angles and pixel measurements.
  • Another participant emphasizes the need to know the sample-to-detector distance to accurately determine the scattering angle and its relation to pixel measurements.
  • One participant references a related technique, ptychography, and shares a link to additional resources on coherent diffraction imaging.

Areas of Agreement / Disagreement

Participants express varying interpretations of the mathematical relationships involved in their experiments, particularly regarding the conversion of measurements and the significance of amplitude versus intensity. There is no consensus on the correct approach to these conversions or the implications for their experimental results.

Contextual Notes

Participants note limitations in their understanding of the relationships between digital images and real-space lengths, as well as the need for specific parameters such as the sensor subtending angle and sample-to-detector distance, which remain unresolved.

The_Foetus
Hi,

Recently I did an experiment to try to discover what some objects look like microscopically, using a laser and looking at their diffraction patterns. We used the fact that the intensity profile you obtain is the Fourier transform of the object you're shining the laser through, so we can recover what the original object looks like by taking the Fourier transform of the image.

However looking back over it, I'm getting very confused about the mathematics converting the distances in our Fourier transform to real distances.

We used a camera and MATLAB to do the Fourier transform, and we got a conversion of about 50 pixels/mm for the camera. The difficulty comes when using the coordinates in MATLAB. For instance, for one of our objects, the Fourier transform returned a coordinate distance between peaks of 100 pixels (or 1/pixels?). We used k = (2π/λf)x, but do we get the real distance between peaks to be 100·(λf/2π), or what? Apologies if this sounds a bit confusing; I'll try to compose my sentences better if it's too unclear.
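A numerical sketch of the conversion being asked about, assuming a Fraunhofer setup with a lens of focal length f and the mapping k = (2π/λf)x, so that a detector-plane spatial frequency u corresponds to an object-plane distance d = λf·u. The wavelength, focal length, and image width below are hypothetical illustration values (and it is written in Python/NumPy rather than the MATLAB used in the experiment):

```python
import numpy as np

# Assumed parameters (hypothetical values for illustration):
wavelength = 633e-9                  # HeNe laser wavelength, m
focal_length = 0.10                  # lens focal length f, m
pixels_per_mm = 50.0                 # camera calibration from the experiment
pixel_pitch = 1e-3 / pixels_per_mm   # detector pixel size, m (20 um)
M = 1024                             # image width in pixels (assumed)

def fft_bin_to_object_distance(n_bins):
    """Convert an FFT peak separation in bins to an object-plane length.

    Bin n of an M-pixel row is a detector-plane spatial frequency
    u = n / (M * pixel_pitch) in cycles/m; with k = (2*pi/(lambda*f)) * x
    that frequency corresponds to an object-plane distance d = lambda*f*u.
    """
    u = n_bins / (M * pixel_pitch)        # cycles per metre at the detector
    return wavelength * focal_length * u  # object-plane length, m

# Peak separation of 100 FFT bins, as in the post:
d = fft_bin_to_object_distance(100)
print(f"object-plane distance: {d * 1e6:.2f} um")
```

The key point is that the raw bin index is meaningless until it is divided by the total sampled extent (M·pixel_pitch), which is where the confusion between "pixels" and "1/pixels" usually enters.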
 
I'm not exactly sure about your setup and data analysis, but if I understand correctly, your image is better thought of as 'reciprocal space' (or angle) rather than 'pixels/mm' - and 50 pixels/mm sounds like you have ginormous pixels... Anyhow, it should be 'pixels/steradian' (or 'pixels/radian' in one dimension), and the FFT image would then have units of length (and 1/pixels).

I forget the exact conversion factor, but IIRC, say your sensor subtends 0.01 radian (the actual number will depend on the sensor size and distance from the object). Then each pixel of the FFT image will represent (2π/0.01)·λ units of length. I think; I need to verify this, but my references are at the office.
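The angular reading above can be checked numerically. The sketch below (Python/NumPy, with assumed numbers) simulates two-slit fringes across a sensor subtending Θ = 0.01 rad and recovers the slit separation using λ/Θ per FFT bin; this suggests the conversion factor is λ/Θ rather than (2π/Θ)·λ, consistent with the poster's own caveat that the factor needed verifying:

```python
import numpy as np

# Hypothetical numbers: sensor subtends Theta = 0.01 rad, N pixels wide.
wavelength = 633e-9
Theta = 0.01                          # angular extent of the sensor, rad
N = 1024
slit_sep = 20 * wavelength / Theta    # ~1.27 mm, chosen so 20 fringes fit

theta = np.linspace(-Theta / 2, Theta / 2, N, endpoint=False)
# Two-slit far-field intensity: I(theta) ~ cos^2(pi * d * sin(theta) / lambda)
I = np.cos(np.pi * slit_sep * np.sin(theta) / wavelength) ** 2

# FFT bins of the intensity are spaced 1/Theta cycles per radian, so each
# bin corresponds to a real-space length of lambda/Theta.
spectrum = np.abs(np.fft.rfft(I - I.mean()))
peak_bin = int(np.argmax(spectrum))
recovered = peak_bin * wavelength / Theta
print(f"peak at bin {peak_bin}, recovered separation {recovered * 1e3:.3f} mm")
```

Because the slit separation was deliberately chosen as an integer multiple of λ/Θ, the fringe frequency lands exactly on an FFT bin and the separation is recovered without spectral leakage.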
 
Believe it or not, this technique is frequently used with x-rays. It is called coherent diffraction imaging. You can get resolutions down to 10 or 20 nanometers with this.

http://en.wikipedia.org/wiki/Coherent_diffraction_imaging

A related technique is Ptychography

http://en.wikipedia.org/wiki/Ptychography

The main problem with the Fourier transform is that you only measure the amplitude of the Fourier signal. The phase is unknown.
To do the back-transformation from the diffraction signal to the real space object you need both amplitude and phase.

One way of recovering the phase is to oversample, i.e. to measure many more amplitudes than are needed for the reconstruction,
and use this redundancy to find a phase/amplitude solution that is consistent with all observed amplitudes.
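A minimal sketch of the oversampling idea, using the simplest error-reduction iteration (Gerchberg-Saxton/Fienup style) with a known support; the object, array sizes, and iteration count are arbitrary illustration values, not the setup from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical object: a real, non-negative 16x16 patch zero-padded into a
# 64x64 array -- the zero padding provides the oversampling described above.
N, s = 64, 16
obj = np.zeros((N, N))
obj[:s, :s] = rng.random((s, s))
support = obj > 0                      # support constraint assumed known

# "Measurement": Fourier amplitudes only; the phases are discarded.
measured_amp = np.abs(np.fft.fft2(obj))

def fourier_error(g):
    """Mismatch between current and measured Fourier amplitudes."""
    return (np.linalg.norm(np.abs(np.fft.fft2(g)) - measured_amp)
            / np.linalg.norm(measured_amp))

# Error-reduction loop: impose the measured amplitudes in Fourier space,
# then the support and non-negativity constraints in real space.
g = rng.random((N, N))                 # random starting guess
err_start = fourier_error(g)
for _ in range(500):
    G = np.fft.fft2(g)
    G = measured_amp * np.exp(1j * np.angle(G))  # keep phase, fix amplitude
    g = np.fft.ifft2(G).real
    g[~support] = 0                    # zero outside the support
    np.clip(g, 0, None, out=g)         # enforce non-negativity
err_end = fourier_error(g)
print(f"Fourier-amplitude error: {err_start:.3f} -> {err_end:.3f}")
```

Error reduction is only the simplest such algorithm; in practice hybrid input-output and related variants are used because plain error reduction can stagnate, but the loop above shows how amplitude data plus redundancy (oversampling and support) pins down the missing phase.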
 
Likes: Andy Resnick
M Quack said:
<snip> The main problem with the Fourier transform is that you only measure the amplitude of the Fourier signal. <snip>
I think you mean 'intensity' instead of 'amplitude'.

Yes, with the increasing availability of area detectors, this technique (also called static light scattering) is gaining popularity. Unfortunately, I couldn't find the scale factor that provides the proper units- there's the scattering vector q = 4π sin(θ/2)/λ and the corresponding length l = 2π/q, but none of my image processing books provide a clear relationship between a digital image of (say) a Laue pattern and the corresponding real-space length scale obtained by a DFT.
 
Yes, the intensity of the light, which gives you the amplitude of the electric field by taking the square root...

BTW, in Bragg's law the scattering angle (the angle between the incident and exit beams) is usually denoted 2θ (θ is the Bragg angle), so q = 4π sin(θ)/λ.

To get the scattering angle 2θ you need to know the sample-to-detector distance:

tan(2θ) = (number of pixels) × (pixel size) / (sample-to-detector distance).

Here "number of pixels" is the distance, in pixels, from your measurement point to the point where the incident beam hits the detector, assuming the detector is perpendicular to the incident beam.
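Putting the geometry together, a short Python sketch of the pixel → 2θ → q → length chain; the sample-to-detector distance is an assumed illustration value, and the pixel size follows from the 50 pixels/mm calibration mentioned earlier in the thread:

```python
import numpy as np

# Hypothetical geometry for illustration:
wavelength = 633e-9        # m
pixel_size = 20e-6         # m (from the 50 pixels/mm calibration)
det_distance = 0.50        # sample-to-detector distance, m (assumed)
n_pixels = 100             # pixels from the direct-beam spot to the peak

# tan(2theta) = (n_pixels * pixel_size) / det_distance
two_theta = np.arctan(n_pixels * pixel_size / det_distance)
q = 4 * np.pi * np.sin(two_theta / 2) / wavelength  # scattering vector, 1/m
l = 2 * np.pi / q                                   # real-space length, m
print(f"2theta = {np.degrees(two_theta):.3f} deg, l = {l * 1e6:.1f} um")
```

At these small angles tan(2θ) ≈ 2θ and sin(θ) ≈ θ, so l ≈ λ·(det_distance)/(n_pixels·pixel_size); the exact trigonometric form above matters only at larger scattering angles.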
 
