I am reading an introduction to Fourier optics. In the text, the simplest example is a single-lens system (focal length f) with an object f(x,y) sitting on the front focal plane of the lens, while the image on the rear focal plane is the corresponding spatial Fourier transform [tex]F(\frac{x}{\lambda f}, \frac{y}{\lambda f})[/tex], where [tex]\lambda[/tex] is the wavelength. This spatial transform is quite confusing when compared with the temporal case. In the temporal case, the Fourier transform of f(t) is given as [tex]F(\omega)[/tex], which is defined in the frequency domain. Hence we cannot directly observe it in physical space; we need to put [tex]F(\omega)[/tex] into some instrument to observe the frequency spectrum. But in the spatial case, it seems that the Fourier transform is nothing but a rearrangement, mapping, and scaling from the source to the image; the image is still in physical space, because it can be directly observed on the rear focal plane with a screen or film. I wonder if my understanding is correct? Another question, on the complex function. In most cases the Fourier transform will be complex-valued, and for the lens system [tex]F(\frac{x}{\lambda f}, \frac{y}{\lambda f})[/tex] could also be complex, but I think physically the film will only record the intensity. What about the phase? What is the role of the phase in this case? Or let me ask it this way: if I keep the intensity at every point of the image unchanged but shift the phase by a constant, or randomly change the phase, how different will the image be, and in what way? Thanks.
Yes... what's happening is that what is *physically* on the screen is a scaled Fourier transform, with the scaling depending on the distance to the screen and the wavelength. (The frequency in the Fourier transform here isn't a temporal frequency... it's a spatial frequency.) The phase is exactly what you would expect... if you had a measurement device at the screen capable of precisely measuring the electric field of the light, you would find that the fields at two different points on the screen are out of phase by the amount predicted by the phase (the argument) of the complex Fourier transform. Films and screens can't measure to this precision, though, so we end up with the intensity instead, as you noted.
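To make the "what if I scramble the phase" part of the question concrete, here is a minimal numerical sketch (the toy object, array sizes, and random seed are all illustrative assumptions): keep the Fourier magnitude, which is all the film records, randomize the phase, and invert. The result bears almost no resemblance to the original object.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "object": a bright square on a dark background.
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

F = np.fft.fft2(obj)    # complex spatial spectrum
mag = np.abs(F)         # film records only this (well, its square: the intensity)

# Keep the magnitude, randomize the phase, then transform back.
random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, F.shape))
scrambled = np.fft.ifft2(mag * random_phase).real

# Same Fourier magnitudes, but the picture is destroyed:
err = np.linalg.norm(scrambled - obj) / np.linalg.norm(obj)
print(f"relative error after phase scrambling: {err:.2f}")
```

A constant phase shift, by contrast, only multiplies the field by a global factor and leaves the intensity pattern unchanged; it is the *relative* phase across the plane that encodes where things are.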
Good questions. First, the spatial transform: if you like, the Fourier-conjugate variables are *angles*: absent a screen, the far-field diffraction pattern (the Fourier transform of the aperture) is expressed in terms of angle. The scaling factors relate the image size (in distance units) to those angles when a screen is present. Second, you are correct: the Fourier transform/point spread function/optical transfer function is complex; (nonabsorptive) aberrations are modeled as changes to the phase of the wavefront. This gets into the difference between coherent and incoherent imaging, the reason why the cutoff frequency for imaging is given either by the pupil function directly or by the autocorrelation of the pupil function, the differences between the optical transfer function and the modulation transfer function, etc. An instructive example is to work out the Fourier transform for two pure sine apertures: one in amplitude (transmission) only and one in phase only. The far-field diffraction patterns are quite different.
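The two-sine-aperture exercise can be sketched numerically in one dimension (the grating frequency `f0` and modulation depth `m` below are arbitrary illustrative choices; the Fraunhofer pattern is modeled simply as the FFT of the transmission function):

```python
import numpy as np

N = 1024
x = np.arange(N)
f0 = 32    # grating frequency, in cycles per window (hypothetical choice)
m = 2.0    # phase-modulation depth (hypothetical choice)

# Pure amplitude (transmission) sine grating, t real, in [0, 1]:
amp_grating = 0.5 * (1 + np.cos(2 * np.pi * f0 * x / N))
# Pure phase sine grating, |t| = 1 everywhere:
phase_grating = np.exp(1j * m * np.cos(2 * np.pi * f0 * x / N))

def order_power(t, n):
    """Power diffracted into order n of the far-field (Fraunhofer = FFT) pattern."""
    F = np.fft.fft(t) / N
    return np.abs(F[(n * f0) % N]) ** 2

# Amplitude grating: only orders 0 and +/-1 carry power.
print([round(order_power(amp_grating, n), 4) for n in range(4)])
# Phase grating: power spreads into many orders (weights go as Bessel J_n(m)^2).
print([round(order_power(phase_grating, n), 4) for n in range(4)])
```

The amplitude grating diffracts into exactly three orders, while the phase grating, despite having perfectly uniform intensity transmission, produces a whole ladder of orders. This is a direct illustration that the film-invisible phase of the aperture completely changes the far-field pattern.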
Thanks to both of you. That clears up my doubt. And it makes sense that the "far-field diffraction pattern (the Fourier transform of the aperture) is in terms of angle", thanks Andy. I still have one question. The text either uses a single-lens system as an example or considers a field propagating in free space. In both cases, the far-field situation is considered. So does that mean Fourier optics is only valid in the far field? I am curious how to apply the Fourier transform if I place a very complicated "transparency" or medium between the object and the image!?
Fourier optics analysis is based on the assumption of a linear shift-invariant system. So you can model a very complex optical system under some conditions: Fourier optics will fail in the near field, for example, and must be extended to a vector formalism when the numerical aperture becomes large. But perhaps the most practical complication is the use of CCD detector arrays; then shift invariance is lost, and aliasing, spurious frequency responses, etc. will occur. However, some analysis is still possible.
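The linear shift-invariant assumption can be sketched as follows: imaging is modeled as multiplying the object's spectrum by a transfer function (equivalently, convolving with a point spread function), and shifting the object shifts the image identically. The circular low-pass "pupil" cutoff and array sizes here are illustrative assumptions, not from any particular text:

```python
import numpy as np

N = 128
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
H = (np.hypot(FX, FY) <= 0.1).astype(float)  # hypothetical coherent cutoff

def lsi_image(obj):
    """Coherent LSI imaging: filter the field spectrum by H, record intensity."""
    field = np.fft.ifft2(np.fft.fft2(obj) * H)
    return np.abs(field) ** 2

obj = np.zeros((N, N))
obj[60:68, 60:68] = 1.0

# Shift invariance: imaging a (circularly) shifted object gives the shifted image.
shifted_first = lsi_image(np.roll(obj, 11, axis=0))
shifted_after = np.roll(lsi_image(obj), 11, axis=0)
print(np.allclose(shifted_first, shifted_after))  # True for this periodic model
```

A sampled detector breaks exactly this property: once the image is binned into pixels, a sub-pixel shift of the object no longer produces a simple shift of the recorded data, which is where the aliasing and spurious responses mentioned above come from.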