A basic question: jinc function in coherent and incoherent systems

In summary, the conversation discusses the use of the jinc function in Fourier optics and the differences between coherent and incoherent imaging systems. It is noted that the point spread function h(x,y) must be normalized in both systems, and that the image intensity is computed differently in each: from the squared magnitude of a convolved field in the coherent case, and from a convolution of intensities in the incoherent case. Energy conservation is brought up as the physical basis for the normalization, and it is concluded that the coherent PSF may be complex- or negative-valued, while the incoherent PSF h_i must be real and positive valued everywhere.
  • #1
Accidently
I have been learning Fourier optics recently, and I have a problem with the jinc function.

In optical systems, the image is blurred with a jinc-function kernel,
[itex]h(x,y)=jinc(r)=\frac{J_1(2\pi r / \lambda)}{ 2\pi r / \lambda}[/itex]

In a coherent system, the blurred image is
[itex] g(x,y) = |h(x,y) \star f(x,y)|^2 [/itex]
where f(x,y) is the unblurred image and [itex]\star[/itex] denotes convolution.

I assume we should normalize h so that, in discrete form,
[itex] \sum_{x,y} h(x,y) = 1[/itex]

And in an incoherent system, the blurred image is
[itex] g(x,y) = |h(x,y)|^2 \star |f(x,y)|^2 [/itex]
Do we then need to normalize h differently, as
[itex] \sum_{x,y} |h(x,y)|^2 = 1[/itex] ?

If we do, the blurred image comes out darker in the coherent system; if we don't, the blurred image in the incoherent system is darker.

Which normalization is correct?
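
To make the comparison concrete, here is a minimal numpy/scipy sketch of the two models. The 3x3 kernel is an arbitrary stand-in for a sampled jinc, and both normalization options are shown side by side:

[code]
import numpy as np
from scipy.signal import convolve2d

# Arbitrary 3x3 stand-in for a sampled PSF (not a true jinc)
h = np.array([[1., 1., 1.],
              [1., 4., 1.],
              [1., 1., 1.]])

# One unit point source in the middle of the field
f = np.zeros((11, 11))
f[5, 5] = 1.0

# Option A: normalize so that sum(h) = 1
h_a = h / h.sum()
g_coh = np.abs(convolve2d(f, h_a, mode='same'))**2

# Option B: normalize so that sum(|h|^2) = 1
h_b = h / np.sqrt((np.abs(h)**2).sum())
g_incoh = convolve2d(np.abs(f)**2, np.abs(h_b)**2, mode='same')

print(g_coh.sum())    # 1/6 under option A: total intensity is lost
print(g_incoh.sum())  # 1.0 under option B: total intensity is conserved
[/code]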

thanks
 
  • #2
I'm not an optics guy, so take my comments for what they're worth.

Rather than imposing an ad hoc normalization on the jinc function, I would think that the normalization is imposed at the system level by conservation of energy: all of the light energy entering the aperture, minus losses in the optical system, must be present in the image. If you use a point source (uniform aperture illumination) and assume that losses are negligible, then you should get a normalization similar to your second expression, but with scale factors that include the light-gathering power (aperture area) of your system. After all, intensity should increase with aperture area. The expression should involve an integral, from which you can take the appropriate discrete limit.

As for the intensity of a coherent system, remember that intensity is the squared magnitude of the amplitude, which takes you back to the incoherent expression. It doesn't make sense to talk of them separately, then, since the intensity expressions are the same for both.
 
  • #3
Accidently said:
I have been learning Fourier optics recently, and I have a problem with the jinc function.<snip>

The function h(x,y), the point spread function, is normalized as [itex] \sum_{x,y} h(x,y) = 1[/itex]. I don't understand why you think one form of imaging would be 'darker' than the other.
 
  • #4
Andy Resnick said:
The function h(x,y), the point spread function, is normalized as [itex] \sum_{x,y} h(x,y) = 1[/itex]. I don't understand why you think one form of imaging would be 'darker' than the other.

Think about a simplified version of the point spread function:
[itex]h =\left(\begin{array}{ccc}
1/12 & 1/12 & 1/12 \\
1/12 & 1/3 & 1/12\\
1/12 & 1/12 & 1/12 \\
\end{array}\right)
[/itex]
which is normalized with [itex] \sum_{xy} h(x,y)=1 [/itex]

and assume we have two point sources, say located at (0,0) and (0,1), each with intensity 1. Then the intensity of the aerial image at the point (1,1), blurred by diffraction, is
[itex] (1/12+1/12)^2 = 1/36 [/itex] for a coherent system
and
[itex] (1/12)^2 + (1/12)^2 = 1/72 [/itex] for an incoherent system

So it seems that the aerial image in an incoherent system is always darker than in a coherent system, and energy conservation is not guaranteed in the incoherent system. That is why I think the point spread function (the jinc function) should be normalized differently for the incoherent system.
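
A quick numerical check of these two numbers (a minimal sketch, assuming numpy/scipy):

[code]
import numpy as np
from scipy.signal import convolve2d

h = np.array([[1/12, 1/12, 1/12],
              [1/12, 1/3,  1/12],
              [1/12, 1/12, 1/12]])  # sum(h) = 1

# Two unit point sources at (0,0) and (0,1), placed at the grid centre
f = np.zeros((5, 5))
f[2, 2] = 1.0
f[2, 3] = 1.0

coh   = np.abs(convolve2d(f, h, mode='same'))**2   # coherent image
incoh = convolve2d(f**2, h**2, mode='same')        # incoherent image

print(coh[3, 3])    # (1/12 + 1/12)^2     = 1/36
print(incoh[3, 3])  # (1/12)^2 + (1/12)^2 = 1/72
[/code]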

still puzzled...
 
  • #5
That post was very helpful, thanks. It is often surprising that incoherent imaging yields a 'better' image than coherent imaging: in your case, the amount of blur is less for incoherent imaging than for coherent imaging. Similarly, the cutoff frequency for incoherent imaging is twice that for coherent imaging.

Even so, comparing incoherent and coherent imaging is not trivial. Rather than risk being unclear, I'll simply point to the 'gold standard' explanation, in section 6.5 of the text linked below:

http://books.google.com/books?id=ow...w#v=onepage&q=psf coherent incoherent&f=false

Does this help?
 
  • #6
Andy Resnick said:
That post was very helpful, thanks. It is often surprising that incoherent imaging yields a 'better' image than coherent imaging: in your case, the amount of blur is less for incoherent imaging than for coherent imaging. Similarly, the cutoff frequency for incoherent imaging is twice that for coherent imaging.

Even so, comparing incoherent and coherent imaging is not trivial. Rather than risk being unclear, I'll simply point to the 'gold standard' explanation, in section 6.5 of the text linked below:

http://books.google.com/books?id=ow...w#v=onepage&q=psf coherent incoherent&f=false

Does this help?

Thanks. That does help.

So when I do simulations for an incoherent system, I need to normalize the point spread function as
[itex] \sum_{x,y} |h(x,y)|^2 = 1 [/itex]
which makes sense because it conserves the total energy. Is that correct?

thanks
 
  • #7
I think so; part of my confusion comes from applying the concept to a discretized system.

To summarize my understanding: if the coherent PSF is h_c(x,y) = (D/λ) jinc(Dr/λ), then h_c is normalized. The incoherent PSF h_i(x,y) = (D/λ)^2 jinc^2(Dr/λ) is also normalized.

The image intensity is I_i = |h_c * U_o|^2 for the coherent case (U_o is the object *field*) and I_i = h_i * I_o (* = convolution) for the incoherent case.

I think that makes everything self-consistent. h_c may be complex- and negative-valued, but h_i must be real and positive valued everywhere.
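
In a simulation, these normalizations can be imposed on the sampled kernels directly. A minimal sketch (the grid, the sample spacing, and the use of scipy.special.j1 are my own choices, not from the thread):

[code]
import numpy as np
from scipy.special import j1

# Radial grid; units and spacing are chosen arbitrarily here
y, x = np.mgrid[-16:17, -16:17] * 0.25
r = np.hypot(x, y)

# jinc(r) = J1(2*pi*r) / (2*pi*r), with the limiting value 1/2 at r = 0
arg = 2 * np.pi * r
h_c = np.where(r == 0, 0.5, j1(arg) / np.where(arg == 0, 1.0, arg))

h_c = h_c / h_c.sum()     # coherent PSF: sum h_c = 1
h_i = np.abs(h_c)**2
h_i = h_i / h_i.sum()     # incoherent PSF: sum h_i = 1
[/code]

Note that normalizing h_c to unit sum and normalizing h_i to unit sum are different conditions on the same underlying jinc, which is exactly the tension discussed above.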
 
  • #8
Andy Resnick said:
I think so; part of my confusion comes from applying the concept to a discretized system.

To summarize my understanding: if the coherent PSF is h_c(x,y) = (D/λ) jinc(Dr/λ), then h_c is normalized. The incoherent PSF h_i(x,y) = (D/λ)^2 jinc^2(Dr/λ) is also normalized.

The image intensity is I_i = |h_c * U_o|^2 for the coherent case (U_o is the object *field*) and I_i = h_i * I_o (* = convolution) for the incoherent case.

I think that makes everything self-consistent. h_c may be complex- and negative-valued, but h_i must be real and positive valued everywhere.

That makes sense. But it seems that the coherent system does not conserve energy... consider two point sources separated by 1 pixel versus 2 pixels: the total intensities of the two blurred images are not equal...
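
That is easy to check numerically. A minimal sketch (numpy/scipy assumed), using the 3x3 kernel from post #4 with two unit point sources at separations of 1 and 2 pixels:

[code]
import numpy as np
from scipy.signal import convolve2d

h = np.array([[1/12, 1/12, 1/12],
              [1/12, 1/3,  1/12],
              [1/12, 1/12, 1/12]])  # sum(h) = 1

def totals(sep):
    f = np.zeros((9, 9))
    f[4, 4] = 1.0
    f[4, 4 + sep] = 1.0
    coh = np.abs(convolve2d(f, h, mode='same'))**2
    incoh = convolve2d(f**2, h**2 / (h**2).sum(), mode='same')
    return coh.sum(), incoh.sum()

print(totals(1))  # approx (0.5,   2.0)
print(totals(2))  # approx (0.375, 2.0)
[/code]

The coherent total depends on the separation through the cross term 2·Σ h(x)h(x-d), so it is not conserved, while the properly normalized incoherent total stays at 2.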
 
  • #9
I'm confused as well; I ran a quick test to compare the coherent and incoherent cases, using a 3x3 blur kernel, [[0.09, 0.09, 0.09], [0.09, 0.3, 0.09], [0.09, 0.09, 0.09]], and what I assumed was its square, [[0.01, 0.01, 0.01], [0.01, 0.9, 0.01], [0.01, 0.01, 0.01]]. The convolution was done on a 10 x 10 pixel array with one central pixel set white against a black background. For the coherent case, the unblurred object white value was 255, while for the incoherent case it was 65025.

The unblurred and incoherent blurred images gave the same integrated density, 65025, while the coherent case gave an integrated density of 9700.

The only thing I can think of is that the coherent blur is really a complex-valued convolution and the phase information is lost during the computation. I say this because the coherent blur h*U conserves brightness while |h*U|^2 does not.

Curious...
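
A rough reproduction of that test (a sketch; the convolution mode and the handling of grey values are my guesses, so the totals come out close to, but not exactly equal to, the figures above):

[code]
import numpy as np
from scipy.signal import convolve2d

h    = np.array([[0.09, 0.09, 0.09],
                 [0.09, 0.30, 0.09],
                 [0.09, 0.09, 0.09]])  # the "coherent" kernel (sums to 1.02)
h_sq = np.array([[0.01, 0.01, 0.01],
                 [0.01, 0.90, 0.01],
                 [0.01, 0.01, 0.01]])  # the assumed square (note 0.3^2 = 0.09, not 0.9)

U = np.zeros((10, 10)); U[5, 5] = 255.0      # object field, white = 255
I = np.zeros((10, 10)); I[5, 5] = 255.0**2   # object intensity, 65025

coh   = np.abs(convolve2d(U, h, mode='same'))**2
incoh = convolve2d(I, h_sq, mode='same')

print(coh.sum())    # ~9949 = sum(h**2) * 65025, close to the 9700 reported
print(incoh.sum())  # 63724.5 = sum(h_sq) * 65025 = 0.98 * 65025
[/code]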
 
  • #10
Andy Resnick said:
I'm confused as well; I ran a quick test to compare the coherent and incoherent cases, using a 3x3 blur kernel, [[0.09, 0.09, 0.09], [0.09, 0.3, 0.09], [0.09, 0.09, 0.09]], and what I assumed was its square, [[0.01, 0.01, 0.01], [0.01, 0.9, 0.01], [0.01, 0.01, 0.01]]. The convolution was done on a 10 x 10 pixel array with one central pixel set white against a black background. For the coherent case, the unblurred object white value was 255, while for the incoherent case it was 65025.

The unblurred and incoherent blurred images gave the same integrated density, 65025, while the coherent case gave an integrated density of 9700.

The only thing I can think of is that the coherent blur is really a complex-valued convolution and the phase information is lost during the computation. I say this because the coherent blur h*U conserves brightness while |h*U|^2 does not.

Curious...

I also tried to repeat some calculations from the literature. It seems that the blurred images are darker in the coherent system than in the incoherent system, just as in your calculation (although I cannot guarantee that those calculations are correct).

But I don't think phase can explain this phenomenon, because the problem is still there if you consider a "mono-phase" system...
 
  • #11
Andy Resnick said:
I think so; part of my confusion comes from applying the concept to a discretized system.

To summarize my understanding: if the coherent PSF is h_c(x,y) = (D/λ) jinc(Dr/λ), then h_c is normalized. The incoherent PSF h_i(x,y) = (D/λ)^2 jinc^2(Dr/λ) is also normalized.

The image intensity is I_i = |h_c * U_o|^2 for the coherent case (U_o is the object *field*) and I_i = h_i * I_o (* = convolution) for the incoherent case.

I think that makes everything self-consistent. h_c may be complex- and negative-valued, but h_i must be real and positive valued everywhere.
Not quite. Intensity has units of power flux. Your h_i*I_o is an amplitude (yes it can be complex), and you must take its squared magnitude to get intensity.
Andy Resnick said:
I'm confused as well; I ran a quick test to compare the coherent and incoherent cases, using a 3x3 blur kernel, [[0.09, 0.09, 0.09], [0.09, 0.3, 0.09], [0.09, 0.09, 0.09]], and what I assumed was its square, [[0.01, 0.01, 0.01], [0.01, 0.9, 0.01], [0.01, 0.01, 0.01]]. The convolution was done on a 10 x 10 pixel array with one central pixel set white against a black background. For the coherent case, the unblurred object white value was 255, while for the incoherent case it was 65025.
If you square the coherent amplitude to get intensity, it matches the incoherent intensity you observed.

Andy Resnick said:
The unblurred and incoherent blurred images gave the same integrated density, 65025, while the coherent case gave an integrated density of 9700...
What is integrated density? Have you taken a spatial integral of the PSF?
 
  • #12
marcusl said:
Not quite. Intensity has units of power flux. Your h_i*I_o is an amplitude (yes it can be complex), and you must take its squared magnitude to get intensity.

Not according to how things are defined: for incoherent imaging, I_o is already |U|^2, just as h_i = |h_c|^2. I should have written I_i = |h_c|^2 * |U_o|^2 = h_i * I_o.

marcusl said:
What is integrated density? Have you taken a spatial integral of the PSF?

The integrated density is calculated two ways: one by simply adding together the grey values of all the pixels, the other by multiplying the number of pixels by the average grey value of the pixel array. It may be worth mentioning that these two values come out different (I don't know why).
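
For what it's worth, those two quantities are algebraically identical, since the mean is defined as the sum divided by the pixel count, so any difference must come from rounding in the displayed average (e.g., quantized 8-bit grey values). A quick check:

[code]
import numpy as np
img = np.random.randint(0, 256, (10, 10)).astype(float)
print(img.sum(), img.size * img.mean())  # equal up to float round-off
[/code]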
 
  • #13
Andy Resnick said:
Not according to how things are defined: for incoherent imaging, I_o is already |U|^2, just as h_i = |h_c|^2. I should have written I_i = |h_c|^2 * |U_o|^2 = h_i * I_o.
If "white value" refers to the value of the peak intensity, then shouldn't they be equal for coherent and incoherent according to the normalization you are using?
 

What is the jinc function in a coherent system?

The jinc function is defined from the Bessel function of the first kind of order one, J_1, as jinc(x) = J_1(x)/x (up to scaling conventions). In a coherent system it describes the amplitude of the diffraction pattern of a circular aperture at different distances from the optical axis.

What is the jinc function in an incoherent system?

In an incoherent system, the squared jinc function gives the point spread function, which is the distribution of light intensity in the image of a point source. It is used to calculate the blur caused by diffraction at the system's circular aperture.

How is the jinc function related to the Fourier transform?

The two-dimensional Fourier transform of the circular aperture (circ) function is a jinc function, so the jinc function is the inverse Fourier transform of the circular aperture function. This is why it appears as the amplitude point spread function of a system with a circular pupil.

What are the applications of the jinc function?

The jinc function is commonly used in optical systems, such as telescopes and microscopes, to describe diffraction patterns and point spread functions. It is also used in signal and image processing, where it plays the role of the circularly symmetric counterpart of the sinc function.

How is the jinc function calculated?

The jinc function arises as the Hankel transform (the two-dimensional Fourier transform of a circularly symmetric function) of a circular aperture. In practice it is evaluated from the Bessel function J_1, which is available as a built-in routine in most mathematical software packages, making it easily accessible for scientists and engineers.
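
For example, a minimal evaluation in Python using SciPy's Bessel routine, with the convention jinc(r) = J_1(2πr)/(2πr) used earlier in this thread (the limiting value at r = 0 is 1/2):

[code]
import numpy as np
from scipy.special import j1

def jinc(r):
    """jinc(r) = J1(2*pi*r) / (2*pi*r); equals 1/2 at r = 0."""
    r = np.asarray(r, dtype=float)
    out = np.full_like(r, 0.5)
    nz = r != 0
    out[nz] = j1(2 * np.pi * r[nz]) / (2 * np.pi * r[nz])
    return out

print(jinc(np.array([0.0, 0.5, 1.0])))
[/code]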
