What is the Intensity Distribution in Young's Experiment with Light?

SUMMARY

The intensity distribution in Young's experiment with light, using two slits, is characterized by a cos² function with a constant envelope, so the fringe intensity never falls off with height on the screen. This arises from the assumption that the intensity of each wave does not diminish with distance, even though the sources are treated as spherical. The diffraction envelope also plays a crucial role, suggesting that idealized point sources may not accurately represent physical reality. The derived intensity function is I = I_max cos²(φ/2), where φ = 2πd sin(θ)/λ, with d the slit separation, θ the observation angle, and λ the wavelength.

PREREQUISITES
  • Understanding of wave optics principles
  • Familiarity with the concept of intensity in wave phenomena
  • Knowledge of diffraction patterns and their significance
  • Basic mathematical skills for interpreting trigonometric functions
NEXT STEPS
  • Explore the implications of spherical wavefronts in optics
  • Investigate the relationship between slit width and diffraction patterns
  • Learn about the mathematical derivation of intensity distributions in multi-slit experiments
  • Utilize simulation tools like the provided Java applet to visualize intensity distributions
USEFUL FOR

Students of physics, optical engineers, and researchers interested in wave optics and the behavior of light in interference experiments.

nonequilibrium
If you disregard the diffraction pattern, the intensity distribution of the Young experiment with light (take two slits) gives an infinite total intensity, in the sense that the fringes do not decrease in height however far up the screen you go: the intensity distribution is a cos² function with a constant amplitude.

This could have two reasons:

a) It assumes the intensity of a light wave doesn't decrease with distance, even though the source is spherical; since I = P/A with P constant and A growing as the wavefront expands, I should decrease;
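To make point (a) concrete, here is a minimal sketch (my own illustration, not from the thread) of the I = P/A argument for an isotropic point source, where the area is the expanding sphere A = 4πr²:

```python
import math

def spherical_intensity(power_w, r_m):
    """Intensity of an isotropic point source: I = P / A, with A = 4*pi*r^2,
    so I falls off as the inverse square of the distance."""
    return power_w / (4 * math.pi * r_m ** 2)

# Doubling the distance should quarter the intensity (inverse-square law):
i1 = spherical_intensity(1.0, 1.0)  # 1 W source at 1 m
i2 = spherical_intensity(1.0, 2.0)  # same source at 2 m
```

This is exactly the falloff that the idealized two-slit derivation ignores when it treats the amplitude at the screen as constant.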

b) The diffraction pattern is essential, and this is a case where a mathematical idealization like "point sources" fails in a physical theory.

It could even be a combination: the light originally was a parallel bundle, in which case (cf. a) I indeed doesn't diminish with distance, and one presumes (cf. b) that a (fraction of a) parallel light bundle can be 'turned into' a spherical source, but this is inconsistent with I being distance-independent.

Any thoughts?

mr. vodka

EDIT: the distribution function I have derived in my book is (time-averaged) I = I_max cos²(φ/2) with φ = 2πd sin(θ)/λ, where d = slit separation, θ = angle you're looking at, λ = wavelength. However, I can't get any clear info on what I_max is; my book seems to be avoiding the matter and applying ambiguous logic (when going to #slits = 3, it chooses for I_max the intensity as if there were one slit, while this interpretation isn't compatible with #slits = 2). Is it hard to say? Is it connected with my above question about the nature of spherical intensity?
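The derived distribution can be sketched numerically; this is my own illustration (the parameter values d and λ are arbitrary choices, not from the thread), evaluating I = I_max cos²(φ/2) with φ = 2πd sin(θ)/λ:

```python
import math

def two_slit_intensity(theta, d, wavelength, i_max=1.0):
    """Idealized two-slit pattern: I = I_max * cos^2(phi/2),
    with phi = 2*pi*d*sin(theta)/lambda (no diffraction envelope)."""
    phi = 2 * math.pi * d * math.sin(theta) / wavelength
    return i_max * math.cos(phi / 2) ** 2

# Example parameters (arbitrary): d = 10 um slit separation, 500 nm light.
d, lam = 1e-5, 500e-9
theta_max = math.asin(lam / d)        # m = 1 bright fringe: d*sin(theta) = lambda
theta_min = math.asin(lam / (2 * d))  # first dark fringe: d*sin(theta) = lambda/2
```

Note that the pattern repeats with full height I_max at every maximum, which is precisely the "constant amplitude" issue raised above: nothing in this formula makes the fringes weaker at large θ.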
 
Now that I took a closer look, I noticed that if you take the slit width smaller than the wavelength, the single-slit diffraction pattern is completely negligible (there isn't even a first minimum), so even with the diffraction pattern you get an infinite spread of intensity peaks in this simulation. What am I overlooking?
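The "no first minimum" observation can be checked directly: in the standard single-slit envelope [sin(β)/β]² with β = πa sin(θ)/λ, if the slit width a is smaller than λ, then |β| < π for every angle, so sin(β) never vanishes and the envelope has no zero anywhere on the screen. A quick sketch (my own, with arbitrary example values a = 300 nm, λ = 500 nm):

```python
import math

def single_slit_envelope(theta, a, wavelength):
    """Fraunhofer single-slit envelope: [sin(beta)/beta]^2,
    with beta = pi*a*sin(theta)/lambda."""
    beta = math.pi * a * math.sin(theta) / wavelength
    if beta == 0:
        return 1.0
    return (math.sin(beta) / beta) ** 2

a, lam = 300e-9, 500e-9  # slit narrower than the wavelength
# Sample the envelope over the whole half-space, theta in (-pi/2, pi/2):
thetas = [i * math.pi / 200 - math.pi / 2 for i in range(201)]
min_val = min(single_slit_envelope(t, a, lam) for t in thetas)
```

Since |β| ≤ πa/λ = 0.6π < π here, the envelope stays well above zero at every angle, which is why the simulation still shows bright fringes arbitrarily far out.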

Here's a fun java applet for those who want to get some feeling of it:

http://www.physics.uq.edu.au/people/mcintyre/applets/grating/grating.html
 
