If you disregard the diffraction pattern, the intensity distribution of Young's experiment with light (take two slits) gives an infinite total intensity, in the sense that it doesn't decrease however high you go: the distribution is a cos² function with constant amplitude. This could have two reasons:

a) It assumes the intensity of a light wave doesn't decrease with distance, even though each slit acts as a spherical source; since I = P/A and P is constant, I should decrease.

b) The diffraction pattern is essential, and this is a case where a mathematical idealization like "point sources" fails in a physical theory.

It could even be a combination of the two: the light was originally a parallel beam, in which case (cf. a) I indeed doesn't diminish with distance, and one presumes (cf. b) that a (fraction of a) parallel beam can be "turned into" a spherical source, but that is incompatible with I having been distance-independent.

Any thoughts?

mr. vodka

EDIT: the distribution function derived in my book is (time-averaged) I = I_max cos²(φ/2), with φ = 2π·d·sin(θ)/λ, where d = slit separation, θ = viewing angle, λ = wavelength. However, I can't find any clear information on what I_max is; my book seems to avoid the matter and applies ambiguous logic (when going to three slits, it takes for I_max the intensity as if there were one slit, while that interpretation isn't compatible with the two-slit case). Is it hard to say? Is it connected with my question above about the nature of spherical intensity?
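To make point (b) concrete, here is a minimal numerical sketch (the parameter values `lam`, `d`, and the slit width `a` are hypothetical, not from the post): the idealized two-slit pattern I_max·cos²(φ/2) keeps reaching its full maximum at arbitrarily large angles, while multiplying it by the single-slit diffraction envelope [sin(β/2)/(β/2)]², with β = 2π·a·sin(θ)/λ, makes the fringes die off.

```python
import numpy as np

# Hypothetical parameters (not from the original post):
lam = 500e-9   # wavelength (m)
d = 10e-6      # slit separation (m)
a = 2e-6       # slit width (m), only needed for the diffraction envelope

theta = np.linspace(-0.3, 0.3, 2001)  # viewing angle (rad)

phi = 2 * np.pi * d * np.sin(theta) / lam   # two-slit phase difference
beta = 2 * np.pi * a * np.sin(theta) / lam  # single-slit phase spread

# Idealized pattern (point-like slits): constant-amplitude cos^2 fringes,
# normalized so I_max = 1.
I_ideal = np.cos(phi / 2) ** 2

# Finite-width slits: modulate by the single-slit envelope.
# np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(beta/(2*pi)) = sin(beta/2)/(beta/2).
envelope = np.sinc(beta / (2 * np.pi)) ** 2
I_real = I_ideal * envelope

# I_ideal still hits 1 at fringe maxima near the edge of the range;
# I_real is suppressed there by the envelope.
```

Comparing `I_ideal` and `I_real` at large θ shows the effect numerically: the idealized fringe maxima never decay, which is exactly the "infinite intensity" worry, while the envelope from the finite slit width restores the fall-off.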