Ok, so my book explains single-slit diffraction like this: for a single slit of width a, every point on the wavefront can be paired with another point a distance a/2 away. If the path difference (a/2)*sin(theta) is half a wavelength, the interference is destructive. So equivalently, the condition for this destructive interference is a*sin(theta) = lambda.

That's fine with me. However, here's the part that is a little confusing to me: "We can extend this idea to find other angles of perfect destructive interference. Suppose each wavelet is paired with another wavelet from a point a/4 away. Replacing a/2 with a/4 in the above equation, the condition for destructive interference becomes a*sin(theta) = 2*lambda. So the general condition for destructive interference is a*sin(theta) = p*lambda, p = 1, 2, 3, ..." But you cannot replace a/2 with anything that has an odd denominator, such as a/3. Could someone explain the whole logic of picking a point a/2 or a/4 away?

Also, in my lab section, we produced double-slit diffraction and traced it on paper. We observed two less noticeable diffraction minima on each side. We used the positions of these to calculate the width of our slit (a) with the equation a*sin(theta) = p*lambda. Why did these diffraction minima occur, and would their p value in the above general condition be 1?
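For what it's worth, the general condition a*sin(theta) = p*lambda is easy to turn into a quick check. Here is a minimal Python sketch that lists the angles of the first few single-slit minima; the slit width and wavelength used in the example call are illustrative values I picked, not numbers from the post:

```python
import math

def single_slit_minima(a, wavelength, p_max):
    """Angles (in degrees) of the first p_max single-slit diffraction
    minima, from the general condition a*sin(theta) = p*lambda."""
    angles = []
    for p in range(1, p_max + 1):
        s = p * wavelength / a
        if s > 1:  # no real angle satisfies the condition beyond this p
            break
        angles.append(math.degrees(math.asin(s)))
    return angles

# Illustrative values (assumed, not from the post): a 0.1 mm slit
# illuminated with 600 nm light.
print(single_slit_minima(0.1e-3, 600e-9, 3))
```

Each successive p corresponds to pairing wavelets a/2, a/4, a/6, ... apart, which is exactly the even-denominator pattern the book describes.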