Crazy Tosser said:
But then, if light is a wave... how come we can see everything perfectly defined (assuming perfect vision/photography), even stars that are light years away are not blurry at all, and visible as a dot at any given point, whereas waves are supposed to spread
I'm going to confine myself to what we know from wave/EM optics, since I'm not really qualified to comment on the nature of the photon.
Light from a point source* is emitted in all directions, and the total energy emitted per unit time is evenly spread out over a sphere of radius r, where r is the distance between the observer and the star.
*Yeah, it's a gigantic ball of luminous gas, but it's far enough away that we can consider it a point source.
So if we just had a detector (e.g. a CCD), with no imaging optics (lenses) in front of it to focus the light back down to an image of the star, then all we would detect is a certain flux (power per unit area) attributable to that star. (The total power of the signal would then of course depend on the area of the detector). It is this flux that decreases with the square of the distance from the source, because the same amount of energy is being spread out over a larger and larger sphere, whose surface area grows with the distance squared.
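To make the inverse-square dependence concrete, here's a minimal sketch; the luminosity, distance, and detector area are just assumed illustrative numbers, not anything from the discussion above:

```python
import math

# Inverse-square law: a luminosity L spread evenly over a sphere of radius r.
L = 3.8e26            # luminosity in watts (roughly Sun-like; an assumed value)
r = 10 * 9.46e15      # distance in meters (about 10 light years, for illustration)

flux = L / (4 * math.pi * r**2)      # power per unit area reaching the detector, W/m^2
detector_area = 1e-4                 # a 1 cm^2 detector (assumed)
power_detected = flux * detector_area
print(f"flux = {flux:.2e} W/m^2, power on detector = {power_detected:.2e} W")
```

Doubling the distance quarters the flux, since the same power is spread over a sphere with four times the surface area.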
If we add a lens, all the light from that star that is captured by the lens is focused down to a point. Geometric optics says it would be a perfect point: because the source is so far away, the spherical wavefronts are flat to a good approximation, and plane wavefronts correspond to parallel rays, which a lens focuses down to a single point.

You're RIGHT, though. Because of the wave nature of light (which geometric optics ignores), the light is not focused down to a perfect point; diffraction spreads it into a disc called the Airy disc. This is true even if nothing else (such as atmospheric turbulence) disrupts the incoming light. That's why stars in astronomical photographs appear as discs, not perfect points. Indeed, if two stars are really close together, our imaging resolution (our ability to "resolve" them, i.e. distinguish them as two separate sources) depends on the size of their Airy discs and whether they overlap.

Even under perfect imaging conditions (e.g. a space telescope, with no atmospheric turbulence), diffraction puts a fundamental limit on the resolution: the size of the Airy disc is proportional to the wavelength and inversely proportional to the diameter of your "aperture." So this is a fundamental limitation of the physics. This is called diffraction-limited imaging, and it is the best case. Unless you had an infinitely wide telescope that could capture ALL of the light from the source that is being emitted in your direction, perfect imaging would be impossible. That's one of the reasons why larger and larger telescopes are being built.
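To put numbers on that scaling, here's a quick sketch using the Rayleigh criterion (theta ≈ 1.22 λ / D); the wavelength and aperture diameter are assumed example values:

```python
import math

# Rayleigh criterion: smallest angular separation a circular aperture can resolve.
wavelength = 550e-9   # visible light, ~550 nm (assumed)
D = 2.4               # aperture diameter in meters (roughly Hubble-sized, for illustration)

theta = 1.22 * wavelength / D                 # radians; also the Airy disc's angular radius
theta_arcsec = math.degrees(theta) * 3600
print(f"diffraction-limited resolution ~ {theta_arcsec:.3f} arcsec")
# Two stars closer together than this on the sky blur into overlapping Airy discs.
```

Making D larger shrinks theta, which is the quantitative reason bigger telescopes resolve finer detail.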
You should look up diffraction-limited imaging and Airy discs for more information. If you're familiar with Fourier analysis, you might want to look this up in the context of Fourier optics and optical transfer functions. Otherwise, don't worry about it.
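If you do go down the Fourier-optics route, here's a small numerical sketch of the idea (the grid size and aperture radius are arbitrary assumed values): in the Fraunhofer picture, the focal-plane field is essentially the Fourier transform of the pupil function, so the point spread function of a circular aperture comes out as an Airy pattern, and the optical transfer function follows from the PSF.

```python
import numpy as np

# Sketch of the Fourier-optics picture (all grid choices here are arbitrary assumptions).
# In the Fraunhofer approximation, the focal-plane field is the Fourier transform of the
# pupil function, so the point spread function (PSF) of a circular aperture is an Airy pattern.

N = 1024                                          # grid points per side
x = np.linspace(-1, 1, N)                         # normalized pupil-plane coordinates
X, Y = np.meshgrid(x, x)
pupil = ((X**2 + Y**2) <= 0.1**2).astype(float)   # circular aperture, kept small relative to
                                                  # the grid so the Airy pattern is well sampled

# PSF = |FT(pupil)|^2, shifted so the peak sits at the center of the array
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.max()

# The optical transfer function (OTF) is the Fourier transform of the PSF, equivalently the
# autocorrelation of the pupil; its magnitude is the modulation transfer function (MTF).
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf) / np.abs(otf).max()

# Plotting psf (e.g. with matplotlib) shows the bright central disc and the faint rings around it.
```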
EDIT: I think that in the case of human vision, the Airy disc is too small for us to see, so we observe stars as points. You could do the math...for a given wavelength and an average pupil size.
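Here's a rough back-of-the-envelope version of that calculation, assuming ~550 nm light and a ~3 mm pupil (both just typical assumed values):

```python
import math

# Back-of-the-envelope Airy disc size for the eye (assumed typical values).
wavelength = 550e-9     # ~green light, in meters (assumption)
pupil_diameter = 3e-3   # ~3 mm pupil, in meters (assumption)

theta = 1.22 * wavelength / pupil_diameter      # angular radius of the Airy disc, radians
theta_arcmin = math.degrees(theta) * 60
print(f"Airy disc angular radius ~ {theta_arcmin:.2f} arcminutes")
# ~0.8 arcmin, comparable to the eye's ~1 arcmin resolution, so the disc itself isn't resolved.
```

The disc comes out comparable to the eye's own resolution limit, which is consistent with stars registering as unresolved points.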