Does it have an aura, or is the aura just a refraction defect of air?
It depends upon your vision. I see 'auras' around any light source because my eyesight is a bit blurry. Assuming that you don't have that problem, the filament would be a clearly defined source of light. (Or the bulb as a whole, if it's the frosted type.)
But then, if light is a wave... how come we can see everything perfectly defined (assuming perfect vision/photography)? Even stars that are light years away aren't blurry at all, and are visible as dots, whereas waves are supposed to spread.
Light is both particles and waves (well, actually neither, but it's an easy way to think of it), and subject to the inverse square rule for propagation. What you see as a star is only a tiny fraction of the light that it actually produced.
I'm really not at all well-versed in optics, though. Someone like Light Arrow or Space Tiger will be much more helpful to you.
oh well :(
They do spread. Consider this: the sun outputs about 4E26 W of power, so if it didn't spread out you wouldn't even survive 1 ms exposed to its light.
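That spreading is easy to check with a quick back-of-the-envelope sketch (assuming the ~4E26 W figure above and an Earth-sun distance of about 1.5E11 m; the numbers are rounded, so the result comes out a bit above the measured solar constant of ~1361 W/m^2):

```python
import math

# Assumed round numbers: solar power output (quoted above) and Earth-sun distance
L_sun = 4e26        # W, total power output of the sun
r_earth = 1.5e11    # m, mean Earth-sun distance

# Inverse-square law: the total power is spread over a sphere of radius r,
# so the flux (power per unit area) falls off as 1/r^2
flux = L_sun / (4 * math.pi * r_earth**2)  # W per square metre

print(f"Flux at Earth: {flux:.0f} W/m^2")
```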
The wavelengths are very small, so it looks like particles to your eyes, i.e. it's photons hitting your eyes and not waves.
The light spreads, but you can think of it as photons spreading. So the two pictures agree: there are just fewer and fewer photons per unit area the further away you are, since they spread out. That's why lights that are far away look dim.
I'm gonna confine myself to what we know from wave/EM optics, since I'm not really qualified to comment on the nature of the photon.
Light from a point source* is emitted in all directions, and the total energy emitted per unit time is evenly spread out over a sphere of radius r, where r is the distance between the observer and the star.
*Yeah, it's a gigantic ball of luminous gas, but it's far enough away that we can consider it a point source.
So if we just had a detector (e.g. a CCD), with no imaging optics (lenses) in front of it to focus the light back down to an image of the star, then all we would detect is a certain flux (power per unit area) attributable to that star. (The total power of the signal would then of course depend on the area of the detector). It is this flux that decreases with the square of the distance from the source, because the same amount of energy is being spread out over a larger and larger sphere, whose surface area grows with the distance squared.
If we add a lens, all the light from that star that is captured by the lens is focused down to a point. Geometric optics says it would be a perfect point: the spherical wavefronts are actually flat to a good approximation due to the distance of the source, plane wavefronts correspond to parallel rays, and parallel rays are focused down to a point by a lens.

You're RIGHT, though. Because of the wave nature of light (which geometric optics ignores), it's not focused down to a perfect point. Diffraction makes the light spread into a disc called the Airy disc. This is true even if nothing else (such as atmospheric turbulence) disrupts the incoming light. That's why if you look at images of stars in astronomical photographs, they appear as discs, not perfect points. Indeed, if two stars are really close together, then our imaging resolution (our ability to "resolve" them, i.e. distinguish them as two separate sources) depends on the size of their Airy discs and whether they overlap.

Even under perfect imaging conditions (e.g. a space telescope: no atmospheric turbulence etc.), diffraction puts a fundamental limit on the resolution: the size of the Airy disc is proportional to the wavelength and inversely proportional to the diameter of your aperture. So this is a fundamental limitation of the physics. It's called diffraction-limited imaging, and it is the best case. Unless you have an infinitely wide telescope and can capture ALL of the light from the source that is being emitted in your direction, perfect imaging is impossible. That's one of the reasons why larger and larger telescopes are being built.
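To put rough numbers on that wavelength/aperture scaling, here's a sketch of the Rayleigh criterion (angular radius of the Airy disc ≈ 1.22 λ/D; the wavelength and aperture diameters below are just assumed examples):

```python
import math

def airy_radius_rad(wavelength_m, aperture_m):
    """Angular radius of the Airy disc (Rayleigh criterion): theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

lam = 550e-9  # m, mid-visible green light (assumed)
# Assumed example apertures: a small amateur scope, a Hubble-sized mirror,
# and a large ground-based telescope
for D in (0.1, 2.4, 10.0):
    theta = airy_radius_rad(lam, D)           # radians
    arcsec = math.degrees(theta) * 3600       # convert to arcseconds
    print(f"D = {D:5.1f} m -> Airy radius = {arcsec:.3f} arcsec")
```

Note how the disc shrinks as the aperture grows: that's the sense in which a bigger telescope gets you closer to a "perfect point."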
You should look up diffraction-limited imaging and Airy discs for more information. If you're familiar with Fourier analysis, you might want to look this up in the context of Fourier optics and optical transfer functions. Otherwise, don't worry about it.
EDIT: I think that in the case of human vision, the Airy disc is too small for us to see, so we observe stars as points. You could do the math...for a given wavelength and an average pupil size.
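Here's that math, sketched with assumed round numbers (550 nm light, a 5 mm pupil, and a ~17 mm focal length for the eye). The disc comes out around a couple of microns on the retina, i.e. on the order of the spacing between cones, which is consistent with stars looking like points:

```python
import math

lam = 550e-9      # m, mid-visible wavelength (assumed)
pupil = 5e-3      # m, typical pupil diameter (assumed)
f_eye = 17e-3     # m, rough focal length of the human eye (assumed)

theta = 1.22 * lam / pupil            # angular radius of the Airy disc (radians)
arcmin = math.degrees(theta) * 60     # in arcminutes
disc_radius = theta * f_eye           # linear radius of the disc on the retina

print(f"Airy disc: {arcmin:.2f} arcmin, ~{disc_radius * 1e6:.1f} um on the retina")
```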
Things DO seem blurry if you magnify them enough. I don't know if you have ever used a really good optical microscope, but if you try looking at something fairly small, say a line 1-2 um across, you will notice that the line is "blurry" at the edges even if you have perfect vision. The most fundamental reason for this is that what you are looking at is about the same size as the wavelength of the light hitting your eyes (another reason can of course be problems with the optics).
The way around this is of course to decrease the wavelength (which is what they do in the optical lithography used to make microelectronic circuits), but then you can't see it directly since it will be outside the visible spectrum.
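For a rough feel for the numbers, here's the Abbe diffraction limit d = λ/(2·NA) sketched out (the wavelengths and numerical apertures are just assumed examples, not from the posts above):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable feature size, d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Assumed example: green light with a high-end oil-immersion objective (NA ~ 1.4)
print(abbe_limit_nm(550, 1.4))   # roughly 200 nm: smaller features start to blur

# Shorter (deep-UV) wavelengths, as used in lithography, resolve finer features
print(abbe_limit_nm(193, 0.9))
```

So a 1-2 um line is comfortably resolvable, but its edges blur over a couple of hundred nanometres, which is exactly the fuzziness described above.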
You are forgetting what lenses do. A useful rule of thumb is that the optical field at the back focal plane of a lens is the Fourier transform of the field at the front pupil plane (and vice-versa). Thus, the light impinging on the Earth from a star is indeed nearly a plane wave, so a lens will focus the light to a point (more accurately, an Airy function, but that's too advanced right now).
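You can actually see this Fourier-transform picture numerically. Here's a 1-D toy sketch (the array sizes are arbitrary assumptions): a plane wave truncated by a finite aperture transforms into a narrow central peak with side lobes, the 1-D analogue of the Airy pattern:

```python
import numpy as np

N = 1024          # samples across the focal plane (arbitrary)
aperture = 64     # samples across the lens aperture (arbitrary)

# A plane wave arriving head-on, truncated by a finite aperture:
# uniform field inside the pupil, zero outside
field = np.zeros(N, dtype=complex)
field[(N - aperture) // 2:(N + aperture) // 2] = 1.0

# In the lens/Fraunhofer approximation, the focal-plane field is the
# Fourier transform of the pupil field
focal = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(field)))
intensity = np.abs(focal)**2

# The result is not a perfect point but a narrow peak with weak side lobes
# (a sinc^2 profile, the 1-D analogue of the Airy disc)
peak = np.argmax(intensity)
print(peak == N // 2)  # the energy is concentrated at the centre of the focal plane
```

Make the aperture wider and the peak narrows, which is the diffraction-limited-imaging story again.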
The optical field entering your eye is the Fourier transform of the image projected onto the retina (approximately).
So this purely depends on the focus of your eyes?
One more mystery of my personal universe gone.
Thanks for awesome explanations guys =)
Depending on what "this" means, yes. Each star's image is the point spread function of your eye.