I'm really curious how image sensors manage to capture scenes that look so accurate. The tiny color filters placed over the photosites are dye-based, and given that and their thinness, I assume a green filter must let through quite a bit of red and blue light, which would contaminate its readings and lead to inaccurate color capture... yet this doesn't seem to happen. Why?

Also, something that really puzzles me: today's camera sensors using the Bayer array have only red, green, and blue photosites, so they should not be able to capture spectral violet. Even if the wavelength sensitivity of the blue photosites extends into the violet range, violet would just register as blue, and on a standard RGB monitor it should look like nothing more than highly saturated blue, right? I tested this theory with a point-and-shoot camera by photographing a spectrum made from sunlight with a prism, and what looked to my eye like definite violet beyond the blue band was indeed captured as nothing more than a shade of blue. So far this all makes sense.

But here's where I get puzzled: I then photographed the same spectrum with my DSLR (which uses the same kind of RGB Bayer-array sensor as the point-and-shoot), and the camera captured the violet light and rendered it on the monitor as purple (a mix of blue and red, which to our eyes gives a similar impression to spectral violet). Obviously there was no red light in the violet band, so presumably the camera's sophisticated image processor recognized that violet was being captured and encoded it as purple to give a similar impression within the limitations of an RGB system. But how was the camera able to capture the violet light and distinguish it from blue light in the first place?

Thanks so much for illuminating this for me (sorry for the pun, I couldn't resist)!
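To make my reasoning about the point-and-shoot result concrete, here's a toy model I put together. The sensitivity curves are made-up Gaussians (a real sensor's curves are broader and messier), so this is purely illustrative, but it shows why a single-lobed blue channel would record spectral violet as just a dimmer blue:

```python
import math

def gaussian(x, mu, sigma):
    """Simple Gaussian bump, used as a stand-in for a channel sensitivity curve."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def sensor_response(wavelength_nm):
    """Hypothetical R, G, B channel responses to a monochromatic stimulus.

    Peak wavelengths and widths are invented for illustration, not taken
    from any real sensor's datasheet.
    """
    r = gaussian(wavelength_nm, 600, 40)
    g = gaussian(wavelength_nm, 540, 40)
    b = gaussian(wavelength_nm, 460, 40)
    return r, g, b

# A 410 nm (violet) stimulus excites essentially only the blue channel,
# so the raw data contain nothing to distinguish it from a dim 460 nm blue.
r, g, b = sensor_response(410)
print(f"R={r:.5f}  G={g:.5f}  B={b:.5f}")
```

Under this model the violet stimulus produces B far above R and G, which matches what my point-and-shoot did: violet came out as a shade of blue.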