Andy has elaborated on photo sensors here. The essential point: it gives the impression that a lot of light is wasted. If a "red" photon happens to hit a blue filter, it is lost. Each "pixel" consists of two green, one red and one blue part, so each color is detected with only a fraction of the pixel's surface area; the rest of the photons are thrown away. Can we do better than that?

That question was explored by Foveon. The idea: photons of different wavelengths have different penetration depths in a silicon sensor, so if you can measure that depth, you can reconstruct the corresponding 'color'. That way every photon that hits the sensor is used, nothing is lost, and you'd have a superior signal-to-noise ratio, as each pixel uses its full surface for all colors. That's the idea behind the X3 sensor, which was used in several cameras, like the Sigma SD1. However, the system was very expensive and aimed only at the professional market. There is now a successor, the Sigma SD1 Merrill, which is actually in the price range of the Canon 7D / Nikon D7000 / Sony A77. And indeed its image quality is stunning compared to those peers, as can be seen here. You may want to move the crop around on the overview pic (in the lower part of the Martini bottle label) to view different parts, for instance the tree in the Baileys bottle label, the line-drawing portrait above Mickey Mouse, or the feathers to the right.

What I wonder is why the system, while using all the photons, does not hold up at higher sensitivities, where it lags behind its peers, especially on color noise. Thoughts?

edit: sorry, typo, the title should read "The X3 sensor and the Sigma SD1"
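To put rough numbers on the "a lot is wasted" claim, here is a toy calculation for an idealised RGGB Bayer quad. Everything in it is an assumption for illustration: perfect band-pass filters, photons landing uniformly over the quad, and no demosaicing cleverness.

```python
# Toy estimate of photon utilisation on an idealised Bayer mosaic.
# Assumptions (not real sensor data): filters are perfect band-passes,
# photons land uniformly on the quad, equal mixes are equal by count.
FILTER_AREA = {"red": 0.25, "green": 0.50, "blue": 0.25}  # RGGB quad

def detected_fraction(photon_mix):
    """Fraction of incoming photons that land on a matching filter.

    photon_mix maps color -> share of incoming photons (shares sum to 1).
    A photon is only detected if it hits a filter of its own color.
    """
    return sum(share * FILTER_AREA[color] for color, share in photon_mix.items())

equal_mix = {"red": 1 / 3, "green": 1 / 3, "blue": 1 / 3}
print(detected_fraction(equal_mix))  # ~0.333: roughly two thirds of photons lost
```

In this crude model an equal color mix gives only one third of the photons contributing to the signal, which is the motivation for wanting a full-surface sensor.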
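The depth-separation idea can also be sketched with a Beer-Lambert absorption model. The absorption depths and layer boundaries below are rough, made-up-for-illustration figures, not Foveon's actual stack geometry; the point is only the qualitative behaviour (blue absorbed near the surface, red reaching deeper).

```python
import math

# Rough 1/e absorption depths of silicon in micrometres. These are
# approximate textbook-style figures for illustration only; real values
# depend on temperature, doping and exact wavelength.
ABSORPTION_DEPTH_UM = {"blue_450nm": 0.4, "green_550nm": 1.4, "red_650nm": 3.3}

def absorbed_fraction(depth_um, absorption_depth_um):
    """Beer-Lambert: fraction of photons absorbed within depth_um."""
    return 1.0 - math.exp(-depth_um / absorption_depth_um)

def layer_absorption(boundaries_um, absorption_depth_um):
    """Fraction absorbed in each stacked layer ending at the given depths."""
    fractions, prev = [], 0.0
    for b in boundaries_um:
        fractions.append(absorbed_fraction(b, absorption_depth_um)
                         - absorbed_fraction(prev, absorption_depth_um))
        prev = b
    return fractions

# Hypothetical photodiode layer boundaries (µm), chosen for illustration.
boundaries = [0.5, 2.0, 6.0]

for name, d in ABSORPTION_DEPTH_UM.items():
    top, mid, bot = layer_absorption(boundaries, d)
    print(f"{name}: top={top:.2f} mid={mid:.2f} bottom={bot:.2f}")
```

Even in this toy model the three layer responses overlap heavily (the mid layer sees plenty of all three wavelengths), so RGB has to be recovered by inverting a mixing matrix. If that is anywhere near what the real sensor does, the inversion would amplify noise, which might be part of the answer to the high-ISO color-noise question, but that is speculation on my part.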