Andre
Andy has elaborated on photo sensors in https://www.physicsforums.com/showpost.php?p=3091595&postcount=2. The essential part:
Bayer filter: the pixels only detect the total amount of light incident; they do not distinguish colors. In order to generate a color image, sensor companies coat the sensor with an array of color filters, and the particular pattern has been standardized as the 'Bayer filter': every other pixel sees green, and the remaining pixels alternate between red and blue. One important result of this is that the final image (say a color JPEG file) has been produced by interpolating between pixels, to make it appear that each image pixel has full color information. RAW images consist of the actual individual pixel values and are used in more advanced photography, because each pixel retains its original identity and the photographer/print shop has more control over the final color print.
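The interpolation step Andy describes can be sketched in a few lines. This is a minimal bilinear demosaic of an assumed RGGB mosaic, purely for illustration — real raw converters use much more sophisticated, edge-aware algorithms:

```python
import numpy as np

def conv3x3(a, k):
    """Same-size 3x3 convolution with zero padding (kernel is symmetric)."""
    p = np.pad(a, 1)
    out = np.zeros_like(a, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (illustrative only)."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Which pixels actually measured each color in an RGGB layout:
    masks = [
        (rows % 2 == 0) & (cols % 2 == 0),  # red
        (rows % 2) != (cols % 2),           # green (two per 2x2 block)
        (rows % 2 == 1) & (cols % 2 == 1),  # blue
    ]
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate(masks):
        sampled = np.where(mask, mosaic, 0.0)
        num = conv3x3(sampled, kernel)
        den = conv3x3(mask.astype(float), kernel)
        # Keep measured values; fill the rest from neighbor averages.
        rgb[..., c] = np.where(mask, sampled, num / np.maximum(den, 1e-9))
    return rgb
```

Note that each output channel is fully measured at only a quarter (half for green) of the pixels; everything else is guessed from neighbors.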
This gives the impression that a lot is wasted. If a "red" photon happens to hit a blue filter, it's lost. So each "pixel", made up of two green parts, one red, and one blue, uses only a fraction of its surface area to detect each color; the rest is lost. Can we do better than that?
That question was explored by Foveon. The idea is that photons of different wavelengths have different penetration depths in a silicon sensor, so if you could measure that depth you could reconstruct the corresponding 'color'. That way you could use all the photons that hit the sensor: nothing is lost, and you'd have a superior signal-to-noise ratio, since each pixel uses its full surface for all colors.
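A rough illustration of the penetration-depth idea, using Beer-Lambert absorption. The coefficients are order-of-magnitude values from textbook silicon absorption curves and the layer depths are made up — not Foveon's actual design:

```python
import numpy as np

# Beer-Lambert: the fraction of light surviving to depth x in silicon is
# exp(-alpha * x). Alphas below are rough order-of-magnitude values
# (per micron); the three layer boundaries are hypothetical.
alpha = {"blue": 2.5, "green": 0.7, "red": 0.3}   # 1/um, approximate
boundaries = np.array([0.0, 0.2, 0.6, 3.0])       # layer edges in um

for color, a in alpha.items():
    surviving = np.exp(-a * boundaries)
    absorbed = -np.diff(surviving)  # fraction absorbed in each of 3 layers
    print(f"{color:5s} absorbed per layer: {np.round(absorbed, 3)}")
```

Blue comes out absorbed mostly in the shallow layer and red mostly in the deep one, so comparing the three layer signals lets you estimate the color — which is exactly what the stacked photodiodes exploit.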
That's the idea behind the X3 sensor, which was used in several cameras, like the Sigma SD1; however, the system was very expensive and aimed only at the professional market.
Now there is a successor, the Sigma SD1 Merrill, which is actually in the price range of the Canon 7D/Nikon D7000/Sony A77. And indeed its image quality is stunning compared to those peers, as can be seen here
You may want to move the crop around on the overview picture (in the lower part of the Martini bottle label) to view different parts, for instance the tree in the Baileys bottle label, the portrait in line drawings above Mickey Mouse, or the feathers to the right.
I wonder why the system, whilst using all the photons, does not hold up at higher sensitivities, where it lags behind its peers, especially in color noise. Thoughts?
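One factor I can imagine (pure speculation on my part, with made-up numbers): the three layers' spectral responses overlap heavily, so turning three layer readings into R, G, B means multiplying by the inverse of a nearly singular response matrix, and that amplifies per-layer shot noise into chroma noise. A toy demonstration:

```python
import numpy as np

# Toy illustration, not Foveon's actual calibration: an overlapping
# layer-response matrix whose inverse has large entries.
rng = np.random.default_rng(0)

# Rows: top/middle/bottom layer; columns: response to pure B/G/R light.
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

true_rgb = np.array([0.5, 0.5, 0.5])
signal = M @ true_rgb
noise = 0.01 * rng.standard_normal((10000, 3))  # per-layer read noise
recovered = (signal + noise) @ np.linalg.inv(M).T  # unmix to R, G, B

print("condition number:", np.linalg.cond(M))
print("input noise std :", noise.std())
print("output noise std:", recovered.std(axis=0))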
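One factor I can imagine (pure speculation on my part, with made-up numbers): the three layers' spectral responses overlap heavily, so turning three layer readings into R, G, B means multiplying by the inverse of a nearly singular response matrix, and that amplifies per-layer shot noise into chroma noise. A toy demonstration:

```python
import numpy as np

# Toy illustration, not Foveon's actual calibration: an overlapping
# layer-response matrix whose inverse has large entries.
rng = np.random.default_rng(0)

# Rows: top/middle/bottom layer; columns: response to pure B/G/R light.
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

true_rgb = np.array([0.5, 0.5, 0.5])
signal = M @ true_rgb
noise = 0.01 * rng.standard_normal((10000, 3))  # per-layer read noise
recovered = (signal + noise) @ np.linalg.inv(M).T  # unmix to R, G, B

print("condition number:", np.linalg.cond(M))
print("input noise std :", noise.std())
print("output noise std:", recovered.std(axis=0))
```

With these made-up responses the noise in the recovered channels comes out several times larger than the per-layer noise, whereas a Bayer sensor needs no such unmixing. Whether this is really the dominant effect in the SD1, I don't know.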
edit: sorry typo, the title should read "The X3 sensor and the Sigma SD1"