How Does the X3 Sensor Technology Compare in Image Quality and Sensitivity?

SUMMARY

The discussion focuses on the X3 sensor technology developed by Foveon, which detects color by exploiting the wavelength-dependent penetration depth of photons in silicon. This approach uses all incoming light and promises a superior signal-to-noise ratio, unlike traditional Bayer filter sensors, which discard most of the light at each photosite. The Sigma SD1 and its successor, the Sigma SD1 Merrill, deliver impressive image quality compared to peers such as the Canon 7D and Nikon D7000. However, challenges remain at high sensitivities, particularly color noise and saturation.

PREREQUISITES
  • Understanding of Bayer filter technology in image sensors
  • Familiarity with RAW image formats and their advantages
  • Knowledge of photon behavior and penetration depth in silicon
  • Experience with digital camera models, specifically Sigma SD1 and Sigma SD1 Merrill
NEXT STEPS
  • Research the technical specifications and performance of the Sigma SD1 Merrill
  • Explore advancements in sensor technology beyond Bayer filters
  • Investigate color noise reduction techniques in high-sensitivity photography
  • Learn about the impact of sensor stacking on light attenuation and image quality
USEFUL FOR

Photographers, camera enthusiasts, and image quality analysts interested in advanced sensor technologies and their implications for professional photography.

Andre
Andy has elaborated on photo sensors here: https://www.physicsforums.com/showpost.php?p=3091595&postcount=2. The essential part:

Bayer filter: the pixels only detect the total amount of incident light; they do not distinguish colors. In order to generate a color image, sensor companies coat the sensor with an array of color filters, and the particular pattern has been standardized to a 'Bayer filter': every other pixel sees green, and the remaining pixels alternate between red and blue. One important result of this is that the final image (say a color JPEG file) has been produced by interpolating between pixels so that each image pixel appears to have full color information. RAW images consist of the actual individual pixel values and are used in more advanced photography, because each pixel retains its original identity and the photographer/print shop has more control over the final color print.

This gives the impression that a lot is wasted. If a "red" photon happens to hit a blue filter, it's lost. So in every pixel group, containing two green sites plus a red and a blue one, each color is detected by only a fraction of the surface area; the rest is lost. Can we do better than that?
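The interpolation Andy mentions can be made concrete with a toy sketch. Below is a minimal (hypothetical, illustrative) RGGB mosaic plus a bilinear demosaic using numpy/scipy; real in-camera pipelines are far more sophisticated, but the principle is the same: each photosite records one channel, and the missing two channels are estimated from neighbours.

```python
import numpy as np
from scipy.signal import convolve2d

def mosaic(rgb):
    """Sample an H x W x 3 image through an RGGB Bayer pattern:
    each photosite keeps only one of the three channels."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return raw

def demosaic_bilinear(raw):
    """Reconstruct full colour by a weighted average of the nearest
    same-colour photosites (normalized 3x3 neighbourhood)."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1
    masks[0::2, 1::2, 1] = 1
    masks[1::2, 0::2, 1] = 1
    masks[1::2, 1::2, 2] = 1
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for c in range(3):
        vals = convolve2d(raw * masks[..., c], kernel, mode='same')
        wts = convolve2d(masks[..., c], kernel, mode='same')
        out[..., c] = vals / wts   # normalize by the weights actually present
    return out
```

On a uniformly coloured patch this reconstruction is exact; the interpolation errors (and the "wasted" photons) only show up at edges and fine detail, which is exactly where Bayer sensors lose resolution.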

So that question was explored by Foveon. The idea is that photons of different wavelengths have different penetration depths in a silicon sensor, so if you could measure this depth you could reconstruct the corresponding 'color'. That way you could use all photons that hit the sensor: nothing is lost, and you'd have a superior signal-to-noise ratio, as each pixel uses its full surface for all colors.
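The wavelength dependence follows from the Beer-Lambert law, I(z) = I0·exp(-αz), where the absorption coefficient α of silicon drops steeply with wavelength. A small sketch with rough, order-of-magnitude coefficients (assumed for illustration, not datasheet values) shows why blue light stops near the surface while red penetrates several micrometres:

```python
def penetration_depth_um(alpha_per_cm):
    """1/e penetration depth in micrometres from Beer-Lambert:
    I(z) = I0 * exp(-alpha * z), so the 1/e depth is 1/alpha."""
    return 1e4 / alpha_per_cm  # 1 cm = 1e4 um

# Rough absorption coefficients for crystalline silicon (illustrative):
alphas = {
    "blue  (450 nm)": 2.5e4,   # 1/cm
    "green (550 nm)": 7.0e3,
    "red   (650 nm)": 2.5e3,
}

for name, a in alphas.items():
    print(f"{name}: 1/e depth ~ {penetration_depth_um(a):.2f} um")
```

With these numbers blue is mostly absorbed within about half a micrometre, green within one to two, and red within several, which is the depth separation the stacked X3 photodiodes exploit.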

That's the idea behind the X3 sensor, which was used in several cameras, like the Sigma SD1. However, the system was very expensive and aimed only at the professional market.

So there is now a successor, the Sigma SD1 Merrill, which is actually in the price range of the Canon 7D/Nikon D7000/Sony A77. And indeed its image quality is stunning compared to those peers, as can be seen here

You may want to move the crop around on the overview pic (in the lower part of the Martini bottle label) to view different parts: for instance, the tree in the Bayleys bottle label, the portrait in line drawings above Mickey Mouse, or the feathers to the right.

I wonder why the system, despite using all the photons, does not hold up at higher sensitivities, where it lags behind especially on color noise. Thoughts?

edit: sorry typo, the title should read "The X3 sensor and the Sigma SD1"
 
I've been watching the Foveon guys since, like, 2001. I have no idea why they never caught on.
 
It seems to be very lacking in color saturation and dynamic range. I wonder if stacking the sensor layers on top of each other leads to significant attenuation of green and red light.
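That attenuation can be estimated with the same Beer-Lambert model. Assuming (purely for illustration) a top "blue" layer about 0.4 µm thick and rough absorption coefficients for silicon, a non-trivial fraction of the green and red photons is already absorbed in that top layer, which would show up as channel crosstalk and reduced saturation:

```python
import math

def absorbed_fraction(alpha_per_cm, depth_um):
    """Fraction of photons absorbed within the top depth_um of silicon,
    from Beer-Lambert: I(z) = I0 * exp(-alpha * z)."""
    return 1.0 - math.exp(-alpha_per_cm * depth_um * 1e-4)  # 1 um = 1e-4 cm

# Rough absorption coefficients (illustrative, not datasheet values):
alpha_green, alpha_red = 7.0e3, 2.5e3   # 1/cm

top_layer_um = 0.4  # assumed thickness of the top ("blue") layer
print(f"green absorbed in top layer: {absorbed_fraction(alpha_green, top_layer_um):.1%}")
print(f"red absorbed in top layer:   {absorbed_fraction(alpha_red, top_layer_um):.1%}")
```

Because absorption falls off only exponentially with depth, the layers can never separate the colors cleanly; untangling the overlapping spectral responses requires a strong color matrix, which amplifies chroma noise — one plausible reason the X3 lags at high sensitivities.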
 
