Color/light efficiency in imaging systems


Discussion Overview

The discussion focuses on the efficiency of imaging systems, particularly CCD and CMOS cameras, in utilizing ambient light. Participants explore the implications of using RGB color filtering versus other methods, such as vertical filtering technologies, and question the adequacy of RGB in capturing the full spectrum of visible light.

Discussion Character

  • Exploratory
  • Debate/contested
  • Technical explanation

Main Points Raised

  • One participant argues that RGB filtering may lead to underutilization of incident visible light, suggesting that less than 15% of light is captured by RGB subpixel systems.
  • Another participant counters that the color bands are tuned to human sensitivity and that RGB does a reasonable job at reproducing colors, referencing previous discussions about alternative technologies.
  • Concerns are raised about the exclusion of other wavelengths, such as yellow, orange, and violet, in RGB filtering, questioning the completeness of color representation.
  • Participants discuss the use of different technologies, such as the Foveon camera and three-chip color cameras, which do not rely solely on RGB filtering.
  • One participant mentions the importance of fill factor and quantum efficiency in light collection, noting that these factors are critical at low signal levels.
  • Another participant highlights the practical challenges of changing from the RGB scheme due to its widespread use in display systems and image control software.

Areas of Agreement / Disagreement

Participants express differing views on the efficiency of RGB filtering and its ability to capture the full spectrum of visible light. There is no consensus on the adequacy of RGB filtering or the effectiveness of alternative technologies.

Contextual Notes

Participants note limitations regarding the fill factor and quantum efficiency, suggesting that these factors may influence the overall efficiency of light collection in imaging systems. The discussion does not resolve these complexities.

gillwill
In making comparisons of efficiency in acquired ambient light for imaging devices, whether CCD and CMOS cameras or reflective displays, the accounting always seems to be based on red, green, and blue, as if those were the only bands of visible light. For example, in acquiring/reflecting the color red at a pixel, if the pixel consists of three parallel subpixels with red, green, and blue filters respectively, it is claimed that around 1/3 of the light striking the pixel is utilized; technologies with vertical filtering methods, such as the Foveon camera and IMOD displays (still only RGB), can therefore provide a 3x increase in efficiency (implying 90+% utilization relative to the 30-something % baseline).

Isn't it much less than that in both cases? If I understand correctly, "white" light, such as sunlight or household lighting, consists of approximately equal proportions of all the bands of visible light. If a system implements only RGB filtering, what happens to the yellow, orange, and violet wavelengths of the incident light?

What about the ability of light combinations that are not exclusively RGB to create the appearance of a color? For example, I've read in some places that the "red" skin of an apple doesn't actually reflect light in the red band of the visible spectrum, but rather reflects combinations of other wavelengths that make it appear "red".

Even in the optimal case, wouldn't an RGB subpixel system utilize less than 15% of the incident visible light, and a vertical filtering system less than half?
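The arithmetic behind these percentages can be sketched with a back-of-envelope calculation. The fill factor, filter transmission, and passband fraction below are illustrative assumptions, not measured values:

```python
def utilization(fill_factor, transmission, passband_fraction, stacked):
    """Rough fraction of incident white-light energy converted to signal.

    Assumes white light spreads its energy evenly over the visible band
    and that each color channel passes `passband_fraction` of that band.
    """
    # Side-by-side subpixels reject the out-of-band ~2/3 of the light that
    # falls on them; a vertically stacked sensor absorbs every band at some
    # depth, so only fill-factor and transmission losses remain.
    spectral = 1.0 if stacked else passband_fraction
    return fill_factor * transmission * spectral

# Illustrative numbers: 90% fill factor, 90% in-band transmission.
bayer_like = utilization(0.9, 0.9, 1 / 3, stacked=False)   # 0.27
foveon_like = utilization(0.9, 0.9, 1 / 3, stacked=True)   # 0.81
print(f"side-by-side: {bayer_like:.0%}, stacked: {foveon_like:.0%}")
```

With narrower real-world passbands and lower transmission, the side-by-side figure drops further, which is where an estimate below 15% could come from.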
 
The color bands aren't that narrow, and they are tuned reasonably well to the sensitivity of our eyes, so they do a reasonably good job of reproducing the colors we see. We had some discussion of this in an earlier thread, where we discussed Sharp's technology that adds yellow to its palette: https://www.physicsforums.com/showthread.php?t=390666
 
gillwill said:
Even in the optimal case, wouldn't an RGB subpixel system utilize less than 15% of the incident visible light, and a vertical filtering system less than half?

I can't figure out what you are asking; you've neglected the fill factor, for example.

RGB is often used because single-chip color sensors use a "Bayer filter" to generate color information:

http://en.wikipedia.org/wiki/Bayer_filter

That's not the only way to generate color images: Three-chip color cameras don't use this, and monochrome CCDs with rotating filters also use the full CCD.
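Since a Bayer sensor samples only one color per pixel site, the missing values have to be interpolated (demosaiced) to produce a full-color image. A minimal bilinear sketch, assuming an RGGB mosaic and using only NumPy:

```python
import numpy as np

def box3(a):
    """Sum of each pixel's 3x3 neighborhood (zero-padded at borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (illustrative sketch)."""
    h, w = raw.shape
    # Masks marking where each color was actually sampled in RGGB:
    # red on even rows/even cols, blue on odd rows/odd cols, green elsewhere.
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    rgb = np.empty((h, w, 3))
    for ch, mask in enumerate((r, g, b)):
        samples = np.where(mask, raw, 0.0)
        # Each output value is the average of the known samples of that
        # color within the pixel's 3x3 neighborhood.
        rgb[..., ch] = box3(samples) / box3(mask.astype(float))
    return rgb
```

Every 3x3 window of an RGGB mosaic contains at least one sample of each color, so the denominator never vanishes; real demosaicing algorithms are considerably more sophisticated than this nearest-neighbors average.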

Often, CCD manufacturers use microlenses to increase the collection efficiency of light. The Foveon chip is another approach; I was very excited when I heard about it 10 years ago, and I still wonder why it's not more commonplace.

If you are concerned with scavenging every last photon, you should not use a single-chip color CCD.
 
Two comments:

1. The RGB scheme is widely used among display systems and also image control software. Moving to a different scheme would require these systems to be replaced.

2. Quantum efficiency (for light collection) is only an issue when signal levels are so low that quantum noise becomes a problem. For everyday light levels, measures typically need to be taken to avoid saturating the detectors, either in hardware (filters, etc.) or in software.
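The point about quantum noise can be made concrete: photon arrivals are Poisson-distributed, so the shot-noise-limited SNR of a pixel is the square root of the mean detected photon count. A quick sketch with hypothetical photon counts:

```python
import math

def shot_noise_snr(incident_photons, qe):
    """Shot-noise-limited SNR for a pixel with quantum efficiency qe.

    Detected photons follow a Poisson distribution with mean N = incident * qe,
    so the noise standard deviation is sqrt(N) and SNR = N / sqrt(N) = sqrt(N).
    """
    return math.sqrt(incident_photons * qe)

# Hypothetical bright-daylight pixel: tens of thousands of photons per
# exposure, so even halving QE leaves a comfortably high SNR...
bright = (shot_noise_snr(40_000, 0.9), shot_noise_snr(40_000, 0.45))
# ...while a low-light pixel collects only a few dozen photons, and QE
# decides whether the signal rises usefully above the noise.
dim = (shot_noise_snr(40, 0.9), shot_noise_snr(40, 0.45))
print(bright, dim)
```

This is why collection efficiency dominates the discussion for astronomy or low-light imaging but barely matters for daylight photography, where saturation is the real constraint.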

Claude.
 
