Color/Light efficiency in imaging systems

AI Thread Summary
The discussion centers on the efficiency of imaging systems, particularly CCD and CMOS cameras, in utilizing ambient light. It critiques the reliance on RGB filtering, suggesting that this method significantly underutilizes the full spectrum of visible light, potentially capturing less than 15% of incident light. The conversation highlights that white light contains a broader range of wavelengths, including yellow, orange, and violet, which RGB systems fail to account for. Additionally, it raises the point that colors can be perceived through combinations of wavelengths, not solely through RGB. The thread concludes by emphasizing the need for alternative filtering methods to improve light collection efficiency in imaging technologies.
gillwill
In comparisons of efficiency in acquired ambient light for imaging devices, whether CCD and CMOS cameras or reflective displays, the accounting always seems to be based on red, green and blue, as if those were the only bands of visible light. For example, in acquiring/reflecting the color red at a pixel: if a pixel consists of three parallel subpixels with red, green and blue filters respectively, it is claimed that around 1/3 of the light striking the pixel is utilized; thereby technologies with vertical filtering methods, such as the Foveon camera and IMOD displays (still only RGB), can claim a 3x increase in efficiency (implying 90+ % utilization when starting from a 30-something % baseline).

Isn't it much less than that in both cases? If I understand correctly, "white" light, such as sunlight or household lighting, consists of approximately equal proportions of all the bands of visible light. If only RGB filtering is implemented, what about the yellow, orange and violet wavelengths of incident light?

What about the ability of light combinations that are not exclusively RGB to create the appearance of a color? For example, I've read in some places that the "red" skin of an apple doesn't actually reflect light in the red band of visible light, but rather reflects combinations of other wavelengths that make it appear "red".

Even at its best, wouldn't an RGB subpixel system utilize less than 15% of the incident visible light, and a vertical filtering system less than half?
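A rough back-of-the-envelope sketch of the estimate above (Python; the passband widths, transmission values and the flat "white" spectrum are all illustrative assumptions, not measured filter curves):

```python
# Back-of-the-envelope estimate of how much incident "white" light an
# RGB subpixel (Bayer-style) arrangement utilizes. All numbers are
# illustrative assumptions, not measured filter data.

VISIBLE_NM = 300  # visible spectrum spans roughly 400-700 nm

def utilization(passband_nm, peak_transmission=1.0):
    """Fraction of a flat spectrum reaching the photosites.

    Three subpixels each cover 1/3 of the pixel area, so the area
    factors cancel (3 * 1/3 = 1); only the spectral fraction passed by
    each filter and its peak transmission remain.
    """
    return passband_nm / VISIBLE_NM * peak_transmission

# Ideal lossless filters tiling the whole spectrum with no gaps:
print(utilization(100))      # ~0.33 -> the usual "1/3" figure

# Narrower filters that skip yellow/orange, with imperfect transmission:
print(utilization(60, 0.8))  # ~0.16 -> close to the <15% estimate
```

Whether the answer is "about 1/3" or "under 15%" thus hinges entirely on how wide and how transmissive the filter passbands really are.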
 
The color bands aren't that narrow, and they are tuned reasonably well to the sensitivity of our eyes, so they do a reasonably good job of reproducing the colors we see. We had some discussion in this thread about Sharp's technology that adds yellow to its palette: https://www.physicsforums.com/showthread.php?t=390666
 
gillwill said:
In comparisons of efficiency in acquired ambient light for imaging devices, whether CCD and CMOS cameras or reflective displays, the accounting always seems to be based on red, green and blue, as if those were the only bands of visible light. For example, in acquiring/reflecting the color red at a pixel: if a pixel consists of three parallel subpixels with red, green and blue filters respectively, it is claimed that around 1/3 of the light striking the pixel is utilized; thereby technologies with vertical filtering methods, such as the Foveon camera and IMOD displays (still only RGB), can claim a 3x increase in efficiency (implying 90+ % utilization when starting from a 30-something % baseline).

Isn't it much less than that in both cases? If I understand correctly, "white" light, such as sunlight or household lighting, consists of approximately equal proportions of all the bands of visible light. If only RGB filtering is implemented, what about the yellow, orange and violet wavelengths of incident light?

What about the ability of light combinations that are not exclusively RGB to create the appearance of a color? For example, I've read in some places that the "red" skin of an apple doesn't actually reflect light in the red band of visible light, but rather reflects combinations of other wavelengths that make it appear "red".

Even at its best, wouldn't an RGB subpixel system utilize less than 15% of the incident visible light, and a vertical filtering system less than half?

I can't figure out what you are asking; you neglected the fill factor, for example.

RGB is often used because single-chip color sensors use a "Bayer filter" to generate color information:

http://en.wikipedia.org/wiki/Bayer_filter

That's not the only way to generate color images: Three-chip color cameras don't use this, and monochrome CCDs with rotating filters also use the full CCD.
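The discard inherent in single-chip sampling can be sketched with a toy Bayer model (Python; the RGGB layout matches the Wikipedia description above, but the scene values are made up for illustration):

```python
# Minimal sketch of Bayer (RGGB) sampling: each photosite keeps only
# one of the three color channels; the other two are discarded at
# capture and must be interpolated (demosaiced) afterwards. This is a
# plain-Python toy, not a real sensor model.

def bayer_mosaic(rgb):
    """Simulate single-chip raw output: rgb is rows x cols of (r, g, b)."""
    raw = []
    for y, row in enumerate(rgb):
        raw_row = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:
                raw_row.append(r)   # R at even rows, even cols
            elif y % 2 == 1 and x % 2 == 1:
                raw_row.append(b)   # B at odd rows, odd cols
            else:
                raw_row.append(g)   # G on the remaining half of sites
        raw.append(raw_row)
    return raw

scene = [[(0.9, 0.1, 0.2), (0.3, 0.8, 0.1)],
         [(0.2, 0.7, 0.4), (0.1, 0.2, 0.9)]]
raw = bayer_mosaic(scene)
# raw == [[0.9, 0.8], [0.7, 0.9]] -> one channel kept per photosite;
# the light behind each pixel's other two filter bands is absorbed.
```

Three-chip cameras and filter-wheel monochrome CCDs avoid exactly this per-photosite discard, at the cost of optics or acquisition time.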

Often, CCD manufacturers use microlenses to increase the light-collection efficiency, and the Foveon chip is another approach; I was very excited when I heard about it 10 years ago, and I still wonder why it's not more commonplace.

If you are concerned with scavenging every last photon, you should not use a single-chip color CCD.
 
Two comments;

1. The RGB scheme is widely used in display systems and in image-processing software. Moving to a different scheme would require these systems to be replaced.

2. Quantum efficiency (for light collection) is only an issue when signal levels are so low that quantum noise becomes a problem. At everyday light levels, measures typically need to be taken to avoid saturating the detectors, using either hardware (filters, etc.) or software.
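Point 2 can be sketched numerically: photon (shot) noise follows Poisson statistics, so the SNR for N detected photons is sqrt(N). A minimal Python illustration, where the full-well capacity is an assumed round number rather than a real sensor spec:

```python
import math

# Poisson (shot) noise gives SNR = sqrt(N) for N detected photons, so
# noise only dominates at low signal; at high signal the limit is
# saturation, not noise. FULL_WELL is an assumed illustrative value.

FULL_WELL = 20_000  # assumed electrons before the photosite saturates

def shot_noise_snr(photons):
    """Poisson-limited SNR, clipped by saturation at the full well."""
    n = min(photons, FULL_WELL)
    return math.sqrt(n)

print(shot_noise_snr(25))         # dim scene: SNR = 5, noise obvious
print(shot_noise_snr(10_000))     # daylight: SNR = 100, noise negligible
print(shot_noise_snr(1_000_000))  # overexposed: clipped at the full well
```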

Claude.
 