tris_d said:
Ok, let me define brightness as captured light on some photo that we open in Photoshop and convert to a grayscale image, so that the pixels have some value from 0 to 100, where 0 is black, 100 is white, and in between are shades of gray.
Let me use a real-world example that I commonly deal with when using my CCD camera for astrophotography. My telescope focuses light down onto the CCD sensor. The charge on each pixel builds up during the exposure as photons are absorbed by the pixels. Each pixel can hold a certain number of electrons before reaching "saturation". The CCD's charge amplifier reads the amount of charge during readout and converts it into a voltage signal. This signal is digitized to 16 bits, so it can take on 65,536 different values (0 to 65,535). Some CCD pixels can hold, say, 55,000 electrons, while others can hold around 80,000.
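If you want to see roughly what that digitization step looks like, here's a minimal Python sketch. The full-well and gain figures are made-up illustration numbers, not the specs of any particular chip.

```python
import numpy as np

# Hypothetical sensor figures, just for illustration (not a real chip's specs).
FULL_WELL = 55_000            # electrons a pixel can hold before saturation
GAIN = FULL_WELL / 65_535     # electrons per ADU, so a full well lands near the 16-bit ceiling

def read_out(electrons):
    """Convert accumulated electrons into 16-bit ADU counts."""
    electrons = np.clip(electrons, 0, FULL_WELL)          # saturation: the well can't hold more
    adu = np.round(electrons / GAIN).astype(np.uint16)    # digitize to a 16-bit number
    return adu

# A pixel that collected 30,000 electrons reads out as roughly 35,700 ADU.
print(read_out(np.array([0, 30_000, 80_000])))  # -> [    0 35746 65535]
```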
The actual number is generally different for different sensors and doesn't matter that much really. What matters is that these electrons are converted into a 16-bit signal which is then displayed as a certain color or grayscale pixel on your screen. Since we can't detect a difference of 1/65,535 in brightness with our eye, we have to "stretch" the values upon display, which just means we change our high and low points, i.e. the pixel values that get displayed as black and white. So I could bring my white point down to 5,000 if I have a very low-light picture, to make it possible to even see anything. If I just left the white point at 65,535 the whole picture would just look black.
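A simple linear stretch of the kind I'm describing might look something like this; the black and white points here are just example values.

```python
import numpy as np

def stretch(image, black_point, white_point):
    """Linearly rescale pixel values so black_point maps to 0 and white_point to 1."""
    scaled = (image.astype(np.float64) - black_point) / (white_point - black_point)
    return np.clip(scaled, 0.0, 1.0)   # clip anything outside the chosen range

# A dim 16-bit frame: raw values only reach a few thousand ADU.
dim_frame = np.array([[120, 800], [2400, 4900]], dtype=np.uint16)

# Displayed against the full 0-65,535 range it looks essentially black...
print(stretch(dim_frame, 0, 65_535))
# ...but with the white point pulled down to 5,000 the detail becomes visible.
print(stretch(dim_frame, 0, 5_000))
```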
So let's say a pixel is read out and the charge is converted into a signal that measures 65,000. What does this tell us about the actual light itself? Can it tell us the wavelength of the light? No. It can only tell us how many electrons were built up in that pixel.
So how do we get nice color pictures? We use filters and take multiple exposures. OR we use a single exposure, but every single pixel has its own filter in front of it of either red, blue, or green. This is known as a Bayer array. So, if your software doesn't know you are using a color CCD sensor with a Bayer array, it will simply display your image as a grayscale monochrome image. I personally use a monochrome CCD, which means that my sensor doesn't have a Bayer array. Instead I use a separate filter wheel with RGB or other filters to get my different color frames and then combine them into a color image.
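As a rough illustration of that monochrome-plus-filter-wheel workflow, combining separately filtered frames into a color image is essentially just stacking them into the channels of an RGB array. The frames below are faked; in practice each one would be loaded from a file taken through the corresponding filter.

```python
import numpy as np

def combine_rgb(red_frame, green_frame, blue_frame):
    """Stack three filtered monochrome exposures into one RGB image (values scaled 0-1)."""
    rgb = np.stack([red_frame, green_frame, blue_frame], axis=-1).astype(np.float64)
    return rgb / 65_535.0   # assuming 16-bit frames

# Stand-in arrays in place of real red-, green-, and blue-filtered exposures.
r = np.full((2, 2), 40_000, dtype=np.uint16)
g = np.full((2, 2), 30_000, dtype=np.uint16)
b = np.full((2, 2), 20_000, dtype=np.uint16)
print(combine_rgb(r, g, b).shape)   # (2, 2, 3)
```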
So then, if all the photons are the same wavelength, the brightness of each pixel will be proportional to the number of photons that impacted that particular pixel?
Nope. ALL photons, of ANY frequency with enough energy to excite an electron, will be able to contribute to the pixel's final value. This is why filters are important. We reject the wavelengths that we DON'T want to see. An unfiltered CCD typically responds to a range of wavelengths, from around 1,000 nm (the lowest-energy light capable of exciting an electron) down to about 300 nm (the highest-energy light it can respond to). Higher-energy light is usually absorbed in the small features of the CCD pixels before it can reach the photosensitive layer.
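To put rough numbers on that range: a photon's energy goes as E = hc/λ, and for an electron to be excited in silicon it has to clear the band gap of roughly 1.1 eV (that figure is the commonly quoted value for silicon, not something specific to this chip). A quick check:

```python
# Photon energy E = h*c / wavelength, compared against silicon's ~1.1 eV band gap.
H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt
SILICON_BAND_GAP_EV = 1.1   # approximate band gap of silicon

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

for wavelength in (1_000, 500, 300):
    e = photon_energy_ev(wavelength)
    print(f"{wavelength} nm -> {e:.2f} eV "
          f"({'can' if e > SILICON_BAND_GAP_EV else 'cannot'} excite an electron)")
```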
If frequency defines intensity too, would that mean that one blue photon with a higher frequency could produce a brighter pixel than another blue photon with a lower frequency?
One more thing on frequency. While CCDs have a range of wavelengths they respond to, they do not respond to all of these wavelengths equally well. See page 5 of the following link:
http://www.stargazing.net/david/QSI/KAI-2020LongSpec.pdf
The graph in the lower left of page 5 represents the Quantum Efficiency of the chip. The QE is the percentage of the light reaching the CCD that ends up being converted into photoelectrons. The graph is labeled from 0.0 to 0.6, with 0.6 being 60% efficiency. As you can see, the graph peaks around 450 nm, in the blue-green region, for the monochrome CCD. The colored lines represent the 3 different color filters of the Bayer array that comes with the color chips. So even light of the best wavelength for this particular CCD is only being read with 56-57% efficiency, and much of the spectrum is far worse.
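Numerically, QE just scales down the photon count before it ever becomes signal. A small sketch of the idea (the QE values below are illustrative placeholders, not numbers read off the linked datasheet):

```python
# Illustrative QE values (as fractions), loosely shaped like a typical CCD curve;
# these are NOT the datasheet numbers, just placeholders for the idea.
qe_by_wavelength_nm = {400: 0.45, 450: 0.56, 550: 0.45, 650: 0.35, 900: 0.10}

def expected_photoelectrons(photons, wavelength_nm):
    """On average, only QE * (incident photons) become photoelectrons."""
    return photons * qe_by_wavelength_nm[wavelength_nm]

# 10,000 photons at the peak wavelength yield ~5,600 electrons on average,
# while the same number at 900 nm yields only ~1,000.
print(expected_photoelectrons(10_000, 450))  # 5600.0
print(expected_photoelectrons(10_000, 900))  # 1000.0
```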
Not that this is a bad CCD; this is actually very good CCD efficiency for the price range. For comparison, the average QE of photographic film is well under 10%. The human eye is far harder to put a QE on, as vision is not just a mechanical process of detecting light and turning it into a value, but an extremely complicated process involving multiple receptors, the timing of those receptors firing, and dozens if not hundreds of other things. Personally I would venture a guess and say that the QE of normal color vision in the daytime is generally less than 1%. But don't quote me on that.
I thought frequency defines color, that frequency is just another face of wavelength and that they are always in a fixed relation.
You are correct.
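That fixed relation is just c = λν, so either number determines the other. A one-liner to convert, with the speed of light rounded:

```python
C = 2.998e8   # speed of light in m/s (rounded)

def frequency_hz(wavelength_nm):
    """Frequency and wavelength are tied together by c = wavelength * frequency."""
    return C / (wavelength_nm * 1e-9)

# 450 nm blue light corresponds to roughly 6.7e14 Hz.
print(f"{frequency_hz(450):.3e} Hz")
```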