Optical Low Pass Filter, how does it hit CCD?

In summary: an optical low pass (anti-aliasing) filter is typically a double layer of birefringent material that splits each incoming ray into four slightly displaced copies, roughly one pixel apart horizontally and vertically. The split is not a colour separation - all four copies carry the full spectrum, and the colour analysis is done afterwards by the Bayer colour filter array on the sensor. The purpose of the split is to blur the image slightly before it is sampled, so that detail the sensor cannot resolve (particularly in the half-rate colour channels) does not alias into moiré and false colour.
  • #1
mishima
I'm confused about optical low pass filters (found on CCD imaging devices). As I understand it, it is a double layer of birefringent material which in effect splits light into 4: (2*25%) green, 25% red, and 25% blue (for a standard bayer matrix). The part I'm not sure about is if all 4 fall on the same CCD element, or if they are directed to 1 element each.

If the latter, are the 4 computationally combined as a pixel?
 
  • #2
Afaik, the optical filter is just a 'frosted' layer with the appropriate width of corrugations to blur the image appropriately. I can't think it would be in any way synchronised with the positions of the sensor elements. It shouldn't need to be, as the cut off spatial frequency is less than half the frequency of the sensor array (it's there for anti aliasing, of course). There would be serious registration issues between the pixel array and the lpf elements. I can't think that it would be advantageous.
Did you have a reason for your suggestion or were you just 'thinking aloud' (like we all do)?
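A minimal NumPy sketch of the aliasing this is guarding against (1-D, pixel pitch taken as 1, example frequency chosen arbitrarily): any spatial frequency above half the sampling frequency of the sensor array folds back and, once sampled, is indistinguishable from a lower frequency.

```python
import numpy as np

# Pixel pitch = 1, so the sampling frequency is 1 cycle/pixel and the
# Nyquist limit is 0.5 cycles/pixel. A 0.7 cycles/pixel pattern folds
# back to |1.0 - 0.7| = 0.3 cycles/pixel once it is sampled.
pixels = np.arange(32)                 # sensor element positions
f_scene = 0.7                          # above the Nyquist limit
f_alias = 1.0 - f_scene                # the frequency it masquerades as

sampled = np.cos(2 * np.pi * f_scene * pixels)
alias = np.cos(2 * np.pi * f_alias * pixels)

print(np.allclose(sampled, alias))     # True: the sensor cannot tell them apart
```

Because the two sampled sequences are identical, no processing after the sensor can undo this, which is why the low pass has to act on the light before it reaches the sensor array.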
 
  • #3
Well, I had been looking at simplified pictures like the first one on this page, and similar. It sounded like 2 birefringent plates for a total of 4 images, which would then correspond to the 4 elements of the bayer matrix. I suppose I'm trying to understand the size relationship between an element of the matrix and an element of the CCD.
 
  • #4
mishima said:
I'm confused about optical low pass filters (found on CCD imaging devices). As I understand it, it is a double layer of birefringent material which in effect splits light into 4: (2*25%) green, 25% red, and 25% blue (for a standard bayer matrix). The part I'm not sure about is if all 4 fall on the same CCD element, or if they are directed to 1 element each.

If the latter, are the 4 computationally combined as a pixel?

mishima said:
Well, I had been looking at simplified pictures like the first one on this page, and similar. It sounded like 2 birefringent plates for a total of 4 images, which would then correspond to the 4 elements of the bayer matrix. I suppose I'm trying to understand the size relationship between an element of the matrix and an element of the CCD.

The article at HowStuffWorks has a reasonable (but simple) introduction to the various ways you can get the color images from digital sensor arrays. Check out this page in particular:

http://electronics.howstuffworks.com/cameras-photography/digital/digital-camera5.htm

:smile:
 
  • #5
  • #6
I have read that Wiki about the birefringent filter and it appears to spread each incident 'ray' into a set of rays around the main direction - a discrete filtering process - very clever. From the description, I don't think that is particularly related to the colour sensor grouping; each set of sensors is subsampling the projected image and the anti alias filter will present it with contributions from around the nominal central point - giving a LP effect. It is pretty crude - but effective. It is probably limiting the resolution more than necessary.

I also found that the more modern Digital Cameras are able to do without any anti aliasing filter because the pixel density is higher than the spatial frequencies in the image arriving from the lens. There is a fundamental limit (diffraction) for the resolution of the optics (it's always LP filtered by the optics) so all that's necessary is to sample well within this limit. The digital processing can be a lot smarter than bits of birefringent material, placed in front. It should be able to squeeze even better resolution from the optics than merely increasing the pixel density.
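As a rough sense of scale for that diffraction argument, here is a short sketch with assumed, typical numbers (not figures from the thread): the first-null diameter of the Airy disk is about 2.44 λN for wavelength λ and f-number N, which can be compared with a sensor's pixel pitch.

```python
# Back-of-envelope comparison of the diffraction-limited spot with pixel pitch.
# All numbers are assumed, typical values, purely for illustration.
wavelength_um = 0.55        # green light, in micrometres
f_number = 8.0              # a moderately stopped-down aperture

airy_um = 2.44 * wavelength_um * f_number      # Airy disk first-null diameter
print(f"Airy disk at f/{f_number:.0f}: about {airy_um:.1f} um across")

for pitch_um in (7.8, 4.9, 3.9):  # roughly 6 MP APS-C, 36 MP full frame, 24 MP APS-C
    print(f"  pixel pitch {pitch_um} um -> spot covers {airy_um / pitch_um:.1f} pixels")
```

Once the diffraction spot spans a few pixels, the optics are already doing much of the low pass filtering, which is the argument for leaving the birefringent filter out on very dense sensors.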
 
  • #7
sophiecentaur said:
But wasn't the original question about the spatial anti aliasing filter? By definition, it is much coarser than the pixel pitch.

I'm not sure. He seemed to be mixing concepts, so I responded to this part of his question:

The part I'm not sure about is if all 4 fall on the same CCD element, or if they are directed to 1 element each.

:smile:
 
  • #8
Right, so light goes through the low pass and is split into four. Each one of the four falls into a different element of the bayer matrix, and each element of the bayer matrix corresponds to one element of the CCD in a raw image. However, in a processed image, because of a demosaicing algorithm there is no longer a one to one relationship. Is any of that grossly incorrect?
 
  • #9
sophiecentaur said:
I have read that Wiki about the birefringent filter and it appears to spread each incident 'ray' into a set of rays around the main direction - a discrete filtering process - very clever. From the description, I don't think that is particularly related to the colour sensor grouping;
The birefringent filter is quite carefully chosen to fit with the color sensor grouping. Ideally one wants the ray at a point to be sampled by all color elements nearest the point, so that full RGB color information can be recorded.

I also found that the more modern Digital Cameras are able to do without any anti aliasing filter because the pixel density is higher than the spatial frequencies in the image arriving from the lens. There is a fundamental limit (diffraction) for the resolution of the optics (it's always LP filtered by the optics) so all that's necessary is to sample well within this limit. The digital processing can be a lot smarter than bits of birefringent material, placed in front. It should be able to squeeze even better resolution from the optics than merely increasing the pixel density.
It remains a trade off. Good optics on current DSLRs are still quite capable of producing visible moire effects at "sharp" apertures. Digital anti-aliasing software has certainly improved, but as you know there is no "surefire" way of suppressing aliasing after a signal has been sampled.

Further increasing the pixel density would be a fairly straightforward way to push the AA filter beyond the limits of the optics entirely.
 
  • #10
mishima said:
Right, so light goes through the low pass and is split into four. Each one of the four falls into a different element of the bayer matrix, and each element of the bayer matrix corresponds to one element of the CCD in a raw image. However, in a processed image, because of a demosaicing algorithm there is no longer a one to one relationship. Is any of that grossly incorrect?

That's more or less correct.

In practice the LP (these days) tends to be slightly "weak," so that there is less loss of resolution at spatial frequencies of interest, but with the disadvantage of possible aliasing artifacts under some circumstances.
 
  • #11
olivermsun said:
That's more or less correct.

In practice the LP (these days) tends to be slightly "weak," so that there is less loss of resolution at spatial frequencies of interest, but with the disadvantage of possible aliasing artifacts under some circumstances.
Is it correct? It sounds to me that he is saying that one pixel gets just one pixel's worth of image - just distributed over the colour sensors. That doesn't correspond to any filtering - just splitting the light so that it falls on each of the colour sensors. I am not sure what that will achieve. Surely the point of splitting the incident light is to spread it over several pixels to constitute spatial low pass filtering (blurring).
Is that not just because the sensor resolution is nearer that of the optics - which has not changed since 6Mpx arrays, with all good SLR cameras?
 
  • #12
sophiecentaur said:
It sounds to me that he is saying that one pixel gets just one pixel's worth of image - just distributed over the colour sensors. That doesn't correspond to any filtering - just splitting the light so that it falls on each of the colour sensors.
Each of the 4 colored squares in the Bayer pattern (RGBG) has a separate CCD (or CMOS) sensor site underneath it. The "raw image" just contains the readout from all of the (color filtered) sites. That's the 1-to-1 correspondence I was agreeing with.

To get a full color image, you'd then have to combine information from neighboring sites in some sort of interpolation since you only measured G at 2 positions and R and B at one position each, but you want to output an image with RGB information at all 4 sites.
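A very rough sketch of that interpolation step, assuming an RGGB layout and plain neighbour averaging (real demosaicing algorithms are far more sophisticated, typically edge-aware):

```python
import numpy as np

def demosaic_bilinear(raw):
    """Crude demosaic of a single-channel Bayer frame (assumed RGGB layout).

    Measured values are kept; the two missing colours at each site are
    filled with the mean of the same-colour sites in the 3x3 neighbourhood.
    """
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True      # R sites
    masks[0::2, 1::2, 1] = True      # G sites on R rows
    masks[1::2, 0::2, 1] = True      # G sites on B rows
    masks[1::2, 1::2, 2] = True      # B sites

    rgb = np.zeros((h, w, 3))
    for c in range(3):
        values = np.where(masks[..., c], raw, 0.0)
        counts = masks[..., c].astype(float)
        pad_v, pad_c = np.pad(values, 1), np.pad(counts, 1)
        # 3x3 neighbourhood sums built from shifted slices (no SciPy needed)
        sum_v = sum(pad_v[i:i + h, j:j + w] for i in range(3) for j in range(3))
        sum_c = sum(pad_c[i:i + h, j:j + w] for i in range(3) for j in range(3))
        rgb[..., c] = np.where(masks[..., c], raw, sum_v / np.maximum(sum_c, 1.0))
    return rgb
```

For example, rgb = demosaic_bilinear(raw_frame) turns a 2-D array of raw site readings into an H x W x 3 image, which is the stage at which the one-to-one raw correspondence is lost.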

That doesn't correspond to any filtering - just splitting the light so that it falls on each of the colour sensors. I am not sure what that will achieve. Surely the point of splitting the incident light is to spread it over several pixels to constitute spatial low pass filtering (blurring).
Well, as you know, the light (that would have arrived) at a point has to be split so that it is sampled by all 4 color-filtered sites in order to record RGB information. The combination of splitting and the area integration of the sensor sites effectively gives a spatial lowpass filter. Avoiding color aliasing requires a stronger lowpass since the effective sampling rate for color is only 1/2 that for luminance. Also, the optical lowpass isn't an ideal filter—there's always some rolloff below the frequency where you actually want to cut off. In some cases people view this as an unnecessary "waste" of potential resolution.

Is that not just because the sensor resolution is nearer that of the optics - which has not changed since 6Mpx arrays, with all good SLR cameras?
Well, aliasing definitely becomes less of a problem as the sensor sampling frequencies get higher. Then lens flaws and/or diffraction, plus any imperfect technique, tend to provide enough low passing, in which case the physical LPF is redundant (and detrimental to resolution).

The optics have slowly changed since the 6 MP days, but even with good "old" lenses you can still see aliasing artifacts in certain scenes shot with 36 MP full frame sensors or even the 24 MP APS-C sensors.
 
  • #13
Most of this makes sense to me but any LP filtering must involve spreading the received image over more than one pixel. The fact that there are multiple colour sensors and that there is some summing of the colour sensor outputs is not relevant to that because that processing is after the sampling (too late to remove artefacts). The anti aliasing must be before the sampling - either by the transfer function of the lens or by a diffusing layer on top. I cannot see how the light can be split selectively between the three sensors (except in as far as it is diffused in some way). I am presumably misreading some of this but the OP seems to be assuming that only long wavelength light lands on the long wavelength sensor. I don't see how this can be achieved when light arrives at all angles - it's not like a shadow mask in an old colour TV display tube where R, G and B phosphors are only hit with the appropriate electron beam. Can you resolve this for me?
 
  • #14
sophiecentaur said:
I am presumably misreading some of this but the OP seems to be assuming that only long wavelength light lands on the long wavelength sensor. I don't see how this can be achieved when light arrives at all angles - it's not like a shadow mask in an old colour TV display tube where R, G and B phosphors are only hit with the appropriate electron beam. Can you resolve this for me?
There's a Color filter array (CFA) in front of the sensor, so the sensor is only picking up light in the desired passband at each of the four locations (r/g/b/g).

It is a little like having RGB phosphors, but in reverse!
 
  • #15
olivermsun said:
There's a Color filter array (CFA) in front of the sensor, so the sensor is only picking up light in the desired passband at each of the four locations (r/g/b/g).

It is a little like having RGB phosphors, but in reverse!

Yes - but no.
The phosphors are hit by electrons from just one direction (the shadow mask and electron optics ensure this) so the systems are not comparable; there is no pre-filtering and the electron projection system is relatively 'dumb'. A colour filter just in front of each sensor will do the basic colour analysis but there is no directivity there and it has nothing to do with spatial filtering - which must be at pre-sampling level. This is my problem with what the OP appears to say and what you seem to be agreeing with. My issue has nothing to do with the colour analysis and would equally apply to a monochrome array, which would still need an anti alias filter. The birefringent system could work on a monochrome array as long as its output spans more than one pixel; the fact that, in a colour array, the different filters are offset by up to a pixel spacing, doesn't have anything fundamental to do with the basic spatial filter. The 'how stuff works' link doesn't actually deal with pre filtering (afaics) but just talks about the matrixing.
 
  • #16
sophiecentaur said:
Yes - but no.
The phosphors are hit by electrons from just one direction (the shadow mask and electron optics ensure this) so the systems are not comparable;
The light that passes through the camera lens has a relatively small incidence angle at the sensor, so there is some analogy. Recent lenses designed for digital sensors tend to deliberately limit this angle.

there is no pre-filtering and the electron projection system is relatively 'dumb'. A colour filter just in front of each sensor will do the basic colour analysis but there is no directivity there and it has nothing to do with spatial filtering - which must be at pre-sampling level. This is my problem with what the OP appears to say and what you seem to be agreeing with. My issue has nothing to do with the colour analysis and would equally apply to a monochrome array, which would still need an anti alias filter. The birefringent system could work on a monochrome array as long as its output spans more than one pixel; the fact that, in a colour array, the different filters are offset by up to a pixel spacing, doesn't have anything fundamental to do with the basic spatial filter.
The color filters explain why, e.g., only red light is assumed to be detected at the “red” photo sites, which is something you seemed to question in your previous post. The arrangement of color sites is significant because it sets the 1-pixel width of the birefringent split that is needed to avoid color aliasing.

Meanwhile, the pixel integration is effectively a filter with a null at the sampling frequency Fs. The 1-pixel-wide birefringent split produces a null at Fs/2. Combine them and you have a reasonable physical realization of a low pass filter. The downside is the rolloff below Fs/2.
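A small sketch of those two responses, assuming a 100% fill-factor pixel (a box aperture one pitch wide) and an ideal two-point split exactly one pixel apart, with the pitch normalised so that Fs = 1:

```python
import numpy as np

# Idealised 1-D transfer functions with pixel pitch = 1, i.e. Fs = 1 cycle/pixel.
f = np.linspace(0.0, 1.0, 11)            # spatial frequency in units of Fs

mtf_pixel = np.abs(np.sinc(f))           # box pixel aperture: first null at Fs
mtf_split = np.abs(np.cos(np.pi * f))    # two points one pixel apart: null at Fs/2
mtf_total = mtf_pixel * mtf_split        # the combined physical low pass filter

for fi, m in zip(f, mtf_total):
    print(f"f = {fi:.1f} Fs  ->  response {m:.2f}")
```

The combined response is already down to about 0.5 by 0.3 Fs, which is the rolloff below Fs/2 referred to above.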

This could also work in a monochrome system, but since "jaggies" tend to be less objectionable than color aliasing, the few monochrome cameras out there, along with cameras based on the Foveon X3 sensor (which has stacked color-sensitive photo sites), tend to omit the LP filter completely and just accept the artifacts in return for the extra response below Fs/2, relative to a filtered sensor.
 
  • #17
I am getting there slowly.
I understand about the wavelength selective 'colour' filters (obviously I didn't make my point well enough) but I still can't get the idea of the optics pushing all light from one direction in a scene into one direction of arrival on the sensor. It certainly doesn't fit in with the elementary lens stuff we start with - which gives a cone of arrival, stretching from edge to edge of a single lens. I know that there is an internal aperture in a multi element lens but even that subtends an angle of more than just a few degrees. I guess I need a reference to this that's better than a Howstuffworks source.
A filter with such a small 'aperture' as you describe seems to be very limiting (i.e. the rolloff you mention). Think of the trouble that they go to in order to get good anti-aliasing filters in normal signal sampling. I guess it's all down to practicalities and what can be achieved under the circumstances, but the rolloff must be seriously relevant at high frequencies.
I appreciate this tutorial you're giving me. I'd give you a thanks if I could find the right button lol
 
  • #18
Thanks, this is an amazing (super useful) conversation. I'm mainly interested in astronomical CCDs and using photon counts (photon flux) to infer stellar properties. Exactly how the photons reach the CCD is of course central, yet most (undergrad) sources ignore these practical components.
 
  • #19
mishima said:
Thanks, this is an amazing (super useful) conversation. I'm mainly interested in astronomical CCDs and using photon counts (photon flux) to infer stellar properties. Exactly how the photons reach the CCD is of course central, yet most (undergrad) sources ignore these practical components.
I am not sure how relevant that is. The image will surely just be formed by straightforward wave diffraction (as in all optical instruments). The pattern formed by low power images (the statistics of sparse photon numbers) would still follow the basic shape of a higher intensity image (surely?).
 
  • #20
My understanding is that the double birefringent layer splits each point of the image into four different points spaced slightly apart. The light then hits the color filters and is filtered appropriately. Since real images are composed of a continuous set of an infinite number of points, this has the effect of making some of the light from each point fall on multiple sensors instead of just one. In other words, the image is blurred slightly, with the amount of blurring dependent on the amount of separation between each of the four optical points.

For example, if the resolution of an imaging system gives us an Airy disk for light exactly equal to the size of each pixel on the sensor, then practically all of the light from some of the points of the image will fall only on one of these pixels. The number of these "special" points will be equal to the number of pixels on the sensor. Points of the image in between these special points will have their light split between two or more pixels. Because our resolution is limited, spatial frequencies higher than our resolution are lost since the sensor can't tell them apart no matter how small the sensor's pixels are.

When the resolution of the imaging system gives us an Airy disk significantly smaller than each pixel, then we can come across aliasing since our optical system can separate the points of these higher spatial frequencies but our sensor can't. The sensor records only "discrete" portions of the image, not the continuous range, so these high frequency patterns (higher than the spatial frequency the sensor samples at) manifest as abrupt changes in the resulting sampled image. This is why shifting to a sensor with a higher number of smaller pixels decreases aliasing.

When you have a color filter matrix, the problem of aliasing becomes much more severe since each pixel will receive light predominantly within a small range of wavelengths and each pixel of the same color is typically separated from other pixels of the same color by at least one pixel. This is the same as having a large spacing between pixels of a monochrome sensor. Light that falls between the pixels is simply lost, so you have a case where the sampling done by the sensor is much lower than the spatial resolution of the optical device.

Having an optical low pass filter helps because it blurs the entire image, effectively increasing the size of the Airy disk so that light from some of the areas of the image that is normally lost is instead captured by a pixel. In addition, the higher spatial frequencies are filtered out by this blurring since it increases the amount of separation required between two points of the image to see them as separate points. Splitting the light from each point into four beams means you are literally projecting four different images onto a single sensor, with each image slightly offset from the others.
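A 1-D numerical version of that blurring argument (pixel pitch 1, one test frequency just above the Nyquist limit, and only a two-point split for simplicity; the numbers are illustrative): averaging each ray with a copy displaced by one pixel strongly attenuates the detail that would otherwise fold back as an alias.

```python
import numpy as np

# Pixel pitch = 1, so the Nyquist limit is 0.5 cycles/pixel. A scene detail at
# 0.6 cycles/pixel aliases to 0.4 cycles/pixel once sampled. Compare sampling it
# directly with sampling it after a two-point split (one copy shifted by 1 pixel).
f_scene = 0.6
n = np.arange(1000)                          # pixel centre positions

def scene(x):
    return np.cos(2 * np.pi * f_scene * x)

direct = scene(n)                            # no filter in front of the sensor
split = 0.5 * (scene(n) + scene(n - 1.0))    # idealised two-point split

def amplitude(samples):
    return np.sqrt(2.0) * samples.std()      # amplitude of a zero-mean sinusoid

print(f"aliased amplitude, no filter:  {amplitude(direct):.2f}")   # about 1.00
print(f"aliased amplitude, with split: {amplitude(split):.2f}")    # about 0.31
```

The aliased pattern is not removed completely - a two-point split only has a true null exactly at half the sampling frequency - but it is much weaker, and the second, perpendicular split in a real double-layer filter does the same job in the other direction.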

To my understanding one of the things an OLPF doesn't do is that it doesn't split the light from all points on the image into four beams with each beam landing exactly on one color pixel. A single point at a particular location on the image may have this happen if the system is set up to do so, but if you look at another nearby point you will find that the light is split into four beams with each beam falling on more than one pixel.

That's the way I understand the working of an optical low pass filter. As always, someone correct me if I'm wrong.
 
  • #21
mishima said:
Thanks, this is an amazing (super useful) conversation. I'm mainly interested in astronomical CCDs and using photon counts (photon flux) to infer stellar properties. Exactly how the photons reach the CCD is of course central, yet most (undergrad) sources ignore these practical components.

I can understand why they are ignored. When doing photometry and other work with astronomical imaging, targets are usually chosen so that the effects of aliasing are minimized. High-resolution telescopes and sensors are used for work where their resolution can be used to its fullest. Lower resolution telescopes and sensors are used in situations where they perform adequately for the task at hand and you don't need super high resolution and sampling. Given that the resolution and sampling details of each system are known prior to using it, you typically don't need to look into the details of how the light reaches the sensor unless you're in the process of designing a telescope or working on a project where that information is needed.
 
  • #22
@Drakkith That's how I see it too. There is no way that you could arrange for light, arriving in what must be a cone from the lens*, to be split so as to ensure that red from one edge and red from the other side arrives at just one part of the sensor array. The blurring (splitting the light by the birefringent layer) needs to ensure that light is spread over an area that's at least two pixels (i.e. triads of pixels) in width if aliasing is to be avoided. The colour splitting has nothing to do with this. This is why I introduced my comparison with (and distinction from) a colour TV tube and the phosphor dots.
It is interesting (but obvious when you think about it) that the problem becomes progressively less as the pixel density goes up and exceeds the resolving power of lenses.
I understood that digital camera lenses are designed with different optics because the directivity pattern of the sensors is different from that of (omnidirectional) film elements and it helps to reduce vignetting effects at the edges.
*I have some personal evidence of this in that the occasional bit of dust on the sensor is always much more visible when using a small aperture than when using a large one - implying that spread of direction of arrival can be relevant at wide apertures.
 

1. What is an Optical Low Pass Filter (OLPF) and why is it important in digital photography?

An Optical Low Pass Filter, also known as an Anti-Aliasing Filter, is a component in digital cameras that is placed in front of the image sensor (CCD or CMOS) to reduce the occurrence of moiré patterns and false colors in a digital image. These artifacts can occur when photographing fine patterns or textures, such as a woven fabric, whose detail is too fine for the sensor to sample accurately. OLPFs are important in digital photography because they help to produce cleaner, more accurate images that are free of these sampling artifacts.

2. How does an OLPF work?

An OLPF works by slightly blurring the incoming light before it reaches the image sensor. In the common design, two thin layers of birefringent material split each incoming ray into four copies displaced by roughly one pixel horizontally and vertically. This blurring, or low pass filtering, removes the very fine detail that would otherwise cause moiré and false colors. The blur is almost imperceptible to the human eye, but it is enough to prevent these artifacts from appearing in the sampled image.
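As a loose illustration of that splitting, idealising the double layer as four equal copies displaced by one pixel horizontally and vertically (on the sampled grid this behaves like a 2x2 averaging kernel; the real displacement and weights depend on the filter design):

```python
import numpy as np

def olpf_blur(image):
    """Idealised four-point split: each pixel is averaged with its right,
    lower and lower-right neighbours (edge rows/columns are replicated)."""
    padded = np.pad(image, ((0, 1), (0, 1)), mode="edge")
    return 0.25 * (padded[:-1, :-1] + padded[:-1, 1:] +
                   padded[1:, :-1] + padded[1:, 1:])

# A one-pixel-wide bright line is spread over two pixel columns:
img = np.zeros((4, 4))
img[:, 2] = 1.0
print(olpf_blur(img))
```

Spreading every point of the scene over a small neighbourhood of sensor sites in this way is what removes the finest detail before it can be mis-sampled.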

3. What is the difference between an OLPF and a UV filter?

The two sit in different places and serve different purposes. As mentioned, an OLPF is mounted directly in front of the image sensor and is designed to reduce moiré and false colors in digital images. A UV filter, on the other hand, attaches to the front of the lens and is primarily used to protect it from scratches, dust, and other potential damage. UV filters also block ultraviolet light, which can cause a bluish cast in photographs taken in bright sunlight. OLPFs are only found in digital cameras, while UV filters can be used on both film and digital cameras.

4. Are there any downsides to using an OLPF?

The main downside to using an OLPF is that it reduces the overall sharpness of an image, as it is designed to slightly blur the incoming light. This may not be noticeable in everyday photography, but it can be a concern for professionals or photographers who require the highest level of sharpness in their images. Some digital cameras allow you to disable the OLPF, but this can increase the risk of moiré and false colors appearing in the final image.

5. Can an OLPF be removed or replaced?

It is possible to remove or replace an OLPF in some digital cameras, but this should only be done by a trained professional. Removing the OLPF can result in improved image sharpness, but it also increases the risk of moiré and false colors in photos. Some manufacturers may offer different versions of their cameras with or without an OLPF, so it is best to research and compare before making a purchase.
