Optical Low Pass Filter, how does it hit CCD?

  1. Sep 15, 2014 #1
    I'm confused about optical low pass filters (found on CCD imaging devices). As I understand it, it is a double layer of birefringent material which in effect splits the light four ways: (2 × 25%) green, 25% red, and 25% blue (for a standard Bayer matrix). The part I'm not sure about is whether all four fall on the same CCD element, or whether they are directed to one element each.

    If the latter, are the 4 computationally combined as a pixel?
     
  3. Sep 15, 2014 #2

    sophiecentaur

    Science Advisor
    Gold Member

    Afaik, the optical filter is just a 'frosted' layer with the appropriate width of corrugations to blur the image appropriately. I can't think it would be in any way synchronised with the positions of the sensor elements. It shouldn't need to be, as the cut-off spatial frequency is less than half the sampling frequency of the sensor array (it's there for anti-aliasing, of course). There would be serious registration issues between the pixel array and the LPF elements, and I can't think that it would be advantageous.
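    In sampling terms that is just the usual Nyquist condition (a sketch, with $p$ the pixel pitch and $F_s = 1/p$ the sensor's spatial sampling frequency):

    \[
      f_c \;<\; \frac{F_s}{2} \;=\; \frac{1}{2p}
    \]

    i.e. the filter's cut-off spatial frequency $f_c$ has to sit below half the rate at which the sensor samples the image.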
    Did you have a reason for your suggestion or were you just 'thinking aloud' (like we all do)?
     
  4. Sep 15, 2014 #3
    Well, I had been looking at simplified pictures like the first one on this page, and similar. It sounded like two birefringent plates giving a total of four images, which would then correspond to the four elements of the Bayer matrix. I suppose I'm trying to understand the size relationship between an element of the matrix and an element of the CCD.
     
  5. Sep 15, 2014 #4

    berkeman


    Staff: Mentor

    The article at HowStuffWorks has a reasonable (but simple) introduction to the various ways you can get the color images from digital sensor arrays. Check out this page in particular:

    http://electronics.howstuffworks.com/cameras-photography/digital/digital-camera5.htm

    :smile:
     
  6. Sep 15, 2014 #5

    sophiecentaur

    Science Advisor
    Gold Member

    But wasn't the original question about the spatial anti-aliasing filter? By definition, it is much coarser than the pixel pitch.
     
  7. Sep 15, 2014 #6

    sophiecentaur

    Science Advisor
    Gold Member

    I have read that Wiki about the birefringent filter, and it appears to spread each incident 'ray' into a set of rays around the main direction - a discrete filtering process - very clever. From the description, I don't think it is particularly related to the colour sensor grouping; each set of sensors is subsampling the projected image, and the anti-alias filter will present it with contributions from around the nominal central point - giving an LP effect. It is pretty crude - but effective. It is probably limiting the resolution more than necessary.

    I have also found that more modern digital cameras are able to do without any anti-aliasing filter, because the pixel density is high enough that the sampling frequency exceeds the spatial frequencies in the image arriving from the lens. There is a fundamental limit (diffraction) to the resolution of the optics (the image is always LP filtered by the optics), so all that's necessary is to sample well within this limit. The digital processing can be a lot smarter than bits of birefringent material placed in front, and it should be able to squeeze even better resolution from the optics than merely increasing the pixel density would.
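    As a rough numerical sketch of that comparison (the wavelength, f-number and pixel pitch below are illustrative assumptions, not figures for any particular camera), one can simply compare the lens's diffraction cut-off with the sensor's Nyquist frequency:

```python
# Illustrative assumptions, not figures from any particular camera:
wavelength_mm = 550e-6   # green light, 550 nm expressed in mm
f_number = 8.0           # lens aperture f/8
pixel_pitch_mm = 4.5e-3  # 4.5 micron pixel pitch

# Diffraction-limited cut-off of the lens (incoherent imaging):
# the optics pass no spatial frequency above 1 / (lambda * N).
optics_cutoff = 1.0 / (wavelength_mm * f_number)   # cycles/mm

# Nyquist frequency of the sensor: half the sampling frequency 1/p.
sensor_nyquist = 1.0 / (2.0 * pixel_pitch_mm)      # cycles/mm

print(f"lens cutoff   : {optics_cutoff:7.1f} cycles/mm")
print(f"sensor Nyquist: {sensor_nyquist:7.1f} cycles/mm")

if sensor_nyquist >= optics_cutoff:
    print("Sampling is fine enough: the lens itself acts as the low-pass filter.")
else:
    print("The lens can deliver detail above Nyquist: aliasing is possible "
          "without an optical low-pass filter.")
```

    With these illustrative numbers the lens still out-resolves the sensor at f/8 (about 227 against 111 cycles/mm); only around f/16 does diffraction alone take over the low-pass role for a 4.5 micron pitch.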
     
  8. Sep 15, 2014 #7

    berkeman


    Staff: Mentor

    I'm not sure. He seemed to be mixing concepts, so I responded to this part of his question:

    :smile:
     
  9. Sep 15, 2014 #8
    Right, so light goes through the low-pass filter and is split into four. Each of the four falls onto a different element of the Bayer matrix, and each element of the Bayer matrix corresponds to one element of the CCD in a raw image. However, in a processed image, because of the demosaicing algorithm, there is no longer a one-to-one relationship. Is any of that grossly incorrect?
     
  10. Sep 15, 2014 #9

    olivermsun

    Science Advisor

    The birefringent filter is quite carefully chosen to fit with the color sensor grouping. Ideally one wants the ray at a point to be sampled by all color elements nearest the point, so that full RGB color information can be recorded.

    It remains a trade-off. Good optics on current DSLRs are still quite capable of producing visible moiré effects at "sharp" apertures. Digital anti-aliasing software has certainly improved, but as you know there is no "surefire" way of suppressing aliasing after a signal has been sampled.

    Further increasing the pixel density would be a fairly straightforward way to push the required anti-aliasing cut-off beyond the limits of the optics entirely.
     
  11. Sep 15, 2014 #10

    olivermsun

    Science Advisor

    That's more or less correct.

    In practice the LP filter (these days) tends to be slightly "weak," so that there is less loss of resolution at the spatial frequencies of interest, but with the disadvantage of possible aliasing artifacts under some circumstances.
     
  12. Sep 19, 2014 #11

    sophiecentaur

    Science Advisor
    Gold Member

    Is it correct? It sounds to me as if he is saying that one pixel gets just one pixel's worth of image - just distributed over the colour sensors. That doesn't correspond to any filtering - just splitting the light so that it falls on each of the colour sensors - and I am not sure what that would achieve. Surely the point of splitting the incident light is to spread it over several pixels to constitute spatial low-pass filtering (blurring)?
    Is that not just because the sensor resolution is now nearer that of the optics - which has not changed since the 6 Mpx arrays in all good SLR cameras?
     
  13. Sep 19, 2014 #12

    olivermsun

    Science Advisor

    Each of the 4 colored squares in the Bayer pattern (RGBG) has a separate CCD (or CMOS) sensor site underneath it. The "raw image" just contains the readout from all of the (color filtered) sites. That's the 1-to-1 correspondence I was agreeing with.

    To get a full color image, you'd then have to combine information from neighboring sites using some sort of interpolation, since you only measured G at two positions and R and B at one position each, but you want to output an image with RGB information at all 4 sites.
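    As a minimal sketch of what that interpolation step looks like (assuming an RGGB layout and plain bilinear averaging purely for illustration; real in-camera demosaicing is considerably more sophisticated):

```python
import numpy as np

def bayer_masks(shape):
    """Boolean masks for an assumed RGGB Bayer layout."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    g = (y % 2) != (x % 2)          # the two green sites per 2x2 block
    b = (y % 2 == 1) & (x % 2 == 1)
    return r, g, b

def neighbor_average(raw, mask):
    """Estimate one color channel everywhere: keep the measured samples
    (where mask is True) and fill the gaps with the average of the measured
    3x3 neighbors (normalized convolution, numpy only)."""
    chan = raw * mask                       # this channel's samples, zeros elsewhere
    vals = np.zeros_like(raw, dtype=float)
    cnts = np.zeros_like(raw, dtype=float)
    padded_v = np.pad(chan, 1)
    padded_m = np.pad(mask.astype(float), 1)
    h, w = raw.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            vals += padded_v[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            cnts += padded_m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    out = vals / np.maximum(cnts, 1)
    out[mask] = raw[mask]                   # measured samples stay untouched
    return out

def demosaic_bilinear(raw):
    """Turn a single-channel RGGB mosaic into a full RGB image."""
    r_m, g_m, b_m = bayer_masks(raw.shape)
    return np.dstack([neighbor_average(raw.astype(float), m)
                      for m in (r_m, g_m, b_m)])

# Tiny usage example on a synthetic 4x4 mosaic:
raw = np.arange(16, dtype=float).reshape(4, 4)
rgb = demosaic_bilinear(raw)
print(rgb.shape)   # (4, 4, 3): RGB estimated at every photosite
```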

    Well, as you know, the light (that would have arrived) at a point has to be split so that it is sampled by all 4 color-filtered sites in order to record RGB information. The combination of splitting and the area integration of the sensor sites effectively gives a spatial lowpass filter. Avoiding color aliasing requires a stronger lowpass since the effective sampling rate for color is only 1/2 that for luminance. Also, the optical lowpass isn't an ideal filter—there's always some rolloff below the frequency where you actually want to cut off. In some cases people view this as an unnecessary "waste" of potential resolution.

    Well, aliasing definitely becomes less of a problem as the sensor sampling frequencies get higher. Then lens flaws and/or diffraction, plus any imperfect technique, tend to provide enough low-passing, in which case the physical LPF is redundant (and detrimental to resolution).

    The optics have slowly changed since the 6 MP days, but even with good "old" lenses you can still see aliasing artifacts in certain scenes shot with 36 MP full frame sensors or even the 24 MP APS-C sensors.
     
  14. Sep 19, 2014 #13

    sophiecentaur

    Science Advisor
    Gold Member

    Most of this makes sense to me, but any LP filtering must involve spreading the received image over more than one pixel. The fact that there are multiple colour sensors and that there is some summing of the colour sensor outputs is not relevant to that, because that processing happens after the sampling (too late to remove artefacts). The anti-aliasing must come before the sampling - either through the transfer function of the lens or through a diffusing layer on top. I cannot see how the light can be split selectively between the three sensors (except in as far as it is diffused in some way). I am presumably misreading some of this, but the OP seems to be assuming that only long wavelength light lands on the long wavelength sensor. I don't see how this can be achieved when light arrives at all angles - it's not like the shadow mask in an old colour TV display tube, where the R, G and B phosphors are only hit by the appropriate electron beam. Can you resolve this for me?
     
  15. Sep 19, 2014 #14

    olivermsun

    Science Advisor

    There's a Color filter array (CFA) in front of the sensor, so the sensor is only picking up light in the desired passband at each of the four locations (r/g/b/g).

    It is a little like having RGB phosphors, but in reverse!
     
  16. Sep 19, 2014 #15

    sophiecentaur

    Science Advisor
    Gold Member

    Yes - but no.
    The phosphors are hit by electrons from just one direction (the shadow mask and electron optics ensure this), so the systems are not comparable; there is no pre-filtering and the electron projection system is relatively 'dumb'. A colour filter just in front of each sensor will do the basic colour analysis, but there is no directivity there, and it has nothing to do with spatial filtering - which must happen at the pre-sampling level. This is my problem with what the OP appears to be saying and what you seem to be agreeing with. My issue has nothing to do with the colour analysis and would apply equally to a monochrome array, which would still need an anti-alias filter. The birefringent system could work on a monochrome array as long as its output spans more than one pixel; the fact that, in a colour array, the different filters are offset by up to a pixel spacing, doesn't have anything fundamental to do with the basic spatial filter. The HowStuffWorks link doesn't actually deal with pre-filtering (afaics) but just talks about the matrixing.
     
  17. Sep 20, 2014 #16

    olivermsun

    Science Advisor

    The light that passes through the camera lens has a relatively small incidence angle at the sensor, so there is some analogy. Recent lenses designed for digital sensors tend to deliberately limit this angle.

    The color filters explain why, e.g., only red light is assumed to be detected at the “red” photo sites, which is something you seemed to question in your previous post. The arrangement of color sites is significant because it sets the 1-pixel width of the birefringent split that is needed to avoid color aliasing.

    Meanwhile, the pixel integration is effectively a filter with a null at the sampling frequency Fs. The 1-pixel-wide birefringent split produces a null at Fs/2. Combine them and you have a reasonable physical realization of a low pass filter. The downside is the rolloff below Fs/2.
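    Written out in one dimension (a sketch, taking the pixel aperture as a full-width box and the split as two equal beams one pixel pitch apart, and using $\operatorname{sinc}(x) = \sin(\pi x)/(\pi x)$):

    \[
      \mathrm{MTF}(f) \;=\;
      \underbrace{\left|\operatorname{sinc}\!\left(\frac{f}{F_s}\right)\right|}_{\text{pixel aperture, null at } F_s}
      \;\times\;
      \underbrace{\left|\cos\!\left(\frac{\pi f}{F_s}\right)\right|}_{\text{two-beam split, null at } F_s/2}
    \]

    The product falls off gradually towards $F_s/2$ rather than cutting off sharply, which is the roll-off just mentioned.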

    This could also work in a monochrome system, but since "jaggies" tend to be less objectionable than color aliasing, the few monochrome cameras out there, along with the Foveon X3 sensor (which has stacked color-sensitive photo sites), tend to eliminate the LP filter completely and just accept the artifacts in return for the extra response below Fs/2, relative to a filtered sensor.
     
  18. Sep 20, 2014 #17

    sophiecentaur

    Science Advisor
    Gold Member

    I am getting there slowly.
    I understand about the wavelength-selective 'colour' filters (obviously I didn't make my point well enough), but I still can't get the idea of the optics pushing all light from one direction in a scene into one direction of arrival on the sensor. It certainly doesn't fit in with the elementary lens stuff we start with - which gives a cone of arrival stretching from edge to edge of a single lens. I know that there is an internal aperture in a multi-element lens, but even that subtends an angle of more than just a few degrees. I guess I need a reference to this that's better than a HowStuffWorks source.
    A filter with such a small 'aperture' as you describe seems to be very limiting (i.e. the roll-off you mention). Think of the trouble that is taken to get good anti-aliasing filters in normal signal sampling. I guess it's all down to practicalities and what can be achieved under the circumstances, but the roll-off must be seriously relevant at high frequencies.
    I appreciate this tutorial you're giving me. I'd give you a thanks if I could find the right button, lol.
     
  19. Sep 22, 2014 #18
    Thanks, this is an amazing (super useful) conversation. I'm mainly interested in astronomical CCDs and using photon counts (photon flux) to infer stellar properties. Exactly how the photons reach the CCD is of course central, yet most (undergrad) sources ignore these practical components.
     
  20. Sep 27, 2014 #19

    sophiecentaur

    Science Advisor
    Gold Member

    I am not sure how relevant that is. The image will surely just be formed by straightforward wave diffraction (as in all optical instruments). The pattern formed by low power images (the statistics of sparse photon numbers) would still follow the basic shape of a higher intensity image (surely?).
     
  21. Sep 27, 2014 #20

    Drakkith


    Staff: Mentor

    My understanding is that the double birefringent layer splits each point of the image into four points spaced slightly apart. The light then hits the color filters and is filtered appropriately. Since a real image is composed of a continuous, effectively infinite set of points, this has the effect of making some of the light from each point fall on multiple sensor elements instead of just one. In other words, the image is blurred slightly, with the amount of blurring dependent on the separation between the four optical points.

    For example, if the resolution of an imaging system gives us an Airy disk exactly equal to the size of each pixel on the sensor, then practically all of the light from some of the points of the image will fall on only one of these pixels. The number of these "special" points will be equal to the number of pixels on the sensor. Points of the image in between these special points will have their light split between two or more pixels. Because the optical resolution is limited, spatial frequencies higher than the optics can deliver are lost, no matter how small the sensor's pixels are.

    When the resolution of the imaging system gives us an Airy disk significantly smaller than each pixel, we can run into aliasing, since our optical system can separate the points of these higher spatial frequencies but our sensor can't. The sensor records only "discrete" portions of the image, not the continuous range, so spatial frequencies above half the sensor's sampling frequency show up as spurious lower-frequency patterns (moiré) in the sampled image. Hence shifting to a sensor with a larger number of smaller pixels decreases aliasing.
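    For a sense of scale (a worked number with assumed values, green light at 550 nm and an f/4 aperture):

    \[
      d_{\mathrm{Airy}} \;\approx\; 2.44\,\lambda N \;=\; 2.44 \times 0.55\,\mu\mathrm{m} \times 4 \;\approx\; 5.4\,\mu\mathrm{m},
    \]

    which is comparable to a typical DSLR photosite, so opening the aperture up leaves room for aliasing, while stopping well down lets diffraction do the blurring on its own.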

    When you have a color filter matrix, the problem of aliasing becomes much more severe, since each pixel receives light predominantly within a small range of wavelengths and each pixel of a given color is typically separated from the next pixel of the same color by at least one pixel. This is the same as having a large spacing between the pixels of a monochrome sensor. Light that falls between those pixels is simply lost, so the sampling frequency of the sensor (per color) is much lower than the spatial resolution of the optics.

    Having an optical low pass filter helps because it blurs the entire image, effectively increasing the size of the Airy disk so that light from some areas of the image that would normally be lost is instead captured by a pixel. In addition, the higher spatial frequencies are filtered out by this blurring, since it increases the separation required between two points of the image before they can be seen as separate points. Splitting the light from each point into four beams means you are literally projecting four slightly offset copies of the image onto a single sensor.
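    A minimal numerical sketch of that "four offset copies" picture (treating the OLPF as a 2x2 average applied before Bayer sampling; the stripe pattern and the equal 25% weights are assumptions for illustration):

```python
import numpy as np

# Assumed test pattern: vertical stripes alternating every column, i.e. a
# signal at the full sensor's Nyquist frequency - the worst case for aliasing.
h, w = 8, 8
image = np.tile([1.0, 0.0], (h, w // 2))   # rows of 1, 0, 1, 0, ...

def olpf_split(img):
    """Model the two-plate birefringent OLPF as four copies of the image,
    offset by one photosite horizontally and/or vertically, each weighted
    25% (equivalent to a 2x2 box blur before sampling)."""
    out = np.zeros_like(img)
    padded = np.pad(img, ((0, 1), (0, 1)), mode='edge')
    for dy in (0, 1):
        for dx in (0, 1):
            out += 0.25 * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def red_sites(img):
    """Sample only the 'red' photosites of an RGGB mosaic (even rows/cols)."""
    return img[0::2, 0::2]

print("red sites, no OLPF:\n", red_sites(image))
print("red sites, with OLPF:\n", red_sites(olpf_split(image)))
# Without the OLPF every red site reads 1.0 (and every blue site would read
# 0.0): the stripe pattern aliases into a false solid color. With the 2x2
# split each site records the true 0.5 average, so the color artifact never
# gets sampled in the first place.
```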

    To my understanding, one thing an OLPF doesn't do is split the light from every point of the image into four beams with each beam landing exactly on one color pixel. A single point at a particular location in the image may have this happen if the system is set up that way, but if you look at another nearby point you will find that its light is split into four beams with each beam falling on more than one pixel.

    That's the way I understand the working of an optical low pass filter. As always, someone correct me if I'm wrong.
     
    Last edited: Sep 27, 2014