
B Cameras detecting Infrared and making it visible

  1. Mar 9, 2017 #1
    Hi All,

    It is known that ordinary cameras can make the infrared radiation emitted by remote controls visible on their own displays. My question is about the explanation of this fact. Is the electronic process simple to describe? It seems that the infrared produces photocurrents which are involved in the production of the RGB intensities.

    Best wishes,

    DaTario
     
  3. Mar 9, 2017 #2

    Charles Link

    User Avatar
    Homework Helper

    Instead of using an array of silicon photodiodes, an infrared camera will typically use an array of InSb (indium antimonide) photodiodes, because silicon does not respond in the infrared (except in the near IR, where most thermal sources do not radiate appreciably until they become very hot). InSb responds to the mid IR. The electronic process for both types of photodiodes is nearly the same. For longer IR wavelengths, HgCdTe (mercury cadmium telluride) photodiodes are sometimes used. There are a couple of other IR photodiodes that can be used (Ge, InAs, etc.), but the ones above are the most common.
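
    Why each material covers the range it does follows from its bandgap: a photon can only free an electron if its energy exceeds the gap, i.e. if its wavelength is at most ## \lambda_c = hc/E_g ##. A minimal sketch (the bandgap values below are standard approximate room-temperature figures, not taken from this thread):

    ```python
    # Detector cutoff wavelength from the bandgap: lambda_c = h*c / E_g.
    # With hc = 1239.84 eV*nm, lambda_c [nm] = 1239.84 / E_g [eV].
    HC_EV_NM = 1239.84

    bandgaps_ev = {       # approximate room-temperature bandgaps
        "Si":   1.12,     # silicon: responds only out to the near IR
        "InSb": 0.17,     # indium antimonide: reaches the mid IR
        "Ge":   0.66,     # germanium
    }

    def cutoff_nm(e_gap_ev):
        """Longest wavelength (nm) a photodiode with this bandgap can detect."""
        return HC_EV_NM / e_gap_ev

    for material, eg in bandgaps_ev.items():
        print(f"{material}: cutoff ~ {cutoff_nm(eg) / 1000:.2f} um")
    ```

    This reproduces the numbers in the post: silicon cuts off near 1.1 µm (the near IR), while InSb reaches past 7 µm into the mid IR.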
     
  4. Mar 9, 2017 #3
    Ok. But I would like to know how the infrared radiation is transported to the visible light inside the electronic apparatus.
     
  5. Mar 9, 2017 #4

    Drakkith

    User Avatar

    Staff: Mentor

    It isn't 'converted' to visible light. The infrared radiation consists of photons of a lower frequency than visible light. Assuming these IR sensors are similar to regular CCD sensors, these photons are absorbed inside each pixel on the sensor and excite electrons from one area of the pixel to another. These electrons are then "read out" and the voltage/current amplified to produce a discrete value for each pixel. This value is what is ultimately used to produce the corresponding image, with lighter areas consisting of more electron counts than darker areas. An image is then created on a screen where the brightness of each pixel in the image roughly corresponds to the electron count.

    The process is obviously more complicated, but that's the basic idea. At least for CCD and CCD-like sensors. Depending on the frequency range you're interested in, your sensor may operate differently than a CCD sensor.
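
    The read-out chain described above (photons absorbed per pixel, electrons counted, counts mapped to brightness) can be sketched in a few lines. The quantum efficiency and full-well values here are made-up illustrative numbers, not the specs of any real sensor:

    ```python
    # Toy model of a sensor read-out: photons absorbed in each pixel free
    # electrons, the electron count is digitized, and the pixel brightness
    # is mapped from that count.  All numbers are illustrative only.

    QUANTUM_EFFICIENCY = 0.5   # fraction of photons that free an electron
    FULL_WELL = 10_000         # electron count that saturates a pixel

    def read_out(photons_per_pixel):
        """Map incident photon counts to 8-bit brightness values."""
        image = []
        for photons in photons_per_pixel:
            electrons = min(int(photons * QUANTUM_EFFICIENCY), FULL_WELL)
            brightness = round(255 * electrons / FULL_WELL)  # lighter = more electrons
            image.append(brightness)
        return image

    # A bright IR LED spot against a dark background:
    print(read_out([100, 20_000, 400, 0]))   # -> [1, 255, 5, 0]
    ```

    The bright LED pixel saturates at 255 while the background pixels stay near zero, which is exactly the contrast you see on the camera display.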
     
  6. Mar 9, 2017 #5

    Andy Resnick

    User Avatar
    Science Advisor
    Education Advisor
    2016 Award

    Silicon, the material commonly used to make photodetectors in cameras, absorbs light out to about 1 micron wavelength- beyond the visible portion of the spectrum. Similarly, the Bayer filters used by manufacturers to create color images may allow some long wavelength light to pass. Remote controls typically use diodes that emit light around 880 nm or 950 nm, so that light can be detected by cameras. Because of this, most camera manufacturers add a 'short pass' filter (or 'IR cut filter') to the sensor chip that reflects wavelengths longer than about 750 nm.

    Going further out into the IR: mid-wave (3-5 microns), and long-wave (8-12 microns) requires different materials for photodetectors: InGaAs, HgCdTe and PtSi are some common ones. They also have to be cooled....
     
  7. Mar 9, 2017 #6

    Charles Link

    User Avatar
    Homework Helper

    It may be worth mentioning that the creation of the image by the "focal plane array" of photodiodes in any camera is a simple optical process: a converging lens is placed in front of the array, and an image of the scene being observed comes to focus in the plane in which the array of photodiodes resides (each photodiode is a pixel). If the scene is far away, it gets focused at the focal length of the lens; for a closer object distance ## b ##, it is often necessary to refocus the lens. You may have seen the formula ## \frac{1}{f}=\frac{1}{b}+\frac{1}{m} ##, which relates the object distance ## b ## and the image distance ## m ## at which focusing occurs (instead of being placed at a distance ## f ##, the array of photodiodes needs to be placed a distance ## m ## from the lens). The photodiodes typically respond with a photocurrent proportional to the incident power on a given pixel, so it is very easy to create a black-and-white type image that looks just like the scene being observed. In the infrared, that image will show contrast depending on the temperature and emissivity (basically the radiated IR) of the different regions of the scene. ## \\ ## Perhaps it is also worthwhile to mention that, as with any camera, if you can read out the pixels quickly enough, you can make video from the images, where the frame rate is typically 60 Hz.
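
    The refocusing step can be checked numerically with the thin-lens formula above, solved for the image distance ## m ##. The 50 mm focal length is just an example value:

    ```python
    # Thin lens: 1/f = 1/b + 1/m, with object distance b and image
    # distance m (same symbols as above).  Solving for m:
    #   m = f*b / (b - f)

    def image_distance(f, b):
        """Distance m behind the lens where an object at distance b focuses."""
        return f * b / (b - f)

    f = 50.0                               # example focal length, mm
    print(image_distance(f, 1e9))          # far scene: m is essentially f
    print(image_distance(f, 500.0))        # object at 500 mm: m ~ 55.6 mm,
                                           # so the array sits farther back
    ```

    For a distant scene the result is essentially ## f ## itself, confirming the statement that far-away scenes focus at the focal length.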
     
    Last edited: Mar 9, 2017
  8. Mar 9, 2017 #7
    I very much appreciate all the contributions. With regard to:
    "It isn't 'converted' to visible light." from Drakkith, I see no reason why the word conversion should be avoided, since we are facing a process where energy enters in infrared and appears as visible light at the end. It is certainly not the case of the "up conversion" as mentioned specifically in non linear optics (or quantum optics). However, I must add that I have used the vague word "transported" so as to avoid forcing the answer in any direction. Is there three photodiodes in these arrays, one for each one of the RGB scheme? Are each one of them covered with filters so as to respond basically to their particular range?
    Example: one very small photodiode in a CCD of a smartphone is the red photodiode, which has a filter that only allow small frequencies of the visible spectrum to pass. Other very small is a green one and there is also the blue one.

    Thank you all,

    Best Regards,

    DaTario
     
  9. Mar 9, 2017 #8

    Drakkith

    User Avatar

    Staff: Mentor

  10. Mar 9, 2017 #9
    Most applications probably use false color. Just mapping different intensities to different colors.
     
  11. Mar 9, 2017 #10

    davenn

    User Avatar
    Science Advisor
    Gold Member

    For a specialised FLIR camera, yes, that is true,
    but in the case of the OP using an ordinary camera, this isn't the situation.
     
  12. Mar 9, 2017 #11

    Charles Link

    User Avatar
    Homework Helper

    The simplest IR camera is just going to give black and white: basically gray levels that vary with intensity. Especially with digital-type cameras, these can easily be programmed for "false" color, as @Khashishi pointed out (e.g., the highest intensities could be shown as bright blue and high intensities as blue, while less intense regions could be made red and very low intensities a dull red; this is an example of a "false" color display that could easily be programmed into the data from the pixels). Only in a rather sophisticated IR camera would there be any sorting of colors to correspond to actual IR wavelengths or groups of wavelengths. That type of camera would normally require multiple detector arrays, with a different IR filter in front of each array, to determine how much energy is at various groups of wavelengths.
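
    The false-color scheme described here is just a lookup from gray level to an (R, G, B) value. This sketch uses an arbitrary red-to-blue ramp chosen only to illustrate the idea, not any real camera's palette:

    ```python
    # False color: map a single-channel intensity (0-255) to an RGB triple.
    # Low intensities -> dull-to-bright red, high intensities -> blue
    # (an arbitrary palette, purely for illustration).

    def false_color(gray):
        """Map an 8-bit gray level to an (R, G, B) tuple."""
        if gray < 128:                     # low intensities: shades of red
            return (64 + gray, 0, 0)       # dull red ramping brighter
        else:                              # high intensities: shades of blue
            return (0, 0, gray)            # up to bright blue at 255

    print(false_color(10))    # -> (74, 0, 0)    dull red
    print(false_color(255))   # -> (0, 0, 255)   bright blue
    ```

    Applying such a function to every pixel of a gray-level image is all it takes to turn a monochrome IR frame into the familiar colorful thermal display.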
     
  13. Mar 9, 2017 #12

    davenn

    User Avatar
    Science Advisor
    Gold Member


    You are not telling me anything I don't already know.
    You seem to be missing the whole point of the original post and, along with others, are making the explanation more complex than what the OP was wanting or needing.

    Dave
     