I Simulate other lenses with Time of Flight (ToF) camera

  1. May 5, 2018 #1
    Hello,

    I have a small project in which I try to use a Time of Flight camera to detect movement in the Field of View.
    Now the current Field of View is really narrow (about 25° diagonal FoV), which is why I will have to simulate a bigger one.

    My professor did try to explain to me what changes, but to be honest I could not follow his explanation, since I'm a real beginner when it comes to optics. Now that I have done some research on the topic I have a lot of questions, but I could not find any suitable explanation yet, so here I am :)

    So what I do understand is that a Time of Flight camera emits light with a transmitter and has a receiver, which measures the time of flight of the transmitted photons to get distance measurements.
    Now to the things I am not sure about.

    - If I would like to have a bigger FoV, do I need to use a lens on both the receiver and the transmitter, or only on the receiver?
    - What properties change when I use a different lens in my system (luminous flux, power consumption)?
    - Is there a way to calculate how these properties would change?
    - And my final question is about the brief introduction that my professor gave me, where I took a picture of a formula which I did not really understand (I appended it as a picture). I know that the formulas are about two different lenses with different FoV. But what exactly is calculated there? If it is the observed surface area of the different FoVs, shouldn't it be ##\varphi \propto (2\,d\,\tan(\frac{\alpha}{2}))^2##, where ##d## is the distance to the object and ##\alpha## is the Field of View?

    I know those are a lot of questions. But I hope someone can try to help me understand the theme a bit more.

    Thanks in advance
     

    Attached Files:

  3. May 5, 2018 #2

    sophiecentaur (Science Advisor, Gold Member)

    What do you mean by a "time of flight" camera? Are you measuring the time for a light pulse to cross a gap (that will be around one foot per nanosecond)?
    The optics can spread the pulse out if the distance travelled is not the same over the beam.
    Have I got it wrong?
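    (For scale, the round-trip times involved, a quick sketch:)

    Code (Python):
    C = 299_792_458.0  # speed of light in m/s

    # round-trip time for a light pulse, which is the quantity a ToF sensor measures
    for d in (0.1, 1.0, 3.0):  # distances in metres
        t_ns = 2 * d / C * 1e9
        print(f"{d} m -> {t_ns:.2f} ns round trip")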
     
  4. May 6, 2018 #3
    I'm using the VL53L1X camera from ST.
     
  5. May 6, 2018 #4

    mfb (Staff: Mentor)

    To detect something it has to be both in the beam of the light source and in view of the camera. I don't know what your 25 degrees refer to, but you'll have to consider both sides if you want a larger field of view.
    The flux, for sure. The time will change too; you'll have to re-calibrate the device if this is a relevant effect. You might need a stronger light source if the intensity goes down too much.
    Sure, you can simulate the optics.

    ##\sin \frac \alpha 2 \approx \tan \frac \alpha 2## if the angle is not too large, and for an "is proportional to" relation the factor 2 does not matter.
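    A quick numerical check in Python, if it helps (just a sketch; the 90° is an arbitrary wide-angle example, not a figure from your setup):

    Code (Python):
    import math

    # sin(a/2) vs tan(a/2) for the current FoV and a hypothetical wide one
    for fov_deg in (25.0, 90.0):
        half = math.radians(fov_deg) / 2
        print(f"FoV {fov_deg:4.0f} deg: sin(a/2) = {math.sin(half):.4f}, tan(a/2) = {math.tan(half):.4f}")

    # the observed area at distance d goes as (2*d*tan(a/2))^2, so the ratio
    # between two fields of view does not depend on d
    ratio = (math.tan(math.radians(90) / 2) / math.tan(math.radians(25) / 2)) ** 2
    print(f"area ratio, 90 deg vs 25 deg: {ratio:.1f}")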
     
  6. May 6, 2018 #5

    sophiecentaur (Science Advisor, Gold Member)

    From the spec sheet, it wasn't clear to me what algorithm the camera uses but it must be able to cope with missing returns at low levels.
    I guess the optics wouldn't be too critical because the resolution (error is around 20mm) is non-demanding.
    Everything is on the chip, afaics, so the emerging laser light will pass through the same lens. If you increase the field of view, it will involve both the illumination of the object and also the 'receiver gain'. I reckon that must mean doubling the distance would reduce the overall sensitivity to 1/16 (the inverse square law would apply twice, so 1/4 × 1/4). That would imply the maximum usable range would be 1/4 with the wide angle mod.
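    In Python, if it helps (this is only the geometric part; reflectance etc. ignored):

    Code (Python):
    # rough sketch: for a small target the inverse square law applies on the
    # way out and again on the way back, so the return goes roughly as 1/d^4
    def relative_return(d, d_ref=1.0):
        return (d_ref / d) ** 4

    for d in (1.0, 2.0, 4.0):  # metres
        print(f"d = {d:.0f} m -> relative return {relative_return(d):.4f}")
    # doubling the distance gives 1/16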
     
  7. May 6, 2018 #6

    mfb (Staff: Mentor)

    Where do you see a doubled distance?

    Spreading out the signal over 4 times the area but also collecting it over 4 times the area doesn't change the signal strength. It might increase the background, however.
     
  8. May 6, 2018 #7

    Drakkith (Staff Emeritus, Science Advisor)

    From the datasheet it looks like the laser and the sensor have separate apertures. Page 26 has a diagram with EMT and RTN cones, which I assume stand for Emitter and Receiver, though I'm not particularly certain about that. What are your thoughts?
     
  9. May 7, 2018 #8
    I'm sorry, I meant 27°, and it is the listed diagonal Field of View in the datasheet.

    I don't know if the light passes through the same lens for transmitter and receiver.

    So from what I understand, using a wider angle lens would mean I definitely have to adjust for that on both the receiver and the transmitter.

    Can you guys recommend a program with which I can simulate the changes to the light source with different lenses?
     
  10. May 7, 2018 #9

    sophiecentaur (Science Advisor, Gold Member)

    I could have put my post better. I was just pointing out the sharp drop-off with a bigger range.
    This is true when the object takes up the whole FOV. As the object approaches a point, the ISL applies twice (##1/d^4##). This accounts for the 'black hole' effect for objects in the background of a flash photo.
    I didn't fully understand the operation of the camera, but I am assuming that the optics would need to blur (low-pass filter) the image so that each of the (very few) sensors would see a section of the scene with few gaps or overlaps. Using a wide-field conversion would need care if this is to be maintained (i.e. not to lose small moving objects in the gaps in between).

    Is that some attempt to compensate for the drop-off of sensitivity at the edges? If one field is wider and tailored in the middle in some way, the overall sensitivity over the field could be more uniform. But the OP's figure of 25° and the FOV used on page 15 are significantly different from the 36° and 39° figures on page 26.
    At 25° off axis, the return could be 40% lower than on axis. (People may argue it's 20%.)
     
  11. May 7, 2018 #10

    mfb (Staff: Mentor)

    Well, apparently it is larger than the current field of view.
     
  12. May 7, 2018 #11

    sophiecentaur (Science Advisor, Gold Member)

    That unit is being suggested for use in robots to detect obstacles and to detect hand gestures. It's a bit late to avoid an obstacle that takes up the whole field of view. See the last of the three pdfs in the link to the device and also the User Manual. It implies you could spot a sheep in a field. :smile:
     
  13. May 7, 2018 #12

    Drakkith (Staff Emeritus, Science Advisor)

    I haven't a clue.
     
  14. May 7, 2018 #13

    sophiecentaur (Science Advisor, Gold Member)

    One of life's mysteries.
    But the OP has quite a few hidden difficulties, and the basic answer is that the illumination needs to have a similar angular field, or you will waste light energy or have a dark surround to the scene (just as for a good camera flash with zoom, linked to the camera lens). Geometry (as discussed) can tell you the resulting light level thrown back. However, once the angle gets wider than around 10° from the axis, there is a significant drop-off, as the hypotenuse (to the edge of an illuminated plane) will increase as 1/cos(Φ). This can be corrected for, of course.
    A "program" as such is not needed. All you need to do is some simple trigonometry on your calculator.
     
  15. May 8, 2018 #14
    I'm sorry, I still don't really understand.
    What unit am I working with when calculating the light thrown back?
    And what do you mean with the drop of with an specific angle? Is there a law for that?
     
  16. May 8, 2018 #15

    sophiecentaur (Science Advisor, Gold Member)

    I have to wonder what your Professor told you about this and what he actually expected you to know already. The basics of what happens to the light are the same as for pretty much any optical system. The effect of distance when sources are fairly small is just the Inverse Square Law.
    Details of the "throw back" (reflectivity) are not important for this calculation because the same thing applies for the unmodified system. All you need to know is the distance travelled by the light from the source; then use the ISL to find the light flux density at the centre relative to the edges. This is simple trigonometry, and the cosine of the angle tells you the length of the hypotenuse. I can't imagine that a specific program has been written to do what you want; it's such basic Physics and Geometry.
    A unit for power flux density could be W/m², but the spec will help you there.
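    For illustration only (the emitted power below is a made-up placeholder, not a number from the spec):

    Code (Python):
    import math

    # average power flux density in W/m^2 on a flat scene, assuming the
    # emitted power is spread evenly over a square field of view;
    # P_EMITTED is a placeholder -- substitute the real figure from the spec
    P_EMITTED = 1e-3   # watts, hypothetical
    D = 2.0            # metres to the scene

    def avg_flux_density(fov_diag_deg, d=D, p=P_EMITTED):
        diag = 2 * d * math.tan(math.radians(fov_diag_deg) / 2)  # illuminated diagonal
        area = diag ** 2 / 2                                     # square with that diagonal
        return p / area

    for fov in (27, 90):
        print(f"{fov} deg diagonal FoV: ~{avg_flux_density(fov):.2e} W/m^2 at {D} m")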
     
  17. May 8, 2018 #16

    Drakkith (Staff Emeritus, Science Advisor)

    Unfortunately optics is not usually a simple case of "find a law and apply it". Even the simplest optical systems are very complicated, with many different laws, properties, and even units to choose from and work with.

    To help us help you, do you know the desired field of view of your sensor? Do you know the desired distance to the objects whose speeds are being measured? When you say you want to 'simulate' a larger FoV, what exactly do you mean by that?
     
  18. May 9, 2018 #17
    That helps me a lot. Thanks for that, but I can't find anything in the spec that gives me the emitted power of the camera.

    The goal is to see the differences between different lenses in the range between the existing 25° and 90°. More specifically, what kind of drawbacks are to be expected when using a wider angle lens. The datasheet says that the camera has a maximum distance of 3 m+ with the current lens. But if I keep emitting with the same power as before, there should be a considerably smaller maximum distance with a wider lens.

    And the other question is how objects would scale in the different fields of view. How will I have to scale a person (what size should the person have) walking through the narrow FoV to be representative of the wide FoV? Can I just scale them according to the surface area that is spanned by the FoV:
    for a 27° diagonal FoV and a distance of 2 m: $$A \approx 0.46\,\mathrm{m^2}$$
    for an 80° diagonal FoV and a distance of 2 m: $$A \approx 5.63\,\mathrm{m^2}$$

    so a scale factor of $$\frac{5.63\,\mathrm{m^2}}{0.46\,\mathrm{m^2}} \approx 12.2$$

    Or will I have to consider other properties?
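    (For reference, this is how I computed those areas, assuming a square FoV with the given diagonal angle:)

    Code (Python):
    import math

    def spanned_area(fov_diag_deg, d):
        diag = 2 * d * math.tan(math.radians(fov_diag_deg) / 2)  # diagonal of the scene
        return diag ** 2 / 2                                     # area of a square with that diagonal

    a_narrow = spanned_area(27, 2.0)
    a_wide = spanned_area(80, 2.0)
    print(a_narrow, a_wide, a_wide / a_narrow)  # ~0.46 m^2, ~5.63 m^2, ~12.2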
     
    Last edited: May 9, 2018
  19. May 9, 2018 #18

    sophiecentaur (Science Advisor, Gold Member)

    @aghbar I see where you're at now. I mentioned a camera flash gun earlier and that's what you need to be thinking in terms of. The TOF camera is a very specific example of a much more general principle.
    If you want to get a bit more of an idea about context then you should Google Camera Flash Guide Numbers and find some nice chatty, semi academic articles about the subject. I think this could be great for your confidence in any calculations you do. You could impress your Prof, too!

    The calculations at the level you need are no worse than those mentioned above but, when the object is large and the TOF camera image covers a significant area of the sensor, there can be a great signal-to-noise advantage. The limit is when the object takes up the whole FOV; then the energy getting back to the camera is almost independent of what an additional lens does to the FOV (mentioned earlier).
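    To make the two regimes concrete, a toy model (my own sketch; reflectance, lens losses and the sensor details are all ignored):

    Code (Python):
    import math

    # fov is the full angle, assumed equal for the illumination and the receiver
    def small_target(d, fov_deg):
        # target much smaller than the FoV: the illumination is diluted over
        # tan^2(fov/2) and the inverse square law applies out and back
        return 1.0 / (d ** 4 * math.tan(math.radians(fov_deg) / 2) ** 2)

    def extended_target(d, fov_deg):
        # target fills the FoV: the dilution and the wider collection angle
        # cancel, so the FoV drops out and only 1/d^2 remains
        return 1.0 / d ** 2

    for fov in (27, 80):
        print(f"FoV {fov} deg at 2 m: small target {small_target(2, fov):.3f}, "
              f"full-FoV target {extended_target(2, fov):.3f}  (arbitrary units)")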
     
  20. May 9, 2018 #19

    Drakkith (Staff Emeritus, Science Advisor)

    For a given object at a given distance, the reflected power is going to grow smaller as the FoV increases since the light is spread out over a larger area before reflecting back. The object will have to be closer to the camera for the same signal to noise ratio, as Sophie mentioned. Another thing to think of is the change in the resolution of the camera. You're not increasing the number of pixels, so each pixel is going to see a much larger portion of the FoV than it did before. This may or may not be desirable.
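    A rough illustration of the resolution point (the zone count below is a placeholder; check the datasheet for the real sensor layout):

    Code (Python):
    import math

    N_ZONES = 16   # detection zones across the field of view, hypothetical
    D = 2.0        # metres

    for fov_deg in (27, 80):
        width = 2 * D * math.tan(math.radians(fov_deg) / 2) / math.sqrt(2)  # side of a square FoV
        print(f"FoV {fov_deg} deg: scene width ~{width:.2f} m, "
              f"~{width / N_ZONES * 100:.0f} cm of scene per zone")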

    That looks plausible to me.
     
  21. May 9, 2018 #20

    sophiecentaur (Science Advisor, Gold Member)

    That tells you something about how significant it may be. So perhaps the absolute level doesn't matter very much and it's only the relative power that counts. What do you think? The reflectance of the target would be at least as important (black cats in coal mines etc.) and the maximum usable range is mentioned.
     